[#] Thu Apr 01 2021 19:58:26 EDT from Nurb432

Subject: Re: Thank you, Lennart Poettering.


It doesn't piss me off. As much as I hate those things, I still believe people should get to choose what they want to run (which is my biggest beef with it all, I guess -- not that I disagree with them technically or politically; it's their railroading that removes practical choice).



[#] Sat Apr 03 2021 11:11:44 EDT from nonservator

Subject: Re: SCO.. yet again


[#] Sat Apr 03 2021 11:31:44 EDT from IGnatius T Foobar

Subject: Re: SCO.. yet again


It's like a recurring villain from a really bad B-movie that keeps coming back no matter how many times you kill it. I'm sure they know they're just being a troll at this point, hoping perhaps for a few million as a settlement to make them go away, or for IBM to buy them out entirely. I suppose the latter would be problematic, because as IBM continues to decline, it would become the next in line to troll the rest of the Linux world.

24 years ago a conversation took place, possibly in this very room, about the future of operating systems. At the time, many of us feared that despite the superiority of unix, it might not have a future ("Microsoft will squash it like a bug." -- Peter Pulse) and that "Linux will *be* unix -- or what's left of it." Today, we are pleased that the former prediction failed to become reality, but the latter DID.

Linux has won. It's won pretty much everything. No one takes other operating systems seriously anymore, except in very specific niches -- BSD for iPhones and iMacs, Windows for desktops -- everything else is legacy. Linux has become what Jim Allchin predicted Windows would become: "the fabric of standard computing."

SCO Unix, or whatever it's called now, has no value as a product. I'd be surprised if there were more than a couple hundred instances of it running in the world at this point. The only value it has anymore is as a -3 cursed IP portfolio. Someone needs to put a stake through its heart and *dismantle* it to prevent any more unwanted sequels.

[#] Mon Apr 05 2021 10:58:07 EDT from IGnatius T Foobar

Subject: Re: Thank you, Lennart Poettering.


Arrgh. I am moving an application from Debian to CentOS and have to go back to the non-systemd interface naming. Sure enough, as soon as I started adding interfaces, they started reordering. eth0 became eth1 and others moved around too. systemd does this better, objectively.
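The old-school band-aid, if I end up needing it, is to pin names to MAC addresses with a udev rule. A rough sketch (the MACs below are placeholders, not my hardware):

# /etc/udev/rules.d/70-persistent-net.rules  (MAC addresses are placeholders)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:66", NAME="eth1"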

[#] Mon Apr 05 2021 11:46:10 EDT from Nurb432

Subject: Re: Thank you, Lennart Poettering.


Not bashing PotteringOS here in this case, but why CentOS? Given its current state, I'd avoid it like the plague if I needed a BlueHat-based system.

 

And isn't CentOS based on systemd? Shouldn't it behave the way you're expecting?

Mon Apr 05 2021 10:58:07 EDT from IGnatius T Foobar
Subject: Re: Thank you, Lennart Poettering.
Arrgh. I am moving an application from Debian to CentOS and have to go back to the non-systemd interface naming. Sure enough, as soon as I started adding interfaces, they started reordering. eth0 became eth1 and others moved around too. systemd does this better, objectively.

 



[#] Mon Apr 05 2021 15:38:56 EDT from IGnatius T Foobar

Subject: Re: Thank you, Lennart Poettering.


I'd like to avoid it too, but the people who do in-life support for our Linux systems have advised me that whatever replaces CentOS as the preferred Linux for in-house applications will be something BlueHat-like. The merit of that decision is a completely different discussion; all I care about right now is that if I stick with what they can support the best, my phone rings a lot less.

Whatever comes next will almost certainly have systemd interface naming, but CentOS 7 does not. So, as a vile, sleazy workaround, I am going to recommend they deploy this application on virtual machines with ten vNICs (the VMware maximum) pre-configured but shut down and/or pointed at null networks, so new connections can be added later without disrupting the existing ones.

[#] Mon Apr 05 2021 20:17:52 EDT from zelgomer

Subject: Re: Thank you, Lennart Poettering.


2021-04-05 10:58 from IGnatius T Foobar
Subject: Re: Thank you, Lennart Poettering.
Arrgh. I am moving an application from Debian to CentOS and have to go back to the non-systemd interface naming. Sure enough, as soon as I started adding interfaces, they started reordering. eth0 became eth1 and others moved around too. systemd does this better, objectively.




I use systemd-networkd to bring up my interfaces. I also use a virtual bridge so that I can create virtual machines that appear on the LAN as though they were real devices without NAT. The host machine leaves its physical NIC with no address, and uses DHCP to assign an address to its virtual bridge, so that the virtual machines can also access the host using the address of the virtual bridge. And finally, my DHCP server is configured to reserve an address for the MAC of the virtual bridge.
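In case it helps to picture it, the configuration amounts to roughly this (interface and file names are examples, not my exact setup):

# /etc/systemd/network/br0.netdev -- the bridge device itself
[NetDev]
Name=br0
Kind=bridge

# /etc/systemd/network/10-uplink.network -- enslave the physical NIC; it carries no address of its own
[Match]
Name=enp1s0

[Network]
Bridge=br0

# /etc/systemd/network/20-br0.network -- the bridge takes the DHCP lease (reserved by MAC on the DHCP server)
[Match]
Name=br0

[Network]
DHCP=yes

Pinning MACAddress= in the bridge's [NetDev] section would presumably have protected my reservation, but I shouldn't have to.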

All of this to say: I have upgraded before and, after rebooting, found that systemd had decided to assign my virtual bridge a new MAC, which broke my DHCP reservation. I'm glad it resolved the interface renaming problem that you were having, but it comes with its own renaming problems. I also don't think there's any reason why the interface naming issue couldn't have been solved without systemd. So, even though I use systemd, I have some gripes with it, and I certainly don't thank Poettering for anything.
P.S. Hello! This is my first post on Citadel.

[#] Mon Apr 05 2021 20:30:47 EDT from zelgomer

Subject: Re: Thank you, Lennart Poettering.


2021-04-05 10:58 from IGnatius T Foobar
Subject: Re: Thank you, Lennart Poettering.
Arrgh. I am moving an application from Debian to CentOS and have to go back to the non-systemd interface naming. Sure enough, as soon as I started adding interfaces, they started reordering. eth0 became eth1 and others moved around too. systemd does this better, objectively.




I use systemd-networkd to bring up my network interfaces. I also use a virtual bridge so that I can create virtual machines that appear on my LAN without NAT. And finally, I use a DHCP reservation for the virtual bridge's MAC so that the host machine is always assigned the same address.
I have upgraded before and rebooted only to find that systemd had decided to generate a new MAC for my virtual bridge, breaking my DHCP reservation. So, I'm glad that it has fixed your interface renaming problem, but it's not without its own renaming problems. Also, I don't think there is any technical reason why the interface naming problem couldn't have been resolved without systemd.

I use systemd, and I like some things about it and dislike others. But I will never thank Poettering for anything. He is cancer.
P.S. Hello! This is my first post on Citadel. I hope I did it right.




[#] Tue Apr 06 2021 07:38:57 EDT from Nurb432

Subject: Re: Thank you, Lennart Poettering.


LOL your 2nd is your first :) 

(Not that I haven't been there... double posting... I found that if I log off between sessions, the problem went away for me; otherwise, when I came back, my browser would re-post the last thing I had just posted...)

 

Mon Apr 05 2021 20:30:47 EDT from zelgomer
Subject: Re: Thank you, Lennart Poettering.
P.S. Hello! This is my first post on Citadel. I hope I did it right.



 



[#] Tue Apr 06 2021 09:04:48 EDT from IGnatius T Foobar

Subject: Re: Thank you, Lennart Poettering.


That's interesting. My system is set up almost identically to what you describe.
There are two bridges, though. One has a physical Ethernet interface as one of the bridge members, and the bridge MAC is derived from that. There's another one that brings in a connection from an external router over a VLAN, and I found that it didn't come up at all unless I manually assigned a MAC address to it.
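For what it's worth, the second bridge only came up once I spelled the MAC out, roughly like this (names and the MAC are placeholders, not my actual config):

# /etc/systemd/network/br1.netdev -- bridge with no physical member, fed by the VLAN
[NetDev]
Name=br1
Kind=bridge
# It never came up for me until I set this explicitly (placeholder address)
MACAddress=02:00:00:00:00:01

# /etc/systemd/network/vlan10.netdev -- the VLAN carried in from the external router
[NetDev]
Name=vlan10
Kind=vlan

[VLAN]
Id=10

# /etc/systemd/network/30-trunk.network -- attach the VLAN to the trunk interface
[Match]
Name=enp2s0

[Network]
VLAN=vlan10

# /etc/systemd/network/40-vlan10.network -- enslave the VLAN interface to br1
[Match]
Name=vlan10

[Network]
Bridge=br1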

[#] Thu Apr 08 2021 11:02:51 EDT from zelgomer

Subject: Re: Thank you, Lennart Poettering.


I will say that, once I got past the initial knee-jerk disgust at INI-style configs, at least they appear to be fully thought-out and support just about everything I've ever wanted to do in either a systemd.netdev or a systemd.unit or whatever. I don't know why you have to manually assign a MAC, I've never had to do that.
And sorry for the double post. I'm using the ssh client, and the options presented after hitting enter twice weren't clear to me. "Save" and "Hold" both sound like they mean "set a draft aside for later." Now I'm struggling to figure out how to backspace through a newline....

[#] Tue Apr 13 2021 10:36:13 EDT from Nurb432


Well, like RMS or not, agree with the FSF or not, at least they are not bowing to pressure and are sticking to their own decisions. 

 

https://www.zdnet.com/article/the-fsf-doubles-down-on-restoring-rms-after-his-non-apology-apology/



[#] Tue Apr 13 2021 13:33:21 EDT from darknetuser


2021-04-13 10:36 from Nurb432
Well, like RMS or not, agree with the FSF or not, at least they are not bowing to pressure and sticking to their internal decisions.

https://www.zdnet.com/article/the-fsf-doubles-down-on-restoring-rms-after-his-non-apology-apology/


I don't know if I had mentioned it, but soon after an open letter with thousands of signers was published against him, another one supporting him gained even more signers.

[#] Wed Apr 14 2021 15:13:32 EDT from ParanoidDelusions


So... I see a problem with doing a live backup of my Citadel from the current production box to my test environment on Proxmox. 

I looked at Rsync - but the problem is that, for a pull operation, Rsync runs as the local user on the target machine that executes it. In order to give it root, you seem to have several options. 

Turn off SUDO passwords for the account that is launching the rsync job. This is unsatisfactory - as it basically makes that account ROOT. 

Enable ROOT SSH sessions - this effectively makes having non-root accounts require SU. 

Other obscure configuration changes that all seem to compromise security. 

This makes me think that the easiest way is to set up a utility account that has full permissions to the folder I want to rsync - which in this case, seems to be /usr/local/citadel. 

Then run the rsync session from that account. 

Is my thinking correct? I don't need OWNERSHIP for this account, right, and I don't need WRITE access for this account, if I just want to PULL from the source (production) to the destination (test). 

Is this the right way to do this? Is there a better way to do this? 

And once I figure it out, how do I automate it? A cron job running every night? 
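
Something like this is what I'm picturing - the account name, host, and paths are made up, obviously: 

# On the test (destination) box, pull over ssh as the utility account.
# "citbackup" only needs read access to /usr/local/citadel on production;
# write access is only needed locally, at the destination.
rsync -avz -e ssh citbackup@production:/usr/local/citadel/ /home/citbackup/citadel-backup/

# Nightly automation via the utility account's crontab (crontab -e):
# 0 2 * * * rsync -az -e ssh citbackup@production:/usr/local/citadel/ /home/citbackup/citadel-backup/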

 



[#] Wed Apr 14 2021 17:10:20 EDT from Nurb432


Sounds right to me. (Admittedly it's been a long day and I'm tired.)

 

And yeah, I'd use a cron job or something, instead of adding some sort of third-party job scheduling system... 



[#] Wed Apr 14 2021 20:36:51 EDT from ParanoidDelusions


The web tells me that the way to do it is to back it up to an attached USB device using RSYNC, then move that device to the target server, and restore it there, also using RSYNC. 

This seems to defeat the purpose of RSYNC - but... it also gets around issues with permissions, as I'm able to execute from both the source and the target as SU. 

I'm backing it up now. My earlier copy of Citadel from production to test did not work. I got Citadel installed - both by the easyinstall method and then by the appinstall method - but it doesn't see the database in either case. 

So... I'm still stumbling my way through this. 

But being able to rsync to a USB drive and then either store that image or restore it to test will beat my previous backup method, which was to buy a 240GB SSD and image the BBS drive onto it. 

Assuming I can get it to restore faithfully. 

 



[#] Wed Apr 14 2021 20:43:28 EDT from ParanoidDelusions


My gut feeling is that restoring it to like hardware will probably work... but restoring it to a VM is going to cause a kernel panic - but we'll see. I try things, they break, I figure out what I broke, I learn something new. 

 



[#] Thu Apr 15 2021 05:59:54 EDT from darknetuser


2021-04-14 15:13 from ParanoidDelusions
So... I see a problem with doing live backup of my Citadel from the current production box to my test environment on Proxmox.

I looked at Rsync - but the problem is, Rsync runs as the local user on the target machine (for a pull operation) that executes the rsync. In order to give it root, you seem to have several options.

Turn off SUDO passwords for the account that is launching the rsync job. This is unsatisfactory - as it basically makes that account ROOT.

Enable ROOT SSH sessions - this effectively makes having non-root accounts require SU.

Other obscure configuration changes that all seem to compromise security.

This makes me think that the easiest way is to set up a utility account that has full permissions to the folder I want to rsync - which in this case, seems to be /usr/local/citadel.

Then run the rsync session from that account.

Is my thinking correct? I don't need OWNERSHIP for this account, right, and I don't need WRITE access for this account, if I just want to PULL from the source (production) to the destination (test).

Is this the right way to do this? Is there a better way to do this?

And once I figure it out, how do I automate it? A cron job running every night?

 


I'd do it the other way around. Instead of having the backup system pull the data from the master system, have the master system push the data into the backup system.

What I am doing on my low-load systems is to turn off the service I am backing up, have a local (sufficiently privileged) user copy the data over to the backup server using an ssh tunnel, and then restart the service. This does not translate well to rsync; you may prefer to use GNU Tar with incremental backups instead. This approach has the advantage that file permissions and attributes are more likely to be preserved intact. You can also pass the tar file through an encryption filter.
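
Roughly like this, for illustration (the unit name, host, and paths are made up):

# Stop the service so the data is quiescent (assumes a systemd unit called "citadel")
systemctl stop citadel

# GNU Tar incremental backup, streamed over an ssh tunnel to the backup host
tar -czf - --listed-incremental=/var/backups/citadel.snar /usr/local/citadel \
  | ssh backupuser@backuphost 'cat > /srv/backups/citadel-$(date +%F).tar.gz'

systemctl start citadel

# To encrypt before it leaves the box, insert a filter such as:
#   tar ... | gpg --encrypt --recipient backups@example.com | ssh ...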

I hope it helps.

[#] Thu Apr 15 2021 06:03:08 EDT from darknetuser


2021-04-14 20:43 from ParanoidDelusions
My gut feeling is that restoring it to like hardware will probably

work... but restoring it to a VM is going to cause a kernel panic -

but we'll see. I try things, they break, I figure out what I broke, I

learn something new. 

 


If you are using Proxmox in production, I think Proxmox supports hot backups. You may consider that.

I don't like rsync much for transferring privileged data across different systems because you have to ensure permissions and ownerships are not trashed.


If you want to use rsync, something you may consider is to mount the backup location as an NFS share and rsync into it.
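
Something along these lines (the export and mount point are just examples):

# Mount the backup host's NFS export locally
mount -t nfs backuphost:/export/backups /mnt/backup

# -a preserves permissions/ownership/timestamps; --numeric-ids keeps uids/gids
# stable even if user names differ between the two systems
rsync -a --numeric-ids --delete /usr/local/citadel/ /mnt/backup/citadel/

umount /mnt/backup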
