It doesn't piss me off. As much as I hate those things, I do still believe that people should get to choose what they want to run (which is my biggest beef with it all, I guess... not that I disagree with them technically or politically, it's their railroading that removes practical choice).
24 years ago a conversation took place, possibly in this very room, about the future of operating systems. At the time, many of us feared that despite the superiority of unix, it might not have a future ("Microsoft will squash it like a bug." -- Peter Pulse) and that "Linux will *be* unix -- or what's left of it." Today, we are pleased that the former prediction failed to become reality, but the latter DID.
Linux has won. It's won pretty much everything. No one takes other operating systems seriously anymore, except in very specific niches -- BSD for iPhones and iMacs, Windows for desktops -- everything else is legacy. Linux has become what Jim Allchin predicted Windows would become: "the fabric of standard computing."
SCO Unix, or whatever it's called now, has no value as a product. I'd be surprised if there were more than a couple hundred instances of it running in the world at this point. The only value it has anymore is as a -3 cursed IP portfolio. Someone needs to put a stake through its heart and *dismantle* it to prevent any more unwanted sequels.
Not bashing PotteringOS here in this case, but why CentOS? With its current state, I'd avoid it like the plague if I needed a BlueHat-based system.
And isn't CentOS based on SystemD? Shouldn't it act the way you're expecting?
Arrgh. I am moving an application from Debian to CentOS and have to go back to the non-systemd interface naming. Sure enough, as soon as I started adding interfaces, they started reordering. eth0 became eth1 and others moved around too. systemd does this better, objectively.
Whatever comes next will almost certainly have systemd interface naming, but CentOS 7 does not. So as a vile sleazy workaround, I am going to recommend they deploy this application on virtual machines with ten vNICs (the maximum for VMware) configured, shut down and/or pointed to null networks, so new connections can be made later on without disrupting the existing ones.
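(For what it's worth, the old-school way to glue a config to a specific NIC on CentOS 7 is to bind the ifcfg file to its MAC with HWADDR= - the device name and MAC below are made up:)

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (MAC is made up)
    DEVICE=eth0
    HWADDR=00:50:56:aa:bb:01
    BOOTPROTO=dhcp
    ONBOOT=yes

That keeps the right settings attached to the right card even if the kernel probes them in a different order, but it doesn't help with NICs added later, which is why I'm pre-provisioning the vNICs instead.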
2021-04-05 10:58 from IGnatius T Foobar
Subject: Re: Thank you, Lennart Poettering.
Arrgh. I am moving an application from Debian to CentOS and have to go
back to the non-systemd interface naming. Sure enough, as soon as I
started adding interfaces, they started reordering. eth0 became eth1
and others moved around too. systemd does this better, objectively.
I use systemd-networkd to bring up my interfaces. I also use a virtual bridge so that I can create virtual machines that appear on the LAN as though they were real devices without NAT. The host machine leaves its physical NIC with no address, and uses DHCP to assign an address to its virtual bridge, so that the virtual machines can also access the host using the address of the virtual bridge. And finally, my DHCP server is configured to reserve an address for the MAC of the virtual bridge.
All of this to say: I have upgraded before and, after rebooting, found that systemd decided to assign my virtual bridge a new MAC, which broke my DHCP reservation. I'm glad it resolved the interface renaming problem that you were having, but it also comes with its own renaming problems. I also don't think there's any reason why the interface naming issue couldn't have been solved without systemd. So, even though I use systemd, I have some gripes with it, and I certainly don't thank Poettering for anything.
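If it's useful, here is a sketch of how the bridge MAC could be pinned in networkd so the reservation keeps matching (interface names and the MAC are made up):

    # /etc/systemd/network/br0.netdev - define the bridge, with a fixed MAC
    [NetDev]
    Name=br0
    Kind=bridge
    MACAddress=52:54:00:aa:bb:cc

    # /etc/systemd/network/enp1s0.network - enslave the physical NIC (no address on it)
    [Match]
    Name=enp1s0
    [Network]
    Bridge=br0

    # /etc/systemd/network/br0.network - the bridge itself gets the address via DHCP
    [Match]
    Name=br0
    [Network]
    DHCP=yes

With MACAddress= set in the .netdev, the bridge should keep the same address across reboots and upgrades, so the DHCP reservation keeps working.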
P.S. Hello! This is my first post on Citadel.
2021-04-05 10:58 from IGnatius T Foobar
Subject: Re: Thank you, Lennart Poettering.
Arrgh. I am moving an application from Debian to CentOS and have to go
back to the non-systemd interface naming. Sure enough, as soon as I
started adding interfaces, they started reordering. eth0 became eth1
and others moved around too. systemd does this better, objectively.
I use systemd-networkd to bring up my network interfaces. I also use a virtual bridge so that I can create virtual machines that appear on my LAN without NAT. And finally, I use DHCP reservation with the virtual machine's MAC so that the host machine is always assigned the same address.
I have upgraded before and rebooted only to find that systemd decided to generate a new MAC for my virtual bridge, breaking my DHCP reservation. So, I'm glad that it has fixed your interface renaming problem, but it's not without its own renaming problems. Also, I don't think there is any technical reason why the interface naming problem couldn't have been resolved without systemd.
I use systemd, and I like some things about it and dislike others. But I will never thank Poettering for anything. He is cancer.
P.S. Hello! This is my first post on Citadel. I hope I did it right.
LOL your 2nd is your first :)
(Not that I haven't been there... double posting... if I log off between sessions, I found that problem went away for me; otherwise, when I came back, my browser would post the last thing I had just posted.)
P.S. Hello! This is my first post on Citadel. I hope I did it right.
There are two bridges, though. One has a physical Ethernet interface as one of the bridge members, and the bridge MAC is derived from that. There's another one that brings in a connection from an external router over a VLAN, and I found that it didn't come up at all unless I manually assigned a MAC address to it.
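In case the layout is hard to picture, the VLAN side looks roughly like this (interface names, VLAN id, and bridge name are all made up; the bridge .netdev is the one where I had to add an explicit MACAddress= before it would come up):

    # /etc/systemd/network/br1.netdev - bridge with no physical member, needs its own MAC
    [NetDev]
    Name=br1
    Kind=bridge
    MACAddress=52:54:00:12:34:56

    # /etc/systemd/network/vlan20.netdev - define the VLAN device
    [NetDev]
    Name=vlan20
    Kind=vlan
    [VLAN]
    Id=20

    # /etc/systemd/network/uplink.network - hang the VLAN off the physical uplink
    [Match]
    Name=enp2s0
    [Network]
    VLAN=vlan20

    # /etc/systemd/network/vlan20.network - make the VLAN a member of the bridge
    [Match]
    Name=vlan20
    [Network]
    Bridge=br1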
And sorry for the double-post. I'm using the ssh client and the options presented to me after hitting enter twice weren't clear to me. "Save" and "Hold" both sound like they mean "set a draft aside for later" to me. Now I'm struggling to figure out how to backspace through a newline....
Well, like RMS or not, agree with the FSF or not, at least they are not bowing to pressure and sticking to their internal decisions.
https://www.zdnet.com/article/the-fsf-doubles-down-on-restoring-rms-after-his-non-apology-apology/
2021-04-13 10:36 from Nurb432
Well, like RMS or not, agree with the FSF or not, at least they are
not bowing to pressure and sticking to their internal decisions.
https://www.zdnet.com/article/the-fsf-doubles-down-on-restoring-rms-after-his-non-apology-apology/
I don't know if I had mentioned it, but soon after an open letter with thousands of signers was published against him, another one supporting him gained even more signers.
So... I see a problem with doing live backup of my Citadel from the current production box to my test environment on Proxmox.
I looked at Rsync - but the problem is, Rsync runs as the local user on the target machine (for a pull operation) that executes the rsync. In order to give it root, you seem to have several options.
Turn off SUDO passwords for the account that is launching the rsync job. This is unsatisfactory - as it basically makes that account ROOT.
Enable ROOT SSH sessions - this effectively makes having non-root accounts require SU.
Other obscure configuration changes that all seem to compromise security.
This makes me think that the easiest way is to set up a utility account that has full permissions to the folder I want to rsync - which in this case, seems to be /usr/local/citadel.
Then run the rsync session from that account.
Is my thinking correct? I don't need OWNERSHIP for this account, right, and I don't need WRITE access for this account, if I just want to PULL from the source (production) to the destination (test).
Is this the right way to do this? Is there a better way to do this?
And once I figure it out, how do I automate it? A cron job running every night?
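Something like this is what I have in mind (the account name and paths are just guesses on my part):

    # on the test box, run as the pulling account - the account on production only needs READ access
    rsync -av -e ssh citsync@production:/usr/local/citadel/ /home/citsync/citadel-backup/

    # and in that account's crontab (crontab -e), every night at 2am:
    0 2 * * * rsync -a -e ssh citsync@production:/usr/local/citadel/ /home/citsync/citadel-backup/ >> /home/citsync/citadel-sync.log 2>&1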
Sounds right to me. (Admittedly it's been a long day and I'm tired.)
And yeah, I'd use a cron job or something, instead of adding some sort of 3rd-party job scheduling system.
The web tells me that the way to do it is to back it up to an attached USB device using RSYNC, then move that device to the target server, and restore it there, also using RSYNC.
This seems to defeat the purpose of RSYNC - but... it also gets around issues with permissions, as I'm able to execute from both the source and the target as SU.
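From what I've read, it boils down to something like this, run as root on each box (the mount points are my guess):

    rsync -aAXH /usr/local/citadel/ /mnt/usb/citadel/      # on production, copy onto the USB drive
    rsync -aAXH /mnt/usb/citadel/ /usr/local/citadel/      # on test, after moving the drive over
    # (the USB drive needs a Linux filesystem like ext4 for permissions and ownership to survive)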
I'm backing it up now. My earlier copy of Citadel from production to test did not work. I got Citadel installed - both by the easyinstall method and then by the appinstall method - but it doesn't see the database in either case.
So... I'm still stumbling my way through this.
But being able to rsync to a USB drive then either store that image or restore it to test will beat my previous backup method, which was to buy a 240GB SSD drive and image the BBS drive to that drive.
Assuming I can get it to restore faithfully.
My gut feeling is that restoring it to like hardware will probably work... but restoring it to a VM is going to cause a kernel panic - but we'll see. I try things, they break, I figure out what I broke, I learn something new.
2021-04-14 15:13 from ParanoidDelusions
So... I see a problem with doing live backup of my Citadel from the
current production box to my test environment on Proxmox.
I looked at Rsync - but the problem is, Rsync runs as the local user
on the target machine (for a pull operation) that executes the rsync.
In order to give it root, you seem to have several options.
Turn off SUDO passwords for the account that is launching the rsync
job. This is unsatisfactory - as it basically makes that account
ROOT.
Enable ROOT SSH sessions - this effectively makes having non-root
accounts require SU.
Other obscure configuration changes that all seem to compromise
security.
This makes me think that the easiest way is to set up a utility
account that has full permissions to the folder I want to rsync -
which in this case, seems to be /usr/local/citadel.
Then run the rsync session from that account.
Is my thinking correct? I don't need OWNERSHIP for this account,
right, and I don't need WRITE access for this account, if I just want
to PULL from the source (production) to the destination (test).
Is this the right way to do this? Is there a better way to do this?
And once I figure it out, how do I automate it? A chron job running
every night?
I'd do it the other way around. Instead of having the backup system pull the data from the master system, have the master system push the data into the backup system.
What I am doing on my low-load systems is to turn off the service I am backing up, have a local (sufficiently privileged) user copy the data over to the backup server through an ssh tunnel, and then restart the service. This does not translate well to rsync. You may prefer to use GNU tar with incremental backups instead. This approach has the advantage that file permissions and attributes are more likely to be preserved intact. You can also pass the tar file through an encryption filter.
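A rough sketch of the idea (the host, paths, and service name are placeholders):

    systemctl stop citadel
    tar --listed-incremental=/var/backups/citadel.snar -czf - /usr/local/citadel \
        | ssh backup@backuphost "cat > /srv/backups/citadel-$(date +%F).tar.gz"
    systemctl start citadel
    # an encryption filter (e.g. gpg --symmetric) can be spliced into the pipe before the ssh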
I hope it helps.
2021-04-14 20:43 from ParanoidDelusions
My gut feeling is that restoring it to like hardware will probably
work... but restoring it to a VM is going to cause a kernel panic -
but we'll see. I try things, they break, I figure out what I broke, I
learn something new.
If you are using Proxmox in production, I think Proxmox supports hot backups. You might consider that.
I don't like rsync much for transferring privileged data across different systems because you have to ensure permissions and ownerships are not trashed.
If you want to use rsync, something you may want to consider is to mount the backup location as an NFS share and rsync into it.
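Something along these lines (the server, export, and paths are made up):

    mount -t nfs backuphost:/export/citadel-backups /mnt/backup
    rsync -a /usr/local/citadel/ /mnt/backup/citadel/
    umount /mnt/backup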