So would that be the reservation expanding its tent-pegs?
That's kind of what I'm getting at here. At times, making something "unix-like"
can be at odds with making it maintainable by non-gurus. Is it ok when Apple
does it but not ok when Red Hat or Ubuntu does it? People have been trying
to get the best of both worlds for decades (remember linuxconf?)
I was hoping for some thoughts from Ragnar, who is a fan of both the Apple
system *and* the "unix-like" way of doing things. Clearly they are at odds
with each other. Was it ok for Apple to break from tradition, but only Apple
because they serve a different area of computing than, say, RedHat or Debian?
Back on that topic, I think I solved my systemd problem. Basically, systemd thinks it knows whether your service is running or not; it can be wrong. If your service was invoked directly from the init script, systemd doesn't know it's running when it is; if the start script fails, systemd will think it's running when it isn't; if the stop script fails, it may think it's not running when it is. Etc. In any of those states, the "stop" or "start" command may do nothing at all, by itself, because systemd thinks there's no state transition that needs to happen.
It's all a bit boneheaded, but there are ways to deal with it.
Do you control the start script or is it third party software? What I've
found is that a lot of systemd start scripts are built as naive ports of sysvinit
scripts -- maybe they were even auto-converted -- and they don't really take
advantage of systemd.
Specifically: the script will have a start command, a stop command, and monitoring instructions, and the service still runs in the background, because that's how it was done under sysvinit. But if you instead write the script to have systemd start the service in the *foreground* then there's nothing to monitor.
When the service exits, either normally or abnormally, the kernel reliably tells systemd that the process ended; there's nothing to detect.
Obviously I loved this when I saw it because I used to have a habit of running services directly out of /etc/inittab this way, and was very frustrated when I couldn't rely on /etc/inittab being available anymore.
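To make that concrete, here's a minimal sketch of a unit file for a foreground service (the service name, path, and --no-detach flag are made up for illustration):

```ini
[Unit]
Description=Example foreground service

[Service]
# Type=simple is the default: systemd execs the command directly, the
# service never forks, and the kernel tells systemd when it exits.
Type=simple
ExecStart=/usr/local/bin/mydaemon --no-detach
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

No PID file, no polling; "systemctl status" reflects the real process state.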
Well, it works either way. Your service can fork itself into the background... usually (though as pointed out in my last message, not always) systemd will be able to track its status in the background via cgroups.
sysvinit scripts are directly supported, and that's what we're using (and it's our script, at this point). What I did was make the "stop" command try a little harder: if its first attempt to stop the service gracefully fails, it will kill -9.
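The escalation logic looks roughly like this (a hedged sketch, not our actual script; the function name and the 5-second timeout are illustrative):

```shell
# Try a graceful stop first; fall back to SIGKILL if the service
# hasn't exited after a few seconds. Names and timeouts are made up.
stop_hard() {
    pid="$1"
    kill "$pid" 2>/dev/null || return 0         # not running: nothing to do
    for _ in 1 2 3 4 5; do
        kill -0 "$pid" 2>/dev/null || return 0  # exited gracefully
        sleep 1
    done
    kill -9 "$pid" 2>/dev/null                  # last resort
}
```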
I'd have more problems with OSX if it weren't geared towards the desktop....
It's pretty rare you need to get that deep into the system.
2017-08-18 11:22 from LoanShark @uncnsrd
Back on that topic, I think I solved my systemd problem. Basically,
systemd thinks it knows whether your service is running or not; it can
be wrong. If your service was invoked directly from the init script, it
doesn't know it's running when it is; if the start script fails,
systemd will think it's running when it isn't; if the stop script
fails, it may think it's not running when it is. Etc. In either of
those states, the "stop" or "start" command may do nothing at all, by
itself, because systemd thinks there's no state transition that needs
to happen.
It's all a bit boneheaded, but there are ways to deal with it.
[Service]
Type=forking
PIDFile=[path to your PID file]
[...]
If your service creates a PID file, systemd will use the file specified by PIDFile to track whether or not the service is actually running, rather than trying to chase forks in a daemonizing process.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/sect-Managing_Services_with_systemd-Unit_Files.html
So, if your service can be told to create a PID file, that should help significantly.
If not... well, I dunno how you dealt with it in sysvinit.
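One workaround, if the service itself can't write a PID file, is a small wrapper that backgrounds it and records the PID where PIDFile= points (the names and paths here are hypothetical):

```shell
# Hypothetical wrapper: run the real service in the background and
# write its PID to the file named by the unit's PIDFile= setting.
PIDFILE=/tmp/mydaemon.pid

start_with_pidfile() {
    "$@" &                    # launch the actual service
    echo $! > "$PIDFILE"      # record its PID for systemd
}
```

With Type=forking, systemd runs the wrapper, which exits once the service is backgrounded and the PID file is in place.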
And that's why I'd love it if systemd became universal, at least for Linux ... instead of all the tedious mucking about with pid files, you could just run the program without forking into the background, systemd will spawn the process directly and can respond to it exiting.
It's universal enough though, right? It's now the default on both Ubuntu and RedHat-derived systems.
Honestly, if I didn't have to deal with older distributions, I'd stop doing the whole daemonization thing and just run it as you might run any command on a command line.
Hrm... although, honestly, I could do both. I could take a command line argument that would daemonize if needed...
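That dual-mode launcher could be as small as this (a sketch; the --daemonize flag and the reliance on setsid are assumptions, and real daemonization usually also closes file descriptors and changes directory):

```shell
# Run a command in the foreground by default (systemd-friendly);
# detach it into the background only when --daemonize is given.
run_maybe_daemonized() {
    if [ "$1" = "--daemonize" ]; then
        shift
        # crude daemonization: new session, detached from the terminal
        setsid "$@" </dev/null >/dev/null 2>&1 &
    else
        "$@"                  # stay in the foreground
    fi
}
```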
It's universal enough though, right? It's now the default on both
Ubuntu and RedHat-derived systems.
Hmm. According to https://en.wikipedia.org/wiki/Systemd#Adoption_and_reception it has been the default on CentOS, Debian, Fedora, Mint, SuSE, Red Hat, and Ubuntu for some time now (plus some others that I'm not counting as anything other than fringe players).
The question of course is: "as a software distributor, should I write exclusively to systemd?" I'm starting to think the answer is "yes" because anyone still running sysvinit in 2017 is probably already dealing with being a fringe player.
Heh... well, I'm having to deal with ancient distributions, so I still have to write for sysvinit, upstart, and pretty much anything else.
Detecting which init tool the distribution uses is a new kind of entertainment.
In the sense that amputation of major limbs is entertaining.
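For what it's worth, the usual heuristics aren't too gory — something like this (rough and not exhaustive; the checks are common conventions rather than guarantees):

```shell
# Guess the init system. /run/systemd/system exists only when systemd
# is PID 1; "initctl version" mentions upstart on upstart systems.
detect_init() {
    if [ -d /run/systemd/system ]; then
        echo systemd
    elif command -v initctl >/dev/null 2>&1 \
         && initctl version 2>/dev/null | grep -q upstart; then
        echo upstart
    else
        echo sysvinit
    fi
}
```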
Right, and that's supposedly why every init-replacement at least *tries* to
support sysvinit scripts. But it still feels sort of emulated and second-class.
I figured one had to support sysvinit for backwards compatibility.
I could have taken the easy approach, and just written sysvinit scripts and said 'screw it'. But... I really do want our software to start more dynamically than sysvinit might permit.
Last year, Snoracle announced that there would be no Solaris 12, but instead
Solaris 11 would be updated forever (rolling release cycle, just like everyone
else is doing these days).
Now, they've just announced that "Solaris 11.NEXT" (stupid name) won't be released this year; it'll be released in 2018. Maybe. They've also been laying off people in the Solaris and SPARC groups.
Are we at a point where we can call Solaris and SPARC a "legacy platform" yet? Rather than answer that question from an emotional-attachment point of view, let's try it this way: would anyone in their right mind deploy a *new* workload on Solaris or SPARC today?
Well sure, that's a trend we're seeing with a number of vendors now (Oracle,
IBM, Microsoft): if the customer is deploying [vendor]'s operating system, it's
because they're using it to run that same vendor's own server software.
The next step is "oh geez, it's just not worth it, we'll pick other software" (H-pukes is there now).
For those who haven't seen it or tried it yet...
[ https://guacamole.apache.org/ ]
Guacamole is an access server that requires nothing but a web browser to connect to the RDP, VNC, SSH, or Telnet sessions of your choice. One might expect this to be slow and clunky, but it's actually *really* good. It's more responsive than even some non-browser-based clients.
I'm using it now, in fact :)