I'm seeing a lot of people who learn the security part of the job, but do not know anything about maintaining a system.
That is, they might understand how to call the various programs to break into a network, and maybe run any of the commonly available exploits, but they don't know how to configure a firewall, or even how to join a Windows machine to a Windows domain.
It's more maddening than ever, and yet it doesn't seem to have advanced in a lot of ways.
The balkanization is bordering on the psychotic.
Sadly, Windows 10 is easier to live with these days.
BITD, it was simple: you used Red Hat, or you were out of the mainstream.
Too bad they shot themselves in the foot.
And every year you make the same false assumption: that the entire Linux universe is expected to behave as if it were a single entity, like Microsoft or Apple.
I will agree at this point that the traditional Linux desktop is a non-starter. There was a time when everyone "knew" that if the Windows desktop monopoly was not vanquished, Microsoft would eventually use it as leverage to take over everything else in computing. But here we are two decades later, Windows still rules the desktop, and only the desktop. Linux is king on servers - no one who's serious about servers uses Windows Server except to run Microsoft's own server applications. Linux owns 66% of mobile and tablet market share, and most of the rest belongs to Apple. "The Cloud" is almost exclusively built on Linux.
So does it make a difference? In some ways, yes: Miguel de Icaza, widely known as the person who single-handedly sabotaged Linux's first and best chance of having a unified desktop, is now officially (instead of secretly) employed by Microsoft, and it's frustrating to see how well that worked. In other ways, no: since we do pretty much everything through a browser these days, it's hard to get excited about any desktop operating system at all. I spend my days tabbing back and forth between Chrome and MobaXterm.
I don't have a problem with so many different distributions.
Each one seems tuned to a particular purpose. It's a kind of customization that I find encouraging, actually. If you understand what you're trying to do, and you choose the right distribution, you'll do okay.
I think a good chunk of the dominant distributions are derived from Debian. Next, I'd look at Red Hat (or CentOS if you wanna be cheap).
I guess you'd use Knoppix if you want something that boots off a USB stick on any machine, and maybe Arch Linux if you kinda want to build something from scratch but don't really know how (maybe you're trying to create your own embedded OS distribution or something).
At least, I think that's how these things break down.
Debian/Redhat for general OSes, derivatives for their particular purposes (e.g. Ubuntu if you want something with slightly more current packages than Debian and you don't mind being a bit on the edge, etc).
I think once you understand this tuning that goes on, it's not so bad. Especially since most distributions seem derived from only a few main distributions.
Well yes, that's kind of the point. Every distributor has use cases in mind.
A certain well-known operating system vendor has in recent history tried to build a single operating system designed to work the same on servers, desktops, tablets, mobile devices, and video game consoles ... and we all know how well *that* went.
I hate to say it, but I'm kind of warming up to systemd. Despite the fact that it replaces a lot of subsystems that have been around forever, it has the feel of having been *designed* as a system manager, as opposed to just a pile of scripts that have evolved and adapted over the last 40 years. Having "one way" to install and maintain services on the system is really nice ... as long as you can convince everyone to adopt it. But with Red Hat/Fedora/CentOS, Debian/Ubuntu, and SuSE/openSUSE all moving to systemd, it's kind of a done deal at this point. (Sorry Slackware and Gentoo folks, we love you, but no one is running those variants in serious production deployments.)
systemd has been compared to svchost.exe on Windows. I suppose that's both a good and bad thing.
However, we're not talking about specialized device distros. We're just talking mainstream desktop or server. The variations are enough to drive one mad.
I think systemd feels a bit like the Windows service control manager, but less intrusive.
That is, the Windows SCM requires you write your code with the SCM in mind.
You have to know certain structures, and call certain functions with certain values in order to be a proper Windows service.
In Linux, it feels more like you have to understand how to daemonize your code... unless you don't feel like it, in which case you set up your .service file to take care of the daemonization details for you. You just write your code however you intend to write it, and then tell systemd how to deal with it.
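Concretely, "telling systemd how to deal with it" is just a small unit file. Here's a minimal sketch, assuming a made-up program at /usr/local/bin/mydaemon that simply runs in the foreground:

```ini
[Unit]
Description=Example service that does not daemonize itself
After=network.target

[Service]
# Type=simple: systemd treats the ExecStart process itself as the service,
# so the program never has to fork into the background on its own.
Type=simple
ExecStart=/usr/local/bin/mydaemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Drop that in /etc/systemd/system/, enable it, and you're done; the same binary still runs fine by hand when you need to watch it under a debugger.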
Which approach is better?
I dunno. On the one hand, if you're writing something that should run for long periods of time (months, years?) then you maybe ought to be extremely mindful of what you're doing... the SCM approach is yet another thing to deal with, but it isn't that awful since you can get a loose structure of code from Microsoft and build from there.
But the idea of just writing your code how you want it to work, and being able to run it from the command line or as a service (which aids in tracking down problems when you need to use a debugger to catch something while the thing is starting up)... I rather like that as well. Something feels kind of clean about the linux approach.
Upstart, SysV, systemd... just pick one, I feel. I'm kind of annoyed at having to support all of them because we are committed to dealing with 10-year-old Linux distributions.
Considering that sysvinit lasted for decades in more or less the same form, and everyone is moving to systemd, I think it's a good guess that systemd will be long term viable. I like it better, anyway. You don't have to worry about daemonizing your program and it can automatically be restarted if it crashes. I've been known to run server programs directly from /etc/inittab for exactly those reasons. Sometimes I'd even call them via openvt(1) so their logs would show up on a virtual console, like they did on a Netware server.
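For anyone who never saw that trick, an /etc/inittab entry is just id:runlevels:action:process; a respawning entry on a virtual console looks something like this (the "myd" id and program path are made up):

```text
myd:2345:respawn:/usr/bin/openvt -w -c 9 -- /usr/local/bin/mydaemon
```

The respawn action has init restart the program whenever it exits, and openvt's -w flag makes it wait for the program to finish, so init doesn't spin respawning it.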
Daemonizing your program is pretty easy now, too. It was always pretty easy, but now there's a daemon(3) library call that does it for you.
Heh, there's also a daemon executable you can install, if you prefer.
So, what happens when some smartaleck switches to that virtual console? They're not even logged in, and they can still type Control-S, which freezes output on that console (and can block whatever is writing to it).
Anyway, if they can get to the virtual console, they're probably already in a position to shut the whole server off.
I liked being able to "see it running" very easily, whether it was Netware or Lotus Notes or whatever. I liked being able to connect to the console and have the ability to type stuff at it. Asterisk does this pretty well, actually. If you have the authority you can connect to the console and see everything it's doing and enter commands, and disconnect without stopping the server.
Going forward though, it's pretty simple. systemd won't need your program to daemonize at all, but when you do need to go into the background, a simple call to daemon(0,0); does the trick. I'm not sure which of those imitation Linuxes like FreeBSD or Mac OS or Solaris it works on, though.
It looks like the FreeDesktop folks have written a bit of a field guide for us, though:
[ https://www.freedesktop.org/software/systemd/man/daemon.html ]
I recently stumbled across a tool called "winexe", which lets you remotely connect from a Linux box to a Windows box in order to execute a command there, cmd.exe for example.
A groupware distribution for schools uses this in a very neat way: you join a Windows machine to its Samba NT4 domain, use the web interface to install the opsi-agent via winexe, and then you are able to install a large amount of software via OPSI itself. I have to study their configs in order to replicate that for my other clients.
Groupware distribution: https://iserv.eu/ (German only, it seems)
Winexe: https://lbtwiki.cern.ch/bin/view/Online/WinExe (source via sourceforge, sadly)
OPSI (Open PC Server Integration) is a software distribution tool which also lets you keep tabs on installed software, some license keys, and the hardware: http://www.opsi.org/en
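For the curious, a winexe invocation looks roughly like this (the host, domain, user, and password are placeholders, and obviously it won't run without a reachable Windows box):

```shell
# Run a single command on the Windows machine and print its output locally
winexe -U 'EXAMPLE/Administrator%secret' //192.168.1.50 'cmd.exe /c ipconfig /all'

# Or get an interactive cmd.exe prompt on the remote machine
winexe -U 'EXAMPLE/Administrator%secret' //192.168.1.50 cmd.exe
```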
Subject: Systemd uber.
I have butted heads with Patrick Volkerding before (on the topic of including PAM in the mainline of Slackware) - and failed, back in the late 90s. What would you suggest to help him accept systemd as a "good thing"? - Martha Stuart "T.M."
I don't have enough education on systemd to make an educated guess as to what is best for the future of Linux, but it sounds like you do.
This URL does a reasonably decent job of describing the problem that systemd (and other such tools) solve:
Problem is, right there in the article, they note that Patrick Volkerding is wary of systemd because the attitude of its key developer towards users and bug reports doesn't seem to be good. That kinda suggests the tool might struggle not because of its technical merits, but because of its handlers.
For my part, I struggled a little to find simple documentation for how I would modify my setup to embrace systemd, and had to learn much of what I know through trial and error... and I'm still not sure I quite grasp the way they handle dependencies (which might be poor documentation on the part of the OS distributor, or systemd being a tad too loosey-goosey about how you spell out your dependencies).
Gads.. but that's not the best use of English there.
The article notes Volkerding doesn't view the maintainer of systemd as being good towards users and bug reports. And I guess some of these guys feel its philosophy is 'weird,' not quite matching the Unix ideal for controlling system processes.
What 'good' means, I don't know. I guess maybe the maintainer isn't responsive, or is overly dismissive, or ... something?
"I'll be here again with another Interesting article you people will love to read."
If I thought the author had good English skills, I'd find that offensive.
Instead, I think he doesn't quite intend what he wrote.
Why put systemd in every distribution? If all distros were alike, we wouldn't need so many of them.
imho, the old system where you manually number initscripts to determine boot order was bad. I hated it in SuSE.
Gentoo, for example, uses OpenRC, where you define the dependencies of an initscript and then the ordering all gets resolved automatically. I like that. I like Gentoo. IIRC, Arch used it for a while, too.
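The OpenRC style looks something like this - a sketch of a service script for a made-up "mydaemon" program, with the dependency declarations living in depend():

```shell
#!/sbin/openrc-run
# Sketch of an OpenRC service script; "mydaemon" and its path are made up.
command="/usr/local/bin/mydaemon"
command_background="yes"
pidfile="/run/mydaemon.pid"

depend() {
    need net        # hard dependency: only start once networking is up
    use logger      # soft dependency: start after logger, if it's enabled
}
```

OpenRC works out the boot order from those need/use declarations, so you never renumber anything by hand.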
I have yet to get used to systemd, though I'll have to, since it will be in every major distro in the future, whether that is a good thing or not. For now, I maintain mostly CentOS 6 based systems.
Subject: Look what I just found...
The following command, when run as any user, will crash systemd:
NOTIFY_SOCKET=/run/systemd/notify systemd-notify ""
After running this command, PID 1 is hung in the pause system call. You can no longer start and stop daemons. inetd-style services no longer accept connections. You cannot cleanly reboot the system. The system feels generally unstable (e.g. ssh and su hang for 30 seconds, since systemd is now integrated with the login system). All of this can be caused by a command that's short enough to fit in a Tweet.
Neato! That worked rather well on Ubuntu 16 server.