I'm seeing a lot of people who learn the security part of the job, but do not know anything about maintaining a system.
That is, they might understand how to call the various programs to break into a network and maybe run any one of the standard exploits that are commonly available, but they don't know how to configure a firewall, or even how to connect a Windows machine to a Windows domain.
It's more maddening than ever, and yet it doesn't seem to have advanced in a lot of ways.
The balkanization is bordering on the psychotic.
Sadly, Windows 10 is easier to live with these days.
BITD, it was simple: you used Red Hat, or you were out of the mainstream.
Too bad they shot themselves in the foot.
And every year you make the same false assumption: that the entire Linux universe is expected to behave as if it were a single entity, like Microsoft or Apple.
I will agree at this point that the traditional Linux desktop is a non-starter. There was a time when everyone "knew" that if the Windows desktop monopoly was not vanquished, Microsoft would eventually use it as leverage to take over everything else in computing. But here we are two decades later: Windows still rules the desktop, and only the desktop. Linux is king on servers - no one who's serious about servers uses Windows Server except to run Microsoft's own server applications. Linux (via Android) owns roughly two-thirds of the mobile and tablet market, and most of the rest belongs to Apple. "The Cloud" is almost exclusively built on Linux.
So does it make a difference? In some ways, yes: Miguel de Icaza, widely known as the person who single-handedly sabotaged Linux's first and best chance of having a unified desktop, is now officially (instead of secretly) employed by Microsoft, and it's frustrating to see how well that worked. In other ways, no: since we do pretty much everything through a browser these days, it's hard to get excited about any desktop operating system at all. I spend my days tabbing back and forth between Chrome and MobaXterm.
I don't have a problem with so many different distributions.
Each one seems tuned to a particular purpose. It's a kind of customization that I find encouraging, actually. If you understand what you're trying to do, and you choose the right distribution, you'll do okay.
I think a good chunk of the dominant distributions are derived from Debian. Next, I'd look at Red Hat (or CentOS if you wanna be cheap).
I guess you'd use Knoppix if you want something that boots off a USB stick on any machine, and maybe Arch Linux if you kinda want to build something from scratch, but you don't really know how (maybe you're trying to create your own embedded OS distribution or something).
At least, I think that's how these things break down.
Debian/Red Hat for general OSes, derivatives for their particular purposes (e.g. Ubuntu if you want something with slightly more current packages than Debian and you don't mind being a bit on the edge, etc.).
I think once you understand this tuning that goes on, it's not so bad. Especially since most distributions seem derived from only a few main distributions.
Well yes, that's kind of the point. Every distributor has use cases in mind.
A certain well-known operating system vendor has in recent history tried to build a single operating system designed to work the same on servers, desktops, tablets, mobile devices, and video game consoles ... and we all know how well *that* went.
I hate to say it, but I'm kind of warming up to systemd. Despite the fact that it replaces a lot of subsystems that have been around forever, it has the feel of having been *designed* as a system manager, as opposed to just a pile of scripts that have evolved and adapted over the last 40 years. Having "one way" to install and maintain services on the system is really nice ... as long as you can convince everyone to adopt it. But with Red Hat/Fedora/CentOS, Debian/Ubuntu, and SuSE/OpenSuSE all moving to systemd, it's kind of a done deal at this point. (Sorry, Slackware and Gentoo folks, we love you, but no one is running those variants in serious production deployments.)
systemd has been compared to svchost.exe on Windows. I suppose that's both a good and bad thing.
However, we're not talking about specialized device distros. We're just talking mainstream desktop or server. The variations are enough to drive one mad.
I think systemd feels a bit like the Windows service control manager, but less intrusive.
That is, the Windows SCM requires you write your code with the SCM in mind.
You have to know certain structures, and call certain functions with certain values in order to be a proper Windows service.
In Linux, it feels more like you have to understand how to daemonize your code... unless you don't feel like it, in which case you set up your .service file to take care of the daemonization details for you. You just write your code however you intend to write it, and then tell systemd how to deal with it.
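As a rough sketch of that division of labor (the unit name and binary path here are hypothetical), the systemd side can be as small as this:

```ini
# /etc/systemd/system/myserver.service  (hypothetical name and path)
[Unit]
Description=Example server that just runs in the foreground
After=network.target

[Service]
# Type=simple: the program does NOT daemonize itself;
# systemd supervises the process it started directly.
Type=simple
ExecStart=/usr/local/bin/myserver
# restart it automatically if it crashes
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After that, `systemctl enable --now myserver` starts it, and anything the program prints to stdout lands in the journal.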
Which approach is better?
I dunno. On the one hand, if you're writing something that should run for long periods of time (months, years?) then you maybe ought to be extremely mindful of what you're doing... the SCM approach is yet another thing to deal with, but it isn't that awful since you can get a loose structure of code from Microsoft and build from there.
But the idea of just writing your code how you want it to work, and being able to run it from the command line or as a service (which aids in tracking down problems when you need to use a debugger to catch something while the thing is starting up)... I rather like that as well. Something feels kind of clean about the linux approach.
Upstart, SysV, systemd... just pick one, I feel. I'm kind of annoyed at having to support all of them because we are committed to dealing with ten-year-old Linux distributions.
Considering that sysvinit lasted for decades in more or less the same form, and everyone is moving to systemd, I think it's a good guess that systemd will be long term viable. I like it better, anyway. You don't have to worry about daemonizing your program and it can automatically be restarted if it crashes. I've been known to run server programs directly from /etc/inittab for exactly those reasons. Sometimes I'd even call them via openvt(1) so their logs would show up on a virtual console, like they did on a Netware server.
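For anyone who hasn't seen that trick: an inittab line with the respawn action does the restart-on-crash part, and openvt(1) puts the output on a console. A sketch, with a made-up id and path:

```
# /etc/inittab fragment (hypothetical id and path):
# init restarts the program whenever it exits or crashes
sv:2345:respawn:/usr/local/bin/myserver

# variant: run it on virtual console 8 via openvt(1); -w makes
# openvt wait for the command to finish, so respawn behaves
#sv:2345:respawn:/usr/bin/openvt -w -f -c 8 -- /usr/local/bin/myserver
```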
Daemonizing your program is pretty easy now too. It was always pretty easy, but now there's a daemon(3) library call that does it for you.
Heh, there's also a daemon executable you can install, if you prefer.
So, what happens when some smartaleck switches to that virtual console, without even being logged in, and types Control-S? That's XOFF flow control: output to that console freezes, and a program writing synchronously to it can end up blocked.
Anyway, if they can get to the virtual console, they're probably already in a position to shut the whole server off.
I liked being able to "see it running" very easily, whether it was Netware or Lotus Notes or whatever. I liked being able to connect to the console and have the ability to type stuff at it. Asterisk does this pretty well, actually. If you have the authority you can connect to the console and see everything it's doing and enter commands, and disconnect without stopping the server.
Going forward though, it's pretty simple. systemd won't need your program to daemonize at all, but when you do need to go into the background, a simple call to daemon(0,0); does the trick. I'm not sure which of those imitation Linuxes like FreeBSD or Mac OS or Solaris it works on, though.
It looks like the FreeDesktop folks have written a bit of a field guide for us, though:
[ https://www.freedesktop.org/software/systemd/man/daemon.html ]
I recently stumbled across a tool called "winexe", which lets you remotely connect from a Linux box to a Windows box in order to execute a command there, cmd.exe for example.
A groupware distribution for schools uses this in a very neat way: you join a Windows machine to its Samba NT4 domain, you use the web interface to install the opsi-agent via winexe, and then you are able to install a large amount of software via OPSI itself. I have to study their configs in order to replicate that for my other clients.
Groupware distribution: https://iserv.eu/ (German only, it seems)
Winexe: https://lbtwiki.cern.ch/bin/view/Online/WinExe (source via SourceForge, sadly)
OPSI (Open PC Server Integration) is a software distribution tool which also lets you keep tabs on installed software, some license keys, and the hardware: http://www.opsi.org/en