Actually, truth be told, I'm coming to hate linux as I hate windows.
It's a different set of problems, but boy-howdy does linux give me headaches sometimes.
I have to be able to support linux machines that are 10 years old (which is already ridiculous, but I fully understand why we have this need). And because we're closed-source, we have to build these things and provide installers and setups.
Windows is nice and simple if you want to have a service. You just conform to the SCM and everything works nicely.
But linux? Daemonization is only the first (and easiest) hurdle to cross.
After that, you have to figure out what kind of init script... are they using systemd, upstart, or the traditional sysv init scripts? Yeah, maybe you can extend a middle finger and just do sysv init scripts for everything, but you miss out on the features the other methods provide.
Oh, and if you're installing on an older Kali box, you have to fiddle with update-rc.d a little bit to make it so your service will start when the machine comes up. Unless you want to do what that script does manually in your Debian postinst file.
I just want to get stuff done, and I'd like as few surprises as possible while trying to make that happen. Both OSes provide lots of surprises.
The problem, of course, is that you can't (yet) count on systemd being everywhere.
Upstart sucks and is going away, so we can disregard that.
I have to rewrite my install routines soon, and I'm thinking about abandoning sysv-init scripts altogether. If systemd is detected, we go with that; otherwise we put an entry directly into /etc/inittab. In both cases, the program runs in non-forking mode, and can restart automatically if it crashes.
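In case it helps make that concrete, here's a rough sketch of both halves of that plan. The unit name, binary path, and --no-fork flag are made up for illustration; the point is Type=simple plus Restart= on the systemd side, and a respawn entry on the inittab side.

    # /etc/systemd/system/myserver.service (sketch)
    [Unit]
    Description=myserver (illustrative name)
    After=network.target

    [Service]
    # Type=simple: the program stays in the foreground, no daemonization needed
    Type=simple
    ExecStart=/usr/local/sbin/myserver --no-fork
    # restart it automatically if it crashes
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

    # rough /etc/inittab equivalent on a sysvinit box; "respawn" restarts it if it dies
    ms:2345:respawn:/usr/local/sbin/myserver --no-fork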
I'd love to disregard upstart, but sadly, I can't. Or, at least, I shouldn't, if I want this tool to work and play nicely on older Ubuntu machines.
Ugh. Having to support old machines sucks.
I also can't drop sysv scripts, much as I wish to, for the same reason.
So, I have a funny postinst script that detects which flavor of init the machine uses, and installs the appropriate script (with, in systemd's case, slight modifications to the file because systemd doesn't pass environment variables, so I have to set them explicitly).
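The detection part doesn't have to be huge, for what it's worth. A sketch of the idea, with hypothetical "myserver" file names; checking for the /run/systemd/system directory is the usual way to tell whether systemd is actually running as PID 1:

    #!/bin/sh
    # postinst fragment (sketch only; the myserver paths are invented)
    set -e

    if [ -d /run/systemd/system ]; then
        # systemd is PID 1
        cp /usr/share/myserver/myserver.service /etc/systemd/system/
        systemctl daemon-reload
        systemctl enable myserver.service
        systemctl start myserver.service
    elif initctl version 2>/dev/null | grep -q upstart; then
        # upstart (older Ubuntu)
        cp /usr/share/myserver/myserver.conf /etc/init/
        initctl reload-configuration
    else
        # classic sysv init script as the fallback
        cp /usr/share/myserver/myserver.init /etc/init.d/myserver
        chmod 755 /etc/init.d/myserver
        update-rc.d myserver defaults
    fi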
released a statement that if your system cannot go into the newer power states, premature failure may occur for those using Skylake (and similar) SOCs and CPUs.
I caught wind of this article again, while researching some flakiness on a *Haswell* *Windows* box.
I think the system damage concern may be limited to Skylake notebooks. There's a lot more thermal headroom on a desktop.
Look, C6/C7 states are hard to implement. They shut down the whole CPU package and disable PCI-E cache coherence snoops. Every driver for every PCI-E device on your system must cooperate with the whole song and dance, and it's not going to be possible to enter C7 on a system with discrete graphics.
My Windows 10 box was regularly blowing up (hard freezes) after I tried enabling C6/7 in the BIOS. This problem is by no means limited to Linux.
For example: I suspect every SATA controller on your system must agree to enter Link-Power-Management state when your CPU goes C6 or 7.
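If anyone wants to poke at their own box, a couple of generic sysfs spots are worth a look (present on reasonably recent kernels; the exact entries depend on your drivers):

    # idle states the cpuidle driver exposes for cpu0 (POLL, C1, C6, ...)
    cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name
    # whether the SATA links are allowed to power down (min_power vs. max_performance)
    cat /sys/class/scsi_host/host*/link_power_management_policy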
That's what I've heard though, and in practice the people crowing the most about Kali seem to be "not Linux people" who are just happy to have a pre-built set of tools for them in one place so they don't have to "become Linux people" to perform pen testing. And that's ok, I
But these are not people who are competent to perform pen testing. Wow.
I'm seeing a lot of people who learn the security part of the job, but do not know anything about maintaining a system.
That is, they might understand how to call the various programs to break into a network and maybe run any one of the standard exploits that are commonly available, but they don't know how to configure a firewall, or even how to connect a Windows machine to a Windows domain.
It's more maddening than ever. And yet it doesn't seem to have advanced in a lot of ways.
The balkanization is bordering on the psychotic.
Sadly, Windows 10 is easier to live with these days.
BITD, it was simple: you used Red Hat, or you were out of the mainstream.
Too bad they shot themselves in the foot.
And every year you make the same false assumption, that the entire Linux universe is expected to behave as if it were a single entity, like Microsoft or Apple.
I will agree at this point that the traditional Linux desktop is a non-starter. There was a time when everyone "knew" that if the Windows desktop monopoly was not vanquished, Microsoft would eventually use it as leverage to take over everything else in computing. But here we are two decades later, Windows still rules the desktop, and only the desktop. Linux is king on servers - no one who's serious about servers uses Windows Server except to run Microsoft's own server applications. Linux owns 66% of mobile and tablet market share, and most of the rest belongs to Apple. "The Cloud" is almost exclusively built on Linux.
So does it make a difference? In some ways, yes: Miguel de Icaza, widely known as the person who single-handedly sabotaged Linux's first and best chance of having a unified desktop, is now officially (instead of secretly) employed by Microsoft, and it's frustrating to see how well that worked. In other ways, no: since we do pretty much everything through a browser these days, it's hard to get excited about any desktop operating systems at all. I spend my days tabbing back and forth between Chrome and MobaXterm.
I don't have a problem with so many different distributions.
Each one seems tuned to a particular purpose. It's a kind of customization that I find encouraging, actually. If you understand what you're trying to do, and you choose the right distribution, you'll do okay.
I think a good chunk of the dominant distributions are derived from Debian. Next, I'd look at Red Hat (or CentOS if you wanna be cheap).
I guess you'd use Knoppix if you want something that boots off a stick drive on any machine, and maybe Arch Linux if you kinda want to build something from scratch but don't really know how (maybe you're trying to create your own embedded OS distribution or something).
At least, I think that's how these things break down.
Debian/Redhat for general OSes, derivatives for their particular purposes (e.g. Ubuntu if you want something with slightly more current packages than Debian and you don't mind being a bit on the edge, etc).
I think once you understand this tuning that goes on, it's not so bad. Especially since most distributions seem derived from only a few main distributions.
Well yes, that's kind of the point. Every distributor has use cases in mind.
A certain well-known operating system vendor has in recent history tried to build a single operating system designed to work the same on servers, desktops, tablets, mobile devices, and video game consoles ... and we all know how well *that* went.
I hate to say it, but I'm kind of warming up to systemd. Despite the fact that it replaces a lot of subsystems that have been around forever, it has the feel of having been *designed* as a system manager, as opposed to just a pile of scripts that have evolved and adapted over the last 40 years. Having "one way" to install and maintain services on the system is really nice ... as long as you can convince everyone to adopt it. But with RedHat/Fedora/CentOS, Debian/Ubuntu, and SuSE/OpenSuSE all moving to systemd, it's kind of a done deal at this point. (Sorry slackware and gentoo folks, we love you, but no one is running those variants in serious production deployments.)
systemd has been compared to svchost.exe on Windows. I suppose that's both a good and bad thing.
However, we're not talking about specialized device distros. We're just talking mainstream desktop or server. The variations are enough to drive one mad.
I think systemd feels a bit like the Windows service control manager, but less intrusive.
That is, the Windows SCM requires you write your code with the SCM in mind.
You have to know certain structures, and call certain functions with certain values in order to be a proper Windows service.
In Linux, it feels more like you have to understand how to daemonize your code... unless you don't feel like it, in which case you set up your .service file to take care of the daemonization details for you. You just write your code however you intend to write it, and then tell systemd how to deal with it.
Which approach is better?
I dunno. On the one hand, if you're writing something that should run for long periods of time (months, years?) then you maybe ought to be extremely mindful of what you're doing... the SCM approach is yet another thing to deal with, but it isn't that awful since you can get a loose structure of code from Microsoft and build from there.
But the idea of just writing your code how you want it to work, and being able to run it from the command line or as a service (which aids in tracking down problems when you need to use a debugger to catch something while the thing is starting up)... I rather like that as well. Something feels kind of clean about the linux approach.
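Concretely, with a non-forking binary and a Type=simple unit you can exercise the exact same executable both ways, no code changes needed (the names here are invented, as above):

    # run it by hand, maybe under a debugger, to catch startup problems
    gdb --args /usr/local/sbin/myserver --no-fork
    # or let systemd supervise it and read the logs afterwards
    systemctl start myserver.service
    journalctl -u myserver.service -f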
Upstart, SysV, systemd... just pick one, I feel. I'm kind of annoyed at having to support all of them because we are committed to dealing with 10 year old linux distributions.
Considering that sysvinit lasted for decades in more or less the same form, and everyone is moving to systemd, I think it's a good guess that systemd will be long term viable. I like it better, anyway. You don't have to worry about daemonizing your program and it can automatically be restarted if it crashes. I've been known to run server programs directly from /etc/inittab for exactly those reasons. Sometimes I'd even call them via openvt(1) so their logs would show up on a virtual console, like they did on a Netware server.
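For the curious, the openvt(1) trick is just a wrapper in the inittab entry, something along these lines (server path invented; the -w flag matters so that respawn waits for the program to actually exit):

    # /etc/inittab: run the server on virtual console 9, respawn it if it dies
    sv9:2345:respawn:/usr/bin/openvt -f -c 9 -w -- /usr/local/sbin/myserver --no-fork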
Daemonizing your program is pretty easy now too. It was always pretty easy, but now there's a daemon(3) library call that does it for you.