Tue Nov 28 2017 04:42:19 PM EST from LoanShark @ Uncensored
I don't think we're talking about the same thing. Amazon is committed to supporting what they call PVM-based AMIs, which, as I understand it, are Xen paravirtualized kernels that predate pv-ops and can't boot as HVM, bare metal, dom0, etc. (all of which recent pv-ops kernels support). Otherwise there would be no point to any distinction between PVM and HVM AMIs.
They do, however, have step-by-step instructions on how to convert from PVM to HVM.
And yet encouraging at the same time. Linux Journal was the dead tree publication of record for the Linux revolution. Now that Linux has, for all practical purposes, become the fabric of standard computing, the need for such a publication has come and gone.
This is not to say it won't be missed, but did any of us here have a subscription?
I was a subscriber for a couple of years but that was a long time ago.
In other news, Slashdot is still around, but its headlines are absolutely terrible.
Had they kept going with the printed material, it might seem a bit like trying to create paper from a long-dead horse by beating it.
They do, however, have step-by-step instructions on how to convert
from PVM to HVM.
Not that it's rocket science or anything, but different distributions have different selections of PVM vs. HVM AMIs available, so you may find yourself in the middle of a weeks-long migration process before you know it.
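For what it's worth, the last step of such a migration usually boils down to registering a new AMI with the HVM virtualization type from a snapshot of the converted root volume. A rough boto3 sketch of that step (the name, architecture, and device mapping here are illustrative, not Amazon's exact procedure):

```python
# Hypothetical sketch of the final "register as HVM" step of a PVM -> HVM
# migration. Assumes the converted root volume has already been snapshotted.
def register_hvm_image(ec2, name, snapshot_id):
    """Register a new HVM AMI from a snapshot of the converted root volume."""
    resp = ec2.register_image(
        Name=name,
        VirtualizationType="hvm",          # the key difference from a PVM AMI
        Architecture="x86_64",             # illustrative
        RootDeviceName="/dev/xvda",        # illustrative
        BlockDeviceMappings=[{
            "DeviceName": "/dev/xvda",
            "Ebs": {"SnapshotId": snapshot_id, "DeleteOnTermination": True},
        }],
    )
    return resp["ImageId"]
```

In practice you'd pass `boto3.client("ec2")` as `ec2`; taking the client as a parameter just keeps the sketch testable without AWS credentials.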
Icinga appears to be maintained by assholes:
"...we were aware that users won't read changelogs/blog posts/docs prior to upgrading and breaking their environment ... still, the new notification scripts make sense and were a whole lot of work, carefully considering what could possibly break."
Yeah, like my notifications, which led to a critical problem that went unnoticed for about half a day because, although Icinga was trying to notify us that the service was dead, it couldn't send the e-mails.
This happened after I applied an 'apt update/apt upgrade', which is the normal procedure one does to maintain security updates. One isn't expected to read volumes of bullshit on how to get some byzantine pile of shit to work after an upgrade... you do an upgrade, and it just fucking works, applying whatever needs to be applied to make the upgrade work. If you have to break something, do it in the next release of the OS. Don't pile it onto the apt chain.
So, yeah. Assholes.
If there isn't anything significantly better than this pile of turd, there's an opportunity to make something.
fleeb, are you running 16.04 and using the package from `universe`?
"Closing this as a non-issue." Yeah, that's classy.
They are correct in that people don't bother to read ... my experience has been that people don't even bother to make backups/snapshots of production systems before performing an upgrade, and then they get their panties in a bunch when something goes wrong ... but you have to at least try to bring the existing installation cleanly forward, or at least stop and decline the install if it can't be done.
We've got a module in Citadel that steps through every in-place upgrade needed for almost 20 years' worth of old versions. It's a pain in the neck, but you have to do it.
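The stepping idea is simple enough to sketch. This isn't Citadel's actual code, just a toy illustration of the pattern: each migration only knows how to go from version N to N+1, and on startup you walk every step between the stored version and the current one.

```python
# Toy sketch of a stepped in-place upgrade module. The version numbers
# and migration bodies are invented for illustration.
MIGRATIONS = {
    1: lambda data: {**data, "format": "v2"},        # upgrade 1 -> 2
    2: lambda data: {**data, "checksum": "added"},   # upgrade 2 -> 3
}
CURRENT_VERSION = 3

def upgrade(data, from_version):
    """Apply every migration between from_version and CURRENT_VERSION, in order."""
    version = from_version
    while version < CURRENT_VERSION:
        data = MIGRATIONS[version](data)
        version += 1
    return data, version
```

The nice property is that a new release only has to add one migration; installations that are several versions behind get brought forward one clean step at a time instead of needing a special N-to-latest path for every old N.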
anything delivered by apt-get in an ubuntu default config ("universe" possibly excepted?) should be critical-patch-only without major upgrade issues
LS: Debian OS, not Ubuntu.
And I'm sure their out on the matter is the fact that I had to add their repository, so they can dictate whatever dumb rules they want (in their feral brains), so caveat emptor and shit like that.
The apt system is powerful enough for them to convert someone's old configuration files over to a new format, if they gave a damn about the folks using their crap. Too much work? Well, the work put into such a thing pales in comparison to the number of people they likely fucked with their updates, and the work *those* guys had to do to unfuck their systems.
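A config migration hook doesn't have to be fancy, either. A toy sketch of what a package's upgrade step could do (the key names are invented for illustration, not Icinga's actual config syntax):

```python
# Hypothetical sketch of a config migration an upgrade hook could run:
# translate old-style keys to the new names instead of silently breaking them.
def migrate_notification_config(old_lines):
    """Map old 'notify_email = x' style keys to the new key names."""
    renames = {"notify_email": "email.to", "notify_host": "email.host"}
    new_lines = []
    for line in old_lines:
        if "=" not in line:
            new_lines.append(line)   # comments and blanks pass through as-is
            continue
        key, _, value = line.partition("=")
        key = key.strip()
        new_lines.append(f"{renames.get(key, key)} = {value.strip()}")
    return new_lines
```

A few dozen lines like this in a postinst script, and nobody's notifications die silently after an `apt upgrade`.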
All of this said, I've learned way more about Icinga than I ever expected or wanted.
It might be entertaining to start contributing code to them, to the point that you become indispensable, then fix all the ways they're nasty to their users. Starting with the abortion of a GUI they call 'Director'.
Reading through the backlog: OpenRC is hot shit in my opinion. It did all the things the other init replacements tried to (fast boot, parallelization, etc.), and that before systemd was even conceived. And it still does. I am using it on my gentoo installs with joy. It feels natural, it can be called like old init scripts, etc. One of the reasons why I love gentoo.
anything delivered by apt-get in an ubuntu default config ("universe" possibly excepted?) should be critical-patch-only without major upgrade issues
I've recently experienced the adventure of living through a major Debian upgrade from an upstream developer's point of view. They went to OpenSSL 1.1, which breaks all of the deprecated APIs from OpenSSL 1.0. These folks do a lot of hard work. I only had to make some upstream changes; the Debian folks have to do the same for hundreds, or even thousands, of packages.
Not sure where I'm going with this other than that it's a lot of work and it's impressive, even with its faults.
I agree about the upgrade to OpenSSL 1.1.x. That was an amazing accomplishment, given how smoothly it went.
What I did find, however, is that once you port applications to OpenSSL 1.1, they still work in OpenSSL 1.0, because the "new" APIs have been there for a while.
The funny thing is, when you install libssl-dev, you get 1.1, and then when you build any significantly complex program, you get compiler warnings about other libraries using libssl-1.0. Then when you look at your final binary you find it's linked to OpenSSL *and* GnuTLS because some other library brought the latter along with it.
This is the real reason we end up with containers :)
Yeah, right? I'm not a big fan of containers, but Linux probably does need them, in a sense, because some of the shared library decisions are so careless.
Plus you can install and remove them in one step, and they can be sandboxed, etc. etc.
But I don't like them? ;-)
I think what bothers me the most is there are signs that sandboxing is starting to become a userland crutch for DLL hell. That just leads to bloat.
As suggested above, I think memory deduplication is probably the bigger win in the long run, and, if the operating systems can get it right, more viable long-term than shared libraries.
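For illustration, the core of page-level dedup (the idea behind Linux's KSM) is just content-addressing: hash each page, store identical pages once, and let every user of that content reference the single shared copy. A toy sketch, nothing like the kernel's actual copy-on-write machinery:

```python
# Toy model of page-level memory deduplication: identical pages are stored
# once and shared. This is how containers running the same libraries could
# avoid paying for them repeatedly, even without shared-library linkage.
import hashlib

def deduplicate(pages):
    """Return (store, table): shared page storage and per-page references."""
    store = {}   # content hash -> single shared copy of the page
    table = []   # per-page reference into the store
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        store.setdefault(digest, page)   # keep only the first copy
        table.append(digest)
    return store, table
```

The real thing has to handle writes (break the sharing with copy-on-write) and scan incrementally, which is where it gets hard; but if the OS does get it right, the savings don't depend on everyone linking against the same `.so` files.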
Subject: 2017 LinuxQuestions.org Members Choice Results
For anyone who cares ... linuxquestions.org did a poll to see what software is popular in the Linux world this year. Here are the results:
[ https://www.linuxquestions.org/questions/2017mca.php ]
Some of the highlights:
* Ubuntu, Slackware, and Mint are the most popular desktop distributions
* MariaDB is now twice as popular as MySQL ... not surprising, considering Oracle is getting stupid with the licensing
* Firefox is the overwhelming favorite browser.
* KDE is the favorite desktop, followed closely by Xfce. GNOME seems to have fragmented.
* VLC dominates both audio and video playback.
* vi and vim win the text editor category. Even lightweight editors like nano and kate are more popular than emacs.
* Python is still the favorite programming language.
No big surprises here, I guess.