That would mean proving myself wrong.
Ford II isn't here to say, "Progress."
Believe me, I've thought about that. Or as I like to say, we've spent decades automating the same tasks over and over again and we keep getting less and less efficient at it.
To carry an entire separate copy of all those libraries and OS components in a container does seem woefully inefficient. Most of us remember programming for small computers that didn't have enough memory to fit the splash screen of a modern application, and it does feel a bit weird to just haphazardly throw around megabytes of overhead.
Maybe that's one reason why I am enjoying playing around with microcontrollers.
Having to fit everything into 32 KB ROM / 2 KB RAM makes it feel like the 1980s again.
If Linux core OS devs didn't want other devs to throw up their hands and deal with dependency hell via containerization, they should have sucked less.
Fri Sep 06 2019 10:04:13 EDT from IGnatius T Foobar @ Uncensored
Pretty much all operating environments now use some sort of self-contained sandboxes to run software.
As I see it, these self-contained sandboxes are primarily used to run proprietary software, because vendors of such software seem to believe they own your machine. Open-source stuff coexists with other open-source stuff just fine; in fact, packages often depend heavily on each other. Look at the 50,000-odd packages available in the standard Debian repositories to see what I mean.
As a data center operator, I can also attest that no one in commercial IT wants to put up with that frustration, which is why until recently we've resorted to putting every major application into a dedicated OS instance on its own virtual machine.
Containers solve *both* problems. You get isolation from other workloads, you get to bring along all of your known-compatible libraries and dependencies, and you don't have to consume the overhead of maintaining an entirely separate virtual machine.
Yes, if you cut your teeth in the pre-Linux era when a unix machine cost big bucks and you only had one of them, you had to learn how to make all of the software get along on the same host. I know how to do that. You sound like you know how to do that too. But I assure you no one wants to pay the likes of us for the time it takes to make that work, when it's less expensive (to the business) to simply deploy a little more memory and containerize everything.
I will concede, however, that containers represent a huge *opportunity* for proprietary software vendors to more easily distribute packages that contain opaque blobs which no longer need to be understood by the rest of the system software.
Right. More and more open-source software is shipping as snaps or whatever. Docker is becoming the new industry-standard packaging mechanism to spin up an instance of your favorite open-source service in the cloud, because you're going to want to upgrade the host OS on a schedule that is not dictated by the package dependencies of the several containers sitting on top of it.
You don't really need to do this with containers, but containers do make it significantly easier for most applications.
We write proprietary code for Linux and Windows that must work on 10-year-old versions of the operating system. We only distribute binaries, no sources.
And we can't use containers, as they isolate software too much for our needs.
Because of what we want to accomplish, we *can't* 'own the machine'. But, we don't want the hassle of wondering what fucking dependency broke our shit, so we build our own dependencies (to 10-year-old specs), and engineer everything to only depend on the kernel, to the degree that's possible.
We do, however, want to own our own sources. We're a closed-source shop.
We're very far ahead of any competitors in our space. If we opened the sources, we would significantly risk losing that edge.
I would point out, though, that open-source software also conflicts with other open-source software... it happens quite a lot. It falls on the distribution's package maintainers to curate everything and ensure it all works together; what you see is a bit of an illusion created by that curation process. Eventually, a breaking piece of software is either brought to heel or forces changes in dependent software to make everything work again. But if you think open-source software never breaks and everything is peaches and funshine with cookies and happy squeaky ponies, you haven't read through many of the tickets those maintainers are solving.
As a closed-source developer, I find it frustrating that I can't be part of that process. I recognize that I risk 'poisoning' open-source with certain contributions if I'm not careful. I can't build features for open-source very easily... I'd have to do it on my own time, and the result would therefore not be of particularly good quality (since I've already spent much of my day working on closed-source stuff to earn my living). I can contribute tiny bug fixes, but that's about it.
So, I don't have a lot of influence on open-source development.
> Because of what we want to accomplish, we *can't* 'own the machine'.
> But, we don't want the hassle of wondering what fucking dependency
> broke our shit, so we build our own dependencies (to 10-year-old
> specs), and engineer everything to only depend on the kernel, to the
> degree that's possible.
And THAT is one of the main problems that containers can solve.
Some people say that containers are just a more lightweight virtual machine, but that's looking at it from the wrong direction. By containerizing an application, you eliminate the *reason* we've gotten into the habit of building dedicated virtual machines for a single application. The dependencies are guaranteed to be there, you don't have to worry about how it interoperates with other software on the same machine, etc. etc.
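That guarantee can be sketched as a hypothetical Dockerfile (base image, package, and binary names are purely illustrative): the image pins its own libraries, so the host only needs a container runtime.

```dockerfile
# Illustrative only: pin a known-good base so the app's libraries
# travel with it, independent of the host's package versions.
FROM debian:10-slim

# Install the exact dependencies the app was tested against.
# (libpq5 is a made-up example dependency.)
RUN apt-get update && apt-get install -y --no-install-recommends \
        libpq5 \
    && rm -rf /var/lib/apt/lists/*

COPY myapp /usr/local/bin/myapp

# Upgrading the host OS cannot break this application's
# dependency set; it lives entirely inside the image.
ENTRYPOINT ["/usr/local/bin/myapp"]
```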
Yeah, but then you create a dependency on Docker itself, which is a no-go for us.
Furthermore, it isolates everything so nicely, the whole point of our software becomes invalidated... we're trying to pay attention to the changing state of the machine to evaluate students. Containers would get in the way of that.
> changing state of the machine to evaluate students. Containers would
> get in the way of that.
Maybe. Or maybe containers would create a fast & easy way to spin up various alternative environments with differing vulnerabilities.
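For instance (service names and image tags are purely hypothetical), a compose file could stand up two deliberately different student environments side by side:

```yaml
# Hypothetical sketch: two lab environments that differ only in how
# old (and therefore how vulnerable) their base userland is.
services:
  lab-vulnerable:
    image: debian:9          # older userland, known CVEs left in place
    command: sleep infinity
  lab-patched:
    image: debian:12         # current, patched userland
    command: sleep infinity
```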
The dependency on Docker is still going to be there, though. No getting around that. Unless, as LS suggested, you run the environment being observed *in* a container.