Ford II isn't here to say, "Progress."
Believe me, I've thought about that. Or as I like to say, we've spent decades automating the same tasks over and over again and we keep getting less and less efficient at it.
To carry an entire separate copy of all those libraries and OS components in a container does seem woefully inefficient. Most of us remember programming for small computers that didn't have enough memory to fit the splash screen of a modern application, and it does feel a bit weird to just haphazardly throw around megabytes of overhead.
Maybe that's one reason why I am enjoying playing around with microcontrollers.
Having to fit everything into 32 KB ROM / 2 KB RAM makes it feel like the 1980s again.
If Linux core OS devs didn't want other devs to throw up their hands and deal with dependency hell via containerization, they should have sucked less.
That's true in a lot of places. Pretty much all operating environments now use some sort of self-contained sandboxes to run software.
As I see it, these self-contained sandboxes are primarily used to run proprietary software, because vendors of such software seem to believe they own your machine. Open-source stuff coexists with other open-source stuff just fine. In fact, open-source packages often depend heavily on each other. Look at the 50,000-odd packages available in the standard Debian repositories to see what I mean.
Maybe. Sometimes not, though. Yes, if you're in the repo, your software has been tested and tuned with everything else in the repo, and it all matches up. But what if your development moves faster than Debian releases? What if you need to support multiple distributions? As an open source developer I can attest that it's tiring to listen to people say "I can't get the program running on Stallman GNAA/Lunix 99.1 with qlibc and OpenSHH 8.7", and it's equally tiring to hear the hundredth person say "I have bug XYZ while using the Debian packages" when it's a bug that was fixed ages ago.
As a data center operator, I can also attest that no one in commercial IT wants to put up with that frustration, which is why until recently we've resorted to putting every major application into a dedicated OS instance on its own virtual machine.
Containers solve *both* problems. You get isolation from other workloads, you get to bring along all of your known-compatible libraries and dependencies, and you don't have to consume the overhead of maintaining an entirely separate virtual machine.
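Here's a rough sketch of that idea using the docker-py SDK, just for illustration (the image name and command are made up): the image carries every library the app was tested against, so the host provides only a kernel and a container runtime instead of a whole guest OS.

    # Sketch only: assumes docker-py (pip install docker) and a hypothetical
    # image "myapp:1.0" that bundles all of its own libraries.
    import docker

    client = docker.from_env()

    # No guest OS to patch and no shared libraries to reconcile with the
    # neighbors; the container brings its known-compatible dependencies along.
    output = client.containers.run(
        "myapp:1.0",
        command="/usr/local/bin/myapp --version",
        remove=True,  # throw the container away when it exits
    )
    print(output.decode())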
Yes, if you cut your teeth in the pre-Linux era when a unix machine cost big bucks and you only had one of them, you had to learn how to make all of the software get along on the same host. I know how to do that. You sound like you know how to do that too. But I assure you no one wants to pay the likes of us for the time it takes to make that work, when it's less expensive (to the business) to simply deploy a little more memory and containerize everything.
I will concede, however, that containers represent a huge *opportunity* for proprietary software vendors to more easily distribute packages that contain opaque blobs which no longer need to be understood by the rest of the system software.
Right. More and more open-source is shipping as snaps or whatever. Docker is becoming the new industry-standard packaging mechanism to spin up an instance of your favorite open source service in the cloud, because you're going to want to upgrade the host OS on a schedule that is not dictated by the package dependencies of the several containers sitting on top of it.
You don't really need to do this with containers, but containers do make it significantly easier for most applications.
We write proprietary code for Linux and Windows that must work on 10-year-old versions of the operating system. We only distribute binaries, no sources.
And we can't use containers, as they isolate software too much for our needs.
Because of what we want to accomplish, we *can't* 'own the machine'. But, we don't want the hassle of wondering what fucking dependency broke our shit, so we build our own dependencies (to 10-year-old specs), and engineer everything to only depend on the kernel, to the degree that's possible.
We do, however, want to own our own sources. We're a closed-source shop.
We're very far ahead of any competitors in our space. If we opened the sources, we would significantly risk losing that edge.
I would point out, though, that open-source packages also conflict with each other... it happens quite a lot. It falls on the distribution's package maintainers to curate everything and ensure it all works together. What you see is a bit of an illusion created by this curation process. Eventually, some breaking piece of software is either brought to heel, or forces changes in dependent software to make everything work again. But if you think open-source software never breaks and everything is peaches and funshine with cookies and happy squeaky ponies, you haven't read through many of the tickets those maintainers are solving.
As a closed-source developer, I find it frustrating that I can't be part of that process. I recognize that I risk 'poisoning' open-source with certain contributions if I'm not careful. I can't build features for open-source very easily... I'd have to do it on my own time, and the result would therefore not be of particularly good quality (since I've already spent much of my day working on closed-source stuff to earn my living). I can contribute tiny bug fixes, but that's about it.
So, I don't have a lot of influence on open-source development.
And THAT hassle of building your own dependencies so nothing underneath you breaks is one of the main problems that containers can solve.
Some people say that containers are just a more lightweight virtual machine, but that's looking at it from the wrong direction. By containerizing an application, you eliminate the *reason* we've gotten into the habit of building dedicated virtual machines for a single application. The dependencies are guaranteed to be there, you don't have to worry about how it interoperates with other software on the same machine, etc. etc.
Yeah, but then you create a dependency on Docker itself, which is a no-go for us.
Furthermore, it isolates everything so nicely that the whole point of our software is invalidated... we're trying to pay attention to the changing state of the machine to evaluate students. Containers would get in the way of that.
Maybe. Or maybe containers would create a fast & easy way to spin up various alternative environments with differing vulnerabilities.
Or you could mount the root as a read-only volume, and the software in the container could observe the host without being part of the host.
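Something like this, say, with docker-py (the image and the observer script are hypothetical): the host's root filesystem gets bind-mounted read-only, so the container can watch the host without being able to touch it.

    # Sketch only: a hypothetical "monitor" image observing the host
    # through a read-only bind mount of / at /host.
    import docker

    client = docker.from_env()

    client.containers.run(
        "monitor:latest",
        command="python watch.py /host",  # hypothetical observer script
        volumes={"/": {"bind": "/host", "mode": "ro"}},  # read-only host view
        detach=True,
    )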
The dependency on Docker is still going to be there, though. No getting around that. Unless, as LS suggested, you run the environment being observed *in* a container.
Well, all that gets super weird, as you layer VMs, Docker images, and whatnot together, while trying to track everything.
This said, I haven't ruled out the use of Docker, but I'd have to install our client software within it, I'm pretty sure.
It's too bad you don't control both sides, because it would actually work pretty well. You could put the target environment in one container and your software in another container, and they could interact using the rules you decide. Then you could even get into techno-philosophical discussions about Heisenberg and Schrödinger effects within the target environment.
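If you did control both sides, that setup might look something like this in docker-py (image names, container names, and the observer command are all invented): both containers sit on a private bridge network and interact only on the terms you set.

    # Sketch only: hypothetical "target-env" and "proctor" images talking
    # over a private bridge network.
    import docker

    client = docker.from_env()
    client.networks.create("examnet", driver="bridge")

    client.containers.run("target-env:latest", name="target",
                          network="examnet", detach=True)
    client.containers.run("proctor:latest", name="proctor",
                          network="examnet", detach=True,
                          command="python observe.py target")  # hypothetical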
Riddle me this, Bat-programmers.
Is it valid/legal for a program (say, an API service) to accept an XML document as input, and then require the tags within that document to be in a particular order?
I'm doing some development against the VMware vCloud API, and finding that some of the calls barf on any inputted XML document that doesn't follow the *exact* order of tags shown in the example.
What kind of fucked-up parser says "I wasn't expecting tag X, I was expecting one of P, Q, or R" even when tag X is going to be required later in the same document? I know of two ways to parse XML input:
1. Use a parser like expat which generates callbacks on every tag, and your program runs as a state machine until the document has been fully parsed, not caring what order the callbacks ran in (this is what I use when writing C programs; sketched below)
2. Use a parser like Untangle which reads the document and creates a complex data type that you can then iterate through (this is what I use when writing Python programs)
But to barf midway through the parsing because you were expecting the tags to be in a specific order? This isn't something like XMPP where the program acts upon the stream as it's being received. The program has the entire document, right down to the closing tags, before parsing and execution begins. I can't think of any legitimate reason for it to have this restriction. Am I wrong to expect it not to suck this much?
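For the record, approach 1 genuinely doesn't care about order. A toy version using Python's standard-library expat bindings (the tag names are invented):

    # Sketch: collect tag contents in whatever order they arrive, and only
    # act once the whole document has been parsed.
    import xml.parsers.expat

    fields = {}
    current = None

    def start_element(name, attrs):
        global current
        current = name  # remember which tag we're inside

    def char_data(data):
        fields[current] = fields.get(current, "") + data

    p = xml.parsers.expat.ParserCreate()
    p.StartElementHandler = start_element
    p.CharacterDataHandler = char_data

    # <size> before <name> or the reverse: the result is the same.
    p.Parse("<vm><size>large</size><name>web01</name></vm>", True)
    print(fields)  # {'size': 'large', 'name': 'web01'}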
I agree, it sounds like weird, lazy parser design.
I hope you have plenty of bourbon to ease the pain of working with such products.
Is it valid/legal for a program (say, an API service) to accept an XML document as input, and then require the tags within that document to be in a particular order?
This is definitely legal. Both XML Schema and DTD validation provide ways to enforce that tags appear in a particular order; it's baked into the spec. Java Servlet containers, in particular, are picky about ordering in web.xml.
Doesn't mean it's good design, but it's definitely legal.
Another example: in SOAP, the Header must come before the Body!
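A quick way to see that in action, assuming lxml is installed (the schema and tag names are invented): the xs:sequence makes element order part of validity, so the same tags in a different order fail validation.

    # Sketch: xs:sequence makes tag order part of XML Schema validity.
    from lxml import etree

    schema = etree.XMLSchema(etree.XML("""
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="vm">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="name" type="xs:string"/>
            <xs:element name="size" type="xs:string"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>"""))

    in_order     = etree.XML("<vm><name>web01</name><size>large</size></vm>")
    out_of_order = etree.XML("<vm><size>large</size><name>web01</name></vm>")

    print(schema.validate(in_order))      # True
    print(schema.validate(out_of_order))  # False: same tags, wrong sequence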
Well, all I can say then is that I'm disappointed... especially when it comes from a place like VMware, where bringing in large frameworks like they're candy is standard practice.