[#] Tue Sep 03 2019 18:57:42 EDT from wizard of aahz @ Uncensored


Prove them wrong?

[#] Tue Sep 03 2019 20:50:48 EDT from LoanShark @ Uncensored



That would mean proving myself wrong.

[#] Wed Sep 04 2019 10:00:37 EDT from IGnatius T Foobar @ Uncensored


Ford II isn't here to say, "Progress."

Believe me, I've thought about that. Or as I like to say, we've spent decades automating the same tasks over and over again and we keep getting less and less efficient at it.

To carry an entire separate copy of all those libraries and OS components in a container does seem woefully inefficient. Most of us remember programming for small computers that didn't have enough memory to fit the splash screen of a modern application, and it does feel a bit weird to just haphazardly throw around megabytes of overhead.

Maybe that's one reason why I am enjoying playing around with microcontrollers.
Having to fit everything into 32 KB ROM / 2 KB RAM makes it feel like the 1980s again.

[#] Wed Sep 04 2019 14:02:24 EDT from LoanShark @ Uncensored



If Linux core OS devs didn't want other devs to throw up their hands and deal with dependency hell via containerization, they should have sucked less.

[#] Fri Sep 06 2019 10:04:13 EDT from IGnatius T Foobar @ Uncensored


That's true in a lot of places. Pretty much all operating environments now use some sort of self-contained sandboxes to run software.

[#] Sat Sep 07 2019 22:25:50 EDT from ldo @ Uncensored


Fri Sep 06 2019 10:04:13 EDT from IGnatius T Foobar @ Uncensored
Pretty much all operating environments now use some sort of self-contained sandboxes to run software.

As I see it, these self-contained sandboxes are primarily used to run proprietary software, because vendors of such software seem to believe they own your machine. Open-source stuff coexists with other open-source stuff just fine; in fact, the packages often depend heavily on each other. Look at the 50,000-odd packages available in the standard Debian repositories to see what I mean.



[#] Fri Sep 13 2019 16:01:24 EDT from IGnatius T Foobar @ Uncensored


Maybe. Sometimes not, though. Yes, if you're in the repo, your software has been tested and tuned with everything else in the repo, and it all matches up. But what if your development moves faster than Debian releases? What if you need to support multiple distributions? As an open source developer I can attest that it's tiring to listen to people say "I can't get the program running on Stallman GNAA/Lunix 99.1 with qlibc and OpenSHH 8.7", and it's equally tiring to hear the hundredth person say "I have bug XYZ while using the Debian packages" when it's a bug that was fixed ages ago.

As a data center operator, I can also attest that no one in commercial IT wants to put up with that frustration, which is why until recently we've resorted to putting every major application into a dedicated OS instance on its own virtual machine.

Containers solve *both* problems. You get isolation from other workloads, you get to bring along all of your known-compatible libraries and dependencies, and you don't have to consume the overhead of maintaining an entirely separate virtual machine.

Yes, if you cut your teeth in the pre-Linux era when a unix machine cost big bucks and you only had one of them, you had to learn how to make all of the software get along on the same host. I know how to do that. You sound like you know how to do that too. But I assure you no one wants to pay the likes of us for the time it takes to make that work, when it's less expensive (to the business) to simply deploy a little more memory and containerize everything.

I will concede, however, that containers represent a huge *opportunity* for proprietary software vendors to more easily distribute packages that contain opaque blobs which no longer need to be understood by the rest of the system software.

[#] Sat Sep 14 2019 19:13:44 EDT from LoanShark @ Uncensored



Right. More and more open-source is shipping as snaps or whatever. Docker is becoming the new industry-standard packaging mechanism to spin up an instance of your favorite open source service in the cloud, because you're going to want to upgrade the host OS on a schedule that is not dictated by the package dependencies of the several containers sitting on top of it.
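To make that concrete, here's a minimal sketch of the pattern LS is describing: a hypothetical Compose file where each container pins its own image tag, so the host OS underneath can be patched and rebooted on its own schedule. The service names and tags are placeholders, not anything from this thread.

```yaml
# Hypothetical docker-compose.yml -- names and tags are examples only.
services:
  webapp:
    image: nginx:1.24        # pinned; upgraded when *we* decide, not when the host OS does
    ports:
      - "8080:80"
  cache:
    image: redis:7.2         # pinned independently of both the host and the webapp
    restart: unless-stopped
```

Upgrading the host kernel or distro release touches neither tag; each service's dependency set rides along inside its own image.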

[#] Mon Sep 16 2019 06:17:23 EDT from fleeb @ Uncensored



You don't really need to do this with containers, but containers do make it significantly easier for most applications.

We write proprietary code for Linux and Windows that must work on 10-year-old versions of the operating system. We only distribute binaries, no sources.
And we can't use containers, as they isolate software too much for our needs.

Because of what we want to accomplish, we *can't* 'own the machine'. But, we don't want the hassle of wondering what fucking dependency broke our shit, so we build our own dependencies (to 10-year-old specs), and engineer everything to only depend on the kernel, to the degree that's possible.

We do, however, want to own our own sources. We're a closed-source shop.
We're very far ahead of any competitors in our space. If we opened the sources, we would significantly risk losing that edge.

I would point out, though, that open-source packages also conflict with each other... it happens quite a lot. It falls on the distribution's package maintainers to curate everything and ensure it all works together. What you see is a bit of an illusion created by that curation process. Eventually, some breaking piece of software is either brought to heel or forces changes in dependent software to make everything work again. But if you think open-source software never breaks and everything is peaches and funshine with cookies and happy squeaky ponies, you haven't read through a lot of the tickets those guys are solving.

As a closed-source developer, I find it frustrating that I can't be part of that process. I recognize that I risk 'poisoning' open-source with certain contributions if I'm not careful. I can't build features for open-source very easily... I'd have to do it on my own time, and the result would therefore not be of particularly good quality (since I've already spent much of my day working on closed-source stuff to earn my living). I can contribute tiny bug fixes, but that's about it.

So, I don't have a lot of influence on open-source development.

[#] Fri Sep 20 2019 10:04:33 EDT from IGnatius T Foobar @ Uncensored


Because of what we want to accomplish, we *can't* 'own the machine'.

But, we don't want the hassle of wondering what fucking dependency broke our shit, so we build our own dependencies (to 10-year-old specs), and engineer everything to only depend on the kernel, to the degree that's possible.

And THAT is one of the main problems that containers can solve.

Some people say that containers are just a more lightweight virtual machine, but that's looking at it from the wrong direction. By containerizing an application, you eliminate the *reason* we've gotten into the habit of building dedicated virtual machines for a single application. The dependencies are guaranteed to be there, you don't have to worry about how it interoperates with other software on the same machine, etc. etc.
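A minimal sketch of what "the dependencies are guaranteed to be there" means in practice: a hypothetical Dockerfile that bakes the app's tested library versions into the image, so they travel with the workload. The base image, package, and binary names are placeholders, not anything from this thread.

```dockerfile
# Hypothetical Dockerfile -- app and package names are examples only.
FROM debian:12-slim

# Install exactly the runtime libraries the app was tested against.
# These ship inside the image, regardless of what the host has.
RUN apt-get update && \
    apt-get install -y --no-install-recommends libssl3 && \
    rm -rf /var/lib/apt/lists/*

COPY myapp /usr/local/bin/myapp
CMD ["/usr/local/bin/myapp"]
```

Two hosts running wildly different distros will still give this app an identical userland, which is exactly the property we used to buy with a whole dedicated VM.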

[#] Fri Sep 20 2019 14:03:40 EDT from fleeb @ Uncensored



Yeah, but then you create a dependency on Docker itself, which is a no-go for us.

Furthermore, it isolates everything so nicely that the whole point of our software is invalidated... we're trying to pay attention to the changing state of the machine to evaluate students. Containers would get in the way of that.

[#] Fri Sep 20 2019 15:37:57 EDT from LoanShark @ Uncensored


changing state of the machine to evaluate students. Containers would get in the way of that.

Maybe. Or maybe containers would create a fast & easy way to spin up various alternative environments with differing vulnerabilities.

[#] Sat Sep 21 2019 16:59:41 EDT from IGnatius T Foobar @ Uncensored


Or you could mount the root as a read-only volume, and the software in the container could observe the host without being part of the host.
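Something like this hypothetical invocation, using Docker's standard read-only bind-mount syntax; the image name and its command are placeholders:

```shell
# Bind-mount the host's root filesystem read-only at /host, so the
# containerized observer can inspect the host without modifying it.
docker run --rm \
    -v /:/host:ro \
    observer-image \
    scan /host
```

The `:ro` suffix is what keeps the observer honest: it can read the host's state under /host but can't write to it.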

The dependency on Docker is still going to be there, though. No getting around that. Unless, as LS suggested, you run the environment being observed *in* a container.

[#] Wed Oct 16 2019 17:13:51 EDT from fleeb @ Uncensored



Well, all that gets super weird, as you layer VMs, Docker images, and whatnot together, while trying to track everything.

This said, I haven't ruled out the use of Docker, but I'd have to install our client software within it, I'm pretty sure.

[#] Sun Nov 17 2019 18:05:24 EST from IGnatius T Foobar @ Uncensored


It's too bad you don't control both sides, because it would actually work pretty well. You could put the target environment in one container, your software in another container, and they could interact using the rules you decide. Then you could even get into techno-philosophical discussions about Heisenberg and Schrodinger effects within the target environment.
