Docker on Windows actually hosts Linux containers in a Hyper-V VM with a true Linux kernel. So there are many things that will work just fine.
BUT -- and this is a big BUT -- it is in a state of active development and things are in flux in a big way. The last time I looked into it (last November), some features were on dev branches and some things were broken. Even if it's all fixed by now, you'll have to account for the fact that you're invoking the docker command-line tools as Windows binaries from a typical Windows shell environment, with all the major or minor compatibility issues that entails.
So yes, I'm doing Docker development on Windows -- I just do it by hosting a normal Ubuntu on VirtualBox, and I use Docker for Linux and do all my dev on the Linux side.
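For what it's worth, the day-to-day workflow looks the same either way; something like this is enough to sanity-check which kernel is actually running your containers (works the same inside the Ubuntu VM or against Docker Desktop's hidden VM):

    # ask the daemon (not the client) what it's running on
    docker version --format '{{.Server.Os}}/{{.Server.Arch}}'
    # and confirm from inside a container -- uname reports the VM's Linux kernel
    docker run --rm alpine uname -a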
So when people talk about "Docker on Windows" is there such a thing as containerized
Windows software, or does it always refer to running Linux software in containers
on a Windows host? I pigged out on Kubernetes classes at VMworld this week
and I *still* don't know the answer to that question.
It's weird to see so much mainstream energy behind a technology that has this much of an open source pedigree.
And if Docker on Windows is just running in a hidden Linux VM, what is the benefit of that over simply running it on Linux in the first place? (It doesn't seem to matter whether the VM is hosted on your desktop or in a data center)
My personal hosting environment is a Supermicro twin-tandem rack server with two nodes. The nodes share nothing except a pair of power supply modules, and there is dual 10 Gbps Ethernet between them. I want to install Kubernetes.
Right now I'm trying to decide whether to build a Kubernetes VM on each node, or simply run Kubernetes directly on each node alongside KVM.
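Either way, the bootstrap itself looks about the same; a rough kubeadm sketch (the node address and the token/hash placeholders are made up -- the real values come out of the init step):

    # on the first node (say 10.0.0.1), bring up the control plane
    sudo kubeadm init --apiserver-advertise-address=10.0.0.1 --pod-network-cidr=10.244.0.0/16
    # on the second node, join with the token that 'kubeadm init' prints
    sudo kubeadm join 10.0.0.1:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

The real difference is just whether the kubelet sees the whole box or only the slice that KVM hands to its VM.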
2019-08-30 00:57 from IGnatius T Foobar
So when people talk about "Docker on Windows" is there such a thing as
containerized Windows software, or does it always refer to running
Linux software in containers on a Windows host? I pigged out on
Kubernetes classes at VMworld this week and I *still* don't know the
answer to that question.
Docker does both. I don't think it's the only containerization technology on Windows, either (even WoW64 might qualify, WSL definitely does, and I'm not sure where Azure fits in).
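To be concrete about the "does both" part: Docker Desktop has a switch for running the daemon in Linux-container or Windows-container mode, and then roughly (the nanoserver tag is just an example, and generally has to match the host's Windows build):

    # which mode is the daemon in right now?
    docker info --format '{{.OSType}}'
    # in Windows-container mode this runs a genuinely Windows-based image,
    # with no Linux kernel involved at all
    docker run --rm mcr.microsoft.com/windows/nanoserver:1809 cmd /c ver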
And if Docker on Windows is just running in a hidden Linux VM, what is
the benefit of that over simply running it on Linux in the first place?
(It doesn't seem to matter whether the VM is hosted on your desktop or
in a data center)
I don't see any benefit either. It's just going to introduce compatibility issues because of the different command-line semantics.
That said, WSL2 is going to be very interesting. It's a Hyper-V-hosted Linux kernel that's been optimized to start up very quickly, so it looks and quacks almost like a native process or the WSL1 subsystem.
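On the builds that actually have WSL2, the switch is per-distro; a rough sketch from a PowerShell prompt (assuming a distro named Ubuntu):

    wsl -l -v                      # list installed distros and which WSL version each uses
    wsl --set-version Ubuntu 2     # convert an existing distro in place
    wsl --set-default-version 2    # make newly installed distros default to WSL2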
I have mixed feelings about WSL2. The lightweight translation layer in WSL1
is something I'm going to miss, but if WSL2 can pull off the same sort of
transparency while also supporting things like dbus (making it easy to run
desktop software) and cgroups (allowing Docker to run natively) it would definitely
become a lot more useful.
Docker also requires functioning iptables.
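If you want to know ahead of time whether the kernel underneath has what dockerd needs, a couple of quick checks from inside the distro (plain Linux, nothing WSL-specific):

    # cgroup controllers the daemon relies on (cpu, memory, etc.)
    cat /proc/cgroups
    # port publishing needs iptables to be able to touch the nat table
    sudo iptables -t nat -L -n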
They're claiming that WSL2 is actually faster on filesystem operations, as long as you stay within the Linux rootfs.
Does WSL2 have its own rootfs? One thing that I like about WSL1 is that it
shares a filesystem with the host. I basically think of WSL1 as "cygwin done
right".
WSL1 shared a filesystem with the host, yes, but I think it had a lot of weird limitations. It wasn't a good idea to actually access Linux files from the host.
On the machine I'm typing on right now, I have WSL1 installed, but it's not easy for me to find the rootfs in the Windows filesystem. They moved it when they changed the distro installation mechanism, but my installation is carried over from the original release.
OK. Found it. It's a hidden directory.
The files look pretty normal, but I think you can mess up the metadata pretty easily if you do any directory modifications. File content modifications presumably OK.
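For anyone else hunting for it: as I understand it, the original beta-era installs kept the rootfs in a hidden lxss folder directly under LocalAppData, while Store-installed distros keep it under the distro's package directory (the package name pattern below is the Store Ubuntu's; other distros differ). From PowerShell:

    # legacy (pre-Store) install location, hidden by default
    Get-ChildItem -Force "$env:LOCALAPPDATA\lxss"
    # Store-installed distros
    Get-ChildItem "$env:LOCALAPPDATA\Packages\CanonicalGroupLimited.*\LocalState\rootfs"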
All I needed to do was share a home directory between my Windows and Linux
environments on the same host. It worked reasonably well. Some things were
messed up because of file permissions (like ~/.ssh for example) but it mostly
kept things in order.
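If the shared directory lives on the Windows side (a DrvFs mount), the permission weirdness can mostly be tamed with the metadata mount option, which lets chmod/chown actually stick. Roughly, in /etc/wsl.conf inside the distro, then restart WSL:

    [automount]
    enabled = true
    options = "metadata,umask=22,fmask=11"

After that, chmod 600 on the keys under ~/.ssh behaves the way OpenSSH expects.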
It just occurred to me that if WSL2 requires Hyper-V, that causes all kinds of problems. Video drivers have performance problems due to TLB flushes when running on Hyper-V. VirtualBox doesn't work on Hyper-V. Etc.
But Windows is evolving towards having Hyper-V enabled by default, basically because they're starting to use virtualization as an anti-rootkit technology.
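If Hyper-V gets force-enabled and you need VirtualBox back, the hypervisor can be parked without uninstalling the whole role; roughly, from an elevated PowerShell prompt (reboot required either way):

    # is a hypervisor currently active?
    systeminfo | findstr /i "hypervisor"
    # park it without removing the Hyper-V feature
    bcdedit /set hypervisorlaunchtype off
    # and re-enable it later
    bcdedit /set hypervisorlaunchtype auto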
Microsoft have already admitted that their Azure cloud serves up more Linux instances than Windows ones. Seems their strategy with WSL, cross-platform PowerShell, open-sourcing the Terminal etc is to try to stem the tide of developers moving from Windows to Linux, by offering them a “develop on Windows, deploy on Linux” option. I’m not sure what the point is, myself: Windows seems a horrible platform to do development on.
I used to prefer MacOS for development targeting Linux -- but that was before Docker became the norm for developer workstations.
Building Linux docker images, on MacOS or Windows, inevitably implies some form of virtualization: either VirtualBox or Hyper-V. I'm somewhat optimistic that in the long term, Windows will have the better development-environment story, because of WSL2.
And I'm not quite willing to run Linux on the bare metal. Not yet. Not until I'm really hurting for that last little bit of RAM on a laptop that I can't upgrade.
Wed Sep 04 2019 20:19:14 EDT from LoanShark @ Uncensored
I used to prefer MacOS for development targeting Linux ...
I tried running Emacs on a Mac once. The system's insistence on commandeering basic keystrokes like ctrl-space for its own purposes was so painful, it drove me to ask for a Windows machine instead.
My recent experience is similar. WSL1 was fine for Linux development, until
Docker. Then I had to switch to a virtual machine. This is, of course, on
a corporate network where they've deployed so many Windows-based tools that
it's difficult to run anything else, but I also have a Linux machine on my
desk (I'm on it now, in fact) and a KVM switch.
I could probably get away with Linux on bare metal and it might even be slightly better, and avoid VirtualBox's video sluggishness -- although lately I am noticing problems with Chrome on Linux that don't exist on Chrome for Windows, necessitating that I drop into Firefox for particular sites.
Excel is one of the last remaining barriers, but I don't do much Excel stuff, and when I do, I can just use Google Sheets instead, because my use cases are pretty lightweight.
** Aside from messing around with driver BS that I don't want to have to deal with.
It's literally been since 2004 that I last ran Linux on the bare metal on a work machine.
Thu Sep 05 2019 13:50:56 EDT from LoanShark @ Uncensored
I could probably get away with Linux on bare metal and it might even be slightly better, and avoid VirtualBox's video sluggishness -- although lately I am noticing problems with Chrome on Linux that don't exist on Chrome for Windows, necessitating that I drop into Firefox for particular sites.
Chrome for Windows has had some pretty severe problems for about a year now. Memory leaks and faulty page handling.
Yeah, I've been seeing a particular website (some gaming wiki) that always crashes if I leave its tabs open long enough. (I usually leave this site open while playing the relevant game -- might explain some sluggishness, LOL)
It's literally been since 2004 that I last ran Linux on the bare metal
on a work machine.
I really want ${EMPLOYER} to get a working VDI in place so I can just get the damn hardware off my desk entirely. I spend close to 100% of my day inside web browsers and terminal programs anyway.
And this will sound unusual coming from me, a full-penguin-blooded Linux fan, but today's Linux desktops are utter crap. They all seem to want to emulate Windows 8 and/or Mac OS, nothing is where you expect it to be, and everything is abstracted through some poofy-foofy interface when you just want to get to the damn file or program.
Heh... at work, my primary machine is Windows, and I occasionally pull up an Ubuntu shell to ssh over to a real Linux machine we have on the network to do some work.
VMs are a necessity at this point, because of some funkiness we have to support.
Better to code on Linux or Windows?
Meh. Doesn't matter to me. I've found both environments have their appeal.
But then, I tend to write code within Vim, even on Windows. Intellisense within Visual Studio is nice and all, but I seem to have gotten away from it. Maybe because when it bogs everything down, it does so in a way so heinous that it defies productivity.