Agreed. There have been some genuine improvements that have made Linux a much more accessible platform.
And then there were changes made that seemed to be just for the sake of not doing it the same old way anymore.
Mon Aug 02 2021 15:54:34 EDT from Nurb432
And really, I'm not opposed to change or improvement. It's just that the common thread of "let's reinvent the wheel" mentality you see a lot in the UNIX world irritates me, even when it's not warranted and a simple improvement would have been fine. Most often you lose things in that reinvention, either due to agenda, or to inexperience of why it worked that way. Just because it's shiny and new does not make it better.
Sometimes just painting the wheel and some new lug nuts is all you needed..
There's also some mention of the accelerated drivers for GPU virtualization. If they really wanted GPU acceleration to run at native or near-native speed, they couldn't be routing it through the RDP compositor, so I'm unclear to what extent those beta accelerated WSL GPU drivers (now available from Intel, AMD, and Nvidia) really speed up desktop OpenGL use cases rather than merely CUDA.
I read it too, and it seemed like they were using RDP as a proxy between the Linux and Windows side of the compositor, but over a special channel that allows shared memory instead of just serializing everything through the main connection.
Ironically, this is *exactly* how X11 clients access the compositor :)
So, Linux apps can do GPU-accelerated rendering into an offscreen GEM buffer which is then forwarded to the Windows side over RDP?
That *might* be good enough, but the idea of having RDP in the way is the part that sounds potentially slow to me.
The Armbian site has been down for hours.. I hope this is not a sign they shut the project down! Tried to do an update on one of my Pinebook Pros.. got errors.. tried the website to see if they had some scheduled downtime. Gone...
arrrgh.
Subject: Re: Panic In Detroit.. errr no ARM Land...
So in terms of my "32 or 64 bit ARM" dilemma from a few weeks back, I think my next experiment will be to build Docker containers in 32-bit and then see if they will deploy on 64-bit. Since a Docker container shares nothing with its host except the kernel, that ought to work? Am I wrong?
I have done it before. I think if you use Compose instead of just grabbing an image, it complains about the architecture, but I have downloaded pre-built images that didn't seem to care.
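For what it's worth, you can make the architecture explicit rather than relying on the default pull. A minimal Compose sketch, assuming an image published for 32-bit ARM exists for what you're running (the service and image names here are placeholders):

```yaml
# compose.yaml, illustrative only; service and image names are placeholders
services:
  web:
    image: arm32v7/debian:bullseye   # an image published for 32-bit ARM
    platform: linux/arm/v7           # pin the armhf variant even on an arm64 host
    command: uname -m                # should report armv7l under a 64-bit kernel
```

The `platform:` key pins which variant gets pulled, which also quiets the architecture-mismatch complaint. It runs without emulation because a 64-bit ARM kernel can execute 32-bit ARM user space directly, which is why pre-built images "didn't seem to care."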
A thought, too: what about, instead of packaging actual binaries, having it automatically run the easy-install script on first start? Sure, it takes a little longer, but it would be 100% current that way, and less 'support' to make it work? (Might be a bad idea, just a passing thought.)
Installed docker on one of my RK3399s, for when you are ready for people to test :)
I like Linux and BSD, but I'll admit it can be a learning curve. With Linux you spend a lot of time trying to learn about and control it, with Windows it spends a lot of time trying to learn about and control you :)
Seems it was released this weekend; I missed the announcement somehow. I was upgrading an old ARM box to use as a temp VPN server at a friend's office and saw "buster.. bla bla.. changed to oldstable".
Went to Debian to check, and yup, it's released.
Subject: Docker Engine vs. single-node Kubernetes
I've set a goal of becoming the SME on everything container-related at my workplace.
As we start building out container infrastructure, there's something I'm wondering about. What are the benefits of running a single-node Kubernetes cluster? That's not how it would be in production, of course, but is there any benefit to running one-node Kubernetes over, say, just running Docker Engine?
Assuming this is your sandbox, I would still stick with what is there in production, just smaller, to be consistent.
You may not 'need' it, but it helps with building muscle memory.
Subject: Re: Docker Engine vs. single-node Kubernetes
I'm wondering about. What are the benefits of running a single node
Kubernetes cluster? That's not how it would be in production, of course...
That's useful as a local-dev solution for developer laptops. Otherwise, I wouldn't do it that way, because I think you'll have to throw a lot of it away when you move to something bigger.
Subject: Re: Docker Engine vs. single-node Kubernetes
Some of the network options with just regular Docker are pretty neat. I've got an 'ipvlan' type network that just bridges out to the server VLAN on my public segment, so that's where I can run my Internet-facing stuff.
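In Compose form, an ipvlan network like that is only a few lines. A sketch with made-up values; the parent interface, subnet, and gateway are placeholders for whatever your server VLAN actually uses:

```yaml
# compose.yaml fragment, illustrative; replace parent/subnet/gateway with real values
networks:
  pubvlan:
    driver: ipvlan
    driver_opts:
      ipvlan_mode: l2          # bridge containers straight onto the parent segment
      parent: eth0             # host NIC, or a VLAN subinterface like eth0.30
    ipam:
      config:
        - subnet: 203.0.113.0/24
          gateway: 203.0.113.1
```

Containers attached to this network get addresses on the VLAN itself, so they're directly reachable from that segment, though not from the host's own parent interface, which is a known ipvlan quirk.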
Doing it all by hand, or with something like Portainer?
Subject: Re: Docker Engine vs. single-node Kubernetes
Ubuntu has something called MicroK8s, which I haven't looked at, but which I think is basically a turnkey single-node k8s environment.
Well, this is neat: x86 emulation for ARM64. It will only run Linux binaries, so it's not a 'true' emulator, and it uses native ARM interfaces/libraries to improve speed (so I guess it's sort of what Wine is for Windows binaries). Not sure why I never noticed this before. Could be beneficial to a lot of people.
https://github.com/ptitSeb/box64
So...
ARM is inherently less powerful at the top end than Intel architecture. It is superior in MIPS per watt; it is literally considered "low-powered computing," and that is what makes it ideal for mobile devices.
I mean, you can make a 6-cylinder engine faster than a larger, more powerful V8.
So, this is the ability to emulate a more powerful architecture on a less powerful architecture?
And only Linux stuff, that we can assume, is already supported natively by the various flavors of Linux you can get for ARM already?
I'm not sure I get it, other than to show it can be done?
Mon Sep 06 2021 11:35:22 EDT from Nurb432
Well this is neat. x86 emulation for ARM64. It only will run linux binaries so its not a 'true' emulator, and uses native ARM interfaces/libraries to improve speed. ( so i guess sort of like wine is for windows binaries ) Not sure why i never noticed this before. Could be beneficial to a lot of people.
https://github.com/ptitSeb/box64
Agreed that in the past it was all about low power consumption and "just enough horsepower to get the job done," but those days are gone, and that's not the only reason for ARM anymore. Today's higher-end ARM is competitive. Most people this would target are using mid-range ARM now, which can compete against lower-end x86.
The point of this is that some things (I guess mostly games, which, while I think they're silly, I guess I'm going to benefit from that market) are still targeted at x86, so this gives those of us ahead of the platform curve the ability to run them until they are migrated over natively (it will happen, and for better or worse Apple will drive that migration). I also think that since they are only supporting Linux x86, plus some tricks, it's not going to be the slow experience a true emulator is.
I'm working on getting it built on an RK3399 now, since that's directly supported, and finding something I can compare apples-to-apples on. Then I'll try it on one of my Jetsons, which eat RK's for breakfast. Perhaps a Nano next, since it's also directly supported, then the NX.
ARM is genuinely competitive now, especially in terms of processing power per watt, which makes it attractive for massively parallel deployments. And we're not talking about data centers full of Raspberry Pi clusters here (as much as Jeff Geerling would love to see that).
Obviously a data center that needs to handle both kinds of workloads is simply going to deploy both kinds of servers. Emulation is generally for individual devices that need to run some software from the existing catalog.
There is also some opportunity to use it for cross-compiling. There are two ways to do this. The conventional way is to run a compiler that builds code for an architecture other than the one it's running on. This has worked OK for a long time, but you cannot *test* the resulting binaries without bringing them over to the target platform. The "new" way, which I am now doing for multi-arch containers, is to run the entire build environment under emulation: for example, an instance of QEMU running on AMD64, emulating ARM, with the native ARM compiler and linker. You get to test everything instantly.
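A sketch of that emulated-build idea with Docker: assuming the QEMU binfmt handlers are registered on the x86 host (e.g. via the `tonistiigi/binfmt` image), a completely ordinary Dockerfile can compile *and run* its test binary for ARM during the build, because every RUN line executes under emulation when you pass `--platform linux/arm64` to `docker buildx build`. The file names below are illustrative:

```dockerfile
# Illustrative Dockerfile; build with: docker buildx build --platform linux/arm64 .
FROM debian:bullseye
RUN apt-get update && apt-get install -y --no-install-recommends gcc libc6-dev
WORKDIR /src
COPY hello.c .
# The native ARM gcc compiles the code, and the resulting ARM binary runs
# immediately; QEMU user-mode emulation makes both steps work on an AMD64 host.
RUN gcc -o hello hello.c && ./hello
```

The price is build speed: every compiler invocation is itself emulated, which is exactly the "computationally expensive" part that dynamic translators try to avoid.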
Full emulation, however, is computationally expensive. Box64 and programs like it instead translate code on the fly, so you don't need to emulate an *entire* machine. A big part of why this works is that AMD64 and ARM are both little-endian: instructions can be translated as you go without reshuffling data in memory, which is a big win.
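That shared byte order means the *data* a translated program touches never needs byte-swapping; only the instruction encodings differ. A quick Python illustration of what little-endian layout looks like (nothing box64-specific here):

```python
import struct
import sys

# On a little-endian machine, x86-64 and AArch64 alike, the least
# significant byte of a word comes first in memory.
value = 0x12345678
packed = struct.pack("<I", value)   # force little-endian layout explicitly
print(packed.hex())                 # 78563412: low byte 0x78 is stored first

# Unpacking the same bytes recovers the original value, so a translator
# can leave memory contents untouched and rewrite only the instructions.
assert struct.unpack("<I", packed)[0] == value
print(sys.byteorder)                # reports 'little' on both architectures
```

If one side were big-endian, every load and store would need an extra byte-swap, which is part of why x86-on-ARM translation is so much more tractable than, say, x86-on-classic-PowerPC.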