[#] Mon May 17 2021 16:20:29 EDT from IGnatius T Foobar


That's correct. Mac OS "DMG" format is equivalent to an Android APK, or a Linux AppImage. It carries all of its dependencies around with it and gets soft-mounted to the filesystem during execution.

The downside of this strategy is that if two or more programs are using the same version of the same library, you still have to load it into memory twice.
We had a conversation about this a few months ago, I think. Ideally you build some sort of deduplication layer into the kernel's virtual memory manager.
That's probably coming eventually. In that kind of world you don't even need shared libraries anymore; you just statically link everything.

Microsoft solved the centralized management problem but they also created it. When personal computers were springing up like weeds all over IT shops, those shops already knew how to manage applications and users on centralized hosts.

[#] Mon May 17 2021 17:14:26 EDT from Nurb432


Sure, there were/are downsides of resource 'waste', but I'd rather have that disadvantage than DLL hell. And as much as I hate to say it (I'm not one of those normally), with modern hardware it would be less of an issue than it was in the old days.

Mon May 17 2021 04:20:29 PM EDT from IGnatius T Foobar

[#] Mon May 17 2021 22:06:25 EDT from ParanoidDelusions


I'm actually a Debian guy - and have been since Potato and Sarge. I've custom-compiled kernels - and frankly - I think it was the first *decent* distro - primarily because the apt package manager made it POSSIBLE for someone without programming-level knowledge of *nix to have a shot at getting *most* things installed. Debian pushed *nix ahead light-years - and RedHat and all the others still kinda suck by comparison. Ubuntu is even *better* for mainstream users - and that's one of the biggest reasons why the quality of users in Debian has gone down to where you encounter Linux users who seem like they would be more at home in Windows when you read their questions. I even find *myself* going, "RTFM, noob!" sometimes, now. 

But the biggest obstacle to Linux actually mattering outside of circles like this - is Linux itself. Although I suspect circles like this really kind of prefer it that way. I think they like to "know" that Linux is better, but that MOST of the world is too dumb to realize it. 

So they're generally not in too big of a hurry to *actually* do the things necessary to make Linux viable. 

Apple was willing to do the things necessary to make *nix viable. Apple *wanted* the least technology literate users possible in their camp. 

Linux wants the propeller-heads. And so Linux does some very dumb things for very dumb reasons, as a community. 

Mon May 17 2021 11:53:55 EDT from Nurb432

And with softies too :) 

/me ducks

 

Kidding aside, oftentimes "better" is relative, and even if you are on top of the stack, there is still room for improvement.  I doubt any fanboy feels that things *cant* be improved, just that it may not be worth the effort for such a small change.

And of course since i'm really in the Daemon camp by nature, i'm not "rooting" for penguin, even tho i think its a better choice for most people than what Microsoft offers. ( and has been for a long time )  Us *BSD people do accept our system's limitations, and work to overcome them. 

 

Mon May 17 2021 11:12:19 AM EDT from ParanoidDelusions


So it goes with Linux devotees. 

[#] Tue May 18 2021 08:43:28 EDT from Nurb432


Kids these days and their fancy distributions. :) I remember talking to Linus on Usenet back before Linux would even self-host and required Minix to compile. :)

And the wonderful boot/root floppies from that time period, where you hex-edited the boot sector to get it to run on YOUR hardware. 

 

Once we got to the point of actual distributions, I was with SLS ("Softlanding Linux System", if I remember right) until something called Yggdrasil came along. It was actually commercial (which I bought), and had a graphical installer AND set up X for you, which was a royal pain in the ass back then. Back then X was so much trouble that I actually (with some help, as I was not a C expert; I knew enough to be dangerous) ported MGR to Linux for my own use. I used that with VSTa and liked it a lot. Of course this was pre-WWW, or at least the WWW didn't matter yet. 

But something happened, a critical point was reached, and it all just 'exploded' and took off exponentially. 

Once things calmed down a bit during the heyday of the 'wild west' I ended up with Debian too, as it was the most like *BSD. I used to go in to the local college at 2 am with a box of floppies to get the latest versions. Trying to do it at home on 14.4k dial-up via SLIP on Kermit would take days and days. But the computer lab was open 24/7, and I had a friend that went to school there, so we had an ID if we were ever stopped (we never were). And free parking at night in the garages :) 

 

Well, that brought back some memories. 



[#] Tue May 18 2021 10:19:25 EDT from ParanoidDelusions


Yeah, I attempted a very early release of Slackware on CD... and went, "Well, this has a ways to go..." 

Things have certainly improved. 

 

Tue May 18 2021 08:43:28 EDT from Nurb432


[#] Tue May 18 2021 11:25:34 EDT from IGnatius T Foobar


Blah blah blah. Failure to get Linux running properly on a computer is a failure of the person, maybe a failure of the computer, never a failure of Linux.

[#] Tue May 18 2021 13:06:28 EDT from Nurb432


lol 



[#] Tue May 18 2021 14:00:43 EDT from ParanoidDelusions



Well, it really isn't fair to blame Linux. 



Tue May 18 2021 11:25:34 EDT from IGnatius T Foobar

[#] Wed May 19 2021 04:00:13 EDT from darknetuser


2021-05-17 08:31 from Nurb432
Sort of surprised they wont at least accept ODF.  its universal too.


Some do, but most will shove the file up your ass if you send them an ODF.

[#] Wed May 19 2021 04:03:09 EDT from darknetuser



And of course since i'm really in the Daemon camp by nature, i'm not "rooting" for penguin, even tho i think its a better choice for most people than what Microsoft offers. ( and has been for a long time ) Us *BSD people do accept our system's limitations, and work to overcome them.


Well fucking said.

[#] Wed May 19 2021 04:09:12 EDT from darknetuser


2021-05-17 16:11 from Nurb432
The one thing that apple did early on that i did like, was they did not use shared libraries for apps. They were encapsulated into the folder.

That is a complete can of worms. Since you have opened it, let me have fun with them.

I have recently been working with a program that is distributed bundled with its own set of libraries. Ten minutes of vulnerability scanning showed that the program has a potential vulnerability related to the libraries that come bundled with it.

In the regular BSD/Linux universe, you would run $update_pkg $library (or roll your own) and be done with it. When the library comes bundled with an AppImage or a static tarball or whatever, you have to rebuild it yourself. That is doable, but then it is not convenient anymore, and static programs are distributed this way precisely because they are supposed to be convenient.


So I think both models have their place, but I don't really buy the model of serving programs as a package that includes all their dependencies for general use.

[#] Wed May 19 2021 04:10:50 EDT from darknetuser


The downside of this strategy is that if two or more programs are using the same version of the same library, you still have to load it into memory twice.
We had a conversation about this a few months ago, I think. Ideally you build some sort of deduplication layer into the kernel's virtual memory manager.

In fact I have heard word of this, but I don't know what state of development this sort of deduplication is in.

[#] Wed May 19 2021 07:18:25 EDT from Nurb432


I remember something about generic dedup of RAM at one point on servers, so it would cover most anything 'running'. Unless it was vaporware.

And dedup for files does exist. (So one concern I had in the past is gone, between obscenely cheap space and dedup.)

Wed May 19 2021 04:10:50 AM EDT from darknetuser

[#] Wed May 19 2021 07:18:57 EDT from Nurb432


Everything is a game of tradeoffs.  But i think the tradeoffs are worth it.

Wed May 19 2021 04:09:12 AM EDT from darknetuser

[#] Wed May 19 2021 11:44:45 EDT from IGnatius T Foobar


I remember something about generic dedup of ram at one point on servers, so it would cover most anything 'running'. Unless it was vaporware

That's about right. It's called "Kernel Samepage Merging" [KSM] and landed in the kernel in 2.6.32.

[ https://www.kernel.org/doc/html/latest/admin-guide/mm/ksm.html ]

It only operates on regions of memory which have been registered by a userspace program to qualify for deduplication. This can make sense, for example, when a hypervisor such as KVM knows it is running multiple instances of the same operating system image; it can then activate KSM on those memory regions and the kernel will dedupe the identical pages.
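For the curious, the userspace opt-in is just a madvise() call on an anonymous region. A minimal sketch in Python (whose mmap module exposes madvise() from 3.8 on); MADV_MERGEABLE only exists on Linux kernels built with CONFIG_KSM, so the sketch hedges for its absence:

```python
import mmap

PAGE = mmap.PAGESIZE
NPAGES = 64

# Anonymous private mapping -- the kind of memory KSM will merge.
buf = mmap.mmap(-1, NPAGES * PAGE)
buf.write(b"\x42" * (NPAGES * PAGE))  # identical pages: prime dedup candidates

# Register the region with KSM; the ksmd kernel thread then scans it
# and merges identical pages transparently (copy-on-write).
flag = getattr(mmap, "MADV_MERGEABLE", None)
if flag is not None:
    try:
        buf.madvise(flag)
        print("region registered for KSM")
    except OSError:
        print("madvise failed (kernel likely built without CONFIG_KSM)")
else:
    print("MADV_MERGEABLE not available on this platform")

# Merging is invisible to us: reads still see our data.
buf.seek(0)
assert buf.read(PAGE) == b"\x42" * PAGE
buf.close()
```

Registering is only half of it: ksmd actually scans only while /sys/kernel/mm/ksm/run is 1, and the pages_shared and pages_sharing files in that directory report how much is being saved.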

There is a project called "Ultra Kernel Samepage Merging" [UKSM] which eliminates the requirement for userspace processes to cooperatively submit memory regions for deduplication.

[ https://github.com/dolohow/uksm ]

This looks interesting but it has not been accepted into the mainline kernel.
If something like this were to be come ubiquitous, one could run many different static-linked programs that use the same libraries, and the kernel would dedupe out the memory used by the libraries. When building the static-linked programs, the linker would need to make sure the libraries are aligned on page boundaries so they dedupe properly. It might already do that.

Something like this is probably inevitable. Mobile operating systems already use self-contained applications by default, and such a style of deployment is at least available on general-purpose operating systems -- DMG on MacOS, "Modern" runtime on Windows, and AppImage on Linux. Optimizing the OS to run these would seem to be the next logical step.

[#] Wed May 19 2021 11:47:54 EDT from Nurb432


Dedup across kernel-level VMs... not sure I'm fond of that from a security standpoint.

 

Id have to think about that some.



[#] Fri May 21 2021 21:21:45 EDT from LoanShark



I would guess that a variety of people have independently experimented with unhinted system-level page dedup, and that it didn't pan out.

There are obviously a variety of problems. You have to scan all system memory (physical and virtual) at some defined frequency. This can't be too frequent or it brings the system to its knees. But if it isn't frequent enough, you miss duplicates.

But you can't scan for dups in an atomically locked manner, because that's insane. So even if you find a dup, it could be gone by the time you commit the dedup transaction, and even the act of implementing the capability to have a "dedup transaction" could slow the whole system down.
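The scan pass being described can be sketched in miniature. This is a toy model (memory as a list of page-sized byte strings, nothing like real VM code), but it shows the shape of the problem: every pass touches every page, so the scan interval directly trades CPU burned against duplicates missed:

```python
import hashlib
from collections import defaultdict

def find_merge_candidates(pages):
    """One scan pass: hash every page and group page indices by digest.

    Cost is O(total memory) per pass -- scan too often and you burn CPU,
    too rarely and short-lived duplicates are never seen.
    """
    buckets = defaultdict(list)
    for i, page in enumerate(pages):
        buckets[hashlib.sha256(page).digest()].append(i)
    # Only buckets with more than one page are merge candidates.
    return {h: idx for h, idx in buckets.items() if len(idx) > 1}

pages = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"C" * 4096]
dups = find_merge_candidates(pages)
print(sorted(dups.values()))   # [[0, 2]] -- pages 0 and 2 are identical
```

And this only finds *candidates*: by the time anything acts on the result, any of those pages may already have been rewritten, which is exactly the race described above.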

[#] Sat May 22 2021 19:54:45 EDT from IGnatius T Foobar


Deduplicated storage handles it by calculating a hash of each block as it is written to the cache. Then when it's time to commit the block, it compares the hash to an existing table to determine whether it can be deduped or if it has to be written new. I guess that's sort of what you mean by atomically locked. So I guess it's time for all computers to be built with content-addressable memory now.
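That hash-on-write scheme is easy to sketch. Here's a toy content-addressable block store (the class and method names are made up for illustration): each incoming block is hashed, and a digest already present in the table turns the write into a reference-count bump:

```python
import hashlib

class BlockStore:
    """Toy content-addressable block store with write-time dedup."""

    def __init__(self):
        self.blocks = {}   # digest -> block bytes
        self.refs = {}     # digest -> reference count

    def write(self, block):
        """Return the block's digest; store the bytes only if new."""
        h = hashlib.sha256(block).digest()
        if h in self.blocks:
            self.refs[h] += 1          # dedup hit: just bump the refcount
        else:
            self.blocks[h] = block     # new content: store it
            self.refs[h] = 1
        return h

    def read(self, digest):
        return self.blocks[digest]

store = BlockStore()
a = store.write(b"\x00" * 4096)
b = store.write(b"\x00" * 4096)   # identical block: deduplicated
c = store.write(b"\xff" * 4096)
print(len(store.blocks))  # 2 unique blocks stored for 3 writes
```

A real implementation also has to byte-compare on a digest hit (hashes can collide) and garbage-collect blocks whose refcount drops to zero; this sketch skips both.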

[#] Mon May 24 2021 17:02:09 EDT from LoanShark



I guess that's sort of what you mean by atomically locked.

I was thinking more about RAM than about storage. Unlike storage, RAM is constantly changing at less predictable times. What I mean is that you don't need to perform an explicit kernel call to change RAM; you just change RAM.

So there's a race condition between computing a hash of a page, identifying another page with the same hash, and merging them. Unless one were to set a read-only bit on each page prior to computing the hash -- a page fault handler would then have to catch the write attempt and invalidate any hashes before the next dedup interval. This is at least a little bit slower than not bothering with any of this.
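As I understand it, that is roughly how KSM resolves the race: the candidate page is write-protected, then byte-compared (not merely hash-compared) immediately before the merge commits, so an intervening write either faults or invalidates the candidate and the merge aborts. A toy model, with a dirty flag standing in for the page-fault handler:

```python
class Page:
    def __init__(self, data):
        self.data = data
        self.dirty = False   # our stand-in for a write-fault notification

    def write(self, data):
        self.data = data
        self.dirty = True    # a real kernel learns this via a page fault

def try_merge(a, b):
    """Commit the merge only if neither page changed since it was hashed
    and the bytes still genuinely match (hashes alone can collide)."""
    if a.dirty or b.dirty:
        return False         # raced with a write: abort, retry next interval
    if a.data != b.data:
        return False         # stale candidate or hash collision
    b.data = a.data          # share one copy (copy-on-write in a real VM)
    return True

p, q = Page(b"x" * 4096), Page(b"x" * 4096)
assert try_merge(p, q)       # no writes since the scan: merge commits

r, s = Page(b"y" * 4096), Page(b"y" * 4096)
s.write(b"z" * 4096)         # write lands between scan and commit
assert not try_merge(r, s)   # merge correctly aborts
```

The slowdown mentioned above is real: every merged page eats an extra write fault the first time it is modified again, which is the price of keeping the commit safe.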
