Does anyone here have experience with that? Because if I can reduce
my build machines to just one or two machines for Linux, I'd rather do that.
We use rpath in the Easy Install script for Citadel.
All of the libraries that we compile are installed in a dedicated support directory, and we build our binaries with the rpath option to force them to prefer their own libraries over anything else that might already be installed on the system. It probably wouldn't be too difficult to take it a step further, include almost *all* libraries instead of just the specialty ones, and ship binaries.
At some point, though, you have to start wondering whether it would just be easier to static link everything.
Static linking has problems, too.
One big problem involves working with ODBC, which I need to do for one application.
In Linux, you must dynamically link your binaries for ODBC support, since the point of it involves a 3rd party vendor providing a shared object that provides the driver for the database in terms of ODBC.
Plus, you find yourself having to build for several distributions of Linux when you statically link, to ensure that the binaries work with the target kernel.
If the glibc you're linking against is sufficiently old, you ought to be able to compile once and run on any subsequent compatible kernel.
Of course it does depend on whether you're using any of the more poorly-isolated shared libraries as a dependency.
I plan to build the compiler on the target machine, just so I have a consistent language to use (GNU C/C++ 5.4.0). I can copy its libs into the rpathed folder.
But, then there's glibc, which isn't compiled with the compiler. And pthreads?
I have other libraries that I plan to compile anyway, just to ensure that we're working with something consistent (for example, openssl), so I'm hoping to contain some of the scariness to just a few things.
Microsoft announced on Monday a new technology called Azure Sphere, a new system for securing the tiny processors that power smart appliances, connected toys, and other gadgets.
...here's the really notable part: To power Azure Sphere, Microsoft has developed its own, custom version of Linux — the free, open source operating system that Microsoft once considered the single biggest threat to the supremacy of its Windows software.
(To bypass the ad blocker blocker on Business Insider, hit Ctrl-A Ctrl-C quickly before the modal comes up, then paste the article into a word processor to read it.)
So let me get this straight. There's supposedly a security problem caused by all of the "IoT" connected devices, toys, etc. out there, and the solution is to connect them all to Azure and give Microsoft control of them all?
I mean, yeah, kudos to Micro$oft for being non-NIH enough to realize that 'doze is too bloated for this application and going with a Linux kernel, but still ... tethering all IoT devices to Azure "for security purposes" sounds like a cure that's worse than the disease.
We had a few chuckles about the idea at work, too.
It's a ridiculously bold statement. I don't know how they can credibly back it.
It maintains a pinned-up connection to the mothership and it is bolted to the user's skull so that it cannot be removed.
Hours spent trying to understand how the rpmbuild installed on Debian Lenny differs from the one installed on Debian Stretch (or CentOS 6.x).
For my situation, it wants a BuildRoot directive perched at the top of the spec file, pointing at my binaries.
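For reference, the shape the older rpmbuild insists on looks roughly like this (a sketch with made-up names; older RPM required the BuildRoot tag, while newer versions simply ignore it):

```
Name:           myapp
Version:        1.0
Release:        1
Summary:        Example binary package
License:        Proprietary
BuildRoot:      %{_tmppath}/%{name}-%{version}-buildroot

%description
Prebuilt binaries, with libraries rpath'd to a private directory.

%install
mkdir -p %{buildroot}/opt/myapp/bin
install -m 0755 myapp %{buildroot}/opt/myapp/bin/

%files
/opt/myapp/bin/myapp
```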
This is the Linux experience when you can't stay on top of the latest thing and you're mired in old OS distributions.
(It isn't much better in Windows, though).
Well, it works.
Developing on an ancient Linux distribution, and building your packages there, results in a product that runs on both old and new distributions, because the old glibc you link against stays forward-compatible.
Particularly if you also ship your own libraries and use rpath to point your binaries at them.
Why the fuck didn't the guy who we hired as a linux developer do this from the beginning?
(Answer: because he primarily worked on kernel development, not application development, and didn't seem to know how to figure something like this out... or perhaps he just didn't want to deal with setups and builds in Linux).
Next fun trick: hacking on bash to do what some would regard as 'nefarious' things, and see if the GNU folks gasp in horror when I submit the patch to them. Mua ha ha ha ha.
Should the Linux operating system be called "Linux" or "GNU/Linux"? These days, asking that question might get as many blank stares returned as asking, "Is it live or is it Memorex?"
Some may remember that the Linux naming convention was a controversy that raged from the late 1990s until about the end of the first decade of the 21st century. Back then, if you called it "Linux", the GNU/Linux crowd was sure to start a flame war with accusations that the GNU Project wasn't being given due credit for its contribution to the OS. And if you called it "GNU/Linux", accusations were made about political correctness, although operating systems are pretty much apolitical by nature as far as I can tell.
Has anyone played with WireGuard yet? It is supposedly the up-and-coming VPN technology because it bypasses all the hundreds of thousands of lines of code used in things like IPsec and OpenVPN; instead it just creates an interface on each end and you route through it.
[ https://www.wireguard.com ]
I haven't tried it yet but it looks pretty cool. The developer has made a request for the drivers to be included in the mainline Linux kernel. It would be interesting if that happens.
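From the docs, a peer config looks about like this (keys and addresses are placeholders, not working values):

```
[Interface]
PrivateKey = <private key from `wg genkey`>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <peer's public key>
AllowedIPs = 10.0.0.2/32
Endpoint = peer.example.com:51820
```

Then `wg-quick up wg0` brings the interface up and routes the AllowedIPs ranges through it. That's the whole configuration surface, which is the appeal.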
check out pritunl. decent two-factor auth support, and portable
For a while there were stories titled "X things you didn't know about ______", and if you looked at the story you found you knew most of them. This headline is different: it doesn't claim that people don't know these facts.
27 Interesting Facts about Linux
If you run Linux machines in a VMware environment, like many of us do, you are constantly annoyed by the way the VMware console leaves partial garbage on the virtual console when it blanks, instead of showing a blank screen.
And besides, what business does a virtual machine have blanking the console anyway? Screen savers belong on physical screens only.
To make this problem go poof:
1. Edit /etc/default/grub, /boot/grub2/grub.cfg, and/or wherever your kernel boot parameters are set.
2. Remove useless directives like "rhgb" and "quiet"
3. Replace them with "consoleblank=0"
et voila ... your virtual machine's console stays lit all the time now.
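The edit in steps 1-3 boils down to something like this (sketched against a sample file here; run it on the real /etc/default/grub as root, and regenerate grub.cfg afterward):

```shell
# Demonstrated on a copy; substitute the real /etc/default/grub when ready.
printf 'GRUB_CMDLINE_LINUX="rhgb quiet"\n' > grub.sample
sed -i 's/rhgb quiet/consoleblank=0/' grub.sample
cat grub.sample   # GRUB_CMDLINE_LINUX="consoleblank=0"
# Then regenerate, e.g.: grub2-mkconfig -o /boot/grub2/grub.cfg
# (the output path varies by distro and firmware type)
```

To stop the blanking immediately without a reboot, `setterm -blank 0` on the virtual console does the same thing for the current session (newer util-linux spells it `--blank 0`).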
(This assumes that you are not running a GUI on your server console. People who do that tend to be Oracle DBAs and are not capable of learning ProTips anyway.)