(And it still wouldn't solve our general problem of needing to forward that information to the same place for all the machines in the virtual environment).
I tried emacs for a couple of years or so in college.
Of all of the harmful things and self-abusive lifestyles one can potentially experiment with during the college years ... you picked a really, really, really bad one.
I hope it didn't permanently damage you.
Unlikely.
I far prefer vi(m). The momentary flit with That Other Editor is well behind me.
I view it as the Perl of editors.
Wed Apr 04 2018 20:11:24 EDT from zooer @ Uncensored
pico/nano....
Same here. I prefer nano. I can use vi/vim, and do quite often. I just prefer the simplicity of nano.
I don't find nano very simple.
But then, I've been using vim a long time, and have built up a familiarity with it that lets me make it work for me.
I haven't done the same with nano. For all I know, it's as functional as vim.
I have had many occasions where I needed to make massive edits in my file, and found vim to work very nicely for that.
Back in the days of DOS I used/purchased a shareware editor called QEdit by SemWare.
I used nano because that is what my ex-boss used. At one point we were running a custom Linux and that was the only editor. I don't know if nano was better or worse than any other editor but I was familiar with nano.
I'm genuinely curious...
With nano, can you record the execution of a series of discrete edits (e.g. copy from this part of the line, paste to this part of the line, move left 2 characters and insert an 'e', that kind of stuff), into a sort of macro that you can then execute repeatedly x number of times?
I keep finding myself having a need for that style of editing, where I might have 'weird edits' like the above in 20 lines or so.
I think I just answered my question.
The answer is 'no'.
While nano does have syntax highlighting, which is nice, and it has some nice cut-n-paste things that it can do, it isn't as robust as vim. Not by a long shot.
You can't highlight a block of text and auto-indent it, like you can with vim. It doesn't even really do smart indenting at all, just a kind of 'indent like the last line' style of indenting, which is fine, but not as nice as something that understands the language you're writing in.
And that's fine, but I really do find myself using these extra features in vim.
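For what it's worth, the "record a series of weird edits and replay them" workflow from a few posts up is exactly what vim's recorded macros cover. A rough sketch (the register name and count are arbitrary):

  qa     - start recording into register a
  ...make the edits on one line, ending with j to drop to the next line...
  q      - stop recording
  20@a   - replay the recording 20 times, once per line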
https://www.phoronix.com/scan.php?page=news_item&px=GNU-Nano-2.9-Released
First up, GNU Nano 2.9 has the ability to record and replay keystrokes within the text editor. M-: is used to start/stop the keystroke recording session while M-; is used to playback the macro / recorded keystrokes.
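So the equivalent workflow in nano 2.9+ would presumably be something like: M-: to start recording, make the edits on one line, M-: again to stop, then M-; once for each additional line you want to fix up (as far as I can tell there's no repeat-count prefix the way vim has).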
I think Docker would at least partially solve a different problem that we have, where we kinda want multiple instances of our server on a single machine.
Thing is, we'd need to build something into the host machine to help manage the multiple servers... something that knows about the multiple nature of itself, and translates between clients accessing it, and the various servers, to intelligently assemble or disassemble messages appropriately.
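Just to sketch the multiple-instances part (the image name and ports here are made up), each instance would get its own container and its own published port:

  docker run -d --name server1 -p 5001:504 ourserver:latest
  docker run -d --name server2 -p 5002:504 ourserver:latest

and then the piece you're describing would sit in front of 5001/5002 and do the client-facing multiplexing.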
But, we haven't started down that road yet. There's still time to consider alternatives, heh.
On another topic, I recently learned about the rpath linker option for GNU compilers.
Apparently, closed-source shops that want to ship binaries for Linux use it to link against the specific library versions they want, bundle those libraries as part of the product's packaging, and thereby always link against a known environment.
The rest involves compiling on the oldest Linux you want to support to ensure that most kernels will support your binary.
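If it helps, the shape of it (library and path names here are just placeholders) is to pass -rpath through to the linker and point it at a directory you ship alongside the binary:

  gcc -o myapp main.o -L./bundled/lib -lfoo -Wl,-rpath,'$ORIGIN/lib'

$ORIGIN expands at run time to the directory containing the binary, so the bundled copies under ./lib get found before anything the system has installed in the default locations.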
Does anyone here have experience with that? Because if I can reduce my build machines to just one or two machines for Linux, I'd rather like that.
The question is: are you using -rpath for a shlib that's part of a standard baseline Linux, or for something that can't be assumed to exist (in a compatible version)?
You need to have good knowledge of your shared library dependencies/ecosystem. Otherwise this question is not phrased in the form of a question...
Anyway, think twice about supporting more than one or two recent Linux distros, and perhaps think about shipping a virtual appliance (either a Docker image or a traditional VM) to manage the dependencies.
A customer with the resources to install and support Linux, probably also has the resources to install and support a *recent* Linux.
Trying not to reveal too much about what we do, but:
1. We *must* work on a 10-year-old distribution of Linux, because of the ways this software will be used.
2. At least one of the things we do relies on techniques that won't work very well if they have to go through a Docker layer. That said, I'd like to accomplish the same thing another way, by taking advantage of the fact that Linux is open source: we can alter the original sources to do what we need, as long as we give those changes back to the community (who may keep them or throw them away; it won't matter to us).
At the end of the day:
* I want to reduce the number of build machines we must have to build our software.
* I need this to work on enough distributions of Linux to cover at least 80% of use cases (hence Debian & Red Hat).
* I need this to work on at least one 10-year-old distribution of Linux.
* The package I create should not have any dependencies... it should be capable of standing alone.
If only we were open source... this would be easier. We'd just distribute the sources and be done with it. But we're not... we have to distribute binaries.
So it's painful.
Definitely learning a lot about Linux, though, as a consequence of these requirements.
If I could address #2, I might be able to do this in a docker instance. I think. Maybe. It'd be weird, I think, but it might be possible. But running in a docker instance probably won't address #1.
Does anyone here have experience with that? Because if I can reduce my build machines to just one or two machines for Linux, I'd rather like that.
We use rpath in the Easy Install script for Citadel.
All of the libraries that we compile are installed in a dedicated support directory, and we build our binaries with the rpath option to force them to prefer their own libraries over anything else that might already be installed on the system. It probably wouldn't be too difficult to take it a step further, include almost *all* libraries instead of just the specialty ones, and ship binaries.
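If you want to sanity-check what a binary built that way will actually load (the binary name here is just an example):

  readelf -d ./citserver | grep -i -E 'rpath|runpath'
  ldd ./citserver

The first shows the RPATH/RUNPATH embedded at link time, and the second shows which copies of the shared libraries the loader really resolves.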
At some point, though, you have to start wondering whether it would just be easier to static link everything.
Static linking has problems, too.
One big problem involves working with ODBC, which I need to do for one application.
On Linux, you have to dynamically link your binaries for ODBC support, since the whole point is that a third-party vendor supplies a shared object implementing the driver for their database in terms of the ODBC API.
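In practice that usually means linking the application dynamically against the unixODBC driver manager and letting it load the vendor's driver .so at run time (as configured in odbcinst.ini), while anything else you want pinned can still go through rpath. Very roughly (names are placeholders):

  gcc -o reportgen report.o -lodbc -Wl,-rpath,'$ORIGIN/lib'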
Plus, you find yourself having to build for several distributions of Linux when you statically link, to ensure that the binaries work with the target kernel.