Try OnlyOffice... it seems to have dependencies on other docker apps, which seems somehow kind of fucked up.
You might expect binaries to have dependencies, but there's something ... wrong ... about docker dependencies.
Bingo.
Now you've got a ship-shipping ship shipping shipping ships, which can't ship ships without... a tugboat.
Connecting to SSH over a slow tethered 3G connection makes me remember how
awesome it is to have things like TCP's Nagle algorithm, and software like
curses/terminfo that was designed to run efficiently over slow links.
vi is eminently usable, redrawing only the portions of the screen that need refreshing. emacs is completely unusable (disclaimer: I haven't tried it).
Big news!
Apparently, Mark Shuttleworth finally got around to reading the emails I sent him in 2011.
[ http://tinyurl.com/kkdsapr ]
The awful "Unity" desktop is going away. Starting with Ubuntu 18.04LTS, the Ubuntu desktop will be returning to GNOME. The long international nightmare is finally over. Shuttleworth has finally understood what the rest of us already knew: phones are not tablets are not computers. A single environment that spans them all is a bad idea. (Hey Satya, are you listening? Doze 10 is more capable of acting like a computer than Doze 8, but it doesn't go far enough. Kill your phone project like Ubuntu did.)
This is one thing that Apple got right. Different devices call for different operating environments. I'm not a fan of Mac OS or iOS, but at least each one is designed specifically for the type of device it runs on.
Ubuntu will be refocusing on the things it does well, and (omg!) on the things that are actually bringing in revenue: cloud and IoT, with some desktop as well.
Unity was the reason I abandoned Ubuntu in 2011 and went to stock Debian, even though Ubuntu made the installation and maintenance experience easy.
I'm looking forward to returning to it.
They've been doing this for a while; this just makes it official. It still
requires a "helper" VM on the host to act as a kernel for the Linux containers.
I'm waiting for the day they run it on WSL.
And yes, I'm ok with this. It's the same as Windows software running on OS/2, eliminating any motivation for software developers to write for the platform that can emulate the other one. It's actually the same as Windows on OS/2 in another way: it isn't really emulation, but rather a well-hidden copy of the other operating system inside the environment.
Linux uber alles.
Exactly! It's the same thing -- redirecting system calls. And unsurprisingly,
the Linux system call interface turns out to be way easier to redirect than
the Windows system call interface, because it's well documented with no hidden
surprises.
It's also more stable than the Windows API. The Windows kernel is intentionally
hidden behind the WinRT/Win64/Win32/etc. API family, so MS feels more freedom
when it comes to tweaking kernel APIs. Linus mandates that user-land applications
built for 0.99 kernels must still run on 4.11 kernels, so Linux actually has a
better track record of compatibility than Windows in that particular respect.
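If you want a concrete feel for how small and well-documented that boundary is, you can watch any binary hit it with strace; that syscall layer is exactly the interface WSL has to reimplement on top of the NT kernel (and recent WSL builds can apparently run strace itself, since they grew ptrace support). A couple of throwaway examples:

    # Summarize every system call `ls` makes:
    strace -c ls /tmp

    # Or watch just the write() calls scroll by:
    strace -e trace=write echo hello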
As someone who still uses Windows 10 for games but also does development work for a living, I'm really quite excited by all this.
I am too; it really makes everything a little more accessible.
At the same time, now that I have two well-powered laptops, I can have the best of both worlds. I just installed Kubuntu on one of them and am realizing how much I missed having a native Linux around. KDE is looking good these days. For a while it had too much of a eurotrash thing going, but they've cleaned it back up. I like how they make it look and act like an actual desktop, instead of pretending to be a phone or a tablet.
I figured I'd go with KDE this time because it's the native environment of my very favorite video editor on any platform: Kdenlive.
I wonder, sometimes, how people manage to make interesting Debian packages at all.
There's this thing called debconf that one ought to use for asking the user things like the password to a database engine so you can add a new database and tables, etc. And there are these commands with which you may litter your postinst scripts to drive this kind of thing.
But you aren't likely to find the documentation for these commands easily.
No. On the official debconf site, they claim that because these commands are now part of Debian policy, you have to find them there. So you try to find them there, and you're treated to a byzantine labyrinth of text, hinting at the promised documentation without ever offering it to you easily.
I wonder how many people would use this system, but give up and just use bash's 'read' instead because fucking hell, you can at least find documentation for bash.
I did, eventually, figure out what I needed, following Debian policy, etc.
But, sincerely, those guys ought to make the documentation a tad less dense, and a bit more streamlined for the guy in a hurry.
Basically (there's a rough sketch of the scripts after this list):
* Redirect all stdout to a file, so you can refer to it later if something fucks up.
* You might also want to redirect stderr there, too.
* Output nothing to the user at all, short of something on stderr telling the user where to find this output.
* Use debconf commands for the rest. If you're using sh or bash, they start with db_, and you have to source /usr/share/debconf/confmodule to make the commands available.
* Remember to write a lot of echo statements into your log file so you can figure out where something fucks up, because debconf sometimes just seizes up the whole damned script when you set -e at the top (per policy) and something decides to go wrong.
* Test the fucker until your eyes bleed.
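To make that list concrete, here's a rough sketch of the moving parts. Treat it as a hedged sketch rather than a reference: the package name (mypkg) and the template name (mypkg/db_password, which would be declared in debian/templates) are made up for illustration, and debconf-devel(7) is the place to check before trusting any of it.

    #!/bin/sh
    # config: the actual prompting happens here, not in postinst.
    set -e
    . /usr/share/debconf/confmodule

    # Ask for the database password at high priority; don't die if
    # the frontend can't ask (e.g. a noninteractive install).
    db_input high mypkg/db_password || true
    db_go || true

And the postinst, which consumes the answer, logs everything, and says almost nothing to the user:

    #!/bin/sh
    set -e

    # Source the confmodule first -- it takes over stdout for the
    # debconf protocol and leaves fd 1 pointed at stderr for us.
    . /usr/share/debconf/confmodule

    # One breadcrumb for the user, then everything else goes to a log.
    LOG=/var/log/mypkg-postinst.log
    echo "mypkg: install details are logged in $LOG" >&2
    exec >>"$LOG" 2>&1
    echo "postinst started: $(date)"

    # Fetch the password the config script collected.
    db_get mypkg/db_password
    DB_PASS="$RET"
    echo "creating database and tables"
    # ... create the database and tables with "$DB_PASS" here ...

    # Scrub the password from the debconf database once it's used.
    db_reset mypkg/db_password

    echo "postinst finished: $(date)"
    db_stop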
Oh, the actual prompting happens within config, not postinst. But if you used db_reset to kill off the password (like you should), and you need it again on uninstall, you'll need to reproduce whatever you wrote in config in your prerm script. Or, be sneaky like me, and simply call config from prerm.
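The sneaky version, again as a rough sketch with the same made-up package name: once the package is installed, its config script lives at /var/lib/dpkg/info/mypkg.config, so prerm can just run it to re-ask the question instead of duplicating the prompt.

    #!/bin/sh
    # prerm: the password was reset after postinst, so ask again by
    # reusing the config script.
    set -e
    . /usr/share/debconf/confmodule

    /var/lib/dpkg/info/mypkg.config configure || true

    db_get mypkg/db_password
    DB_PASS="$RET"
    # ... drop the database with "$DB_PASS" here ...
    db_reset mypkg/db_password || true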
I have to say, it all does look very clean and slick when you use debconf, but the documentation leaves much to be desired.