[#] Sun Dec 01 2019 17:28:09 EST from IGnatius T Foobar


Yes, that sort of thing. Notification sharing, clipboard sharing, file transfer, media controls, find-my-phone, and it even has a mode where you can use your phone as a touchpad for the computer (which works great, but I'm not sure where I'd use such a thing).

[#] Mon Dec 02 2019 12:37:45 EST from IGnatius T Foobar


Oh, and KDE can snap moving windows to the nearest borders of other windows and the screen, something others still can't do even though it is the current year.

[#] Mon Dec 02 2019 14:26:46 EST from darknetuser


2019-12-01 17:28 from IGnatius T Foobar
Yes, that sort of thing. Notification sharing, clipboard sharing, file transfer, media controls, find-my-phone, and it even has a mode where you can use your phone as a touchpad for the computer (which works great, but I'm not sure where I'd use such a thing).

Argh, clipboard sharing sounds like a great way for your smartphone to control what you are doing on your computer.

I can imagine that using the phone as a touchpad may be good if you are lazy and want to control your computer remotely without getting up from the sofa. 

[#] Mon Dec 02 2019 22:34:17 EST from 9pf




You may like CWM for a minimal setup.


[#] Tue Dec 17 2019 15:09:43 EST from IGnatius T Foobar


I wanted to find out how to do a status line in Screen. So I ducked "status line in gnu screen" and I got an article that basically said, "Switch from Screen to tmux. Look at how hard it is to do a status line in Screen. To get a status line with the hostname, window list, load average, and date/time, you need to do THIS!"

Great, that's all I wanted anyway. Now I don't need to bother with tmux :)

(By the way, for those interested, it's done with the following line in ~/.screenrc ...)

hardstatus alwayslastline '%{gk}[ %{G}%H %{g}][%= %{wk}%?%-Lw%?%{=b kR}(%{W}%n*%f%t%?(%u)%?%{=b kR})%{= kw}%?%+Lw%?%?%= %{y}][%{G} %l%{y}] %{y}[%{G} %m/%d %c %{y}]%{W}'
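
(Rough decoder for that string, going off the string escapes section of screen(1):

  %H            hostname
  %-Lw / %+Lw   window list before / after the current window
  %n %f %t      current window's number, flags, and title
  %u            other users attached to the window
  %l            load average
  %m/%d %c      month/day and the current time
  %=            padding / alignment
  %? ... %?     conditional -- only shown if the part in between is non-empty
  %{...}        colour changes

so it's easy enough to rearrange the pieces to taste.)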

[#] Thu Dec 26 2019 20:07:49 EST from darknetuser


Hello!

I am operating a pet application on a crappy home server. The server often goes down for many reasons. It is an old, rusty piece of crap, so I sometimes run out of RAM, that sort of problem.

I am running a web application on it. It is one of those typical MySQL/MariaDB-powered PHP-FPM applications. I have been wondering where I could learn about High Availability so I could keep the site up even if the server crashed. I know I am not going to run a real HA setup because I only have one location for the pet project, and I am not going to push it to a cloud provider at this point, but knowing where to learn about this stuff would be cool.

My router has load balancing capabilities, by the way. How cool is that.

[#] Mon Dec 30 2019 12:16:24 EST from IGnatius T Foobar


That's a big question. There are so many aspects to HA that it needs to be narrowed down.

In its *best* form, a highly available application will run from multiple locations at once, keeping all of the data in sync and accepting connections at all locations identically. This of course requires an application that is capable of doing so.

In its *simplest* form, a highly available application has its data constantly being synchronized from a primary location to a standby location, and if the primary location fails, the secondary location begins receiving the workload.
This failover can take many forms, ranging from you manually pointing the DNS to the other hosting location, to automatic global load balancing services that do it for you.
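
For the manual-DNS flavor, the main trick is keeping the TTL on the record short so the switch takes effect quickly. A sketch of what that looks like in a zone file (name, TTL, and addresses are made up):

  www   60   IN   A   203.0.113.10     ; primary site
  ; on failure: repoint the record at the standby and bump the zone serial
  ; www 60   IN   A   198.51.100.20    ; standby site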

For a workload running on a typical L*MP web stack, that might simply take the form of an rsync job that copies the whole site to another location. On the clearnet, you have to deal with DNS; but I suppose on the darknet you can simply announce your router into the network from another location.
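
As a sketch of that rsync flavor (the hostname, paths, and the mysqldump step are assumptions about a typical layout, not a prescription), something like this run hourly from cron on the primary:

  #!/bin/sh
  # dump the database first so the copy on the standby side is usable
  mysqldump --single-transaction mydb > /var/backups/mydb.sql
  # mirror the web root and the dump to the standby box
  rsync -az --delete /var/www/ standby.example.com:/srv/standby/www/
  rsync -az /var/backups/mydb.sql standby.example.com:/srv/standby/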

The hosting company that I work for does a lot of Disaster Recovery work, so these are things I spend a lot of time thinking about. You start by making two decisions about your workload:

1. What is your Recovery Time Objective (RTO)? This means: how long can you tolerate the application being unavailable while you transition it over to the standby hosting location? If you have an RTO measured in minutes rather than hours, then you need to put in the kind of infrastructure that automatically activates the standby hosting location. If you believe that your RTO is zero, then you need to be running master-master at multiple locations all the time.

2. What is your Recovery Point Objective (RPO)? This means: if the primary site is destroyed or otherwise unrecoverable, how much data can you afford to lose? For example, if you're ok with losing up to 60 minutes of the latest updates, you can just run an hourly job that sends changes to the recovery site. If you're ok with losing up to 24 hours, you send your nightly backups over. But if you have something like financial transactions that cannot be lost, then your RPO is zero, and you need to synchronously commit every transaction to all sites.

What we find in the biz is that every customer begins the conversation by saying that their RTO and RPO are either zero, or really really low, and then when they find out what it would cost to do that, they magically discover that their tolerance for an hour or two of downtime after a site failure is ok.

[#] Mon Dec 30 2019 18:15:42 EST from darknetuser


I am more worried about having the application unreachable than having the server or data destroyed. Everything is backed up regularly and recovery time is about as good as can be expected. I mean, I have had real crashes and I managed to rebuild.

I am running my stuff on old, rusty hardware that is clearly being asked to do too much... it works ok most of the time, but if it is put under any sort of heavy load then the service might become unavailable for a while, at least in practice. I know the actual solution to this problem is to either upgrade the hardware or tune the application and stack to work with limited resources, but nevertheless, this made me wonder if I could set up a failover server to kick in if the main one crashed badly.

I think I could have a master server hosting the web application and then a separate database server, then configure a failover server for hosting a failover instance of the application. And have the router do the switching if the main webserver dies. I was just wondering if there was an industry standard way of doing this sort of thing.

To be clear, it is a toy server so there are no hard requirements... if it crashes and it takes 1 day to bring it back again, or 1 day of information is lost, it is not a big deal. But that sounds to me like bad "adminning". A reasonable target would be to have the failover system kick in within 15 minutes and guarantee no more than 6 hours of data loss.

[#] Thu Jan 02 2020 12:12:08 EST from LoanShark



If your backend is MySQL, it's fairly "standard" (ok, there are many standards to choose from) practice to enable binlog replication and set up a master DB server as well as a slave/replica DB server. This inhabits a middle ground between the hourly rsync replication described above and a full (and complicated) multi-master replication configuration.
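
The classic recipe for that, roughly (server IDs, hostnames, credentials, and the binlog coordinates below are all placeholders): enable the binlog on the master, give each server a distinct server-id, seed the replica from a dump, then point it at the master.

  # on the master, in my.cnf:
  #   [mysqld]
  #   server-id = 1
  #   log_bin   = mysql-bin
  # on the replica, in my.cnf:
  #   [mysqld]
  #   server-id = 2

  # then, on the replica, after loading a dump taken with --master-data:
  mysql -e "CHANGE MASTER TO MASTER_HOST='db1.example.com',
            MASTER_USER='repl', MASTER_PASSWORD='********',
            MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
            START SLAVE;"
  mysql -e "SHOW SLAVE STATUS\G"   # check Seconds_Behind_Master, etc.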

This means that there is some sort of watchdog daemon that, upon detecting that the master DB has failed, promotes the slave/replica to master and triggers either a DNS failover or an IP-address steal. It's not zero downtime, but it's good enough for the vast majority of moderately-scaled, vertical-market applications.
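
For the IP-address-steal half of that, one common building block (not naming anyone's particular stack, just a hedged sketch; the interface, address, and health-check script here are invented) is VRRP via keepalived, with a check that drops the priority when MySQL stops answering:

  # /etc/keepalived/keepalived.conf on the primary (standby runs the same with a lower priority)
  vrrp_script chk_mysql {
      script "/usr/local/bin/check_mysql.sh"   # your own MySQL health check
      interval 5
      fall 2
      weight -60
  }
  vrrp_instance DB_VIP {
      state MASTER
      interface eth0
      virtual_router_id 51
      priority 150
      virtual_ipaddress {
          192.168.1.50
      }
      track_script {
          chk_mysql
      }
  }

(The promotion of the replica itself still has to happen in whatever watchdog runs the health check, of course.)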

If your application issues queries that hit certain corner-cases, they may not replicate over binlog in a well-defined way, so on occasion you may need to resync replication and have a strategy in place for that. It's a bit technical but there are commercial products out there that do most of the heavy lifting for you.

At the other end of the spectrum, if you're hosting something like Farcebook that has to reach a global audience of 2 billion with very little downtime, some form of multi-master replication, sharded architecture, or "eventual consistency" will need to be baked into the application architecture and understood by the software developers. Farcebook does this by piling a few abstraction layers on top of MySQL; other applications do something very similar via database backends like MongoDB or DynamoDB.

[#] Fri Jan 03 2020 15:29:08 EST from IGnatius T Foobar


The funny thing about Fecesbook's architecture, from what I've read, is that internally they've built exactly what they DON'T want the rest of the world to have: a federated network of smaller social networks. From what I remember reading, every victim has a home server (or more likely a home cluster) as geographically close as they can get. When the victim logs in, their feed is assembled by a front end server that queries the home server as well as the servers of anyone to whom they are subscribed. Then it strips out any wrongthink, inserts ads, and renders the page as if it were running from a single location.

Social media is a great place for eventual consistency because nothing bad happens if you drop a bit of data here and there; the worst thing that happens is that someone misses a chance to see a cat video or some questionably sourced news bites.

The MySQL replica strategy mentioned above does work. But you'd be surprised at how badly some corporate customers build their application stacks. Hard coded IP addresses everywhere, applications (built or bought) that either can't handle a database server that's been moved or simply refuse to reconnect ... you name it, we've seen it.

That's why business poindexters tend to gravitate towards commercial solutions like Zerto which, as LS points out, "do most of the heavy lifting for you".
To me, it's no substitute for simply building your application to run in a way that is not location dependent, but service providers love it because it relieves them of responsibility for the customer's workload.
The nice thing about these solutions is that they integrate with the hypervisor's storage layer to replicate changes made to disk, but in doing so they also allow you to group together the virtual machines that make up a particular workload and send the changes to the standby site in a commit-by-commit fashion. This makes the replica "crash consistent", which, as every database person knows, means that the replica will have no out-of-order commits, even across multiple virtual machines on multiple disks.

[#] Fri Jan 03 2020 15:34:48 EST from IGnatius T Foobar


For a small workload running on a Linux system however ... assuming that you're not using HA built in to the application ... the most simple and reliable recipe looks like this:

1. Run the *entire stack* on a btrfs or LVM volume
2. Every night / hour / couple of minutes (whichever you choose) --
a. Perform a snapshot of the volume
b. rsync the snapshot to the standby site

Because everything is on the same volume, your snapshots are guaranteed to be crash consistent. This gives you the quality and utility of something like NetApp SnapMirror but without the cost.
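
A minimal sketch of step 2, assuming btrfs and that /srv is the subvolume holding the whole stack (the paths and standby hostname are made up):

  #!/bin/sh
  # read-only snapshot of the subvolume -- this is the crash-consistent point in time
  SNAP=/srv/.snap-$(date +%Y%m%d-%H%M)
  btrfs subvolume snapshot -r /srv "$SNAP"
  # ship it to the standby site; --delete keeps the copy an exact mirror
  rsync -az --delete --exclude='.snap-*' "$SNAP"/ standby.example.com:/srv/
  # drop the snapshot once it has been copied
  btrfs subvolume delete "$SNAP"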

[#] Fri Jan 03 2020 16:35:32 EST from LoanShark


That's why business poindexters tend to gravitate towards commercial solutions like Zerto which, as LS points out, "do most of the heavy lifting for you".
To me, it's no substitute for simply building your application to run in a way that is not location dependent, but service providers love it

I would prefer not to name specific products (because this is something that can be home-grown reasonably, as well), and I haven't heard of Zerto. I was referring to a stack that does a bit less than what you're describing here: simply DNS failover in order to promote the MySQL slave to master in an automated fashion with all the necessary monitoring.

This is enough to meet the needs of a large majority of *properly designed* applications these days, excepting those like Farcebook that have really exceptional scaling needs. It's becoming a standard practice InTheCloud.




No doubt it's a bit easier for applications that were built this way from the ground up, but it's far from exclusive to such apps.

[#] Fri Jan 03 2020 16:53:58 EST from LoanShark


Because everything is on the same volume, your snapshots are guaranteed to be crash consistent. This gives you the quality and utility of something like NetApp SnapMirror but without the cost.

That's another simple way to do it.

What I see happening lately is a lot of applications packaged as Docker containers. If the container doesn't require you to provision connectivity to an external RDBMS, it probably spins up an RDBMS process of its own, in-container, or has some other block storage layer.

A Docker container will typically be documented to require 1-3 external volume mounts, which will be handled by a Docker volume storage driver. (I say 1-3 because it's not uncommon to have 1 volume for config, 1 volume for data, and 1 volume for logs.)

If those are all known to perform well over NFS, so much the better, and the whole problem is greatly simplified. If not, 1 or 2 of those volumes will need to be mounted via some form of block storage protocol (meaning they become owned by a single compute node at any given time, with various implications for HA and scaling), and there will be snapshotting.
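
As a sketch of that layout (the image name, volume names, and mount points are invented, not any particular product):

  # the usual trio of named volumes: config, data, logs
  docker volume create someapp-config
  docker volume create someapp-data
  docker volume create someapp-logs
  docker run -d --name someapp \
      -v someapp-config:/etc/someapp \
      -v someapp-data:/var/lib/someapp \
      -v someapp-logs:/var/log/someapp \
      example/someapp:latest

Whether those volumes end up on NFS, local disk, or some block device is then the volume driver's problem, which is exactly where the HA and snapshotting decisions come in.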


Gitlab, for example, has a complicated HA feature which, if you choose to use it at all, involves configuring one or more "read-only" storage servers which act as slaves and can offload some of the read-only queries to help with performance. I've got a test deployment of Gitlab set up, which we're poking at sporadically, but we're not using the HA features because we will never need to horizontally scale in this lifetime, and because we can tolerate the minimal downtime that we're likely to experience in a cloud hosting environment.

[#] Fri Jan 03 2020 17:48:16 EST from IGnatius T Foobar


No doubt, it all gets easier when workloads are packaged inside containers, because then they have well-defined interfaces governing how they interact with the outside world.

I operate in a different space, where people bring decades of legacy cruft with them. In your world it's far easier to develop things the right way, with loosely coupled interfaces, containerization, and other features that make each software module easy to replicate, move around, run anywhere ... that gets you to the point where (as long as you're careful) you can run any workload in any location at any time. It's a good place to be, because you can shop for cloud hosting as a commodity.

And there are plenty of providers and software vendors who will take care of that stuff for you automatically, if you're willing to pay.

[#] Sun Jan 05 2020 09:16:24 EST from LoanShark


good place to be, because you can shop for cloud hosting as a commodity.

Yes and no. We end up increasingly tied to cloud-specific services (I don't know anyone who hosts anything on Amazon who doesn't at least have a few scripts and some backend logic that use S3).

Monitoring happens via CloudWatch because a lot of your data is already there *anyway* and it's best to monitor at a level above your actual servers, so that your monitoring system does not itself go down.

Increasingly, best practice for developing a modern, user-facing webapp means using Node.js and either npm or yarn to invoke webpack and generate a static JS bundle. (The build system uses Node.js, but the resulting deployable can be hosted on any static-only HTTP server, because these days it's considered a best practice to do all rendering browser-side in some framework like React.)

If the entire frontend layer is static, then the most cost-effective deployment might be something based on serverless, cloud-native services like S3+CloudFront+Lambda@Edge instead of spinning up a traditional pair of Linux boxes running nginx.
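
The deploy step for that kind of static bundle can be tiny (the bucket name, build directory, and distribution ID are placeholders, and this assumes the usual "build" script in package.json):

  # build the static bundle, then sync it to the bucket fronted by CloudFront
  npm run build
  aws s3 sync build/ s3://example-frontend-bucket --delete
  # invalidate the CDN cache so the new bundle is picked up
  aws cloudfront create-invalidation --distribution-id EXAMPLEDISTID --paths '/*'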

These are small, simple services and it's not *that* hard to port this stuff to some other cloud, but it has a way of adding up.

[#] Sun Jan 05 2020 21:06:41 EST from darknetuser


2020-01-03 15:34 from IGnatius T Foobar
For a small workload running on a Linux system however ... assuming
that you're not using HA built in to the application ... the most
simple and reliable recipe looks like this:

1. Run the *entire stack* on a btrfs or LVM volume
2. Every night / hour / couple of minutes (whichever you choose) --
a. Perform a snapshot of the volume
b. rsync the snapshot to the standby site

Because everything is on the same volume, your snapshots are
guaranteed to be crash consistent. This gives you the quality and
utility of something like NetApp SnapMirror but without the cost.



I am currently using OpenBSD, but I understand the idea and I think it would not be very hard to replicate in the BSD world. I'll give it some thought. Thanks for the tip.

[#] Mon Jan 06 2020 17:17:20 EST from IGnatius T Foobar


You can do the same thing with any filesystem or volume manager that has copy-on-write snapshots.  This is what gives you differential backups instead of full backups.  Otherwise you need much more storage.
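
The LVM flavor of the same recipe looks roughly like this (the volume group, names, and sizes are made up):

  #!/bin/sh
  # copy-on-write snapshot of the logical volume holding the whole stack
  lvcreate --snapshot --name stack-snap --size 2G /dev/vg0/stack
  mount -o ro /dev/vg0/stack-snap /mnt/snap
  # rsync only sends what changed since last time, so the transfers stay differential
  rsync -az --delete /mnt/snap/ standby.example.com:/srv/stack/
  umount /mnt/snap
  lvremove -f /dev/vg0/stack-snap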



[#] Fri Jan 17 2020 23:14:53 EST from LoanShark



Anyone looked into Hashicorp Consul (and their other product that layers on top of it, Nomad, I guess)?

My boss was excited about it. I was the opposite of excited. Which means that so far we're doing nothing with it...

To me, it seems duplicative of some things that AWS provides for us. And complex.

[#] Fri Jan 17 2020 23:19:56 EST from LoanShark



* My boss wanted to use it for service discovery. Problem is, as I see it, that ECS does that for us in an AWS environment. And on AWS, ECS is the container runtime of choice to such an extent that I have a hard time seeing us move away from it. It appears to have the ability to containerize certain legacy workloads that won't be compatible with K8S.

Though if anyone can convince me otherwise...
