[#] Tue Jan 30 2024 19:43:46 EST from IGnatius T Foobar


Whenever I revisit this project idea, I keep coming back to the HP EliteDesk 800 G3. I have a co-worker who uses them as well, and he's been quite happy with them.
Memory can go up to 64 GB, it has both M.2 and 2.5" slots, and the price/performance is in a sweet spot. I'm hoping to have the funds together by late spring.

My ideal environment would be a three-node Kubernetes cluster with a distributed storage engine like OpenEBS (though I might consider Ceph as well). I might need to run virtual machines, but I'm going to try not to. The only workload that could be tricky in a container is the edge router, but it seems there's now an add-on called Multus [https://microk8s.io/docs/addon-multus] that can give a pod multiple network interfaces.

I really wish Proxmox VE had a *native* Kubernetes stack that ran directly on the nodes instead of having to put it into virtual machines. At this point that's the only thing keeping me from running it.

[#] Tue Jan 30 2024 20:11:44 EST from Nurb432


If I had not already invested in all these M93Ps, I'd be looking at an M700, even though they are slightly slower, since I can still swap the CPU out (and it takes 64 GB of RAM plus an M.2 SSD slot).



[#] Sat Feb 03 2024 12:07:46 EST from IGnatius T Foobar



Latest test results are in :) Remember, the reason I'm doing this is to move uncensored.citadel.org and www.citadel.org and my other stuff back home. My current rig with the SATA disks is a two-node cluster, so I made a copy of uncensored.citadel.org and moved it to the other node. This is on a 32-bit Linux VM.

Export: 18 minutes and 22 seconds, 740185 rows.
Import: 20 hours and 32 minutes before it crashed (out of memory).

I certainly wasn't expecting it to crash. I was expecting it to just take a long time. I'll try it on 64-bit next, since that is the reason I'm doing the export to begin with. It would be difficult to deprecate 32-bit support when my own flagship system is still using it!

The next test will be to import the database on a newer machine with an SSD.

I've also been giving a lot of thought to what darknetuser said: do I really need a cluster? I am a data center architect by trade, so I may be over-engineering things. Maybe it does make sense to build just one server and consider my desktop to be the backup machine (it did, after all, run all of my VMs for some number of months). I think if I did just one machine, it would be something in Mini-ITX form factor with 64 GB of memory, two or three SSDs, and a PicoPSU module instead of a traditional power supply so it can run from 12 volts DC.

Yes I keep changing my mind.

[#] Sat Feb 03 2024 17:05:29 EST from Nurb432


Personally, I'd run a tiny cluster: 2 real nodes, with a 3rd 'fake' node on a tiny SBC to complete the quorum. Of course I'd use PVE, and it really needs 3 nodes to work right (even if one is 'fake'), so YMMV depending on the needs of the cluster you are choosing (I know it's not PVE).

I do realize that it's not 'life critical' stuff here, and some downtime while you rebuild the OS from scratch and then restore backups isn't the end of the world, but it's not that much more money, and it makes things a lot easier to deal with, I feel. If we were talking super-critical stuff, then yeah, five nodes or something and a separate storage cluster.



[#] Sun Feb 04 2024 07:38:24 EST from darknetuser


2024-02-03 17:05 from Nurb432
Personally, I'd run a tiny cluster: 2 real nodes, with a 3rd 'fake' node on a tiny SBC to complete the quorum. Of course I'd use PVE, and it really needs 3 nodes to work right (even if one is 'fake'), so YMMV depending on the needs of the cluster you are choosing (I know it's not PVE).

IMO if you are going to add that level of redundancy, you either do it right or you don't do it.

If you have a single ISP subscription, a single router and a single power source, the only thing a cluster is really protecting you from is having one of your compute nodes get toasted and take your services with it.

We already have long downtimes due to connection saturation and the like at Uncensored. It is not like hardware redundancy is going to push availability from 99% to 99.9%.


It does not make financial sense either, because the financial loss from downtime is zero, so buying more nodes means spending money to compensate for zero loss.

For the record, I have my own business services on premises with a single ISP subscription, a single router and a single node per service. It makes more sense to invest in proper monitoring and power redundancy when you live on the cheap. If you use SMB-grade hardware and your software doesn't suck, I have found you will achieve 95-98% availability, which is not bad for a home setup... especially if you consider that the service usually faces downtime due to either planned maintenance or the ISP going on vacation.

[#] Sun Feb 04 2024 09:12:51 EST from Nurb432


...and having to rebuild the metal from scratch + restoring VMs from backups. For me it's more about 'ease of recovery and general use' than 'maintaining up-time', though from a hardware standpoint it helps that too. It also makes it easier to shift loads around while I experiment with things, and upgrading storage or actual machines is also easier with a small cluster: just shift VMs over to the 2nd node, upgrade the original node, then move some back. Again, no futzing with restoring backups.

But I do agree that if it's 'critical' then you have a backup line coming in. But I personally still see an advantage.

 

(And I have a battery on mine, again not so much for 'up-time' but for a controlled shutdown option, to avoid corruption and, once again, having to rebuild the metal and restore VM backups.)

Sun Feb 04 2024 07:38:24 EST from darknetuser

If you have a single ISP subscription, a single router and a single power source, the only thing a cluster is really protecting you from is having one of your compute nodes get toasted and take your services with it.

 



[#] Sun Feb 04 2024 15:01:21 EST from darknetuser


2024-02-04 09:12 from Nurb432
...and having to rebuild the metal from scratch + restoring VMs from backups. For me it's more about 'ease of recovery and general use' than 'maintaining up-time', though from a hardware standpoint it helps that too. It also makes it easier to shift loads around while I experiment with things, and upgrading storage or actual machines is also easier with a small cluster: just shift VMs over to the 2nd node, upgrade the original node, then move some back. Again, no futzing with restoring backups.

But I do agree that if it's 'critical' then you have a backup line coming in. But I personally still see an advantage.

 


I just keep an offline spare server to which I can copy everything. It would be "better" to have a cluster just so you could perform faster recoveries or need no recovery at all, but it is hard to justify the expense of keeping a server in a cluster doing nothing but eating power if you are just hosting a personal business site with associated services. The expense of keeping the equipment on standby is worse than losing two hours every 5 years rebuilding the thing.

[#] Sun Feb 04 2024 15:21:38 EST from Nurb432


If they were huge traditional servers, sure, I could see the power waste.

But with the stuff I use here at home, and with what I think IG has planned, it won't be that much of a concern really. Mine max out at 50 watts running full bore (which they never do unless I'm experimenting); idle is far, far lower, around 15 watts or less.

I also turn off my backup server (same series of machine) when not in use. I power it up remotely just before weekly backups, then it powers back down.



[#] Sun Feb 04 2024 17:54:47 EST from IGnatius T Foobar


If you have a single ISP subscription, a single router and a single power source, the only thing a cluster is really protecting you from is having one of your compute nodes get toasted and take your services with it.

That's what you said last week, and the more I think about it the more it makes sense.

I'm in the data center business.  I do high uptime stuff, and we sell a lot of disaster recovery services, so it's the kind of thing that's in my head a lot.  My gaggle of virtual machines are all attached to a "router" VM that tethers the entire environment to a VPN service running at a place that actually hosts my IP addresses.  I pay for this service.  Not only does it conceal the actual location of the origin servers, but it actually makes the entire environment portable.  Anywhere there is a replica, the entire environment can be started up and attached to the Internet.  This is my current disaster recovery strategy: the virtual machines are all replicated nightly to my personal rig at home, so if the primary data center becomes unavailable for any reason -- from an actual facility outage, to the machine itself blowing up, to my hosting arrangements coming to an end -- I can simply start up the virtual machines (including the router) at home and away we go.

If I got a single machine instead of a cluster, I could afford to make it "better" -- more memory, more disk, etc. and then it could also double as a home NAS (something I do not have today).  My desktop machine would continue to be the local replica that I could turn on whenever I need to, and the old server would be relegated to an off site backup.

At this point my target date is late spring or summer of this year, because that's when my truck will be paid off and I'll have a little money to work with.  :)

We already have long downtimes due to connection saturation and the like at Uncensored. It is not like hardware redundancy is going to push availability from 99% to 99.9%.

I feel I have to address this.  The occasional "too many users are already online" issue happens when some douchebag from out of town decides to try to brute-force one of our services, like authenticated SMTP or POP or IMAP or whatever.  These people need to die slowly, slashed with razor blades all over and then submerged in acid.  Whenever it happens, I take a look at it and I see all of the idle connections.  I've checked a bunch of times and all the services have their idle timeouts configured and working, but for some reason these connections are held open.  When I figure that out we'll be in good shape.  In the meantime I'd appreciate any help if anyone knows how to troubleshoot this sort of thing.



[#] Sun Feb 04 2024 18:14:35 EST from Nurb432


Other than blocking their IP (like fail2ban does after X failed attempts), I'm not sure you can.

Sun Feb 04 2024 17:54:47 EST from IGnatius T Foobar
I feel I have to address this.  The occasional "too many users are already online" issue happens when some douchebag from out of town decides to try to brute-force one of our services, like authenticated SMTP or POP or IMAP or whatever.  These people need to die slowly, slashed with razor blades all over and then submerged in acid.  Whenever it happens, I take a look at it and I see all of the idle connections.  I've checked a bunch of times and all the services have their idle timeouts configured and working, but for some reason these connections are held open.  When I figure that out we'll be in good shape.  In the meantime I'd appreciate any help if anyone knows how to troubleshoot this sort of thing.


 



[#] Sun Feb 04 2024 18:20:58 EST from darknetuser


2024-02-04 18:14 from Nurb432
Other than blocking their IP (like fail2ban does after X failed attempts), I'm not sure you can.

I think the problem is more like the TCP connection is not getting closed when it should be naturally closed.
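For what it's worth, one classic way a connection ends up lingering like that is a peer that vanishes without ever closing: nothing arrives on the socket, so nothing ever fails. A rough sketch of enabling TCP keepalive so the kernel eventually notices on its own (Linux-specific option names, and purely an illustration, not a claim about what Citadel does):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Rough sketch: turn on TCP keepalive for an accepted socket, using
     * Linux-specific option names.  If the peer has silently vanished,
     * the kernel probes it and eventually errors the socket out instead
     * of leaving it ESTABLISHED forever. */
    static void enable_keepalive(int fd)
    {
        int on = 1, idle = 300, intvl = 30, cnt = 4;  /* seconds, seconds, probes */
        setsockopt(fd, SOL_SOCKET,  SO_KEEPALIVE,  &on,    sizeof on);
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof idle);
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl);
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof cnt);
    }

With settings like those, a dead peer starts getting probed after about five minutes of silence and the socket errors out a couple of minutes later, instead of sitting open indefinitely.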


[#] Sun Feb 04 2024 18:22:04 EST from Nurb432


An intentional DoS tactic.

Sun Feb 04 2024 18:20:58 EST from darknetuser
2024-02-04 18:14 from Nurb432
Other than blocking their IP (like fail2ban does after X failed attempts), I'm not sure you can.

I think the problem is more like the TCP connection is not getting closed when it should be naturally closed.

 



[#] Sun Feb 04 2024 18:25:47 EST from darknetuser


2024-02-04 18:22 from Nurb432
An intentional DoS tactic.

Yes, but the server itself ought to be dropping the connection AFAIK.

[#] Sun Feb 04 2024 18:56:12 EST from darknetuser


I feel I have to address this.  The occasional "too many users are already online" issue happens when some douchebag from out of town decides to try to brute-force one of our services, like authenticated SMTP or POP or IMAP or whatever.  These people need to die slowly, slashed with razor blades all over and then submerged in acid.  Whenever it happens, I take a look at it and I see all of the idle connections.  I've checked a bunch of times and all the services have their idle timeouts configured and working, but for some reason these connections are held open.  When I figure that out we'll be in good shape.  In the meantime I'd appreciate any help if anyone knows how to troubleshoot this sort of thing.

IMAP itself is supposed to have a short preauth timeout and a long postauth timeout. I mention it just in case you are using a long timeout for both. XD

IMAP supports a number of commands in the preauth state that could prevent the connection from timing out, in theory. If I wanted to screw somebody over, I would just connect to their IMAP server and send a CAPABILITY or NOOP command every X seconds.
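For illustration, a rough sketch of that kind of keep-alive client in C. The hostname is a placeholder, error handling is minimal, and the 30-second interval is just an assumption about staying under a typical preauth timeout; the point is only how cheap the trick is:

    #include <netdb.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        struct addrinfo hints = {0}, *res;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("imap.example.com", "143", &hints, &res) != 0) return 1;

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) return 1;
        freeaddrinfo(res);

        char buf[512];
        read(fd, buf, sizeof(buf));              /* server greeting */
        for (int tag = 1; ; tag++) {             /* never authenticate */
            char cmd[32];
            int n = snprintf(cmd, sizeof(cmd), "a%d NOOP\r\n", tag);
            if (write(fd, cmd, n) < 0) break;    /* server finally hung up */
            read(fd, buf, sizeof(buf));          /* "aN OK ..." */
            sleep(30);                           /* stay under the idle timeout */
        }
        close(fd);
        return 0;
    }

One connection like that ties up a session slot indefinitely while looking "active" to any timer that resets on traffic.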

I am no IMAP guru so I don't have much more to contribute.

[#] Mon Feb 05 2024 18:45:32 EST from IGnatius T Foobar


These connections are staying open even if I tell the Citadel Server to terminate them.  That might mean they're still bound to a thread and in the middle of a transaction, which would imply that they're messing with the TCP semantics somehow.  I'm unable to reproduce this effect in the lab -- if I open a session and then terminate it, it goes away; if I open a session and then let the timeout expire, it also goes away.
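(Hedged side note: a common trick for yanking a socket out from under a thread that may be blocked on it is shutdown() rather than close() alone, since shutdown() reliably wakes up a blocked read() while close() from another thread may not. A minimal sketch, not a description of how Citadel actually handles sessions:)

    #include <sys/socket.h>

    /* Sketch only -- not Citadel's actual code.  shutdown() forces a socket
     * closed even if another thread is blocked in read()/recv() on it: the
     * blocked call returns immediately (EOF or error).  Calling close()
     * alone from a different thread does not reliably wake that thread. */
    static void kick_session(int fd)
    {
        shutdown(fd, SHUT_RDWR);   /* no further sends or receives */
        /* the worker thread then sees EOF, cleans up, and close()s the fd */
    }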

In other news, I found a bug in "ctdlload" that was only uncovered by trying to import an entire copy of the uncensored.citadel.org database.  The export on the production system took 18 minutes.  An import on that machine's twin had been running for 20 HOURS before it hit the bad record and crashed, and that's before it even got into the "big" table (message texts exceeding 1024 bytes).  After fixing the bug I'm trying it again on the NanoPi+M.2 rig.  It ran really fast and got well into the big messages before the OOM killer got it.  ☹️  So I added some swap and am trying again.

What it all boils down to, however, is that my current server is absolute garbage when it comes to disk write performance, and I can build nearly anything at this point and it'll be a far better performer than what I currently have, as long as it's using some sort of SSD as its data drive.

The "M350" Mini-ITX case looks pretty cool, is wall mountable, can be found for cheap on eBay (including existing systems that I could just up the RAM in) and with a PicoPSU can run from a 12 volt supply.  My current thinking is that one of these with an M.2 as the main system disk, a SATA SSD for virtual machine images, and a big spinning HDD for Home NAS use, could get the job done without breaking the bank.  As previously mentioned I could use my desktop as the hot standby machine.



[#] Mon Feb 05 2024 18:53:31 EST from Nurb432


That is my guess: flooding with attempts, and then attempts to crash things when that does not work, both for DoS and to look for bugs to exploit. I have seen that here too; it's one reason I took everything offline for a while.

Script kiddies need to be up against the wall, but after the politicians. 

Mon Feb 05 2024 18:45:32 EST from IGnatius T Foobar

which would imply that they're messing with the TCP semantics somehow.  



 



[#] Tue Feb 06 2024 12:19:47 EST from darknetuser


2024-02-05 18:45 from IGnatius T Foobar
These connections are staying open even if I tell the Citadel Server to terminate them.  That might mean they're still bound to a thread and in the middle of a transaction, which would imply that they're messing with the TCP semantics somehow.  I'm unable to reproduce this effect in the lab -- if I open a session and then terminate it, it goes away; if I open a session and then let the timeout expire, it also goes away.

How does Citadel terminate TCP connections that time out? Are you using the classical RST method?

My TCPing is rusty, but TCP sockets have plenty of options for dropping connections that have nothing interesting going on for them.



[#] Tue Feb 06 2024 12:55:34 EST from IGnatius T Foobar


How does Citadel terminate TCP connections that time out? Are you using the classical RST method?

My TCPing is rusty, but TCP sockets have plenty of options for dropping connections that have nothing interesting going on for them.

Citadel Server doesn't attempt to do anything like that.  When there is a session that has been idle for the configurable timeout period, it simply calls close() to close the socket.
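(For illustration only, not the actual server code: the idle-timeout-then-close pattern looks roughly like this with poll(), and the plain close() is what makes the teardown a graceful FIN rather than a reset.)

    #include <poll.h>
    #include <unistd.h>

    /* Generic illustration of an idle timeout, not the actual server code:
     * wait up to timeout_ms for the client to send something; if nothing
     * arrives, close() the socket, which tears the connection down
     * gracefully (the kernel sends a FIN). */
    static int wait_or_close(int fd, int timeout_ms)
    {
        struct pollfd p = { .fd = fd, .events = POLLIN };
        if (poll(&p, 1, timeout_ms) <= 0) {   /* 0 = timed out, <0 = error */
            close(fd);
            return -1;                        /* session is gone */
        }
        return 0;                             /* data is ready; caller read()s it */
    }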

If you know of a better way please suggest it -- that sounds useful!

 



[#] Tue Feb 06 2024 13:13:31 EST from IGnatius T Foobar

Subject: The new server, part 0


Well, it's settled.  I saw something on eBay, made a lowball offer, and it was accepted:

(photo)

This is the bones of an old Datto Siris SB41 NAS, which the seller confirmed can accept any Mini-ITX motherboard.  It comes with a 120 watt PicoPSU module, a 12 volt brick to power it, and drive caddies for two of its four 2.5" slots.  I probably paid a quarter of what it would cost to get all that stuff separately.

I still need to wait some time before I can go buy all of the stuff to go in it.  I'll find a nice Mini-ITX motherboard with 32 or 64 GB of RAM and an M.2 drive for booting the operating system.  Into the drive slots will go a mix of SSD for my containers and virtual machines, and HDD for NAS and logs.  Then a nice 12 volt battery backup and we're off to the races.



[#] Tue Feb 06 2024 13:18:48 EST from darknetuser


Citadel Server doesn't attempt to do anything like that.  When there is a session that has been idle for the configurable timeout period, it simply calls close() to close the socket.

If you know of a better way please suggest it -- that sounds useful!


I think close() attempts a graceful termination by delivering a FIN TCP packet to the other endpoint. That isn't very good for dealing with malicious clients.


A quick search suggests setting SO_LINGER with a small/zero timeout just before calling close(). This will have the server deliver an RST TCP packet, which basically amounts to sending a message into the void saying you are dropping the connection and then dropping it in their face. This is what produces the infamous "Connection reset by peer".
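In code form, that would look roughly like this (standard BSD sockets, just a sketch):

    #include <sys/socket.h>
    #include <unistd.h>

    /* Abortive close: with SO_LINGER on and a zero linger time, close()
     * discards any unsent data and sends an RST instead of the usual FIN.
     * The peer sees it as "Connection reset by peer". */
    static void abortive_close(int fd)
    {
        struct linger lg = { .l_onoff = 1, .l_linger = 0 };
        setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);
        close(fd);
    }

The trade-off is that any data still queued for the client is discarded, which is usually exactly what you want for an abusive connection.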
