[#] Sun Dec 17 2023 02:42:58 UTC from LadySerenaKitty


Better than sounding "world thirdly" imo.

Fri Dec 15 2023 15:26:28 EST from darknetuser

Lol, I am starting to sound so third worldly.

 



[#] Mon Dec 18 2023 04:08:41 UTC from IGnatius T Foobar


Some segments are Cat 4, then I have some Cat5e but it is Chinese Cat5e, which means if you test the wire you won't get the rated speed.


Wow, I've never heard of anyone having Cat 4 installed. Every site I have ever seen went straight from Cat 3 to Cat 5 when 100 Mbps Ethernet became available. But I do agree with the consensus here: it'll be a long time, if ever, before the "long" runs in the house need anything more than Cat 5, and if I ever do need it, I can use the existing wire as a drag line.

The good news is that the only difficult run (up from the garage, through the wall in my office, across the attic, and down into the living room) is accompanied by a run of RG-6 coaxial cable that I pulled along with it. That will likely be the drag line, since I abandoned the television service several years ago ... and even if for some reason we wanted to resume idiot-box service in the future, all of the providers are moving to IP delivery anyway, even for traditional multichannel service. Coaxial cable for home television service is now obsolete (MoCA retrofits notwithstanding).

Now for an ironic twist to the "how much bandwidth do you really need" conversation.
I just came to the realization that my main rig somehow downrated its connection to 100 Mbps, and I hadn't noticed for weeks. A reboot fixed it, but I'll have to pay attention to see if it happens again.

[#] Sat Jan 06 2024 01:59:08 UTC from IGnatius T Foobar

Subject: I sure love my fiber :)


Good

Connection speed on your device:
DOWNLOAD: 940 Mbps
UPLOAD: 865 Mbps

[#] Tue Jan 09 2024 23:02:05 UTC from IGnatius T Foobar



Because of reasons, I am beginning to work on a "hosting exit strategy" to bring my servers back home.

This leaves me with a lot of choices to make. Most of them revolve around cost -- with two kids in college I have very little money to spend. Ideally I would like a small cluster running on 12 volt DC power, because that's how the core of my home network already runs. If I am home when the power goes out, there's plenty of time to get the generator plugged in. Power cost is another variable, of course.

Bandwidth is not an issue. I have 1 Gbps fiber and it never goes out. I would say I've had 99.999% uptime on that. I will continue to use the Static IP VPN from Ace Innovative that gives me a presence on the Internet regardless of the actual location of the origin servers. The nice thing about that is that it can run from anywhere.
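For anyone curious, the general shape of that arrangement (not necessarily theirs, just a generic WireGuard + DNAT sketch with made-up addresses and placeholder keys) is: the box holding the static public IP terminates the tunnel and forwards inbound traffic to the origin server, wherever it happens to live.

# wg0.conf on the box with the static public IP
[Interface]
PrivateKey = <edge private key>
Address = 10.9.0.1/24
ListenPort = 51820

[Peer]
# the origin server at home (or wherever it moves to)
PublicKey = <origin server public key>
AllowedIPs = 10.9.0.2/32

# then forward inbound web traffic down the tunnel (with net.ipv4.ip_forward=1 enabled)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.9.0.2
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE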

I think that mostly what I need now is disks. I'm probably overdue to finally get a home NAS so maybe it's time to do that and let it double up as the storage for a hosting cluster.

I *might* be able to get the Dell servers I had my eye on, and bring them home instead of running them in the data center. I don't think I want to do that long term though, because of the power consumption.

[#] Tue Jan 09 2024 23:24:37 UTC from IGnatius T Foobar


Oh, and one other thing: offsite backups. When I ran the servers at home, I backed them up to the data center. When I ran them at the data center, I backed them up at home. With only one location I will need a B-site. I am thinking maybe a single board computer with a biggish disk and drop it on the VPN from a family member's home or something.
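Mechanically, that could be as simple as a cron job on the SBC pulling backups over the VPN each night; the hostname, user, and paths below are made up, and it assumes SSH keys are already in place:

# /etc/cron.d/offsite-backup on the SBC: nightly pull over the VPN
30 3 * * * backup rsync -a --delete backup@10.9.0.2:/var/backups/ /srv/offsite/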

[#] Tue Jan 09 2024 23:35:26 UTC from Nurb432


May not be for everyone, but if you don't mind the RAM limitation (which does suck), those Lenovo tinys I have, with a Xeon swapped in, do pretty well. They are stupid cheap and low on power use. And I know you are not into it, but they run PVE well.

I hear the next model up can handle 64 GB of RAM, and it does have the 2nd M.2 socket soldered in. I have heard you can swap the CPU there too, but I don't have one in my possession to say whether that is legit or not.



[#] Tue Jan 09 2024 23:36:47 UTC from Nurb432


Ran across an RK3588 ( :) ) SBC the other day out on AliExpress that had eMMC to boot from and 4, yes, 4, M.2 sockets.

Tue Jan 09 2024 18:24:37 EST from IGnatius T Foobar
Oh, and one other thing: offsite backups. When I ran the servers at home, I backed them up to the data center. When I ran them at the data center, I backed them up at home. With only one location I will need a B-site. I am thinking maybe a single board computer with a biggish disk and drop it on the VPN from a family member's home or something.



[#] Wed Jan 10 2024 01:11:38 UTC from IGnatius T Foobar


For this particular setup I might consider PVE. What model Lenovo is it?
All the ones I see on eBay have Celery processors in them.

[#] Wed Jan 10 2024 01:41:52 UTC from Nurb432


Mine are M93p.

The next model up, with the 2nd M.2 socket installed and a larger RAM option, is the M900. If I had known about them before I bought my M93s, I'd have gone with them for the RAM, if nothing else.

Most of these things came with i5s; a few came with i3s and some came with i7s. I get the bare-bones i5s, as they are common as dirt, then rip the CPU out, replace it with a Xeon E3-1275L V3, and add RAM and an SSD. The M900 is a different family, so I don't know the part number without looking it up, but there are i7s and Xeons available in that family too (Skylake? I don't remember). BUT I have no first-hand experience with the 900s and have only read that people have done it, so YMMV.

One thing to watch, if you care about WiFi: they are really picky about what cards they will accept. If you shove in a card it doesn't like, it won't even POST; it just bitches at you about an 'illegal card installed'. Really annoying.

Tue Jan 09 2024 20:11:38 EST from IGnatius T Foobar
For this particular setup I might consider PVE. What model Lenovo is it?
All the ones I see on eBay have Celery processors in them.



[#] Wed Jan 10 2024 09:18:22 UTC from darknetuser


I think that mostly what I need now is disks. I'm probably overdue to finally get a home NAS so maybe it's time to do that and let it double up as the storage for a hosting cluster.

How much storage are you planning to use?

I am not a fan of NAS appliances myself. I'd rather get one of those cheap tower PowerEdges and load it up with drives (which is essentially what I do at $job). A NAS appliance might be fine if you get it extremely cheap somewhere, but I hate their firmwares so much that I usually shove them into an isolated LAN where they can access nothing.

[#] Wed Jan 10 2024 09:55:43 UTC from darknetuser


2024-01-09 18:24 from IGnatius T Foobar
Oh, and one other thing: offsite backups. When I ran the servers at home, I backed them up to the data center. When I ran them at the data center, I backed them up at home. With only one location I will need a B-site. I am thinking maybe a single board computer with a biggish disk and drop it on the VPN from a family member's home or something.




My poor man's strategy used to be to have two NAS machines, one in a remote internet-less location, and another local, next to the machines to back up.


I would backup my servers automatically to the local NAS and then swap both NASes once a month. It is far from perfect because you risk the offsite backup being too old if you have a local disaster that takes both the servers and the local NAS, but budget was kind of limited.

If I were serious about backups and couldn't use a remote location for storage, I'd probably get a Backblaze plan. Their business tier is cheaper than running your own backup storage.

[#] Wed Jan 10 2024 12:14:13 UTC from Nurb432


I do the same, actually: an offline device at the office. Every 2 weeks when I go in, I bring the drive home, back up the stuff I care about, and take it back the next day. It's in a different county, so it's pretty storm-safe too. But since you are going to drop it at family and have a full-time network, I'd just create a cluster, add it to the cluster, and move the one node down the street... instant replication/backup.

And if you use PVE, using Ceph is brain-dead easy, though it needs a 2nd drive on each machine. Plus you have an extra server's worth of CPU if you need to test something; just push the VM across.

Or use their backup solution and do a daily run or something on its own, and make the remote site only a backup install. I do like their backup solution, but for the little bit I do here, I stopped dedicating a machine to it.

Not pushing PVE of course, it just makes stuff like this pretty easy. (But either way, if both sides have bandwidth, I'd consider a Ceph cluster regardless of how it's managed; a rough sketch of the PVE version is below.)
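Roughly, assuming three nodes already joined into a PVE cluster and a spare disk on each (I'm using /dev/sdb as a stand-in; the exact pveceph subcommands can vary a bit between PVE versions, so check the docs for yours):

pveceph install                        # on every node: install the Ceph packages
pveceph init --network 10.0.0.0/24     # once: set the Ceph network (example subnet)
pveceph mon create                     # on every node: create a monitor
pveceph osd create /dev/sdb            # on every node: turn the spare disk into an OSD
pveceph pool create vmdata             # once: a pool you can add as VM/CT storage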

Wed Jan 10 2024 04:55:43 EST from darknetuser

I would backup my servers automatically to the local NAS and then swap both NASes once a month. It is far from perfect because you risk the offsite backup being too old if you have a local disaster that takes both the servers and the local NAS, but budget was kind of limited.

 



[#] Wed Jan 10 2024 15:26:57 UTC from IGnatius T Foobar


I looked at the NAS boxes and they're really too "simplified" for my needs.
I am warming up to the idea of building a cluster of mini PCs. eBay has a ton of them, often at good prices and in "lot of 3/4/5" which seems like a good way to buy a cluster.

But it's got to be faster than what I have now, and I don't know how to tell.
The machine I'm using now is a decade old, and its CPUs are "Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz". I know that's a server processor, but I don't know how it compares to today's "Core" processors that appear in mini PCs.

This cluster has to run www.citadel.org, uncensored.citadel.org, code.citadel.org, and a bit of software development. It needs to be faster than what I currently have. Could someone who knows CPUs better than I do provide some advice?
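One rough way to compare, if anyone wants to humor me (events per second here is only a crude proxy for real workloads, but it makes generational gaps obvious; the thread counts are just examples):

sysbench cpu --threads=1 run          # single-core speed
sysbench cpu --threads=$(nproc) run   # all cores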

[#] Wed Jan 10 2024 16:26:39 UTC from Nurb432


I may not be the best example, but I'm happy with my E3-1275Ls in those tinys. They seem to be the perfect compromise between speed, power draw, and size. Just wish I could toss more RAM at them.

I also have 2 full-size machines with an E5-2620. Inexpensive since they are a bit older tech, and they do fly, but again, full-size monsters. The last one I got has 3 M.2 slots along with 4 SATA ports, though one of the M.2 slots is for WiFi (no onboard video, though).

A thought: I could set one of the tinys up like we did that RK box if you want, to see if it's worth pursuing or not. You won't see video performance (it's not gamer stuff), but you don't care about that anyway.



[#] Wed Jan 10 2024 22:56:28 UTC from IGnatius T Foobar



I am beginning to suspect that maybe the reason my existing servers perform so poorly is not because of the CPU but because of the disk. Each physical node has 4 x ST91000640SS (1 TB, 7200 RPM) in a RAID 10 configuration. Seagate describes it as a "near line" drive.

Can't complain; I got the server for free, and it's been flawless. But I think the disk subsystem might be its bottleneck. I am not sure. I'm not using a real benchmarking tool, so that's the next thing to try, I guess.
The only reason I noticed at all is because last summer I built a load testing client as part of the Citadel database overhaul we did, and I noticed that the little NanoPi R6 absolutely smokes my server in the data center. It's not even close. But the NPi has an M.2 and the big server has this array of near line drives.

I have to get this right, and I cannot spend a lot of money, not on equipment, not on power. Time to go find some real benchmarking software.
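One candidate for "real benchmarking software" is fio; a minimal random read/write run would look something like this (the file name, size, and runtime are just example values):

fio --name=randrw --filename=fio-test --size=1G --rw=randrw --bs=4k \
    --ioengine=libaio --iodepth=16 --runtime=60 --time_based --group_reporting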

[#] Wed Jan 10 2024 23:21:05 UTC from Nurb432


Oooo yes. If you still use spinny disks for anything other than offline backup these days you will be taking a huge hit. It's amazing how much, really. (Same for eMMC... it's awful.)

This summer I dug out a bunch of 10k 4 TB disks from my parts closet while looking for something else. "Hey, I have 6 ports in my server and a huge PSU now that is mostly idling, and these are fast... this would be cool." Scared up the cables and such, put it all in... and even though they weren't boot drives, the thing came to a crawl.



[#] Wed Jan 10 2024 23:26:21 UTC from IGnatius T Foobar



OMFG.

I installed `sysbench` and ran `sysbench fileio --file-test-mode=rndrw run` on all available systems. The disk subsystem in my data center server is ABSOLUTE GARBAGE. Even the Raspberry Pi with a MicroSD card beat it.

Here are the results. Skip if you're not interested.

Machine #1: hwt-b (SuperMicro with multiple Xeon CPUs, 4x ST91000640SS (7200 RPM spinny) in RAID 10)
-----------------
File operations: 22.72 reads/s, 15.14 writes/s, 58.31 fsyncs/s
Throughput: read 0.35 MiB/s, write 0.24 MiB/s

Machine #2: franklin (NanoPi R6C with ARM64, 1x onboard NVMe)
--------------------
File operations: 918.85 reads/s, 612.56 writes/s, 1968.06 fsyncs/s
Throughput: read 14.36 MiB/s, write 9.57 MiB/s

Machine #3: unicorn (Raspberry Pi 4 with ARM64, 1x class 10 MicroSD)
-------------------
File operations: 399.89 reads/s, 266.59 writes/s, 863.24 fsyncs/s
Throughput: read 6.25 MiB/s, write 4.17 MiB/s

Machine #4: pegasus (Desktop PC with AMD Ryzen, 1x SATA SSD)
-------------------
File operations: 531.31 reads/s, 354.20 writes/s, 1140.92 fsyncs/s
Throughput: read 8.30 MiB/s, write 5.53 MiB/s



It's pretty clear that my workloads are disk bound on the old servers. Now I feel better about moving to smaller equipment; all I have to do is get some disks that are even remotely decent and it'll be an improvement. NVMe is the clear winner, but even a consumer grade SATA SSD is a 20x improvement.

If any of you want to post some results with your equipment, please run these commands, post the results, and tell me what you ran it on.
mkdir benchmark
cd benchmark
sysbench fileio prepare
sysbench fileio --file-test-mode=rndrw run
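If you do, note that `sysbench fileio prepare` drops a set of test files in the current directory (about 2 GiB total by default, adjustable with `--file-total-size`, if I'm reading the docs right), and `sysbench fileio cleanup` removes them when you're done.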

[#] Wed Jan 10 2024 23:50:50 UTC from darknetuser


I have to get this right, and I cannot spend a lot of money, not on equipment, not on power. Time to go find some real benchmarking software.

I feel so so so close to spamming related articles from ADMIN Magazine.


Honestly, for storage you can make do with a 2-core piss-poor CPU and still keep it at idle most of the time. Suffice it to say I have reduced the clock frequency of my storage servers to generate less heat.
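On Linux that is just a couple of cpupower commands (assuming the cpufreq driver exposes frequency limits; the 1.2 GHz cap is only an example):

cpupower frequency-info               # show governors and the available range
cpupower frequency-set -g powersave   # favor low clocks
cpupower frequency-set -u 1.2GHz      # cap the max frequency (example value)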

Also, here you have an old ADMIN article about debugging I/O performance that is already out from behind the paywall:

https://www.admin-magazine.com/Archive/2021/64/When-I-O-workloads-don-t-perform

XD

[#] Thu Jan 11 2024 00:37:43 UTC from Nurb432


Ouch.

Wed Jan 10 2024 18:26:21 EST from IGnatius T Foobar

OMFG.


 



[#] Thu Jan 11 2024 00:39:26 UTC from Nurb432


And a topic for another room, but it's interesting how we all chose our machine names (unless you are boring and use a serial number or something).


