[#] Sat Apr 24 2021 14:36:16 EDT from Nurb432


I had great luck with Clonezilla for Windows PC-to-PC transfers (HD upgrades, data recovery, etc. on home machines or some client desktops that I really didn't want to reload from scratch). Multicast loading of PCs, back when we still did it via 'fat images', worked well. Just as well as Ghost.

But I agree, on the server side anyway, that ship sailed a while ago.

Sat Apr 24 2021 12:11:36 EDT from IGnatius T Foobar
VMware Converter worked pretty well for us, but these days we don't use it much, because there's nothing left to convert. New installations have been "born" virtual for over a decade now. There's also the small matter that anything that old is probably running an operating system you aren't willing to support anyway.

Cronezirra worked really well for Linux, not so much for Windows.

 



[#] Mon Apr 26 2021 23:38:21 EDT from ParanoidDelusions


So, the problem for me is that the physical machine is UEFI... but I built out Proxmox so that it only supports BIOS - and I don't really want to go all the way back to scratch. If it was mentioned in the manual when I didn't read it, I missed the significance of it until I hit the issue trying to P2V the physical machine. 


So... being lazy - now I've figured out how to rsync manually from the physical machine to the target - and I think I'll just do it that way. 

Things I can't get it to do... 

"create a public key and transfer it from the source to the target so that it will log in without a password." 

"copy from the source to the exact same path on the destination" because of permissions issues. 


I don't quite grok how to create a service/backup account with group membership that lets it write to root-owned paths - root login is disabled by default on Debian machines - and while you can sudo locally to get root access, or su locally, you can't send a sudo or su to the remote machine - so the write fails on the target. 

I created a service account and added it to the root group - but that didn't work either. 
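
For what it's worth, the fix I keep seeing suggested - a sketch, assuming a service account called 'backup' on the target that you're willing to grant passwordless sudo for rsync (and only rsync):

  # on the source: generate a key and push it to the target (one time)
  ssh-keygen -t ed25519
  ssh-copy-id backup@target

  # on the target, in /etc/sudoers.d/backup:
  #   backup ALL=(ALL) NOPASSWD: /usr/bin/rsync

  # then the remote end of rsync runs under sudo, so root-owned paths work:
  rsync -avz --rsync-path="sudo rsync" /usr/local/citadel/ backup@target:/usr/local/citadel/

The --rsync-path trick works because it's the remote rsync process that needs root, not your login shell.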


I remember in the past when Linux was like, "This is REALLY stupid and insecure, but if you think you know what you're doing, we're not even going to give you a y/n switch." 

Now, Linux is frequently "No. This is stupid, I won't let this happen. There is one way to do it... and you'll have to learn a dozen other things first - so, start surfing the web, bitch - because most servers are Apache and we look better the more web traffic we create!" 

So... for now I'm backing up /usr/local/citadel to my own directory... and then from there I'll move it with su to the /usr/local/citadel folder on the target. Then install Citadel - and I think that will get it MOSTLY working. Then I have to figure out how to assign all the accounts and put things like citadel.rc in the right place.

The nice thing is, once I'm ready to go live and move to the VM, I'll just take the live one offline, do the rsync again, move the data to where it belongs, then bring the VM live and retire the physical machine. It's a very manual, annoying process that seems unnecessarily complicated by all the modern restrictions on root - but part of it is probably that I didn't set up Citadel quite right in the first place and still don't know quite what I'm doing. 
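
The staged version I'm actually doing looks roughly like this - a sketch, with a hypothetical 'me' account on the target:

  # on the source: stage the tree into my own home directory on the target
  rsync -avz /usr/local/citadel/ me@target:~/citadel-staging/

  # then locally on the target, where sudo works:
  sudo rsync -a ~/citadel-staging/ /usr/local/citadel/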

I'm also exhausted. Vegas was Vegas. The best part was the drive there and back... which was about 45 minutes faster than the navigation estimate. 



[#] Tue Apr 27 2021 10:13:37 EDT from Nurb432


LOL ( that is one good thing about FreeBSD, 'the handbook' )

 

On Vegas: I turned down a conference there a few years ago. Not much for me there, other than perhaps walking around to see the lights in the evening (not into 'dining', gambling, strip bars, drinking, or washed-up old entertainers - what else is there?), but it was summer, and I would have died anyway if I left the building.

Three years ago I got out of going to New Orleans. It was in the winter, so I'd not die, but they would not let me drive: "You have to fly, or you pay for it *all* yourself, even the hotel and conference." Even though the previous year they let me drive to St. Louis, paid for the hotel, and gave me the lesser of the two costs of driving or flying. I was fine with that, as they paid the bulk and I got a bit of gas money too. At that point I lost interest in going, as I like driving, seeing things along the way, on my own schedule. That, and TSA sucks - small bags, schedules, cabs, etc. None of that crap if I drive. But if I backed out I'd never get to go to another one again. Then I got rear-ended at a light a week before leaving and broke a couple of bones in my neck - not enough for surgery, just a "don't be silly for about 6 months while it heals". Talked him into writing a note to get me out of flying so we could get our money back on the hotel and flight (too late for the conference - "you can use it next year"... which ended up being fluvid year, so no conference... doh!)

Mon Apr 26 2021 11:38:21 PM EDT from ParanoidDelusions

. There is one way to do it... and you'll have to learn a dozen other things first - so, start surfing the web, bitch -

*snip*

I'm also exhausted. Vegas was Vegas. The best part was the drive there and back... which was about 45 minutes faster than the navigation estimate. 



 



[#] Tue Apr 27 2021 12:48:22 EDT from ParanoidDelusions


I wouldn't break my neck to avoid Vegas - but I might feign a bad flu. :) 

I'll tell the story about the bum on the patio of Denny's mad-dogging my wife after she refused money, later. 

It was all kinds of fun. 

So, I did get /usr/local/citadel and /usr/local/ctdlsupport copied over to my home on the VM, then copied to the right path there - and ran Easy Install this morning - so I've got a real plan for getting the physical machine virtualized onto Proxmox, and Proxmox all set up the way I want. I'll probably be doing the switch later this week. I'm pretty stoked about that.

It also seems like the initial connect resolves WAY quicker on the VM than on the physical machine. Not sure what I changed in the setup or why that's true - but there's a long delay before it resolves and renders the lobby on the physical machine, and it's almost instant on the VM. 

Progress. Mostly by brute force and force of will - not any actual technical expertise or skill. 

Tue Apr 27 2021 10:13:37 EDT from Nurb432

LOL ( that is one good thing about FreeBSD, 'the handbook' )

*snip*




[#] Tue Apr 27 2021 14:38:00 EDT from Nurb432


That is often the case. 

Tue Apr 27 2021 12:48:22 PM EDT from ParanoidDelusions

Mostly by brute force and force of will 



 



[#] Wed Apr 28 2021 00:37:29 EDT from ParanoidDelusions


Got it migrated over tonight. 

I'm super stoked about getting this migrated over to the VM. I dislike that we lost some messages - that's bad for traffic and conversation - but it'll be a piece of cake to back up and restore the BBS now, to move it to other hardware if there's a hardware failure, and to expand it as necessary. I'll probably explore some of the clustering and high-availability possibilities that open up if I put up another Proxmox node and shared storage.

I've also got built-in resource and performance metrics letting me see in real time how the VM is doing. I set it up as a 1-CPU system initially, and just added a 2nd CPU. I have 8GB of memory too, but I'm pretty sure it's fairly trivial to pop it up to 16GB - these NUCs just use laptop-style memory. 

I'll have to reboot the VM to get it to use the 2nd CPU, and will probably do that at some point tonight. Evidently you can't add resources and have them recognized "on the fly" in Proxmox. I feel like things are running a little choppy in some places with only the 1 CPU allocated. 
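
For the record, the CLI equivalent of what I did in the GUI - a sketch, assuming my VM has ID 102:

  # give the VM a second core; the guest sees it after the next restart
  qm set 102 --cores 2

  # clean restart so the new core is recognized
  qm shutdown 102 && qm start 102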

There isn't a direct path to the BBS for me from the internal network. I can open a local console and connect to the Citadel from there - but the way I have Proxmox set up, the management console is dedicated and published only on my internal network, not accessible from outside, and the VM/BBS is dedicated and published only on the public network, not accessible internally. Hard to explain - but it seems to be a consequence of the way Proxmox handles networking, and it took me a while to figure out. This is different from how other hypervisors handle it. 
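
As best I can tell it all comes down to how the bridges are defined in /etc/network/interfaces on the Proxmox host - a sketch of the two-bridge layout I'm describing, with made-up NIC names and addresses:

  # vmbr0: management bridge, internal network only
  auto vmbr0
  iface vmbr0 inet static
      address 192.168.1.10/24
      gateway 192.168.1.1
      bridge-ports eno1
      bridge-stp off
      bridge-fd 0

  # vmbr1: guest bridge on the public-facing NIC, no host IP at all
  auto vmbr1
  iface vmbr1 inet manual
      bridge-ports eno2
      bridge-stp off
      bridge-fd 0

Each VM's virtual NIC attaches to one bridge or the other, so management traffic and guest traffic never share a segment unless something routes between them.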

There is a good Proxmox vs. ESXi review here:


https://www.smarthomebeginner.com/proxmox-vs-esxi/

That basically confirms most of my experiences with Proxmox - where it's superior and where it still lags behind. Mostly network configuration. For a "Networking OS", Linux as a whole takes some fairly bone-headed approaches to network configuration. If there is a hard way to do networking, Linux says "hold my beer" and makes it even more convoluted. 

But - it has its rewards if you're willing to fight with that. 

 

 



[#] Wed Apr 28 2021 09:20:13 EDT from Nurb432


I could be wrong, but I thought there was a way to add/remove some resources 'live' - but many OSs won't recognize the change until a reboot anyway, so it's sort of moot.   



[#] Wed Apr 28 2021 09:52:45 EDT from ParanoidDelusions


I think you can add it live. It's another area they've made kind of complex and ambiguous. I think what's happening is that what you're allocating are *slices*, or a percentage, of CPU. 

That's what I got from the online forums last night. Basically they divide the system up into sockets, cores, and vCPUs - but none of them is a 1:1 analog to an ACTUAL CPU. 

I'm not sure why they do it this way - but basically, I logged in and browsed the BBS while watching the Proxmox performance metrics, and CPU never exceeded about 8% utilization while showing only 1 socket, 1 core active. 


I suspect the dynamic allocation adds capacity if necessary but keeps it available for other VMs that may need it. In my case, the management console shows a max of 4 (2 sockets, 2 cores) in red below the black figure of 1 (1 socket, 1 core).

I understand how this would work in a hosted environment: you throttle each VM to what it needs at that moment, but keep a reserve pool ready to give priority to any machine that has been allocated the ability to draw from it. I'm going to try to read up on this today and see if my assumptions are right. 

Basically, I've overprovisioned the system for a Citadel. 

Wed Apr 28 2021 09:20:13 EDT from Nurb432

I could be wrong, but I thought there was a way to add/remove some resources 'live' - but many OSs won't recognize the change until a reboot anyway, so it's sort of moot.   



 



[#] Wed Apr 28 2021 14:49:36 EDT from Nurb432


Right - allocation of resources like that is always dynamic regardless of OS, I think, but I thought you meant adding actual virtual cores. Most OSs need a reboot to see those. 

 



[#] Wed Apr 28 2021 21:30:55 EDT from ParanoidDelusions


LOL. After reading their manual, I'm not sure WHAT I mean. :D 

I mean - the discussion about configuring processors was a little opaque in the documentation. I suppose it's a hard concept to convey in writing. But I get the idea that *sockets* refer to physical sockets for an actual single CPU, and exist in the VM mostly for licensing purposes; that cores are not ACTUAL cores but a slice of total processing power, without a 1:1 relation to the real cores in your machine; and that they work more like cycle-utilization limiters and priority levels - ensuring that your important VMs get the most cycles, but also that no one VM dominates the available CPU cycles to the detriment of the other guests *or* the host. The whole vCPU thing, though - I don't understand that at all. 
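
If I'm reading the docs right, the knobs that actually behave like the limiters I'm describing are cpulimit and cpuunits, not the core count - a sketch, again assuming VM 102:

  # cap the VM at one core's worth of CPU time, however many cores it shows
  qm set 102 --cpulimit 1

  # double its scheduler weight so it wins contention against default VMs
  qm set 102 --cpuunits 2048

So cores is what the guest sees, cpulimit caps what it can actually burn, and cpuunits decides who wins when the host is busy.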

Set up an NFS share on my NAS and was going to add it as a volume - but it's on a different, internal subnet, and though there is a route from the internal subnet to the subnet Proxmox is on, there isn't one from that subnet back to the internal network. I should probably just get another NAS on the same subnet if I want additional storage, anyhow. This is a happy accident, though. I understand why it's this way (and don't want to explain it) - but it gives me a little extra layer of security. The different parts of this are fairly well isolated from one another, and traffic flows in the right directions. I have more or less created a DMZ - kinda sorta by accident. 
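
What I was attempting, for reference - a sketch with made-up names and addresses; it only works if the Proxmox host can actually reach the NAS:

  # register the NFS export as a Proxmox storage target
  pvesm add nfs nas-backup --server 192.168.2.50 --export /volume1/proxmox --content backup,images

  # confirm it mounted and shows as active
  pvesm status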




Wed Apr 28 2021 14:49:36 EDT from Nurb432

Right - allocation of resources like that is always dynamic regardless of OS, I think, but I thought you meant adding actual virtual cores. Most OSs need a reboot to see those. 

 



 



[#] Fri Apr 30 2021 00:11:28 EDT from ParanoidDelusions


Nurb, maybe you can help me with this... 

I did a snapshot of the production machine - vm01 - and also made a clone of it... just to cover my bases. 
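
The CLI version of those two steps, for anyone following along - a sketch, using my VM ID and a made-up snapshot name:

  # snapshot including RAM state, so rollback resumes a running machine
  qm snapshot 102 preMigration --vmstate 1

  # full clone to a new VM ID; on LVM-thin this copies every allocated block
  qm clone 102 103 --full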

When I did the snapshot, I got a message about the local LVM thinpool, warning me that something wasn't set that would prevent the thinpool from expanding beyond the available space. 

So... I read the manual and looked through the management console. My LVM-Thin menu under pve/disks shows a single volume named "data" with usage at 49% - 817 GB total, 399 GB used - and metadata usage at 3%, with 8.36 GB available and 223 MB used. 

From the command line pvesm status shows the same thing... 

And lvs -o+lv_when_full shows: 

  lvs -o+lv_when_full
  LV                                     VG  Attr       LSize    Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert WhenFull
  data                                   pve twi-aotz-- <817.68g                    48.83  2.61                             queue
  root                                   pve -wi-ao----   96.00g
  snap_vm-102-disk-0_wohProdSnap04292021 pve Vri---tz-k  250.00g data vm-102-disk-0
  swap                                   pve -wi-ao----    7.00g
  vm-101-disk-0                          pve Vwi-a-tz--  240.00g data               30.92
  vm-102-disk-0                          pve Vwi-aotz--  250.00g data               28.97
  vm-102-state-wohProdSnap04292021       pve Vwi-a-tz--   <8.49g data               29.24
  vm-103-disk-0                          pve Vwi-a-tz--  250.00g data               100.00


I just don't get what it wants me to edit - but the gist of the message was that the thinpool is basically dynamic volumes, and that the combined drives, along with other resource utilization, can grow beyond the physically available drive space and cause data corruption on all volumes. 

That has me marginally worried. I'm not sure why you wouldn't have safeguards against that set by default if someone is using LVM-Thin. I figure right now I have around 750 GB allocated for VM drive volumes on a 950 GB drive, minus Proxmox overhead - and the volumes are using a much smaller percentage than their maximum. That is, each one is using about 30% of the... 

Wait... looking at it, vm-101-disk-0 and vm-102-disk-0 are both using about 30% of their allocated maximum size... but vm-103-disk-0, which is just a clone of vm-102, has an LSize of 250g and shows Data% at 100. When you clone a VM, it must allocate the full size of the disk at creation? 

Anyhow - do you have any idea what the warning it spat out when I made the snapshot was, and how I apply the settings it recommends, to ensure the space allocated to the system doesn't exceed the physical space available to it? 

 








[#] Fri Apr 30 2021 08:43:09 EDT from Nurb432


Since I don't normally use snapshots - I do full backups instead - I will have to take a peek and read a bit. I'll do that tonight. 

 

On clones, if I remember right, yes - it does allocate all the space on the clone. Or at least it did. I stopped using thin disks a long time ago and fully allocate all my drives. I boxed myself into a corner once, years ago, and had a hard time getting out of it. So now all my VMs are fully allocated, be they clones or not.
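
The full-backup route I mean, for what it's worth - a sketch, with a made-up storage name:

  # snapshot-mode vzdump of VM 102 to dedicated backup storage
  vzdump 102 --mode snapshot --storage nas-backup --compress zstd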



[#] Fri Apr 30 2021 09:38:59 EDT from Nurb432


I had a couple of minutes between meetings. This may not be 'it', but it does discuss updating a config file to get past the error. Of course, there is always the risk of running out of space with thin.

 

https://www.thegeekdiary.com/how-to-enable-thin-lvm-automatic-extension/
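
If it's the same fix I skimmed, the relevant bit is the autoextend settings in /etc/lvm/lvm.conf - a sketch, with the usual example values rather than a recommendation:

  # in the activation section of /etc/lvm/lvm.conf:
  # when the pool passes 80% full, grow it by 20% (100 disables autoextend)
  thin_pool_autoextend_threshold = 80
  thin_pool_autoextend_percent = 20

That only helps if the volume group still has free space for the pool to grow into, so it's a cushion, not a cure.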



[#] Fri Apr 30 2021 10:19:30 EDT from ParanoidDelusions


Heh. I like the idea of thin provisioning - but I hate the possibility of boxing myself into a difficult-to-extract-myself-from corner. :) 

And really - if I'm not hosting for external clients, what's the benefit, right? The advantage of holding more in reserve to allocate evaporates in that use model. Allocate what you need up front so it's there when you need it. This is the smoke and mirrors of everything "on-demand allocation" in the industry. 

"Pay just for the ports you need on your cisco switch, and if you need more, buy additional licenses!" 

Is another way of saying

"We're going to charge you the full price of the switch but cripple some of the ports, and if you later want to grow into those ports, we'll charge you more and activate them for you!"

Which is usually how it works with these schemes. 


That aside... I think it made the LVM-thinpool by default when I set up, and set the guest volumes to reside there by default, too. 

So, really, I guess this is version 0.1 of WallOfHate in a virtualized environment. I'm running a live test in production at this point - and what I'll do is migrate to another Proxmox server now that I understand more about how Proxmox works. 

And fix these issues. 

Man, it's getting expensive to run a cheap Citadel. :) Pretty sure at this point I could have just bought about a dozen Pi 3+ boxes with SD cards, imaged the first BBS onto 32 GB microSDs, and if one died, just pulled it out and thrown in a replacement. :) 

 

Fri Apr 30 2021 08:43:09 EDT from Nurb432

Since I don't normally use snapshots - I do full backups instead - I will have to take a peek and read a bit.

*snip*



 



[#] Fri Apr 30 2021 10:57:37 EDT from Nurb432


It's one reason I use those i5 minis... ~150 bucks to get one running. I added more to them, but at that price they were still usable.

 

If that link about editing the config file does not do it, I'll look around more.

 

Even for non-prod I always do full allocation, just to be safe, even if they are virtual desktops and not servers. Once bitten...



[#] Fri Apr 30 2021 16:06:11 EDT from ParanoidDelusions


Yeah, these are Dell OptiPlex SFF 3020 and 3040 i5 boxes - they're *awesome* for this. Almost like a blade-server enclosure. I was looking for the Lenovo mini, but this does the trick. 

So, yeah... not breaking the bank... but the more ambitious my goals get, the more I keep throwing at this project. A $15-a-month ISP/VPN, the domain registration, the six NUCs, replacing their hard drives with SSDs... a new NAS for what is turning into an "enterprise environment". :) 

I mean, it keeps getting better and better... but as I've noted over there, my traffic was *highest* when it was running on a Pi 3B+ over DDNS. I neglected it for a while, the DDNS got jacked up because a DHCP renewal didn't update... and by the time I got it all sorted, it had lost its initial momentum and community - and never recovered in a post-Trump-defeat 2021, when Conservatives are hiding and Democrats have decided to stop pretending they like their Republican friends. :) 

 

Fri Apr 30 2021 10:57:37 EDT from Nurb432

It's one reason I use those i5 minis... ~150 bucks to get one running.

*snip*



 



[#] Fri Apr 30 2021 20:42:10 EDT from Nurb432


Do let me know if that config file does not fix it; I'll take another look. May not be until Monday now, though - wife is in the hospital at least until Saturday. 



[#] Sat May 01 2021 00:17:32 EDT from ParanoidDelusions


Picked up another NAS to put on the same network as an NFS share. The idea is to copy the backup off to the NFS share, build another node and move it there, then maybe start rsyncing the CTDL directories nightly so I have a hot spare - and to clear up space on the production server. Maybe even build the target without an LVM-thinpool, get rid of the dynamic drives, and rid myself of that problem - make that node prod, fix the original prod... whatever. Give me those kinds of options. 
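
The nightly hot-spare piece would just be a cron job - a sketch, with made-up hostnames, and noting that Citadel really ought to be stopped (or at least idle) while its database files are copied:

  # root crontab on prod: mirror the Citadel tree to the spare at 3 AM
  0 3 * * * rsync -a --delete /usr/local/citadel/ backup@spare:/usr/local/citadel/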

So, I got a dual-drive one, which does a RAID 1 mirror - popped in two 2TB drives I had lying around... 

And one of the drives instantly threw a SMART error. Heh. So now I'm running the extended SMART tests. 

They're not NAS drives - they're WD Blue drives. It didn't used to matter with WHS, but with modern NAS systems I really have better luck with WD Red drives. 

But I really don't want to buy two WD Red 2TB drives for this. I want to get by cheap. 
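
The extended test is just smartmontools, for the record - a sketch, assuming the NAS gives you a shell and the drive shows up as /dev/sda:

  # kick off the long self-test, then check health and results when it's done
  smartctl -t long /dev/sda
  smartctl -a /dev/sda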

 



[#] Sat May 01 2021 07:29:27 EDT from Nurb432


Since you are not 'running' from them, and it's just backup/etc., I would also go with cheap.  

 



[#] Sat May 01 2021 10:28:29 EDT from ParanoidDelusions


Yup. I bought a decent NAS, instead of just doing a USB DAS with ext4 on it, because eventually I might want to set up clustering with high availability... 

but that would require popping a couple of SSDs in it. For now, I just want to reuse the spare mechanical drives that are otherwise lying around. 


Sat May 01 2021 07:29:27 EDT from Nurb432

Since you are not 'running' from them, and it's just backup/etc., I would also go with cheap.  

 



 


