[#] Tue Apr 20 2021 12:58:50 EDT from Nurb432

[Reply] [ReplyQuoted] [Headers] [Print]

I have restored smaller VMs (100 GB) and it did not take long at all. I did not time it, but it was not a real issue. Sure, if I copied it off onto removable media and waxed the backup on the server, there is copy time to get it back on, but the actual restore was trivial. It would be easy to do a few performance tests to see how yours does.
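Timing such a test is a one-liner on the Proxmox host. A sketch, assuming VM ID 100 and storage names `local`/`local-lvm`; all three are placeholders for your own setup:

```shell
# Time a full backup of VM 100 (snapshot mode keeps the guest running).
time vzdump 100 --storage local --mode snapshot --compress zstd

# Time a restore into a NEW VM ID (101), leaving the original untouched.
time qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst 101 --storage local-lvm
```

These commands only run on a Proxmox VE host; the dump path shown is the default for `local` storage.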

Tue Apr 20 2021 09:31:14 EDT from ParanoidDelusions

But a full backup of the VM would entail a full restore - which is time-consuming, right?
 
Maybe I didn't describe it right. 

I've got a production server, my BBS. On another physical machine running Proxmox, I built a VM test server. I then restored a backup of just Citadel from the production machine to the test machine. I want to use the VM as a test server... with the intent of doing things that will almost certainly break Citadel, maybe the VM's OS. If I totally bork it - wouldn't it be easier to just have a clone/snapshot on Proxmox than to rebuild/restore a bungled VM? I want to go back to a point-in-time image *before* I messed it up - not *fix* whatever I did. 

I mean... I'm coming from a world of Citrix - where the idea is that if something goes wrong with a server, snapshots make it easier to back up *and* restore than traditional backups. Rather than restoring a backup, you just blow out the VM that has gone bad and bring the new image up in its place. It makes new deployments easier too: you snapshot the baseline build, then just bring the clone/snapshot up and make the changes for it to be another system. Does this concept work differently in Proxmox than on other VM platforms? 
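For what it's worth, that Citrix-style workflow maps directly onto Proxmox's `qm` CLI. A sketch, with the VM ID (100) and snapshot name as placeholders:

```shell
# Capture a known-good point-in-time snapshot before experimenting.
qm snapshot 100 pre-test --description "baseline before appimage testing"

# ...experiment, break things inside the guest...

# Roll the whole VM back to the snapshot.
qm rollback 100 pre-test

# Inspect or clean up snapshots when done.
qm listsnapshot 100
qm delsnapshot 100 pre-test
```

The same operations are available per-VM in the web GUI under the Snapshots tab.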

I was thinking about other network logistics while lying awake in bed. Right now my production server is on bare metal, dual-homed: a wired NIC going to the ACE router, and the WiFi connected to my own internal network (which goes out to the Internet over my residential ISP).  

The nature of a VM would mean that if I want to host the production server on Proxmox, the Proxmox server has to have a route to the ACE Cisco server... so this means... bear with me here... 


The physical NIC would go to ACE, through the Cisco, and needs to have an IP address from the pool they gave me. The VM would have a virtualized NIC that would also be on the subnet pool assigned by ACE. So, virtualizing Citadel in a Proxmox VM instantly consumes two of the five IP addresses that ACE assigns: one for the physical NIC, one for the VM. 

Now, if I still want the Proxmox interface visible on the internal network - that would simply entail having a second physical NIC on the Proxmox host, hooked up to my internal network switch and assigned an internal network IP address. I wouldn't be able to connect *directly* to the VM through the internal network - I'd instead access the internal IP address of the Proxmox interface, then launch the VM console there, and that would give me console access to the VM. 

The IP address scheme of this gets somewhat complex once you add a VM host that you want accessible from both the public and internal networks. I don't understand exactly how Proxmox handles address translation from physical NICs to VMs and virtual NICs. I'm not there yet... the whole VM is really a test box right now. But if I get familiar with it, I think it will be better if I run the actual production BBS in a VM - maybe even with a second Proxmox node and replication from one node to the other. 

Right now, all of this is academic. I've made an image that approximates my production server as a VM on Proxmox on my internal network, and I've managed to back up my prod Citadel and restore it on the VM - and I want to test appimage installs on that server. I don't want to back this server up; everything on it is disposable. But if something goes wrong, I want it to be as easy as possible to restore to the point in time before I screwed up. Doing a backup and restore for THAT seems like the difficult way, with a VM? 

Tue Apr 20 2021 07:21:25 EDT from Nurb432

A full backup will do that for you.  No need to clone it.  If you are really paranoid, copy it off to a drive or something. 

Tue Apr 20 2021 00:58:10 EDT from ParanoidDelusions

Well, I've got the VM up and running off the Easy Install using a copy of my production BBS db - but I had to revert to the Easy Install in order to get it up and running. 

I'm stoked, though. This is good forward progress. I'm starting to think about this a little more methodically now. 


Now that I have one good VM running a copy of the production BBS on Proxmox... I can just clone it, and do my testing on the clone, right? That way, if I bungle anything, I just delete the cloned VM, and reclone it from the good source VM, and save myself a bunch of effort. 
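That clone-and-discard loop looks like this on the CLI. A sketch, with the VM IDs (100 as the good source, 101 as the test clone) and the name as placeholders:

```shell
# Full (independent) clone of the good source VM 100 into test VM 101.
qm clone 100 101 --name citadel-test --full

# Bungled it? Throw the clone away and reclone from the good source.
qm destroy 101
qm clone 100 101 --name citadel-test --full
```

A linked clone (omitting `--full`) is faster and smaller, but ties the clone to the source's base image.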

 

Mon Apr 19 2021 23:07:56 EDT from ParanoidDelusions

And once I get this all sorted out, at the very least I'll be able to back up live from my prod Citadel to a VM - and be able to restore if something ever goes wrong. I'd rather have it running in prod on a VM - but that is probably going to take figuring out adding a third USB NIC or WiFi adapter that Proxmox supports to the machine and just disabling the onboard Intel NIC. 

[#] Tue Apr 20 2021 13:33:10 EDT from ParanoidDelusions

[Reply] [ReplyQuoted] [Headers] [Print]

Well, for right now, figuring out how to back up the bare metal prod server and restore it on a VM is a major step forward. I had been buying an additional 240 GB SSD and cloning the production server's SSD using a USB hardware drive cloner. That worked, and was only about $30 a backup - but it wasn't really sustainable. Now I can do more routine backups and save the archives on my NAS. 

But really, I'm not interested in capturing every last change with incremental backups. If I can restore back a week - or even a month - that should be good enough for my users. 

All of this is really about making my administration as hassle-free as possible. If something goes wrong, I want to be able to just go back to the last time things *weren't* wrong. Preserving the last bit of data entered between the last good image and the minute things went fubar shouldn't be a major concern. 
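Routine archives to a NAS can be done with NFS-backed storage plus `vzdump`. A sketch; the storage name, server address, and export path are all placeholders:

```shell
# Register the NAS as backup storage on the Proxmox host.
pvesm add nfs nas-backup --server 192.168.1.20 --export /volume1/proxmox --content backup

# Push a full archive of VM 100 to the NAS (snapshot mode keeps it running).
vzdump 100 --storage nas-backup --mode snapshot --compress zstd
```

Once registered, the NAS storage also shows up as a target in the GUI backup dialog.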

 

Tue Apr 20 2021 12:58:50 EDT from Nurb432

I have restored smaller VMs (100 GB) and it did not take long at all. I did not time it, but it was not a real issue. Sure, if I copied it off onto removable media and waxed the backup on the server, there is copy time to get it back on, but the actual restore was trivial. It would be easy to do a few performance tests to see how yours does.

Might also consider having a shared drive between the two hosts. 
[#] Tue Apr 20 2021 15:02:22 EDT from Nurb432

[Reply] [ReplyQuoted] [Headers] [Print]

I think you can schedule Proxmox backups. I'll have to check. Since I am the paranoid type, I end up putting them on an NFS share on my desktop, then I copy that off onto an external drive via USB for an extra copy, which admittedly is slow. I used to have a separate NAS for this, but it was just wasting resources, so I dropped it. But all the servers in the farm have access to the NFS share on my desktop. Due to the bandwidth to my desktop, I DON'T use it to store running VMs - it's just for easy access in/out of backups. 

I don't need to back up my VMs often (some never really need it beyond the initial install, like my OpenVPN server or OSgrid region), so a manual button push is good for me.
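Proxmox does support scheduled backups: jobs created under Datacenter > Backup in the GUI are stored in /etc/pve/vzdump.cron in ordinary cron syntax. A sketch of what such an entry looks like (the VM ID, storage name, and schedule are placeholders):

```text
# /etc/pve/vzdump.cron - normally managed via Datacenter > Backup
# min hour day month weekday user command
0 3 * * 6 root vzdump 100 --storage nas-backup --mode snapshot --compress zstd --quiet 1
```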



[#] Tue Apr 20 2021 15:03:23 EDT from Nurb432

[Reply] [ReplyQuoted] [Headers] [Print]

I also keep forgetting they have a separate backup server offering now that is integrated with PVE. It's fairly recent.



[#] Tue Apr 20 2021 17:09:27 EDT from Nurb432

[Reply] [ReplyQuoted] [Headers] [Print]

Well, for some real numbers: on one of my tiny i5 boxes using a local SSD, a 200 GB VM took about 12 minutes to back up and about 20 to do a 'traditional' restore. I'd never tried it on these little boxes before, only attaching an existing VM disk I copied over as a new VM, which is of course almost instant. It also taxed the CPU on restore - I got a few alerts. 

 

Of course, creating snapshots is nearly instant, but trying to roll back large ones would take a while.



[#] Wed Apr 21 2021 01:09:58 EDT from ParanoidDelusions

[Reply] [ReplyQuoted] [Headers] [Print]

I spent tonight fighting with the Intel 8260. I was able to get it assigned, recognized, getting a DHCP lease from the AP, and I could actually SSH into the terminal on the IP address assigned to it - but that was it. 

So then I remembered I had a USB-to-Ethernet adapter I'd never used, and hooked it up. It pretty much plugged and played - an ASIX AX88179 - and I figured the problem was the known wireless/NAT issues on Linux causing things not to work with the Intel card. So I got this all set up, responding to a ping, but I can't even SSH into the address I assigned it. 

So, I figure at this point it's PEBKAC and that I don't really understand how Proxmox handles physical NICs vs. Linux bridges.
I just want two physical NICs, one internal and one external - locked down so that incoming requests from OUTSIDE to the management interface are blocked and only the VMs are reachable, while from inside I can get to both the management console and the VMs. I don't care if they're both wired or if one is wireless. 

And I don't understand why they didn't design the software so that the admin can multihome the box, have both networks expose *everything* on it, and then use firewall rules to limit what each network exposes. This seems *perfectly* logical to me. But it doesn't seem to be how it works. 
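The two-network split described here can be expressed in /etc/network/interfaces as two bridges, with the management IP living only on the internal one. A sketch; the addresses are placeholders, and only the interface names already mentioned in the thread (enp2s0 and the enx000ec6d3dade USB adapter) are taken from the actual setup:

```text
# Internal bridge: carries the Proxmox management IP (onboard NIC).
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0

# External bridge: no host IP at all; VMs attached here sit on the ACE-side network.
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enx000ec6d3dade
    bridge-stp off
    bridge-fd 0
```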



[#] Wed Apr 21 2021 01:14:29 EDT from ParanoidDelusions

[Reply] [ReplyQuoted] [Headers] [Print]

So, the Linux bridge I have set up holds the address the admin console is reached on.
The actual physical NICs have no IP address assigned to them. I mean, they do - if I run ip addr from the console, I see the IP address assigned in the Linux bridge listed against enp2s0. But the management console doesn't present it that way. If I open Edit properties on enp2s0, the IP address is blank - if I open Edit properties on vmbr0, that is where the IPv4 address shows. 

So I thought - well, the bridge port on that is enp2s0, so maybe I have to add enx000ec6d3dade (the USB Ethernet adapter) as a bridge port too. 

But this doesn't make sense to me. That would just put the SAME network on both NICs. Even if that worked, it wouldn't achieve what I want. 

[#] Wed Apr 21 2021 15:26:50 EDT from ParanoidDelusions

[Reply] [ReplyQuoted] [Headers] [Print]

I think a post on the Proxmox Reddit support page may have me straightened out. Going to try a live test tonight. 

Wed Apr 21 2021 01:14:29 EDT from ParanoidDelusions

So, the Linux bridge I have set up holds the address the admin console is reached on.
The actual physical NICs have no IP address assigned to them. I mean, they do - if I run ip addr from the console, I see the IP address assigned in the Linux bridge listed against enp2s0. But the management console doesn't present it that way. If I open Edit properties on enp2s0, the IP address is blank - if I open Edit properties on vmbr0, that is where the IPv4 address shows. 

So I thought - well, the bridge port on that is enp2s0, so maybe I have to add enx000ec6d3dade (the USB Ethernet adapter) as a bridge port too. 

But this doesn't make sense to me. That would just put the SAME network on both NICs. Even if that worked, it wouldn't achieve what I want. 

[#] Wed Apr 21 2021 22:14:06 EDT from ParanoidDelusions

[Reply] [ReplyQuoted] [Headers] [Print]

Well, not quite - but in my usual manner, I stumbled around until I kind of pieced it together, because the documentation turned out to be piss-poor. 

The networking of the node and guests isn't well described - and they really have their own terminology within the VM world that is confusing to boot. 

But I *might* even be able to get WiFi working at this point. 

Here is what I figured out. I left the default network device and Linux bridge intact as the host management interface. Almost every other VM solution would refer to the Linux bridge as a virtual adapter or something like that, but Proxmox's works differently. So anyhow... those are on the physical NIC. But the physical NIC (the network device) doesn't even really need a gateway, I guess. I'm not sure it even needs an IP address. All of that goes on the Linux bridge, it seems. 

So then I created another network device entry for the USB-to-Ethernet adapter, and another bridge for it. Then you go into the guest and configure its IP address... but it still wasn't working. 

Then I figured out I had to go to the GUEST and tell it, under Network Device, which Linux bridge to use - and that bridge had to have the right network device defined as a "port/slave". It is a *little* convoluted. 

And ta-da! Now you can get to the clone of my BBS on the external network - but the management console is only accessible through the internal one. Exactly what I wanted to achieve. 
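The guest-side step described above can also be done from the host CLI. A sketch, with the VM ID (100) as a placeholder and vmbr1 standing in for whatever the external bridge is named:

```shell
# Attach the guest's first virtual NIC to the external bridge.
qm set 100 --net0 virtio,bridge=vmbr1
```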

Now I need some time to assimilate what I've done and figure out my next steps. This opens up a lot of possibilities. 



Wed Apr 21 2021 15:26:50 EDT from ParanoidDelusions

I think a post on the Proxmox Reddit support page may have me straightened out. Going to try a live test tonight. 
