[#] Tue Apr 20 2021 12:58:50 EDT from Nurb432


I have restored smaller VMs (~100 GB) and it did not take long at all. I did not time it, but it was not a real issue. Sure, if I copied it off onto removable media and waxed the backup on the server, there is copy time to get it back on, but the actual restore was trivial. But it would be easy to do a few performance tests to see how yours does.

Might also consider having a shared drive between the 2 hosts. 


Tue Apr 20 2021 09:31:14 EDT from ParanoidDelusions

But a full backup of the VM would entail a full restore - which is time-consuming, right?
 
Maybe I didn't describe it right. 

I've got a production server, my BBS. On another physical machine running Proxmox, I built a VM test server, and I restored a backup of just Citadel from the production machine to that VM. I want to use it as a test server... with the intent of doing things that will almost certainly break Citadel, and maybe the OS inside the VM. If I totally bork it - wouldn't it be easier to just have a clone/snapshot on Proxmox than to rebuild/restore a bungled VM? I want to go back to a point-in-time image from *before* I messed it up - not *fix* whatever I did. 

I mean... I'm coming from a world of Citrix - where the idea of having snapshots is that if something goes wrong with the server, snapshots make it easier to back up *and* restore than traditional backups. Rather than restoring a backup, you just blow away the VM that has gone bad and bring the new image up in its place. It makes new deployments easier too: you snapshot the baseline build, then just bring the clone/snapshot up and make the changes for it to be another system. Does this concept work differently in Proxmox than on other VM platforms? 

I was thinking about other network logistics while lying awake in bed. Right now my production server is on bare metal, dual-homed: a wired NIC going to the ACE router, and WiFi connected to my own internal network (which goes out to the Internet over my residential ISP).  

The nature of a VM means that if I want to host the production server on Proxmox, the Proxmox server has to have a route to the ACE Cisco router... so this means... bear with me here... 


The physical NIC would go to ACE, through the Cisco, and needs to have an IP address from the pool they gave me. The VM would have a virtualized NIC that would also be on the subnet assigned by ACE. So virtualizing Citadel in a Proxmox VM is instantly going to consume two of the five IP addresses that ACE assigns: one for the physical NIC, one for the VM. 

Now, if I still want the Proxmox interface visible on the internal network - that would simply entail having a second physical NIC on the Proxmox host, hooked up to my internal network switch and assigned an internal network IP address. I wouldn't be able to connect *directly* to the VM through the internal network - I'd instead access the internal IP address of the Proxmox interface, then launch the VM console there, and that would give me console access to the VM. 

The IP addressing scheme gets somewhat complex once you add a VM host that you want accessible from both the public and the internal network. I don't understand exactly how Proxmox handles address translation from physical NICs to VMs and virtual NICs. I'm not there yet... the whole VM is really a test box right now. But if I get familiar with it, I think it will be better if I run the actual production BBS in a VM - maybe even with a second Proxmox node and replication from one node to the other. 

Right now, all of this is academic. I've just made an image that approximates my production server as a VM on Proxmox on my internal network - I've managed to back up my prod Citadel and restore it on the VM - and I want to test AppImage installs on that server. I don't want to back this server up; everything on it is disposable. But if something goes wrong, I want it to be as easy as possible to restore to the point in time before I screwed up. Doing a backup and restore of THAT seems like the hard way to do it with a VM? 

Tue Apr 20 2021 07:21:25 EDT from Nurb432

A full backup will do that for you.  No need to clone it.  If you are really paranoid, copy it off to a drive or something. 

 

 

Tue Apr 20 2021 00:58:10 EDT from ParanoidDelusions

Well, I've got the VM up and running off the Easy Install using a copy of my production BBS db - but I had to revert to the Easy Install in order to get it up and running. 

I'm stoked, though. This is good forward progress. I'm starting to think about this a little more methodically now. 


Now that I have one good VM running a copy of the production BBS on Proxmox... I can just clone it and do my testing on the clone, right? That way, if I bungle anything, I just delete the cloned VM, re-clone it from the good source VM, and save myself a bunch of effort. 

 

Mon Apr 19 2021 23:07:56 EDT from ParanoidDelusions

And once I get this all sorted out, at the very least I'll be able to back up live from my prod Citadel to a VM - and be able to restore if something ever goes wrong. I'd rather have it running in prod on a VM - but that is probably going to take figuring out adding a third USB NIC, or a WiFi adapter that Proxmox supports, to the machine and just disabling the onboard Intel NIC. 

[#] Tue Apr 20 2021 13:33:10 EDT from ParanoidDelusions


Well, for right now, figuring out how to back up the bare-metal prod server and restore it on a VM is a major step forward. Until now I had just been buying an additional 240GB SSD and cloning the production server's SSD with a USB hardware drive cloner. That worked, and was only about $30 a backup - but it wasn't really sustainable. Now I can do more routine backups and save them as archives on my NAS. 

But really, I'm not interested in capturing every last change with incremental backups. If I can restore back a week - or even a month - that should be good enough for my users. 

All of this is really about making my administration as hassle-free as possible. If something goes wrong, I want to be able to just go back to the last time things *weren't* wrong. Preserving the last bit of data entered between the last good image and the minute things went fubar isn't a major concern. 
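
For what it's worth, those routine backups are just vzdump archives, so a manual run from the node's shell would look roughly like the sketch below - the storage name "nas-backups" and VM ID 100 are placeholders, assuming the NAS has already been defined as a Proxmox storage that accepts backups:

# back up VM 100 to the NAS-backed storage, taking a live snapshot and compressing with zstd
vzdump 100 --storage nas-backups --mode snapshot --compress zstd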

 

Tue Apr 20 2021 12:58:50 EDT from Nurb432

I have restored smaller VMs (~100 GB) and it did not take long at all. I did not time it, but it was not a real issue. Sure, if I copied it off onto removable media and waxed the backup on the server, there is copy time to get it back on, but the actual restore was trivial. But it would be easy to do a few performance tests to see how yours does.

Might also consider having a shared drive between the 2 hosts. 

[#] Tue Apr 20 2021 15:02:22 EDT from Nurb432


I think you can schedule Proxmox backups. I'll have to check. Since I am the paranoid type, I end up putting them on an NFS share on my desktop, then copy that off onto an external drive via USB for an extra copy, which admittedly is slow. I used to have a separate NAS for this, but it was just wasting resources so I dropped it. But all the servers in the farm have access to the NFS share on my desktop. Because of the bandwidth to my desktop I DON'T use it to store running VMs - it's just for easy access in/out of backups. 

I don't need to back up my VMs often (some never really need it beyond the initial install, like my OpenVPN server or OSgrid region), so a manual button push is good for me.
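
If the NFS share on the desktop is registered as a Proxmox storage with the "backup" content type, both scheduled jobs and the manual button push can target it directly. A rough sketch - the storage name, server address, and export path are all placeholders:

# register the desktop's NFS export as a backup target on the PVE node
pvesm add nfs desktop-nfs --server 192.168.1.50 --export /srv/vm-backups --content backup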



[#] Tue Apr 20 2021 15:03:23 EDT from Nurb432


I also keep forgetting they have a separate backup server offering now (Proxmox Backup Server) that is integrated with PVE. It's fairly recent.



[#] Tue Apr 20 2021 17:09:27 EDT from Nurb432


Well, for some real numbers: on one of my tiny i5 boxes using a local SSD, a 200 GB VM took about 12 minutes to back up and about 20 to do a "traditional" restore. I'd never tried it on these little boxes before, only attaching an existing VM disk I copied over as a new VM, which is of course almost instant. The restore also taxed the CPU - I got a few alerts. 

 

Of course, taking snapshots is nearly instant, but rolling back large ones would take a while.
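
For completeness, the snapshot workflow being described is just a pair of qm commands, assuming the VM's disks sit on snapshot-capable storage (LVM-thin, ZFS, or qcow2 on a directory store); VM ID 100 and the snapshot name are placeholders:

qm snapshot 100 pre-test --description "known-good state before experimenting"
# ...do the risky changes in the guest...
qm rollback 100 pre-test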



[#] Wed Apr 21 2021 01:09:58 EDT from ParanoidDelusions


I spent tonight fighting with the Intel 8260. I was able to get it assigned and recognized, get a DHCP lease from the AP, and I could actually SSH into the terminal on the IP address assigned to it - but that was it. 

So then I remembered I had a USB-to-Ethernet adapter I'd never used, and hooked it up. It pretty much plugged and played - an ASIX AX88179 - and I figured the original problem was the wireless/NAT issues Linux has, keeping things from working with the Intel card. So I got this all set up and responding to a ping, but I can't even SSH into the address I assigned it. 

So I figure at this point it's PEBKAC/OE and that I don't really understand the way Proxmox handles physical NICs vs. Linux bridges.
I just want two physical NICs, one internal and one external, to lock it down so that incoming requests from OUTSIDE to the management interface are blocked and you can only get to the VMs, while from the inside I can get to both the management console and the VMs. I don't care if they're both wired or if one is wireless. 

And I don't understand why they didn't design the software so that the admin can multihome the box, have two different networks expose *everything* on it, and then use firewall rules to limit what each network exposes. That seems *perfectly* logical to me - but it doesn't appear to be how it works. 
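
For reference, the usual way to get that split on Proxmox is one Linux bridge per physical NIC, with the host's management IP on the internal bridge only; the host firewall can then additionally block the web GUI port (8006) and SSH from the outside. A rough /etc/network/interfaces sketch, using the interface names from this thread and placeholder addresses:

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0
# internal bridge: carries the management GUI/SSH address

auto vmbr1
iface vmbr1 inet manual
        bridge-ports enx000ec6d3dade
        bridge-stp off
        bridge-fd 0
# external bridge: no host IP here, only guest virtual NICs attach to it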



[#] Wed Apr 21 2021 01:14:29 EDT from ParanoidDelusions


So, the Linux bridge I have set up holds the address that the admin console is reached on.
The actual physical NICs have no IP address assigned to them. I mean, they do - if I run ip addr from the console, it looks like the IP address assigned to the Linux bridge is the IP assigned to enp2s0. But in the management console it doesn't present that way. If I open Edit properties on enp2s0, the IP address is blank - if I open Edit properties on vmbr0, that is where the IPv4 address shows. 


So, I thought - well, the bridge port on that is enp2s0, so maybe I have to add enx000ec6d3dade (the USB ethernet) as a bridge port too. 


But this doesn't make sense to me. That should just add the SAME network to both NICs. Even if that would work, that doesn't achieve what I want. 

[#] Wed Apr 21 2021 15:26:50 EDT from ParanoidDelusions


I think a post on the Proxmox Reddit support page may have me straightened out. Going to try a live test tonight. 

Wed Apr 21 2021 01:14:29 EDT from ParanoidDelusions

So, the Linux bridge I have set up holds the address that the admin console is reached on.
The actual physical NICs have no IP address assigned to them. I mean, they do - if I run ip addr from the console, it looks like the IP address assigned to the Linux bridge is the IP assigned to enp2s0. But in the management console it doesn't present that way. If I open Edit properties on enp2s0, the IP address is blank - if I open Edit properties on vmbr0, that is where the IPv4 address shows. 


So, I thought - well, the bridge port on that is enp2s0, so maybe I have to add enx000ec6d3dade (the USB ethernet) as a bridge port too. 


But this doesn't make sense to me. That should just add the SAME network to both NICs. Even if that would work, that doesn't achieve what I want. 

[#] Wed Apr 21 2021 22:14:06 EDT from ParanoidDelusions


Well, not quite - but in my usual manner, I stumbled around until I kind of pieced it together once the documentation turned out to be piss poor. 

The networking of the node and guests isn't well described - and they really have their own terminology within the VM world, which is confusing to boot. 

But I *might* even be able to get WiFi working at this point. 

Here is what I figured out. I left the default network device and Linux bridge intact as the host management interface. Almost every other VM solution would refer to the Linux bridge as a virtual adapter or something like that, but theirs works differently. So anyhow... those sit on the physical NIC. But the physical NIC (the network device) doesn't even really need a gateway, I guess - I'm not sure it even needs an IP address. All of that goes on the Linux bridge, it seems. 

So then I created another network device entry for the USB-to-Ethernet adapter, and another bridge for it. Then you go into the guest and configure its IP address... but it still wasn't working. 

Then I figured out I had to go to the GUEST and tell it, under Network Device, which Linux bridge to use - and that bridge had to have the right network device defined as its "port/slave". It is a *little* convoluted. 

And ta-da. Now you can get to the clone of my BBS on the external network - but the management console is only accessible through the internal one. Exactly what I wanted to achieve. 

Now I need some time to assimilate what I've done and figure out my next steps. This opens up a lot of possibilities. 
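
In command form, that last step amounts to pointing the guest's virtual NIC at the right bridge - a sketch where VM ID 100 and the bridge name vmbr1 (the external bridge) are placeholders:

# attach the guest's first virtual NIC to the external bridge
qm set 100 --net0 virtio,bridge=vmbr1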



Wed Apr 21 2021 15:26:50 EDT from ParanoidDelusions

I think a post on the Proxmox Reddit support page may have me straightened out. Going to try a live test tonight. 

Wed Apr 21 2021 01:14:29 EDT from ParanoidDelusions

So, the Linux bridge I have set up holds the address that the admin console is reached on.
The actual physical NICs have no IP address assigned to them. I mean, they do - if I run ip addr from the console, it looks like the IP address assigned to the Linux bridge is the IP assigned to enp2s0. But in the management console it doesn't present that way. If I open Edit properties on enp2s0, the IP address is blank - if I open Edit properties on vmbr0, that is where the IPv4 address shows. 


So, I thought - well, the bridge port on that is enp2s0, so maybe I have to add enx000ec6d3dade (the USB ethernet) as a bridge port too. 


But this doesn't make sense to me. That should just add the SAME network to both NICs. Even if that would work, that doesn't achieve what I want. 

[#] Thu Apr 22 2021 09:33:43 EDT from IGnatius T Foobar


It's been a while since I've used Proxmox, so I don't know how they have things set up nowadays.  And that's part of the thing -- once you know your way around QEMU/KVM and LVM and all those other tools, you don't really need the crutches that Proxmox provides.

Anyway ... you use different types of backups for different reasons.  If you're making a local backup just to protect against operator error or data corruption ("Oh boy! Are we going to do something dangerous?" --Floyd the robot) then a thin clone is fine, and if your filesystem supports it then you should definitely do it:

cp --reflink=always virtualdisk.qcow2 virtualdisk-backup.qcow2

A copy to a different physical disk, perhaps in a different location, should be used to guard against physical loss of the drive.
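
One caveat worth flagging: --reflink=always only succeeds on a copy-on-write-capable filesystem such as Btrfs or XFS built with reflink support; on something like ext4 the cp will simply refuse. A quick check of what the storage directory sits on (the path is just an example - /var/lib/vz is the default Proxmox directory storage):

# print the filesystem type backing the default directory storage
stat -f -c %T /var/lib/vz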



[#] Thu Apr 22 2021 09:57:54 EDT from Nurb432


I guess the important part is success at the end, not the road taken to get there.  



[#] Thu Apr 22 2021 10:27:31 EDT from ParanoidDelusions


Yeah, I just made a clone. I figured I might as well stumble around and learn more things. This being kind of a test lab machine, it's where I need to get comfortable - even if it means starting over. 
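
For the record, the clone route is a one-liner from the node's shell - a sketch with placeholder VM IDs (100 for the known-good source, 101 for the scratch copy), where --full makes an independent copy rather than a linked clone:

# make a full, independent copy of the known-good VM to experiment on
qm clone 100 101 --name citadel-scratch --full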

I'm a Windows guy. I like my crutches. The weird thing is, I think part of the problem with Linux is that so many of you are so used to doing things *without* the crutches that you often can't tell the inexperienced how to get the most out of using them. I got a lot of really complex, involved answers that required a lot of multi-discipline knowledge to achieve this, and a lot of doubt that it could be achieved the way I was approaching it. I got it to work, and it didn't require the in-depth, multi-discipline knowledge the other ways would have needed. "Easier" is relative to the depth of your broad knowledge here. 

 


Thu Apr 22 2021 09:33:43 EDT from IGnatius T Foobar

It's been a while since I've used Proxmox, so I don't know how they have things set up nowadays.  And that's part of the thing -- once you know your way around QEMU/KVM and LVM and all those other tools, you don't really need the crutches that Proxmox provides.

Anyway ... you use different types of backups for different reasons.  If you're making a local backup just to protect against operator error or data corruption ("Oh boy! Are we going to do something dangerous?" --Floyd the robot) then a thin clone is fine, and if your filesystem supports it then you should definitely do it:

cp --reflink=always virtualdisk.qcow2 virtualdisk-backup.qcow2

A copy to a different physical disk, perhaps in a different location, should be used to guard against physical loss of the drive.

[#] Thu Apr 22 2021 14:17:20 EDT from IGnatius T Foobar


Well yes, that's kind of exactly what Proxmox is for.

[#] Thu Apr 22 2021 19:51:21 EDT from ASCII Express


operator error or data corruption ("Oh boy! Are we going to do
something dangerous?" --Floyd the robot) then a thin clone is fine,

I just had to jump in and say that I loved the Planetfall reference at this point in the discussion.

[#] Thu Apr 22 2021 22:26:53 EDT from ParanoidDelusions


So, tonight's challenge was trying to use Clonezilla to make a P2V of the physical machine to a VM. 

Dead ends in every direction. The physical machine is UEFI. Proxmox sets VMs up for BIOS by default - and needs a special disk for OVMF/UEFI: "You need to add an EFI disk for storing the EFI settings. See the online help for details." 

So... 

At this point I'm not sure what is easier - to just Clonezilla the physical machine to a VM and set up UEFI, to rsync and then restore to a VM, or some third choice. 

But I'm out of time for this weekend. 

BIOS and UEFI

In order to properly emulate a computer, QEMU needs to use a firmware. Which, on common PCs often known as BIOS or (U)EFI, is executed as one of the first steps when booting a VM. It is responsible for doing basic hardware initialization and for providing an interface to the firmware and hardware for the operating system. By default QEMU uses SeaBIOS for this, which is an open-source, x86 BIOS implementation. SeaBIOS is a good choice for most standard setups.

There are, however, some scenarios in which a BIOS is not a good firmware to boot from, e.g. if you want to do VGA passthrough. [12] In such cases, you should rather use OVMF, which is an open-source UEFI implementation. [13]

If you want to use OVMF, there are several things to consider:

In order to save things like the boot order, there needs to be an EFI Disk. This disk will be included in backups and snapshots, and there can only be one.

You can create such a disk with the following command:

qm set <vmid> -efidisk0 <storage>:1,format=<format>

Where <storage> is the storage where you want to have the disk, and <format> is a format which the storage supports. Alternatively, you can create such a disk through the web interface with Add → EFI Disk in the hardware section of a VM.

When using OVMF with a virtual display (without VGA passthrough), you need to set the client resolution in the OVMF menu(which you can reach with a press of the ESC button during boot), or you have to choose SPICE as the display type.
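
Putting the pieces from that excerpt together, switching a VM to UEFI boot looks roughly like the following - VM ID 101 and the local-lvm storage are placeholders, and the guest disk itself still needs a GPT layout with an EFI System Partition, which is exactly where a BIOS-mode P2V image will trip up:

# switch the VM's firmware from the default SeaBIOS to OVMF (UEFI)
qm set 101 --bios ovmf
# add the small EFI vars disk described in the quoted documentation
qm set 101 --efidisk0 local-lvm:1,format=raw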

[#] Fri Apr 23 2021 06:57:59 EDT from Nurb432


I have never got that to work. 

Others have, but I'm in the same boat and stopped trying years ago.   

Thu Apr 22 2021 22:26:53 EDT from ParanoidDelusions

So, tonight's challenge was trying to use Clonezilla to make a P2V of the physical machine to a VM. 

[#] Fri Apr 23 2021 09:47:58 EDT from IGnatius T Foobar


I just had to jump in and say that I loved the Planetfall reference at


I'm just glad someone got it! :)

[#] Fri Apr 23 2021 09:58:14 EDT from IGnatius T Foobar


Regarding BIOS vs. UEFI ... I can definitely tell you that QEMU/KVM supports both. If you want to P2V a machine that's booting in BIOS mode, you can set your virtual machine to BIOS mode, and it ought to do the right thing.

But yes, if you want to run a virtual machine in UEFI mode, then you need to have a properly formatted EFI System Partition on a drive partitioned with GPT (not MBR). This is true regardless of what guest OS you're running (Linux or Windows).
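
A quick way to tell which camp the physical source machine is in before doing the P2V: run the check below on the source box. If the efi directory exists in sysfs, it booted via UEFI and the VM will want OVMF plus an ESP on a GPT disk; if not, leaving the VM on the default SeaBIOS should do the right thing.

# on the physical machine being converted:
[ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via legacy BIOS"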

I've been trying to force myself to build everything in UEFI lately (but in my data centers we use VMware, not QEMU/KVM). It really shouldn't be that hard after all this time, but it's still a minefield. Lots of little things can make the system not boot.

[#] Fri Apr 23 2021 10:14:41 EDT from Nurb432


I remember using VMware's P2V tools years ago, back when I was still doing that stuff. That always seemed to work when rebuilding wasn't an easy option. But there were often long-term stability issues, from what I remember. 

I think now our VMware team won't allow it - you must rebuild.  

[#] Sat Apr 24 2021 12:11:36 EDT from IGnatius T Foobar


VMware Converter worked pretty well for us, but these days we don't use it much, because there's nothing left to convert. New installations have been "born" virtual for over a decade now. There's also the small matter that anything that old is probably running an operating system you aren't willing to support anyway.

Cronezirra worked really well for Linux, not so much for Windows.
