For some of the upcoming test automation work we're doing, we need to use a tool that, unfortunately, currently only runs under Windows. Instead of dedicating an entire machine to this tool, I've been considering running it in a VM under Linux--in this case, Fedora 11. My first question is, which VM software to use?
I've used VMware Player before to run a Windows VM, but the host OS was Windows as well. Will VMware handle a Windows VM under Linux just as easily? Are there any caveats I should be aware of? What about resource footprint?
Just because I'm familiar with VMware, though, I don't want to automatically rule out other VM solutions. I've heard of KVM/QEMU and VirtualBox, and I've heard IG rave about ProxMox, but never having used any of them, I'm not sure where to begin when trying to compare them all.
If I had the time and luxury, I'd just load up a VM in each and see how it all works. No such luck, though.
The other big question I had was concerning how to interface with the VM.
For the most part, I expect that once I set everything up, it shouldn't require any maintenance. However, I still need to be able to access the VM in case there are problems. With VMware Player, the VM appeared in a window on my desktop. I'm going to be deploying this on a rack-mounted server; though it's attached to a physical console via a KVM switch, I'd prefer not to have to run a desktop session just to see a VM's display. How is this generally handled in Linux? I figure that in most instances I'd set up either a VNC server or remote desktop access inside the VM, but since the VM will be running Windows, I expect I'll still have to get at the VM's "console" from time to time.
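For what it's worth, with KVM/libvirt this is usually handled by having the hypervisor itself export the guest's console over VNC, so no desktop session is needed on the host. A minimal sketch, assuming a libvirt-managed guest (the guest name "win-testbox" and host "vmhost" are placeholders):

```shell
# Ask libvirt which VNC display the guest's console is on
virsh vncdisplay win-testbox      # a display like ":0" maps to TCP port 5900

# From your workstation, tunnel that port over SSH and attach a viewer
ssh -N -L 5900:localhost:5900 admin@vmhost &
vncviewer localhost:5900
```

This shows the guest's "physical" screen (BIOS, boot, login prompt), independent of any RDP/VNC server running inside Windows, so it keeps working even when the guest OS is wedged.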
To simply throw a couple of VMs online on Fedora, the tool you probably want to use is virt-manager.
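Getting virt-manager and the KVM/libvirt stack in place on a Fedora host looks roughly like this (the "Virtualization" group name is from Fedora-era yum repos; double-check against your release):

```shell
# Install the KVM/QEMU/libvirt stack plus the graphical manager
su -c 'yum groupinstall "Virtualization"'
# Make sure the libvirt daemon is running, and starts at boot
su -c '/sbin/service libvirtd start'
su -c '/sbin/chkconfig libvirtd on'
# Launch the GUI; it can also manage remote hosts over qemu+ssh://
virt-manager
```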
Depends on how mission-critical this is going to be. If it's mission-critical: VMware. If it's not: anything else. I'm disillusioned with some of the free/cheap/open products after dealing with the endless train of frequently subpar releases coming out of VirtualBox (which is a nice product once you get it working -- just don't upgrade it if it's working fine...)
OBTW, note that Windows isn't really licensed for use in a VM anymore (unless it's Server or something?). And I personally would hesitate to actually pay for a separate license for a Windows instance that would be permanently confined to a VM...
One alternative to that is actually Amazon EC2 which lets you spin up virtual Windows boxes on the fly, and with their agreement with Microsoft, all license fees are bundled into the instance-hour fees.
And it figures that M$ would toss a completely non-technical issue into the fray to mess things up.
Maybe this application will run under WINE.
However it is pretty clear that Microsoft is trying to stifle Windows VDI for as long as possible because they want people to buy real desktops, at least until they can figure out a way to shift the revenue into *their* cloud.
The IT managers I've spoken with all seem to say the same thing: the license people really want you to remote-access "your" physical desktop computer at the office. Windows 7 is built for that; when you remote to it, the monitor shuts off and the desktop resizes itself to the screen dimensions of the remote device.
On the other hand, if you have a Volume License agreement, they aren't going to stop you from activating licenses on virtual hardware instead of physical. What customers are asking for is a way to pool the licenses so they can be oversubscribed, paying only for the license count that's in use concurrently, but Microsoft does not allow that; they insist on receiving a full license payment for every installed copy.
Server is a different story. They know full well that every data center in the world is moving towards full virtualization, or close to it anyway, and will happily sell you licenses for that all day long.
We already use Linux to host our automated testing environment, and the vendor, in this instance, already provides stripped-down versions of their client software for Linux. It's just this one application that they don't have a Linux version of.
It's not a bad application, either. The user interface is pretty slick.
It performs all of the functions we need it to, and gives us a lot of flexibility.
For automation, though, a slick GUI is completely unnecessary. Once you strip that off, this thing really becomes a piece of middleware: it accepts commands from a client and converts them into a different set of commands to be sent off to another server. You don't need a GUI for that, and it would run equally well under Windows, Linux, MacOS, *BSD, or even DOS with the right IP drivers!
Uncensored and all citadel.org properties are running on a PVE host. I also have a six node cluster hooked up to shared storage over at the Big Blue X which we operate as a multitenant cloud.
On merit alone, PVE wins hands down. However, I also like to look at where the community is going; if it seems there will be rallying around one particular piece of software or framework, that's worth something too -- I don't want to have to manage a conversion job later on. That's why I originally went with KVM even though Xen was king at the time, and that worked out well. Right now PVE is the best, but it doesn't have widespread energy behind it. It's looking like oVirt may eventually grab that spot.
oVirt has Red Hat, IBM, and Cisco (among others) behind it. They spend a lot of time talking about "open governance" which seems to be a direct shot at the way Rackspace dictates the direction of the OpenStack project. Their message seems to be that oVirt will be the clear vendor-neutral answer to VMware vSphere.
I haven't tried oVirt yet but I plan to do a pilot project this year. From what I can tell it's not as drop-dead easy to install as PVE but it may scale better.
KVM with virt-manager is what I prefer for remote stuff; VirtualBox is nice on my desktop, since it does sound, clipboard sharing, and other nice stuff.
virt-manager lets you use more than just KVM (VirtualBox, Xen, etc.), and in combination with SASL it blends into an AD environment. That isn't especially well documented, but it works fine here. You automagically get SSO into the VNC of your VM, too. But releases might be buggy, as loanshark pointed out, and their error messages are quite on the kabbalistic side at times.
Seconded on the KVM / libvirt combo here. I prefer virsh for all the stop / start / force-reboot that damn Windows server needs. virsh provides a nice terse interface via SSH (just the way I like it)....
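For anyone who hasn't tried it, a sketch of what that looks like in practice (the host "vmhost" and guest "winxp" names are placeholders):

```shell
# Point virsh at a remote KVM host over SSH
virsh -c qemu+ssh://root@vmhost/system list --all      # show every defined guest
virsh -c qemu+ssh://root@vmhost/system start winxp     # boot a guest
virsh -c qemu+ssh://root@vmhost/system shutdown winxp  # polite ACPI shutdown
virsh -c qemu+ssh://root@vmhost/system destroy winxp   # hard power-off
```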
The benefits here will include a higher-performance display (better rendering of media, accelerated graphics, etc.) as well as remote audio, and I believe they've also got something in there for client-side storage, USB, etc. [http://spice-space.org/]
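If the guest is configured with a SPICE display, attaching to it is a one-liner with virt-viewer's remote-viewer client (host, port, and guest name below are placeholders; the port comes from the guest's libvirt config):

```shell
# Attach directly to a SPICE-enabled guest console
remote-viewer spice://vmhost:5930
# Or let virt-viewer look the display up via libvirt, by guest name
virt-viewer -c qemu+ssh://root@vmhost/system winxp
```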
Still, my vserver provider moved me from OpenVZ to Xen. I hear lots of people preferring Xen for things that need to be closer to the hardware, plus some other arguments that sounded worth considering. But since I've already forgotten them...
Anyway, my personal feeling is that Xen is dead.
What the Linux world is finding, however, is that with hardware-supported virtualization, bare-metal hypervisors don't offer any additional performance benefits anymore. That's why Linus chose KVM instead of Xen as the official hypervisor for the mainline kernel. KVM requires hardware VT, of course.
The benefit of making that decision is that all of the other supporting pieces -- memory management, disk queues etc -- not to mention device drivers -- are all provided by the existing Linux kernel; virtual machines are treated as "just another process" by the host OS, but at the same time the performance hit of running inside a virtual machine is negligible.
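As an aside, you can check whether a box has the hardware VT that KVM needs straight from the CPU flags (quick sketch; the flag is vmx on Intel, svm on AMD):

```shell
# Count CPU threads advertising hardware virtualization support;
# zero means no VT (or it's disabled in the BIOS)
grep -E -c '(vmx|svm)' /proc/cpuinfo
# Confirm the KVM modules are actually loaded on the host
lsmod | grep kvm
```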
So is Xen dead? As a commodity hypervisor, I think so. It will live on in specific places where it's highly customized.
Amazon EC2 is probably the best example; they've tuned the hell out of it and brought in some highly tweaked guest kernels so that they can fit a lot more guests on the same amount of hardware. That's the kind of place where Xen will continue to run. For the average IT/datacenter wonk doing server consolidation, it's all about KVM (and VMware) at this point.
By the way, ProxMox VE 2.0 finally came out of beta and was released last week. I haven't tried it yet but the screenshots look fabulous.
The way you put it, IG, it totally makes sense.
I got a question now myself:
I need to run two VMs (a Linux server, probably ClearOS, and definitely WinXP) on a server. Since it's mainly Windows-land out there, I need a way to manage them (restart, etc.) via a web interface or VNC/RDP. There should also be a desktop-ish, non-network way to manage them directly at the host; there will be a mouse/keyboard and TFT attached.
The site where it runs is a commercial facility, and they are the worst misers in the world, so it should be totally FOSS. Any recommendations for the underlying OS (should be a flavour of Linux/BSD) and the virtualisation software?
(Is there a windows tool for libvirtd around yet?!)
Glad to see this is one of the few places where I can actually find some true Linux discussion and not rampant fanboyism over Ubuntu and other garbage.