[CentOS] Virtualization platform choice

Mon Mar 28 07:00:55 UTC 2011
Pasi Kärkkäinen <pasik at iki.fi>

On Sun, Mar 27, 2011 at 09:41:04AM -0400, Steve Thompson wrote:
> 
> The slightly longer story...
> 
> First. With Xen I was never able to start more than 30 guests at one time 
> with any success; the 31st guest always failed to boot or crashed during 
> booting, no matter which guest I chose as the 31st. With KVM I chose to 
> add more guests to see if it could be done, with the result that I now 
> have 36 guests running simultaneously.
> 

Hmm.. I think I've seen that earlier. I *think* it was some trivial
thing to fix, like increasing the number of available loop devices or so.
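If it really is the loop device limit (just a guess on my part; el5 defaults
to only 8 loop devices, and file-backed Xen guests each consume them), raising
the `max_loop` module parameter would look something like this (64 is an
arbitrary example value):

```
# /etc/modprobe.conf -- allow more loop devices than the default 8
options loop max_loop=64
```

The loop module has to be reloaded (or the host rebooted) for the new limit
to take effect.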

> Second. I was never able to keep a Windows 7 guest running under Xen for 
> more than a few days at a time without a BSOD. I haven't seen a single 
> crash under KVM.
> 

Hmm.. Windows 7 might be too new for the Xen 3.1 in el5,
so for Win7 upgrading to Xen 3.4 or 4.x helps.
(gitco.de has newer Xen RPMs for el5 if you're OK with third-party RPMs).

Then again, the Xen in el5.6 might have fixes for Win7, IIRC.

> Third. I was never able to successfully complete a PXE-based installation 
> under Xen. No problems with KVM.
> 

That's weird. I do that often. What was the problem?

> Fourth. My main work load consists of a series of builds of a package of 
> about 1100 source files and about 500 KLOC's; all C and C++. Here are the 
> elapsed times (min:sec) to build the package on a CentOS 5 guest (1 vcpu), 
> each time with the guest being the only active guest (although the others 
> were running). Sources come from NFS, and targets are written to NFS, with 
> the host being the NFS server.
> 
> * Xen HVM guest (no pv drivers): 29:30
> * KVM guest, no virtio drivers: 23:52
> * KVM guest, with virtio: 14:38
> 

Can you post more info about the benchmark? How many vcpus did the VMs have?
How much memory? Were the VMs 32-bit or 64-bit?

Did you try Xen HVM with PV drivers?
I've been planning to run some benchmarks myself as well, so just curious.
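For quick comparison, the quoted build times work out to roughly a 2x speedup
for KVM+virtio over plain Xen HVM. A throwaway sketch (the times are the ones
posted above; the labels are mine):

```python
# Convert the quoted build times (min:sec) to seconds and compute
# the speedup relative to the Xen HVM (no PV drivers) baseline.
def to_seconds(t):
    m, s = t.split(":")
    return int(m) * 60 + int(s)

times = {
    "Xen HVM, no PV drivers": to_seconds("29:30"),
    "KVM, no virtio":         to_seconds("23:52"),
    "KVM, with virtio":       to_seconds("14:38"),
}

baseline = times["Xen HVM, no PV drivers"]
for name, secs in times.items():
    print(f"{name}: {secs} s ({baseline / secs:.2f}x vs Xen HVM)")
# -> KVM with virtio comes out at about 2.02x the Xen HVM baseline.
```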

> Fifth: I love being able to run top/iostat/etc on the host and see just 
> what the hardware is really up to, and to be able to overcommit memory.
> 

"xm top" and iostat in dom0 works well for me :)

-- Pasi