[CentOS-virt] Virtualisation, guests cached memory and density

Fri Feb 8 18:29:02 UTC 2013
Steve Thompson <smt at vgersoft.com>

On Fri, 8 Feb 2013, Karanbir Singh wrote:

> On 02/08/2013 05:20 PM, Steve Thompson wrote:
>> On Fri, 8 Feb 2013, Karanbir Singh wrote:
>>> Xen, because of the way it works, will always get to higher density /
performance than KVM when density and reasonable performance are on the
>>> plate.
>> My experience is the exact opposite.
> Do tell more..

I have about 5-ish years Xen experience and 2 years with KVM, covering 
several hundred different VM's. I switched from Xen to KVM a few weeks 
after trying KVM for the first time (so my Xen experience is two years out 
of date). All Linux VM's are PV with virtio; Windows uses virtio also. 
Bridged networking.
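For reference, the virtio-plus-bridge setup described above typically looks like this in a libvirt guest definition (a hedged sketch; the device names vda and br0 and the image path are illustrative assumptions, not taken from this post):

```xml
<!-- virtio disk: the guest sees a paravirtualized block device -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
<!-- bridged networking with a virtio NIC -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```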

One example: the physical host was a Dell PE2900 with 8 cores and 24 GB 
memory, running (now) CentOS 5.9. I wished to run 38 VM's on this, with 
the guest O/S being various CentOS versions and Windows XP, 2003 and 7. I 
could never get 30 or more VM's to start under Xen. It did not matter in 
which order I started the 30 VM's; the 30th machine always (no matter 
which one it was) failed to boot. There were periodic strange failures 
with the Xen guests, and accurate timekeeping was always a problem.

I switched to KVM just to see what the fuss was about. Using the same disk 
images as input, I had the 30 VM's up and running without fuss in less 
than 2 hours. I went on to run the whole 38 in short order. I had no 
issues with KVM, and to this day I have several physical hosts with about 
75 guests, and I have never had a single problem with KVM (really). 
Timekeeping does not appear to be a problem.

One of my workloads consists primarily of builds of large software 
packages, so it is a heavy fork() load. Performance of the guests, 
measured in terms of both build time and network performance, has been so 
much better under KVM than under Xen that it's not even funny. I posted on 
this some time ago.
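For a sense of what a "heavy fork() load" means here, a minimal sketch (not the author's benchmark; the count of 200 is an arbitrary illustrative number) that just spawns many short-lived processes, which is the basic pattern a large parallel build produces:

```shell
# Hedged micro-sketch of a fork()-heavy workload: spawn 200 short-lived
# processes in quick succession, as a build does with compilers, shells
# and linkers. Each /bin/true is a fork()+exec()+exit cycle.
count=0
for i in $(seq 1 200); do
    /bin/true && count=$((count + 1))
done
echo "spawned and reaped ${count} processes"
```

On a virtualized guest, the cost of each of those process-creation cycles is where hypervisor overhead shows up.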

At the time of my last Xen experience, the memory assigned to all active 
guests had to fit simultaneously in the host's physical memory, so that 
provided an upper limit. With KVM, the guest's memory is pageable, and so 
this limit goes away (unless, in a practical sense, the guests are all 
active simultaneously, which is not true for any of my workloads).
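As a back-of-the-envelope illustration of that limit (the 24 GB host is from the example above; the per-guest allocation of 1024 MB is an assumption for illustration only):

```shell
# Hedged sketch: why a resident-memory requirement caps density under
# Xen (as it behaved at the time) but not under KVM. Per-guest size is
# an assumed figure; the 24 GB host matches the PE2900 example.
host_mem=24576                 # host RAM in MB (24 GB)
guest_mem=1024                 # assumed allocation per guest, in MB
guests=38
total=$((guest_mem * guests))  # memory requested by all guests together
echo "guests request ${total} MB on a ${host_mem} MB host"
if [ "$total" -gt "$host_mem" ]; then
    # Xen (then) needed all of this resident at once, so startup fails;
    # KVM treats guest memory as pageable process memory, so idle
    # guests can be paged out and the total may exceed physical RAM.
    echo "overcommitted: relies on guest memory being pageable"
fi
```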

I see the ability to run "top" as a normal user on a KVM host and see what 
the guests are up to as a big advantage. Sure, one can run xentop on Xen, 
but only if you have root access.

Xen hosts have to run a Xen-enabled kernel; not so with KVM.

I typed this off the top of my head, so I'm sure I missed a bunch of