Hello,
We all know Linux distros will use most of the free memory available for caching, I/O buffers, etc. (hence sites like linuxatemyram.com). In many virtualisation scenarios we have some sort of memory deduplication mechanism, such as KSM on KVM (I don't know what it's called in VMware).
My situation is one where I need to cram as many virtual machines as I can onto a single hypervisor, and I believe KSM does not work for memory used as cache/buffers. How could I work around this? Any other pointers for achieving higher density are welcome (please don't suggest container technology).
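For reference, on a KVM host KSM is driven by the ksmd kernel thread and tuned through /sys/kernel/mm/ksm (CentOS also ships a ksmtuned service that adjusts these knobs dynamically). A minimal sketch in Python, run as root; the tuning values below are illustrative assumptions, not recommendations:

    # Sketch: enable KSM and tune the scanner via sysfs (run as root).
    KSM = "/sys/kernel/mm/ksm"

    def write_knob(name, value):
        with open("%s/%s" % (KSM, name), "w") as f:
            f.write(str(value))

    def read_knob(name):
        with open("%s/%s" % (KSM, name)) as f:
            return f.read().strip()

    write_knob("run", 1)               # start ksmd
    write_knob("pages_to_scan", 1000)  # pages per wake-up (assumed value)
    write_knob("sleep_millisecs", 50)  # delay between scans (assumed value)

    # pages_sharing vs. pages_shared gives a rough deduplication ratio
    print("pages_shared:  %s" % read_knob("pages_shared"))
    print("pages_sharing: %s" % read_knob("pages_sharing"))

Note that KSM only merges anonymous pages an application has registered with madvise(MADV_MERGEABLE) (qemu-kvm does this for guest RAM), which is why the host's page cache and buffers are not deduplicated.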
On 02/08/2013 11:26 AM, Nux! wrote:
How could I work around this? Any other pointers for achieving higher density are welcome (please don't suggest container technology).
Xen, because of the way it works, will always achieve higher density/performance than KVM when density and reasonable performance are both on the table.
Regards
- KB
On 08.02.2013 11:46, Karanbir Singh wrote:
On 02/08/2013 11:26 AM, Nux! wrote:
How could I work around this? Any other pointers for achieving higher density are welcome (please don't suggest container technology).
Xen, because of the way it works, will always achieve higher density/performance than KVM when density and reasonable performance are both on the table.
Does this apply to HVM or just paravirt? I would need HVM (I need to run FreeBSD).
Hey
On 02/08/2013 12:04 PM, Nux! wrote:
Xen, because of the way it works, will always achieve higher density/performance than KVM when density and reasonable performance are both on the table.
Does this apply to HVM or just paravirt? I would need HVM (I need to run FreeBSD).
KSM will only work with KVM guests. However, Xen has something called Tmem that ultimately targets similar goals; it works for PV guests as-is, but needs a Tmem-capable kernel in the HVM guest if they are to use it as well.
- KB
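For PV guests one can sanity-check Tmem from inside the guest. A minimal Python sketch; the module names below are assumptions, since the in-tree Xen tmem driver has appeared under different names across kernel versions:

    import os

    # Sketch: does this guest kernel look Tmem-capable?
    # The module names checked are assumptions; naming varies by kernel.
    def cmdline_has_tmem():
        with open("/proc/cmdline") as f:
            return "tmem" in f.read().split()

    def tmem_module_loaded():
        return any(os.path.isdir("/sys/module/%s" % name)
                   for name in ("tmem", "xen_tmem"))

    if cmdline_has_tmem() or tmem_module_loaded():
        print("Tmem appears to be enabled in this guest")
    else:
        print("no sign of Tmem support in this guest")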
On 08.02.2013 12:13, Karanbir Singh wrote:
Hey
On 02/08/2013 12:04 PM, Nux! wrote:
Xen, because of the way it works, will always achieve higher density/performance than KVM when density and reasonable performance are both on the table.
Does this apply to HVM or just paravirt? I would need HVM (I need to run FreeBSD).
KSM will only work with KVM guests. However, Xen has something called Tmem that ultimately targets similar goals; it works for PV guests as-is, but needs a Tmem-capable kernel in the HVM guest if they are to use it as well.
And FreeBSD doesn't look like it has it ... Thanks for the tip though, very handy.
On 02/08/2013 12:18 PM, Nux! wrote:
And FreeBSD doesn't look like it has it ... Thanks for the tip though, very handy.
That's odd, are you sure? Remember that FreeBSD runs as a PV guest in Xen.
- KB
On 02/08/2013 05:20 PM, Steve Thompson wrote:
On Fri, 8 Feb 2013, Karanbir Singh wrote:
Xen, because of the way it works, will always achieve higher density/performance than KVM when density and reasonable performance are both on the table.
My experience is the exact opposite.
Do tell more..
- KB
On Fri, 8 Feb 2013, Karanbir Singh wrote:
On 02/08/2013 05:20 PM, Steve Thompson wrote:
On Fri, 8 Feb 2013, Karanbir Singh wrote:
Xen, because of the way it works, will always achieve higher density/performance than KVM when density and reasonable performance are both on the table.
My experience is the exact opposite.
Do tell more..
I have about five years of Xen experience and two years with KVM, covering several hundred different VMs. I switched from Xen to KVM a few weeks after trying KVM for the first time (so my Xen experience is two years out of date). All Linux VMs are paravirtualized with virtio; Windows uses virtio too. Bridged networking.
One example: the physical host was a Dell PE2900 with 8 cores and 24 GB of memory, running (now) CentOS 5.9. I wished to run 38 VMs on it, with the guest OSes being various CentOS versions plus Windows XP, 2003 and 7. I could never get 30 or more VMs to start under Xen: no matter in which order I started them, the 30th machine (whichever one it was) always failed to boot. There were periodic strange failures with the Xen guests, and accurate timekeeping was always a problem.
I switched to KVM just to see what the fuss was about. Using the same disk images as input, I had the 30 VMs up and running without fuss in less than two hours, and went on to run the whole 38 in short order. I have had no issues with KVM; to this day I run several physical hosts with about 75 guests and have never had a single problem (really). Timekeeping does not appear to be problematic.
One of my workloads consists primarily of builds of large software packages, so it is a heavy fork() load. Performance of the guests, measured in terms of both build time and network throughput, has been so much better under KVM than under Xen that it's not even funny. I posted about this some time ago.
At the time of my last Xen experience, the memory assigned to all active guests had to fit simultaneously in the host's physical memory, which imposed an upper limit on density. With KVM, guest memory is pageable, so that limit goes away (unless the guests are all genuinely active at once, which is not the case for any of my workloads).
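The flip side of overcommitting is occasionally wanting to claw memory back from idle guests, which libvirt's balloon interface can do. A minimal sketch with libvirt-python; the domain name "guest01" and the halving target are placeholders:

    import libvirt

    # Sketch: shrink an idle guest's balloon target.
    # setMemory() takes KiB and adjusts the balloon, not the maximum
    # allocation; the guest needs a working virtio balloon driver.
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("guest01")     # placeholder name

    state, max_kib, cur_kib, vcpus, cpu_ns = dom.info()
    print("current: %d MiB, max: %d MiB" % (cur_kib // 1024, max_kib // 1024))

    dom.setMemory(max_kib // 2)            # illustrative target: half of max
    conn.close()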
I see the ability to run "top" as a normal user on a KVM host and see what the guests are up to as a big advantage. Sure, one can run xentop on Xen, but only if you have root access.
Xen hosts have to run a Xen-enabled kernel; not so with KVM.
I typed this off the top of my head, so I'm sure I missed a bunch of things.
Steve
I see the ability to run "top" as a normal user on a KVM host and see what the guests are up to as a big advantage. Sure, one can run xentop on Xen, but only if you have root access.
If you have at least read-only access to libvirt, virt-top is nice for getting more detail on guest stats...
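In script form, a read-only connection is enough to pull the same per-guest numbers. A minimal sketch with libvirt-python:

    import libvirt

    # Sketch: list running guests and their vCPU/memory usage over a
    # read-only connection -- no root needed, only read access to libvirt.
    conn = libvirt.openReadOnly("qemu:///system")

    for dom_id in conn.listDomainsID():    # IDs of running domains
        dom = conn.lookupByID(dom_id)
        state, max_kib, cur_kib, vcpus, cpu_ns = dom.info()
        print("%-20s %2d vCPUs %6d MiB" % (dom.name(), vcpus, cur_kib // 1024))

    conn.close()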
Have you looked at KVM on a C6 host yet? It's a marked improvement over C5 hosts...
On Fri, 8 Feb 2013, James Hogarth wrote:
Have you looked at KVM on a C6 host yet? It's a marked improvement over C5 hosts...
Yes, I have a Samba4 domain controller running as a KVM guest on a CentOS 6.3 host (two of them w/DRS, actually). Runs like a champ.
I also have two LVS-DR load balancers running as CentOS 6.3 KVM guests with keepalived w/VRRP, one on a C5.8 host and one on a C6.3 host. Each guest has three network interfaces, as does the host (bridged mode with dual bonded interfaces underneath). Services are LDAP, Windows remote desktop, HTTP, webmail, IMAP and SMTP. Everything works fine for all services except SMTP, which works on C5 but not on C6 (same guest setup, same realservers): it loses connections when sending mail messages larger than about 10 MB. Even setting rp_filter=2 does not help. I have not pinned this one down yet, but I doubt KVM is responsible, since it works on C5.
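For anyone who wants to poke at the same knob: rp_filter is settable per interface under /proc/sys/net/ipv4/conf, and the effective mode is the maximum of the "all" entry and the per-interface entry. A minimal sketch, run as root; the interface name "eth0" is a placeholder:

    import os

    # Sketch: inspect and set reverse-path filtering (2 = loose mode).
    CONF = "/proc/sys/net/ipv4/conf"

    def get_rp_filter(iface):
        with open("%s/%s/rp_filter" % (CONF, iface)) as f:
            return int(f.read())

    def set_rp_filter(iface, mode):
        with open("%s/%s/rp_filter" % (CONF, iface), "w") as f:
            f.write(str(mode))

    for iface in sorted(os.listdir(CONF)):
        print("%-12s %d" % (iface, get_rp_filter(iface)))

    set_rp_filter("eth0", 2)   # placeholder interface; requires root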
As a point of interest, the Windows RDP service feeds 31 Windows XP virtio realservers, which are themselves KVM guests on a Dell R710 (8 physical cores + hyperthreading, 48GB) running CentOS 5.9 (soon to be 6.3). Runs most excellently.
Steve