I have 2 similar servers. Since upgrading one from CentOS 5.5 to 6,
disk write performance in kvm guest VMs is much worse.
Philip Durbin wrote:
Nice post, Julian. It generated some feedback at
http://irclog.perlgeek.de/crimsonfu/2012-08-10 and a link to http://rhsummit.files.wordpress.com/2012/03/wagner_network_perf.pdf
Phil
Thanks Phil for linking to my post on #crimsonfu, and reporting the result back here.
In response to the points jimi_c raised there:
I don't see his test numbers for different caching options
All the test figures I gave are using cache=writethrough. cache=writeback produced much better figures than even the host (about 180 MB/s), because it is really writing to memory, but I don't think it's a safe option to use. cache=none produced worse figures. I didn't include those figures because I ran those tests before I started using bonnie++ (I was just timing file copies), and I'd already ruled caching out as a solution.
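For context, the cache mode is set per virtual disk. With a raw qemu-kvm invocation it looks roughly like this (the file path and memory size are placeholders, and the same setting is available as cache= on the libvirt disk driver element):

```shell
# cache=writethrough: writes go through the host page cache but are
#   reported complete only once flushed -- safe, slower.
# cache=writeback: reported complete once in the host page cache --
#   fast, but data can be lost if the host crashes.
# cache=none: O_DIRECT, bypasses the host page cache entirely.
qemu-kvm -m 1024 -drive file=/media/vm027/hda.raw,if=virtio,cache=writethrough
```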
also, is he doing deadline on the guest, host, or both?
deadline on the host - didn't try it on the guest.
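For reference, switching the host to the deadline scheduler looks something like this on RHEL/CentOS 6 (/dev/sda is a placeholder for the real device):

```shell
# Show the available schedulers for a disk; the active one is
# shown in [brackets]. sda is a placeholder.
cat /sys/block/sda/queue/scheduler
# Switch to deadline at runtime (needs root):
echo deadline > /sys/block/sda/queue/scheduler
# To persist across reboots, add "elevator=deadline" to the kernel
# line in /boot/grub/grub.conf.
```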
not sure if they implemented it yet, but they were talking about a vm host setting for tuned, and one for guests

Yeah, Wagner mentioned it in his summit presentation:
http://rhsummit.files.wordpres[…]_network_perf.pdf
they should be available in rhel 6.3 according to his presentation
Well, tuned-adm is a gift for part-time sysadmins like myself.
Some of the guest disk write figures were close to the host's and better than CentOS 5 after running:

yum install tuned
tuned-adm profile virtual-host

...on the host, and:

yum install tuned
tuned-adm profile virtual-guest

...in the guest.
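In case it helps anyone reproduce this, the applied profile can be verified afterwards (this assumes the tuned-adm shipped with RHEL/CentOS 6):

```shell
tuned-adm list        # list the available profiles
tuned-adm active      # show the currently active profile
service tuned status  # the tuned daemon should be running
```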
Here are the new bonnie++ guest block write figures in MB/s, all using tuned-host and virtio, with and without tuned-guest. I'm not sure why there's so much variation, but at least they're all much better:
45  tuned-host
73  tuned-host
50  tuned-host + tuned-guest
37  tuned-host + tuned-guest
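For the record, the figures come from bonnie++'s sequential block-write column; a typical invocation looks something like this (directory, user, and size are placeholders - the file size should be at least twice RAM so the page cache doesn't skew the result):

```shell
# -d: directory to test in   -u: user to run as
# -s: test file size (at least 2x RAM)
# -n 0: skip the small-file creation tests
bonnie++ -d /tmp -u nobody -s 4g -n 0
```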
rhel/centos 6 probably went with a more conservative tuning option
Certainly looks that way. It'd be interesting to know what and why.
Before jimi_c provided the tuned-adm tip, I was hoping that running the VM off a block device might be the answer, i.e.:

qemu-img convert -O raw /media/vm027/hda.raw /dev/vm/vm031

...but the figures are worse than running off a raw virtual disk file:
16  tuned-host, virtio
20  tuned-host, virtio
27  tuned-host + tuned-guest, virtio
24  tuned-host + tuned-guest, virtio
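For anyone trying the same route, the full sequence is roughly the following (the volume group name and size are assumptions):

```shell
# Create a logical volume big enough for the image (names and
# sizes are placeholders), copy the raw image onto it, then point
# the guest's disk definition at the block device.
lvcreate -L 20G -n vm031 vm
qemu-img convert -O raw /media/vm027/hda.raw /dev/vm/vm031
```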
I'm not convinced; maybe there are other factors at work. I'd investigate further if my plan A weren't back on track.
The bonnie++ figures I gave before for host disk write performance were for the host *root* partition, not the LVM volumes that the guest VMs use. What if the problem is LVM, not KVM? So I did some timings comparing the root drive with LVM volumes, some with and some without tuned-host. 'archive' and 'vmxxx' are the LVM volume names. (Note: these timings were done on the host, not in a guest.)
69  /
69  /
70  /
65  /  + tuned-host
64  /  + tuned-host
55  archive
56  archive
53  vm027
48  vm027
33  vm027  + tuned-host
38  vm027  + tuned-host
53  vm022  + tuned-host
85  vm022  + tuned-host
This indicates that there is wide variation in performance between different LVM volumes, and that all the LVM volumes perform worse than root. (It's interesting that with tuned-host the figures are mostly worse, but with greater spread.) I repeated the above test on the CentOS 5 server (without tuned-host, of course) and found the same thing - LVM volumes perform worse than root and vary widely:
54  /
50  /
39  archive
45  archive
39  archive
49  vm022
34  vm027
33  vm027
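These host-side comparisons can be reproduced with a plain dd sequential write (whether that matches my exact method is an assumption; TESTDIR is a placeholder for wherever the volume under test is mounted):

```shell
# Sequential-write timing roughly comparable to the figures above.
# conv=fdatasync forces the data to disk before dd reports, so the
# host page cache doesn't inflate the MB/s figure.
TESTDIR=${TESTDIR:-/tmp}
dd if=/dev/zero of="$TESTDIR/ddtest" bs=1M count=256 conv=fdatasync
rm -f "$TESTDIR/ddtest"
```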
A slight performance hit might be expected for LVM, but I thought it was meant to be negligible. If the figures fell into two bands - good and bad - then I'd be looking for a specific problem like sector alignment, but they don't, and isn't sector alignment meant to be fixed in CentOS 6? The variation in performance suggests a problem of variable severity, like fragmentation or position on the physical disk - but I don't think either of those is a likely cause, because there's only one file in each volume, and physical disk position shouldn't have such a marked effect, should it? Any other suggestions?
Thanks, Julian