On 10/23/2011 04:14 AM, Eric Shubert wrote:
On 10/22/2011 06:00 PM, Dennis Jacobfeuerborn wrote:
Running iostat shows no I/O activity for /dev/vdb while the tests are running, which explains the insane numbers. The question is why I get such different results when both devices are defined in exactly the same way?
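One way to take caching out of the picture is to force direct I/O and watch the device stats at the same time; a rough sketch, with the mount point and sizes as placeholders:

    # write 1 GB with the page cache bypassed (O_DIRECT)
    dd if=/dev/zero of=/mnt/data/testfile bs=1M count=1024 oflag=direct

    # in a second terminal: extended per-device stats, once per second
    iostat -x 1

If vdb still shows no activity during a direct write like that, the data is not reaching the block layer at all.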
I'm not very familiar with KVM yet (got my first real lesson today), but I notice you said: "In the guest the drives are running using virtio with type=raw and cache=none." Are these KVM settings, or did you use kernel parameters on the guest machine?
I'm using the settings in virt-manager. This is what my disk definition looks like in XML:
...
  <disk type='file' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <source file='/mnt/backup01/libvirt/images/gw1.img'/>
    <target dev='vda' bus='virtio'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
  </disk>
  <disk type='file' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <source file='/mnt/backup02/libvirt/images/gw1-data.img'/>
    <target dev='vdb' bus='virtio'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
  </disk>
...
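To make sure the running domain actually picked these settings up, the live definition and the qemu command line can be compared; "gw1" as the domain name is only an assumption here:

    # disk sections of the live domain definition
    virsh dumpxml gw1 | grep -A4 '<disk'

    # the qemu process arguments should show cache=none for both drives
    ps -ef | grep [q]emu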
Also, what about the elevator (I/O scheduler) in the guest? On a VMware Server 2 host (on CentOS, so I'm not far OT) it's best to use the elevator=noop parameter. I wouldn't expect the elevator to skew results quite as much as you're seeing, but what do I know? ;)
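In case it helps, the scheduler can be flipped at runtime per device, or set for good at boot (the sysfs path assumes a virtio disk named vdb):

    # show the current scheduler, then switch to noop on the fly
    cat /sys/block/vdb/queue/scheduler
    echo noop > /sys/block/vdb/queue/scheduler

    # to make it permanent, append elevator=noop to the kernel line
    # in the guest's /boot/grub/grub.conf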
At the moment I'm not really interested in optimizing performance. The plan was to toy around with KVM to see how it behaves compared to Xen, but right now things don't look too good. Given that all parameters are the same, I have no idea what could cause such completely different behavior. It's almost as if the guest treats /dev/vdb as /dev/null, but I don't know why.
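If it really acted like /dev/null, written data would simply vanish, so a quick round trip should settle it (file name and sizes are placeholders):

    # write a known pattern straight to the filesystem on vdb
    dd if=/dev/urandom of=/mnt/data/check.bin bs=1M count=16 oflag=direct
    md5sum /mnt/data/check.bin

    # drop the guest's caches and read it back; the checksums should match
    echo 3 > /proc/sys/vm/drop_caches
    md5sum /mnt/data/check.bin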
Regards, Dennis