On Thu, 2011-04-21 at 20:16 +0200, Kenni Lund wrote:
> 2011/4/21 Ian Forde <ianforde@gmail.com>:
>> Turns out that wasn't the only problem I faced in my migration. With two KVM servers, both sharing a volume mounted via NFS for the VMs, I migrated all VMs to the second node, upgraded the first, then moved them all back to KVM1. Instant disk corruption on all VMs. Boom.
> Are you sure it was the migration and not the raw/qcow2 error which caused the disk corruption?
In the second pair of KVM servers, I'd made the changes to the XML files and restarted libvirtd, then migrated a VM and watched the corruption happen. It's possible I needed to reboot the VM before migrating, so that KVM absolutely knew what format the image was. But nevertheless, I'm now a little gunshy about live migration...
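For reference, the change in question was pinning the driver type in each guest's domain XML, something along these lines (the path, device names, and guest name here are just placeholders, not my exact configs):

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/vmstore/guest01.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>

followed by a "service libvirtd restart" on each host, and then something like:

    virsh migrate --live guest01 qemu+ssh://kvm2/system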
> I just had two Windows servers with image corruption after upgrading from 5.5 to 5.6 and booting them for the first time with the raw setting, before changing it to qcow2 :-/
> These two images were both on the same host, which is plain CentOS 5 *BUT* with a 2.6.37 kernel (and therefore the 2.6.37 KVM module) from elrepo...
> It could be that my special case of running a vanilla KVM module with the CentOS KVM userspace is what allows the corruption to happen, but if other people are seeing disk corruption with the regular kernel/kmod-kvm, then this "known issue" should probably have a big fat red warning in the release notes...
Yeah. I completely agree. I've got a steaming mess of VMs that I now have to go and rebuild...
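For anyone else cleaning up after this: before booting anything again, it's probably worth confirming what format each image actually is and that it matches the type= in the domain XML. Something like (path is just an example):

    qemu-img info /vmstore/guest01.img

will report the actual on-disk format of the file.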
-I