After updating a server to 5.6 this morning, I can no longer boot two virtual machines. One is Trixbox, which I believe is a 32-bit CentOS-based distro; the other VM is a 64-bit Windows 2008 installation.
The error I get in the virt-manager console is "FATAL: No bootable device".
Both VMs use qcow2 disk images, and I've checked the file permissions. I'm going to boot a live CD in the Trixbox VM to see if I can access anything on its qcow2 disk.
Any hints appreciated...
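[Editor's note: before reaching for a live CD, a quick sanity check can be run from the host itself. A minimal sketch, assuming the image lives in the usual /var/lib/libvirt/images directory; the path and file name here are made up:

  # confirm the image really is qcow2 and is readable by the host
  qemu-img info /var/lib/libvirt/images/trixbox.qcow2
  # newer qemu-img builds also have a "check" subcommand for qcow2 metadata,
  # though I'm not sure the one shipped with CentOS 5 includes it:
  # qemu-img check /var/lib/libvirt/images/trixbox.qcow2
]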
On Sat, Apr 9, 2011 at 9:21 AM, compdoc compdoc@hotrodpc.com wrote:
A similar incident was reported during QA. Look at the domain's .xml file. If it says type='raw', change it to type='qcow2' and restart libvirtd. Would that fix the problem?
Akemi
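[Editor's note: a minimal sketch of the change being described, assuming a guest named trixbox; the name and the exact disk stanza are illustrative, only the type= attribute on the <driver> line matters:

  # /etc/libvirt/qemu/trixbox.xml -- the disk section looks roughly like this:
  #   <disk type='file' device='disk'>
  #     <driver name='qemu' type='raw'/>      <-- change 'raw' to 'qcow2'
  #     <source file='/var/lib/libvirt/images/trixbox.qcow2'/>
  #     <target dev='hda' bus='ide'/>
  #   </disk>
  vi /etc/libvirt/qemu/trixbox.xml     # edit the type= attribute by hand
  service libvirtd restart             # make libvirtd re-read the definition
]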
Thank you. After reading your message, I googled the error and found a webpage that describes a slightly different procedure than yours, but which does the same thing:
http://ubuntuforums.org/showthread.php?t=1638708
Everything is working now.
:)
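[Editor's note: the same change can also be made through virsh instead of editing the file directly; a sketch, again with the hypothetical guest name trixbox:

  virsh edit trixbox      # opens the persistent domain XML in $EDITOR
  # change <driver name='qemu' type='raw'/> to type='qcow2', save, and quit;
  # the change takes effect the next time the guest is started
  virsh start trixbox
]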
On 04/09/2011 12:04 PM, compdoc wrote:
I am going to add this to the Release Notes for 5.6 on the Wiki now.
On Sun, 2011-04-10 at 03:47 -0500, Johnny Hughes wrote:
Turns out that wasn't the only problem I faced in my migration. With 2 KVM servers, both sharing an NFS-mounted volume for VMs, I migrated all the VMs to the second node, upgraded the first, then moved them all back to KVM1. Instant disk corruption on all the VMs. Boom.
I have a second pair of KVM servers. I tested one VM with my normal migrate-them-out-of-the-way procedure, and it, too, suffered MASSIVE filesystem corruption. This was even after I'd made the qcow2 mods and restarted libvirtd.
The only way I could avoid rebuilding the remaining non-corrupted VMs was to shut them down on one node and then bring them back up again. It turns out live migration doesn't work across this upgrade. (Though I'll test regular live migration tomorrow, now that all 4 KVM servers have been upgraded.)
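[Editor's note: in outline, the shutdown-and-restart move looks something like this. Host and guest names are made up; it assumes the guest is defined on both nodes and its disk sits on the shared NFS mount:

  # on kvm1: stop the guest and make sure it is really down
  virsh shutdown vm01
  virsh domstate vm01        # wait for "shut off"
  # on kvm2: start it again from the same shared disk image
  virsh start vm01
]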
-I
I've never used live migration, but it's my understanding that you place the VM's virtual drive file on a shared file system (as you have done with NFS), and that the server you migrate to has access to that file.
So this doesn't actually involve moving the virtual drive's file; it just hands off the running VM to another server. Is that correct? If so, I don't see how it becomes corrupted.
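[Editor's note: that is indeed how it works; with shared storage, only the guest's memory and device state are transferred between hosts, and the disk image never moves. A minimal sketch of such a migration, with hypothetical host and guest names:

  # from kvm1, hand the running guest over to kvm2;
  # the qcow2 file on the NFS export stays where it is
  virsh migrate --live vm01 qemu+ssh://kvm2/system
]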
I back up my VMs by shutting down the VM and copying the image file to other locations, and this has never resulted in file system corruption...
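[Editor's note: that cold-backup procedure is roughly the following; names and paths are invented for the example:

  virsh shutdown vm01
  virsh domstate vm01        # wait for "shut off" before touching the file
  cp /var/lib/libvirt/images/vm01.qcow2 /backup/vm01-$(date +%F).qcow2
  virsh start vm01
]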
2011/4/21 Ian Forde ianforde@gmail.com:
Are you sure it was the migration and not the raw/qcow2 error which caused the disk corruption?
I just had two Windows Servers with image corruption after upgrading from 5.5 to 5.6 and booting the first time with the raw setting, before changing it to qcow2 :-/
These two images were both on the same host, which is plain CentOS 5 *BUT* with a 2.6.37 kernel (and therefore 2.6.37 KVM module) from elrepo...
It could be my special case of running a vanilla KVM module with the CentOS KVM userspace that allows the corruption to happen, but if other people are seeing disk corruption with the regular kernel/kmod-kvm, then this "known issue" should probably have a big fat red warning in the release notes...
Best regards,
Kenni
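[Editor's note: to confirm exactly which combination a host is running, something like this should do. Output will obviously differ per machine; the package names are the stock CentOS 5 ones, if memory serves:

  uname -r                          # running kernel, e.g. the elrepo 2.6.37 vs the stock 2.6.18
  modinfo kvm | grep -i filename    # where the kvm module for that kernel comes from
  rpm -q kvm kmod-kvm               # userspace and kernel-module package versions
]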
On 04/21/2011 01:16 PM, Kenni Lund wrote:
It is in the release notes as a known issue ...
I had this issue and rebooted my VM server several times, and there was no disk corruption.
I just tried booting a machine 25 times with the raw setting and it did not corrupt the image.
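[Editor's note: for anyone who wants to repeat that sort of test, a rough loop along these lines would do it; the guest name and sleep times are arbitrary:

  for i in $(seq 1 25); do
      virsh start vm01
      sleep 180                  # let the guest boot and write to its disk
      virsh shutdown vm01
      sleep 60                   # let it shut down cleanly before the next round
  done
]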
On Thu, 2011-04-21 at 20:16 +0200, Kenni Lund wrote:
> Are you sure it was the migration and not the raw/qcow2 error which caused the disk corruption?
On the second pair of KVM servers, I'd made the changes to the XML files and restarted libvirtd, then migrated a VM, then watched the corruption happen. It's possible I needed to reboot the VM before migrating, so that KVM knew for certain what format it is. But nevertheless, I'm now a little gun-shy about live migration...
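[Editor's note: as far as I know, a running guest keeps the settings it was started with, so a reboot from inside the guest isn't enough; it needs a full shutdown and start to pick up the qcow2 driver change. One way to check what the live instance is actually using before migrating (guest name is hypothetical):

  virsh dumpxml vm01 | grep "driver name"
  # want to see:  <driver name='qemu' type='qcow2'/>
  # if it still says type='raw', do a full "virsh shutdown" and "virsh start" first
]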
> I just had two Windows Servers with image corruption after upgrading from 5.5 to 5.6 and booting the first time with the raw setting, before changing it to qcow2 :-/
> These two images were both on the same host, which is plain CentOS 5 *BUT* with a 2.6.37 kernel (and therefore 2.6.37 KVM module) from elrepo...
> It could be my special case of running a vanilla KVM module with the CentOS KVM userspace which allows the corruption to happen, but if other people are seeing disk corruption with the regular kernel/kmod-kvm, then this "known issue" should probably have a big fat red warning in the release notes...
Yeah. I completely agree. I've got a steaming mess of VMs that I now have to go and rebuild...
-I