----- "Aaron Clark" ophidian@ophidian.homeip.net wrote:
qemu: could not open serial device 'none'
http://os-drive.com/files/docbook/xen-faq.html#serial_console_hvm
Does the LV show that it's open (or ever changes state) when the VM attempts to boot? Did you try taking a snapshot of the LV and mucking around with the xen configuration to see if anything changes? Did you update anything else on the machine, like perhaps the BIOS? Can you mount the VM's filesystem? Is its partition table okay? How about the boot loader?
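For checking the LV's open state, a minimal sketch, assuming the LV path /dev/SystemsVG/Belldandy from the configs later in this thread:

  # show the open count for the LV (non-zero while a domain holds it)
  lvdisplay /dev/SystemsVG/Belldandy | grep -i open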
On 03/29/2010 09:37 PM, Christopher G. Stach II wrote:
----- "Aaron Clark"ophidian@ophidian.homeip.net wrote:
qemu: could not open serial device 'none'
http://os-drive.com/files/docbook/xen-faq.html#serial_console_hvm
I will give this a look to see if it sheds any light on the situation.
Does the LV show that it's open (or ever changes state) when the VM attempts to boot? Did you try taking a snapshot of the LV and mucking around with the xen configuration to see if anything changes? Did you update anything else on the machine, like perhaps the BIOS? Can you mount the VM's filesystem? Is its partition table okay? How about the boot loader?
- The LV goes to Open when the VM starts and stays that way until the VM is destroyed
- I have tried messing with the virsh/xen config repeatedly after taking backups of them
- The host machine has had no hardware or firmware updates applied to it
- I can and have mounted the VM's file systems using kpartx with no troubles at all
- When I dd'd the LV to a .img file and started it with KVM, the boot loader appears as expected (rough equivalents of the kpartx and dd checks are sketched below)
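The kpartx and dd/KVM checks above amount to something like the following; a sketch, assuming the LV path /dev/SystemsVG/Belldandy (the exact /dev/mapper partition names depend on the kpartx version):

  # map the LV's partitions and mount the first one
  kpartx -av /dev/SystemsVG/Belldandy
  mount /dev/mapper/SystemsVG-Belldandy1 /mnt
  umount /mnt
  kpartx -dv /dev/SystemsVG/Belldandy

  # copy the LV to an image file and boot it under KVM
  dd if=/dev/SystemsVG/Belldandy of=/tmp/belldandy.img bs=1M
  qemu-kvm -hda /tmp/belldandy.img -m 256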
Baffled, Aaron
Aaron,
Seems to be something related to your block device.
Try this config file:

  name = "Belldandy"
  maxmem = 256
  memory = 256
  vcpus = 1
  builder = "hvm"
  kernel = "/usr/lib/xen/boot/hvmloader"
  boot = "c"
  on_poweroff = "destroy"
  on_reboot = "restart"
  on_crash = "restart"
  device_model = "/usr/lib64/xen/bin/qemu-dm"
  disk = [ "phy:/dev/SystemsVG/Belldandy,hda,w" ]
  vif = [ "mac=00:16:3e:1d:43:df,bridge=xenbr0,script=vif-bridge" ]
  vncpasswd = 'YOURPASSHERE'
  vnclisten = "YOURDOM0 IP HERE"
  vnc = 1

using one of these options for the disk line:

  disk = [ 'phy:/dev/SystemsVG/Belldandy,ioemu:hda,w' ]
  disk = [ 'phy:/dev/SystemsVG/Belldandy,sda,w' ]
  disk = [ 'phy:/dev/SystemsVG/Belldandy,xvda,w' ]
You can also try commenting out your vif line; maybe the problem is related to it and that's what keeps the VM from starting.
Then try to connect to the VNC server at dom0ip:5900.
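For example, from another machine with a VNC client installed (vncviewer here is just one option):

  vncviewer dom0ip:5900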
What's the difference between your domU config files? Did you see any other errors in xend.log? If there is no difference at all, try to fsck your LVM partition.
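Since the guest's filesystems live inside partitions on the LV, fsck needs the mapped partition rather than the whole LV; a sketch, assuming kpartx as used earlier in the thread (mapper names vary by version):

  kpartx -av /dev/SystemsVG/Belldandy
  fsck -f /dev/mapper/SystemsVG-Belldandy1
  kpartx -dv /dev/SystemsVG/Belldandy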
2010/3/29 Aaron Clark ophidian@ophidian.homeip.net
On 03/29/2010 09:37 PM, Christopher G. Stach II wrote:
----- "Aaron Clark"ophidian@ophidian.homeip.net wrote:
qemu: could not open serial device 'none'
http://os-drive.com/files/docbook/xen-faq.html#serial_console_hvm
I will give this a look to see if it sheds any light on the situation.
Does the LV show that it's open (or ever changes state) when the VM attempts to boot? Did you try taking a snapshot of the LV and mucking around with the xen configuration to see if anything changes? Did you update anything else on the machine, like perhaps the BIOS? Can you mount the VM's filesystem? Is its partition table okay? How about the boot loader?
- The LV goes to Open when the VM starts and stays that way until the VM is destroyed
- I have tried messing with the virsh/xen config repeatedly after taking backups of them
- The host machine has had no hardware or firmware updates applied to it
- I can and have mounted the VM's file systems using kpartx with no troubles at all
- When I dd'd the LV to a .img file and started it with KVM, the boot loader appears as expected
Baffled,
Aaron

--
"The goblins are in charge of maintenance? Why not just set it on fire now and call it a day?" --Whip Tongue, Viashino Technician
On 03/30/2010 07:33 AM, Sergio Charpinel Jr. wrote:
Aaron,
Seems to be something related to your block device.
Try this config file:
Sergio,
I just tried this config and it booted fine:

  name = "Belldandy"
  maxmem = 256
  memory = 256
  vcpus = 1
  builder = "hvm"
  kernel = "/usr/lib/xen/boot/hvmloader"
  boot = "c"
  pae = 1
  on_poweroff = "destroy"
  on_reboot = "restart"
  on_crash = "restart"
  device_model = "/usr/lib64/xen/bin/qemu-dm"
  disk = [ "phy:/dev/SystemsVG/Belldandy,hda,w" ]
  vif = [ "mac=00:16:3e:29:65:46,bridge=xenbr0,script=vif-bridge" ]
  vnc = 1
Attached is the generated virsh xml from the above.
I'm going to tinker with this for a bit to try and figure out which difference from the previous config is actually the cause of the issue. I just had the other, formerly working VM bail with the same symptoms once I got this one working. The workaround is nice, but I definitely want to isolate this bug now.
Thanks everyone for all the help!
Aaron
On 03/31/2010 11:50 PM, Aaron Clark wrote:
I'm going to tinker with this for a bit to try and figure out which difference from the previous config is actually the cause of the issue. I just had the other, formerly working VM bail with the same symptoms once I got this one working. The workaround is nice, but I definitely want to isolate this bug now.
I just wanted to follow up on this a bit. Once I got it going, the VM ran for over 10 days without issues. Tonight I started tinkering with the Xen config to try and isolate whether it was acpi or apic causing trouble... the short answer is that neither is the problem.
What is the problem, then? It looks like a bug in virsh/libvirt; I'll need some help to debug it, though. I'm using the following file for all of this:

  name = "Belldandy"
  uuid = "b1f1d0a4-9687-947c-5eaf-b362d5d5a199"
  maxmem = 256
  memory = 256
  vcpus = 1
  builder = "hvm"
  kernel = "/usr/lib/xen/boot/hvmloader"
  boot = "c"
  pae = 1
  acpi = 1
  apic = 1
  on_poweroff = "destroy"
  on_reboot = "restart"
  on_crash = "restart"
  device_model = "/usr/lib64/xen/bin/qemu-dm"
  disk = [ "phy:/dev/SystemsVG/Belldandy,hda,w" ]
  vif = [ "mac=00:16:3e:29:65:46,bridge=xenbr0,script=vif-bridge" ]
  vnc = 1
1. xm create -f ./Belldandy.xen -- successfully starts as expected
2. Copy Belldandy.xen to /etc/xen/Belldandy, start with xm create Belldandy -- successfully starts as expected
3. Attempt to start with virsh start Belldandy -- fails horribly with the same 'no state' issue as before
So I can work around this for now by not using virsh to start them up, but it's still quite annoying. I should note that I can remotely access that machine with VMM; it suffers the same issue starting the domU, but handles everything else just fine once I manually start it via xm as above.
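One way to narrow down where virsh diverges from xm is to compare libvirt's view of the working domain against the xm config and watch the logs while reproducing; a sketch using standard virsh commands (the libvirtd.log path assumes file logging is enabled in libvirtd.conf):

  # dump libvirt's view of the domain after a working xm start
  virsh dumpxml Belldandy > belldandy-virsh.xml
  virsh list --all

  # watch xend's and libvirt's logs while reproducing the failure
  tail -f /var/log/xen/xend.log /var/log/libvirt/libvirtd.log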
Aaron
On Sun, Apr 11, 2010 at 10:43:21PM -0400, Aaron Clark wrote:
On 03/31/2010 11:50 PM, Aaron Clark wrote:
I'm going to tinker with this for a bit to try and figure out which difference from the previous config is actually the cause of the issue. I just had the other, formerly working VM bail with the same symptoms once I got this one working. The workaround is nice, but I definitely want to isolate this bug now.
I just wanted to follow up on this a bit. Once I got it going, the VM ran for over 10 days without issues. Tonight I started tinkering with the Xen config to try and isolate whether it was acpi or apic causing trouble... the short answer is that neither is the problem.
What is the problem, then? It looks like a bug in virsh/libvirt; I'll need some help to debug it, though. I'm using the following file for all of this:

  name = "Belldandy"
  uuid = "b1f1d0a4-9687-947c-5eaf-b362d5d5a199"
  maxmem = 256
  memory = 256
  vcpus = 1
  builder = "hvm"
  kernel = "/usr/lib/xen/boot/hvmloader"
  boot = "c"
  pae = 1
  acpi = 1
  apic = 1
  on_poweroff = "destroy"
  on_reboot = "restart"
  on_crash = "restart"
  device_model = "/usr/lib64/xen/bin/qemu-dm"
  disk = [ "phy:/dev/SystemsVG/Belldandy,hda,w" ]
  vif = [ "mac=00:16:3e:29:65:46,bridge=xenbr0,script=vif-bridge" ]
  vnc = 1
1. xm create -f ./Belldandy.xen -- successfully starts as expected
2. Copy Belldandy.xen to /etc/xen/Belldandy, start with xm create Belldandy -- successfully starts as expected
3. Attempt to start with virsh start Belldandy -- fails horribly with the same 'no state' issue as before
So I can work around this for now by not using virsh to start them up, but it's still quite annoying. I should note that I can remotely access that machine with VMM; it suffers the same issue starting the domU, but handles everything else just fine once I manually start it via xm as above.
Hmm.. interesting.
I guess the VM info (state) needs to be in the libvirt database before you can 'virsh start' it?
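If so, explicitly defining the domain in libvirt before starting it might be worth a try; a sketch, assuming a libvirt recent enough to have domxml-from-native (otherwise the XML would have to be written by hand):

  # convert the xm config to libvirt XML, register it, then start it
  virsh domxml-from-native xen-xm /etc/xen/Belldandy > Belldandy.xml
  virsh define Belldandy.xml
  virsh start Belldandy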
-- Pasi