Curious: is the RAID1 soft/md, fakeraid, or hardware?
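For reference, a rough way to tell the three apart from a shell (a sketch only; it assumes `dmraid` and `lspci` are available, and a box can of course have more than one of these):

```shell
# Sketch: classify the RAID setup. Later checks overwrite earlier ones,
# which is fine for a quick look but not a rigorous probe.
raid_type="unknown"

# Software (md) RAID: active arrays show up in /proc/mdstat
grep -qs '^md' /proc/mdstat && raid_type="software (md)"

# BIOS "fakeraid": dmraid lists its sets, if the tool is installed
command -v dmraid >/dev/null 2>&1 && dmraid -r 2>/dev/null | grep -q . \
    && raid_type="fakeraid (dmraid)"

# Hardware RAID: the controller usually appears on the PCI bus
lspci 2>/dev/null | grep -qi raid && raid_type="hardware (PCI RAID controller)"

echo "RAID looks like: $raid_type"
```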

Also, are you using pygrub (and if so, which kernel do the guests boot), or are the guests using a kernel from the host, and which one?
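A quick way to answer that from the Dom0 (assuming the CentOS 5 default config directory of /etc/xen; pygrub guests carry a "bootloader" line, host-kernel guests have explicit kernel=/ramdisk= lines):

```shell
# List which guest configs boot via pygrub vs. a host-side kernel.
# /etc/xen is an assumption (the CentOS 5 default location).
cfg_dir=/etc/xen
if [ -d "$cfg_dir" ]; then
    echo "pygrub guests:"
    grep -l 'bootloader' "$cfg_dir"/* 2>/dev/null
    echo "host-kernel guests:"
    grep -l '^kernel' "$cfg_dir"/* 2>/dev/null
else
    echo "no $cfg_dir on this machine"
fi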

(I'm using an almost-latest, i.e. last week's, kernel on host and guest (pygrub) on hardware RAID, and haven't had any issues to date.)
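One other thought: with a Dom0 lockup like the one below, putting Xen and the kernel on a serial console usually preserves the last messages before the hang. Roughly, in grub.conf (the port settings and root device here are assumptions; adjust for your hardware and LVM layout):

```
# /boot/grub/grub.conf -- send hypervisor and Dom0 output to the first serial port
kernel /xen.gz com1=115200,8n1 console=com1
module /vmlinuz-2.6.18-194.17.4.el5xen ro root=/dev/VolGroup00/LogVol00 console=ttyS0,115200
module /initrd-2.6.18-194.17.4.el5xen.img
```

Then capture the output on another machine over a null-modem cable (or via a serial concentrator) and see exactly where around udev it stops.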

Eric

Sent from my iPhone.  Pardon the top-posting!

On Oct 27, 2010, at 5:56 PM, Steven Ellis <mail_lists@stevencherie.net> wrote:

I've recently upgraded a CentOS 5.3 machine to CentOS 5.5. The hardware isn't HVM capable, so I'm only running paravirtualized guests.

Using a vanilla i386 kernel, the machine boots without problems, but the newer kernel-xen locks up the Dom0 after a couple of minutes. I'm only booting into single-user mode for these tests, so no VMs are active.


For the moment I've switched back to the older Xen kernel, although I'm still running the newer Xen hypervisor.

My current Xen packages are:

Prior to the upgrade I had the following installed under CentOS 5.3:

Booting Dom0 with kernel-xen-2.6.18-128.1.10.el5, everything appears to work normally and all of my guests are up and running.

If I boot with kernel-xen-2.6.18-194.17.4.el5, the boot normally gets to around udev and then the system locks up. On a couple of occasions it did manage to boot, but reported that some files were corrupted. I'm worried that there is an issue running this kernel where the root file system is LVM on top of RAID 1.

Has anyone on this list got tips on diagnosing the issue, or come across a similar problem themselves?

Steve

_______________________________________________
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt