This is a problem I've had on and off under CentOS5 and CentOS6, with both
xen and kvm. Currently, it happens consistently with kvm on 6.5, e.g. after
every kernel update (which forces a host reboot). I *think* it generally
worked fine with the 6.4 kernels.
There are 7 VMs running on a 6.5, x86_64, 8GB RAM host, each with 512MB RAM
and using the e1000 NIC model. I picked that model because the default NIC
does not allow reliable monitoring through SNMP (IIRC). The host has two
bonded NICs with br0 running on top of the bond.
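For reference, each guest's NIC is defined along these lines in its libvirt
domain XML (sketching this from memory; the MAC is the one from the console
excerpt below, and each VM of course has its own):

  <interface type='bridge'>
    <mac address='00:16:3e:52:e3:0b'/>
    <source bridge='br0'/>
    <model type='e1000'/>
  </interface>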
When the host reboots, the VMs will generally hang while bringing up the
virtual NIC, and I need to go through several iterations of destroy/create
for each VM to get them running (the commands I use are shown after the
console excerpt). They always hang here (copy&paste from console):
...
Welcome to CentOS
Starting udev: udev: starting version 147
piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
e1000: Copyright (c) 1999-2006 Intel Corporation.
ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
e1000 0000:00:03.0: PCI INT A -> Link[LNKC] -> GSI 11 (level, high) -> IRQ 11
e1000 0000:00:03.0: eth0: (PCI:33MHz:32-bit) 00:16:3e:52:e3:0b
e1000 0000:00:03.0: eth0: Intel(R) PRO/1000 Network Connection
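The destroy/create cycle I run per affected VM is roughly the following
(guest name is a placeholder; it often takes several attempts before a VM
gets past this point):

  virsh destroy <guest>
  virsh start <guest>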
Any suggestions on where to start looking?