Hi there,
Today I encountered the very same problem as described by Zoltan. We are running a system with the Intel server board S2600CP and two E5-2620 Xeon processors, i.e. 2 x 6 cores with 2 threads each (for a total CPU count of 24).
The base system is CentOS 6.3 with all recent updates. We have been running this setup for some months now without any problems. Most virtual machines have only 4 vCPUs, but some have 6 or 8. This setup works fine with CentOS 5, CentOS 6, or Windows Server 2008 as guest systems.
However, today I had to install two machines with Debian Squeeze. The first one got 4 vCPUs and runs fine, while the second one got 6 vCPUs and does not boot at all. Booting stops shortly after the virtio hard disk and its partitions are detected. When I switch back to 4 vCPUs, everything is fine.
Since the RHEL 5.8 and 6.3 kernels work fine, this seems to be a problem specific to the Debian kernel.
Our default setup pins the vCPUs alternately to the two physical sockets, i.e. each vCPU is restricted to the CPUs of one socket, in an interleaved fashion. This is the relevant part of /etc/libvirt/qemu/vm.xml:
<memory unit='KiB'>12582912</memory>
<currentMemory unit='KiB'>12582912</currentMemory>
<vcpu placement='static'>6</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0-5,12-17'/>
  <vcpupin vcpu='1' cpuset='6-11,18-23'/>
  <vcpupin vcpu='2' cpuset='0-5,12-17'/>
  <vcpupin vcpu='3' cpuset='6-11,18-23'/>
  <vcpupin vcpu='4' cpuset='0-5,12-17'/>
  <vcpupin vcpu='5' cpuset='6-11,18-23'/>
</cputune>
<os>
  <type arch='x86_64' machine='rhel6.3.0'>hvm</type>
  <boot dev='hd'/>
</os>
<features>
  <acpi/>
  <apic/>
  <pae/>
</features>
<cpu mode='host-model'>
  <model fallback='allow'/>
  <topology sockets='6' cores='1' threads='1'/>
</cpu>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
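In case it helps to see the pattern: vCPU n is simply pinned to the cpuset of socket n mod 2. This is the little sketch we use to generate the vcpupin lines (the two cpusets are taken from our host's layout; the helper name is made up):

```python
# Sketch: generate the <vcpupin> elements for an interleaved pinning.
# The two cpusets below correspond to our host's two sockets (6 cores
# plus their hyperthread siblings each); adjust for a different layout.
SOCKET_CPUSETS = ["0-5,12-17", "6-11,18-23"]

def vcpupin_lines(vcpus):
    """Return one <vcpupin> element per vCPU, alternating the sockets."""
    return [
        "<vcpupin vcpu='%d' cpuset='%s'/>" % (n, SOCKET_CPUSETS[n % 2])
        for n in range(vcpus)
    ]

for line in vcpupin_lines(6):
    print(line)
```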
Such a setup works fine with the Red Hat kernels, but not with the Debian kernel (so far I have only tested the default 2.6.32-5 amd64 kernel).
I played a bit with the pinning and topology, but even disabling pinning entirely and trying, for example, 8 vCPUs divided into 2 sockets with 4 cores each, 1 socket with 8 cores, or any other combination, does not help. The only way to get the Debian VM to boot is to roll back to a total of 4 vCPUs (or fewer).
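To make sure I did not miss a combination, I enumerated the possible sockets/cores/threads splits for a given vCPU count and tried each in the <topology> element. A quick sketch of that enumeration (purely illustrative):

```python
# Sketch: enumerate all sockets/cores/threads factorizations of a vCPU
# count, as candidate values for libvirt's <topology> element.
def topologies(vcpus):
    combos = []
    for sockets in range(1, vcpus + 1):
        if vcpus % sockets:
            continue
        rest = vcpus // sockets
        for cores in range(1, rest + 1):
            if rest % cores:
                continue
            combos.append((sockets, cores, rest // cores))
    return combos

print(topologies(8))
```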
As a reference, these are the CPU flags as seen from the host:

processor       : 23
vendor_id       : GenuineIntel
cpu family      : 6
model           : 45
model name      : Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
stepping        : 7
cpu MHz         : 1995.156
cache size      : 15360 KB
physical id     : 1
siblings        : 12
core id         : 5
cpu cores       : 6
apicid          : 43
initial apicid  : 43
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 x2apic popcnt aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
bogomips        : 3989.83
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:
and these are the CPU flags as seen from the guest:

processor       : 3
vendor_id       : GenuineIntel
cpu family      : 6
model           : 42
model name      : Intel Xeon E312xx (Sandy Bridge)
stepping        : 1
cpu MHz         : 1995.191
cache size      : 4096 KB
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx lm constant_tsc arch_perfmon rep_good pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes xsave avx hypervisor lahf_lm
bogomips        : 3990.38
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:
The host is running this kernel:

2.6.32-279.11.1.el6.x86_64 #1 SMP Tue Oct 16 15:57:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

Packages are up to date:

qemu-kvm-0.12.1.2-2.295.el6_3.2.x86_64
libvirt-0.9.10-21.el6_3.5.x86_64
The kvm command line looks like this (for 4 vCPUs):

/usr/libexec/qemu-kvm -S -M rhel6.3.0 \
  -cpu SandyBridge,+pdpe1gb,+osxsave,+tsc-deadline,+dca,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme \
  -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 \
  -name xxxx -uuid xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
  -nodefconfig -nodefaults \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/xxxxxxxxxxxxxx.monitor,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc -no-shutdown \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -drive file=/dev/drbd28,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
  -netdev tap,fd=22,id=hostnet0,vhost=on,vhostfd=23 \
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:xx:xx:xx,bus=pci.0,addr=0x5 \
  -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 \
  -device usb-tablet,id=input0 \
  -vnc unix:/var/lib/libvirt/vnc/xxxxxxxxxxxxxx,password -k de -vga cirrus \
  -incoming tcp:0.0.0.0:49152 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3
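One thing I plan to try next is capturing the guest's kernel messages over the serial console to see where exactly the boot hangs. A sketch of the setup (console=ttyS0 is a standard kernel parameter; "vm" is a placeholder for the actual domain name):

```shell
# On the Debian guest, route kernel output to the serial port as well,
# then regenerate the GRUB config:
#   In /etc/default/grub:
#     GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200"
#   then run: update-grub

# On the host, attach to the guest's serial console while it boots
# (the domain already has an isa-serial device, see the command line above):
virsh console vm
```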
Any clue on how to debug this further, or how to fix it, would be very much appreciated. If any more information would be helpful, just ask.
Regards, Matthias
PS: Please excuse it if this mail is not correctly sorted into the existing thread; I am writing it after seeing the mails in the mailing list archive, without any possibility to reply to them directly.