Thank you for your quick reply!
I understand the NUMA cell concept and I am using CPU pinning in the XML file. For example:
<domain type='kvm'>
  <name>Debian-xxxx</name>
  <uuid>xxxx</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static' cpuset='6-9,18-21'>8</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.3.0'>hvm</type>
    ...
  </os>
  ...
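One thing worth noting: pinning vCPUs with cpuset does not by itself bind the guest's *memory* to that node, so the guest can still end up with remote-node memory. A minimal sketch of a <numatune> element (supported by reasonably recent libvirt; nodeset '1' is an assumption based on the capabilities output below, where cell 1 holds CPUs 6-11 and 18-23 — verify against your own virsh capabilities):

```xml
<domain type='kvm'>
  ...
  <vcpu placement='static' cpuset='6-9,18-21'>8</vcpu>
  <!-- Bind guest memory allocation to NUMA node 1 (assumed from the
       capabilities output: cell 1 owns CPUs 6-11 and 18-23). -->
  <numatune>
    <memory mode='strict' nodeset='1'/>
  </numatune>
  ...
</domain>
```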
This guest still hangs while booting its Linux kernel (3.2.x.x) ... :(
Here is my virsh capabilities output from the host (CentOS 6.3):
# virsh capabilities
<capabilities>
  <host>
    <uuid>00020003-0004-0005-0006-000700080009</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>SandyBridge</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='6' threads='2'/>
      <feature name='pdpe1gb'/>
      <feature name='osxsave'/>
      <feature name='tsc-deadline'/>
      <feature name='dca'/>
      <feature name='pdcm'/>
      <feature name='xtpr'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='smx'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='monitor'/>
      <feature name='dtes64'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
      <feature name='vme'/>
    </cpu>
    <power_management>
      <suspend_disk/>
    </power_management>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='2'>
        <cell id='0'>
          <cpus num='12'>
            <cpu id='0'/> <cpu id='1'/> <cpu id='2'/> <cpu id='3'/> <cpu id='4'/> <cpu id='5'/>
            <cpu id='12'/> <cpu id='13'/> <cpu id='14'/> <cpu id='15'/> <cpu id='16'/> <cpu id='17'/>
          </cpus>
        </cell>
        <cell id='1'>
          <cpus num='12'>
            <cpu id='6'/> <cpu id='7'/> <cpu id='8'/> <cpu id='9'/> <cpu id='10'/> <cpu id='11'/>
            <cpu id='18'/> <cpu id='19'/> <cpu id='20'/> <cpu id='21'/> <cpu id='22'/> <cpu id='23'/>
          </cpus>
        </cell>
      </cells>
    </topology>
  </host>
  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine>rhel6.3.0</machine>
      <machine canonical='rhel6.3.0'>pc</machine>
      <machine>rhel6.2.0</machine>
      <machine>rhel6.1.0</machine>
      <machine>rhel6.0.0</machine>
      <machine>rhel5.5.0</machine>
      <machine>rhel5.4.4</machine>
      <machine>rhel5.4.0</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <pae/>
      <nonpae/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>
  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine>rhel6.3.0</machine>
      <machine canonical='rhel6.3.0'>pc</machine>
      <machine>rhel6.2.0</machine>
      <machine>rhel6.1.0</machine>
      <machine>rhel6.0.0</machine>
      <machine>rhel5.5.0</machine>
      <machine>rhel5.4.4</machine>
      <machine>rhel5.4.0</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>
</capabilities>
And here is the odd thing: virsh freecell only reports a total, not a per-node list:
# virsh freecell
Total: 15891284 kB
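For a per-node breakdown, virsh freecell also accepts an explicit cell number (cell IDs come from the <topology> section of virsh capabilities); whether the --all shorthand is available depends on the libvirt version, so treat that flag as an assumption on CentOS 6.3:

```
# Query free memory on each NUMA cell individually
virsh freecell 0
virsh freecell 1
# Newer libvirt builds also accept a single call for all cells:
virsh freecell --all
```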
According to this Fedora page http://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/ch25... I should see a per-node list.
Anyway, my Debian guest still does not boot when I assign more than 4 vCPUs to it, even if I pin all of them to the same NUMA node.
BTW, I have copied my host CPU's configuration and CPU features into my guests (using the virt-manager GUI, running remotely on an Ubuntu desktop box). Maybe I should use a predefined CPU model instead of cloning the CPU configuration from the host??
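If you want to try a predefined model instead of a host copy, a minimal sketch would look like the fragment below. This assumes a libvirt recent enough to support the mode attribute on <cpu>; SandyBridge is taken from the <model> line in the capabilities output above, not guaranteed to be in your qemu-kvm's model list:

```xml
<!-- Replace the copied host-feature <cpu> block with a named model.
     'fallback' lets libvirt pick a close model if the exact one is missing. -->
<cpu mode='custom' match='exact'>
  <model fallback='allow'>SandyBridge</model>
</cpu>
```

Alternatively, <cpu mode='host-model'/> asks libvirt itself to approximate the host CPU, which avoids hand-copying the feature list.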
Zoltan
On 10/24/2012 5:58 PM, bertrand.louargant@atoutlinux.net wrote:
Hello, You have a NUMA (Non-Uniform Memory Access) machine, which means that each processor has its own memory controller. virsh nodeinfo gives you 2 NUMA cells with 1 CPU socket each: 2 NUMA cells x 1 CPU socket x 6 cores per socket x 2 threads per core = 24 "cores". The NUMA concept is really important, especially in virtualization: if a virtual machine has vCPUs spread across more than one NUMA cell, performance will drop drastically. Maybe you cannot assign more than 4 vCPUs to your VM because libvirt cannot pin them all on the same NUMA cell ... You can try to specify the NUMA architecture in the XML config. Br, Bertrand.
On 24 October 2012 at 17:14, Zoltan Frombach zoltan@frombach.com wrote:
Hi,
Please let me know in case I am posting my question to the wrong forum. I apologize if that is the case!
Here is my question:
We run CentOS 6.3 on a server with dual Xeon CPU's. Our "dual blade" server uses this motherboard: http://www.supermicro.com/products/motherboard/Xeon/C600/X9DRT-HF.cfm
We have two of these CPUs installed and working: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz (
http://ark.intel.com/products/64594/Intel-Xeon-Processor-E5-2620-15M-Cache-2...
)
cat /proc/cpuinfo correctly reports a total of 24 cores (2 x 6 physical cores plus 2 x 6 hyperthreading cores).
However, I get this output from virsh nodeinfo :
# virsh nodeinfo
CPU model:           x86_64
CPU(s):              24
CPU frequency:       2000 MHz
CPU socket(s):       1
Core(s) per socket:  6
Thread(s) per core:  2
NUMA cell(s):        2
Memory size:         16303552 kB
As you can see, virsh nodeinfo reports only 1 CPU socket while in fact we have two CPUs.
Is this normal? Why does virsh report only one physical CPU??
Also, when we try to run a guest OS (Debian Linux "squeeze") with more than 4 vCPUs assigned to the VM, the guest OS won't boot. The guest's kernel gets stuck right after it detects the /dev/vda block device and its partitions. We're using the VirtIO driver, of course. If I assign only 4 (or fewer) vCPUs to the guest OS, it works fine. I have tried upgrading the guest's Linux kernel from Debian backports, but it did not help; we experience the same issue with both the 2.6.32 and 3.2 kernels. What could be causing this?
On the host, we use the Linux kernel that came with CentOS 6.3 : 2.6.32-279.11.1.el6.x86_64 #1 SMP Tue Oct 16 15:57:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
Thanks,
Zoltan
_______________________________________________
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt