<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Thank you for your quick reply!<br>
<br>
I understand the NUMA cell concept and I am using CPU pinning in the
XML file. For example:<br>
<br>
<domain type='kvm'><br>
<name>Debian-xxxx</name><br>
<uuid>xxxx</uuid><br>
<memory unit='KiB'>8388608</memory><br>
<currentMemory unit='KiB'>8388608</currentMemory><br>
<vcpu placement='static' cpuset='6-9,18-21'>8</vcpu><br>
<os><br>
<type arch='x86_64' machine='rhel6.3.0'>hvm</type><br>
...<br>
</os><br>
...<br>
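<br>
(In case it is relevant: as far as I know the same pinning can also be
expressed per vCPU with a <cputune> block. A sketch, the vcpu/cpuset
pairs below are only illustrative, spelling out the cpuset range from
above:)<br>
<br>
<cputune><br>
<vcpupin vcpu='0' cpuset='6'/><br>
<vcpupin vcpu='1' cpuset='7'/><br>
<vcpupin vcpu='2' cpuset='8'/><br>
<vcpupin vcpu='3' cpuset='9'/><br>
<vcpupin vcpu='4' cpuset='18'/><br>
<vcpupin vcpu='5' cpuset='19'/><br>
<vcpupin vcpu='6' cpuset='20'/><br>
<vcpupin vcpu='7' cpuset='21'/><br>
</cputune><br>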
<br>
This guest still hangs while starting up its Linux kernel (3.2.x.x)
... :(<br>
<br>
Here is my virsh capabilities output from the host (CentOS 6.3):<br>
<br>
# virsh capabilities<br>
<capabilities><br>
<br>
<host><br>
<uuid>00020003-0004-0005-0006-000700080009</uuid><br>
<cpu><br>
<arch>x86_64</arch><br>
<model>SandyBridge</model><br>
<vendor>Intel</vendor><br>
<topology sockets='1' cores='6' threads='2'/><br>
<feature name='pdpe1gb'/><br>
<feature name='osxsave'/><br>
<feature name='tsc-deadline'/><br>
<feature name='dca'/><br>
<feature name='pdcm'/><br>
<feature name='xtpr'/><br>
<feature name='tm2'/><br>
<feature name='est'/><br>
<feature name='smx'/><br>
<feature name='vmx'/><br>
<feature name='ds_cpl'/><br>
<feature name='monitor'/><br>
<feature name='dtes64'/><br>
<feature name='pbe'/><br>
<feature name='tm'/><br>
<feature name='ht'/><br>
<feature name='ss'/><br>
<feature name='acpi'/><br>
<feature name='ds'/><br>
<feature name='vme'/><br>
</cpu><br>
<power_management><br>
<suspend_disk/><br>
</power_management><br>
<migration_features><br>
<live/><br>
<uri_transports><br>
<uri_transport>tcp</uri_transport><br>
</uri_transports><br>
</migration_features><br>
<topology><br>
<cells num='2'><br>
<cell id='0'><br>
<cpus num='12'><br>
<cpu id='0'/><br>
<cpu id='1'/><br>
<cpu id='2'/><br>
<cpu id='3'/><br>
<cpu id='4'/><br>
<cpu id='5'/><br>
<cpu id='12'/><br>
<cpu id='13'/><br>
<cpu id='14'/><br>
<cpu id='15'/><br>
<cpu id='16'/><br>
<cpu id='17'/><br>
</cpus><br>
</cell><br>
<cell id='1'><br>
<cpus num='12'><br>
<cpu id='6'/><br>
<cpu id='7'/><br>
<cpu id='8'/><br>
<cpu id='9'/><br>
<cpu id='10'/><br>
<cpu id='11'/><br>
<cpu id='18'/><br>
<cpu id='19'/><br>
<cpu id='20'/><br>
<cpu id='21'/><br>
<cpu id='22'/><br>
<cpu id='23'/><br>
</cpus><br>
</cell><br>
</cells><br>
</topology><br>
</host><br>
<br>
<guest><br>
<os_type>hvm</os_type><br>
<arch name='i686'><br>
<wordsize>32</wordsize><br>
<emulator>/usr/libexec/qemu-kvm</emulator><br>
<machine>rhel6.3.0</machine><br>
<machine canonical='rhel6.3.0'>pc</machine><br>
<machine>rhel6.2.0</machine><br>
<machine>rhel6.1.0</machine><br>
<machine>rhel6.0.0</machine><br>
<machine>rhel5.5.0</machine><br>
<machine>rhel5.4.4</machine><br>
<machine>rhel5.4.0</machine><br>
<domain type='qemu'><br>
</domain><br>
<domain type='kvm'><br>
<emulator>/usr/libexec/qemu-kvm</emulator><br>
</domain><br>
</arch><br>
<features><br>
<cpuselection/><br>
<deviceboot/><br>
<pae/><br>
<nonpae/><br>
<acpi default='on' toggle='yes'/><br>
<apic default='on' toggle='no'/><br>
</features><br>
</guest><br>
<br>
<guest><br>
<os_type>hvm</os_type><br>
<arch name='x86_64'><br>
<wordsize>64</wordsize><br>
<emulator>/usr/libexec/qemu-kvm</emulator><br>
<machine>rhel6.3.0</machine><br>
<machine canonical='rhel6.3.0'>pc</machine><br>
<machine>rhel6.2.0</machine><br>
<machine>rhel6.1.0</machine><br>
<machine>rhel6.0.0</machine><br>
<machine>rhel5.5.0</machine><br>
<machine>rhel5.4.4</machine><br>
<machine>rhel5.4.0</machine><br>
<domain type='qemu'><br>
</domain><br>
<domain type='kvm'><br>
<emulator>/usr/libexec/qemu-kvm</emulator><br>
</domain><br>
</arch><br>
<features><br>
<cpuselection/><br>
<deviceboot/><br>
<acpi default='on' toggle='yes'/><br>
<apic default='on' toggle='no'/><br>
</features><br>
</guest><br>
<br>
</capabilities><br>
<br>
And the odd thing is this: virsh freecell only reports a total, not
a per-node list:<br>
<br>
# virsh freecell<br>
Total: 15891284 kB<br>
<br>
According to this Fedora page<br>
<a
href="http://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/ch25s06.html">http://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/ch25s06.html</a><br>
I should see a per-node list.<br>
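<br>
Maybe the per-node numbers just have to be requested explicitly? I
could try something like this (cell IDs 0 and 1 taken from the
capabilities output above; I am not sure whether the libvirt version
in CentOS 6.3 also accepts an --all flag):<br>
<br>
# virsh freecell 0<br>
# virsh freecell 1<br>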
<br>
Anyway, my Debian guest still does not boot up when I assign more
than 4 vCPUs to it, even if I pin all of its vCPUs to the same NUMA node.<br>
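<br>
One thing I have not tried yet: as far as I understand, cpuset only
pins the vCPUs, not the guest's memory. A <numatune> element should
pin the memory as well. A sketch (nodeset='1' matches my cpuset
above, since cell 1 holds CPUs 6-11 and 18-23):<br>
<br>
<numatune><br>
<memory mode='strict' nodeset='1'/><br>
</numatune><br>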
<br>
BTW, I have copied my host CPU's configuration and CPU features for
my guests (using the virt-manager GUI, running remotely on an Ubuntu
desktop box). Maybe I should use a predefined CPU model instead of
cloning the CPU configuration from the host?<br>
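<br>
For example, something like this instead of the copied host
definition (just a sketch; the model name is taken from my
capabilities output above):<br>
<br>
<cpu match='exact'><br>
<model>SandyBridge</model><br>
</cpu><br>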
<br>
Zoltan<br>
<br>
<div class="moz-cite-prefix">On 10/24/2012 5:58 PM,
<a class="moz-txt-link-abbreviated" href="mailto:bertrand.louargant@atoutlinux.net">bertrand.louargant@atoutlinux.net</a> wrote:<br>
</div>
<blockquote
cite="mid:648353920.241117.1351094280412.JavaMail.open-xchange@email.1and1.fr"
type="cite">
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
<div> Hello, </div>
<div> </div>
<div> You have a NUMA (Non Uniform Memory Access) machine, which
means that each processor has its own memory controller. </div>
<div> virsh nodeinfo gives you 2 NUMA cells with 1 CPU socket each:
2 NUMA cells x 1 CPU socket x 6 cores per socket x 2 threads
per core = 24 "cores". </div>
<div> </div>
<div> The NUMA concept is really important, especially in
virtualization. </div>
<div> If you have a virtual machine with vCPUs spread across more
than one NUMA cell, performance will drop drastically. </div>
<div> </div>
<div> Maybe you cannot assign more than 4 vCPUs to your VM because
Libvirt cannot pin them all on the same NUMA cell ... </div>
<div> You can try to specify the NUMA architecture in the XML
config, for example: </div>
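<div> A sketch (assuming the 8 vCPUs and 8388608 KiB of memory from
your config; adapt the cpus/memory values to your VM): <br>
<cpu><br>
<numa><br>
<cell cpus='0-7' memory='8388608'/><br>
</numa><br>
</cpu><br>
</div>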
<div> </div>
<div> Br, </div>
<div> Bertrand. </div>
<div> <br>
On 24 October 2012 at 17:14, Zoltan Frombach
<a class="moz-txt-link-rfc2396E" href="mailto:zoltan@frombach.com"><zoltan@frombach.com></a> wrote: <br>
> Hi, <br>
> <br>
> Please let me know in case I am posting my question to the
wrong forum. <br>
> I apologize if that is the case! <br>
> <br>
> Here is my question: <br>
> <br>
> We run CentOS 6.3 on a server with dual Xeon CPUs. Our
"dual blade" <br>
> server uses this motherboard: <br>
>
<a class="moz-txt-link-freetext" href="http://www.supermicro.com/products/motherboard/Xeon/C600/X9DRT-HF.cfm">http://www.supermicro.com/products/motherboard/Xeon/C600/X9DRT-HF.cfm</a>
<br>
> <br>
> We have two of these CPUs installed and working: <br>
> Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz <br>
> ( <br>
>
<a class="moz-txt-link-freetext" href="http://ark.intel.com/products/64594/Intel-Xeon-Processor-E5-2620-15M-Cache-2_00-GHz-7_20-GTs-Intel-QPI">http://ark.intel.com/products/64594/Intel-Xeon-Processor-E5-2620-15M-Cache-2_00-GHz-7_20-GTs-Intel-QPI</a>
<br>
> ) <br>
> <br>
> cat /proc/cpuinfo correctly reports a total of 24 cores (2
x 6 physical <br>
> cores plus 2 x 6 hyperthreading cores) <br>
> <br>
> However, I get this output from virsh nodeinfo: <br>
> <br>
> # virsh nodeinfo <br>
> CPU model: x86_64 <br>
> CPU(s): 24 <br>
> CPU frequency: 2000 MHz <br>
> CPU socket(s): 1 <br>
> Core(s) per socket: 6 <br>
> Thread(s) per core: 2 <br>
> NUMA cell(s): 2 <br>
> Memory size: 16303552 kB <br>
> <br>
> As you can see, virsh nodeinfo reports only 1 CPU socket
while in fact <br>
> we have two CPUs. <br>
> <br>
> I would like to know if this is normal. Why does virsh
report only one <br>
> physical CPU? <br>
> <br>
> Also, when we try to run a guest OS (Debian Linux
"squeeze") with more <br>
> than 4 vCPUs assigned to the VM, the guest OS won't boot
up. The <br>
> guest's kernel gets stuck on a screen right after it detects
the /dev/vda <br>
> block device and its partitions. We're using the VirtIO
driver, of <br>
> course. If I assign only 4 (or fewer) vCPUs to the guest OS
it works <br>
> fine. I have tried upgrading the Linux kernel on the guest
from Debian <br>
> backports, but it did not help; we're experiencing the same
issue with both <br>
> the 2.6.32 and 3.2 Linux kernels. What could be causing
this? <br>
> <br>
> On the host, we use the Linux kernel that came with CentOS
6.3: <br>
> 2.6.32-279.11.1.el6.x86_64 #1 SMP Tue Oct 16 15:57:10 UTC
2012 x86_64 <br>
> x86_64 x86_64 GNU/Linux <br>
> <br>
> Thanks, <br>
> <br>
> Zoltan <br>
> _______________________________________________ <br>
> CentOS-virt mailing list <br>
> <a class="moz-txt-link-abbreviated" href="mailto:CentOS-virt@centos.org">CentOS-virt@centos.org</a> <br>
> <a class="moz-txt-link-freetext" href="http://lists.centos.org/mailman/listinfo/centos-virt">http://lists.centos.org/mailman/listinfo/centos-virt</a> </div>
</blockquote>
<br>
</body>
</html>