<html>
  <head>
    <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    Thank you for your quick reply!<br>
    <br>
    I understand the NUMA cell concept and I am using CPU pinning in the
    XML file. For example:<br>
    <br>
    &lt;domain type='kvm'&gt;<br>
      &lt;name&gt;Debian-xxxx&lt;/name&gt;<br>
      &lt;uuid&gt;xxxx&lt;/uuid&gt;<br>
      &lt;memory unit='KiB'&gt;8388608&lt;/memory&gt;<br>
      &lt;currentMemory unit='KiB'&gt;8388608&lt;/currentMemory&gt;<br>
      &lt;vcpu placement='static' cpuset='6-9,18-21'&gt;8&lt;/vcpu&gt;<br>
      &lt;os&gt;<br>
        &lt;type arch='x86_64' machine='rhel6.3.0'&gt;hvm&lt;/type&gt;<br>
        ...<br>
      &lt;/os&gt;<br>
    ...<br>
    <br>
This guest still hangs while booting its Linux kernel (3.2.x.x)
    ... :(<br>
    <br>
    Here is my virsh capabilities output from the host (CentOS 6.3):<br>
    <br>
    # virsh capabilities<br>
    &lt;capabilities&gt;<br>
    <br>
      &lt;host&gt;<br>
        &lt;uuid&gt;00020003-0004-0005-0006-000700080009&lt;/uuid&gt;<br>
        &lt;cpu&gt;<br>
          &lt;arch&gt;x86_64&lt;/arch&gt;<br>
          &lt;model&gt;SandyBridge&lt;/model&gt;<br>
          &lt;vendor&gt;Intel&lt;/vendor&gt;<br>
          &lt;topology sockets='1' cores='6' threads='2'/&gt;<br>
          &lt;feature name='pdpe1gb'/&gt;<br>
          &lt;feature name='osxsave'/&gt;<br>
          &lt;feature name='tsc-deadline'/&gt;<br>
          &lt;feature name='dca'/&gt;<br>
          &lt;feature name='pdcm'/&gt;<br>
          &lt;feature name='xtpr'/&gt;<br>
          &lt;feature name='tm2'/&gt;<br>
          &lt;feature name='est'/&gt;<br>
          &lt;feature name='smx'/&gt;<br>
          &lt;feature name='vmx'/&gt;<br>
          &lt;feature name='ds_cpl'/&gt;<br>
          &lt;feature name='monitor'/&gt;<br>
          &lt;feature name='dtes64'/&gt;<br>
          &lt;feature name='pbe'/&gt;<br>
          &lt;feature name='tm'/&gt;<br>
          &lt;feature name='ht'/&gt;<br>
          &lt;feature name='ss'/&gt;<br>
          &lt;feature name='acpi'/&gt;<br>
          &lt;feature name='ds'/&gt;<br>
          &lt;feature name='vme'/&gt;<br>
        &lt;/cpu&gt;<br>
        &lt;power_management&gt;<br>
          &lt;suspend_disk/&gt;<br>
        &lt;/power_management&gt;<br>
        &lt;migration_features&gt;<br>
          &lt;live/&gt;<br>
          &lt;uri_transports&gt;<br>
            &lt;uri_transport&gt;tcp&lt;/uri_transport&gt;<br>
          &lt;/uri_transports&gt;<br>
        &lt;/migration_features&gt;<br>
        &lt;topology&gt;<br>
          &lt;cells num='2'&gt;<br>
            &lt;cell id='0'&gt;<br>
              &lt;cpus num='12'&gt;<br>
                &lt;cpu id='0'/&gt;<br>
                &lt;cpu id='1'/&gt;<br>
                &lt;cpu id='2'/&gt;<br>
                &lt;cpu id='3'/&gt;<br>
                &lt;cpu id='4'/&gt;<br>
                &lt;cpu id='5'/&gt;<br>
                &lt;cpu id='12'/&gt;<br>
                &lt;cpu id='13'/&gt;<br>
                &lt;cpu id='14'/&gt;<br>
                &lt;cpu id='15'/&gt;<br>
                &lt;cpu id='16'/&gt;<br>
                &lt;cpu id='17'/&gt;<br>
              &lt;/cpus&gt;<br>
            &lt;/cell&gt;<br>
            &lt;cell id='1'&gt;<br>
              &lt;cpus num='12'&gt;<br>
                &lt;cpu id='6'/&gt;<br>
                &lt;cpu id='7'/&gt;<br>
                &lt;cpu id='8'/&gt;<br>
                &lt;cpu id='9'/&gt;<br>
                &lt;cpu id='10'/&gt;<br>
                &lt;cpu id='11'/&gt;<br>
                &lt;cpu id='18'/&gt;<br>
                &lt;cpu id='19'/&gt;<br>
                &lt;cpu id='20'/&gt;<br>
                &lt;cpu id='21'/&gt;<br>
                &lt;cpu id='22'/&gt;<br>
                &lt;cpu id='23'/&gt;<br>
              &lt;/cpus&gt;<br>
            &lt;/cell&gt;<br>
          &lt;/cells&gt;<br>
        &lt;/topology&gt;<br>
      &lt;/host&gt;<br>
    <br>
      &lt;guest&gt;<br>
        &lt;os_type&gt;hvm&lt;/os_type&gt;<br>
        &lt;arch name='i686'&gt;<br>
          &lt;wordsize&gt;32&lt;/wordsize&gt;<br>
          &lt;emulator&gt;/usr/libexec/qemu-kvm&lt;/emulator&gt;<br>
          &lt;machine&gt;rhel6.3.0&lt;/machine&gt;<br>
          &lt;machine canonical='rhel6.3.0'&gt;pc&lt;/machine&gt;<br>
          &lt;machine&gt;rhel6.2.0&lt;/machine&gt;<br>
          &lt;machine&gt;rhel6.1.0&lt;/machine&gt;<br>
          &lt;machine&gt;rhel6.0.0&lt;/machine&gt;<br>
          &lt;machine&gt;rhel5.5.0&lt;/machine&gt;<br>
          &lt;machine&gt;rhel5.4.4&lt;/machine&gt;<br>
          &lt;machine&gt;rhel5.4.0&lt;/machine&gt;<br>
          &lt;domain type='qemu'&gt;<br>
          &lt;/domain&gt;<br>
          &lt;domain type='kvm'&gt;<br>
            &lt;emulator&gt;/usr/libexec/qemu-kvm&lt;/emulator&gt;<br>
          &lt;/domain&gt;<br>
        &lt;/arch&gt;<br>
        &lt;features&gt;<br>
          &lt;cpuselection/&gt;<br>
          &lt;deviceboot/&gt;<br>
          &lt;pae/&gt;<br>
          &lt;nonpae/&gt;<br>
          &lt;acpi default='on' toggle='yes'/&gt;<br>
          &lt;apic default='on' toggle='no'/&gt;<br>
        &lt;/features&gt;<br>
      &lt;/guest&gt;<br>
    <br>
      &lt;guest&gt;<br>
        &lt;os_type&gt;hvm&lt;/os_type&gt;<br>
        &lt;arch name='x86_64'&gt;<br>
          &lt;wordsize&gt;64&lt;/wordsize&gt;<br>
          &lt;emulator&gt;/usr/libexec/qemu-kvm&lt;/emulator&gt;<br>
          &lt;machine&gt;rhel6.3.0&lt;/machine&gt;<br>
          &lt;machine canonical='rhel6.3.0'&gt;pc&lt;/machine&gt;<br>
          &lt;machine&gt;rhel6.2.0&lt;/machine&gt;<br>
          &lt;machine&gt;rhel6.1.0&lt;/machine&gt;<br>
          &lt;machine&gt;rhel6.0.0&lt;/machine&gt;<br>
          &lt;machine&gt;rhel5.5.0&lt;/machine&gt;<br>
          &lt;machine&gt;rhel5.4.4&lt;/machine&gt;<br>
          &lt;machine&gt;rhel5.4.0&lt;/machine&gt;<br>
          &lt;domain type='qemu'&gt;<br>
          &lt;/domain&gt;<br>
          &lt;domain type='kvm'&gt;<br>
            &lt;emulator&gt;/usr/libexec/qemu-kvm&lt;/emulator&gt;<br>
          &lt;/domain&gt;<br>
        &lt;/arch&gt;<br>
        &lt;features&gt;<br>
          &lt;cpuselection/&gt;<br>
          &lt;deviceboot/&gt;<br>
          &lt;acpi default='on' toggle='yes'/&gt;<br>
          &lt;apic default='on' toggle='no'/&gt;<br>
        &lt;/features&gt;<br>
      &lt;/guest&gt;<br>
    <br>
    &lt;/capabilities&gt;<br>
    <br>
And here is another odd thing: virsh freecell only reports a total,
    not a per-node list:<br>
    <br>
    # virsh freecell<br>
    Total: 15891284 kB<br>
    <br>
    According to this Fedora page<br>
    <a
href="http://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/ch25s06.html">http://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/ch25s06.html</a><br>
I should see a per-node list.<br>
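<br>
    Perhaps the per-node list just needs an explicit flag on this virsh
    build? I have not yet checked whether the libvirt shipped with CentOS
    6.3 supports these options, but they seem worth trying:<br>
    <br>
    # virsh freecell --all<br>
    # virsh freecell --cellno 0<br>
    # numactl --hardware<br>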
    <br>
Anyway, my Debian guest still does not boot when I assign more
    than 4 vCPUs to it, even if I pin all of its vCPUs to the same NUMA node.<br>
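<br>
    One more thing I might still try (untested on my setup, so just a
    sketch): pinning the vCPUs does not pin the guest's memory, so I could
    also bind the memory to the same cell with a numatune element, e.g. for
    cell 1 (which holds cpus 6-9,18-21 in my capabilities output above):<br>
    <br>
    &lt;numatune&gt;<br>
      &lt;memory mode='strict' nodeset='1'/&gt;<br>
    &lt;/numatune&gt;<br>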
    <br>
BTW, I have copied my host CPU's configuration and CPU features
    into my guests (using the virt-manager GUI, running remotely on an
    Ubuntu desktop box). Maybe I should use a predefined CPU model instead
    of cloning the CPU configuration from the host?<br>
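<br>
    For example (just a sketch of what I have in mind, not yet tested with
    the rhel6.3.0 machine type), either letting libvirt pick the closest
    model:<br>
    <br>
    &lt;cpu mode='host-model'&gt;<br>
      &lt;model fallback='allow'/&gt;<br>
    &lt;/cpu&gt;<br>
    <br>
    or naming a predefined model explicitly:<br>
    <br>
    &lt;cpu match='exact'&gt;<br>
      &lt;model&gt;SandyBridge&lt;/model&gt;<br>
    &lt;/cpu&gt;<br>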
    <br>
    Zoltan<br>
    <br>
    <div class="moz-cite-prefix">On 10/24/2012 5:58 PM,
      <a class="moz-txt-link-abbreviated" href="mailto:bertrand.louargant@atoutlinux.net">bertrand.louargant@atoutlinux.net</a> wrote:<br>
    </div>
    <blockquote
cite="mid:648353920.241117.1351094280412.JavaMail.open-xchange@email.1and1.fr"
      type="cite">
      <div> Hello, </div>
      <div>   </div>
<div> You have a NUMA (Non Uniform Memory Access) machine, which
        means that each processor has its own memory controller. </div>
      <div> virsh nodeinfo gives you 2 NUMA cells with 1 CPU socket each:
        2 NUMA cells x 1 CPU socket x 6 cores per socket x 2 threads
        per core = 24 "cores". </div>
      <div>   </div>
<div> The NUMA concept is really important, especially in
        virtualization. </div>
      <div> If you have a virtual machine with vCPUs spread across more
        than one NUMA cell, performance will drop drastically. </div>
      <div>   </div>
<div> Maybe you cannot assign more than 4 vCPUs to your VM because
        Libvirt cannot pin them all to the same NUMA cell ... </div>
      <div> You can try to specify the NUMA architecture in the XML
        config. </div>
      <div>   </div>
      <div> Br, </div>
      <div> Bertrand. </div>
      <div> <br>
On October 24, 2012, at 17:14, Zoltan Frombach
        <a class="moz-txt-link-rfc2396E" href="mailto:zoltan@frombach.com">&lt;zoltan@frombach.com&gt;</a> wrote: <br>
        &gt; Hi, <br>
        &gt; <br>
        &gt; Please let me know in case I am posting my question to the
        wrong forum. <br>
        &gt; I apologize if that is the case! <br>
        &gt; <br>
        &gt; Here is my question: <br>
        &gt; <br>
&gt; We run CentOS 6.3 on a server with dual Xeon CPUs. Our
        "dual blade" <br>
        &gt; server uses this motherboard: <br>
        &gt;
        <a class="moz-txt-link-freetext" href="http://www.supermicro.com/products/motherboard/Xeon/C600/X9DRT-HF.cfm">http://www.supermicro.com/products/motherboard/Xeon/C600/X9DRT-HF.cfm</a>
        <br>
        &gt; <br>
        &gt; We have two of these CPUs installed and working: <br>
        &gt; Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz <br>
        &gt; ( <br>
        &gt;
        <a class="moz-txt-link-freetext" href="http://ark.intel.com/products/64594/Intel-Xeon-Processor-E5-2620-15M-Cache-2_00-GHz-7_20-GTs-Intel-QPI">http://ark.intel.com/products/64594/Intel-Xeon-Processor-E5-2620-15M-Cache-2_00-GHz-7_20-GTs-Intel-QPI</a>
        <br>
        &gt; ) <br>
        &gt; <br>
&gt; cat /proc/cpuinfo correctly reports a total of 24 cores (2
        x 6 physical <br>
        &gt; cores plus 2 x 6 hyperthreading cores) <br>
        &gt; <br>
        &gt; However, I get this output from virsh nodeinfo : <br>
        &gt; <br>
        &gt; # virsh nodeinfo <br>
        &gt; CPU model: x86_64 <br>
        &gt; CPU(s): 24 <br>
        &gt; CPU frequency: 2000 MHz <br>
        &gt; CPU socket(s): 1 <br>
        &gt; Core(s) per socket: 6 <br>
        &gt; Thread(s) per core: 2 <br>
        &gt; NUMA cell(s): 2 <br>
        &gt; Memory size: 16303552 kB <br>
        &gt; <br>
        &gt; As you can see, virsh nodeinfo reports only 1 CPU socket
        while in fact <br>
        &gt; we have two CPU's. <br>
        &gt; <br>
&gt; I would like to know: is this normal? Why does virsh
        report only one <br>
        &gt; physical CPU? <br>
        &gt; <br>
        &gt; Also, when we try to run a guest OS (Debian Linux
        "squeeze") with more <br>
        &gt; than 4 vcpu's assigned to the VM, the guest OS won't boot
        up. The <br>
&gt; guest's kernel gets stuck on a screen right after it detects
        the /dev/vda <br>
        &gt; block device and its partitions. We're using the VirtIO
        driver, of <br>
        &gt; course. If I assign only 4 (or less) vcpu's to the guest OS
        it works <br>
&gt; fine. I have tried to upgrade the Linux kernel on the guest
        from Debian <br>
        &gt; backports, but it did not help; we're experiencing the same
        issue with both <br>
        &gt; the 2.6.32 and 3.2 Linux kernels. What could be causing
        this? <br>
        &gt; <br>
        &gt; On the host, we use the Linux kernel that came with CentOS
        6.3 : <br>
        &gt; 2.6.32-279.11.1.el6.x86_64 #1 SMP Tue Oct 16 15:57:10 UTC
        2012 x86_64 <br>
        &gt; x86_64 x86_64 GNU/Linux <br>
        &gt; <br>
        &gt; Thanks, <br>
        &gt; <br>
        &gt; Zoltan <br>
        &gt; _______________________________________________ <br>
        &gt; CentOS-virt mailing list <br>
        &gt; <a class="moz-txt-link-abbreviated" href="mailto:CentOS-virt@centos.org">CentOS-virt@centos.org</a> <br>
        &gt; <a class="moz-txt-link-freetext" href="http://lists.centos.org/mailman/listinfo/centos-virt">http://lists.centos.org/mailman/listinfo/centos-virt</a> </div>
    </blockquote>
    <br>
  </body>
</html>