[CentOS-virt] Nested KVM issue

Boris Derzhavets bderzhavets at hotmail.com
Sun Aug 14 20:27:21 UTC 2016


The reports you posted look good to me. The config should provide the best available performance
for the cloud VMs (L2) on the Compute Node.


 1. Please remind me what goes wrong from your standpoint?

 2. Which CPU is installed on the Compute Node, and how much RAM?

    Actually, my concern is the ratio of

    Number_of_Cloud_VMs versus Number_of_CPU_Cores (not threads).

    Please also check the `top` report with regard to swap usage (a quick sketch follows).
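
A quick way to collect those numbers on the Compute Node (a rough sketch, nothing specific to your setup):

# lscpu | grep -E '^(Socket|Core|Thread|CPU\(s\))'    # sockets, cores per socket, threads per core
# free -h                                             # total RAM and how much swap is in use
# top -b -n 1 | head -n 5                             # load average plus the Mem/Swap summary lines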


Thanks.

Boris.

________________________________
From: centos-virt-bounces at centos.org <centos-virt-bounces at centos.org> on behalf of Laurentiu Soica <laurentiu at soica.ro>
Sent: Sunday, August 14, 2016 3:06 PM
To: Discussion about the virtualization on CentOS
Subject: Re: [CentOS-virt] Nested KVM issue

Hello,

1. <domain type='kvm' id='6'>
  <name>baremetalbrbm_1</name>
  <uuid>534e9b54-5e4c-4acb-adcf-793f841551a7</uuid>
  <memory unit='KiB'>104857600</memory>
  <currentMemory unit='KiB'>104857600</currentMemory>
  <vcpu placement='static'>36</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-passthrough'/>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='unsafe'/>
      <source file='/var/lib/libvirt/images/baremetalbrbm_1.qcow2'/>
      <backingStore/>
      <target dev='sda' bus='sata'/>
      <alias name='sata0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='usb' index='0'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='sata0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:f1:15:20:c5:46'/>
      <source network='brbm' bridge='brbm'/>
      <virtualport type='openvswitch'>
        <parameters interfaceid='654ad04f-fa0a-41dd-9d30-b84e702462fe'/>
      </virtualport>
      <target dev='vnet5'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:d3:c9:24'/>
      <source bridge='br57'/>
      <target dev='vnet6'/>
      <model type='rtl8139'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5903' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>

2.
[root@overcloud-novacompute-0 ~]# lsmod | grep kvm
kvm_intel             162153  70
kvm                   525409  1 kvm_intel

[root@overcloud-novacompute-0 ~]# cat /etc/nova/nova.conf | grep virt_type|grep -v '^#'
virt_type=kvm

[root@overcloud-novacompute-0 ~]#  cat /etc/nova/nova.conf | grep  cpu_mode|grep -v '^#'
cpu_mode=host-passthrough
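
For what it's worth, with <cpu mode='host-passthrough'/> on the L1 domain and kvm_intel loaded, hardware virtualization should be visible inside the compute VM itself; a quick sanity check there (a sketch, not part of the original outputs):

# grep -c vmx /proc/cpuinfo    # a non-zero count means the vCPUs expose Intel VT-x
# ls -l /dev/kvm               # /dev/kvm must exist inside the compute VM for virt_type=kvm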

Thanks,
Laurentiu

On Sun, 14 Aug 2016 at 21:44, Boris Derzhavets <bderzhavets at hotmail.com> wrote:



________________________________
From: centos-virt-bounces at centos.org <centos-virt-bounces at centos.org> on behalf of Laurentiu Soica <laurentiu at soica.ro>
Sent: Sunday, August 14, 2016 10:17 AM
To: Discussion about the virtualization on CentOS
Subject: Re: [CentOS-virt] Nested KVM issue

More details on the subject:

I suspect it is a nested KVM issue because it appeared after I enabled the nested KVM feature. Without nested KVM, in any case, the second-level VMs are unusable in terms of performance.

I am using CentOS 7 with:

kernel: 3.10.0-327.22.2.el7.x86_64
qemu-kvm: 1.5.3-105.el7_2.4
libvirt: 1.2.17-13.el7_2.5

on both the baremetal and the compute VM.

Please post:

1) # virsh dumpxml VM-L1    (where VM-L1 is the L1 guest in which you expect nested KVM to work)
2) Log in to VM-L1 and run:
    # lsmod | grep kvm
3) From VM-L1 (in case it is the Compute Node), I also need the output of:

# cat /etc/nova/nova.conf | grep virt_type
# cat /etc/nova/nova.conf | grep cpu_mode
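
It is also worth confirming that nested support is actually turned on for kvm_intel on the baremetal host; a minimal check and, if needed, enablement sketch (the file name under /etc/modprobe.d is only an example):

# cat /sys/module/kvm_intel/parameters/nested         # Y (or 1) means nested KVM is enabled
# echo 'options kvm_intel nested=1' > /etc/modprobe.d/kvm_intel.conf
# modprobe -r kvm_intel && modprobe kvm_intel          # reload only while all L1 guests are stopped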

Boris.





The only workaround for now is to shut down the compute VM and start it again from the baremetal host with virsh start (a quick sketch follows below).
A simple reboot of the compute node doesn't help; it looks like the qemu-kvm process corresponding to the compute VM is the problem.
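
In shell terms the workaround amounts to the following on the baremetal host (domain name taken from the dumpxml above, assuming that is the compute VM; virsh destroy is the harder stop if the guest no longer responds):

# virsh shutdown baremetalbrbm_1     # or: virsh destroy baremetalbrbm_1
# virsh start baremetalbrbm_1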

Laurentiu

On Sun, 14 Aug 2016 at 00:19, Laurentiu Soica <laurentiu at soica.ro> wrote:
Hello,

I have an OpenStack setup in virtual environment on CentOS 7.

The baremetal has nested KVM enabled and 1 compute node as a VM.

Inside the compute node I have multiple VMs running.

Roughly every 3 days the VMs become inaccessible and the compute node shows high CPU usage: the qemu-kvm process for each VM inside the compute node sits at full CPU (one way to investigate is sketched below).
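
For reference, one way to map the spinning qemu-kvm processes back to their instances and see what they are doing (a debugging sketch; the perf step only applies if perf is installed on the compute node):

# ps -eo pid,pcpu,etime,args --sort=-pcpu | grep [q]emu-kvm | head
      # the Nova instance name shows up in the qemu-kvm command line
# perf kvm stat record -p <PID>      # <PID> = a spinning qemu-kvm PID; stop with Ctrl-C, then:
# perf kvm stat report               # shows which VM-exit reasons dominate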

Please help me with some hints to debug this issue.

Thanks,
Laurentiu
_______________________________________________
CentOS-virt mailing list
CentOS-virt at centos.org
https://lists.centos.org/mailman/listinfo/centos-virt