Hi list,
I tested the upcoming CentOS 6.4 release with my lab installation of oVirt 3.1, and the two do not play well together.
Background: freshly installed CentOS 6.3 host in a Nehalem CPU-type Cluster with 2 other hosts. Storage is iSCSI. Datacenter and Cluster are both version 3.1. oVirt 3.1 was installed via Dreyou's repo.
In CentOS 6.3 all is fine and the following rpms are installed:
libvirt.x86_64               0.9.10-21.el6_3.8
libvirt-client.x86_64        0.9.10-21.el6_3.8
libvirt-lock-sanlock.x86_64  0.9.10-21.el6_3.8
libvirt-python.x86_64        0.9.10-21.el6_3.8
vdsm.x86_64                  4.10.0-0.46.15.el6
vdsm-cli.noarch              4.10.0-0.46.15.el6
vdsm-python.x86_64           4.10.0-0.46.15.el6
vdsm-xmlrpc.noarch           4.10.0-0.46.15.el6
qemu-kvm.x86_64              2:0.12.1.2-2.295.el6_3.10
uname -a
Linux vh-test1.mydomain.com 2.6.32-279.22.1.el6.x86_64 #1 SMP Wed Feb 6 03:10:46 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
virsh cpu capabilities on 6.3:

<cpu>
  <arch>x86_64</arch>
  <model>Nehalem</model>
  <vendor>Intel</vendor>
  <topology sockets='1' cores='4' threads='1'/>
  <feature name='rdtscp'/>
  <feature name='pdcm'/>
  <feature name='xtpr'/>
  <feature name='tm2'/>
  <feature name='est'/>
  <feature name='smx'/>
  <feature name='vmx'/>
  <feature name='ds_cpl'/>
  <feature name='monitor'/>
  <feature name='dtes64'/>
  <feature name='pbe'/>
  <feature name='tm'/>
  <feature name='ht'/>
  <feature name='ss'/>
  <feature name='acpi'/>
  <feature name='ds'/>
  <feature name='vme'/>
</cpu>
and corresponding cpu features from vdsClient:
cpuCores = 4
cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,
           cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,
           tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,
           pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,
           dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,
           pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,
           flexpriority,ept,vpid,model_Conroe,model_Penryn,
           model_Nehalem
cpuModel = Intel(R) Xeon(R) CPU X3430 @ 2.40GHz
cpuSockets = 1
cpuSpeed = 2394.132
The system was then updated to 6.4 using the continuous release (CR) repo.
Installed rpms after update to 6.4 (6.3 + CR):
libvirt.x86_64               0.10.2-18.el6
libvirt-client.x86_64        0.10.2-18.el6
libvirt-lock-sanlock.x86_64  0.10.2-18.el6
libvirt-python.x86_64        0.10.2-18.el6
vdsm.x86_64                  4.10.0-0.46.15.el6
vdsm-cli.noarch              4.10.0-0.46.15.el6
vdsm-python.x86_64           4.10.0-0.46.15.el6
vdsm-xmlrpc.noarch           4.10.0-0.46.15.el6
qemu-kvm.x86_64              2:0.12.1.2-2.355.el6_4_4.1
uname -a
Linux vh-test1.mydomain.com 2.6.32-358.0.1.el6.x86_64 #1 SMP Wed Feb 27 06:06:45 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
virsh capabilities on 6.4:

<cpu>
  <arch>x86_64</arch>
  <model>Nehalem</model>
  <vendor>Intel</vendor>
  <topology sockets='1' cores='4' threads='1'/>
  <feature name='rdtscp'/>
  <feature name='pdcm'/>
  <feature name='xtpr'/>
  <feature name='tm2'/>
  <feature name='est'/>
  <feature name='smx'/>
  <feature name='vmx'/>
  <feature name='ds_cpl'/>
  <feature name='monitor'/>
  <feature name='dtes64'/>
  <feature name='pbe'/>
  <feature name='tm'/>
  <feature name='ht'/>
  <feature name='ss'/>
  <feature name='acpi'/>
  <feature name='ds'/>
  <feature name='vme'/>
</cpu>
and corresponding cpu features from vdsClient:
cpuCores = 4
cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,
           cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,
           tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,
           pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,
           dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,
           pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,
           flexpriority,ept,vpid,model_coreduo,model_Conroe
cpuModel = Intel(R) Xeon(R) CPU X3430 @ 2.40GHz
cpuSockets = 1
cpuSpeed = 2394.098
Full outputs of virsh capabilities and vdsCaps are attached. The differences I can see are that 6.4 exposes one additional cpu flag (sep), and that the model_* entries vdsm derives have changed (model_Penryn and model_Nehalem are gone, model_coreduo appeared); this seems to break vdsm's cpu recognition.
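For what it's worth, diffing the two cpuFlags strings mechanically (a throwaway Python snippet, nothing vdsm-specific; both strings are copied verbatim from the vdsClient outputs above) makes the change easy to see:

```python
# Compare the cpuFlags reported by vdsClient on 6.3 vs. 6.4.
# The two strings are copied from the outputs quoted above.

flags_63 = set("""fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,
cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,
tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,
pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,
dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,
pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,
flexpriority,ept,vpid,model_Conroe,model_Penryn,
model_Nehalem""".replace("\n", "").split(","))

flags_64 = set("""fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,
cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,
tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,
pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,
dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,
pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,
flexpriority,ept,vpid,model_coreduo,model_Conroe""".replace("\n", "").split(","))

print("added in 6.4:  ", sorted(flags_64 - flags_63))
# added in 6.4:   ['model_coreduo', 'sep']
print("removed in 6.4:", sorted(flags_63 - flags_64))
# removed in 6.4: ['model_Nehalem', 'model_Penryn']
```

So beyond the raw sep flag, the derived model_* pseudo-flags regressed from Nehalem/Penryn down to coreduo, which would match vdsm no longer recognizing the host as Nehalem-compatible.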
Does anyone have hints on how to resolve or debug this further? What additional information can I provide to help?
Best regards,
Patrick
On 03/04/2013 11:03 AM, Patrick Hurrelmann wrote:
[...]
If you boot the older kernel on the new install, does it work as it did previously?
On 04.03.2013 23:40, Johnny Hughes wrote:
On 03/04/2013 11:03 AM, Patrick Hurrelmann wrote:
[...]
If you boot the older kernel on the new install, does it work as it did previously?
Hi,
Good idea. I tried that, but it doesn't change anything. The output from virsh and vdsClient is the same as with the 6.4 kernel.
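Since the old kernel behaves the same, the libvirt upgrade (0.9.10 to 0.10.2) looks like the more likely culprit than the kernel. One thing I plan to check is whether the Nehalem model definition in libvirt's CPU map changed between the two versions; on el6 the map lives at /usr/share/libvirt/cpu_map.xml. A minimal sketch of pulling out a model's required features (using a small hypothetical stand-in document so it runs anywhere, not the real file):

```python
# Sketch: extract a model's required features from a libvirt-style CPU map.
# SAMPLE is a hypothetical stand-in for /usr/share/libvirt/cpu_map.xml;
# the real file on el6 nests models under an <arch> element and is larger.
import xml.etree.ElementTree as ET

SAMPLE = """<cpus>
  <model name='Nehalem'>
    <model name='Penryn'/>
    <feature name='sse4.2'/>
  </model>
</cpus>"""

root = ET.fromstring(SAMPLE)
for model in root.iter('model'):
    if model.get('name') == 'Nehalem':
        # Features listed directly on the model (inherited base models,
        # like Penryn here, appear as nested <model/> references).
        feats = [f.get('name') for f in model.findall('feature')]
        print('Nehalem requires (in this sample):', feats)
        # Nehalem requires (in this sample): ['sse4.2']
```

Diffing that output between the 6.3 and 6.4 libvirt packages should show whether the map itself changed or whether vdsm is just reacting differently to the same map.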
Regards,
Patrick