I've been running into a reproducible problem when using default LVM volume group names to present block devices for virtual machines in KVM, and I'm wondering why it is happening.
On dom0 I make a default VolGroup00 for the operating system. I make a second VolGroup01 for logical volumes that will be block devices for virtual systems.
In VolGroup01, I make two LVs for one system: lv.sys1 and lv.sys1-data.
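For reference, I created that layout with roughly the following commands (/dev/sda2 and the 40 GB data size match the pvdisplay output below; the root LV size here is illustrative):

pvcreate /dev/sda2
vgcreate VolGroup01 /dev/sda2
lvcreate -L 20G -n lv.sys1 VolGroup01        # root disk for the sys1 guest
lvcreate -L 40G -n lv.sys1-data VolGroup01   # independent data disk for sys1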
I then build a new virtual machine called sys1, using lv.sys1 for the root filesystem and lv.sys1-data for an independent data partition. Everything works great after installation, and vgdisplay looks correct on both systems.
If I then run vgscan on the host system, however, it picks up the VolGroup01 I created _within_ the virtual machine, so I now have two VolGroup01s with different UUIDs showing up on dom0.
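You can see which device each duplicate actually lives on with the standard LVM2 reporting options, e.g.:

vgs -o vg_name,vg_uuid
pvs -o pv_name,vg_name,vg_uuid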
Now I can see how vgscan would mistakenly see the VolGroup01 of sys1 on the block device lv.sys1-data, but why are the VolGroup00 VGs not colliding as well?
When pvdisplay is run, I have a new "physical volume" that is actually just a logical volume in the original VolGroup01:
[root@iain2 ~]# pvdisplay
  WARNING: Duplicate VG name VolGroup01: Existing FNiKc9-BB3t-ziMg-prWW-n8RA-OMzk-obiKnf (created here) takes precedence over C8fNMV-aeSW-syIn-fWJZ-vJdK-N0As-Itrvfi
  WARNING: Duplicate VG name VolGroup01: Existing FNiKc9-BB3t-ziMg-prWW-n8RA-OMzk-obiKnf (created here) takes precedence over C8fNMV-aeSW-syIn-fWJZ-vJdK-N0As-Itrvfi
  --- Physical volume ---
  PV Name               /dev/VolGroup01/lv-sys1-data
  VG Name               VolGroup01
  PV Size               40.00 GB / not usable 4.00 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              10239
  Free PE               0
  Allocated PE          10239
  PV UUID               FTA4QU-ydZ7-e2Yy-nBsi-t4st-3jj7-IAkQH8

  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               VolGroup00
  PV Size               39.06 GB / not usable 29.77 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              1249
  Free PE               0
  Allocated PE          1249
  PV UUID               tTViks-3lBM-HGzV-mnN9-zRsT-fFT0-ZsJRse

  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup01
  PV Size               240.31 GB / not usable 25.75 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              7689
  Free PE               5129
  Allocated PE          2560
  PV UUID               ZE5Io3-WYIO-EfOQ-h03q-zGdF-Frpa-tm63fX
Has anyone experienced this? It's very unnerving not to know whether your data is intact as you add new logical volumes for KVM systems. I suppose the lesson learned here is to give VGs host-specific names.
On 10/25/2010 12:31 PM, Iain Morris wrote:
I then build a new virtual machine called sys1, using lv.sys1 for the root filesystem and lv.sys1-data for an independent data partition. Everything works great after installation, and vgdisplay looks correct on both systems.
If I then run vgscan on the host system, however, it picks up the VolGroup01 I created _within_ the virtual machine, so I now have two VolGroup01s with different UUIDs showing up on dom0.
Which block devices are you exporting to your guest? Post the libvirt configuration file for it.
On Mon, Oct 25, 2010 at 2:18 PM, Gordon Messmer yinyang@eburg.com wrote:
Which block devices are you exporting to your guest? Post the libvirt configuration file for it.
See below. It's specifically the second volume group that collides between the virtual and physical systems. Both dom0 and domU have identical VolGroup00 VGs, but these do not collide.
Renaming the VG used by the domU within the domU removes the collision, but the newly-renamed VG still shows up in dom0 as a usable VG with space to be allocated.
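The rename inside the guest was roughly the following (vg-sys1 is the new name that shows up in the pvdisplay output below; /etc/fstab in the guest then has to be updated to the new /dev/vg-sys1/... paths):

vgrename VolGroup01 vg-sys1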
Here is the pvdisplay output from dom0. Interestingly, it shows /dev/VolGroup01/lv-sys1-data as a "physical volume" when it's obviously just an LV in the original VolGroup01 VG. And this only happens with the _second_ volume group created. VolGroup00 is not an issue on this or any other system I've used:
[root@iain2 qemu]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/VolGroup01/lv-sys1-data
  VG Name               vg-sys1
  PV Size               40.00 GB / not usable 4.00 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              10239
  Free PE               0
  Allocated PE          10239
  PV UUID               FTA4QU-ydZ7-e2Yy-nBsi-t4st-3jj7-IAkQH8

  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               VolGroup00
  PV Size               39.06 GB / not usable 29.77 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              1249
  Free PE               0
  Allocated PE          1249
  PV UUID               tTViks-3lBM-HGzV-mnN9-zRsT-fFT0-ZsJRse

  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup01
  PV Size               240.31 GB / not usable 25.75 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              7689
  Free PE               5129
  Allocated PE          2560
  PV UUID               ZE5Io3-WYIO-EfOQ-h03q-zGdF-Frpa-tm63fX
[root@iain2 qemu]# cat sys1.xml
<domain type='kvm'>
  <name>sys1</name>
  <uuid>37f34394-d380-d2c4-ac37-3263c16028ff</uuid>
  <memory>524288</memory>
  <currentMemory>524288</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='rhel5.4.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' cache='none'/>
      <source dev='/dev/VolGroup01/lv-sys1'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='block' device='cdrom'>
      <target dev='hdc' bus='ide'/>
      <readonly/>
    </disk>
    <disk type='block' device='disk'>
      <source dev='/dev/VolGroup01/lv-sys1-data'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    <interface type='network'>
      <mac address='54:52:00:3b:4a:f5'/>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
    </serial>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
  </devices>
</domain>
On Oct 25, 2010, at 3:31 PM, Iain Morris iain.t.morris@gmail.com wrote:
If I then run vgscan on the host system, however, it picks up the VolGroup01 I created _within_ the virtual machine, so I now have two VolGroup01s with different UUIDs showing up on dom0.
Now I can see how vgscan would mistakenly see the VolGroup01 of sys1 on the block device lv.sys1-data, but why are the VolGroup00 VGs not colliding as well?
You need to exclude the LVs in the host VG from being scanned for sub-VGs. It's actually easier to just list what SHOULD be scanned rather than what shouldn't.
Look in /etc/lvm/lvm.conf
You can also avoid this by creating partition-based PVs in the VMs rather than whole-disk PVs; the host would then need kpartx run on the LV before LVM could even scan them.
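For example, something along these lines inside the guest puts the PV label inside a partition where the host's vgscan can't reach it (/dev/vdb is the data disk from your domain XML; parted syntax may vary a bit by version):

parted -s /dev/vdb mklabel msdos
parted -s /dev/vdb mkpart primary 1 100%
pvcreate /dev/vdb1
vgcreate VolGroup01 /dev/vdb1   # even a colliding VG name stays invisible to the host now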
-Ross
On Tue, Oct 26, 2010 at 6:48 AM, Ross Walker rswwalker@gmail.com wrote:
You need to exclude the LVs in the host VG from being scanned for sub-VGs. It's actually easier to just list what SHOULD be scanned rather than what shouldn't.
Look in /etc/lvm/lvm.conf
This worked, thanks. A couple of people emailed me separately on this. For others' reference, I added the following filter to lvm.conf on dom0 and disabled the default "get everything" filter. If anyone sees any pitfalls with this regex, I'm sure you'll let me know.
(assuming your physical disks are SAS/SATA)
filter = [ "a|^/dev/sd|", "r|.*|" ]
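In context, the devices section of lvm.conf now has the stock accept-everything line commented out:

devices {
    # filter = [ "a/.*/" ]
    filter = [ "a|^/dev/sd|", "r|.*|" ]
}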
Thanks for the help,
-Iain
On Oct 27, 2010, at 3:02 PM, Iain Morris iain.t.morris@gmail.com wrote:
On Tue, Oct 26, 2010 at 6:48 AM, Ross Walker rswwalker@gmail.com wrote:
You need to exclude the LVs in the host VG from being scanned for sub-VGs. It's actually easier to just list what SHOULD be scanned rather than what shouldn't.
Look in /etc/lvm/lvm.conf
This worked, thanks. A couple of people emailed me separately on this. For others' reference, I added the following filter to lvm.conf on dom0 and disabled the default "get everything" filter. If anyone sees any pitfalls with this regex, I'm sure you'll let me know.
(assuming your physical disks are SAS/SATA)
filter = [ "a|^/dev/sd|", "r|.*|" ]
For all HDs this should work:
filter = [ "a|^/dev/[hs]d|", "r|.*|" ]
Run a 'vgscan -vv' to test.
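You can also try a candidate filter without editing lvm.conf at all, assuming your LVM2 build supports the --config override:

pvs --config 'devices { filter = [ "a|^/dev/[hs]d|", "r|.*|" ] }'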
-Ross