I've been running into a reproducible problem when using default LVM volume group names to present block devices for virtual machines in KVM, and I'm wondering why it is happening.

On dom0 I make a default VolGroup00 for the operating system. I make a second VolGroup01 for logical volumes that will be block devices for virtual systems.
In VolGroup01, I make two LVs for one system: lv.sys1 and lv.sys1-data.

I then build a new virtual machine called sys1, using lv.sys1 for the root filesystem and lv.sys1-data for an independent data partition. Everything works after installation, and vgdisplay output on both systems looks correct.
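For reference, the host-side setup looks roughly like this (the LV sizes and the use of virt-install are only an illustration from memory; the exact tool and numbers shouldn't matter to the question):

  # VolGroup01 sits on /dev/sda2; sizes below are approximate
  vgcreate VolGroup01 /dev/sda2
  lvcreate -L 10G -n lv.sys1 VolGroup01        # guest root disk
  lvcreate -L 40G -n lv.sys1-data VolGroup01   # guest data disk

  # both LVs handed to the guest as whole virtual disks
  virt-install --name sys1 --ram 1024 \
      --disk path=/dev/VolGroup01/lv.sys1 \
      --disk path=/dev/VolGroup01/lv.sys1-data \
      --cdrom /path/to/install.iso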
If I then run vgscan on the host system, however, it picks up the VolGroup01 I created _within_ the virtual machine, so I now have two VolGroup01s with different UUIDs showing up on dom0.
Now I can see how vgscan would mistakenly see sys1's VolGroup01 on the block device lv.sys1-data, but why are the VolGroup00 VGs not colliding as well?

When pvdisplay is run, I have a new "physical volume" that is actually just a logical volume of the original VolGroup01:
[root@iain2 ~]# pvdisplay
  WARNING: Duplicate VG name VolGroup01: Existing FNiKc9-BB3t-ziMg-prWW-n8RA-OMzk-obiKnf (created here) takes precedence over C8fNMV-aeSW-syIn-fWJZ-vJdK-N0As-Itrvfi
  WARNING: Duplicate VG name VolGroup01: Existing FNiKc9-BB3t-ziMg-prWW-n8RA-OMzk-obiKnf (created here) takes precedence over C8fNMV-aeSW-syIn-fWJZ-vJdK-N0As-Itrvfi
  --- Physical volume ---
  PV Name               /dev/VolGroup01/lv-sys1-data
  VG Name               VolGroup01
  PV Size               40.00 GB / not usable 4.00 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              10239
  Free PE               0
  Allocated PE          10239
  PV UUID               FTA4QU-ydZ7-e2Yy-nBsi-t4st-3jj7-IAkQH8

  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               VolGroup00
  PV Size               39.06 GB / not usable 29.77 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              1249
  Free PE               0
  Allocated PE          1249
  PV UUID               tTViks-3lBM-HGzV-mnN9-zRsT-fFT0-ZsJRse

  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup01
  PV Size               240.31 GB / not usable 25.75 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              7689
  Free PE               5129
  Allocated PE          2560
  PV UUID               ZE5Io3-WYIO-EfOQ-h03q-zGdF-Frpa-tm63fX

Has anyone experienced this? It's very unnerving not knowing whether your data is intact as you add new logical volumes for KVM systems. I suppose the lesson learned here is to give VGs host-specific names.
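Beyond naming, I'm also considering masking the guest-owned LVs from the host's LVM scan entirely with a filter in /etc/lvm/lvm.conf on dom0. This is untested, and the regex is just a sketch matching my lv.* naming, so treat it as an idea rather than a fix:

  # /etc/lvm/lvm.conf on dom0 (devices section)
  # Reject the LVs that are handed to guests so any PVs the guests create
  # inside them are never scanned; accept everything else.
  filter = [ "r|^/dev/VolGroup01/lv\..*|", "a|.*|" ]

After changing the filter, a fresh vgscan on the host should only report the host's own VGs, assuming I've got the pattern right.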
-- 
-- -
Iain Morris
iain.t.morris@gmail.com