Hey, (sorry for cross-posting; you may also find this message in centos-virt, but that was not intended and a mistake)
I have a system with an mdraid 1:
...
md1 : active raid1 sdb2[1] sda2[0]
      1465031488 blocks [2/2] [UU]
...
This RAID partition holds an LVM physical volume with one volume group and several logical volumes. The machine has been running for years and I seldom touch the LVM config. The LVM commands are giving me strange warnings that I am uncomfortable with:
...
# pvdisplay
  Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/sda2 not /dev/md1
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               vg_sys
  PV Size               1.36 TB / not usable 6.81 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              44709
  Free PE               17701
  Allocated PE          27008
  PV UUID               b79x0k-LXR9-mAC0-z0IZ-UxyJ-G1VC-24Crl7
...
What does this "Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/sda2 not /dev/md1" message mean? Some logical volumes are virtual disks for KVM guests. Are these guests using sda only and not the mdraid?
On 04.12.2013 13:43, Markus Falb wrote:
What does this "Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/sda2 not /dev/md1" message mean? Some logical volumes are virtual disks for KVM guests. Are these guests using sda only and not the mdraid?
Markus,
I see /etc/lvm/lvm.conf has an option to ignore md members, and it seems to be on by default in EL6: md_component_detection = 1
If you run "pvdisplay /dev/sda2", what does it show? Normally you should get: Failed to read physical volume "/dev/sda2".
Do you have such a thing in your /etc/lvm/lvm.conf? Additionally you can force a filter on the drives, something like: filter = ["r|/dev/sda2|"] (make sure you delete /etc/lvm/cache/.cache and regenerate it with vgscan so it doesn't contain old stuff)
On 04.Dec.2013, at 15:08, Nux! wrote:
On 04.12.2013 13:43, Markus Falb wrote:
What does this "Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/sda2 not /dev/md1" message mean? Some logical volumes are virtual disks for KVM guests. Are these guests using sda only and not the mdraid?
Markus,
I see /etc/lvm/lvm.conf has an option to ignore md members, and it seems to be on by default in EL6: md_component_detection = 1
This is CentOS 5, but it also has this in lvm.conf, and its value is 1.
If you run "pvdisplay /dev/sda2", what does it show? Normally you should get: Failed to read physical volume "/dev/sda2".
# pvs /dev/md1
  Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/sda2 not /dev/md1
  PV         VG     Fmt  Attr PSize PFree
  /dev/sda2  vg_sys lvm2 a--  1.36T 553.16G
# pvs /dev/sda2
  Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/md1 not /dev/sda2
  PV       VG     Fmt  Attr PSize PFree
  /dev/md1 vg_sys lvm2 a--  1.36T 553.16G
Additionally you can force a filter on the drives, something like: filter = ["r|/dev/sda2|"]
I might try that, but it is not necessary on other machines with the same setup. It was not necessary on *this* machine either (it has been running for several years).
Markus Falb wrote on Wed 04. 12. 2013 at 15:44 +0100:
On 04.Dec.2013, at 15:08, Nux! wrote:
On 04.12.2013 13:43, Markus Falb wrote:
What does this "Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/sda2 not /dev/md1" message mean? Some logical volumes are virtual disks for KVM guests. Are these guests using sda only and not the mdraid?
Markus,
I see /etc/lvm/lvm.conf has an option to ignore md members, and it seems to be on by default in EL6: md_component_detection = 1
This is CentOS 5, but it also has this in lvm.conf, and its value is 1.
from man lvm.conf:
---
md_component_detection - If set to 1, LVM2 will ignore devices used as components of software RAID (md) devices by looking for md superblocks. This doesn't always work satisfactorily e.g. if a device has been reused without wiping the md superblocks first.
---
If you run "pvdisplay /dev/sda2", what does it show? Normally you should get: Failed to read physical volume "/dev/sda2".
# pvs /dev/md1
  Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/sda2 not /dev/md1
  PV         VG     Fmt  Attr PSize PFree
  /dev/sda2  vg_sys lvm2 a--  1.36T 553.16G
# pvs /dev/sda2
  Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/md1 not /dev/sda2
  PV       VG     Fmt  Attr PSize PFree
  /dev/md1 vg_sys lvm2 a--  1.36T 553.16G
Additionally you can force a filter on the drives, something like: filter = ["r|/dev/sda2|"]
I might try that, but it is not necessary on other machines with the same setup. It was not necessary on *this* machine either (it has been running for several years).
I've seen this problem on two machines (it appeared after some update on CentOS 5).
You can set a stricter filter to avoid the bad detection: filter = [ "a/.*/", "r|/dev/md.*|" ]
Pavel
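A note on that filter suggestion: LVM evaluates filter patterns in order and the first match wins, so a catch-all accept listed first makes later rejects ineffective. A sketch of a filter that instead hides the raw members, so only the md device is scanned (device names taken from this thread; an untested assumption for the poster's machine, not a confirmed fix):

```
# /etc/lvm/lvm.conf
# First matching pattern wins: reject the raw md members before the
# catch-all accept, so LVM only sees the PV on /dev/md1.
filter = [ "r|^/dev/sda2$|", "r|^/dev/sdb2$|", "a|.*|" ]
```

After changing the filter, the device cache has to be regenerated (as Nux! notes below in the thread) for the new filter to take effect.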
On 04.Dec.2013, at 15:08, Nux! wrote:
On 04.12.2013 13:43, Markus Falb wrote:
What does this "Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/sda2 not /dev/md1" message mean? Some logical volumes are virtual disks for KVM guests. Are these guests using sda only and not the mdraid?
Markus,
I see /etc/lvm/lvm.conf has an option to ignore md members, and it seems to be on by default in EL6: md_component_detection = 1
If you run "pvdisplay /dev/sda2", what does it show? Normally you should get: Failed to read physical volume "/dev/sda2".
Do you have such a thing in your /etc/lvm/lvm.conf? Additionally you can force a filter on the drives, something like: filter = ["r|/dev/sda2|"] (make sure you delete /etc/lvm/cache/.cache and regenerate it with vgscan so it doesn't contain old stuff)
I removed the cache and that did the trick. I did not modify lvm.conf.
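The fix that worked here amounts to the following sketch (cache path as quoted in the thread; on the real machine this needs root, and the vgscan call is guarded so the snippet is harmless on systems without LVM):

```shell
# Drop LVM's stale device cache, then rescan so the PV is discovered
# on /dev/md1 again instead of the raw member /dev/sda2.
rm -f /etc/lvm/cache/.cache
if command -v vgscan >/dev/null 2>&1; then
    vgscan
fi
```

rm -f is used deliberately: it succeeds even when the cache file is already gone, so the sketch is safe to re-run.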
Markus Falb wrote:
Hey, (sorry for cross-posting; you may also find this message in centos-virt, but that was not intended and a mistake)
I have a system with an mdraid 1:
...
md1 : active raid1 sdb2[1] sda2[0]
      1465031488 blocks [2/2] [UU]
...
This RAID partition holds an LVM physical volume with one volume group and several logical volumes. The machine has been running for years and I seldom touch the LVM config. The LVM commands are giving me strange warnings that I am uncomfortable with:
...
# pvdisplay
  Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/sda2 not /dev/md1
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               vg_sys
  PV Size               1.36 TB / not usable 6.81 MB
<snip> Run smartctl -t short to start. And is there anything in your logfiles saying something like "Device: /dev/sdb [SAT], 98 Currently unreadable (pending) sectors"?
mark, who's working on decommissioning the server that line's from...
On 04.Dec.2013, at 15:11, m.roth@5-cent.us wrote:
Markus Falb wrote:
...
# pvdisplay
  Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/sda2 not /dev/md1
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               vg_sys
  PV Size               1.36 TB / not usable 6.81 MB
<snip> Run smartctl -t short to start. And is there anything in your logfiles saying something like "Device: /dev/sdb [SAT], 98 Currently unreadable (pending) sectors"?
Interesting, I have this in /etc/smartd.conf:
DEVICESCAN -n standby -a -m root -s (L/../../6/00|S/../.././00)
but according to the selftest logs it seems to be checking only sdb, *not* sda. I'll have to look into that. A manual short test succeeded. However, a failing disk should not affect LVM, should it?
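Checking which disks actually get self-tested can be done by reading the self-test log per device; a sketch using the member names from this thread (guarded so it degrades gracefully where smartctl or the disks are absent):

```shell
# Print the SMART self-test log for each md1 member disk.
for d in /dev/sda /dev/sdb; do
    echo "== $d =="
    if command -v smartctl >/dev/null 2>&1; then
        smartctl -l selftest "$d" || true   # smartctl fails if the disk is absent
    else
        echo "smartctl not installed"
    fi
done
```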
On 04.Dec.2013, at 15:57, Markus Falb wrote:
On 04.Dec.2013, at 15:11, m.roth@5-cent.us wrote:
Markus Falb wrote:
...
# pvdisplay
  Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/sda2 not /dev/md1
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               vg_sys
  PV Size               1.36 TB / not usable 6.81 MB
<snip> Run smartctl -t short to start. And is there anything in your logfiles saying something like "Device: /dev/sdb [SAT], 98 Currently unreadable (pending) sectors"?
Interesting, I have this in /etc/smartd.conf:
DEVICESCAN -n standby -a -m root -s (L/../../6/00|S/../.././00)
but according to the selftest logs it seems to be checking only sdb, *not* sda. I'll have to look into that. A manual short test succeeded. However, a failing disk should not affect LVM, should it?
oh,
...
# pvs /dev/sda2
  Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/md1 not /dev/sda2
  PV       VG     Fmt  Attr PSize PFree
  /dev/md1 vg_sys lvm2 a--  1.36T 553.16G
# pvs /dev/sdb2
  Failed to read physical volume "/dev/sdb2"
...
SMART status tells me:
...
  5 Reallocated_Sector_Ct   0x0033   001   001   036   Pre-fail  Always   FAILING_NOW  4095
...
smartd did not send a warning mail and the selftests succeed; only the logfiles show:
...
Device: /dev/sdb [SAT], FAILED SMART self-check. BACK UP DATA NOW!
Device: /dev/sdb [SAT], Failed SMART usage Attribute: 5 Reallocated_Sector_Ct.
...
I think I will replace sdb.
-- Thank You, Markus
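For reference, FAILING_NOW is reported because the normalised VALUE of that attribute (001) has dropped to or below its failure THRESHold (036). The columns of such a smartctl -A attribute row can be pulled apart mechanically; the sample line below is the one from the message above:

```shell
# Split a smartctl -A attribute row: field 4 is the normalised VALUE,
# field 6 the failure THRESHold, and the last field the raw sector count.
line='  5 Reallocated_Sector_Ct 0x0033   001   001   036    Pre-fail  Always   FAILING_NOW 4095'
printf '%s\n' "$line" | awk '{ printf "value=%d thresh=%d raw=%d\n", $4, $6, $NF }'
# → value=1 thresh=36 raw=4095
```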
Markus Falb wrote:
On 04.Dec.2013, at 15:57, Markus Falb wrote:
On 04.Dec.2013, at 15:11, m.roth@5-cent.us wrote:
Markus Falb wrote:
...
# pvdisplay
  Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/sda2 not /dev/md1
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               vg_sys
  PV Size               1.36 TB / not usable 6.81 MB
<snip> Run smartctl -t short to start. And is there anything in your logfiles saying something like "Device: /dev/sdb [SAT], 98 Currently unreadable (pending) sectors"?
Interesting, I have this in /etc/smartd.conf:
DEVICESCAN -n standby -a -m root -s (L/../../6/00|S/../.././00)
but according to the selftest logs it seems to be checking only sdb, *not* sda. I'll have to look into that. A manual short test succeeded. However, a failing disk should not affect LVM, should it?
oh,
...
# pvs /dev/sda2
  Found duplicate PV b79x0kLXR9mAC0z0IZUxyJG1VC24Crl7: using /dev/md1 not /dev/sda2
  PV       VG     Fmt  Attr PSize PFree
  /dev/md1 vg_sys lvm2 a--  1.36T 553.16G
# pvs /dev/sdb2
  Failed to read physical volume "/dev/sdb2"
...
SMART status tells me:
...
  5 Reallocated_Sector_Ct   0x0033   001   001   036   Pre-fail  Always   FAILING_NOW  4095
...
smartd did not send a warning mail and the selftests succeed; only the logfiles show:
...
Device: /dev/sdb [SAT], FAILED SMART self-check. BACK UP DATA NOW!
Device: /dev/sdb [SAT], Failed SMART usage Attribute: 5 Reallocated_Sector_Ct.
...
I think I will replace sdb.
Good idea. You really want to do that before your logs suddenly fill with DRDY errors....
mark
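Replacing the failing member of a two-disk md1 usually follows the mdadm fail/remove/add cycle; a sketch with the device names from this thread (the real commands need root, with the new disk partitioned to match; the guard keeps the sketch harmless on systems without mdadm):

```shell
# Fail and remove the dying member, then add its replacement back to md1.
# Device names assumed from the thread; run as root on the actual machine.
if command -v mdadm >/dev/null 2>&1; then
    mdadm /dev/md1 --fail /dev/sdb2
    mdadm /dev/md1 --remove /dev/sdb2
    # ...swap the physical disk, recreate the partition table...
    mdadm /dev/md1 --add /dev/sdb2
    cat /proc/mdstat    # watch the resync progress
else
    echo "mdadm not installed"
fi
```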