[CentOS] lvm errors after replacing drive in raid 10 array
mike at microdel.org
Thu Jul 17 22:43:14 UTC 2008
I thought I'd test replacing a failed drive in a 4 drive raid 10 array on
a CentOS 5.2 box before it goes online and before a drive really fails.
I failed and removed the drive with mdadm, powered off, replaced the
drive, copied the partition table over with 'sfdisk -d /dev/sda | sfdisk
/dev/sdb', and finally re-added the partition with mdadm.
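For reference, the sequence was roughly the following (the member device
names are from memory and assume /dev/sdb was the replacement; the
partition number may differ):

# mdadm /dev/md3 --fail /dev/sdb3
# mdadm /dev/md3 --remove /dev/sdb3
(powered off, swapped in the new drive, powered back on)
# sfdisk -d /dev/sda | sfdisk /dev/sdb
# mdadm /dev/md3 --add /dev/sdb3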
Everything seems fine until I try to create a snapshot lv. (Creating a
snapshot lv worked before I replaced the drive.) Here's what I'm seeing.
# lvcreate -p r -s -L 8G -n home-snapshot /dev/vg0/homelv
Couldn't find device with uuid 'yIIGF9-9f61-QPk8-q6q1-wn4D-iE1x-MJIMgi'.
Couldn't find all physical volumes for volume group vg0.
Volume group for uuid not found:
Aborting. Failed to activate snapshot exception store.
So then I try pvdisplay, which gives:
  --- Physical volume ---
  PV Name               /dev/md3
  VG Name               vg0
  PV Size               903.97 GB / not usable 3.00 MB
  PE Size (KByte)       4096
  Total PE              231416
  Free PE               44536
  Allocated PE          186880
  PV UUID               yIIGF9-9f61-QPk8-q6q1-wn4D-iE1x-MJIMgi
Subsequent runs of pvdisplay eventually return nothing at all. Running
'pvck /dev/md3' seems to bring the output back, but creating a snapshot
volume still fails.
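In case it helps, this is roughly what I've been running to poke at the
LVM state (nothing exotic, just the standard tools):

# pvscan
# vgscan
# pvs -o pv_name,vg_name,pv_uuid
# pvck /dev/md3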
It's as if the "PV stuff" (the LVM physical volume metadata) is not on
the new drive. I (probably incorrectly) assumed that just adding the
drive back into the raid array would take care of that.
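The md side looks healthy as far as I can tell; this is what I checked
after the rebuild, and neither shows anything obviously wrong:

# cat /proc/mdstat
# mdadm --detail /dev/md3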
I've searched quite a bit but have not found any clues. Anyone?
-- Thanks, Mike