[CentOS] Re: lvm errors after replacing drive in raid 10 array [SOLVED ?]

Fri Jul 18 14:37:59 UTC 2008
Mike <mike at microdel.org>

Just for the record, I'm about 98.7% sure that the root problem here was 
that the LVM stuff (pvcreate, vgcreate, lvcreate) was done while booted 
from systemrescuecd, and had nothing to do with replacing a failed drive.

The output from 'pvcreate --version' on the systemrescuecd is:
   LVM version:     2.02.33 (2008-01-31)
   Library version: 1.02.26 (2008-06-06)
   Driver version:  4.13.0

And when booted from CentOS 5.2:
   LVM version:     2.02.32-RHEL5 (2008-03-04)
   Library version: 1.02.24 (2007-12-20)
   Driver version:  4.11.5

When [pv|vg|lv]create is done the way it should have been (after booting 
into CentOS), snapshot volume creation works as expected, even after 
replacing a failed drive.
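
For anyone else who hits this, the working sequence run from the installed 
CentOS 5.2 system (not the rescue CD) looks roughly like the following; the 
md device name and sizes are only examples, adjust for your own layout:

   # pvcreate /dev/md3
   # vgcreate vg0 /dev/md3
   # lvcreate -L 100G -n homelv vg0
   # mkfs.ext3 /dev/vg0/homelv
   # lvcreate -p r -s -L 8G -n home-snapshot /dev/vg0/homelv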

On Thu, 17 Jul 2008, Mike wrote:

> I thought I'd test replacing a failed drive in a 4 drive raid 10 array on a 
> CentOS 5.2 box before it goes online and before a drive really fails.
>
> I 'mdadm fail'ed and 'remove'd the drive, powered off, replaced it, 
> partitioned with sfdisk -d /dev/sda | sfdisk /dev/sdb, and finally 
> 'mdadm add'ed it.
>
> Everything seems fine until I try to create a snapshot lv.  (Creating a 
> snapshot lv worked before I replaced the drive.)  Here's what I'm seeing.
>
> # lvcreate -p r -s -L 8G -n home-snapshot /dev/vg0/homelv
>  Couldn't find device with uuid 'yIIGF9-9f61-QPk8-q6q1-wn4D-iE1x-MJIMgi'.
>  Couldn't find all physical volumes for volume group vg0.
>  Volume group for uuid not found: 
> I4Gf5TUB1M1TfHxZNg9cCkM1SbRo8cthCTTjVHBEHeCniUIQ03Ov4V1iOy2ciJwm
>  Aborting. Failed to activate snapshot exception store.
>
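
For completeness, the drive replacement mentioned above was done along 
these lines (sdb and md3 are only examples; use the real failed disk and 
array names):

   # mdadm /dev/md3 --fail /dev/sdb1
   # mdadm /dev/md3 --remove /dev/sdb1
   (power off, swap in the new drive, power back on)
   # sfdisk -d /dev/sda | sfdisk /dev/sdb
   # mdadm /dev/md3 --add /dev/sdb1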