Hello James and All,

For your information, here is what the listings look like:

[root@localhost ~]# pvs
  PV         VG         Fmt  Attr PSize PFree
  /dev/sda1  vg_hosting lvm2 a--  1.82t    0
  /dev/sdb2  vg_hosting lvm2 a--  1.82t    0
  /dev/sdc1  vg_hosting lvm2 a--  1.82t    0
  /dev/sdd1  vg_hosting lvm2 a--  1.82t    0

[root@localhost ~]# lvs
  LV      VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_home vg_hosting -wi-s-----  7.22t
  lv_root vg_hosting -wi-a----- 50.00g
  lv_swap vg_hosting -wi-a----- 11.80g

[root@localhost ~]# vgs
  VG         #PV #LV #SN Attr   VSize VFree
  vg_hosting   4   3   0 wz--n- 7.28t    0

The problem is, when I do:

[root@localhost ~]# vgchange -a y
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume vg_hosting-lv_home (253:4)
  3 logical volume(s) in volume group "vg_hosting" now active

only lv_root and lv_swap are activated; lv_home is not, and vgchange
gives the error above.

How can I activate lv_home with only the 3 PVs that are left? The PV
/dev/sdb2 is the one that was lost. I created it on a new blank hard
disk and restored the VG using:

# pvcreate --restorefile ... --uuid ... /dev/sdb2
# vgcfgrestore --file ... vg_hosting

Regards,
Khem

On Sat, February 28, 2015 7:42 am, Khemara Lyn wrote:
> Dear James,
>
> Thank you for being quick to help.
> Yes, I could see all of them:
>
> # vgs
> # lvs
> # pvs
>
> Regards,
> Khem
>
> On Sat, February 28, 2015 7:37 am, James A. Peltier wrote:
>> ----- Original Message -----
>> | Dear All,
>> |
>> | I am in desperate need of LVM data rescue for my server.
>> | I have a VG called vg_hosting consisting of 4 PVs, each contained in
>> | a separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and
>> | /dev/sdd1). And this LV: lv_home was created to use all the space
>> | of the 4 PVs.
>> |
>> | Right now, the third hard drive is damaged; and therefore the third
>> | PV (/dev/sdc1) cannot be accessed anymore.
>> | I would like to recover whatever is left in the other 3 PVs
>> | (/dev/sda1, /dev/sdb1, and /dev/sdd1).
>> |
>> | I have tried the following:
>> |
>> | 1. Removing the broken PV:
>> |
>> | # vgreduce --force vg_hosting /dev/sdc1
>> |   Physical volume "/dev/sdc1" still in use
>> |
>> | # pvmove /dev/sdc1
>> |   No extents available for allocation
>>
>> This would indicate that you don't have sufficient free extents to
>> move the data off of this disk. If you have another disk, you could
>> try adding it to the VG and then moving the extents.
>>
>> | 2. Replacing the broken PV:
>> |
>> | I was able to create a new PV and restore the VG config/metadata:
>> |
>> | # pvcreate --restorefile ... --uuid ... /dev/sdc1
>> | # vgcfgrestore --file ... vg_hosting
>> |
>> | However, vgchange would give this error:
>> |
>> | # vgchange -a y
>> |   device-mapper: resume ioctl on  failed: Invalid argument
>> |   Unable to resume vg_hosting-lv_home (253:4)
>> |   0 logical volume(s) in volume group "vg_hosting" now active
>>
>> There should be no need to create a PV and then restore the VG unless
>> the entire VG is damaged. The configuration should still be available
>> on the other disks, and adding the new PV and moving the extents
>> should be enough.
>>
>> | Could someone help me please???
>> | I'm in dire need of help to save the data, at least some of it if
>> | possible.
>>
>> Can you not see the PV/VG/LV at all?
>>
>> --
>> James A. Peltier
>> IT Services - Research Computing Group
>> Simon Fraser University - Burnaby Campus
>> Phone   : 778-782-6573
>> Fax     : 778-782-3045
>> E-Mail  : jpeltier@sfu.ca
>> Website : http://www.sfu.ca/itservices
>> Twitter : @sfu_rcg
>> Powering Engagement Through Technology
>> "Build upon strengths and weaknesses will generally take care of
>> themselves" - Joyce C. Lock
>>
>> _______________________________________________
>> CentOS mailing list
>> CentOS@centos.org
>> http://lists.centos.org/mailman/listinfo/centos
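
For readers finding this thread later: the usual way to reach the
surviving data when one PV of a linear LV is gone is partial
activation. This is only a hedged sketch of that approach, not
something tested against this system; `/mnt/rescue` and the rsync
destination are hypothetical names, and `--activationmode partial`
needs a reasonably recent LVM2 (older releases spell it
`vgchange -ay --partial`):

```shell
# Activate LVs even though a PV is missing; the lost extents are
# mapped to an error target, so reads there fail but the rest works.
vgchange -ay --activationmode partial vg_hosting

# Mount read-only and copy off whatever is still readable.  The mount
# can fail outright if key filesystem metadata lived on the lost PV.
mkdir -p /mnt/rescue                                 # hypothetical mount point
mount -o ro /dev/vg_hosting/lv_home /mnt/rescue
rsync -a /mnt/rescue/ /path/to/backup/               # destination is a placeholder;
                                                     # rsync logs I/O errors on damaged
                                                     # files and continues (exit code 23)
```

Once the salvageable data is copied off, `vgreduce --removemissing
--force vg_hosting` is the usual cleanup; note it deletes any LV that
used the missing PV, which is why the copy comes first.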