On Sat, Jul 30, 2011 at 6:40 PM, Sean Hart <tevesxh@gmail.com> wrote:
On Sat, Jul 30, 2011 at 7:40 AM, Alexander Dalloz <ad+lists@uni-x.org> wrote:
On 30.07.2011 10:37, Sean Hart wrote:
So here goes... First some back story:
- CentOS 5 with latest updates as of yesterday; kernel is 2.6.18-238.19.1.el5
- Setup is RAID 1 for /boot and LVM over RAID 6 for everything else
- The / partition (LVM "RootVol") had run out of room (100% full, things were falling apart...)
I resized the root volume (from 20 GiB to 50 GiB). This was done from a Fedora 15 LiveCD, which seemed like a better idea than doing it on a live system at the time... After the resize the contents of all the LVs could be mounted and all the data was still there (all this from within Fedora).
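For the record, the resize itself was nothing exotic; it was roughly the following (a sketch from memory, and the filesystem steps assume ext3, the CentOS 5 default):

    # from the Fedora 15 LiveCD, with the md arrays assembled:
    vgchange -ay RaidVolGrp                   # activate the volume group
    e2fsck -f /dev/RaidVolGrp/RootVol         # check the unmounted filesystem
    lvextend -L 50G /dev/RaidVolGrp/RootVol   # grow the LV from 20 GiB to 50 GiB
    resize2fs /dev/RaidVolGrp/RootVol         # grow the filesystem to fill the LV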
It would have been better to use the CentOS 5 install media to boot into rescue mode and then chroot into the system, given that you preferred an offline resize. Online resizing (growing an LV) is trouble-free in my experience; though if / is completely full, the offline route may indeed be better.
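For the archives, the rescue route is just this (assuming the installer finds your installation and mounts it under /mnt/sysimage, the usual CentOS default):

    # boot the CentOS 5 install media, type "linux rescue" at the prompt,
    # let it mount the installed system, then:
    chroot /mnt/sysimage

And by online resizing I mean simply growing the mounted LV; resize2fs can grow a mounted ext3 filesystem on CentOS 5:

    lvextend -L +30G /dev/RaidVolGrp/RootVol
    resize2fs /dev/RaidVolGrp/RootVol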
The problem is that when I try to reboot into CentOS, the root volume cannot be found. The boot messages go as follows:
    ...
    No volume groups found
    Volume group "RaidVolGrp" not found
    ...
    Kernel panic
The UUIDs have not changed, but there is definitely a missing link; probably something dumb...
I would greatly appreciate it if anyone could point me in the right direction.

A bit more info:
# lvscan
  ACTIVE   '/dev/RaidVolGrp/RootVol' [50.00 GiB] inherit
  ACTIVE   '/dev/RaidVolGrp/HomeVol' [250.00 GiB] inherit
  ACTIVE   '/dev/RaidVolGrp/SwapVol' [2.44 GiB] inherit
  ACTIVE   '/dev/RaidVolGrp/MusicVol' [350.00 GiB] inherit
  ACTIVE   '/dev/RaidVolGrp/VideoVol' [350.00 GiB] inherit
  ACTIVE   '/dev/RaidVolGrp/PicturesVol' [300.00 GiB] inherit
  ACTIVE   '/dev/RaidVolGrp/MiscVol' [60.00 GiB] inherit
  ACTIVE   '/dev/RaidVolGrp/ShareddocVol' [60.00 GiB] inherit
  ACTIVE   '/dev/RaidVolGrp/VMVol' [60.00 GiB] inherit
  ACTIVE   '/dev/RaidVolGrp/TorrentVol' [50.00 GiB] inherit
Is that the output from running the Fedora LiveCD?
Boot the CentOS 5 DVD into rescue mode and let it detect the existing LVM volumes. Then go into /etc/lvm/backup and validate the info that is saved there, to check what CentOS sees.
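That is, from rescue mode, something along these lines (again assuming the system gets mounted under /mnt/sysimage):

    chroot /mnt/sysimage
    less /etc/lvm/backup/RaidVolGrp   # the last metadata backup LVM wrote
    vgscan                            # what CentOS itself detects
    vgdisplay -v RaidVolGrp           # compare the live view against the backup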
sh
Alexander
OK, thanks a lot for the reply.
I believe this is the relevant part of /etc/lvm/backup:

####################################################
RaidVolGrp {
        id = "gL5X13-q4c8-d8XJ-x6Qc-m36S-eCfp-LKnvIW"
        seqno = 22
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 65536             # 32 Megabytes
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {

                pv0 {
                        id = "BpXoKc-pQYn-zVkU-7HyH-IKLw-0IX2-Ygm2HJ"
                        device = "/dev/md1"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 7805081216   # 3.63452 Terabytes
                        pe_start = 384
                        pe_count = 119096       # 3.63452 Terabytes
                }
        }

        logical_volumes {

                RootVol {
                        id = "AWstlr-xw8t-FNTu-FsEA-YUxi-updp-0HfKtr"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 1

                        segment1 {
                                start_extent = 0
                                extent_count = 625      # 19.5312 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 16250
                                ]
                        }
                }
#################################
And this is what I get when I run lvdisplay from the CentOS LiveCD:

# lvdisplay
  --- Logical volume ---
  LV Name                /dev/RaidVolGrp/RootVol
  VG Name                RaidVolGrp
  LV UUID                AWstlr-xw8t-FNTu-FsEA-YUxi-updp-0HfKtr
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                50.00 GB
  Current LE             1600
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:2
.....
##########################

It looks like what has changed is the segment count for the logical volume "RootVol" (it went from 1 to 2 segments), and I suppose the total number of segments on pv0 has also changed from 22 to 23.
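(Checking the arithmetic against the backup file: the extent size is 32 MiB, so the backup's extent_count of 625 is 625 x 32 MiB = 19.53 GiB, the old size, while lvdisplay's 1600 LE is 1600 x 32 MiB = 50 GiB, the new size. The extra 975 extents presumably could not be allocated contiguously after the original 625, hence the second segment. So the live metadata has clearly moved on since that backup was written.)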
########################

pvdisplay from the CentOS LiveCD:

# pvdisplay
  Scanning for physical volume names
  --- Physical volume ---
  PV Name               /dev/md126
  VG Name               RaidVolGrp
  PV Size               3.63 TB / not usable 2.81 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              119096
  Free PE               70058
  Allocated PE          49038
  PV UUID               BpXoKc-pQYn-zVkU-7HyH-IKLw-0IX2-Ygm2HJ
Not sure what to do from here. Should I change the /etc/lvm/backup/RaidVolGrp file to reflect the current actual situation? I don't see how that would help, since the file is inside the PV that can't be accessed at boot time anyway...
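(If refreshing that file were the right move, I assume the proper way would be to let LVM regenerate it rather than editing it by hand, i.e. from the chrooted CentOS system something like:

    vgcfgbackup RaidVolGrp    # rewrites /etc/lvm/backup/RaidVolGrp from the live metadata

but as said, the file lives inside the PV, so it can't be what the early boot process reads anyway.)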
sh
Hmm, it looks like the PV device has also changed, from "/dev/md1" (the hint in the backup file) to "/dev/md126". Would that make a difference?
Thanks again,
sh