My CentOS 4 machine died (CPU cooler failure, which killed the CPU). In this machine I had five 1 Tbyte disks in a RAID5 array, with LVM structures on top of that.
Now I've moved those 5 disks into a CentOS 5 machine and the RAID array is being rebuilt. However, the LVM structures weren't detected at boot time. I was able to run "vgscan" and "vgchange -a y" to bring the volume group online, and then fsck the logical volumes.
But I was concerned this didn't happen at boot time. Do I need to do anything else, or were the commands I've run sufficient?
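For the record, the sequence I ran was roughly this (the volume group and logical volume names below are placeholders, not my real ones):

    # vgscan                   # rescan all block devices for LVM metadata
    # vgchange -a y            # activate every volume group that was found
    # fsck /dev/myvg/mylv      # then check each logical volume in turn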
FWIW, this machine has no other LVM (nor RAID) disks on it.
(only 5 more hours for the rebuild to complete!)
At Fri, 8 Jan 2010 22:40:21 -0500 CentOS mailing list <centos@centos.org> wrote:
> My CentOS 4 machine died (CPU cooler failure, which killed the CPU). In this machine I had five 1 Tbyte disks in a RAID5 array, with LVM structures on top of that.
> Now I've moved those 5 disks into a CentOS 5 machine and the RAID array is being rebuilt. However, the LVM structures weren't detected at boot time. I was able to run "vgscan" and "vgchange -a y" to bring the volume group online, and then fsck the logical volumes.
> But I was concerned this didn't happen at boot time. Do I need to do anything else, or were the commands I've run sufficient?
> FWIW, this machine has no other LVM (nor RAID) disks on it.
This is why. Unless the machine 'knows' to look for LVM (e.g. has the mumble mumble in /etc/lvm/), it won't look for it during startup. And unless its root file system is on LVM, its initrd won't have the "vgscan" and "vgchange -a y" commands in the init script.
RAID itself is seen by the kernel at boot if the disks are partitioned with the 'Linux raid autodetect' partition type (0xfd).
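If you want to double-check, something along these lines should do it (the device name is just an example), and mdadm can also record the array in /etc/mdadm.conf so it gets assembled regardless of partition type:

    # fdisk -l /dev/sdb                          # look for 'fd  Linux raid autodetect'
    # mdadm --examine --scan >> /etc/mdadm.conf  # record the array explicitly (optional)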
> (only 5 more hours for the rebuild to complete!)
And hopefully you won't get a disk error in those 5 hours and lose the whole array.
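If you want to watch its progress in the meantime, something like this works (the md device name is whatever your array came up as):

    # watch -n 60 cat /proc/mdstat    # refresh the resync progress once a minute
    # mdadm --detail /dev/md0         # overall state plus per-disk status ("md0" is an example)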
On Fri, Jan 08, 2010 at 11:02:12PM -0500, Robert Heller wrote:
> At Fri, 8 Jan 2010 22:40:21 -0500 CentOS mailing list <centos@centos.org> wrote:
> > Now I've moved those 5 disks into a CentOS 5 machine and the RAID array is being rebuilt. However, the LVM structures weren't detected at boot time. I was able to run "vgscan" and "vgchange -a y" to bring the volume group online, and then fsck the logical volumes.
> > But I was concerned this didn't happen at boot time. Do I need to do anything else, or were the commands I've run sufficient?
> > FWIW, this machine has no other LVM (nor RAID) disks on it.
> This is why. Unless the machine 'knows' to look for LVM (e.g. has the mumble mumble in /etc/lvm/), it won't look for it during startup. And unless its root file system is on LVM, its initrd won't have the "vgscan" and "vgchange -a y" commands in the init script.
So what's the fix? How do I get the right "mumble mumble"? :-)
Thanks!
> > (only 5 more hours for the rebuild to complete!)
> And hopefully you won't get a disk error in those 5 hours and lose the whole array.
% cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md3 : active raid5 sdf1[4] sde4[3] sdd3[2] sdc2[1] sdb1[0]
      3907039744 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

unused devices: <none>
No problem :-)
At Sat, 9 Jan 2010 08:17:11 -0500 CentOS mailing list <centos@centos.org> wrote:
> On Fri, Jan 08, 2010 at 11:02:12PM -0500, Robert Heller wrote:
> > At Fri, 8 Jan 2010 22:40:21 -0500 CentOS mailing list <centos@centos.org> wrote:
> > > Now I've moved those 5 disks into a CentOS 5 machine and the RAID array is being rebuilt. However, the LVM structures weren't detected at boot time. I was able to run "vgscan" and "vgchange -a y" to bring the volume group online, and then fsck the logical volumes.
> > > But I was concerned this didn't happen at boot time. Do I need to do anything else, or were the commands I've run sufficient?
> > > FWIW, this machine has no other LVM (nor RAID) disks on it.
> > This is why. Unless the machine 'knows' to look for LVM (e.g. has the mumble mumble in /etc/lvm/), it won't look for it during startup. And unless its root file system is on LVM, its initrd won't have the "vgscan" and "vgchange -a y" commands in the init script.
> So what's the fix? How do I get the right "mumble mumble"? :-)
You do a "vgscan" and "vgchange -a y" manually. This should put the right stuff in /etc/lvm/.
It is rare/unusual to 'add' LVM volumes to a running system from some 'outside' source (as you did via a disk transplant). Linux installers will do the "vgscan" and "vgchange -a y" during the install process, and vgcreate updates /etc/lvm/, so the problem rarely comes up.
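If you're curious what the 'right stuff' is: after the vgscan/vgchange you should find a plain-text backup of the volume group metadata under /etc/lvm/backup/, and you can regenerate it by hand too ('myvg' below is a placeholder name):

    # ls /etc/lvm/backup/      # one metadata backup file per volume group
    myvg
    # vgcfgbackup myvg         # refresh the backup manually if you like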
> Thanks!
> > > (only 5 more hours for the rebuild to complete!)
> > And hopefully you won't get a disk error in those 5 hours and lose the whole array.
> % cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md3 : active raid5 sdf1[4] sde4[3] sdd3[2] sdc2[1] sdb1[0]
>       3907039744 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
>
> unused devices: <none>
> No problem :-)
On Sat, Jan 09, 2010 at 08:32:51AM -0500, Robert Heller wrote:
> At Sat, 9 Jan 2010 08:17:11 -0500 CentOS mailing list <centos@centos.org> wrote:
> > > > time. I was able to run "vgscan" and "vgchange -a y" to bring the volume group online, and then fsck the logical volumes.
> > > > But I was concerned this didn't happen at boot time. Do I need to do anything else, or were the commands I've run sufficient?
> You do a "vgscan" and "vgchange -a y" manually. This should put the right stuff in /etc/lvm/.
That's what I hoped :-) Now that the resync has finished, I'm gonna try a reboot and see what happens!
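One thing I'll double-check before the reboot is that the filesystem is listed in /etc/fstab so it actually mounts at boot; something like this (device and mount point are just my guesses at sensible names):

    /dev/myvg/mylv   /data   ext3   defaults   1 2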
Thanks!
> It is rare/unusual to 'add' LVM volumes to a running system from some 'outside' source (as you did via a disk transplant). Linux installers
Typically it's meant to be done with vgexport/vgimport, but in a disaster-recovery situation...
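For anyone hitting this thread in the archives, the tidy vgexport/vgimport version, when the old machine is still alive, looks roughly like this ('datavg' and the mount point are made-up names):

    (on the old machine)
    # umount /data             # unmount every LV in the group first
    # vgchange -a n datavg     # deactivate the volume group
    # vgexport datavg          # mark it as exported

    (move the disks, then on the new machine)
    # vgimport datavg          # adopt the exported group
    # vgchange -a y datavg     # activate it
    # fsck /dev/datavg/data    # check, then mount as usual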