I have a machine running CentOS 5 with two disks in RAID1 using Linux software RAID. /dev/md0 is a small boot partition and /dev/md1 spans the rest of the disks. /dev/md1 is managed by LVM and holds the system partition and several other partitions.

I had to take disk sda out of the RAID and low-level format it with the tool provided by Samsung. Now I have put it back and want to reassemble the array. The machine boots fine from sdb, and I was able to re-add /dev/sda1 to /dev/md0 easily. However, I can't do the same with /dev/sda2 and /dev/md1, because there is no /dev/md1 anymore and mdadm tells me it doesn't have a RAID superblock. If possible, I want to avoid deleting the existing setup and data.

I'm wondering what to do next. Can I simply create /dev/md1 as if it were a new array? Is the problem caused by the fact that the running system was booted from a logical volume on /dev/sda2? Or would it be better to boot the system from a live CD and then reassemble or recreate /dev/md1?
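For reference, this is roughly what I have run so far (from memory, so treat it as a sketch rather than an exact history):

    # check what mdadm sees on the returned disk
    mdadm --examine /dev/sda1
    mdadm --examine /dev/sda2      # this is the one that reports no RAID superblock
    cat /proc/mdstat

    # re-adding the boot partition worked fine
    mdadm --add /dev/md0 /dev/sda1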
In general, would it be better to *not* put the main system on a logical volume on a RAID partition? (The other logical volumes carry virtual machines.) That is:

/dev/md0 with /boot
/dev/md1 with / (no LVM on it)
/dev/md2 managed by LVM
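If I went that route, I imagine setting it up would look something like this (only a sketch: it assumes three partitions per disk, and the volume group name vg_vms is made up):

    # RAID1 for /boot, /, and a separate LVM area for the VMs
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

    # LVM only on md2 (hypothetical VG name)
    pvcreate /dev/md2
    vgcreate vg_vms /dev/md2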
Kai
It's obvious, but I should have mentioned it anyway: I copied the partition table from sdb back to sda, of course. I wonder how the superblock for /dev/md1 can be missing?
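For the record, I copied the partition table with something like this (from memory, so take it as a sketch):

    sfdisk -d /dev/sdb | sfdisk /dev/sda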
Kai
on 11-26-2008 6:12 AM Kai Schaetzl spake the following:
> It's obvious, but I should have mentioned it anyway: I copied the partition table from sdb back to sda, of course. I wonder how the superblock for /dev/md1 can be missing?
>
> Kai
Before you do anything, can you access the LVs on /dev/sdb2? If so, make sure you back everything up, as you will probably need to start over on the RAID arrays. Worst case, you will need a third drive for temporary storage. Best case, you can create new arrays on /dev/sda with only one member, migrate the data from sdb to sda, and then add sdb to the arrays.
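Roughly, the best-case path would look like this (untested sketch; adjust device and array names to your layout):

    # create a degraded RAID1 on the wiped disk, with the second member missing
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda2 missing

    # put LVM on it, migrate the data, then add the old partition as the mirror
    pvcreate /dev/md2
    # ... migrate data here (vgextend/pvmove, or a plain copy) ...
    mdadm --add /dev/md2 /dev/sdb2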
Scott Silva wrote on Sat, 29 Nov 2008 14:52:18 -0800:
> Before you do anything, can you access the LVs on /dev/sdb2? If so, make sure you back everything up, as you will probably need to start over on the RAID arrays. Worst case, you will need a third drive for temporary storage. Best case, you can create new arrays on /dev/sda with only one member, migrate the data from sdb to sda, and then add sdb to the arrays.
Thanks, I overlooked this message some days ago. I didn't need to back anything up. I created the new RAID structure on disk 1 with missing mirrors and kept the old structure on disk 2, which now merely acted like normal partitions/LVM.

The hard part was getting it to boot from the RAID. I didn't know that the root device path is hardcoded in the initrd, and an error message of "cannot switch to root" is not very helpful (a helpful error message would have been only slightly different: "cannot switch to root path /dev/md1"). I only found that out when I finally unpacked the initrd. Once I knew that, getting it to boot was easy.

Then I moved all data to the new LVM/RAID partition on disk 1, and finally copied the partition table to disk 2 and added the missing mirrors to the md devices. It actually took a bit more copying to and fro, because I experimented a bit here and there (e.g. what happens if you dd a complete PV over an existing md device or vice versa, and whether you can keep the data if you create an md device on a partition that already holds data), but I was able to keep all the data and the system throughout.
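In case someone runs into the same thing, this is roughly how I looked inside the initrd and rebuilt it for the new root device (CentOS 5; paths from memory, so treat it as a sketch):

    # unpack the initrd (gzipped cpio on CentOS 5) and inspect the init script
    mkdir /tmp/initrd && cd /tmp/initrd
    zcat /boot/initrd-$(uname -r).img | cpio -idmv
    grep root init          # shows the hardcoded root device path

    # after pointing fstab and grub.conf at the md device, rebuild the initrd
    mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)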
Kai
on 12-2-2008 10:31 AM Kai Schaetzl spake the following:
> Scott Silva wrote on Sat, 29 Nov 2008 14:52:18 -0800:
>
>> Before you do anything, can you access the LVs on /dev/sdb2? If so, make sure you back everything up, as you will probably need to start over on the RAID arrays. Worst case, you will need a third drive for temporary storage. Best case, you can create new arrays on /dev/sda with only one member, migrate the data from sdb to sda, and then add sdb to the arrays.
>
> Thanks, I overlooked this message some days ago. I didn't need to back anything up. I created the new RAID structure on disk 1 with missing mirrors and kept the old structure on disk 2, which now merely acted like normal partitions/LVM. The hard part was getting it to boot from the RAID. I didn't know that the root device path is hardcoded in the initrd, and an error message of "cannot switch to root" is not very helpful (a helpful error message would have been only slightly different: "cannot switch to root path /dev/md1"). I only found that out when I finally unpacked the initrd. Once I knew that, getting it to boot was easy. Then I moved all data to the new LVM/RAID partition on disk 1, and finally copied the partition table to disk 2 and added the missing mirrors to the md devices. It actually took a bit more copying to and fro, because I experimented a bit here and there (e.g. what happens if you dd a complete PV over an existing md device or vice versa, and whether you can keep the data if you create an md device on a partition that already holds data), but I was able to keep all the data and the system throughout.
>
> Kai
As long as your system is back up, then no worries!
I'm still having a problem with this software-RAID setup. I restructured the disk layout, so that I have the exact same partition layout on both disks. But now I cannot boot from the new md device. The important devices are:
/dev/md0 with /boot (made from sda1 and sdb1)
/dev/md2 with / (made only from sda2 so far)
/dev/sdb2 with / (to be added to md2 once booting from md2 works)
The problem is that if I boot the kernel with root on /dev/md2, it errors out with "cannot mount /dev/root" when switching root. I can boot fine by using the / filesystem on /dev/sdb2. The initrd contains raid1.ko, so the md RAID driver is not missing.
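For reference, this is roughly what I have checked so far (from memory, so treat it as a sketch):

    # array status while booted from sdb2
    cat /proc/mdstat
    mdadm --detail /dev/md2

    # confirm raid1.ko really is inside the initrd
    zcat /boot/initrd-$(uname -r).img | cpio -it | grep raid1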
Any hints as to what could be wrong?
Kai