I tried this and it seemed to work, but it still won't boot; it still just gives an error about a missing partition.
Les Mikesell wrote:
On Mon, 2006-05-01 at 15:27, Mace Eliason wrote:
Okay, to clarify:
I have only hooked up the drive that is not bootable. The drive that is bootable has been disconnected.
If I run sfdisk on the non-bootable drive I get:

/dev/sda1  (Id fd, Linux raid autodetect), marked * for boot
/dev/sda2  (same)
/dev/sda3  (same)
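The full listing from 'sfdisk -l /dev/sda' looks something like this (the cylinder and block counts below are made up for illustration; the "Id fd" column is the part that matters):

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1   *      0+     12      13-    104391   fd  Linux raid autodetect
/dev/sda2         13     274     262    2104515   fd  Linux raid autodetect
/dev/sda3        275    8923    8649   69473091   fd  Linux raid autodetect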
So yes, it is software RAID, am I right? The system had been running off this drive for almost a month, until someone rebooted the server.
Should I have both drives installed when I run the linux rescue?
It should autodetect the md devices with one missing; you can confirm that from /proc/mdstat, as shown after the steps below. However, if it doesn't find the partitions in the automatic scan, you can try it the hard way:
From the command line in rescue mode:
mount /dev/sda3 /mnt/sysinstall
mount /dev/sda1 /mnt/sysinstall/boot

Assuming those succeed:

chroot /mnt/sysinstall

Look around a bit to make sure you have the right thing mounted, then proceed with the instructions for installing grub. If that doesn't work either, try:

grub <enter>
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
quit
then 'exit' twice to reboot, and remove the rescue CD.
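To check whether the autodetect worked, look at /proc/mdstat from the rescue shell before you chroot. A mirror running degraded on one disk shows something like the following, where [2/1] and [U_] mean one of the two members is missing (the md device names and block counts here are just illustrative; match them against your own setup):

cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0]
      104320 blocks [2/1] [U_]
md1 : active raid1 sda3[0]
      69473024 blocks [2/1] [U_]
unused devices: <none>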
Does Linux automatically rebuild the RAID when it boots? If I do get them to boot with the good drive, I don't want it to overwrite the current drive.
I'm not sure about the details, but I think there is a counter on each disk recording the number of times the RAID has been stopped cleanly. If it does auto-sync, the one with the higher counter will be the master. If you can get grub installed and reboot a few times with the disk you want to stay current, it should become the master when they are both seen. I generally don't trust this, though: when swapping in a new drive I do a SCSI low-level format when I have the chance, then create matching partitions on the new disk and add them to the RAID after booting.
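You can take the guesswork out of the counter comparison. Assuming mdadm is available on the rescue image or the installed system, its --examine output includes an event counter for each member, so you can see which disk has the newer data before letting anything sync (sdb as the second disk is just an example here):

# Compare the event counters in the md superblocks; the member
# with the higher "Events" count has the newer data.
mdadm --examine /dev/sda1 | grep Events
mdadm --examine /dev/sdb1 | grep Events

And the "make matching partitions and add them back" step, roughly, once you've booted degraded on the good disk (again, device and md names are examples; match them to what /proc/mdstat shows on your system):

# Copy the partition table from the good disk to the replacement,
# then add the new partitions back; the md driver rebuilds onto them.
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2

# Watch the rebuild progress.
cat /proc/mdstat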