OK, I'm about 90% sure that I've corrected the boot loader situation with RAID-1 and the second hard drive. I haven't tested the correction, but here's what I did:
Examined the grub.conf file and noticed that the entries for hd0 use (hd0,1), so I ran the grub shell:

grub> device (hd1) /dev/sdc
grub> root (hd1,1)
grub> setup (hd1)    <after receiving the success message>
grub> quit
I didn't rebuild the boot loader on /dev/sda because it is working (if it ain't broke, don't fix it). My situation is that I'm using four 1 TB hard drives in the following pattern:
/dev/sda | /dev/sdc = first RAID-1 volume
/dev/sdb | /dev/sdd = second RAID-1 volume
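For anyone repeating this, a quick sanity check from the same grub shell is GRUB legacy's find command, which lists every partition carrying a stage1 file. Assuming /boot is a separate partition (the second one, per the (hd*,1) entries above), both mirror halves should show up:

grub> find /grub/stage1
 (hd0,1)
 (hd1,1)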
Thanks all for the suggestions and thoughts!
Gene
On 6/5/2012 7:21 PM, Eugene Poole wrote:
> OK, I'm about 90% sure that I've corrected the boot loader situation
> with RAID-1 and the second hard drive.
There is no complete solution to this problem. The question is this: When one of the drives dies, how will the system see the remaining drive? Will it still see it as sdb, or will it now see that drive as sda? These situations need different grub configs. I generally configure both drives as if they were hd0/sda. That way, if sda crashes, I can remove the disk and boot the second drive normally.
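With GRUB legacy that trick is a device remap in the grub shell before setup, so the boot code written to the second disk believes it is hd0. A sketch, assuming /boot is on the first partition of /dev/sdb; adjust the partition to match the real layout:

grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

If sda later dies and sdb shows up as the first disk, the boot sector on it already expects to be hd0 and boots unmodified.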
On 6/7/2012 9:40 AM Bowie Bailey spake the following:
> The question is this: When one of the drives dies, how will the system
> see the remaining drive? Will it still see it as sdb, or will it now
> see that drive as sda?
In older versions sdb would become sda, but I don't have enough time on the 6 series to know for sure... Maybe I will fire up a virtual machine with a couple of emulated SATA drives and see.
On Thu, Jun 7, 2012 at 4:48 PM, Scott Silva <ssilva@sgvwater.com> wrote:
> In older versions sdb would become sda, but I don't have enough time on
> the 6 series to know for sure... Maybe I will fire up a virtual machine
> with a couple of emulated SATA drives and see.
sda/sdb are the kernel's conventions. What matters is what the BIOS sees. And that may be different depending not only on the hardware but also on the failure mode: sometimes a drive will fail but not really disappear from detection, and that is hard to emulate. Also, back in the ATA days it was pretty common for a failed drive to lock both channels on the controller.
As long as you have physical access to the box you can fix it fairly quickly by booting a rescue ISO and re-installing grub, even if you have to try a couple of times to get it right.
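From CentOS install media that repair pass is only a few commands. A sketch, assuming the rescue environment finds the installed system and mounts it under /mnt/sysimage, and that the boot disk comes up as /dev/sda:

boot: linux rescue          <- at the install media boot prompt
# then, from the rescue shell:
chroot /mnt/sysimage        # switch into the installed system
grub-install /dev/sda       # rewrite the MBR boot code
exit                        # leave the chroot and reboot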
On 06/07/2012 03:48 PM, Les Mikesell wrote:
> As long as you have physical access to the box you can fix it fairly
> quickly by booting a rescue ISO and re-installing grub, even if you
> have to try a couple of times to get it right.
And if the server is colocated but you have remote console access, you can leave a recovery CD in the drive, set the boot order to boot the hard drive first, and then remotely change the boot order if you have problems.
Nataraj
On 6/8/2012 1:13 AM, Nataraj wrote:
> And if the server is colocated but you have remote console access, you
> can leave a recovery CD in the drive, set the boot order to boot the
> hard drive first, and then remotely change the boot order if you have
> problems.
Out of curiosity, how do you prevent CentOS from ejecting the DVD when it is done installing?
On 06/07/2012 11:38 PM, Bob Hoffman wrote:
> Out of curiosity, how do you prevent CentOS from ejecting the DVD when
> it is done installing?
That I don't know, but once CentOS is installed, if my memory serves me correctly, I think you can leave a CD/DVD in the drive over reboots as long as you don't eject it. Alternatively, I think it would work to use a USB stick to boot a recovery system remotely.
Dell actually provides the ability to boot a remote CD over the DRAC interface, but it's extremely slow unless you have a very high-bandwidth connection, and at least a few years ago when I last looked, most people did not recommend using that functionality.
Actually, now that I think about it, I believe that if you have a CD/DVD drive with a self-loading tray, it will suck the tray back in when the BIOS resets. This will not work with the slim manual-tray drives they put in most servers, so you would have to have rack space that allows you to leave an external drive plugged in.
The USB stick or other flash drive is probably a better solution. The main thing is having remote access to the BIOS.
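For the USB route, writing a rescue image is a one-liner if the ISO is a hybrid image; media of that era often were not, in which case a tool like livecd-iso-to-disk can prepare the stick instead. A sketch, where rescue.iso and /dev/sdX are placeholders; triple-check the target device first:

dd if=rescue.iso of=/dev/sdX bs=1M    # rescue.iso and sdX are placeholders, not real names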
Nataraj
Bob Hoffman wrote:
> Out of curiosity, how do you prevent CentOS from ejecting the DVD when
> it is done installing?
Some drives support eject -t to close the tray.
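That is the standard eject(1) tray-close flag. For example (assuming the usual device node; older setups may use /dev/cdrom instead):

eject -t /dev/sr0    # send a tray-close command to the drive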
On 8.6.2012 08:38, Bob Hoffman wrote:
> Out of curiosity, how do you prevent CentOS from ejecting the DVD when
> it is done installing?
There is a boot option (from the RHEL5 Installation Guide):

noeject
    Do not eject optical discs after installation. This option is useful
    in remote installations where it is difficult to close the tray
    afterwards.
I did not find it mentioned in the RHEL6 Installation Guide, so I am not sure whether it has gone. I did not test it.
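If it does still work, it would be passed like any other installer option at the boot prompt, for example:

boot: linux noeject

or appended to the installer's kernel line when booting from PXE or GRUB.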