I installed CentOS 6.x 64-bit with the minimal ISO and used two disks in a RAID 1 array.
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2         97G  918M   91G   1% /
tmpfs            16G     0   16G   0% /dev/shm
/dev/md1        485M   54M  407M  12% /boot
/dev/md3        3.4T  198M  3.2T   1% /vz
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
      511936 blocks super 1.0 [2/2] [UU]

md3 : active raid1 sda4[0] sdb4[1]
      3672901440 blocks super 1.1 [2/2] [UU]
      bitmap: 0/28 pages [0KB], 65536KB chunk

md2 : active raid1 sdb3[1] sda3[0]
      102334336 blocks super 1.1 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid1 sdb2[1] sda2[0]
      131006336 blocks super 1.1 [2/2] [UU]
My question is: if sda fails, will it still boot from sdb? Did the install process write the boot sector to both disks, or just to sda? How do I check, and if it is not on sdb, how do I copy it there?
No, you need to run grub-install /dev/sdb manually. Keep in mind you need to do the same after replacing a disk. The first part of GRUB sits outside the RAID itself, in the MBR.
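On CentOS 6 that is GRUB legacy, so a sketch of it would be roughly (device names follow the layout above):

# grub-install /dev/sdb

and then verify that stage1 actually landed in sdb's boot sector:

# file -s /dev/sdb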
# file -s /dev/sda
/dev/sda: x86 boot sector; GRand Unified Bootloader, stage1 version 0x3, boot drive 0x80, 1st sector stage2 0x849fc, GRUB version 0.94; partition 1: ID=0xee, starthead 0, startsector 1, 4294967295 sectors, extended partition table (last)\011, code offset 0x48

# file -s /dev/sdb
/dev/sdb: x86 boot sector; GRand Unified Bootloader, stage1 version 0x3, boot drive 0x80, 1st sector stage2 0x849fc, GRUB version 0.94; partition 1: ID=0xee, starthead 0, startsector 1, 4294967295 sectors, extended partition table (last)\011, code offset 0x48
I am guessing this is saying it's present on both? I did nothing to copy it there, so it must have been done during the CentOS 6.x install process.
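For a second opinion, GRUB legacy's stage1 embeds a literal "GRUB" string in the boot sector, so a quick check is possible (a sketch; output shown approximately):

# dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep GRUB
GRUB
# dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep GRUB
GRUB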
On Fri, Jan 24, 2014 at 11:25 AM, Matt <matt.mailinglists@gmail.com> wrote:
> I am guessing this is saying it's present on both? I did nothing to copy it there, so it must have been done during the CentOS 6.x install process.
I think the 6.x installers try to do it for you on both drives - but whether it actually works or not may depend on the type of failure and what the BIOS does to the disk mapping as a result. In any case, it is a good idea to know how to recover from a rescue-mode boot of the install ISO.
On 1/24/2014 10:11 AM, Les Mikesell wrote:
> On Fri, Jan 24, 2014 at 11:25 AM, Matt <matt.mailinglists@gmail.com> wrote:
>> I am guessing this is saying it's present on both? I did nothing to copy it there, so it must have been done during the CentOS 6.x install process.
> I think the 6.x installers try to do it for you on both drives - but whether it actually works or not may depend on the type of failure and what the BIOS does to the disk mapping as a result. In any case, it is a good idea to know how to recover from a rescue-mode boot of the install ISO.
And, even if you have the boot loader on both drives, there's no guarantee your BIOS will boot from the 2nd one. Disks can partially fail in nasty ways that might allow the already-running system to stay up on the other half of the mirror, but when the drive is 'tested' during the power-up boot sequence, it could hang the system.
On 01/24/2014 07:32 PM, John R Pierce wrote:
> And, even if you have the boot loader on both drives, there's no guarantee your BIOS will boot from the 2nd one. Disks can partially fail in nasty ways that might allow the already-running system to stay up on the other half of the mirror, but when the drive is 'tested' during the power-up boot sequence, it could hang the system.
True, but forwarding root's mail to the admin's e-mail address will warn about the degraded mirror, so the physical intervention of choice can be made. I think a manual change in the BIOS is of little consequence if the system will boot off of the surviving disk(s).
And if the disks can be hot-swapped, then the only concern is that GRUB and /boot survive the crash.
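A minimal sketch of that alerting (the address is a placeholder, not from this thread):

# in /etc/mdadm.conf - the mdmonitor service mails this address when an array degrades
MAILADDR admin@example.com

# or forward all of root's mail via /etc/aliases (run newaliases afterwards)
root: admin@example.com

# fire a test event for every array to confirm the mail gets through
# mdadm --monitor --scan --test --oneshot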
On Fri, Jan 24, 2014 at 1:30 PM, Ljubomir Ljubojevic <centos@plnet.rs> wrote:
>> And, even if you have the boot loader on both drives, there's no guarantee your BIOS will boot from the 2nd one. Disks can partially fail in nasty ways that might allow the already-running system to stay up on the other half of the mirror, but when the drive is 'tested' during the power-up boot sequence, it could hang the system.
> True, but forwarding root's mail to the admin's e-mail address will warn about the degraded mirror, so the physical intervention of choice can be made. I think a manual change in the BIOS is of little consequence if the system will boot off of the surviving disk(s).
Doesn't grub need to know the BIOS disk id for the subsequent stages of the boot and to find the root filesystem? I think it matters whether or not the BIOS remaps your 2nd drive to the first id.
> And if the disks can be hot-swapped, then the only concern is that GRUB and /boot survive the crash.
And if you know how to do a rescue-mode boot and reinstall grub, you can fix that too.
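A sketch of that recovery with the CentOS 6 install media: boot the ISO, type 'linux rescue' at the boot prompt, and let it mount the installed system under /mnt/sysimage. Then:

# chroot /mnt/sysimage
# grub-install /dev/sda
# exit
# reboot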
On 01/24/2014 09:33 PM, Les Mikesell wrote:
> Doesn't grub need to know the BIOS disk id for the subsequent stages of the boot and to find the root filesystem? I think it matters whether or not the BIOS remaps your 2nd drive to the first id.
GRUB boots the first partition on a given disk, and then the kernel assembles the file systems from the RAIDs. Once the /boot RAID is mounted, any changes are written to all disks.
On Fri, Jan 24, 2014 at 4:17 PM, Ljubomir Ljubojevic <centos@plnet.rs> wrote:
>> Doesn't grub need to know the BIOS disk id for the subsequent stages of the boot and to find the root filesystem? I think it matters whether or not the BIOS remaps your 2nd drive to the first id.
> GRUB boots the first partition on a given disk, and then the kernel assembles the file systems from the RAIDs. Once the /boot RAID is mounted, any changes are written to all disks.
What is the hd number you give in the setup command? And what if the BIOS doesn't call the remaining live disk by that number after a failure?
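For GRUB legacy, the usual trick is to override the mapping in the grub shell when installing to the second disk, so the stage1 written to sdb is addressed as if sdb were the first BIOS disk (a sketch, with /boot on the first partition as in this thread):

# grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

That way, if sda disappears and the BIOS presents sdb as disk 0x80, the pointers embedded in sdb's boot sector still resolve.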
On 01/25/2014 12:20 AM, Les Mikesell wrote:
> What is the hd number you give in the setup command? And what if the BIOS doesn't call the remaining live disk by that number after a failure?
I myself use the install CD or a live CD to create the RAID partitions I want, and then Anaconda offers to create GRUB on /dev/mdX and creates /boot and GRUB on all disks (I have had 3 and even 4 disks in a mirror).
I am not talking about a seamless hardware boot, but a software one. If one HDD is totally dead, then /dev/sdb becomes /dev/sda and so on, but that does not matter, since the RAID is assembled from the metadata on the partitions (I use separate RAID partitions: a 500MB RAID1, then the rest of the disk is a RAID10,f2 or RAID10,f3 partition). So the kernel line says:
title CentOS (2.6.32-431.el6.centos.plus.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-431.el6.centos.plus.x86_64 ro root=/dev/mapper/vg_kancelarija-LV_C6_ROOT rd_LVM_LV=vg_kancelarija/LV_C6_ROOT rd_MD_UUID=38669557:34ca61eb:3c6e8827:d562f15d rd_LVM_LV=vg_kancelarija/LV_C6_SWAP rd_NO_LUKS rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=128M rhgb quiet nouveau.modeset=0 rdblacklist=nouveau
        initrd /initramfs-2.6.32-431.el6.centos.plus.x86_64.img
and device.map:
# this device map was generated by anaconda
(hd0)   /dev/sda
(hd2)   /dev/sdb
(hd1)   /dev/sdc
(From a system with 3 disks.)
I cannot claim for sure that this works at this point, because my last failure was a long time ago, but as far as I remember I just disconnected the failed disk and booted the system from the single remaining disk until I got a replacement.
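For reference, the software side of swapping out a failed disk, sketched against the partition layout from the start of the thread (repeat the mdadm lines for each mdX/sdbN pair):

# mdadm /dev/md1 --fail /dev/sdb1 --remove /dev/sdb1

then partition the replacement to match the survivor, re-add it, and put the boot loader back on it as discussed above:

# mdadm /dev/md1 --add /dev/sdb1
# grub-install /dev/sdb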
On 01/24/2014 09:25 AM, Matt wrote:
> # file -s /dev/sda
> /dev/sda: x86 boot sector; GRand Unified Bootloader, stage1 version 0x3, boot drive 0x80, 1st sector stage2 0x849fc, GRUB version 0.94; partition 1: ID=0xee, starthead 0, startsector 1, 4294967295 sectors, extended partition table (last)\011, code offset 0x48
>
> # file -s /dev/sdb
> /dev/sdb: x86 boot sector; GRand Unified Bootloader, stage1 version 0x3, boot drive 0x80, 1st sector stage2 0x849fc, GRUB version 0.94; partition 1: ID=0xee, starthead 0, startsector 1, 4294967295 sectors, extended partition table (last)\011, code offset 0x48
>
> I am guessing this is saying it's present on both? I did nothing to copy it there, so it must have been done during the CentOS 6.x install process.
It would appear so. But I'd recommend simply yanking out one drive, booting, and then swapping the drives to try booting again. You can resync the RAID arrays trivially after the test; then you know for sure. I've made this a matter of course for any server where the root fs is on RAID1.
-Ben
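For completeness, a sketch of resyncing after such a pull test, against the md layout from the top of the thread:

# mdadm /dev/md1 --add /dev/sdb1
# mdadm /dev/md0 --add /dev/sdb2
# mdadm /dev/md2 --add /dev/sdb3
# mdadm /dev/md3 --add /dev/sdb4
# cat /proc/mdstat

On the arrays that carry a write-intent bitmap (md2 and md3 here), --re-add instead of --add can make the resync near-instant if the disk was only briefly out.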