Hi,
I had a hard time installing a server with software RAID 1. I always got an error from GRUB and the system would not boot.
So, I've installed CentOS 4.4 normally on only one disk.
Question is: is it possible now to make software RAID 1 with the other disk?
If so, how?
I'm completely lost here.
Any help would be appreciated.
Warm Regards, Mário Gamito
Mário Gamito wrote:
Question is: is it possible now to make software RAID 1 with the other disk? If so, how?
Yeah, but it's potentially a bit tricky. I don't remember the command lines off the top of my head, but here's basically how to do it:
Let's assume you have installed and booted from hda now and want a raid1 with hda and hdb.
Make a software RAID on hdb with one drive marked as failed. Copy everything over from hda onto the raid devices. Install GRUB on hdb. Boot from the raid1, and then hot-add hda as the disk which had previously failed.
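From memory (so treat this as an untested sketch), the degraded-create and later hot-add look roughly like this, with /dev/hdb1 and /dev/hda1 standing in for each mirrored partition:

# create a one-member ("degraded") mirror on the second disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/hdb1
# ...copy data over, install grub on hdb, reboot onto the raid, then:
mdadm /dev/md0 --add /dev/hda1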
IIRC, you also have to do some special magic with the grub config to make the machine boot nicely even if you have to reboot with one drive down. Unfortunately, it's been a while since last time I did this, so I can't remember.
-- Haakon Gjersvik Eriksen -- Basefarm AS
Presuming sda is the first HDD, sdb the second, and the first partition (0) is /boot:
grub
device (hd1) /dev/sdb
root (hd1,0)
setup (hd1)
CM
CM spake the following on 2/28/2007 5:40 AM:
root (hd1,0)
I'm confused. Wouldn't you want "root (hd0,0)", because if the first drive fails, the second drive will become sda after boot?
On 2/28/07, Scott Silva ssilva@sgvwater.com wrote:
I'm confused. Wouldn't you want "root (hd0,0)", because if the first drive fails, the second drive will become sda after boot?
As I understand it, the first drive does not become sda/hda after boot in all cases. At least with IDE drives it will always stay where it is, based on controller and master/slave relationship. Beyond this, though, when specifying "root" for the setup command, grub takes the drive in the context of how the BIOS has mapped who is the 0th, 1st, 2nd and so on, with a simple transform provided by your /boot/grub/device.map file. At least this is what I believe to be the truth.
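For reference, /boot/grub/device.map is just that BIOS-order mapping; a typical two-disk file looks like this:

(hd0) /dev/sda
(hd1) /dev/sdb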
...james
James Olin Oden spake the following on 2/28/2007 9:27 AM:
As I understand it, the first drive does not become sda/hda after boot in all cases.
IDE drives will stay where they are, but SCSI and SATA drives will move: SCSI based on device ID, and SATA would be a crap shoot, maybe by port (probably BIOS, driver, and controller dependent).
But with an IDE drive, the slave might not be able to function unless the failed master is physically removed. If all drives are a master on their own controller, it is much better.
On 2/28/07, Scott Silva ssilva@sgvwater.com wrote:
IDE drives will stay where they are, but SCSI and SATA drives will move.
That sounds like what I recall.
So, seriously: say you have two drives mirrored, and you wish the machine to boot without intervention if one of the drives goes. Is there a right grub configuration for each of the drive types (IDE, SATA, SCSI), and is it identical between the drive types, or different?
I figure once you're booted you're fine because of UUIDs.
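For what it's worth, md arrays do carry a UUID in their superblock and are assembled by it, and you can pin that down in /etc/mdadm.conf; the UUID values below are placeholders for whatever mdadm --examine --scan reports:

DEVICE partitions
ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md1 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx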
Thanks....james
On Wed, 28 Feb 2007 08:30:08 -0800 Scott Silva ssilva@sgvwater.com wrote:
I'm confused. Wouldn't you want "root (hd0,0)", because if the first drive fails, the second drive will become sda after boot?
You already have grub configured on sda (the original primary boot device), or you couldn't boot. The idea is to configure grub on sdb - the secondary HDD - after you create the RAID array and before anything bad happens, so that if a fail event happens on sda you will boot for sure with sdb (the original secondary boot device) moved into the place of sda.
CM
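If I remember the trick right, one way to arrange that is to temporarily tell grub that the second disk is (hd0) when installing to it, so the boot code written to sdb's MBR looks for its files on whichever disk the BIOS presents first (untested sketch):

grub
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit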
CM spake the following on 2/28/2007 11:05 AM:
The idea is to configure grub on sdb - the secondary HDD - before anything bad happens, so if sda fails you will boot from sdb.
I think I am just remembering the hoops I had to jump through with IDE drives: having a boot line with a fallback line on the other drive, just in case.
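If memory serves, those hoops looked something like this in grub.conf; the kernel and initrd names are placeholders for whatever is in your /boot, and it assumes the root filesystem lives on /dev/md1:

default=0
fallback=1
title CentOS (first disk)
    root (hd0,0)
    kernel /vmlinuz-<version> ro root=/dev/md1
    initrd /initrd-<version>.img
title CentOS (second disk)
    root (hd1,0)
    kernel /vmlinuz-<version> ro root=/dev/md1
    initrd /initrd-<version>.img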
Scott Silva wrote:
I think I am just remembering the hoops I had to jump through with IDE drives.
In any case it is a good idea to be prepared to boot the install CD in rescue mode and reinstall grub if it doesn't work quite the way you expect when one of the drives fails.
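The rescue-mode dance is roughly this, assuming the surviving disk is now the first one the BIOS sees:

# boot the CentOS CD and type: linux rescue
chroot /mnt/sysimage
grub-install /dev/sda
exit
reboot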
On Wed, 2007-02-28 at 20:05 -0600, Les Mikesell wrote:
In any case it is a good idea to be prepared to boot the install CD in rescue mode and reinstall grub if it doesn't work quite the way you expect when one of the drives fails.
Or consider making a GRUB boot CD with the current kernel[s] that can both boot the system to a normal state if the boot record is corrupted, and allow you to fix the problem from the normal environment - rather than having to boot the rescue CD, chroot, fix the problem, reboot, test, and possibly lather, rinse, repeat.
See the end of the page at this link for directions, ignoring the stuff about errors unless you have those issues also:
http://lists.centos.org/pipermail/centos/2007-January/073835.html
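The short version of building such a CD, per the GRUB manual (the stage2_eltorito path is, I believe, where CentOS 4 puts it):

mkdir -p iso/boot/grub
cp /usr/share/grub/i386-redhat/stage2_eltorito iso/boot/grub/
mkisofs -R -b boot/grub/stage2_eltorito -no-emul-boot \
    -boot-load-size 4 -boot-info-table -o grub.iso iso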
Phil
On 2/28/07, Mário Gamito gamito@gmail.com wrote:
Question is: is it possible now to make software RAID 1 with the other disk? If so, how?
Yes, though I don't think you can just directly convert your non-md devices to md devices. I could be wrong, and if so someone will merrily show me the error of my ways.
Now here is what I would do:
1) Create the same partitions on the other disk (even your swap partition), except set their type to autoraid (0xfd).
2) Now, using mdadm (read the man page), create your md devices, but each with only one member, coming from the newly created partitions.
3) Activate the md devices.
4) Now, if you're using LVM, change the appropriate md devices into pv's and then create your volume group(s) and logical volume(s).
5) Lay down filesystems on the lv(s) and/or the md(s).
6) Using tar or cpio, copy the files from your running system's filesystems to the ones now sitting on md devices (if only indirectly, through lvm on top of md's).
7) Set up grub on the running system (i.e. not the drive you have participating in md devices) to boot the kernel off of the md device your kernel is in, and to have that kernel's root filesystem be the root filesystem sitting on an md device.
8) Now reboot the system; this should boot from the md devices.
9) Using fdisk, change the non-md device partitions to type autoraid.
10) Now add those partitions to the md devices as members of the raid sets.
11) At this point you should have the contents of the second disk syncing to the first. Let this finish (you can monitor it by looking at /proc/mdstat).
12) After this, edit grub.conf again to use the md devices.
13) Install the grub boot loader on your second disk also.
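For the brave, here is a very rough, untested sketch of steps 1-6; it assumes sda is the running disk, sdb the new one, sdb1 destined for /boot, sdb2 for everything else, and LVM on md1 (names and sizes are made up, adjust to taste):

# 1) clone the partition table, then set the types to fd with fdisk
sfdisk -d /dev/sda | sfdisk /dev/sdb
# 2-3) create and activate one-member arrays
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
# 4) LVM on top, if you use it
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -n root -L 10G vg0
# 5) filesystems
mkfs.ext3 /dev/md0
mkfs.ext3 /dev/vg0/root
# 6) copy the live system over
mkdir -p /mnt/new
mount /dev/vg0/root /mnt/new
mkdir /mnt/new/boot
mount /dev/md0 /mnt/new/boot
(cd / && tar cf - --exclude=./proc --exclude=./sys --exclude=./mnt .) | (cd /mnt/new && tar xpf -)
# recreate the excluded mount points on the copy
mkdir -p /mnt/new/proc /mnt/new/sys /mnt/new/mnt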
As an aside, the grub side is really the trickiest part of this, because though you want to boot the kernel off of the other disk after you've created your initial md devices in degraded mode, you are actually going to boot grub from the first disk and then acquire your kernel from the second. At the point the disks are synced, you will be back to your old grub configuration, because you haven't changed that on the raid side (though you could, I suppose), and so you have to change the grub configuration again.
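Concretely, the interim grub.conf stanza on the first disk might look like this; version strings are placeholders, root= points at the md device (or your logical volume if you used LVM), and you may need to rebuild the initrd with mkinitrd so it can assemble the md root:

title CentOS (kernel on second disk, root on md)
    root (hd1,0)
    kernel /vmlinuz-<version> ro root=/dev/md1
    initrd /initrd-<version>.img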
Anyway, this is a rough outline. I have purposely avoided giving you specific commands and such, as the reading of the man pages is your responsibility. Also, there are places in here where if you do the wrong thing your system is toast, and your remedy will likely be to re-install. So YMMV and good luck...james