[CentOS] Re: Rebuilding Raid 1

Mace Eliason meliason at shaw.ca
Wed May 3 17:53:46 UTC 2006

Thanks to all who helped me out with this.  I finished at 7am, just in 
time to take the server back at 8:30.

I would have been done sooner, but when I tried the complete copy from 
one drive to the other as you suggested, I got them backwards: I ended 
up copying what was on the replacement drive over the current drive.  
Luckily the bad drive still had the info on it and was working, so I 
started all over again.

I can swap and create drives like the back of my hand. lol

Something that really helped me out was this:

"Prepping the System in Case of Drive Failure

The first step is to back up the drive partition tables, which is a rather simple command:

#sfdisk -d /dev/sda > /raidinfo/partitions.sda
#sfdisk -d /dev/sdb > /raidinfo/partitions.sdb

Do this for all the drives in the system and you will have backup 
configuration files of each drive's partition table.  When you have a 
new drive to replace a failed one, it is very easy to load the 
partition table onto the new drive with the command:

#sfdisk /dev/sda < /raidinfo/partitions.sda                 (make sure 
you get the drives right)

This partitions the new drive with exactly the same partition table 
that was there before, so all the RAID partitions end up in exactly 
the same places and you don't have to edit any RAID configuration 
files at all. "
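The restore step, plus re-adding the new drive's partitions to their arrays, can be sketched as a dry-run script.  This is only a sketch: the failed-drive name, the md array names, and which partitions belong to which array are assumptions for illustration, and the `run` helper only prints each command so nothing is actually touched.

```shell
# Dry-run sketch of replacing a failed drive (here assumed to be /dev/sdb)
# using a partition dump saved earlier with "sfdisk -d".
FAILED=sdb                        # assumption: the drive being replaced
DUMP=/raidinfo/partitions.$FAILED

run() { echo "would run: $*"; }   # swap the echo for "$@" to execute for real

run sfdisk /dev/$FAILED "<" $DUMP          # clone the saved partition table
run mdadm /dev/md0 --add /dev/${FAILED}1   # re-add each member partition...
run mdadm /dev/md1 --add /dev/${FAILED}3   # ...so the arrays start resyncing
run cat /proc/mdstat                       # watch the resync progress
```

With the echo removed, the mdadm lines are what trigger the mirror rebuild; the active half of each array is copied onto the partition you add.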

I found this info here:


That really made creating things easier.

Running cat /proc/mdstat to find out which drives are active also helped.
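In that output, the bracketed status field shows one character per mirror half, with "_" marking a missing member.  A minimal sketch of picking out degraded arrays, run against a made-up /proc/mdstat snippet (the array names and block counts here are invented):

```shell
# Made-up /proc/mdstat contents: md1 is missing one mirror half
# ("_" in the [..] status field means a missing member).
mdstat_sample='md0 : active raid1 sdb1[1] sda1[0]
      1048512 blocks [2/2] [UU]

md1 : active raid1 sdb3[1]
      10482304 blocks [2/1] [_U]'

# Print every array whose status field contains an "_".
degraded=$(echo "$mdstat_sample" | awk '
  /^md/ { array = $1 }          # remember the current array name
  $NF ~ /^\[[U_]+\]$/ && $NF ~ /_/ { print array " is degraded: " $NF }')
echo "$degraded"
```

On a live system you would pipe the real file instead: awk '...' /proc/mdstat.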

Thanks again.


Scott Silva wrote:
> Les Mikesell spake the following on 5/1/2006 9:32 PM:
>> On Tue, 2006-05-02 at 00:13, Mace Eliason wrote:
>>> I think the reason for the partitions not being on the same drive is 
>>> because the old drive is the bootable drive and I changed the scsi ids 
>>> of the drives to get it to work.   I think it is using sba for the boot 
>>> but centos is on sbb. 
>> It doesn't really boot from raid.  It loads the kernel from
>> the underlying partition on the boot drive, then the kernel
>> detects the raid devices and decides how to activate them,
>> then it mounts them according to /etc/fstab.  You should be
>> able to review the detection process with 'dmesg |less'
>> if you are interested.
>>> I just need to make sure that the sync happens so that the info on sbb 
>>> is what is synced.  The info on sba is 1 month old this I have confirmed 
>>> by mounting sba3 and looking at the dates of the last emails received on 
>>> the server  I couldn't mount sdb3 because it is the active partition.
>> Be careful with your spelling... I assume you mean /dev/sda3 is the
>> old one.  
>>> So from what your saying below running mdadm as you have shown will it 
>>> copy the good info on sdb to sda?
>> Again, be careful.  It will copy the currently active partition
>> to the one you are adding - so if you see the current version
>> now, that's what will be mirrored.  In the case of /dev/sdb3 it
>> will go from sdb to sda.   But /dev/sda1 is active now and will
>> be copied to /dev/sdb1.
>>> I have to have this server ready for 7am or we will lose this 
>>> contract.  I can't believe I have spent all day on this.
>> This is the sort of thing you should practice on a test
>> box so you are prepared to handle an actual problem in a
>> reasonable amount of time.
>>> If I copy over the wrong info they will lose 1 month's worth of emails.
>> You can only copy from the active one to the one you
>> add.  And by the way, having backups can be a good thing
>> too...
> You might want to download the free VMware Server software and play with
> scenarios like this. You can create installs with 2 drives in RAID 1, kill
> a drive, and play with rebuilding. All virtual, so no damage to working
> systems. This is how I learned some of the nuances of Linux software raid.

More information about the CentOS mailing list