Hi,
I've used mdadm for years now to manage software raids.
The task of using fdisk to first create partitions on a spare drive sitting on a shelf (raid 0 where the 1st of my 2 drives failed) is kind of bugging me now.
Using fdisk to create the same partition layout on the new drive as on the existing drive, and then using mdadm to finish everything up, is a little tedious.
Anyone have an idea how to get a sort of hot plug where I just swap out the drive and it rebuilds?
- aurf
At Wed, 9 Jun 2010 16:50:53 -0700, CentOS mailing list <centos@centos.org> wrote:
[snip]
sfdisk is your friend (from man sfdisk):
-d     Dump the partitions of a device in a format useful as input to sfdisk. For example,
           % sfdisk -d /dev/hda > hda.out
           % sfdisk /dev/hda < hda.out
       will correct the bad last extended partition that the OS/2 fdisk creates.
So:
1) Plug in the replacement disk.
2) Partition it:
# sfdisk -d /dev/sdX | sfdisk /dev/sdY
where /dev/sdX is an existing disk and /dev/sdY is the replacement disk.
3) Add the partition(s) to the array(s):
# mdadm /dev/mdI ... -a /dev/sdYI
# mdadm /dev/mdJ ... -a /dev/sdYJ
# mdadm /dev/mdK ... -a /dev/sdYK
# mdadm /dev/mdL ... -a /dev/sdYL
No reason not to put all of the above in a script...
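Something like this minimal sketch, for instance (the disk names /dev/sda and /dev/sdb and the md0/md1 partition pairing are assumptions; adjust them to your layout before running anything):

#!/bin/bash
# Sketch: clone the partition table from the surviving disk to the
# replacement, then re-add each partition to its array.
SRC=/dev/sda   # surviving disk (assumption)
DST=/dev/sdb   # replacement disk (assumption)

# Copy the partition table from the surviving disk.
sfdisk -d "$SRC" | sfdisk "$DST"

# Hand each new partition back to its array.
mdadm /dev/md0 --add "${DST}1"
mdadm /dev/md1 --add "${DST}2"

# Check on the rebuild.
cat /proc/mdstat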
On Jun 9, 2010, at 5:36 PM, Robert Heller wrote:
At Wed, 9 Jun 2010 16:50:53 -0700, CentOS mailing list <centos@centos.org> wrote:
[snip]
sfdisk is your friend
Indeed, thanks; sfdisk will remain a lifelong friend.
On Wed, Jun 9, 2010 at 8:36 PM, Robert Heller <heller@deepsoft.com> wrote:
[snip]
sfdisk is your friend (from man sfdisk):
[snip]
I'd mod this up if I could... sfdisk saves me hours and hours of time. I also use it to dump backup information to files in case I need to rebuild.
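For example (the file name here is just an illustration):

# sfdisk -d /dev/sda > /root/sda-partitions.dump

Restoring later is just the reverse: sfdisk /dev/sda < /root/sda-partitions.dump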
Robert Heller wrote:
[snip]
No reason not to put all of the above in a script...
Agreed. And I have... :)
The script is designed to add a third drive to a raid1 set. I use this with a removable drive to get a backup of the system that can be taken off-site.
The sleeps in this script are probably a bit excessive, but they are designed to let the system fully process each command before the next one is given. I found that certain things would not work properly without a pause in there. Since I only run this once a month, it's not a big deal if it takes a couple of minutes to run.
Just posting this in case it proves useful to anyone.
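(The script itself didn't survive into this excerpt, so here is a rough sketch of what it might have looked like; the device names, md numbers, and sleep lengths are all assumptions:)

#!/bin/bash
# Sketch: temporarily grow a 2-disk raid1 set to 3 members, let the
# removable disk sync, then detach it for off-site storage.
DISK=/dev/sdc   # removable backup disk (assumption)

# Clone the partition table from a current member.
sfdisk -d /dev/sda | sfdisk "$DISK"
sleep 10

# Grow each array to 3 members and add the removable disk.
mdadm --grow /dev/md0 --raid-devices=3
mdadm /dev/md0 --add "${DISK}1"
mdadm --grow /dev/md1 --raid-devices=3
mdadm /dev/md1 --add "${DISK}2"
sleep 10

# Wait until the sync finishes before pulling the disk.
while grep -qE 'resync|recovery' /proc/mdstat; do sleep 60; done

# Detach the backup disk and shrink the arrays back to 2 members.
mdadm /dev/md0 --fail "${DISK}1" --remove "${DISK}1"
mdadm /dev/md1 --fail "${DISK}2" --remove "${DISK}2"
mdadm --grow /dev/md0 --raid-devices=2
mdadm --grow /dev/md1 --raid-devices=2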
[snip]
Anyone have an idea how to get a sort of hot plug where I just swap out the drive and it rebuilds?
This is all I know about it; not sure if it gives you any help at all... This was for raid 1 drives, not sure how raid 0 would do with it...
These were my notes for adding a hot spare that would automatically take over on a failure, 3 drives total... This covers adding drives back to the array and also adding a brand-new one. I did not cover a 2nd hot spare... again, this was just raid 1.
Adding a drive back into the mix if you pulled it out via hot swap
So, you pulled a drive out to check the hot swap. Guess what: it is not recognized by the array anymore and is ignored. This is a good way to check whether the hot spare is working. But now what do you do with the drive you put back in?
Each drive is labeled sda, sdb, sdc on my system. There are two raid devices on the array, one for boot and one big physical one; on mine they are labeled md0 and md1 (0 and 1). So when we are working on a raid device on a drive, we would say sdc2 (second raid device on the 'c', i.e. sdc, drive).
Assuming the sda drive is our functioning one, and I took out both the b and c drives, we will add b and c back into the mix.
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2
The sdb drive should immediately start its migration. It should take about an hour, depending on your system. Now add the spare, sdc, back in.
mdadm /dev/md0 --add /dev/sdc1
mdadm /dev/md1 --add /dev/sdc2
Doing 'cat /proc/mdstat' will show you that all three drives are there, one with an (S) for spare and another in the middle of migrating.
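A couple of handy ways to keep an eye on the rebuild (the md device name here is just an example):

watch cat /proc/mdstat
mdadm --detail /dev/md1

The --detail output includes a 'Rebuild Status' line while the migration is running.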
Now you have your RAID array set up, have LVM partitions you can adjust later as you need them, can recover from pulled-out disks, and all disks are bootable. Congratulations, you are done with the whole disk thing.
Final issue: adding a new drive into the mix to replace a broken one
So, you pulled a drive out because it was broken and need to replace it. You also want it to be part of the mirror array and need to copy the partitions of the existing one to it before you add it to the array. Sounds frightening.
1- Take out the old drive and make sure the other two are mirroring; the new drive you add will be the hot spare now. (This is called rotation, by the way.)
2- Insert the new drive.
3- At the command prompt, type fdisk -l (that is a lowercase L). This should give you a list of drives and partitions. The new disk should be there too, with one partition or none. This will give you the 'drive designation' if you do not know it.
4- CAUTION: Before you do step 5, make sure you know which drive is the new one: sda, sdb, or sdc.
5- Type "sfdisk -d /dev/sda | sfdisk /dev/sdb". This copies the partition table from disk a to disk b (insert your actual drives for a and b in the example).
6- Use fdisk -l to see whether the new drive has all the proper partitions.
7- If everything is fine, use the mdadm method above to add the new 'spare' into the array. It should automatically become the spare drive. (A one-shot version of steps 5-7 is sketched below.)
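Putting steps 5-7 together, a rough one-shot version (every device and md name here is an assumption; double-check with fdisk -l before running it):

NEW=/dev/sdc                        # the replacement drive -- verify first!
sfdisk -d /dev/sda | sfdisk "$NEW"  # step 5: copy the partition table
fdisk -l "$NEW"                     # step 6: eyeball the result
mdadm /dev/md0 --add "${NEW}1"      # step 7: hand the partitions to md
mdadm /dev/md1 --add "${NEW}2"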
Using the new drive as the spare ensures the older drives are used as much as possible until you replace them, and your backups stay tip-top instead of living on a very old drive. Rotation works like tires on your car: would you rather your spare tire be a brand-new radial ready to go 60,000 miles, or a retreaded, worn old tire only good for getting you to the gas station for a replacement? I would feel better having the new tire so I do not have to worry when I use it.