Dear all,

I'm not used to handling software RAID. I've inherited a server which has a RAID 10 set. One of our disks failed, and it's to be replaced today. My question is: any hint on how to add this new disk to the existing RAID array? My first thought is:

- Create identical partitions as on the other disks in the array I'd like to add it to.
- Add it to the RAID.

Though I'm extremely worried about messing things up and wiping all my data. Any hint would be appreciated.
Hi,
I had to do this a while ago. Basically you have to mark the disk as failed (if not already) and then remove it from the array:
mark as failed > mdadm --fail /dev/md0 /dev/sdaX
remove from array > mdadm --remove /dev/md0 /dev/sdaX
Partition your new disk to your needs and then add it to the array using mdadm again:
mdadm --add /dev/md0 /dev/sdbX
To check the status of the array:
mdadm --detail /dev/md0
Hope this gives you a starting point.
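If you want to script that status check rather than eyeball it, here is a small sketch (not from the thread, device names illustrative) that parses /proc/mdstat text for member disks flagged failed with the "(F)" marker:

```python
import re

def failed_members(mdstat_text):
    """Return {array: [failed member devices]} parsed from /proc/mdstat text.

    Failed members carry an "(F)" suffix on the array's device line."""
    failed = {}
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+) : ", line)
        if not m:
            continue
        devs = re.findall(r"(\w+)\[\d+\]\(F\)", line)
        if devs:
            failed[m.group(1)] = devs
    return failed

# Illustrative sample, in the usual /proc/mdstat layout:
sample = (
    "md0 : active raid10 sdd1[3] sdc1[2] sdb1[1](F) sda1[0]\n"
    "      976510976 blocks 64K chunks 2 near-copies [4/3] [U_UU]\n"
)
print(failed_members(sample))  # → {'md0': ['sdb1']}
```

In practice you would read the real file with open("/proc/mdstat").read() instead of the sample string.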
On Tue, Apr 1, 2014 at 10:21 AM, Roland RoLaNd r_o_l_a_n_d@hotmail.com wrote:
On Tue, Apr 1, 2014 at 4:58 AM, JC Putter jcputter@gmail.com wrote:
You can "clone" the partition layout from an existing healthy disk and write it to the new disk with sfdisk. *As always, be very careful* what disk you're dumping the partition layout from and which one is the target destination.
sfdisk -d /dev/sdX | sfdisk /dev/sdY
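A slightly safer variant of that one-liner is to dump the layout to a file first, so you can inspect it before writing anything to the new disk (device names here are placeholders):

```shell
# Dump the partition table of the healthy source disk to a file.
sfdisk -d /dev/sdX > sdX.layout
# Review it before touching the replacement disk.
cat sdX.layout
# Only then apply it to the new disk.
sfdisk /dev/sdY < sdX.layout
```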
On 4/1/2014 5:03 PM, SilverTip257 wrote:
does sfdisk support GPT disks, or is it limited to disks under 2TB ?
On Tue, Apr 1, 2014 at 8:08 PM, John R Pierce pierce@hogranch.com wrote:
Good question. [ In all of the servers at my $DAY_JOB, the disks in software raid arrays are <= 2TB at the moment. ]
No. sfdisk does _NOT_ support GPT. But sgdisk [0] does. [1] [2]
* I've not tested/labbed sgdisk usage, so if anybody on the list has experience using it please speak up. :-)
[0] http://www.cyber-tec.org/2012/04/07/sfdisk-for-gpt-we-use-sgdisk/ [1] http://www.ibm.com/developerworks/library/l-gpt/ [2] http://askubuntu.com/questions/57908/how-can-i-quickly-copy-a-gpt-partition-...
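For what it's worth, the usual sgdisk equivalent of the sfdisk one-liner looks like the sketch below (untested here, as noted above; device names are placeholders, and mind the argument order):

```shell
# Replicate the GPT from healthy /dev/sdX onto new /dev/sdY.
# NOTE: the -R (replicate) option names the *destination*; the
# positional device at the end is the source.
sgdisk -R /dev/sdY /dev/sdX
# Randomize the disk and partition GUIDs on the copy so the two
# disks don't end up sharing identifiers.
sgdisk -G /dev/sdY
```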
On 01/04/14 19:21, Roland RoLaNd wrote:
Remember to set the "Linux raid autodetect" partition type (fd) on the new partition.
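One non-interactive way to set that, sketched here with parted (device and partition number are placeholders):

```shell
# Mark partition 1 on the new disk as a Linux RAID member.
parted /dev/sdY set 1 raid on
# Verify the flag took effect.
parted /dev/sdY print
```

On MBR disks this corresponds to type 0xfd; fdisk's interactive "t" command achieves the same thing.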
Take a look into /proc/mdstat:

# cat /proc/mdstat
To adjust the speed of the rebuild, look here:

/proc/sys/dev/raid/speed_limit_max
/proc/sys/dev/raid/speed_limit_min
You can adjust them by echo-ing a new value directly into them:
# echo 1000000 > /proc/sys/dev/raid/speed_limit_max
and make that permanent by setting it in /etc/sysctl.conf
dev.raid.speed_limit_max = 1000000
When you do finally get to rebuilding the array, use 'watch' to monitor the rebuild:
# watch 'cat /proc/mdstat'
Then follow JC's example and you should be right.