I have a server set up with 4 HDDs using software RAID. I have /boot on a RAID1 on md0, / on RAID5 on md1, and swap on RAID0 on md2. If one of my drives dies, how do I recover?
Thanks in advance,
scharacter
I am not sure, but since you have soft RAID, I would think that if your OS gives you trouble, your RAID goes the same way as the famous chicken.
Regards Per
On Fri, Feb 20, 2009 at 3:15 PM, Stephen Leonard Character Stephen.Character@alorica.net wrote:
I have a server set up with 4 HDDs using software RAID. I have /boot on a RAID1 on md0, / on RAID5 on md1, and swap on RAID0 on md2. If one of my drives dies, how do I recover?
First, put the swap on RAID1 as well; you don't want part of your memory to become unavailable if a drive dies. Otherwise there is no point in using RAID for it.
If a drive dies, nothing will happen; your system will keep running (if you put your swap on RAID1, that is). But make sure you configure mdadm to send you mail when that happens, so you know a drive is gone.
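For example, a minimal sketch of that notification setup (the mail address is a placeholder, not something from this thread):

```shell
# /etc/mdadm.conf: where mdadm's monitor sends event mail
# MAILADDR root@example.com    <- placeholder address

# Run the monitor as a daemon; it mails MAILADDR on events
# such as Fail and DegradedArray for arrays found by --scan
mdadm --monitor --scan --daemonise --delay=1800
```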
For recovery, just replace the disk, repartition it, and re-add the partitions to the RAID arrays, and you are done. The disks will resync and everything will be back to how it was.
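As a sketch of that procedure, assuming the dead drive comes back as /dev/sdb and /dev/sda survives (the device and partition names here are illustrative, not from the thread):

```shell
# Remove the failed members from the arrays (if not already gone)
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2

# Clone the partition table from a surviving disk onto the new one
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Re-add the partitions; the resync starts on its own
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2

# Watch the rebuild progress
cat /proc/mdstat
```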
Regards, Tim
Thanks for the replies. I'm assuming I do the repartitioning with fdisk or gparted, but what tools do I use to manage the software RAID? I.e., how do I go about changing the swap partition to RAID1 and re-adding the partitions to the array? This system is for me to practice on for my RHCT/E exams (too broke to pay for training), so I'm planning on breaking one of the drives (just removing it) and adding a new one for practice.
Thanks in advance, Stephen
Sorry, after re-reading your post, I saw you mentioned mdadm. Thanks for the info.
Regards, Stephen
_______________________________________________ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Right, first partition the new drive, then add it back to the RAID like this:
mdadm /dev/md1 -a /dev/sdc3
Then the RAID will start rebuilding. As posted earlier, having swap on a RAID0 is a bad idea.
Dan
-----Original Message-----
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Dan Carl
Sent: Friday, February 20, 2009 11:37 AM
To: CentOS mailing list
Subject: Re: [CentOS] Software Raid Recovery
So, to change my swap to raid1, I would need to unmount it, delete the md2 device, and rebuild it as a raid1 md2 device?
Stephen Leonard Character wrote:
So, to change my swap to raid1, I would need to unmount it, delete the md2 device, and rebuild it as a raid1 md2 device?
swapoff /dev/md2
# now delete the raid0 and build a raid1
mkswap /dev/md2
swapon /dev/md2
and you probably don't have to change your /etc/fstab assuming the metadevice name stays the same.
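The commented-out middle step could be filled in along these lines; the member partitions /dev/sdc2 and /dev/sdd2 are made-up names, so use whatever the old raid0 actually held:

```shell
swapoff /dev/md2
mdadm --stop /dev/md2                         # tear down the old raid0
mdadm --zero-superblock /dev/sdc2 /dev/sdd2   # wipe the raid0 metadata
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2
mkswap /dev/md2
swapon /dev/md2
```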
On Feb 20, 2009, at 1:53 PM, John R Pierce pierce@hogranch.com wrote:
Since he'll have 4 partitions free after breaking the raid0, create 2 RAID1 mds and add them both as priority-1 swap devices; the kernel will stripe pages across them.
-Ross
PS: I prefer to create swap from LVM to bypass this whole mess altogether. LVM and RAID both come from device-mapper, so it's proven, reliable, and well-performing technology. I wish ZFS were GPL'd, though, so Linux could adopt it and we'd be done talking about file systems and volume managers.
Since this was just a test box, I decided to reinstall using a RAID1 md0 for /boot, with the rest of the 4 drives in a RAID5 md1, LVM on top of the RAID5, and / and swap inside LVM. Thanks Ross, that was a great idea!
Regards, Stephen
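A rough sketch of the LVM half of that layout, assuming md1 is the RAID5; the volume group name, LV names, and sizes are invented for illustration:

```shell
pvcreate /dev/md1                  # make the RAID5 array an LVM physical volume
vgcreate vg0 /dev/md1              # volume group on top of it
lvcreate -L 20G -n lv_root vg0     # logical volume for /
lvcreate -L 2G -n lv_swap vg0      # logical volume for swap
mkswap /dev/vg0/lv_swap
swapon /dev/vg0/lv_swap
```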
Yes, I wasn't thinking too clearly when I made the swap RAID0. Well, I did think about performance, but not about drive failure :(
Thanks everyone for your help, Stephen
I did the same thing when I started out using Linux. At least you learned your lesson on a test box.
Here's what I'd suggest. Turn off the swap (only because it's a test box; on a production server you'd want to create a new swap first):
swapoff /dev/md2
Then stop the array:
mdadm --stop /dev/md2
Create the new swap now. I don't RAID swap. As stated in the Software RAID HOWTO: "There's no reason to use RAID for swap performance reasons. The kernel itself can stripe swapping on several devices, if you just give them the same priority in the /etc/fstab file."
Assuming you made your raid0 over all four drives (I've never done this post-install, but I believe it would go like this): go into fdisk and change each partition's type to 82, then:
mkswap /dev/sda2
mkswap /dev/sdb2
mkswap /dev/sdc2
mkswap /dev/sdd2
Edit the fstab file to give them the same priority, then enable them:
swapon -p 1 /dev/sda2
swapon -p 1 /dev/sdb2
swapon -p 1 /dev/sdc2
swapon -p 1 /dev/sdd2
and all should be good.
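The matching /etc/fstab entries might look like this (a sketch; with equal pri= values the kernel stripes swap across all four partitions):

```
/dev/sda2  swap  swap  defaults,pri=1  0 0
/dev/sdb2  swap  swap  defaults,pri=1  0 0
/dev/sdc2  swap  swap  defaults,pri=1  0 0
/dev/sdd2  swap  swap  defaults,pri=1  0 0
```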
John R Pierce wrote:
Dan Carl wrote:
I don't raid swap. As stated in the Software Raid How-To: "There's no reason to use RAID for swap performance reasons."
BEEEEP!
you want to MIRROR swap for RELIABILITY reasons. if a swap device fails, you're looking at a kernel panic.
You're right, I should have read the next paragraph in the howto. Going back under my rock in shame.