Hi all
I'm looking at setting up software RAID 10 using CentOS 5.1 x64 - what is the best way to do this?
I'm reading some sources on the internet and getting a lot of different "suggestions":
1. One suggestion is to boot up with a Live CD like Knoppix or SystemRescueCD, set up the RAID 10 partitions, and then install Linux from there.
2. Another is to set up a small RAID 1 on the first 2 HDD's, install Linux, boot up, and then set up the rest as RAID 10.
The others didn't really make sense to me, so how do I actually do this?
And then, how do I set up the partitioning? Do I set up /boot on a separate RAID "partition"? If so, what happens if I want to replace the 1st 2 HDD's with bigger ones?
Rudi Ahlers wrote:
<snip>
What's the hardware setup?
-Ross
Ross S. W. Walker wrote:
<snip>
What's the hardware setup?
I didn't really specify any, because I want to keep it purely software. Generally it would be on a generic PIV motherboard with 4-6 SATA ports, or even mixed SATA & IDE HDD's - all new, so at least 80GB per HDD.
Rudi Ahlers wrote:
<snip>
I didn't really specify any, cause I want to keep it purely software. Generally it would be on a generic PIV motherboard with 4 / 6 SATA, or even mixed SATA & IDE HDD's - all new, so at least 80GB per HDD
I was primarily interested in the # of HDDs that can be used.
If you have 6 disks, set up 2 disks as a RAID1 for the OS and the other 4 as a RAID10 for the data.
If you have 4 disks altogether:
1) create the /boot partition as a RAID1 across all 4 disks
2) create the remaining space as 2 separate RAID1s of type LVM
3) create a VG out of the 2 RAID1 PVs, then create root and swap LVs on the VG with a stripe of 2.
LVM striping over multiple RAID1 PVs provides the same performance as a native RAID10 array, plus you can add RAID1s later to increase the size/performance and dump/restore the data to stripe it across the larger set of PVs.
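In command form, done by hand from a rescue shell rather than through the installer, it would look roughly like this (an untested sketch; the sdX device names, partition numbers and sizes are only examples):
mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1   # RAID1 for /boot across all 4 disks
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sd[ab]2     # first RAID1 PV
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sd[cd]2     # second RAID1 PV
pvcreate /dev/md1 /dev/md2
vgcreate vg0 /dev/md1 /dev/md2
lvcreate -n root -L 8G -i 2 vg0     # -i 2 stripes the LV across both RAID1 PVs
lvcreate -n swap0 -L 4G -i 2 vg0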
-Ross
Ross S. W. Walker wrote:
<snip>
Thanx, this seems like a fairly easy way of doing it.
From what I gather, the data will fill up from the beginning of the stripe, right? So the 1st 2 HDD's will work hardest in the beginning, until there's enough data to fill the other 2 HDD's - unless of course I split the LV's across the PV's - i.e. put root on md1 & swap or var on md2, for example.
Does swap need to be part of the RAID set? Is there actually a performance boost?
Rudi Ahlers wrote:
<snip>
From what I gather, the data will fill up from the beginning of the stripe, right? So the 1st 2 HDD's will work hardest in the beginning, until there's enough data to fill the other 2 HDD's - unless of cause I split the LV's across the PV's - i.e. put root on md1 & swap or var on md2 for example.
Yes, data fills from the start of the disk, which is the fastest location and is better used for swap, so...
1) Create 2 4GB LVs during the install, swap0 and swap1, and install the OS into swap1.
2) After install and reboot, create an 8GB LV with an interleave of 2 so it stripes writes across the 2 MD PVs - call it 'root' - then use dump and restore to move the root data from swap1 to it, modify fstab and rebuild the initrds.
3) Once that's all done and you are booting off the 8GB 'root' LV, you can do a mkswap on the swap1 LV and add it to the list of swap devices in fstab with the same priority as swap0, and the kernel will stripe the swap data between them.
Then you have your 'root' LV striped, and your swap striped across the fastest portion of the disk.
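As an illustration, once swap1 exists the fstab entries might look something like this (a sketch assuming the VG is called vg0; equal pri= values make the kernel interleave the two swap devices):
/dev/vg0/swap0   swap   swap   pri=1   0 0
/dev/vg0/swap1   swap   swap   pri=1   0 0
mkswap /dev/vg0/swap1
swapon -a
swapon -s    # verify both devices show the same priority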
Does swap need to be part of the RAID set? Is there actually a performance boost?
No; as stated, create LVs for swap. Swap in 2.6 kernels is very good on all types of media: raw disk, LVM and swap files.
-Ross
<snip>
Does swap need to be part of the RAID set? Is there actually a performance boost?
Not a performance boost, but if the drive that swap is on fails while the OS has data there the system can choke horribly or even die. Swap on raid can sometimes be slightly slower. If you think your system won't swap any critical sleeping processes, you could be safe. But who can be that sure?
Scott Silva wrote:
<snip>
Swap on RAID should perform perfectly adequately these days, as opposed to, say, the 2.4 days. Swap on RAID1 or RAID10 won't show any noticeable performance degradation; swap on RAID5/6 might be slightly slower, and unbearable on a degraded RAID5/6, but if swap performance is a major concern then it may be time to add some RAM.
-Ross
Rudi Ahlers wrote:
And then, how do I setup the partitioning? Do I setup /boot on a separate RAID "partition"? If so, what happens if I want to replace the 1st 2 HDD's with bigger ones?
each partition is raided separately with mdadm.... you could make the whole thing one LVM partition that's raid10, then use LVM to dice it up into file systems.
if you have 4 drives and are doing software raid10, you won't be swapping drives with different sizes without a WHOLE lotta pain.
/boot shouldn't be mirrored, as the BIOS won't know how to boot it. Leave /dev/sdb1 the same size as /dev/sda1, call it /boot2, and try to remember to copy /boot to /boot2 each time you update the kernel.
/boot shouldn't be mirrored, as the BIOS won't know how to boot it.
Not true for all mobo's. Regardless, why not have a safe copy kept somewhere easier to manage than the following suggestion, IMHO? Let the computer worry about remembering to copy it.
leave /dev/sdb1 the same size as /dev/sda1 and call it /boot2 and try to remember to copy /boot to /boot2 each time you update the kernel.
John R Pierce wrote:
/boot shouldn't be mirrored, as the BIOS won't know how to boot it.
Wait. I thought a mirrored RAID has the same on-disk format as a plain partition, so a mirrored /boot will always boot. At least, it always did for me.
Florin Andrei wrote:
<snip>
Yes, the default md format stores its metadata at the end of the device, so the contents are accessible outside of the RAID configuration.
-Ross
John R Pierce wrote:
<snip>
if you have 4 drives and are doing software raid10, you won't be swapping drives with different sizes without a WHOLE lotta pain.
Ok, so how do I do this? Let's say I have 4x 160GB HDD's now, and plan on replacing them with 4x 500GB HDD's in the future?
What setup would help with a upgrade in the future?
/boot shouldn't be mirrored, as the BIOS won't know how to boot it. leave /dev/sdb1 the same size as /dev/sda1 and call it /boot2 and try to remember to copy /boot to /boot2 each time you update the kernel.
I understand this, but how do you boot from /boot2 on the second HDD if the 1st have failed?
Rudi Ahlers wrote on Thu, 17 Jul 2008 23:10:48 +0200:
/boot shouldn't be mirrored, as the BIOS won't know how to boot it. leave /dev/sdb1 the same size as /dev/sda1 and call it /boot2 and try to remember to copy /boot to /boot2 each time you update the kernel.
I understand this, but how do you boot from /boot2 on the second HDD if the 1st have failed?
You don't (*). I don't understand John's advice here. There is no problem md mirroring /boot. You just need to install grub a second time on the other disk. For that you have to boot from it. (I think I also did it successfully without booting from the other disk in the past, but last time I tried it it didn't want to work like I remembered it should.)
(*) Anyway, you would boot from a Rescue CD or such and rename it ...
Kai
Kai Schaetzl wrote:
<snip>
Yes, no problems, I had /boot mirrored across 4 drives (NAS box) and grub installed on each.
If you use labels for /boot in fstab you don't even need to edit fstab from a rescue CD, just remove the failed first drive and boot.
-Ross
Ross S. W. Walker wrote:
<snip>
If you use labels for /boot in fstab you don't even need to edit fstab from a rescue CD, just remove the failed first drive and boot.
Can you please explain this to me?
I've never used labels before, so could you maybe show me a sample of how it's set up?
Rudi Ahlers wrote on Sat, 26 Jul 2008 12:19:14 +0200:
I've never used labels before
CentOS uses labels in grub.conf and fstab by default if you do a standard installation (no mdraid devices). I wasn't aware that you can use labels in connection with md devices; I'd be interested in that as well.
Kai
Rudi Ahlers wrote:
<snip>
Disk labels are stored in the filesystem's superblock.
For ext2/ext3 file systems you use tune2fs with the -L option to define a label; then you can refer to it in fstab like this:
LABEL=boot /boot ext3 defaults 1 2
The problem with labels is that if, say, you have an external USB drive that happens to have a label called 'boot' as well, it is possible the OS will mount that instead (grub will still use the real 'boot' to boot off, since the physical disk is defined in grub), and then you will wonder why you are still booting the old kernel after you have upgraded to the new one!
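For example, setting and checking the label by hand would look something like this (a sketch assuming /boot lives on /dev/md0):
tune2fs -L boot /dev/md0    # write the label into the ext3 superblock
e2label /dev/md0            # verify it reads back as 'boot'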
-Ross
On Fri, Jul 18, 2008 at 12:31:19AM +0200, Kai Schaetzl wrote:
<snip>
You don't (*). I don't understand John's advice here. There is no problem md mirroring /boot. You just need to install grub a second time on the other disk. For that you have to boot from it. (I think I also did it successfully without booting from the other disk in the past, but last time I tried it it didn't want to work like I remembered it should.)
I think you mean "if you want to boot from it, you have to install grub on it". I've done this. It means if the first disk fails, you can then physically remove the failed disk, put the survivor in as the first disk, then boot from that.
To install grub to the second disk:
# grub
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
(blah blah blah) Running "install /boot/grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/boot/grub/stage2 /boot/grub/grub.conf"… succeeded Done.
quit
(or /dev/hdb, or whatever is appropriate).
To get back to the OP: I've done a RAID-10 under CentOS, and the problem I encountered was that the kernel wasn't smart enough to assemble the RAID without a properly populated /etc/mdadm.conf file.
See the details at http://wiki.xdroop.com/space/Linux/Software+Raid+compound+devices
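For reference, one way to populate /etc/mdadm.conf from a running system is roughly this (a sketch; regenerate the initrd afterwards so the early boot environment picks it up):
echo "DEVICE partitions" > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf    # appends ARRAY lines for all currently assembled arrays
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)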
On Fri, Jul 18, 2008 at 4:15 AM, David Mackintosh David.Mackintosh@xdroop.com wrote:
To get back to the OP: I've done a RAID-10 under CentOS, and the problem I encountered was that the kernel wasn't smart enough to assemble the RAID without a properly populated /etc/mdadm.conf file.
Well, I have many working configurations on cheap 4-disk machines. Guess I should say something.
/sd[abcd]1: 20 GB, ext3, RAID-1 as /dev/md0, linux-raid autodetect (type: fd), boot flag active, mounted as "/" (incl. /boot)
/sd[abcd][23...]: whatever you want
I installed grub like this:
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
[repeat these 3 steps for all disks respectively]
And this is from my grub.conf
"kernel /boot/2.6...../bzImage root=/dev/md0"
Kernel updates etc. do not cause any problems. I only update grub.conf and the bzImage file and reboot.
You should compile the RAID-related stuff directly into the kernel (not as modules) and set the RAID partitions to type "fd"; this way you do not need any mdadm.conf or related stuff.
I have had many recovery nightmares. Once, after a power problem, one system survived with only "sdc" working. It may not sound like much, but it gave me the chance to replace the failed disks, rebuild my unrecoverable RAID10 (which was on the second set of partitions) and restore backups without reinstalling the operating system. It depends on the mainboard, SATA controller and maybe the disks, but modern hardware usually recovers from hard errors and somehow continues with the working disks, despite some bus-speed penalty.
If I missed something or repeated facts already covered, sorry for being too lazy to read the thread from the beginning...
Thanks.
Rudi Ahlers wrote:
<snip>
Ok, so how do I do this? Let's say I have 4x 160GB HDD's now, and plan on replacing them with 4x 500GB HDD's in the future?
Personally I would never put an OS install on a higher RAID level than RAID1, because it gets too messy to upgrade in the way you suggested.
<snip>
Could you not get a system that had 2 drives for the OS and 4 drives for data?
I have setup 4 disk RAID10 systems before, but they were never intended to be upgraded (in place at least).
I can forward a couple of recipes, but let me first say that doing it from the CentOS install media requires 2 RAID1s and LVM striping, because the RAID10 option isn't on the media; it is functionally equivalent in both usable space and performance, though.
If you want to use the MD RAID10 driver you need to build the array from a working system and then install onto it.
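Creating such an array by hand would look roughly like this (a sketch; /dev/md3 and the sd[abcd]3 partitions are just example names, and the near-2 layout shown is mdadm's default for RAID10):
mdadm --create /dev/md3 --level=10 --raid-devices=4 --layout=n2 /dev/sd[abcd]3
mdadm --detail --scan >> /etc/mdadm.conf    # so the array is assembled at boot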
-Ross
Ross S. W. Walker wrote:
<snip>
So you're suggesting that I keep the OS separate from the data? But what happens if both of the 1st 2 drives with the OS fail, or need to be replaced?
<snip>
Could you not get a system that had 2 drives for the OS and 4 drives for data?
Nope, unfortunately not. It's a 2U rackmount chassis with space for only 4 HDD's. I have been thinking about installing the OS onto a USB memory stick, but have never actually got as far as trying to figure out how to do it.
<snip>
Please share your recipes, I'd like to give it a try :)
Rudi Ahlers wrote:
<snip>
Well I was talking 2 separate spindles for the OS, but I guess you got the idea from my later question.
<snip>
Yeah, the problem with a USB memory stick is that swap on the slow USB device will put the whole box into IO wait, and what if some wise guy comes along and says "Oh look, someone forgot a USB memory stick"?
<snip>
Please share your recipes, I'd like to give it a try :)
OK, well let me start with the first one, using 2 RAID1 PVs in a VG and striping.
This requires 2 major steps: one to set up and install, and another after installation to create the striped OS LV, because the installer doesn't let you pass options at LV creation time to make it interleaved.
1) Create 100MB or 256MB primary partitions on each disk as type MD RAID.
2) Add those 4 partitions to a RAID1 set; make the first 2 active and the other 2 spare.
3) Allocate the rest of the drive space on the 4 drives as partitions of type MD RAID.
4) Create 2 RAID1s, one out of the first 2 drives, the other out of the second 2 drives. Make them of type LVM.
5) Create LVM volume group vg0 out of the 2 PVs.
6) Create 2 4GB LVs in the VG, one called swap0, the other called rooti (not a typo, because after boot we will create a 'root').
7) Install into rooti and reboot.
8) After reboot and yum update, create an LV of say 8GB with the option '-i 2' on the lvcreate so it interleaves the allocation between the two RAID1 PVs; call it 'root'.
9) Do a dump/restore of the 'rooti' LV to the 'root' LV; to be safe do it in single user mode so the data isn't in flux.
10) Change fstab and grub.conf, swapping rooti for root, do a 'mkinitrd' for the running kernel and then reboot.
11) Keep in mind older initrd files will still have the old rooti in them! Maybe best to get rid of those kernels...
12) If all works well, do an lvrename of rooti to swap1, do a mkswap on it and add it to fstab with the same priority as swap0, then swapon -a and swap will be interleaved.
If you need further explanation on any of those steps just let me know.
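As a rough, untested command sketch of steps 8-12 above (assuming the VG is vg0, ext3 filesystems, and the stock CentOS 5 mkinitrd; adjust names and sizes to taste):
lvcreate -n root -L 8G -i 2 vg0            # new interleaved root LV
mkfs.ext3 /dev/vg0/root
mkdir /mnt/newroot
mount /dev/vg0/root /mnt/newroot
cd /mnt/newroot
dump -0f - /dev/vg0/rooti | restore -rf -  # copy the old root, ideally in single user mode
# edit /mnt/newroot/etc/fstab and /boot/grub/grub.conf to use /dev/vg0/root
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
reboot
# once booted off the new root:
lvrename vg0 rooti swap1
mkswap /dev/vg0/swap1
# add /dev/vg0/swap1 to fstab with the same pri= as swap0
swapon -a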
I'll give my off-line recipe after I get home from this business trip.
-Ross
on 7-17-2008 3:46 PM Rudi Ahlers spake the following:
<snip>
So you're suggesting that I keep the OS separate from the data? But what happens if both the 1st 2 drives with the OS fails, or needs to be replaced?
Raid is not a substitute for backup. It is just an availability measure. What if the entire box shorts and catches fire? What if the power supply shorts and sends 110 volts over the 5 volt lines? I have had both of these scenarios in 20 years of IT.
<snip>
nope, unfortunately not. It's a 2U rackmount chassis with space for only 4 HDD's. I have been thinking about installing the OS onto a USB memory stick, but have never actually got as far as trying to figure out how todo it.
Maybe a CF adapter, but not a USB stick. USB has a very high latency because it is PIO and not DMA.
<snip>
Please share your recipes, I'd like to give it a try :)
A pair of RAID1's with LVM properly striped across them should be fairly equal to RAID10 in speed and latency. The RAID10 code is still fairly immature in the MD drivers. You could set aside a small bit of space on all 4 drives for a RAID1 boot partition and put everything else in LVM. CentOS seems to install everything but /boot in LVM by default these days.
Ross S. W. Walker wrote:
<snip>
Could you not get a system that had 2 drives for the OS and 4 drives for data?
No, unfortunately not :( I have a 2U rackmount case with very limited space inside.
<snip>
Would you mind forwarding me your recipes? I'd love to try it out, and I have some time right now to set up the RAID 10 system.