on 7-17-2008 3:46 PM Rudi Ahlers spake the following:
Ross S. W. Walker wrote:
Rudi Ahlers wrote:
John R Pierce wrote:
Rudi Ahlers wrote:
And then, how do I set up the partitioning? Do I set up /boot on a separate RAID "partition"? If so, what happens if I want to replace the first 2 HDDs with bigger ones?
each partition is RAIDed separately with mdadm... you could make the whole thing one LVM partition that's RAID10, then use LVM to dice it up into file systems.
if you have 4 drives and are doing software RAID10, you won't be swapping drives with different sizes without a WHOLE lotta pain.
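For what it's worth, a minimal sketch of the one-big-RAID10-under-LVM idea, assuming the four drives are /dev/sda through /dev/sdd with one large partition each (device names, volume names and sizes here are only examples):

  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
  pvcreate /dev/md0                     # make the array an LVM physical volume
  vgcreate vg_data /dev/md0             # one volume group on top of it
  lvcreate -L 20G -n lv_srv vg_data     # dice it up into logical volumes
  mkfs.ext3 /dev/vg_data/lv_srv

From there you carve out whatever file systems you need with further lvcreate calls.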
OK, so how do I do this? Let's say I have 4x 160GB HDDs now, and plan on replacing them with 4x 500GB HDDs in the future?
Personally I would never put an OS install on a higher RAID level than RAID1, because it gets too messy to upgrade like you suggested.
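Just to illustrate, the usual upgrade dance on a plain RAID1 pair looks roughly like this (device and array names are only examples, and you repeat the fail/replace/resync cycle for the second member before growing):

  mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
  # power down, swap in the bigger drive, partition it, then:
  mdadm /dev/md0 --add /dev/sda1
  # wait for the resync to finish (watch /proc/mdstat), repeat for /dev/sdb1,
  # then grow the array into the new space and resize whatever sits on it:
  mdadm --grow /dev/md0 --size=max
  resize2fs /dev/md0      # or pvresize /dev/md0 if LVM sits on the array

As far as I know, a 4-disk MD RAID10 has no equally simple grow path, which is where the pain comes in.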
So you're suggesting that I keep the OS separate from the data? But what happens if both of the first 2 drives with the OS fail, or need to be replaced?
RAID is not a substitute for backup; it is just an availability measure. What if the entire box shorts and catches fire? What if the power supply shorts and sends 110 volts over the 5-volt lines? I have had both of those scenarios in 20 years of IT.
What setup would help with an upgrade in the future?
/boot shouldn't be mirrored, as the BIOS won't know how to boot it. Leave /dev/sdb1 the same size as /dev/sda1, call it /boot2, and try to remember to copy /boot to /boot2 each time you update the kernel.
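A minimal sketch of that copy step, assuming /dev/sdb1 is already formatted and mounted at /boot2 (paths are only examples):

  rsync -a --delete /boot/ /boot2/

You could drop that line into a small cron job so it doesn't get forgotten after a kernel update.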
I understand this, but how do you boot from /boot2 on the second HDD if the first one has failed?
Could you not get a system that had 2 drives for the OS and 4 drives for data?
Nope, unfortunately not. It's a 2U rackmount chassis with space for only 4 HDDs. I have been thinking about installing the OS onto a USB memory stick, but have never actually got as far as trying to figure out how to do it.
Maybe a CF adapter, but not a USB stick. USB has a very high latency because it is PIO and not DMA.
I have setup 4 disk RAID10 systems before, but they were never intended to be upgraded (in place at least).
I can forward a couple of recipes, but let me first say that doing it from the CentOS install media requires 2 RAID1s and LVM striping, because the RAID10 option isn't on the media; it is functionally equivalent in both usable space and performance.
Please share your recipes, I'd like to give it a try :)
A pair of RAID1s with LVM properly striped across them should be fairly equal to RAID10 in speed and latency. The RAID10 code is still fairly immature in the MD drivers. You could set aside a small bit of space on all 4 drives for a RAID1 boot partition and put everything else in LVM. CentOS seems to install everything but /boot in LVM by default these days.
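Roughly, and only as a sketch rather than an actual tested recipe (assuming four drives /dev/sda through /dev/sdd, each with a small first partition for /boot and a large second partition for the rest; names and sizes are examples):

  # 4-way RAID1 for /boot, so any surviving drive still has a copy
  mdadm --create /dev/md0 --level=1 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
  # two RAID1 pairs for everything else
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2
  # stripe LVM across the two mirrors (-i 2 = two stripes, -I 64 = 64KB stripe)
  pvcreate /dev/md1 /dev/md2
  vgcreate vg0 /dev/md1 /dev/md2
  lvcreate -i 2 -I 64 -L 100G -n lv_root vg0
  mkfs.ext3 /dev/vg0/lv_root

I believe the RAID1s and the volume group can be built from the installer's RAID/LVM screens; the striped lvcreate is the piece you would normally have to do by hand.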