[CentOS] preferred software RAID 10?

Mon Jul 28 14:28:30 UTC 2008
Ross S. W. Walker <RWalker at medallion.com>

Rudi Ahlers wrote:
> Ross S. W. Walker wrote:
> > Rudi Ahlers wrote:
> >> Ross S. W. Walker wrote:
> >>> Rudi Ahlers wrote:
> >>>       
> >>>> Hi all
> >>>>
> >>>> I'm looking at setting up software RAID 10, using CentOS 5.1 x64 - what 
> >>>> is the best way to do this?
> >>>>
> >>>> I'm reading some sources on the internet, and get a lot of different 
> >>>> "suggestions"
> >>>>
> >>>> 1. One suggestion says to boot up with a Live CD like Knoppix or 
> >>>> SystemRescueCD, set up the RAID 10 partitions, and then install Linux 
> >>>> from there.
> >>>> 2. Another is to set up a small RAID 1 on the first 2 HDDs, install 
> >>>> Linux, boot up, and then set up the rest as RAID 10.
> >>>>
> >>>> The others didn't really make sense to me, so how do I 
> >>>> actually do this?
> >>>>
> >>>> And then, how do I set up the partitioning? Do I put /boot on a 
> >>>> separate RAID "partition"? If so, what happens if I want to 
> >>>> replace the first 2 HDDs with bigger ones?
> >>>>     
> >>>>         
> >>> What's the hardware setup?
> >>>
> >>>       
> >> I didn't really specify any, because I want to keep it purely software. 
> >> Generally it would be on a generic PIV motherboard with 4 to 6 
> >> SATA ports, or even mixed SATA & IDE HDDs - all new, so at least 80GB per HDD
> >>     
> >
> > I was primarily interested in the # of HDDs that can be used.
> >
> > If you have 6 disks, set up 2 disks as a RAID1 for the OS and the
> > other 4 as a RAID10 for the data.
> >
> > If you have 4 disks altogether:
> >
> > 1) create the /boot partition as a RAID1 across all 4 disks
> >
> > 2) create the remaining space as 2 separate RAID1s for use as LVM PVs
> >
> > 3) create a VG out of the 2 RAID1 PVs, then create root and swap LVs
> > on the VG with a stripe count of 2.
> >
> > LVM striping over multiple RAID1 PVs provides the same performance
> > as a native RAID10 array, plus you can add RAID1s later to
> > increase the size/performance and dump/restore the data to stripe
> > it across the larger set of PVs.
> >
> 
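To put commands to the steps above, here is a rough, untested sketch
for the 4-disk case. I'm assuming the disks are /dev/sda-/dev/sdd,
each with a small first partition for /boot and the rest of the disk
in a second partition, both set to type fd (Linux raid autodetect);
adjust names and sizes to taste:

  # /boot as a 4-way RAID1 so the box can boot off any disk
  mdadm --create /dev/md0 --level=1 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

  # two RAID1 pairs to act as LVM PVs
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2

  # one VG over both mirrors; -i 2 stripes an LV across the 2 PVs
  pvcreate /dev/md1 /dev/md2
  vgcreate vg0 /dev/md1 /dev/md2
  lvcreate -n root -L 8G -i 2 vg0
  lvcreate -n swap0 -L 4G -i 2 vg0
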
> Thanx, this seems like a fairly easy way of doing it.
> 
> From what I gather, the data will fill up from the beginning of the 
> stripe, right? So the first 2 HDDs will work hardest in the beginning, 
> until there's enough data to fill the other 2 HDDs - unless of course I 
> split the LVs across the PVs - i.e. put root on md1 & swap or var on 
> md2 for example.

Yes, data fills from the start of the disk, which is the fastest
region and therefore the best place for swap, so...

1) During the install, create two 4GB LVs, swap0 and swap1, and
install the OS root into swap1 (despite the name, it is just an
ordinary LV at this point).

2) After install and reboot, create an 8GB LV, call it 'root', with a
stripe count of 2 so it interleaves writes across the 2 MD PVs. Use
dump and restore to move the root data from swap1 onto it, then modify
the fstab and rebuild the initrd (see the sketch below).

3) Once that's all done and you are booting off the 8GB 'root' LV,
you can run mkswap on the swap1 LV and add it to the list of swap
devices in fstab with the same priority as swap0; the kernel will
then stripe swap pages between the two.

Then you have your 'root' LV striped, and your swap striped across
the fastest portion of the disks.
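
In command form, steps 2 and 3 would look roughly like this (untested;
I'm assuming the VG is called vg0 and the temporary root LV is swap1,
as above):

  # step 2: build the striped root and copy the installed system over
  lvcreate -n root -L 8G -i 2 vg0
  mkfs.ext3 /dev/vg0/root
  mkdir /mnt/newroot
  mount /dev/vg0/root /mnt/newroot
  dump -0f - / | ( cd /mnt/newroot && restore -rf - )

  # point the new fstab (and the root= argument in grub.conf) at
  # /dev/vg0/root, then rebuild the initrd so it activates the LV at
  # boot (easiest done chrooted into /mnt/newroot with /boot, /dev
  # and /proc bind-mounted)
  vi /mnt/newroot/etc/fstab /boot/grub/grub.conf
  mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

  # step 3: after rebooting onto 'root', recycle swap1 as real swap
  mkswap /dev/vg0/swap1
  swapon -p 1 /dev/vg0/swap1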

> Does swap need to be part of the RAID set? Is there actually a 
> performance boost?

No, as stated, create LVs for swap; swap performance in 2.6 kernels
is very good on all types of media: raw disk, LVM, and swap files.
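
For the record, equal-priority swap entries in /etc/fstab would look
something like this (LV names assumed from above); the kernel
round-robins pages across swap devices of the same priority, which is
what gives you the striping:

  /dev/vg0/swap0   swap   swap   pri=1   0 0
  /dev/vg0/swap1   swap   swap   pri=1   0 0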

-Ross
