[CentOS] Steps to create a mirrored CentOS system?

Adam Thompson athompson at sjsd.net
Mon Feb 5 18:08:03 UTC 2007


CentOS mailing list <centos at centos.org> writes:
>Important: The partition you boot from must not be striped. It may not
>be raid-5 or raid-0.
>
>Note: On the one hand, if you want extra stability, consider using
>raid-1 (or even raid-5) for your swap partition(s) so that a drive
>failure would not corrupt your swap space and crash applications that
>are using it. On the other hand, if you want extra performance, just
>let the kernel use distinct swap partitions as it does striping by
>default.
>
I have a number of older 3-bay SCSI systems that aren't worth adding a
RAID card to, so the formula that gives me the best *OVERALL*
performance for general-purpose use is:

	/dev/sda:
		sda1 - 100 MB - type FD (Linux raid autodetect)
		sda2 - 512 MB - type FD
		sda3 - (rest of drive) - type FD
	/dev/sdb, /dev/sdc - identical setup to /dev/sda

	/dev/md0:
		RAID-1, 3 members, ext3fs, /boot
	/dev/md1:
		RAID-5, 3 members, swap
	/dev/md2:
		RAID-5, 3 members, ext3fs, /
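
For reference, roughly the equivalent mdadm commands (the installer can
build these arrays for you at install time; device names follow the
sda/sdb/sdc layout above, and the exact defaults depend on your mdadm
version, so treat this as a sketch only):

	mdadm --create /dev/md0 --level=1 --raid-devices=3 \
		/dev/sda1 /dev/sdb1 /dev/sdc1        # /boot, mirrored on all 3
	mdadm --create /dev/md1 --level=5 --raid-devices=3 \
		/dev/sda2 /dev/sdb2 /dev/sdc2        # swap
	mdadm --create /dev/md2 --level=5 --raid-devices=3 \
		/dev/sda3 /dev/sdb3 /dev/sdc3        # root
	mkfs.ext3 /dev/md0                           # /boot
	mkswap /dev/md1                              # swap on the RAID-5 set
	mkfs.ext3 /dev/md2                           # /
	mdadm --detail --scan >> /etc/mdadm.conf     # consistent assembly at boot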

GRUB and LILO can both boot off RAID-1 sets.
I don't need EVMS, LVM, or device-mapper (DM) support, so I avoid that
overhead.
The MD code automatically replicates RAID-1 data to every member
equally, and balances read activity among all available members.  The
read performance boost is largely irrelevant since it's only the /boot
filesystem.
The RAID-5 implementation in the MD code is often faster than dedicated
RAID controllers: these are dual P-III 1.1GHz systems, and the MD
RAID-5 code outperforms the IBM ServeRAID 4M controller by about 50%.
Of course, I don't have a true hardware RAID controller with battery
backup, etc., either...
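
As an aside, the usual trick for getting GRUB (legacy, as shipped with
CentOS) onto all three MBRs is to remap each drive to (hd0) in turn, so
whichever drive the BIOS picks can boot on its own.  Rough sketch only,
assuming /boot is the first partition on each drive as laid out above;
repeat the three commands for sdb and sdc:

	grub
	grub> device (hd0) /dev/sda
	grub> root (hd0,0)
	grub> setup (hd0)
	grub> quit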

Gains:
 -no RAID controller purchase
 -triple-redundant boot (requires GRUB to be installed on all 3 drives;
see the sketch above and the link posted previously)
 -swap on RAID, even swapped processes can survive a single-drive
failure
 -root on RAID, system can survive a single-drive failure
 -very good read/write performance as long as CPU and I/O aren't maxed
out simultaneously (which is rare for *my* workloads)
 -all drives have identical partition tables; recovery is simplified
because initializing a replacement drive consists of:
		1. sfdisk -d /dev/sda | sfdisk /dev/sdb
		   (where sda is OK, and sdb has been replaced)
		2. reboot (or manually re-add each partition with mdadm, e.g.
		   mdadm --manage /dev/md0 --add /dev/sdb1, and likewise for
		   md1/sdb2 and md2/sdb3; see the sketch after this list)
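
Spelled out, the whole recovery for a replaced sdb looks roughly like
this (sketch only; adjust the device names to whichever drive actually
died):

	sfdisk -d /dev/sda | sfdisk /dev/sdb       # clone the partition table
	mdadm --manage /dev/md0 --add /dev/sdb1    # re-add the /boot mirror member
	mdadm --manage /dev/md1 --add /dev/sdb2    # re-add the swap RAID-5 member
	mdadm --manage /dev/md2 --add /dev/sdb3    # re-add the root RAID-5 member
	cat /proc/mdstat                           # watch the rebuilds progress

Don't forget to put GRUB back on the new drive's MBR (see the sketch
further up), or that drive won't be bootable on its own.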

Losses:
 -no hardware RAID controller:
   * the array might not survive a power loss or system crash as
     cleanly as it would with a hardware controller
   * no audible alarm to scream its head off when a drive fails
     (though mdadm's monitor mode can at least send mail; see the
     sketch after this list)
   * no hardware offload, so performance is less consistent when the
     CPUs are maxed out at 100%
   * no dedicated disk cache (shares the filesystem buffer cache)
 -no fine-grained control over raid types & layout
 -single root filesystem
 -maximum of 4 primary partitions; more than 4 MD devices will require
extended partitions
 -no LVM/EVMS, therefore more difficult to change partitions later
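
One partial workaround for the missing alarm: mdadm has a monitor mode
that can mail root (or run a program) when an array degrades.  Sketch
only; on CentOS the stock mdmonitor service should do much the same
thing once MAILADDR is set in /etc/mdadm.conf:

	mdadm --monitor --scan --mail=root --daemonise   # mail on failure events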

Overall, this scheme works very well for me.  It's not for everyone,
but it matches my hardware and my workload almost perfectly: my servers
generally do CPU work *or* I/O work, not both simultaneously, I have
plenty of RAM for buffers, and the dual CPUs help keep performance
consistent.


-Adam Thompson
 Divisional IT Department,  St. James-Assiniboia School Division
 150 Moray St., Winnipeg, MB, R3J 3A2
 athompson at sjsd.net / tel: (204) 837-5886 x222 / fax: (204) 885-3178



