Hi all,
I am about to rebuild one of my boxes with CentOS 4.4, and I would like to set it up as a simple mirrored system.
The box is a 1U server and has 3 SCSI drive bays. I can use 1 drive as the OS drive and the other 2 for the content (basically websites), or I can just use 2 drives and mirror those.
What would you folks recommend?
Thanks
Dale
Dale wrote:
Hi all,
I am about to rebuild one of my boxes with CentOS 4.4, and I would like to set it up as a simple mirrored system.
The box is a 1U server and has 3 SCSI drive bays. I can use 1 drive as the OS drive and the other 2 for the content (basically websites), or I can just use 2 drives and mirror those.
What would you folks recommend?
You'd have slightly better performance if you put / and swap on the first drive and make /var a RAID1 with the other two drives. You'll have better fail-over protection if you use 2 drives with all partitions mirrored. You'll have to decide which is more important.
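As a rough sketch of the mirrored-/var case (device names assumed here: sda as the standalone OS drive, sdb1 and sdc1 as matching type-fd partitions on the other two drives):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mkfs.ext3 /dev/md0
  # then mount /dev/md0 on /var via /etc/fstab

The installer should be able to build the same thing for you if you create the two partitions as "software RAID" members and combine them into a RAID1 device.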
Les,
Les Mikesell wrote:
Dale wrote:
Hi all,
I am about to rebuild one of my boxes with CentOS 4.4, and I would like to set it up as a simple mirrored system.
The box is a 1U server and has 3 SCSI drive bays. I can use 1 drive as the OS drive and the other 2 for the content (basically websites), or I can just use 2 drives and mirror those.
What would you folks recommend?
You'd have slightly better performance if you put / and swap on the first drive and make /var a RAID1 with the other two drives. You'll have better fail-over protection if you use 2 drives with all partitions mirrored. You'll have to decide which is more important.
Thank you for the recommendation. I really appreciate it. My main concern is that the data be mirrored; performance shouldn't be an issue. But I will think about which way would be best for my situation.
One other thing: I tried setting up a mirrored system once before (using the CentOS GUI installer) and wasn't able to figure out the proper steps to get the mirroring set up.
If there is a web resource on how to do this, I would very much appreciate a pointer to it.
Thanks again!
Dale
On Feb 4, 2007, at 11:55 PM, Dale wrote:
One other thing: I tried setting up a mirrored system once before (using the CentOS GUI installer) and wasn't able to figure out the proper steps to get the mirroring set up.
If there is a web resource on how to do this, I would very much appreciate a pointer to it.
A little while back, Steve Boley from Dell wrote up a thorough, straightforward guide to building a mirrored-drive system with Red Hat. You may be able to find it if you Google for "grub" "software raid"; alternatively, I've preserved a copy on my wiki here:
http://www.vecna.org/wiki/GrubAndSoftwareRAID
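The short version, in case the link ever goes away: the installer typically only puts GRUB on the first drive, so you have to install it on the other drive(s) yourself for the box to still boot after a drive failure. Roughly (a sketch; adjust device names to your layout):

  grub
  grub> device (hd1) /dev/sdb
  grub> root (hd1,0)        # first partition on sdb, i.e. the /boot mirror
  grub> setup (hd1)
  grub> quit

The writeup goes into more detail than this.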
-steve
On 2/4/07, Dale lists@ehome.net wrote:
If there is a web resource on how to do this, I would very much appreciate a pointer to it.
The Gentoo site at http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml has this advice:
[quote]
            /dev/sda        /dev/sdb        Type
/dev/md1    /boot           /boot           Raid-1 (mirroring)
            swap            swap            Normal partitions
/dev/md3    /               /               Raid-1 (mirroring)
/dev/md4    LVM2 volumes                    Raid-0 (striped)
Important: The partition you boot from must not be striped. It may not be raid-5 or raid-0.
Note: On the one hand, if you want extra stability, consider using raid-1 (or even raid-5) for your swap partition(s) so that a drive failure would not corrupt your swap space and crash applications that are using it. On the other hand, if you want extra performance, just let the kernel use distinct swap partitions as it does striping by default. [/quote]
Like Dennis Miller, "they could be wrong about that."
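One note on the swap part: the kernel only interleaves across swap areas that share the same priority, so if you go the plain-partitions route you'd want something like

  swapon -p 1 /dev/sda2
  swapon -p 1 /dev/sdb2

or pri=1 on both swap lines in /etc/fstab (device names here are just for illustration).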
OTOH, you could get a 3ware RAID controller and run RAID1 and even a hot spare.
ldv
CentOS mailing list centos@centos.org writes:
Important: The partition you boot from must not be striped. It may not be raid-5 or raid-0.
Note: On the one hand, if you want extra stability, consider using raid-1 (or even raid-5) for your swap partition(s) so that a drive failure would not corrupt your swap space and crash applications that are using it. On the other hand, if you want extra performance, just let the kernel use distinct swap partitions as it does striping by default.
I have a number of older 3-bay SCSI systems that aren't worth adding a RAID card to, so the formula that gives me the best *OVERALL* performance for general-purpose use is:
/dev/sda:
  sda1 - 100Mb           - type FD
  sda2 - 512Mb           - type FD
  sda3 - (rest of drive) - type FD
/dev/sdb, /dev/sdc - identical setup to /dev/sda
/dev/md0: RAID-1, 3 members, ext3fs, /boot
/dev/md1: RAID-5, 3 members, swap
/dev/md2: RAID-5, 3 members, ext3fs, /
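Built by hand it looks something like this (a sketch; the installer can create the same layout for you):

  mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
  mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
  mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3
  mkfs.ext3 /dev/md0                     # /boot
  mkswap /dev/md1 && swapon /dev/md1     # swap
  mkfs.ext3 /dev/md2                     # /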
GRUB and LILO can both boot off RAID-1 sets. I don't need EVMS/LVM support, and I don't need DM support either, so I avoid that overhead. The MD code automatically replicates RAID-1 data to every member equally and balances read activity among all available members; the read performance boost is largely irrelevant here since it's only the /boot filesystem. The RAID-5 implementation in the MD code is often faster than dedicated RAID controllers - these are dual P-III 1.1GHz systems, and the MD RAID5 code outperforms the IBM ServeRAID 4M controller by about 50%. Of course, I don't have a true hardware RAID controller with battery backup etc. either...
Gains:
- no RAID controller purchase
- triple-redundant boot (requires GRUB to be installed on all 3 drives; see the link posted previously)
- swap on RAID: even swapped processes can survive a single-drive failure
- root on RAID: the system can survive a single-drive failure
- very good read/write performance as long as CPU and I/O aren't maxed out simultaneously (which is rare for *my* workloads)
- all drives have identical partition tables, so recovery is simplified; initialization of a new drive consists of:
  1. sfdisk -d /dev/sda | sfdisk /dev/sdb   (where sda is OK and sdb has been replaced)
  2. reboot (or manually fix with mdadm --manage /dev/md[012] --add /dev/sdb[123]), then watch the resync as sketched below
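(The resync check after a drive swap is just:

  cat /proc/mdstat
  mdadm --detail /dev/md2

and wait until everything is back to active/clean.)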
Losses:
- no hardware RAID controller:
  * drive array might not survive power loss or system crash as cleanly as with a hardware controller
  * no alarm to scream its head off when a drive fails
  * no hardware offload for consistent performance when CPUs are maxed out at 100%
  * no dedicated disk cache (shares the filesystem buffer cache)
- no fine-grained control over RAID types & layout
- single root filesystem
- maximum of 4 primary partitions; more than 4 MD devices will require extended partitions
- no LVM/EVMS, therefore more difficult to change partitions later
Overall, this scheme works very well for me. It's not for everyone, but it matches my hardware and my workload almost perfectly: my servers are generally doing CPU work *or* I/O work, not both simultaneously; I have lots of RAM for buffers; and I have dual CPUs, which helps performance stay consistent.
-Adam Thompson Divisional IT Department, St. James-Assiniboia School Division 150 Moray St., Winnipeg, MB, R3J 3A2 athompson@sjsd.net / tel: (204) 837-5886 x222 / fax: (204) 885-3178
I want to thank Les, Steve, Larry, Morten and Adam for your suggestions and help. I am finally getting ready to set this server up today. Your help and suggestions are greatly appreciated.
Thanks!
Dale
Can I get the final outline including pointers to relevant documentation? Doing it on *BSD is simple, but doing it under Linux seems to be a PITA.
:P
Peter
Dale wrote:
I want to thank Les, Steve, Larry, Morten and Adam for your suggestions and help. I am finally getting ready to set this server up today. Your help and suggestions are greatly appreciated.
Thanks!
Dale
Les Mikesell wrote:
You'd have slightly better performance if you put / and swap on the first drive and make /var a RAID1 with the other two drives. You'll have better fail-over protection if you use 2 drives with all partitions mirrored. You'll have to decide which is more important.
You can also mirror with all three drives. Linux LVM mirroring is still at a stone age level, but you can use md to mirror.
Reserve the first 100MB on each drive for /boot, then make two equal-size partitions on each drive for md RAID. Disk1-P1 mirrors Disk2-P1, Disk2-P2 mirrors Disk3-P1, and Disk1-P2 mirrors Disk3-P2. Now define all three md devices as PVs and add them to one or more VGs. Depending on how you spread the load there could be performance bottlenecks, but you can now use all the space on all three drives.
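A sketch of that layout, with made-up device names (sda/sdb/sdc; partition 1 is the small /boot area, partitions 2 and 3 are the two equal-size RAID partitions on each drive):

  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc2
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda3 /dev/sdc3
  pvcreate /dev/md1 /dev/md2 /dev/md3
  vgcreate vg0 /dev/md1 /dev/md2 /dev/md3
  lvcreate -L 20G -n web vg0
  mkfs.ext3 /dev/vg0/web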
It would be better and more flexible if LVM could map multiple PEs to a single LE, but alas, that is not the path the LVM developers chose. They are too stuck in the DOS partition-table paradigm, I think, and treat filesystems/LVs as the smallest logical block.
In case you are not familiar with LVM terminology:
VG: Volume Group - a collection of storage space for use
PV: Physical Volume - storage space from a disk or partition
PE: Physical Extent - a block of storage from a VG
LE: Logical Extent - where you would actually define an LV (if Linux LVM supported PE mapping)
LV: Logical Volume - where you create filesystems, swap space or raw devices
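(On a running system you can see how these pieces fit together with the display commands, e.g.:

  pvdisplay
  vgdisplay -v
  lvdisplay

each of which shows the PE/LE counts in use.)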