On Wednesday, 9 May 2007, Alfred von Campe wrote:
The recent thread on Anaconda and RAID10 made me start to think about how to partition a server I'm about to set up. I have two 146GB SCSI drives on an IBM x3550. It will be used as a build system. As such, there is no critical data on these systems, as the source code will be checked out of our source control system, and the build results are copied to another system. I usually build my systems with Kickstart, so if a disk dies, I can rebuild it quickly.
Given all that, how would you partition these disks? I keep going back and forth between various options (HW RAID, SW RAID, LVM, etc.). I guess speed is more important to me than redundancy. I'm tempted to install the OS on one drive and use the entire second drive for data. This way I can rebuild or upgrade the OS without touching the data. But that will waste a lot of disk space, as the OS does not need 146GB.
The only thing I'm pretty sure of is to put 2GB of swap on each drive, but after that everything is still up in the air. I am looking for any and all suggestions from the collective wisdom and experience of this list.
Ask yourself this question: does the company lose money when the build system is down for a restore? How much? How long does a restore take?
Mirroring disks is not a replacement for backup. It is a way to improve the availability of a system (no downtime when a disk dies), so it can be worthwhile even when there is no important data on the machine. If availability matters to you, use RAID-1 across both disks.
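Since you build with Kickstart anyway, a minimal sketch of that mirrored layout could look like this (assuming the drives show up as sda and sdb; sizes and fstype are just placeholders):

  clearpart --all --initlabel
  part raid.01 --size=1 --grow --ondisk=sda
  part raid.02 --size=1 --grow --ondisk=sdb
  raid / --fstype ext3 --level=1 --device=md0 raid.01 raid.02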
If reduced availability is not a problem for you (i.e. you can easily afford a day of downtime when a disk dies), use RAID-0 across both disks. It will give you a nice performance boost, and on a build host in particular people will love the extra speed of the striped array.
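If you ever set the stripe up by hand instead of from Kickstart, something along these lines with mdadm should do (device names and the /data mount point are assumptions):

  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
  mkfs.ext3 /dev/md0
  mount /dev/md0 /data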
A combination of RAID-0 and RAID-1 may also be an option: make a small RAID-1 partition for the operating system (say 20GB) and a big RAID-0 partition for the data. This way you get maximum performance on the data partition, but when a disk dies you do not need to reinstall the operating system. Just put in a new disk, let the RAID-1 rebuild itself in the background and restore your data. This can considerably reduce the downtime (and the amount of work for you) when a disk dies.
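A rough Kickstart sketch of that mixed layout, including the 2GB swap partitions you mentioned (again assuming sda/sdb and a /data mount point; adjust sizes to taste):

  clearpart --all --initlabel
  part swap    --size=2048 --ondisk=sda
  part swap    --size=2048 --ondisk=sdb
  part raid.11 --size=20480 --ondisk=sda
  part raid.12 --size=20480 --ondisk=sdb
  part raid.21 --size=1 --grow --ondisk=sda
  part raid.22 --size=1 --grow --ondisk=sdb
  raid /     --fstype ext3 --level=1 --device=md0 raid.11 raid.12
  raid /data --fstype ext3 --level=0 --device=md1 raid.21 raid.22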
HW vs. SW RAID: kind of a religious question. HW has some advantages when using RAID-5 or RAID-6 (less CPU load); with RAID-0 or RAID-1 there should not be any difference performance-wise. HW RAID does give you some advantages in terms of handling, e.g. hotplugging of disks, a nice administration console, RAID-10 during install ;-), etc. It's up to you to decide whether that is worth the money. Plus you need to find a controller that is well supported in Linux.
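For comparison, replacing a failed disk in Linux SW RAID is just a handful of mdadm commands, something like this (md device and partition names are placeholders):

  mdadm /dev/md0 --fail /dev/sdb2
  mdadm /dev/md0 --remove /dev/sdb2
  # swap in the new disk, recreate the partition table, then:
  mdadm /dev/md0 --add /dev/sdb2
  cat /proc/mdstat    # watch the rebuild progress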
regards, Andreas Micklei
P.S. Putting lots of RAM into the machine (for the buffer cache) has more impact than RAID-0 in my experience. Of course that depends on your filesystem usage pattern.
P.P.S. Creating one swap partition on each disk is correct, because swapping to RAID-0 is useless: the kernel already spreads swap across multiple partitions of equal priority, so striping adds nothing. Only if you decide to use RAID-1 for the whole disk should you also put swap on the RAID-1, so a dying disk does not take the machine down with it.
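For example, giving both swap partitions equal priority in /etc/fstab lets the kernel use them in parallel (partition names are just placeholders):

  /dev/sda2  swap  swap  defaults,pri=1  0 0
  /dev/sdb2  swap  swap  defaults,pri=1  0 0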