[Repost - for some reason my reply from earlier this morning did not go through]
Thanks everyone for all your suggestions/comments.
Ask yourself this question: Does the company lose money when the build system is down for a restore? How much? How long does a restore take?
No, no money lost. If I keep a spare drive, it should take less than an hour to restore the system.
Mirroring disks is not a replacement for backup. It is a way to improve the availability of a system (no downtime when a disc dies), so it can be interesting even when there is no important data on the machine. If this is important to you, use RAID-1 for the entire discs.
That would waste the most disk space, but it is certainly a possibility.
If decreased availability is not a problem for you (you can easily afford a day of downtime when a disc dies), use RAID-0 for the entire discs. It will give you a nice performance boost. Especially on a build host, people will love the extra performance of the disc array.
But if either disk dies, the whole system is unusable. I don't think I will use this option.
A combination of RAID-0 and RAID-1 may also be an option: make a small RAID-1 partition for the operating system (say 20GB) and a big RAID-0 partition for the data. This way you get maximum performance on the data partition, but when a disc dies you do not need to reinstall the operating system. Just put in a new disc, let the RAID-1 rebuild itself in the background and restore your data. This can considerably reduce the downtime (and the amount of work for you) when a disc dies.
Hmm, this sounds like a possibility. I have to figure out how to do this (I haven't used HW RAID before).
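For reference, here is a minimal sketch of the software-RAID (mdadm) version of that layout, assuming two disks /dev/sda and /dev/sdb that are already partitioned identically (a small sda1/sdb1 for the system, a large sda2/sdb2 for the data); the device names and sizes are only placeholders. With a hardware controller you would instead define the equivalent arrays in the controller's BIOS setup.

  # RAID-1 for the operating system:
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

  # RAID-0 for the data partition:
  mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2

  # Record the arrays so they are assembled at boot:
  mdadm --detail --scan >> /etc/mdadm.conf

  # Then create filesystems as usual, e.g.:
  mkfs.ext3 /dev/md0
  mkfs.ext3 /dev/md1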
HW vs. SW RAID: kind of a religious question. HW has some advantages when using RAID-5 or RAID-6 (less CPU load). When using RAID-0 or RAID-1 there should not be any difference performance-wise. HW RAID gives you some advantages in terms of handling, e.g. hotplugging of discs, a nice administration console, RAID-10 during install ;-), etc. It's up to you to decide whether it is worth the money. Plus you need to find a controller that is well supported in Linux.
Does anyone know if the RAID controller that comes in an IBM x3550 is supported on CentOS 4 & 5? I assume that it is.
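For reference, a quick way to see which controller the box actually reports and whether a driver is already loaded; the module names grepped for below are only common examples of RAID drivers, not a claim about what the x3550 ships with:

  # Show the RAID controller the machine reports:
  lspci | grep -i raid

  # Check whether a matching kernel module is loaded (example names only):
  lsmod | egrep 'megaraid|aacraid|ips'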
P.s. Putting lots of RAM into the machine (for the buffer cache) has more impact than RAID-0 in my experience. Of course that depends on your filesystem usage pattern.
The system has 4GB.
P.p.s. Creating one swap partition on each disc is correct, because swapping to RAID-0 is useless. Only if you decide to use RAID-1 for the whole disc should you also swap to RAID-1.
Will do.
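For reference, a minimal sketch of what the corresponding /etc/fstab entries could look like, assuming one swap partition per disc (sda3/sdb3 here are just placeholders); giving both the same priority lets the kernel interleave swap pages across the two discs:

  /dev/sda3   swap   swap   defaults,pri=1   0 0
  /dev/sdb3   swap   swap   defaults,pri=1   0 0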
Three raid1 sets:
raid1 #1 = /
raid1 #2 = swap
raid1 #3 = rest of disk on /home
for the simple fact that a dead disk won't bring down your system and halt your builds until you rebuild the machine.
Yes, I like that.
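For reference, recovery after a dead disc in such a setup is roughly the sequence below (device names are placeholders; /dev/sda is the surviving disc, /dev/sdb the replacement, /dev/md0 one of the degraded arrays):

  # Copy the partition table from the good disc to the new one:
  sfdisk -d /dev/sda | sfdisk /dev/sdb

  # Add the new partition back into the degraded array (repeat for md1, md2):
  mdadm /dev/md0 --add /dev/sdb1

  # Watch the rebuild progress:
  cat /proc/mdstat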
But if you really only care about max speed and are not worried about crashes & their consequences, then replace the raid1 with raid0.
I like the earlier suggestions on combining RAID0 and RAID1.
I have no reason for using LVM on boot/OS/system partitions. If I have something that fills the disk that much, I move it to another storage device. In your case, striped LVM could be used instead of raid0.
That's why I can't decide what the best approach is. So many different ways to skin this cat.
Thanks, Alfred
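Regarding the striped-LVM suggestion above, a minimal sketch of what that could look like for the data area, assuming the data partitions are /dev/sda2 and /dev/sdb2 (names and sizes are only placeholders):

  pvcreate /dev/sda2 /dev/sdb2
  vgcreate vg_data /dev/sda2 /dev/sdb2
  # -i 2 stripes across both physical volumes, -I 64 uses a 64KB stripe size;
  # the size is only a placeholder:
  lvcreate -i 2 -I 64 -L 400G -n lv_build vg_data
  mkfs.ext3 /dev/vg_data/lv_build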
On Thursday, 10 May 2007, Feizhou wrote:
I like the earlier suggestions on combining RAID0 and RAID1.
you need four disks for this.
No, the suggestion was to use RAID1 for the system and RAID0 for the data partition (which then needs to be restored from a backup if a disc dies).
Not RAID10 or RAID0+1
Andreas Micklei wrote:
On Thursday, 10 May 2007, Feizhou wrote:
I like the earlier suggestions on combining RAID0 and RAID1.
you need four disks for this.
No, the suggestion was to use RAID1 for the system and RAID0 for the data partition (which then needs to be restored from a backup if a disc dies).
OH. Yeah, that would do.
Feizhou wrote:
Does anyone know if the RAID controller that comes in an IBM x3550 is supported on CentOS 4 & 5? I assume that it is.
It should give Linux a disk to look at.
Whatever you create as a 'volume' in the BIOS setup shows up to Linux as a disk. You can make each physical disk its own volume if you want, but Linux won't see anything until you do.
Les Mikesell wrote:
Feizhou wrote:
Does anyone know if the RAID controller that comes in an IBM x3550 is supported on CentOS 4 & 5? I assume that it is.
It should give Linux a disk to look at.
Whatever you create as a 'volume' in the BIOS setup shows up to Linux as a disk. You can make each physical disk its own volume if you want, but Linux won't see anything until you do.
Yes, I meant that a kernel driver is not necessary to see the volume; Linux will see it as a disk.
On Wednesday, 9 May 2007, alfred@von-campe.com wrote:
That's why I can't decide what the best approach is. So many different ways to skin this cat.
If at all possible, play around with the system for one or two weeks before it goes into production. Try different setups, run benchmarks, reinstall and try again. Optimizing the system once it is deployed will be much harder.
Things to try: Different RAID levels, different stripe sizes, different filesystems.
Use your own benchmark! I.e., if this is a build system, do a few heavy compiler runs in parallel.
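As a crude example of such a benchmark (the path and the -j value are only placeholders):

  cd /data/some-project        # a checkout living on the array under test
  make clean
  time make -j4 >/dev/null
  # Repeat after a reboot or after remounting the data partition, so the
  # page cache does not flatter the second run.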