On 10/01/12 8:39 PM, Keith Keller wrote:
The controller node has two 90GB SSDs that I plan to use as a bootable RAID1 system disk. What is the preferred method for laying out the RAID array?
A server makes very little use of its system disks after it's booted; everything it needs ends up in cache pretty quickly, and you typically don't reboot a server very often. Why waste SSD on that?
I'd rather use SSD for something like LSI Logic's CacheCade v2 (but that requires an LSI SAS RAID card, too).
- With large arrays you often hear about "aligning the filesystem to the disk". Is there a fairly standard way (I hope using only CentOS tools) of going about this? Are the various mkfs tools smart enough to figure out on their own how an array is aligned, or is sysadmin intervention required on such large arrays? (If it helps any, the disk array is backed by a 3ware 9750 controller. I have not yet decided how many disks I will use in the array, if that influences the alignment.)
I would suggest not using more than 10-11 disks in a single RAID group, or the rebuild times get hellaciously long (an 11 x 3TB SAS2 RAID6 took 12 hours to rebuild when I ran tests). If this is for nearline bulk storage, I'd use 2 disks as hot spares and build 2 separate RAID5 or RAID6 groups of 11 disks each, then stripe those together so it's RAID 5+0 or 6+0; see the sketch below. If this is for higher-performance storage, I would build mirrors and stripe them (RAID 1+0).
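For illustration, here's roughly what that 6+0 layout looks like if you build it in software with Linux md (on the 3ware 9750 you'd create the RAID groups in the controller's firmware or with tw_cli instead; the device names and disk counts below are just examples):

    # two 11-disk RAID6 groups, each with a hot spare, striped as RAID 6+0
    mdadm --create /dev/md0 --level=6 --raid-devices=11 --spare-devices=1 \
        --chunk=32 /dev/sd[b-l] /dev/sdx
    mdadm --create /dev/md1 --level=6 --raid-devices=11 --spare-devices=1 \
        --chunk=32 /dev/sd[m-w] /dev/sdy
    # stripe the two RAID6 arrays together into one big RAID 6+0 device
    mdadm --create /dev/md2 --level=0 --raid-devices=2 --chunk=32 \
        /dev/md0 /dev/md1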
Re: alignment, use the whole disks, without partitioning; then there are no alignment issues. Use a RAID block size of something like 32K. If you need multiple file systems, put the whole mess into a single LVM VG and create your logical volumes in LVM.
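A minimal sketch of that (the device name, VG/LV names, and sizes are placeholders; the ext4 stride math assumes a 32K chunk with 9 data disks per RAID6 group):

    # the 3ware array shows up as one big disk, e.g. /dev/sdb
    pvcreate /dev/sdb                # whole disk, no partition table
    vgcreate datavg /dev/sdb
    lvcreate -L 10T -n archive datavg
    # stride = chunk/blocksize = 32K/4K = 8; stripe-width = 8 x 9 = 72
    mkfs.ext4 -E stride=8,stripe-width=72 /dev/datavg/archive

mkfs generally can't discover the geometry through a hardware RAID controller, which is why it gets spelled out explicitly here.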