On Thu, Jan 07, 2010 at 05:28:34PM +0000, Joseph L. Casale wrote:
I also heard that disks above 1TB might have reliability issues. Maybe it changed since then...
I remember rumors about the early 2TB Seagates.
Personally, I won't RAID SATA drives over 500GB unless they're enterprise-level ones with time-limited error recovery, i.e. a cap on how long the drive will spend retrying a bad sector before it reports the read error back to the host.
That should also take care of the reliability issue to a large degree.
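If you want to check whether a given drive actually supports that, recent smartmontools can query and set the SCT Error Recovery Control timers. A quick sketch (the device name is just an example, and the drive has to implement SCT ERC):

  # Query the current SCT ERC read/write timers
  smartctl -l scterc /dev/sdb
  # Set both timers to 7.0 seconds (values are in tenths of a second)
  smartctl -l scterc,70,70 /dev/sdb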
An often overlooked issue is rebuild time, both with Linux software RAID and with every hardware RAID controller I have seen. On large drives the rebuild takes so long, purely because of the sheer size, that a degraded array leaves you exposed for hours or days while it completes. ZFS's resilver addresses this about as well as you can, by only copying actual data instead of every block.
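For what it's worth, on Linux md you can at least watch the rebuild and raise the throttle if the box is otherwise idle. The values below are only illustrative:

  # Watch rebuild progress and the estimated time to finish
  cat /proc/mdstat
  # Ceiling for rebuild speed (KB/s) when the array is idle
  echo 200000 > /proc/sys/dev/raid/speed_limit_max
  # Floor the rebuild never drops below, even under load
  echo 50000 > /proc/sys/dev/raid/speed_limit_min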
With this in mind, it's wise to consider how you develop the redundancy into the solution...
Very true... especially with 1TB+ drives, you'd be crazy to run anything other than RAID-6.
Lately we've been buying 24-bay systems from Silicon Mechanics, installing Solaris 10 and running RAID-Z2 + SSDs for L2ARC and ZIL. Makes for great NFS storage...
The next release of Solaris 10 should have RAID-Z3, which might be a better fit for the >1TB drives out there.
(You can of course do this with OpenSolaris as well, and something similar on CentOS, albeit not with ZFS.)
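A rough sketch of that layout, with made-up pool and device names (and an md RAID-6 rough equivalent for the CentOS case):

  # ZFS: 6-disk RAID-Z2 pool, separate log device (ZIL) and an L2ARC cache SSD
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
  zpool add tank log c2t0d0
  zpool add tank cache c2t1d0

  # CentOS: md RAID-6 across the same number of spindles
  mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]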
When we need a little higher level of HA and "Enterprise-ness" we go NetApp. Just. Works. :)
Ray