On Thu, Jan 7, 2010 at 12:01 PM, Thomas Harold <thomas-lists@nybeta.com> wrote:
On 1/7/2010 10:54 AM, John Doe wrote:
From: Karanbir Singh <mail-lists@karan.org>
On 01/07/2010 02:30 PM, Boris Epstein wrote:
KB, thanks. When you say "don't go over 1 TiB in storage per spindle", what are you referring to as a spindle?
Disk. It boils down to how much data you want to put under one read/write stream.
The other thing is that these days 1.5 TB disks are the best bang for the buck in terms of storage/cost. So maybe that's something to consider: limit disk usage initially and expand later as you need to.
Even better if your HBA can support that; if not, then mdadm (you have lots of CPU, right?). And make sure you understand re-carving / reshaping before you do the final design. Refactoring filers with large quantities of data is no fun if you can't reshape and grow.
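As a concrete sketch of the reshape-and-grow path mentioned above (the device names /dev/md0 and /dev/sde are hypothetical placeholders, and the commands need root on a real array):

```shell
# Hypothetical sketch: grow an existing md RAID array by one disk.
mdadm --add /dev/md0 /dev/sde            # add the new disk as a spare
mdadm --grow /dev/md0 --raid-devices=5   # reshape from 4 to 5 active disks
cat /proc/mdstat                         # watch reshape progress
resize2fs /dev/md0                       # then grow the filesystem on top
```

The reshape runs online but can take many hours on TB-class disks, which is exactly why it's worth understanding before the final design is locked in.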
I also heard that disks above 1 TB might have reliability issues. Maybe that has changed since then...
I remember rumors about the early 2TB Seagates.
Personally, I won't RAID SATA drives over 500GB unless they're enterprise-level ones with a limit on how long the drive will keep retrying a read error before reporting it back to the host.
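That limit is the feature usually called TLER/ERC (SCT Error Recovery Control). On drives that expose it, the timeout can be inspected and set with smartctl; a hedged sketch, with /dev/sda as a placeholder device:

```shell
# Hypothetical sketch: cap the drive's internal error recovery at
# 7 seconds (values are in tenths of a second), so a bad sector is
# reported to the RAID layer quickly instead of stalling the array.
smartctl -l scterc,70,70 /dev/sda   # set read,write timeouts
smartctl -l scterc /dev/sda         # read back the current setting
```

Many desktop drives either refuse the command or forget the setting after a power cycle, which is part of why enterprise drives are preferred for RAID.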
I'm with you on that one. We currently use RAIDZ2, which lets us lose two drives in our storage pools, and will definitely move to RAIDZ3 at some point down the road. Luckily for us, ZFS re-silvers just the blocks that contain data/parity, so a disk failure is usually remedied in an hour or two (we devote two disks as spares).
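To make the RAIDZ2 vs. RAIDZ3 trade-off concrete, here is a small arithmetic sketch (the 8-disk, 1 TB pool geometry is hypothetical): each extra parity level costs one disk of usable capacity per vdev but survives one more simultaneous failure.

```shell
disks=8       # hypothetical: one 8-disk vdev of 1 TB drives
size_tb=1
for parity in 2 3; do   # RAIDZ2 vs RAIDZ3
  usable=$(( (disks - parity) * size_tb ))
  echo "RAIDZ${parity}: ${usable} TB usable, survives ${parity} disk failures"
done
```

Whether that extra disk of capacity is worth one more level of protection mostly depends on rebuild time, which is why fast block-level resilvering matters.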
- Ryan -- http://prefetch.net