On Sat, Dec 11, 2010 at 12:24 PM, Rainer Duffner <rainer at ultra-secure.de> wrote:
> The other question is if it actually works.
> Too many of the low-cost devices eat the data on the drives, when the
> motherboard or the controller fries...
> With luck, you can read the data on one of the drives...
>
> If the client only needs 12TB, there's surely a NetApp that is
> cheaper but only scales to 10 or 20TB.
> If the client has maxed that out and needs to go beyond that, he needs
> to buy a bigger filer-head + shelves and migrate his data (AFAIK,
> that's possible, at a charge...).

NetApps are wonderful. So is a Hercules transport. Both are amazing
pieces of engineering, and both are completely unsuitable for home use:
the underlying hardware is expensive, and the high-availability
components are far more sophisticated than a modest environment needs.
In that setting, the same job is more easily done with rsnapshot and a
few of the cheapest drives you can find (rough sketch at the bottom of
this mail).

12 TB, well, there you're getting into noticeable storage. What are your
requirements? High availability? On-line snapshots? Encryption? Do you
need that 12 TB all as one array, or can it be gracefully split into 3
or 4 smaller chunks to provide redundancy and upgrade paths, or can
different data go on different filesystems with different requirements?

> You might want to try to get a quote from Oracle for a Unified Storage
> Appliance 7320 and compare it with one of NetApp's entry-level offerings.
>
> With 100TB, DIY is out of the question ;-)

IBM sells some nice one-rack units as well. All of them play nicely with
CentOS, but you need to think about the actual connection. GigE and NFS,
which works surprisingly well? Sophisticated permissions with Samba 3.6,
NFSv4, and NTFS compatibility via a NetApp qtree? Or just a big honking
array to store all the porn and BitTorrent movies to brag about?

> BTW: what does the client do with the disk-space? What's the access-
> pattern?

Indeed. Details! Details matter!
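
To be concrete about the rsnapshot suggestion above, here is roughly
what I have in mind. The paths, retention counts, and schedule are just
placeholders for whatever the box actually holds, and remember that
rsnapshot.conf wants tabs, not spaces, between fields:

  # /etc/rsnapshot.conf (excerpt) -- separate fields with tabs
  config_version   1.2
  snapshot_root    /mnt/backupdisk/.snapshots/
  cmd_rsync        /usr/bin/rsync

  # how many snapshots of each interval to keep
  # (older rsnapshot versions spell "retain" as "interval")
  retain   daily     7
  retain   weekly    4
  retain   monthly   6

  # what to back up (local paths here; user@host:/path works too)
  backup   /home/       localhost/
  backup   /etc/        localhost/
  backup   /srv/data/   localhost/

Driven from cron, with the larger intervals scheduled to run shortly
before the smaller ones, which is the ordering the rsnapshot docs
recommend if I remember right:

  # /etc/cron.d/rsnapshot -- example schedule
  30 2 1 * *   root   /usr/bin/rsnapshot monthly
  0  3 * * 1   root   /usr/bin/rsnapshot weekly
  30 3 * * *   root   /usr/bin/rsnapshot daily

Only the daily run actually copies data; the weekly and monthly runs
just rotate hard-linked snapshots, so the disk cost stays close to one
full copy plus whatever changes between runs.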