[CentOS] Parallel/Shared/Distributed Filesystems
centos at linuxpowered.net
Mon Nov 10 16:59:59 UTC 2008
Geoff Galitz wrote:
> The NetApp is running out of space and we prefer to not replace it with
> another one, if possible. To that end we are exploring our options.
NetApp, while it has a big name, has the worst space
efficiency in the industry, and its performance isn't so hot
either. It does have some nice features, though it depends on
The solutions we are looking at here are a 3PAR T400-based back
end with an Exanet EX1500 cluster (2-node) front end, and an HDS
AMS2300-based back end with a BlueArc Titan 3100 cluster (2-node)
front end. I'm not at all satisfied with the scalability of the
AMS2300, though; the vendor is grasping at straws trying to justify
its existence. The higher-end AMS2500 would be more suitable
(still not scalable), but the vendor refuses to quote it
because it's not due till late this year/early next.
Both NAS front ends scale to 8 nodes (Exanet claims unlimited
nodes, though 8 is their currently supported maximum). 8 nodes
is enough performance to drive 1,000 SATA disks or more. The
3PAR T400 back end scales linearly to 1,152 SATA disks
(1.2PB); the AMS2300 tops out at 240 disks (248TB).
Both NAS front end clusters can each address a minimum of
500TB of storage (per pair) and support millions of files
per directory without a performance hit.
I talked with NetApp on a couple of occasions and finally nailed
down that their competitive offering would be their GX product
line, but I don't think they can get the price down to where the
competition is: they promised pricing 3 weeks ago and I haven't
heard a peep since.
The idea is to be able to start small (in our case ~100TB usable)
and grow much larger as the company needs, within a system that
can automatically re-balance the I/O as it expands for maximum
performance, while keeping the price tag within our budget. Our
current 520-disk system is horribly unbalanced, and it's not
possible to re-balance it without massive downtime; the result is
probably at least a 50% loss in performance overall. Of course
there are lots of other goals, but that's the main one.
The 3PAR/Exanet solution can scale within a single system to
approximately 630,000 SpecSFS IOPS on a single file system; the
HDS/BlueArc solution can scale to about 150,000 SpecSFS IOPS
across a couple of file systems. At their peak the 3PAR would have
4 controllers, Exanet 8 controllers, HDS 2 controllers, and BlueArc
2 controllers. In both cases the performance "limit" is the back
end storage, not the front end units.
Of course nothing stops the NAS units from addressing storage
beyond a single array, but in that event you lose the ability to
effectively balance the I/O across multiple storage systems, which
leads to the problem we have with our current system. Perhaps if
you're willing to spend a couple million, an HDS USP-based system
might balance effectively across multiple systems with their
virtualized thingamabob. Our budget is a fraction of that, though.
NetApp's (non-GX) limitations prevent it from competing
effectively in this area. (They do have some ability to
re-balance, but it pales in comparison.)