[CentOS] xServes are dead ;-( / SAN Question

Tue Nov 9 21:17:54 UTC 2010
James A. Peltier <jpeltier at sfu.ca>

Cost is per TB.  That would kill me here, where a single user occupies 150TB by themselves.

----- Original Message -----
| On 11/8/10 6:29 PM, James A. Peltier wrote:
| >
| > I have a solution that is currently centered around commodity
| > storage bricks (Dell R510), flash PCI-E controllers, 1 or 10GbE (on
| > separate Jumbo Frame Data Tier) and Solaris + ZFS.
| >
| > So far it has worked out really well. Each R510 is a box with a fair
| > bit of memory, running OpenIndiana for ZFS/RAIDZ3/Disk Dedup/iSCSI.
| > Each brick is fully populated and in a RAIDZ2 configuration with 1
| > hot spare. Some have SSDs; most have SAS or SATA. I export this
| > storage pool as a single iSCSI target and I attach each of these
| > targets to the SAN pool and provision from there.
| >
| > I have two physical VMware machines which are identically
| > configured. If I need to perform administrative maintenance on one
| > box I can migrate the guests over to the other machine. This works
| > for me, but it took a really long time to develop the solution and
| > for the cost of my time it *might* have been cheaper to just buy
| > some package deal.
| >
| > It was a hell of a lot of fun learning though. ;)
| 
| Did you look at NexentaStor for this? You might need the commercial
| version for
| a fail-over set, but I think the basic version is free up to a fairly
| large size.
| 
| --
| Les Mikesell
| lesmikesell at gmail.com
| _______________________________________________
| CentOS mailing list
| CentOS at centos.org
| http://lists.centos.org/mailman/listinfo/centos
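For anyone wanting to try the brick setup quoted above, here is a rough sketch of what it looks like on OpenIndiana with COMSTAR. Device names, the pool name, and the zvol size are placeholders, not the actual configuration from the post:

```shell
# Assumed device names (c1t0d0 ... c1t6d0) -- substitute your own.
# Create a RAIDZ2 pool with one hot spare, then enable dedup.
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    spare c1t6d0
zfs set dedup=on tank

# Carve out a zvol to present as the single iSCSI target for this brick.
zfs create -V 10T tank/brick01

# Enable the COMSTAR framework and register the zvol as a logical unit.
svcadm enable stmf
stmfadm create-lu /dev/zvol/rdsk/tank/brick01
# create-lu prints the LU GUID; make it visible to initiators:
stmfadm add-view <LU-GUID-from-previous-command>

# Enable the iSCSI target service and create a target for the SAN pool.
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target
```

The initiator side (the VMware hosts) would then discover each brick's target and add it to the shared pool; jumbo frames go on the separate data-tier interfaces.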

-- 
James A. Peltier
Systems Analyst (FASNet), VIVARIUM Technical Director
Simon Fraser University - Burnaby Campus
Phone   : 778-782-6573
Fax     : 778-782-3045
E-Mail  : jpeltier at sfu.ca
Website : http://www.fas.sfu.ca | http://vivarium.cs.sfu.ca
          http://blogs.sfu.ca/people/jpeltier
MSN     : subatomic_spam at hotmail.com