We build a storage unit that anyone using CentOS can build. It is based on the 3ware 9750-16 controller with 16 x 2 TB SATA 6 Gb/s disks. We always set it up as a 15-disk RAID 6 array plus a hot spare. We have seen multiple instances where the A/C has gone off but the customer's UPS kept the systems running for an hour or two with no cooling. Once the ambient temperature goes above 40C you are stressing all the disks in the array. I bring this up because we have seen many RAID 5 arrays fail while rebuilding with a hot spare and then lose all the data.
The chances of this happening on a RAID 6 array are much, much lower. Now for performance numbers: 550 MB/sec on writes and 1150 MB/sec on reads.
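If you want to sanity-check numbers like these on your own array, a rough dd run gets you in the ballpark (the mount point and file size below are just placeholders; direct I/O keeps the page cache out of the measurement):

    # sequential write, bypassing the page cache
    dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=32768 oflag=direct
    # sequential read of the same file
    dd if=/mnt/array/ddtest of=/dev/null bs=1M iflag=direct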
A unit like this can support any type of I/O you want to put into it, as CentOS will usually either have the drivers built in or let you add them easily.
Also the rebuild time for a 50 TB array is about 12 hours while the array is in use and online. But during this time there is little or no degradation in performance.
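The array stays online during the rebuild, and if memory serves you can watch the progress from the 3ware CLI with something like (controller /c0 assumed; check tw_cli's help for your firmware):

    tw_cli /c0 show    # the rebuilding unit shows a REBUILDING status and a percent-complete column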
The downside is the cost of the controller. If anyone has questions about this I would be glad to answer them offline at seth at integratedsolutions dot org.
Adrian Sevcenco Adrian.Sevcenco@cern.ch wrote:
On 04/11/2013 06:36 PM, m.roth@5-cent.us wrote:
I'm setting up this huge RAID 6 box. I've always thought of hot spares, but I'm reading things that are comparing RAID 5 with a hot spare to RAID 6, implying that the latter doesn't need one. I *certainly* have enough drives to spare in this RAID box: 42 of 'em, so two questions: should I
We use several of these kinds of boxes (but with 45 trays), and our experience was that the optimum volume size was 12 HDDs (3 x 12 + 9), which reduces the 45 disks to an effective 37 disks of capacity (a 12-disk volume is 40 TB in size ... in the event of a broken HDD it takes 1 day to recover; with more than 12 disks I don't (want to) know how long it would take), and we don't use hot spares.
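To spell out the arithmetic (assuming RAID 6 throughout and 4 TB drives, which is what the 40 TB figure implies):

    3 x 12 + 1 x 9                    = 45 disks in four volumes
    45 - (4 volumes x 2 parity disks) = 37 disks of usable capacity
    12-disk volume: (12 - 2) x 4 TB   = 40 TB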
HTH, Adrian
Hi, Seth,
Seth Bardash wrote:
We build a storage unit that anyone using CentOS can build. It is based on the 3ware 9750-16 controller with 16 x 2 TB SATA 6 Gb/s disks. We always set it up as a 15-disk RAID 6 array plus a hot spare. We have seen
Interesting. We're still playing with sizing the RAID sets and volumes. The prime consideration for this is that the filesystem utils still have problems with > 16TB (and they appear to have been saying that fixing this is a priority for at least a year or two <g>), so we wanted to get close to that.
This afternoon's discussion has me with two 17-drive RAID sets and a 6-drive set; on those, one volume set each, which gives us 30 TB usable on each of the two large RAID sets and 8 TB usable on the small one, all RAID 6.
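Assuming the same 2 TB drives throughout, that lines up with RAID 6's two-disk parity overhead:

    17-drive set: (17 - 2) x 2 TB = 30 TB usable
     6-drive set:  (6 - 2) x 2 TB =  8 TB usable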
multiple instances where the A/C has gone off but the customer's UPS kept the systems running for an hour or two with no cooling. Once the ambient temperature goes above 40C you are stressing all the disks in the array. I bring this up because we have seen many RAID 5 arrays fail while rebuilding with a hot spare and then lose all the data.
That's not a problem for us - we've got two *huge* units in the server room, er, computer lab that this RAID box is in. <snip>
Also the rebuild time for a 50 TB array is about 12 hours while the array is in use and online. But during this time there is little or no degradation in performance.
THANK YOU. I'll forward this email to my manager and the other admin - it's really helpful to know this, even if it isn't your box.
The downside is the cost of the controller. If anyone has questions about this I would be glad to answer them offline at seth at integratedsolutions dot org.
What came with this are QLogic FC HBAs for the server.
mark
On 4/12/2013 12:11 PM, m.roth@5-cent.us wrote:
Interesting. We're still playing with sizing the RAID sets and volumes. The prime consideration for this is that the filesystem utils still have problems with > 16TB (and they appear to have been saying that fixing this is a priority for at least a year or two <g>), so we wanted to get close to that.
I've had no issues with 64bit CentOS 6.2+ on 81TB (74TiB) volumes using GPT and XFS(*), with or without LVM.
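Roughly how such a volume gets laid down, for anyone following along (the device name /dev/sdb is just an example; substitute your own device or an LVM logical volume):

    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart primary 0% 100%
    mkfs.xfs /dev/sdb1
    mount -o inode64 /dev/sdb1 /mnt/bigvol   # inode64 is the usual mount option at this size,
                                             # and is what produces the large inode numbers in (*)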
(*) NFS has an issue with large XFS volumes if you export directories below the root and those directories have large inode numbers. The workarounds are to either pre-create all directories that are to be exported before filling up the disk, or specify arbitrary IDs less than 2^32 on the NFS export using fsid=nnn in the /etc/exports entry for those paths (these ID values have to be unique on that host). This is a stupid bug in NFS itself that gets triggered by XFS's use of 64-bit inode numbers.
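As a made-up example of the fsid workaround (path and network are hypothetical; the fsid just has to be unique on the host and under 2^32):

    # /etc/exports
    /srv/bigxfs/projects  192.168.1.0/24(rw,no_root_squash,fsid=101)

Running exportfs -ra afterwards picks up the change.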