Hi people,
I am building a cheap remote rsync backup server using a software RAID 5 array of four 500 GB disks. What I have available on the market is:
1. Hitachi GST Deskstar T7K500, 500 GB, 7200 rpm, 16 MB cache, Serial ATA II-300
2. Seagate Barracuda 7200.10 with NCQ, 500 GB, 7200 rpm, 16 MB cache, Serial ATA II-300
3. Western Digital Caviar SE16 RAID Edition, 500 GB SATA II, 7200 rpm, 8.9 ms, 16 MB cache
I am tempted to use the Western Digital RAID Edition as it has a 5-year warranty. However, I found out that "RAID Edition" means the disk supports TLER - time-limited error recovery:
http://www.excelmeridiandata.com/products/wd_raid_edition_drive.shtml
A more thorough description can be found here: http://www.wdc.com/en/library/sata/2579-001098.pdf.
In summary, the error recovery time of the hard disk is limited to 8 seconds. If the disk cannot complete the recovery in that time, it reports an error to the RAID controller and delegates recovery to it.
As I intend to use Linux software RAID on CentOS 5, I will not have a RAID controller to deal with this delegated error, only two cheap SATA I/O controllers.
Has anybody used 'RAID Edition' disks with software RAID? Does anybody know how Linux software RAID interacts with or supports these TLER disks?
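For reference, my (untested) understanding of where this surfaces on the Linux side, with device names just as examples:

    # Per-device command timeout enforced by the kernel's SCSI/libata layer.
    # A TLER drive gives up after ~8 seconds, well inside this window, so md
    # just sees a read error and can rewrite the bad sector from parity.
    cat /sys/block/sda/device/timeout       # default: 30 (seconds)

    # A desktop drive may keep retrying internally for longer than 30 seconds;
    # one workaround people suggest is raising the kernel timeout per device:
    echo 120 > /sys/block/sda/device/timeout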
Best regards Alex
Alexander Georgiev spake the following on 7/27/2007 5:32 AM:
...
I have used RAID-aware and non-RAID-aware disks for both hardware and software RAID. You really want the RAID Edition drives, as they will send a failure up the channel much faster than a non-RAID drive. The non-RAID drives will retry over and over, and you stand a chance of data loss while they decide what to do. Software RAID will just fail the drive and switch to degraded mode.
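You can watch that happen with mdadm. A rough sketch, assuming a four-disk RAID 5 at /dev/md0 built from sda1..sdd1 and a spare sde1 on hand:

    cat /proc/mdstat                     # failed members show up flagged (F)
    mdadm --detail /dev/md0              # reports "State : clean, degraded"

    # the fail/replace cycle can also be exercised by hand on a test array:
    mdadm /dev/md0 --fail /dev/sdc1      # mark a member faulty
    mdadm /dev/md0 --remove /dev/sdc1    # pull it out of the array
    mdadm /dev/md0 --add /dev/sde1       # add the spare; rebuild starts on its own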
OK, and should I mix disks from different vendors? I have read somewhere that this improves the reliability of the array by lowering the probability of two or more disks failing simultaneously.
Alexander Georgiev wrote:
...
OK, and should I mix disks from different vendors? I have read somewhere that this improves the reliability of the array by lowering the probability of two or more disks failing simultaneously.
Not necessarily from different vendors as much as from different build lots, etc.
The theory is that items built from the same components at the same time and in the same place should fail/EOL at about the same time (all things being equal).
In practice, I have not seen that. If you are concerned about it, getting drives from different factories or lots (but the same manufacturer) should be OK.
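If you want to check what you actually received, the identity lines smartmontools prints are a rough proxy for build lot (device names are just an example):

    # print model, serial number and firmware revision of each member disk
    for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
        smartctl -i $d | grep -E 'Model|Serial|Firmware'
    done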
On Saturday 28 July 2007, Johnny Hughes wrote:
Alexander Georgiev wrote:
...
OK, and should I mix disks from different vendors? I have read somewhere that this improves the reliability of the array by lowering the probability of two or more disks failing simultaneously.
Not necessarily from different vendors as much as from different build lots, etc.
The theory is that items built from the same components at the same time and in the same place should fail/EOL at about the same time (all things being equal).
In practice, I have not seen that.
I haven't seen that either. But what I have seen is a RAID controller acting up because of some obscure quirk of a specific drive model. So I would very much not want more than one type of drive in a RAID, as that would double the amount of strangeness the controller has to deal with.
/Peter
Peter Kjellstrom wrote:
...
Yeah, don't mix drive manufacturers - you are more prone to faults or "unknowns" when you mix drives from two different vendors. Always stick with the same brand, and try to get them all at once.
And if you can, try your hardest to think about shelling out the extra cash for a hardware RAID controller. If you are serious about your data, the extra $200 for an LSI Logic MegaRAID 150 4/6-port controller is a small investment.
Also, if this is for a Postfix server, you're going to want RAID10, if you can. It's much faster for read/write access than RAID5.
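With Linux software RAID that would be something like this (just a sketch, with four example partitions):

    # four-disk RAID 10: striped mirrors; small random writes avoid the
    # read-modify-write parity penalty RAID 5 pays, at the cost of only
    # half the raw capacity being usable
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1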
HTH
Patrick
...
In practice, I have not seen that. If you are concerned about it, getting drives from different factories or lots (but the same manufacturer) should be OK.
I have seen 18 out of 20 go down within a two-week period. Those were 20 PCs manufactured by IBM. I guess it must have been the famous Deathstar bug.
I am not sure I can buy disks from different lots here (I live in Bulgaria, where a rumour circulates that the hard disks on sale come from lots that failed robustness testing).
What I can do is spread the purchase over a four-week period, buying one disk each week. I am not in a hurry on this project.
However, I wanted to do this over a two-week period, buying one Barracuda and one Hitachi each week. As I intend to use software RAID, which in theory can even work with volumes on different channel types (say, one external USB disk and one internal ATA disk), I was not expecting problems mixing disks from different vendors.
BTW, according to O'Reilly's book on Linux hardware RAID, the only reason to choose disks from the same vendor is to avoid jeopardising performance by combining a slower disk with a faster one.
As this is an rsync server, performance is not a requirement.
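The backups themselves will just be rsync pulls over ssh, along these lines (host and paths are made up):

    # pull a client's home directories into a dated snapshot, hard-linking
    # files unchanged since the previous run so they take no extra space
    rsync -a --delete -e ssh \
          --link-dest=/srv/backup/client1/last \
          client1:/home/ /srv/backup/client1/$(date +%F)/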
Alexander Georgiev wrote:
...
The real issue with different vendors on hardware RAID controllers is things like firmware incompatibility. Combining different vendors adds different complexities: performance is one problem, but stability is the one I would fear when using different vendors in the same array.