----- "Grant McWilliams" grantmasterflash@gmail.com wrote:
On Wed, Dec 2, 2009 at 9:48 PM, Christopher G. Stach II < cgs@ldsys.net > wrote:
----- "Grant McWilliams" < grantmasterflash@gmail.com > wrote:
a RAID 10 (or 0+1) will never reach the write... performance of a RAID-5.
(*cough* If you keep the number of disks constant or the amount of usable space? "Things working" tends to trump CapEx, despite the associated pain, so I will go with "amount of usable space.")
No.
-- Christopher G. Stach II
Nice quality reading. I like theories as much as the next person but I'm wondering if the Toms Hardware guys are on crack or you disapprove of their testing methods.
http://www.tomshardware.com/reviews/external-raid-storage,1922-9.html
They used a constant number of disks to compare two different hardware implementations, not to compare RAID 5 vs. RAID 10. They got the expected ~50% improvement from the extra stripe segment in RAID 5 with a serial access pattern. Unfortunately, that's neither real world use nor the typical way you would fulfill requirements. If you read ahead to the following pages, you have a nice comparison of random access patterns and RAID 10 coming out ahead (with one less stripe segment and a lot less risk):
http://www.tomshardware.com/reviews/external-raid-storage,1922-11.html http://www.tomshardware.com/reviews/external-raid-storage,1922-12.html
On Thu, Dec 3, 2009 at 6:08 AM, Christopher G. Stach II <cgs@ldsys.net> wrote:
----- "Grant McWilliams" grantmasterflash@gmail.com wrote:
On Wed, Dec 2, 2009 at 9:48 PM, Christopher G. Stach II < cgs@ldsys.net > wrote:
----- "Grant McWilliams" < grantmasterflash@gmail.com > wrote:
a RAID 10 (or 0+1) will never reach the write... performance of a RAID-5.
(*cough* If you keep the number of disks constant or the amount of usable space? "Things working" tends to trump CapEx, despite the associated pain, so I will go with "amount of usable space.")
No.
-- Christopher G. Stach II
Nice quality reading. I like theories as much as the next person, but I'm wondering if the Tom's Hardware guys are on crack or if you disapprove of their testing methods.
http://www.tomshardware.com/reviews/external-raid-storage,1922-9.html
They used a constant number of disks to compare two different hardware implementations, not to compare RAID 5 vs. RAID 10. They got the expected ~50% improvement from the extra stripe segment in RAID 5 with a serial access pattern. Unfortunately, that's neither real-world use nor the typical way you would fulfill requirements. If you read ahead to the following pages, you'll find a nice comparison of random access patterns, with RAID 10 coming out ahead (with one less stripe segment and a lot less risk):
http://www.tomshardware.com/reviews/external-raid-storage,1922-11.html
http://www.tomshardware.com/reviews/external-raid-storage,1922-12.html
-- Christopher G. Stach II
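
For anyone who wants the stripe-segment arithmetic quoted above spelled out, here's a rough Python sketch of an idealized sequential-throughput model. The per-disk throughput figure is a made-up placeholder (not from the benchmark), and the model ignores controller, cache, and parity-computation overhead, so treat it as back-of-the-envelope only.

# Idealized sequential-throughput model for RAID 5 vs. RAID 10.
# Assumptions (not from the benchmark): every data stripe segment streams
# at the full speed of one disk, and parity/mirroring overhead is free.

DISK_MB_S = 80  # hypothetical per-disk sequential throughput (MB/s)

def data_segments(level, disks):
    """Number of stripe segments that carry data (not parity or mirrors)."""
    if level == "raid5":
        return disks - 1      # one disk's worth of capacity goes to parity
    if level == "raid10":
        return disks // 2     # half the disks are mirrors
    raise ValueError(level)

for disks in (4, 6, 8):
    r5 = data_segments("raid5", disks) * DISK_MB_S
    r10 = data_segments("raid10", disks) * DISK_MB_S
    print(f"{disks} disks: RAID 5 ~{r5} MB/s, RAID 10 ~{r10} MB/s "
          f"({r5 / r10 - 1:+.0%} for RAID 5)")

With four disks this reproduces the ~50% gap, and the sequential advantage of RAID 5 only grows with more disks; none of it says anything about random I/O, which is where RAID 10 pulls ahead.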
So if I have 6 drives on my RAID controller, which do I choose? If I have to add two more drives to the RAID 10 to equal the performance of a RAID 5, I could just make it a RAID 5 and be faster still. RAID 5 is faster than RAID 10 on read and write transfer rates.
However, you are right on the I/Os. RAID 10 pretty much trounced RAID 5 on I/Os in all tests. What wasn't in this test (but is in others they've done) is RAID 6. I'm not sure I'm sold on it, because it gives us about the same level of redundancy as RAID 10 but with less performance than RAID 5. Theoretically it would get soundly trounced by RAID 10 on I/Os and might be slower on read/write transfer as well.
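
A crude way to put numbers on the I/O side is the usual write-penalty rule of thumb: a small random write costs 2 disk operations on RAID 10, 4 on RAID 5 (read data, read parity, write data, write parity), and 6 on RAID 6. The sketch below applies it to the 6-drive case; the per-disk IOPS figure is a hypothetical placeholder, not a measurement.

# Rough small-random-I/O model using the standard write-penalty figures.
# The per-disk IOPS number is a placeholder, not measured data.

DISK_IOPS = 150        # hypothetical random IOPS of one spindle
DISKS = 6
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

raw = DISKS * DISK_IOPS
for level, penalty in WRITE_PENALTY.items():
    print(f"{level}: ~{raw} random read IOPS, ~{raw // penalty} random write IOPS")

By that model, a 6-drive RAID 10 does roughly twice the random-write I/Os of RAID 5 and three times RAID 6 on the same spindles, which lines up with RAID 10 trouncing RAID 5 in the I/O charts and with the expectation that RAID 6 would fare worse still.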
Grant McWilliams
Some people, when confronted with a problem, think "I know, I'll use Windows." Now they have two problems.
Grant McWilliams <grantmasterflash@gmail.com> writes:
So if I have 6 drives on my RAID controller, which do I choose?
Considering the port cost of good RAID cards, you could probably use md and get 8 or 10 drives for the same money. It's hard to beat more spindles for random access performance over a large dataset. (Of course, the power cost of another 2-4 drives is probably greater than that of a RAID card.)
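
To illustrate the more-spindles point with the same crude write-penalty model as above (all numbers are hypothetical placeholders): eight or ten drives in an md RAID 10 can out-run six drives behind a hardware controller in RAID 5 on random I/O simply by having more actuators.

# Same idealized write-penalty model; per-disk IOPS is a placeholder.
DISK_IOPS = 150

def random_iops(disks, write_penalty):
    raw = disks * DISK_IOPS
    return raw, raw // write_penalty   # (read IOPS, small random write IOPS)

# Hypothetical comparison: 6 drives on a hardware card (RAID 5, penalty 4)
# vs. 8 or 10 drives on Linux md (RAID 10, penalty 2).
for label, disks, penalty in [("hw RAID 5, 6 disks", 6, 4),
                              ("md RAID 10, 8 disks", 8, 2),
                              ("md RAID 10, 10 disks", 10, 2)]:
    reads, writes = random_iops(disks, penalty)
    print(f"{label}: ~{reads} read IOPS, ~{writes} write IOPS")

Whether that trade actually pencils out depends on drive, card, and power prices, which the model deliberately leaves out.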