Troy Engel wrote:
David Thompson wrote:
Peter Kjellström wrote:
raid5, #2 I can't get good read performance on two raid5s in an LVM stripe).
Yeah, we've also seen performance issues with raid5 on the 9500 series products. The benchmarks we run look good, but when we've put them into production the numbers haven't held up.
As per an old thread on the list, I have a new 9500S-12 and am seeing the same performance problems. But I'm fairly sure it's because, without the BBU installed, the write cache is disabled. I have ordered the BBU and expect it here soon; I'll see if I can't post some pre-BBU/write-cache and post-BBU/write-cache numbers, if I find a good test.
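Something along these lines is roughly what I had in mind -- a crude sequential write/read pass with GNU dd. The mount point, file name and size are just placeholders, and it assumes a dd new enough to understand conv=fdatasync and iflag=direct:

  # sequential write: 4 GB of zeros, fsync before dd reports the rate
  dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=4096 conv=fdatasync
  # sequential read back, bypassing the page cache
  dd if=/mnt/array/ddtest of=/dev/null bs=1M iflag=direct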
We throw caution to the wind and run with the write cache enabled even without a BBU, and we still see a significant drop in performance compared to raid-1.
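For what it's worth, we toggle it from the command line; something like the following, assuming 3ware's tw_cli tool is installed and the array really is unit u0 on controller c0 (adjust to your setup):

  # show the current settings (including cache state) for the unit
  tw_cli /c0/u0 show
  # enable the write cache -- what we run, BBU or not
  tw_cli /c0/u0 set cache=on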
Right now I'm running Arkeia on it as the backup-to-disk target, and in the middle of the heaviest use I'm seeing about 50-60% I/O wait, which is horrible. I'm hoping the write cache is going to help this a lot. (*)
If you throw a sufficiently heavy (and multi-threaded) load at any disk array of this type, you can eventually drive the system into I/O wait. On our systems we look more at total disk throughput and the character of the workload (sequential vs. random access) than at I/O wait alone.
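In practice that mostly means watching iostat (from the sysstat package) and vmstat while the backup runs; a rough example, with an arbitrary 5-second interval:

  # extended per-device stats every 5 seconds; the throughput columns and
  # the average request size show whether the load is sequential or seek-bound
  iostat -x 5
  # overall CPU breakdown, including the iowait column, every 5 seconds
  vmstat 5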
Dave
Let me know what you find.