Peter Kjellström wrote:
I have two 8-port ones in a machine with 500GB Hitachi drives; it runs OK, but NCQ needs extremely new firmware from Hitachi. I've so far tried, among other things, raid0, raid5, lvm stripe over devices, heavy I/O while rebuilding, ... Everything but two performance problems looks fine (#1 ext3 sucks on 3ware raid5, #2 I can't get good read performance on two raid5s in an lvm stripe).
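(As a point of reference for the layout Peter describes, a minimal sketch of striping LVM across two 3ware raid5 units; the device names /dev/sdb and /dev/sdc, the 64k stripe size, and the volume/group names are assumptions, not details from his setup:)

  # assumed: the two 3ware raid5 units show up as /dev/sdb and /dev/sdc
  pvcreate /dev/sdb /dev/sdc
  vgcreate vg3ware /dev/sdb /dev/sdc
  # -i 2 stripes the logical volume across both physical volumes,
  # -I 64 sets a 64k stripe size (should match the controller stripe)
  lvcreate -i 2 -I 64 -L 500G -n lvstripe vg3ware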
Yeah, we've also seen performance issues with raid5 on the 9500 series products. The benchmarks we run look good, but when we've put them into production file server environments, the performance has been disappointing. With 500GB disks now available for $4.95 down at the local Walgreen's, we've gone to raid 1 and are _very_ pleased with the performance. Although it's been discouraged on this list, we're using xfs and the unsupported CentOS kernels and haven't had any problems. We're anxiously awaiting the 'new, improved' xfs drivers for CentOS :) :)
Dave
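(For anyone following along with the xfs setup Dave mentions, a minimal sketch of creating a stripe-aware xfs filesystem on a 3ware unit; the device name and the su/sw geometry are assumptions and would need to match the actual controller stripe size and disk count:)

  # assumed: raid5 unit is /dev/sdb, 64k controller stripe, 8 disks (7 data + 1 parity)
  mkfs.xfs -d su=64k,sw=7 /dev/sdb
  mount -t xfs -o noatime /dev/sdb /data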
Yeah, we've also seen performance issues with raid5 on the 9500 series products. [...]
The 9550 has better performance than the 9500, but I don't have any real numbers to back up the difference.
-- "They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety'' Benjamin Franklin 1775
On Friday 03 March 2006 17:26, Jim Perrin wrote:
The 9550 has better performance than the 9500, but I don't have any real numbers to back up the difference.
I have close-to-identical machines with a 9500 and a 9550 in the testbed, so if people have specific numbers they'd like compared, I could maybe be persuaded to run them.
/Peter
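(If anyone wants to suggest something concrete for that comparison, a rough sequential-throughput sketch along these lines would be easy to run on both boxes; the mount point, file size, and block size are arbitrary assumptions, and the file should be large enough that the controller cache doesn't dominate:)

  # crude sequential write then read, ~8 GB to get past the controller cache
  dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=8192 conv=fdatasync
  dd if=/mnt/test/bigfile of=/dev/null bs=1M
  # bonnie++ gives a fuller picture (seeks, file creation, etc.)
  bonnie++ -d /mnt/test -s 8g -u nobody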
On Friday 03 March 2006 17:20, David Thompson wrote:
Yeah, we've also seen performance issues with raid5 on the 9500 series products. [...] Although it's been discouraged on this list, we're using xfs and the unsupported CentOS kernels and haven't had any problems.
With xfs, the 9550-SX is a lot faster in raid5 than the 9500-S (I have them both).
/Peter
David Thompson wrote:
Yeah, we've also seen performance issues with raid5 on the 9500 series products. The benchmarks we run look good, but when we've put them into production file server environments, the performance has been disappointing. [...]
As per an old thread on the list, I have a new 9500S-12 and am seeing the same performance problems. But I'm fairly sure it's because, without the BBU installed, write cache is disabled. I have ordered the BBU and expect it here soon; I'll see if I can't post some pre-BBU/write-cache and post-BBU/write-cache numbers, if I find a good test.
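(For what it's worth, a quick sketch of checking and flipping the unit write cache with 3ware's tw_cli, assuming the card is controller c0 and the array is unit u0; without a BBU present the controller will typically warn about leaving the cache on:)

  # assumed: controller c0, unit u0
  tw_cli /c0/u0 show         # current unit status, including cache setting
  tw_cli /c0/bbu show        # BBU presence and charge status
  tw_cli /c0/u0 set cache=on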
Right now I'm running Arkeia on it as the backup-to-disk target, and in the middle of the heaviest use I'm seeing about 50-60% I/O wait, which is horrible. I'm hoping the write cache is going to help this a lot. (*)
-te
(*) some sar output:
12:00:14 AM  CPU  %user  %nice  %system  %iowait  %idle
12:10:51 AM  all   6.44   0.03     4.19    51.05  38.30
12:10:51 AM    0   3.83   0.02     4.28    49.69  42.18
12:10:51 AM    1   9.06   0.03     4.09    52.40  34.41
12:20:08 AM  all   7.45   0.02     4.40    52.52  35.61
12:20:08 AM    0   4.08   0.02     4.58    51.36  39.96
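(The per-CPU breakdown above is the kind of report sar's -P flag produces; for a live view while the backups run, something like this works, where the interval and sample count are arbitrary:)

  sar -P ALL 10 6    # per-CPU utilization, 10-second samples, 6 samples
  iostat -x 10       # extended per-device I/O stats every 10 seconds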