On Thu, 20 Oct 2005 at 9:43am, Aleksandar Milivojevic wrote
Quoting Joshua Baker-LePain jlb17@duke.edu:
Needless to say, it's not giving me that warm fuzzy feeling. The one caveat is that not all the members of my array were the same size -- one disk is 180GB while all the rest are 160GB. I'm going to test overnight with identically sized RAID members, but I also wanted to see if anyone else is using RAID6.
I was testing RAID-5 with identical disk drives and got the same thing. If I attempt to access the array while it rebuilds, sooner or later I get errors. If I reboot while it rebuilds, it doesn't restart after the reboot (I forget the actual error message); I had to manually kick it again using mdadm to force-start it. If it were the root file system, the machine would probably fail to boot at all. You might want to stick with your 3ware for now.
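For the record, the manual kick looks roughly like this (a sketch; /dev/md0 and the member partitions are hypothetical names -- substitute your own, and note these commands need root on a box with the actual array):

```shell
# Array left inactive after a reboot mid-rebuild: force-assemble it.
# --force tells mdadm to start the array even though it is marked dirty,
# after which the resync should pick up again on its own.
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1

# Or, if the array is assembled but refuses to start:
mdadm --run /dev/md0

# Watch the rebuild progress:
cat /proc/mdstat
```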
Wonderful. I'm testing RAID5 now, for which I had high hopes.
BTW, any particular reason not to use hardware RAID on 3ware?
That was the initial plan, but I can't get anything even resembling decent speeds out of them. These systems have two 7500-8 boards in 'em (on separate PCI buses). I had been running RH7.3 (!!) and XFS with the boards in hardware RAID5 and a software RAID0 stripe across those, and would get >100MB/s writes and >300MB/s reads. With CentOS-4 and ext3, I was getting ~30MB/s writes and ~200MB/s reads. The reads I'd be OK with, but the write speed is absurd. I tweaked everything I could think of, but nothing helped much.
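In case anyone wants to reproduce the numbers above, this is roughly how I'd measure raw sequential throughput with plain dd (a sketch -- TARGET is a hypothetical mount point for the array; on a real run, size the file well past RAM so the page cache doesn't flatter the write number, and conv=fsync forces dirty pages out before dd reports its rate):

```shell
# Rough sequential write/read check.  TARGET defaults to /tmp here just
# so the snippet runs anywhere; point it at the array's mount point.
# count is shrunk for illustration -- use a file bigger than RAM for real.
TARGET=${TARGET:-/tmp}
dd if=/dev/zero of=$TARGET/ddtest bs=1M count=64 conv=fsync 2>&1 | tail -1
dd if=$TARGET/ddtest of=/dev/null bs=1M 2>&1 | tail -1
rm $TARGET/ddtest
```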