----- "Grant McWilliams" grantmasterflash@gmail.com wrote:
He had a two-drive RAID 1 array and at least one of the drives failed, but he didn't have any notification software set up to let him know it had failed, so he didn't know whether both drives had died or just one. I wonder why he thinks software RAID would be a) more reliable or b) fix itself magically without telling him. He never did say whether he was able to use the second disk. I have 75 machines with 3ware controllers, and on the very rare occasion that a controller fails you plug in another one and boot up.
I have a pile of various RAID controllers from 3ware, Promise, LSI, the utter garbage that the older PERCs were, etc. that have pissed me off by randomly dropping disks, failing to rebuild or even detect their own disks, hanging, and, very importantly, not letting me move disks between machines. That said, I still have two 3ware cards and some LSI cards that are fine, but most of my arrays are software. Anecdotal evidence aside, unless you know what kind of performance you need, you have usage metrics, and you know how to benchmark properly, you probably don't need the risk and the marginal performance improvement of the extra hardware.
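For what it's worth, the notification he never set up is the easy part with software RAID: mdadm --monitor will mail you when an array degrades, or you can just watch /proc/mdstat from cron yourself. Here's a rough sketch of the latter, nothing official, just the sort of thing I'd drop in a cron job (the mdstat path is the normal Linux one, everything else is made up for illustration):

#!/usr/bin/env python
# Crude /proc/mdstat watcher: prints any md array whose member-status
# string (e.g. "[UU]") shows a dropped disk ("_"), and exits non-zero.
# A sketch only, not a replacement for "mdadm --monitor".
import re
import sys

MDSTAT = "/proc/mdstat"

def degraded_arrays():
    bad = []
    current = None
    for line in open(MDSTAT):
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)
        status = re.search(r"\[[U_]+\]", line)
        if current and status and "_" in status.group(0):
            bad.append((current, status.group(0)))
    return bad

if __name__ == "__main__":
    bad = degraded_arrays()
    for name, status in bad:
        print("%s is degraded: %s" % (name, status))
    sys.exit(1 if bad else 0)

Run it from cron with MAILTO=root set and cron mails you the output, which is exactly the warning this guy never got.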
(This was originally about fakeraid, wasn't it?)
I don't use software RAID in any sort of production environment unless it's RAID 0 and I don't care about the data at all. I've also tested the speed of hardware versus software RAID 5, and no matter how many CPUs you throw at it, the hardware will win.
I don't allow RAID 5, so the faster parity processing doesn't have any bearing on my choices. :)
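(For anyone following along at home, the "checksum" in question is just XOR parity. A toy sketch of what the md driver or the controller has to grind through on every full-stripe write and every degraded read; the block sizes and contents are made up:

# Toy illustration of RAID 5 parity: the parity block is the XOR of the
# data blocks in a stripe, and a missing block is recovered by XOR-ing
# the parity with the surviving blocks.
import os

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# A three-data-disk stripe plus one parity block (4-disk RAID 5).
data = [os.urandom(4096) for _ in range(3)]
parity = xor_blocks(data)

# "Lose" disk 1 and rebuild it from the parity plus the survivors.
rebuilt = xor_blocks([parity, data[0], data[2]])
assert rebuilt == data[1]
print("stripe rebuilt OK")

That XOR over every stripe is the CPU cost people argue about offloading, and it's also why a degraded RAID 5 read has to touch every remaining disk.)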
Even when a 3ware RAID controller has only one drive plugged in, it will beat a single drive plugged into the motherboard if applications are requesting dissimilar data. One stream from a software md RAID 0 will be as fast as one stream from a hardware RAID 0, but multiple streams of dissimilar data will be much faster on the hardware RAID controller due to controller caching.
True. That probably falls under the aforementioned "need". :) You can't beat the performance of a cache, although the Linux filesystem cache performs reasonably well. If you're running something like an ACID-compliant database, you'll want the battery-backed hardware cache for any sizable amount of I/O. Discussing performance without discussing benchmark methodology is annoying and often useless, but if you want to go down that route...
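...here's the kind of quick-and-dirty check I mean for the "multiple streams of dissimilar data" claim: one reader versus several concurrent readers on the same box. It's only a sketch; the file path and sizes are arbitrary, and a real run would use fio with O_DIRECT and a test file much bigger than RAM so the page cache (or the controller cache) can't answer everything.

#!/usr/bin/env python3
# Rough comparison of one reader vs. several concurrent readers hitting
# "dissimilar" random offsets in the same file. Only shows relative
# behaviour on this box; not a substitute for a proper fio run.
import os
import random
import threading
import time

TEST_FILE = "/tmp/bigfile"      # assumption: pre-created, e.g. with dd
BLOCK = 64 * 1024
READS_PER_WORKER = 2000

def random_reads(path, n):
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        for _ in range(n):
            f.seek(random.randrange(0, size - BLOCK))
            f.read(BLOCK)

def run(workers):
    threads = [threading.Thread(target=random_reads,
                                args=(TEST_FILE, READS_PER_WORKER))
               for _ in range(workers)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    mb = workers * READS_PER_WORKER * BLOCK / 1e6
    print("%d reader(s): %.1f MB in %.1fs (%.1f MB/s)"
          % (workers, mb, elapsed, mb / elapsed))

if __name__ == "__main__":
    for workers in (1, 4, 8):
        run(workers)

Run it once against the md array and once against the hardware array with the same file and at least you have numbers to argue about instead of anecdotes.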
(Also, your use case is contrary to your storage design. You would use RAID 0 for serial access and something parallel for random access. Yes, a Ford Explorer doesn't hug corners at 120 mph.)