[CentOS] Hardware vs Kernel RAID (was Re: External SATA enclosures: SiI3124 and CentOS 5?)

Ross Walker rswwalker at gmail.com
Tue Jun 2 15:38:40 UTC 2009


On Mon, Jun 1, 2009 at 10:52 PM, Michael A. Peters <mpeters at mac.com> wrote:
> -=- starting as new thread as it is off topic from controller thread -=-
>
> Ross Walker wrote:
>
>  >
>  > The real key is the controller though. Get one that can do hardware
>  > RAID1/10, 5/50, 6/60, if it can do both SATA and SAS even better and
>  > get a battery backed write-back cache, the bigger the better, 256MB
>  > good, 512MB better, 1GB best.
>
> I've read a lot of different reports that suggest at this point in time,
> kernel software raid is in most cases better than controller raid.
>
> The basic argument seems to be that CPUs are fast enough now that the
> limitation on throughput is the drive itself, and that SATA resolved the
> bottleneck that PATA caused with kernel raid. The arguments then go on
> to give numerous examples where a failing hardware raid controller
> CAUSED data loss, where a raid card died and an identical raid card had
> to be scrounged from eBay to even read the data on the drives, etc. -
> problems that apparently don't happen with kernel software raid.
>
> The main exception I've seen to using software raid are high
> availability setups where a separate external unit ($$$) provides the
> same hard disk to multiple servers. Then the raid can't really be in the
> kernel but has to be in the hardware.
>
> I'd be very interested in hearing opinions on this subject.

The real reason I use hardware RAID is the write-back cache. Nothing
beats it for sheer write performance.

Hell, I don't even use the on-board RAID. I just export the drives as
individual RAID0 disks, readable with a straight SAS controller if
need be, and use ZFS for the RAID. On a drive failure ZFS only has to
resilver the existing data, not the whole drive, which shrinks the
double-failure window significantly, and the checksumming of each
block gives me peace of mind that the data is uncorrupted. The 512MB
of write-back cache makes the ZFS logging fly without having to buy
into expensive SSD drives.
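
For what it's worth, a minimal sketch of that kind of layout (the pool
name and the Solaris-style device names are just placeholders, and the
vdev type is whatever fits the workload):

    # Each physical disk is exported by the controller as its own
    # single-drive RAID0 volume; ZFS provides the actual redundancy.
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

    # Scrubs and resilvers only walk allocated blocks, not whole disks.
    zpool status tank
    zpool scrub tank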

I might explore using straight SAS controllers and MPIO with SSD
drives for logging in the future, once ZFS gets a way to remove a log
device from a storage pool after it has been added, in case the SSD
device fails.
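
Something along these lines is what I have in mind; the devices are
again just placeholders, and mirroring the log at least guards against
a single SSD failure in the meantime:

    # Dedicated log (ZIL) on SSDs; a mirrored log protects the
    # in-flight synchronous writes if one of the SSDs dies.
    zpool add tank log mirror c2t0d0 c2t1d0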

But now things are way off topic.

-Ross


