On Jan 13, 2012, at 2:37 PM, John R Pierce <pierce at hogranch.com> wrote:

> On 01/13/12 6:41 AM, Vahan Yerkanian wrote:
>> multipath -ll showed everything OK, with both sdb and sdc (the same 24 x 3tb raid6 array) as active and ready.
>
> are those controllers aware you're using them for multipathing? RAID
> cards like that tend to have large caches, and one controllers cache
> won't see changes written to the other, leading to inconsistent data,
> unless the controllers have some form of back channel communications
> between them to coordinate their caches.

John's right; I thought these were straight SAS/SATA controllers. If the controllers can't communicate with each other, you will need to present these disks as individual pass-through disks with write-through cache and use software RAID. Some controllers are smart enough to do multipathing across them, but those tend to cost more than $500.

The Dell PERC (LSI) RAID controllers I have at work do multipathing on-board between multiple connections to each enclosure, but not between multiple controllers. To do that I would need two plain SAS/SATA controllers and handle RAID in software. I have done that successfully with Solaris and ZFS in the past, but in my experience Linux software RAID wasn't performant enough for large RAID6s.

> btw, thats _way_ too many disks in a single disk group, your disk
> rebuild times with 24 x raid6 will be ouch long. I try and limit my raid
> groups to 12 drives max, and stripe those. given 24 disks, I'd
> probably have 2 hot spares, and 2 x 11 raid60, which would provide the
> space equivalent of 18 disks

I agree with John here too. Create two RAID6 groups and use software to stripe them, either mdraid or LVM. If it were me, I'd put each RAID6 on a separate controller to balance the parity calculations and then stripe the two volumes in LVM. Keep a third controller as a spare in the closet.

-Ross
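
P.S. A rough sketch of what I mean by striping the two hardware RAID6 volumes in LVM. The device names (/dev/sdb and /dev/sdc for the two LUNs), the VG/LV names, the stripe size, and the filesystem are just placeholders for illustration; adjust them for your setup:

    # each hardware RAID6 LUN becomes a physical volume, both go into one VG
    pvcreate /dev/sdb /dev/sdc
    vgcreate vg_data /dev/sdb /dev/sdc

    # -i 2 stripes the logical volume across both PVs, -I sets the stripe size in KB
    lvcreate -i 2 -I 256 -l 100%FREE -n lv_data vg_data
    mkfs.ext4 /dev/vg_data/lv_data

    # mdraid alternative: RAID0 across the two LUNs, then a filesystem (or LVM) on top
    # mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

Either way each controller keeps doing the parity math for its own RAID6, and the host only handles the striping.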