[CentOS] [OT/HW] hardware raid -- comment/experience with 3Ware

SilverTip257 silvertip257 at gmail.com
Tue Mar 12 23:33:26 UTC 2013


On Tue, Mar 12, 2013 at 4:30 AM, Arun Khan <knura9 at gmail.com> wrote:

> > On Thu, Mar 7, 2013 at 12:07 AM, Gordon Messmer wrote:
> > On 03/06/2013 08:35 AM, Arun Khan wrote:
> >
> >> Any preference between 1 and 2 above?
> >
> > Based on about 10 years of running a hundred or so systems with 3ware
> > controllers, I would say that you're better off with an LSI MegaRAID
> > card, or with Linux software RAID.  3ware cards themselves have been the
> > most problematic component of any system I've run in my entire
> > professional career (starting in 1996).  Even very recent cards fail in
> > a wide variety of ways, and there is no guarantee that, if your array
> > fails using a controller you buy now, you'll be able to connect it to a
> > controller you buy later.
>
> @ Gordon - thanks for sharing this piece of info!  In case of RAID
> card failure, it is important to be able to recover the data (RAID
> device) with a compatible replacement.  Are the LSI MegaRAID
> controllers more reliable in this respect?
>

I've not had any MegaRAID controllers fail, so I can only say they've been
reliable thus far!


>
> > At this point, I deploy almost exclusively systems running Linux with
> > KVM on top of software RAID.  While I lose the battery backed write
> > cache (which is great for performance unless you sustain enough writes
> > to fill it completely, at which point the system grinds nearly to a
> > halt), I gain a consistent set of management tools and the ability to
> > move a disk array to any hardware that accepts the same form factor
> > disk.  The reliability of my systems has improved significantly since I
> > moved to software RAID.
>
> Software RAID is an option, but I don't think hot swap is possible
> without some tinkering with the mdadm tool beforehand.
>

Hot swap really depends on what your HBA or RAID controller supports.

You start by failing and removing the drive via mdadm.  Then hot remove
the disk from the SCSI subsystem (ex: SCSI [0]) and finally physically
remove it.  Then work in the opposite direction: hot add the new disk
(SCSI [1]), clone the partition layout from a surviving drive to the new
one with sfdisk, and finally add the new disk/partitions to your softraid
array with mdadm.

You must hot remove the disk from the SCSI subsystem, or the block device
name (ex: /dev/sdc) stays occupied and is unavailable for the new disk you
put in the system.  I've used the above procedure many times to repair
softraid arrays while keeping systems online.
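
To confirm the array picked up the new member and to watch the resync,
the standard md status tools work:

    # show resync progress and array membership
    cat /proc/mdstat
    mdadm --detail /dev/md0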

[0]
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/removing_devices.html
[1]
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/adding_storage-device-or-path.html

> The systems will go to a client site (remote); I'd prefer to keep the
> support calls to remove/replace hardware activity :(
>
> Thanks,
> -- Arun Khan



-- 
---~~.~~---
Mike
//  SilverTip257  //


