On Wed, Mar 13, 2013 at 4:08 PM, Arun Khan <knura9@gmail.com> wrote:
On Wed, Mar 13, 2013 at 6:37 PM, SilverTip257 wrote:
On Tue, Mar 12, 2013 at 10:10 PM, Keith Keller wrote:
On 2013-03-12, SilverTip257 <silvertip257@gmail.com> wrote:
I've not had any MegaRAID controllers fail, so I can only say they've been reliable thus far!
I think that this is not a helpful comment for the OP. He wants to know, in the event the controller does fail, can he replace it with a similar-but-possibly-not-identical controller and have it recognize the original RAID containers. Just because you have not seen any failures so far does not mean the OP never will.
I've had no problem with various versions of Dell MegaRAID/PERC5i controllers. You can swap drives from a PERC5i into a PERC6i, for example, and things are peachy. But it is not possible to swap drives from a PERC6i into a PERC5i controller.
No plans to go with Dell hardware but it is great to note that newer models (Dell OEM Megaraid) recognize arrays created with older models. I don't expect an older model to recognize an array created by a newer model.
Doubtful. The 6i are newer than the 5i. Months ago I tested and can confirm a 5i cannot read 6i metadata (Dell and others are not lying).
I've not tried swapping drives from a 5i into a 6i, then back to the 5i, to see whether the 6i changed the metadata at all. That's too much swapping for a hypothetical situation where an admin does not do a one-to-one swap (5i to 5i).
Avoid SAS6/iR controllers ... they are low-end controllers that only support hardware RAID0 and RAID1.
And to add more information... the Dell PERC[56]i controllers are supported by the LSI SNMP daemon, which exports quite a bit of information via SNMP. That is useful with a Nagios plugin to keep tabs on array health.
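For reference, the polling side is plain SNMP once that LSI daemon is running. Something like this should show what the agent exposes (the host and community string are placeholders, and while 3582 is LSI's registered enterprise number, verify the exact OIDs against the MIB that ships with the agent):

  # walk the LSI MegaRAID enterprise subtree for controller/array status
  snmpwalk -v2c -c public raidbox.example.com 1.3.6.1.4.1.3582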
The SAS6/iR controllers are not compatible with that LSI SNMP daemon and so far I've not found a way to monitor their array health efficiently. The newest version of that Nagios script claims to support MPTFusion-based controllers (which the SAS6/iR is), but again I've not found a way to export the data to SNMP.
[ OT: I confess...someone on this list pointed me at a Nagios plugin to check OpenManage that I've yet to test. :-/ ]
My configuration will be RAID 5 or 6, depending on which option the client is willing to pay for.
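(For scale, with hypothetical numbers: eight 2 TB disks yield (8-1) x 2 = 14 TB usable under RAID 5 with single-drive fault tolerance, versus (8-2) x 2 = 12 TB under RAID 6 with double-drive tolerance, so RAID 6 costs one extra disk's capacity for the additional parity.)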
Ultimately hardware RAID controllers can be a big pain -- just like anything else it's a good business practice to have spares!
You start by failing/removing the drive via mdadm. Then hot remove the disk from the subsystem (ex: SCSI [0]) and finally physically remove it. Then work in the opposite direction ... hot add (SCSI [1]), clone the partition layout from one drive to the new one with sfdisk, and finally add the new disk/partitions to your softraid array with mdadm.
You must hot remove the disk from the SCSI subsystem or the block device name (ex: /dev/sdc) is occupied and unavailable for the new disk you put in the system. I've used the above procedure many times to repair softraid arrays while keeping systems online.
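Roughly, that sequence looks like the following (a sketch only: /dev/md0, the failed member /dev/sdc, its healthy partner /dev/sdb, and host0 are placeholder names for illustration, so adjust them to your own layout):

  # mark the dying member failed and pull it out of the array
  mdadm --manage /dev/md0 --fail /dev/sdc1
  mdadm --manage /dev/md0 --remove /dev/sdc1

  # hot remove the disk from the SCSI subsystem so the /dev/sdc name is freed
  echo 1 > /sys/block/sdc/device/delete

  # ...physically swap the drive, then rescan the bus to hot add the new one
  echo "- - -" > /sys/class/scsi_host/host0/scan

  # clone the partition layout from the surviving member onto the new disk
  sfdisk -d /dev/sdb | sfdisk /dev/sdc

  # finally, hand the new partition back to the array and let it resync
  mdadm --manage /dev/md0 --add /dev/sdc1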
This is basically the same procedure for replacing a failed drive in a hardware RAID array, except that there is no need to worry about drive naming.
I'll argue that the software RAID process is slightly more complex. And it is crucial that one remember to hot-remove the disk ... after all, one could panic their box by just yanking the drive.
Yes, this could happen in spite of well-documented procedures. For this reason, hardware RAID has been a consideration. However, I have come to realize that it has its own pros and cons, as mentioned in this thread.
-- Arun Khan