[CentOS] RAID card selection - JBOD mode / Linux RAID

Wed Jul 18 22:44:16 UTC 2012
SilverTip257 <silvertip257 at gmail.com>

I don't know anything specific about your RAID card or enclosures,
but the following experience might help you nonetheless.

Some Dell PERC storage controllers (specifically the PERC 5i and 6i in
my case, which I believe are LSI OEM) have no JBOD mode, so I ended up
creating a single-drive RAID0 volume for each disk.  I then used mdadm
to chain those per-drive volumes together into whatever Linux software
RAID level fit the job.
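
To make that concrete, here is a minimal sketch of the mdadm step.
The device names (/dev/sd[b-e]) and the RAID6 level are only examples
I'm assuming for illustration; each single-drive RAID0 volume simply
shows up to Linux as an ordinary SCSI disk:

  # each disk is already a single-drive RAID0 volume on the controller,
  # so Linux just sees plain /dev/sdX devices (example names)
  mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
  cat /proc/mdstat    # watch the array build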

Hopefully the previous tidbit of information helps you.

That's going to really drag if you have to configure a RAID0 for all
48 disks ... it'd be much easier if you could directly communicate
with the drives.  I've set up at most five or six RAID0 devices on one
host and it's not particularly enjoyable!
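
If you do get stuck creating one logical drive per disk, a shell loop
at least takes the tedium out of it.  This is only a sketch reusing the
arcconf syntax you posted (controller 1, channel 0, devices 0-23), so
the numbers would need adjusting for the second cabinet:

  # create a single-drive volume for each of devices 0 through 23
  for X in $(seq 0 23); do
      /usr/StorMan/arcconf create 1 logicaldrive max volume 0,$X noprompt
  done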

---~~.~~---
Mike
//  SilverTip257  //


On Wed, Jul 18, 2012 at 3:50 PM, Alan McKay <alan.mckay at gmail.com> wrote:
> I don't think this is off topic, since I want to use JBOD mode so that
> Linux can do the RAID.  I'm hoping to run this under CentOS 5 and
> Ubuntu 12.04 on a Sunfire x2250.
>
> Hard to get answers I can trust out of vendors :-)
>
> I have a Sun RAID card which I am pretty sure is LSI OEM.  It is a
> 3Gb/s SAS1 card with two external connectors like the one on the
> right here:
>
> http://www.cablesondemand.com/images/products/CS-SAS1MUKBCM.jpg
>
> And I have 2 x Sun J4400 JBOD cabinets each with 24 disks.
>
> If I buy a new card that is 6Gb/s SAS2 with the same connector, can I
> connect my cabinets to it and have them work?  Even if they only run
> at 3Gb/s I don't care.
>
> I've also hit an issue with the number of logical devices allowed, and
> am wondering whether this might be a HW, FW or SW limitation if anyone
> knows.
>
> I want to run everything in JBOD mode and let Linux do the RAID.  So
> for the first cabinet I ran this command 24 times to create a logical
> drive for each physical one:
>
> /usr/StorMan/arcconf create 1 logicaldrive max volume 0,X noprompt
>
> where X goes from 0 to 23.
>
> That went great: it created /dev/sd[c-z], and I was able to use those
> with mdadm to create 4 x RAID6 arrays and then one big RAID0 out of
> the four RAID6 arrays.  Works great!
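>
> Roughly speaking, the mdadm side looked like this (give or take the
> exact device letters I put in each group of six):
>
> mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[c-h]
> mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[i-n]
> mdadm --create /dev/md2 --level=6 --raid-devices=6 /dev/sd[o-t]
> mdadm --create /dev/md3 --level=6 --raid-devices=6 /dev/sd[u-z]
> mdadm --create /dev/md4 --level=0 --raid-devices=4 /dev/md[0-3]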
>
> Then I tried to connect the 2nd cabinet, but when I run the above
> arcconf command it tells me there are already 24 logical devices and
> that this is the maximum.
>
> Anyone know whether that is HW, FW or SW?
>
> Would a new card fix this problem?  Does anyone know for sure of a
> card with the above connectors that has a JBOD mode which will
> support 48 drives and expose them all to Linux?
>
>
> --
> “Don't eat anything you've ever seen advertised on TV”
>          - Michael Pollan, author of "In Defense of Food"
> _______________________________________________
> CentOS mailing list
> CentOS at centos.org
> http://lists.centos.org/mailman/listinfo/centos