I don't think this is off topic, since I want to use JBOD mode so that Linux can do the RAID. I'm hoping to run this under CentOS 5 and Ubuntu 12.04 on a Sun Fire X2250.
Hard to get answers I can trust out of vendors :-)
I have a Sun RAID card which I am pretty sure is LSI OEM. It is a 3 Gb/s SAS-1 card with 2 external connectors like the one on the right here:
http://www.cablesondemand.com/images/products/CS-SAS1MUKBCM.jpg
And I have 2 x Sun J4400 JBOD cabinets each with 24 disks.
If I buy a new card that is 6 Gb/s SAS-2 with the same connector, can I connect my cabinets to it and have them work? Even if they only run at 3 Gb/s I don't care.
I've also hit an issue with the number of logical devices allowed, and I'm wondering whether this is a HW, FW or SW limitation, if anyone knows.
I want to run everything in JBOD mode and let Linux do the RAID. So for the first cabinet I ran this command 24 times to create a logical drive for each physical one:
/usr/StorMan/arcconf create 1 logicaldrive max volume 0,X noprompt
where X goes from 0 to 23.
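If it helps anyone, a simple shell loop covers all 24 runs - just a rough sketch, assuming controller 1 and channel 0 as in the command above:

for X in $(seq 0 23); do
    # one max-size, single-disk logical drive per physical disk on channel 0
    /usr/StorMan/arcconf create 1 logicaldrive max volume 0,$X noprompt
done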
That went great: it created /dev/sd[c-z], and I was able to use those with mdadm to create 4 x RAID6 arrays and then a big RAID0 out of the 4 RAID6s. Works great!
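In case it's useful, the mdadm side was roughly like this - a sketch from memory, assuming 6 disks per RAID6 set and that the device letters are exactly /dev/sdc through /dev/sdz (double-check yours before running anything):

# four 6-disk RAID6 arrays
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[c-h]
mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[i-n]
mdadm --create /dev/md2 --level=6 --raid-devices=6 /dev/sd[o-t]
mdadm --create /dev/md3 --level=6 --raid-devices=6 /dev/sd[u-z]
# stripe the four RAID6 arrays into one big RAID0
mdadm --create /dev/md4 --level=0 --raid-devices=4 /dev/md[0-3]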
Then I tried to connect the 2nd cabinet, and when I ran the above arcconf command it told me there were already 24 logical devices, and that this is the max.
Anyone know whether that is HW, FW or SW?
Would a new card fix this problem? Does anyone know for sure of a card with the above connectors that has a JBOD mode that will support 48 drives and expose them all to Linux?
I don't know anything specifically about your RAID card or enclosures, but the following experience might help you nonetheless.
Some Dell PERC storage controllers (specifically the PERC 5/i and 6/i in my case, which are LSI OEM I believe) have no JBOD mode, so I ended up creating a single-drive RAID0 volume for each drive. Then I set up Linux software RAID across those volumes with mdadm to chain them together into whatever RAID array type fits the job.
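Roughly what that looks like once the controller exports each drive as its own RAID0 volume - just a sketch, and the device names (/dev/sdb through /dev/sde here) are made up, so substitute your own and pick whatever RAID level fits:

# each of sdb..sde is a single-drive RAID0 virtual disk exported by the PERC
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]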
Hopefully the previous tidbit of information helps you.
That's really going to drag if you have to configure a RAID0 for all 48 disks ... it'd be much easier if you could communicate with the drives directly. I've set up at most five or six RAID0 devices on one host, and it's not particularly enjoyable!
---~~.~~--- Mike // SilverTip257 //
Well, using the command-line tools it is not bad at all - but my question remains unanswered: how many logical devices can your card have? I've hit my limit at 24, but I have 48 drives, so I need to find a card that can do 48 logical devices.
thanks
Look at ZFS documentation and mailing lists. ZFS prefers JBOD and raw drive access, so there is plenty of information out there about which controllers work best that way.
The LSI 9211-8i is one of the more popular ones.
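For example, once the HBA exposes the raw drives, ZFS gets pointed straight at them - a minimal sketch, assuming ZFS is installed and the disks happen to be sdb through sdg:

# one raidz2 (double-parity) pool across six raw drives
zpool create tank raidz2 sdb sdc sdd sde sdf sdg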
I use an LSI 9200-8e PCIe host bus adapter card for my SAS JBODs. It has nothing but two standard SFF-8088 external connectors for attaching external JBODs. No fancy hardware RAID or anything is done by the card. I can see up to 512 SATA/SAS drives connected to the card via SAS JBOD enclosures. Drivers are built into the Linux kernel. Drives are seen raw in Linux, so you can use smartd and smartctl to monitor them. There is also a 9200-series HBA that does internal connections too. Most of LSI's non-HBA cards that do RAID do NOT support pure JBOD.
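For example, to pull a full SMART report straight off one of the drives (assuming it shows up as /dev/sdb - yours will differ):

smartctl -a /dev/sdb

And smartd can be pointed at each device the same way in /etc/smartd.conf.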
David.
Thanks for the recommendations, folks!
On Thu, Jul 19, 2012 at 12:24 PM, David C. Miller millerdc@fusion.gat.com wrote:
LSI 9200-8e
BTW, I read the specs on that and it says it is compatible with 6 Gb/s and 3 Gb/s SAS, which hopefully means it will work with my Sun J4400 SAS-1 shelves, right?
I like that it is a JBOD-only card - that is exactly what I want.