I was wondering if anyone here has experience with the HP MSA60 with P400 and P800 controllers. How reliable are they for a 24x7 shop?
TIA
Mag Gam wrote:
I was wondering if anyone here has experience with the HP MSA60 with P400 and P800 controllers. How reliable are they for a 24x7 shop?
Well, it's not five-nines stuff; there are all kinds of single points of failure. If you want 0.99999-level reliability, you need a fully redundant system with multipath at every stage: a Fibre Channel SAN with dual HBAs on each system, dual switches, dual controllers on each storage array, and so on, with all components hot-swappable. Of course, this all comes at significant expense, both in complexity and cost.
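For what it's worth, the host side of that multipath picture on CentOS is just dm-multipath. A rough sketch, assuming the stock CentOS 5 packages and that /etc/multipath.conf has already been given a sane defaults/blacklist section for your hardware:

  yum install device-mapper-multipath   # dm-multipath userspace tools
  chkconfig multipathd on               # start the daemon at boot
  service multipathd start
  multipath -ll                         # each LUN should list two (or more) active paths

If one path (HBA, switch, or array controller) dies, I/O should keep flowing over the surviving one.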
Well, I am poor and so is my school.
We want to set up a cheap storage farm, so I was asking what people's opinions are on the controller and the disks :-)
Mag Gam wrote:
Well, I am poor and so is my school.
Hear, hear
We want to set up a cheap storage farm, so I was asking what people's opinions are on the controller and the disks :-)
Cor, looks like it is not just me having to think about a storage farm. Over here, some downtime (an hour...) is okay, so I have been looking at moving things to a replicated storage farm on el cheapo hardware and zero-cost operating systems patched together with GlusterFS.
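Just to sketch what I have in mind (this assumes a GlusterFS version that ships the gluster CLI; the hostnames and brick paths below are made up):

  # on one of the nodes, once the servers can see each other:
  gluster peer probe server2
  gluster volume create gv0 replica 2 server1:/export/brick1 server2:/export/brick1
  gluster volume start gv0
  # on a client:
  mount -t glusterfs server1:/gv0 /mnt/storage

With replica 2, either box can go down for that hour without taking the data with it.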
On Friday 21 August 2009, Mag Gam wrote:
I was wondering if anyone here has experience with the HP MSA60 with P400 and P800 controllers. How reliable are they for a 24x7 shop?
We have a few (p800). My opinion is that they're acceptable but not fast. We've had one flaky controller out of 30 in a year (and I think that one turned out to be a loose PCI slot).
On the plus side:
+ does RAID6
+ SmartArray logical drives are a lot more flexible than most other RAIDs
+ monitoring built on hpacucli works, with clear and consistent behaviour (see the commands below)
+ just works on CentOS
+ no problem with large devices (for reasonably new versions of firmware/driver)
and:
- not terribly impressive speed-wise
- /dev/cciss/cXdY... can be problematic when software assumes /dev/XXX
- hpacucli requires you to twist your brain sideways (syntax)
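For reference, the routine monitoring boils down to a few commands, roughly like this (assuming the controller sits in slot 0):

  hpacucli ctrl all show status      # controller, cache and battery status
  hpacucli ctrl slot=0 ld all show   # logical drive status
  hpacucli ctrl slot=0 pd all show   # physical drive status

Wrap those in a cron job or a Nagios check and you get the clear, consistent behaviour mentioned above.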
All my experience with the p800 is from DL185g5 with 12x1T drives.
We also have a few random p400's. They behave identically to the p800, but slower.
/Peter
We have a few (p800). My opinion is that they're acceptable but not fast.
Heard this a few times now, in the interest of getting something better next time, what have you found equally reliable but faster?
- hpacucli requires you to twist your brain sideways (syntax)
Heh, no doubt about that :)
On Friday 21 August 2009, Joseph L. Casale wrote:
We have a few (p800). My opinion is that they're acceptable but not fast.
Heard this a few times now, in the interest of getting something better next time, what have you found equally reliable but faster?
Nothing as cheap as a full DL185, that's for sure, unless you count Sun's Thor (Thumper NG) machines, but then you'll have to do the RAID part in software somehow.
I do like Nexsan's SATABeast and QLogic HBAs though :-)
/Peter
On 21.08.2009 at 19:08, Peter Kjellstrom wrote:
On Friday 21 August 2009, Joseph L. Casale wrote:
We have a few (p800). My opinion is that they're acceptable but not fast.
Heard this a few times now, in the interest of getting something better next time, what have you found equally reliable but faster?
Nothing as cheap as a full DL185, that's for sure, unless you count Sun's Thor (Thumper NG) machines, but then you'll have to do the RAID part in software somehow.
Yeah, but that is as easy as:
zpool create tank raidz2 dev1 dev2 dev3 dev4 dev5 dev6 etc.
zfs create tank/bigdisk
But I'd go one step further and use one of Sun's OpenStorage devices. Once you have a lot of no-name JBOD SATA drives, the inability of Solaris to light up the yellow light of the broken one will make it painfully obvious that while one can spend too much on storage, one can as easily spend too little... ;-)
If you really want to go with a HW controller, try Areca or the high-end 3Ware models.
As mentioned in any ZFS document, when you use a HW RAID controller, the OS never knows if a drive is broken or showing errors. The HW hides that from the OS. You have to have closed-source drivers like the HP utils to tell you that.
If your data set will, over the lifespan of that server, never grow beyond the original size of the array, then you can go with a HW RAID controller. Otherwise, go ZFS.
Rainer
Hello
Rainer Duffner wrote:
On 21.08.2009 at 19:08, Peter Kjellstrom wrote:
On Friday 21 August 2009, Joseph L. Casale wrote:
We have a few (p800). My opinion is that they're acceptable but not fast.
Heard this a few times now, in the interest of getting something better next time, what have you found equally reliable but faster?
Nothing as cheap as a full DL185, that's for sure, unless you count Sun's Thor (Thumper NG) machines, but then you'll have to do the RAID part in software somehow.
Yeah, but that is as easy as:
zpool create tank raidz2 dev1 dev2 dev3 dev4 dev5 dev6 etc.
zfs create tank/bigdisk
It's as simple as that when creating your RAID, true. It's trickier if you have to boot off it, and even more so if you have to start replacing disks. I have nothing against software RAID; it works fine for me, but you have to know what you're doing.
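For completeness, replacing a failed disk in a pool like that is roughly the following (pool and device names are just placeholders):

  zpool status tank              # identify the faulted device
  zpool offline tank c1t5d0      # take the bad disk offline
  # physically swap the disk, then:
  zpool replace tank c1t5d0
  zpool status tank              # watch the resilver progress

The knowing-what-you're-doing part is mostly mapping the cXtYdZ name to a physical slot, which brings us back to the LED problem.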
If you really want to go with a HW controller, try Areca or the high-end 3Ware models.
IMHO 3ware has kind of fallen out of grace with me; their latest lines were expensive and overall performance was bad compared to Areca. Maybe that has changed in the meantime, but I'm not buying them anymore.
As mentioned in any ZFS document, when you use a HW RAID controller, the OS never knows if a drive is broken or showing errors. The HW hides that from the OS.
That's basically the whole point of a hardware RAID controller. But it doesn't mean the OS can't analyze the RAID controller's (and disks') status; see the next paragraph.
You have to have closed-source drivers like the HP utils to tell you that.
Not entirely true: the cciss driver is open source, and you can use an easy utility like cciss_vol_status to monitor your RAID. Though if you want to manage your RAID array beyond basic status checks and replacing broken disks, you'll have to use the HP management utilities (you can get them from their website), which are, yes, a bit dire. Such as requiring 32-bit libs in their 64-bit packages; FAIL if you ask me. (Note that you don't have to use HP's cciss driver for their management utilities to work - they work fine with the drivers that ship with CentOS.)
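For the basic check, something along these lines is enough (the device path is the usual first cciss logical drive; adjust for your box):

  cciss_vol_status /dev/cciss/c0d0   # prints the array status, flags failed or rebuilding drives

Stick that in a cron job and mail yourself any output that isn't the all-OK line.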
In addition, smartmontools is capable of monitoring the individual disks behind a cciss RAID controller. What's more, the failure LEDs on the backplanes work, something I haven't seen made possible in combination with software RAID.
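The smartmontools bit looks roughly like this (disk index 0 is simply the first physical drive behind the controller):

  smartctl -a -d cciss,0 /dev/cciss/c0d0   # SMART data for physical disk 0 behind the cciss controller

smartd can be pointed at the same devices for continuous monitoring.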
If your data set will, over the lifespan of that server, never grow beyond the original size of the array, then you can go with a HW RAID controller. Otherwise, go ZFS.
I see no reason why a growing dataset is an argument against a hardware RAID controller. You can just add disks, create new arrays and use LVM.
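A rough sketch of that growth path (the volume group, logical volume and device names are hypothetical):

  pvcreate /dev/cciss/c0d1            # new logical drive built from the added disks
  vgextend datavg /dev/cciss/c0d1     # grow the volume group
  lvextend -L +500G /dev/datavg/data  # grow the logical volume
  resize2fs /dev/datavg/data          # grow the ext3 filesystem, online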
My few pros/cons for software vs. hardware RAID:
Software RAID:
+ Portable: any controller will work and the management is the same
+ Can span anything that's a block device
+ Cheap
+ In RAID 1 & 10, the independent parts are readable without the RAID functioning
- Failure LEDs don't work
- Removing a drive can be tricky, involving the scsi remove-single-device command; once this command didn't even work, and yanking out the disk without the remove command made the server lock up (see the sketch after these lists)
- When replacing disks on a bootable set, you have to remember to re-install the bootloader
Hardware RAID:
+ Failure LEDs work when used in conjunction with a backplane (might require additional cabling)
+ Possibility for a battery-backed write cache
+ Just works (at least for HP)
+- Somewhat portable if you remain within the same brand; excellent experience with HP Smart Array controllers in this regard
- Management tools can be a PITA on Linux
- Expensive
- Some controllers store the data on disk in such a way that you can't read it back on a regular controller without additional tools
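As promised above, a sketch of the software-RAID drive swap, assuming an md mirror of /dev/sda1 and /dev/sdb1 (the device names and SCSI address are placeholders):

  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1           # drop the dying disk from the array
  echo "scsi remove-single-device 0 0 1 0" > /proc/scsi/scsi   # detach it from the SCSI layer
  # swap the disk, let the kernel rescan it, partition it like sda, then:
  mdadm /dev/md0 --add /dev/sdb1                               # re-add and let it resync
  grub-install /dev/sdb                                        # don't forget the bootloader on the new disk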
Best regards,
Glenn
If you really want to go with a HW controller, try Areca or the high-end 3Ware models.
Well, for non-Solaris/non-file servers, hardware RAID is easiest. I am hesitant to go with Areca (looks like cheap tw stuff). I have always used LSI stuff but found them slow as hell; I am pretty sure the hp sa's are LSI chips...
Wonder what's in store for 3ware now that LSI owns them...
Someone in the vSphere forums did a comparison between an LSI and Adaptec and showed good numbers for an Adaptec SAS controller. I had horrible luck years ago with Adaptec SATA and never went back.
Oh well... Might try Adaptec or 3Ware next...
jlc
On 21.08.2009 at 21:07, Joseph L. Casale wrote:
If you really want to go with a HW controller, try Areca or the high-end 3Ware models.
Well, for non-Solaris/non-file servers, hardware RAID is easiest.
True. Replacing disks is much easier for sure.
I am hesitant to go with Areca (looks like cheap tw stuff). I have always used LSI stuff but found them slow as hell; I am pretty sure the hp sa's are LSI chips...
Areca has some merit. But I have never run a comparison, admittedly.
Wonder what's in store for 3ware now that LSI owns them...
Someone in the vSphere forums did a comparison between an LSI and Adaptec and showed good numbers for an Adaptec SAS controller. I had horrible luck years ago with Adaptec SATA and never went back.
Oh well... Might try Adaptec or 3Ware next...
It would be interesting to make a comparison between high-end Adaptec, Areca and 3Ware HBAs.
Rainer
Joseph L. Casale wrote:
I have always used LSI stuff but found them slow as hell; I am pretty sure the hp sa's are LSI chips...
For a server, random I/O operations per second are usually more important than sequential burst read/write. I've found many server RAID cards excel at IOPS while relatively sucking at MByte/sec.
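An easy way to see that split on a given card is fio, assuming it's installed (the file path and sizes are placeholders):

  # random 4k reads: look at the IOPS figure
  fio --name=randread --filename=/data/fio.test --size=4g --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based
  # sequential 1M reads: look at the MB/s figure
  fio --name=seqread --filename=/data/fio.test --size=4g --rw=read --bs=1m --ioengine=libaio --direct=1 --runtime=60 --time_based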
On Friday 21 August 2009, Joseph L. Casale wrote:
If you really want to go with a HW controller, try Areca or the high-end 3Ware models.
Well, for non-Solaris/non-file servers, hardware RAID is easiest. I am hesitant to go with Areca (looks like cheap tw stuff). I have always used LSI stuff but found them slow as hell; I am pretty sure the hp sa's are LSI chips...
Yes they are LSI but with serious firmware re-work by HP.
/Peter
On Fri, Aug 21, 2009 at 07:40:24PM +0200, Rainer Duffner wrote:
On 21.08.2009 at 19:08, Peter Kjellstrom wrote:
On Friday 21 August 2009, Joseph L. Casale wrote:
We have a few (p800). My opinion is that they're acceptable but not fast.
Heard this a few times now, in the interest of getting something better next time, what have you found equally reliable but faster?
Nothing as cheap as a full DL185, that's for sure, unless you count Sun's Thor (Thumper NG) machines, but then you'll have to do the RAID part in software somehow.
Yeah, but that is as easy as:
zpool create tank raidz2 dev1 dev2 dev3 dev4 dev5 dev6 etc.
zfs create tank/bigdisk
But I'd go one step further and use one of Sun's OpenStorage devices. Once you have a lot of no-name JBOD SATA drives, the inability of Solaris to light up the yellow light of the broken one will make it painfully obvious that while one can spend too much on storage, one can as easily spend too little... ;-)
Uhm.. Solaris/ZFS can't really light up the failure lights on Sun's own hardware?
-- Pasi
On 22.08.2009 at 12:37, Pasi Kärkkäinen wrote:
Uhm.. Solaris/ZFS can't really light up the failure lights on Sun's own hardware?
Of course it can - on Sun's own hardware. But you can run Solaris on almost any hardware - and that turns into a problem sometimes. Like in this case... ZFS has nothing to do with lighting up lights on disks. The OS must know which SCSI commands to send to do that. With our Promise JBOD, that's a lost cause... ;-)
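On enclosures that do speak SES, the kind of command the OS has to send can even be issued by hand with sg_ses from sg3_utils (reasonably recent versions; the slot index and sg device below are made up):

  sg_ses --index=7 --set=fault /dev/sg3    # light the fault LED on slot 7 of that enclosure
  sg_ses --index=7 --clear=fault /dev/sg3  # and turn it off again

Whether the JBOD actually honours it is another matter, as our Promise box shows.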
Rainer
On Sat, Aug 22, 2009 at 02:04:58PM +0200, Rainer Duffner wrote:
On 22.08.2009 at 12:37, Pasi Kärkkäinen wrote:
Uhm.. Solaris/ZFS can't really light up the failure lights on Sun's own hardware?
Of course it can - on Sun's own hardware. But you can run Solaris on almost any hardware - and that turns into a problem sometimes. Like in this case... ZFS has nothing to do with lighting up lights on disks. The OS must know which SCSI commands to send to do that. With our Promise JBOD, that's a lost cause... ;-)
Well that sounds more like what I was thinking of :)
-- Pasi
On Friday 21 August 2009, Rainer Duffner wrote:
On 21.08.2009 at 19:08, Peter Kjellstrom wrote:
On Friday 21 August 2009, Joseph L. Casale wrote:
We have a few (p800). My opinion is that they're acceptable but not fast.
Heard this a few times now, in the interest of getting something better next time, what have you found equally reliable but faster?
Nothing as cheap as a full DL185, that's for sure, unless you count Sun's Thor (Thumper NG) machines, but then you'll have to do the RAID part in software somehow.
Yeah, but that is as easy as:
zpool create tank raidz2 dev1 dev2 dev3 dev4 dev5 dev6 etc.
zfs create tank/bigdisk
Well, this being the CentOS mailing list, I kind of assumed the OP was planning to run Linux on it, and last I checked that did not include a production-worthy ZFS (if one at all).
...
If you really want to go with a HW controller, try Areca or the high-end 3Ware models.
We have lots of 3ware. They work, but when they exhibit problems it's, IMHO, a lot harder to diagnose/fix. Also, in my experience, both Areca and 3ware almost need XFS to run fast (sequential I/O wise).
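For what it's worth, what seems to matter most with XFS on those cards is telling mkfs.xfs about the RAID geometry. A rough example for a hypothetical 12-disk RAID6 (10 data disks) with a 64k stripe size:

  mkfs.xfs -d su=64k,sw=10 /dev/sda1   # stripe unit = controller stripe size, stripe width = number of data disks
  mount -o noatime /dev/sda1 /data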
/Peter