[CentOS] p800 and HP

RedShift redshift at pandora.be
Fri Aug 21 18:43:33 UTC 2009


Hello


Rainer Duffner wrote:
> On 21.08.2009, at 19:08, Peter Kjellstrom wrote:
> 
>> On Friday 21 August 2009, Joseph L. Casale wrote:
>>>> We have a few (p800). My opinion is that they're acceptable but not fast.
>>> Heard this a few times now; in the interest of getting something better
>>> next time, what have you found equally reliable but faster?
>> Nothing as cheap as a full dl185, that's for sure, unless you count Sun's
>> Thor (Thumper NG) machines, but then you'll have to do the RAID part in
>> software somehow.
> 
> 
> Yeah, but that is as easy as
> zpool create tank raidz2 dev1 dev2 dev3 dev4 dev5 dev6 etc.
> zfs create tank/bigdisk
> 
> 

It's as simple as that when creating the pool, true. It gets trickier if you have to boot off it, and trickier still when you start replacing disks. I have nothing against software RAID; it works fine for me, but you have to know what you're doing.
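
Creating the pool really is that short; the replacement side is where you need to know the ropes. A rough sketch of what a disk swap looks like on ZFS (the device names here are made up):

# a disk in the pool has failed; see which one
zpool status tank

# replace the failed disk with a spare in another slot
zpool replace tank c1t3d0 c1t8d0

# or, after physically swapping the disk in the same slot
zpool replace tank c1t3d0

# then watch the resilver progress
zpool status tank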


> If you really want to go with a HW controller, try Areca or the
> high-end 3Ware models.
> 

IMHO 3ware has rather fallen out of grace with me: their latest lines were expensive, and overall performance was poor compared to Areca. Maybe that has changed in the meantime, but I'm not buying them anymore.


> As mentioned in any ZFS document, when you use a HW RAID controller,
> the OS never knows if a drive is broken or showing errors. The HW
> hides that from the OS.

That's basically the whole point of a hardware RAID controller. But it doesn't mean the OS can't inspect the controller's (and the disks') status; see the next paragraph.

> You have to have closed-source drivers like the HP utils to tell
> you that.
> 

Not entirely true: the cciss driver is open source, and you can use a simple utility like cciss_vol_status to monitor your RAID. If you want to manage the array beyond basic status checks and replacing broken disks, though, you'll have to use the HP management utilities (available from their website), which are, yes, a bit dire. Requiring 32-bit libs in their 64-bit packages is a FAIL if you ask me. (Note that you don't need HP's cciss driver for their management utilities to work; they work fine with the driver that ships with CentOS.)
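
For reference, the basic status check really is a one-liner (the device node depends on your controller; the output shown is roughly what a healthy array reports, not verbatim):

# check the logical volume status on the first Smart Array controller
cciss_vol_status /dev/cciss/c0d0

# prints something like:
# /dev/cciss/c0d0: (Smart Array P800) RAID 5 Volume 0 status: OK.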

In addition, smartmontools can monitor the individual disks behind a cciss RAID controller. Even better, the failure LEDs on the backplanes work, something I haven't seen done in combination with software RAID.
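
For the curious, smartctl addresses the physical disks behind the controller with its -d cciss,N option, N being the physical drive number (device node as above):

# SMART data for the first physical disk behind the controller
smartctl -a -d cciss,0 /dev/cciss/c0d0

# run a short self-test on the second physical disk
smartctl -t short -d cciss,1 /dev/cciss/c0d0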


> If your data-set will, over the lifespan of that server, never grow
> beyond the original size of the array, then you can go with a
> HW RAID controller.
> Otherwise, go ZFS.
> 
> 

I see no reason why a growing dataset is an argument against a hardware RAID controller: you can just add disks, create a new array, and grow onto it with LVM.
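
A sketch of that with LVM (the volume group and device names are hypothetical): the new logical drive the controller exposes simply becomes another physical volume:

# new array shows up as a second logical drive, e.g. /dev/cciss/c0d1
pvcreate /dev/cciss/c0d1
vgextend vg_data /dev/cciss/c0d1
lvextend -L +500G /dev/vg_data/lv_big
resize2fs /dev/vg_data/lv_big    # grow the ext3 filesystem (online)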


My few pros/cons of software vs. hardware RAID:

Software RAID:
+ Portable: any controller will work and the management is the same
+ Can span anything that's a block device
+ Cheap
+ In RAID 1 and 10, the mirrored halves are readable on their own, even without the RAID layer functioning
- Failure LEDs don't work
- Removing a drive can be tricky, involving the scsi remove-single-device command; once that command didn't even work, and yanking the disk out without it locked the server up (see the sketch after this list)
- When replacing disks in a bootable set, you have to remember to re-install the bootloader
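
Since I mentioned it above, here's roughly what the replacement dance looks like with md (device names hypothetical):

# mark the disk failed and pull it from the array
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# tell the SCSI layer the device is gone before yanking it
# (host channel id lun, as listed in /proc/scsi/scsi)
echo "scsi remove-single-device 0 0 1 0" > /proc/scsi/scsi

# after inserting the new disk and partitioning it like the old one
mdadm --manage /dev/md0 --add /dev/sdb1

# and for a bootable RAID 1 set, don't forget the bootloader
grub-install /dev/sdb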

Hardware RAID:
+ Failure LEDs work when used in conjunction with a backplane (might require additional cabling)
+ Possibility for a battery backed write cache
+ Just works (at least for HP)
+- Somewhat portable if you remain within the same brand. Excellent experience with HP Smart Array controllers in this regard.
- Management tools can be a PITA on Linux
- Expensive
- Some controllers lay the data out on disk in such a way that you can't read it back on a plain controller without additional tools


Best regards,


Glenn
