[CentOS] CentOS Digest, Vol 73, Issue 3

Chuck Munro

chuckm at seafoam.net
Thu Feb 3 18:38:35 UTC 2011



On 02/03/2011 09:00 AM, Lamar Owen wrote:
>
> On Wednesday, February 02, 2011 08:04:43 pm Les Mikesell wrote:
>> I think there are ways that drives can fail that would make them not be detected
>> at all - and for an autodetected raid member in a system that has been rebooted,
>> not leave much evidence of where it was when it worked.  If your slots are all
>> full you may still be able to figure it out but it might be a good idea to save
>> a copy of the listing when you know everything is working.
> I'll echo this advice.
>
> I guess I'm spoiled by my EMC arrays, which light a yellow LED on the
> DAE and on the individual drive, as well as telling you which backend
> bus, which enclosure, and which drive in that enclosure.  And the
> EMC-custom firmware is paranoid about errors.
>
> But my personal box is a used SuperMicro dual Xeon I got at the depth
> of the recession in December 2009, and paid a song and half a dance
> for it.  It had the six-bay hotswap SCSI, and I replaced it with the
> six-bay hotswap SATA, put in a used (and cheap) 3Ware 9500S
> controller, and have a RAID5 of four 250GB drives for the boot and
> root volumes, and an MD RAID1 pair of 750GB drives for /home.  The
> Supermicro motherboard didn't have SATA ports, but I got a 64-bit
> PCI-X dual internal SATA/dual eSATA low-profile board with the
> low-profile bracket to fit the 2U case.  Total cost < $500.
>
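
Side note on that MD RAID1 pair for /home, for anyone setting one up:
it's only a couple of commands.  A rough sketch, with made-up device
names (sdb and sdc stand in for whatever the two 750GB drives really
are on your box):

    # mirror the two drives
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    # put a filesystem on the mirror and mount it as /home
    mkfs.ext4 /dev/md1
    mount /dev/md1 /home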

Less than $500 for a Supermicro box?  Holy crap, Batman!

I'm using one of their motherboards (X8DAL-3) in a home-brew
configuration, which has turned out to be somewhat less expensive than
a factory-built Supermicro, but still not cheap.  I'll say one thing
for them ... they do make really nice stuff, with little extras that
commodity makers like Asus or Gigabyte don't include.  Well worth the
extra dollars.  This box just screams, so it should handle KVM very
nicely if I organize the arrays properly ... anyone with RAID-60
advice please chime in (a rough sketch of what I mean is below).
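
By RAID-60 I just mean the usual nesting: two (or more) RAID6 legs
striped together with RAID0.  In software that would be roughly the
following, with made-up device names and an arbitrary split of the
drives into two legs of six:

    # two RAID6 legs
    mdadm --create /dev/md10 --level=6 --raid-devices=6 /dev/sd[b-g]
    mdadm --create /dev/md11 --level=6 --raid-devices=6 /dev/sd[h-m]
    # stripe the legs together into one RAID-60 device
    mdadm --create /dev/md20 --level=0 --raid-devices=2 /dev/md10 /dev/md11

Whether that actually beats one big RAID6 for hosting KVM images is
exactly the sort of advice I'm after.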

I took Les' advice and built a connection map of the 15 drives.  The
Disk Utility GUI turned out to be useful once I determined the SATA
breakout cable order and labelled each lead with its PHY number.
Knowing that lets me use the GUI to clearly identify a failed drive,
which shows up by its PHY number.  I'm normally a command-line kinda
guy, but this utility is something I really like on RHEL-6.  I hope
it's the same on CentOS-6.
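
In case it helps anyone else, the same mapping is easy to keep as a
plain-text file too, so it survives a dead GUI or a box that won't
boot.  Something along these lines (the sd[a-o] range is just an
example for 15 drives):

    # persistent by-path names show which controller port each disk sits on
    ls -l /dev/disk/by-path/ > /root/drive-map.txt
    # record each drive's serial number so a pulled disk can be matched up
    for d in /dev/sd[a-o]; do
        echo "== $d"
        smartctl -i "$d" | grep -i serial
    done >> /root/drive-map.txt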

I'll organize the disk arrays with this trial version of RHEL, on the 
assumption that CentOS-6 simply picks them up as-built.
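
Assuming these end up as MD arrays, I'd expect CentOS-6 to assemble
them from the on-disk superblocks just as RHEL-6 does, but it can't
hurt to carry the definitions along explicitly:

    # append the existing array definitions to mdadm.conf
    mdadm --examine --scan >> /etc/mdadm.conf
    # quick sanity check that everything is assembled and healthy
    cat /proc/mdstat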

Chuck


