On 02/03/2011 09:00 AM, Lamar Owen wrote:
On Wednesday, February 02, 2011 08:04:43 pm Les Mikesell wrote:
I think there are ways drives can fail that make them not be detected at all, and for an autodetected RAID member in a system that has been rebooted, that doesn't leave much evidence of where the drive was when it worked. If your slots are all full you may still be able to figure it out, but it's a good idea to save a copy of the listing while you know everything is working.
I'll echo this advice.
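A sketch of what "saving a copy of the listing" might look like (my own illustration, not from the thread; the awk filter and the serial number in the sample are made up, and on a live box you would pipe in `ls -l /dev/disk/by-id/`):

```shell
#!/bin/sh
# Sketch: snapshot the serial-number-to-device mapping while every drive
# is healthy, so a drive that later vanishes can still be located.
drive_map() {
    # keep only whole-disk ata-/scsi- links; print "id -> kernel name"
    awk '/ ata-| scsi-/ && !/-part[0-9]/ {
        n = split($NF, p, "/")           # $NF looks like ../../sdb
        print $(NF - 2), "->", p[n]
    }'
}

# sample "ls -l /dev/disk/by-id/" line (serial number is made up):
echo 'lrwxrwxrwx 1 root root 9 Feb  3 12:00 ata-WDC_WD7500AAKS_WD-FAKE1234 -> ../../sdb' | drive_map
# On a real system: ls -l /dev/disk/by-id/ | drive_map > /root/drive-map.txt
```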
I guess I'm spoiled by my EMC arrays, which light a yellow LED on the DAE and on the individual drive, as well as telling you which backend bus, which enclosure, and which drive in that enclosure. And the EMC-custom firmware is paranoid about errors.
But my personal box is a used SuperMicro dual Xeon I got at the depth of the recession in December 2009, and paid a song and half a dance for it. It had the six bay hotswap SCSI, and I replaced it with the six bay hotswap SATA, put in a used (and cheap) 3Ware 9500S controller, and have a RAID5 of four 250GB drives for the boot and root volumes, and an MD RAID1 pair of 750GB drives for /home. The Supermicro motherboard didn't have SATA ports, but I got a 64-bit PCI-X dual internal SATA/dual eSATA low-profile board with the low-profile bracket to fit the 2U case. Total cost < $500.
Less than $500 for a Supermicro box? Holy crap, Batman!
I'm using one of their motherboards (X8DAL-3) in a home-brew configuration, which has turned out to be somewhat less expensive than a factory-built Supermicro, but still not cheap. I'll say one thing for them ... they do make really nice stuff, with little extras the commodity makers like Asus or Gigabyte don't include. Well worth the extra dollars. This box just screams, so it should handle KVM very nicely if I organize the arrays properly ... anyone with RAID-60 advice please chime in.
I took Les' advice and built a connection map of the 15 drives. The Disk Utility GUI turned out to be useful once I determined the SATA breakout cable order and labelled each with its PHY number. Knowing that lets me use the GUI to clearly identify a failed drive, which shows up as a PHY#. I'm normally a command-line kinda guy but this utility is something I really like on RHEL-6. I hope it's the same on CentOS-6.
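One way to turn a "failed drive at PHY#n" report into a labelled cable is to pull the PHY number out of the persistent link names; a sketch (my own, not from the thread; by-path naming varies by controller, and the sample path is illustrative):

```shell
#!/bin/sh
# Sketch: extract the PHY number from a /dev/disk/by-path style link name,
# so a drive the GUI reports as "PHY#n" matches a labelled breakout cable.
phy_of() {
    # print just the digits following "phy" in the name
    sed -n 's/.*phy\([0-9][0-9]*\).*/\1/p'
}

echo 'pci-0000:03:00.0-sas-phy4-lun-0' | phy_of
# On a real system, something like:
#   ls /dev/disk/by-path/ | while read l; do echo "$l phy=$(echo "$l" | phy_of)"; done
```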
I'll organize the disk arrays with this trial version of RHEL, on the assumption that CentOS-6 simply picks them up as-built.
Chuck
On 2/3/2011 12:38 PM, Chuck Munro wrote:
Less than $500 for a Supermicro box? Holy crap, Batman!
I'm using one of their motherboards (X8DAL-3) in a home-brew configuration ... <snip>
If you are building your own stuff, I kind of like trayless hot-swap sata bays. And if you care more about size and power than speed and capacity, you can get them for laptop size drives too.
Les Mikesell wrote:
On 2/3/2011 12:38 PM, Chuck Munro wrote:
Less than $500 for a Supermicro box? Holy crap, Batman!
I'm using one of their motherboards (X8DAL-3) in a home-brew configuration, which has turned out to be somewhat less expensive than a factory-built Supermicro, but still not cheap. I'll say one thing for
<snip>
If you are building your own stuff, I kind of like trayless hot-swap sata bays. And if you care more about size and power than speed and capacity, you can get them for laptop size drives too.
Trayless is nice. All the sleds mostly use *different* screws. The ones that drive me crazy are the Penguin boxes we have, that use screws no one else uses... *and* we have a good number that came with only one or two drives (for use in clusters)....
mark
On 2/3/2011 1:52 PM, m.roth@5-cent.us wrote:
Les Mikesell wrote:
If you are building your own stuff, I kind of like trayless hot-swap sata bays. <snip>
Trayless is nice. All the sleds mostly use *different* screws. The ones that drive me crazy are the Penguin boxes we have, that use screws no one else uses... *and* we have a good number that came with only one or two drives (for use in clusters)....
It has been a while since I used Penguins, but as I recall they used to ship a bag of those screws with every unit, so if you don't have them it may be because you threw them out. And they put working carriers in the bays. The ones that irritate me are the IBMs and Dells that ship dummy carriers that take just as much material in the empty bays and charge a bundle for working ones if you add drives later.
Les Mikesell wrote:
On 2/3/2011 1:52 PM, m.roth@5-cent.us wrote:
<snip>
Trayless is nice. All the sleds mostly use *different* screws. The ones that drive me crazy are the Penguin boxes we have, that use screws no one else uses... *and* we have a good number that came with only one or two drives (for use in clusters)....
It has been a while since I used Penguins, but as I recall they used to ship a bag of those screws with every unit, so if you don't have them it may be because you threw them out. And they put working carriers in
Don't look at me, they were here when I started. And the other admin I work with claims they came without. I dunno, maybe the guy I replaced, or the one he replaced....
the bays. The ones that irritate me are the IBMs and Dells that ship dummy carriers that take just as much material in the empty bays and charge a bundle for working ones if you add drives later.
Oh, that silliness. What I dislike are the PERC 700s, that will *only* accept Dell drives, not commodity ones.
mark
On Thu, 3 Feb 2011, m.roth@5-cent.us wrote:
Oh, that silliness. What I dislike are the PERC 700s, that will *only* accept Dell drives, not commodity ones.
My understanding is that Dell has reversed this policy via a firmware update after a flood of complaints. I don't have any 700's to check this, but I believe they will now accept non-Dell drives. Certainly the Perc 5's and 6's do.
Steve
On 02/03/11 1:57 PM, Steve Thompson wrote:
<snip>
My understanding is that Dell has reversed this policy via a firmware update after a flood of complaints. I don't have any 700's to check this, but I believe they will now accept non-Dell drives. Certainly the Perc 5's and 6's do.
The complication being that random off-the-shelf drives probably haven't been qualified with the RAID controller, and have the potential for any number of nasty side effects. Even simple firmware revision changes on the same model drive can break things relating to write barriers and buffer flushing.
This is especially problematic for SATA drives, where the majority are optimized for single-user desktop performance and take all kinds of shortcuts that would adversely affect data reliability in a RAID environment.
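One concrete example of such a desktop shortcut is SCT Error Recovery Control being disabled, so a bad sector can stall the drive long enough for a RAID controller to drop it. With smartmontools you would inspect it via `smartctl -l scterc /dev/sda` (device name illustrative); this sketch of mine just classifies that command's output, with made-up sample lines:

```shell
#!/bin/sh
# Sketch: classify "smartctl -l scterc" output. A RAID-friendly drive
# reports finite timeouts like "Read: 70 (7.0 seconds)"; a desktop drive
# typically reports "Disabled" or doesn't support the log at all.
erc_status() {
    if grep -q 'seconds'; then
        echo "finite recovery timeout: RAID-friendly"
    else
        echo "ERC disabled or unsupported: risky behind RAID"
    fi
}

# sample output from a desktop drive (illustrative):
printf 'Read: Disabled\nWrite: Disabled\n' | erc_status
```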
On Thursday, February 03, 2011 01:38:35 pm Chuck Munro wrote:
On 02/03/2011 09:00 AM, Lamar Owen wrote:
But my personal box is a used SuperMicro dual Xeon I got at the depth of the recession in December 2009
Less than $500 for a Supermicro box? Holy crap, Batman!
Hey, first let me thank you for trimming the rest of the digest out in your reply; that's good stuff.
Next, this is an older Supermicro board, P4DP6, and older 32-bit-only Xeons, 2.8GHz. But it has 4GB of ECC RAM, and the nice 2U Supermicro rack chassis with the six bay trayless drive array. And, again, the total package, except for the drives, which I already had on hand, was less than $500, and that included the 3ware controller and the SiI SATA-2 low profile 64-bit PCI-X board.
<snip> I'm normally a command-line kinda guy but this utility is something I really like on RHEL-6. I hope it's the same on CentOS-6.
Should be. As far as I know this is the same palimpsest that's in Fedora; not sure which version is in EL6, though. It does a pretty good job, and even gives you a benchmarking utility, partitioning, formatting, SMART utilities, etc., all in one place.
It works reasonably well over an ssh X tunnel, too, and is one reason my standard install now includes at least the X libraries, even on a server. There are other reasons to have a remote GUI on a server, even if you disable the display/desktop manager and GUI login. I use ssh tunneled konqueror a lot, for instance. When you need to do large batches, CLI works best, but for just drilling down into filesystems I do like konqueror, from either KDE3 or KDE4, doesn't really matter that much to me. And it's reasonably fast even over DSL.
And my main server management station runs ssh tunneled gkrellm instances from critical servers; it's easy to tell at a glance if something has happened to a server, and it makes for a pretty display in the datacenter during tours, too.