> I have a bunch of IBM HS20 blades in two bladecenters, and for the
> most part, they are very decent commodity (sort of) machines.
>
> Pros:
>
> 14 of them in 7U of space. Dual processor (Intel), hardware RAID-1
> that pretty much just works, all the time. Basic RH9-friendly (soon
> to be upgraded to CentOS) /proc information about all kinds of
> groovy things. The management console *rocks* for remote
> diagnostics, BIOS upgrades, remote reboots even of a fully locked
> machine, and a host of other things. Initial buy-in is highish, but
> a full bladecenter is pretty reasonable per-machine. They seem to
> run 2.4.x (old RH9 stuff) decently; I imagine they'll run CentOS
> fairly well, too. Shared CD-ROM drive for media insertion that can
> be switched remotely or by hand. Really clever power redundancy
> options (see below, too).

The HS20s do run CentOS 3 -and- 4 quite nicely. I've got a number of HS20s (sans RAID, just single internal SCSI drives), and everything just works out of the box. I've installed both x86_64 and i686 on them.

PDU? Mine are just plugged into the rack's existing 208V dual rails.

I have 3GB of RAM in each of mine. Mine have dual 2.8GHz Xeon CPUs with 2MB of cache. (Both are easy to verify from /proc; see the sketch at the end.)

You can get a gigE copper 'passthrough' module that just gives you 14 discrete Ethernet cables, one per port per machine, instead of buying into their Cisco or Nortel switches... with hindsight, I kinda wish I'd gotten the internal switches; it would have saved a lot of cable mess.

Those little tiny 'lame-o' 2.5" drives aren't laptop drives; they are 10,000 RPM U320 SCSI drives, the Seagate Savvio enterprise drives that all the server folks are switching to, since they can get more drives in less space.

I've got 2Gb Fibre Channel adapters in my blades, so they can talk to a SAN, and they can even boot off of iSCSI if so configured (I haven't tested that yet).

You can also get dual Opteron blades, PowerPC blades that run AIX, and even a Cell blade for doing Cell-based scientific number crunching.
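
For anyone who wants to sanity-check what a blade actually reports, here's a rough, illustrative Python sketch (not tested on these exact blades) against the standard /proc files -- /proc/cpuinfo, /proc/meminfo, and /proc/scsi/scsi, all present on the stock RH9/CentOS kernels: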
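
#!/usr/bin/env python
# Rough sanity-check sketch: read the standard /proc interfaces to
# confirm what a blade reports for CPUs, RAM, and SCSI disks.
# Written for the Python 2.x that ships with RH9/CentOS 3 and 4.

def cpu_summary():
    # /proc/cpuinfo has one "model name" line per logical processor
    # (dual Xeons with hyperthreading will show four).
    models = []
    for line in open('/proc/cpuinfo'):
        if line.startswith('model name'):
            models.append(line.split(':', 1)[1].strip())
    return models

def mem_total_mb():
    # The MemTotal line in /proc/meminfo is reported in kB.
    for line in open('/proc/meminfo'):
        if line.startswith('MemTotal:'):
            return int(line.split()[1]) // 1024
    return 0

def scsi_models():
    # /proc/scsi/scsi lists attached devices (e.g. the Savvio drives);
    # it only exists if SCSI support is in the kernel.
    try:
        return [l.strip() for l in open('/proc/scsi/scsi') if 'Model:' in l]
    except IOError:
        return []

if __name__ == '__main__':
    cpus = cpu_summary()
    print('%d logical CPU(s): %s' % (len(cpus), cpus and cpus[0] or 'unknown'))
    print('RAM: %d MB' % mem_total_mb())
    for m in scsi_models():
        print(m)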
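
Everything it touches is world-readable, so it runs as any user; and if the CPU count comes back as four rather than two, that's just hyperthreading presenting each Xeon as two logical processors.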