Hi,
What's the best motherboard you ever ran CentOS on?
--------- Don't read this unless you have lots o' spare time, and please, no flamers or kook-jobs with weird attitudes; this is just plain old hardware-as-it-relates-to-CentOS talk, no politics or mean folks.
Background (why I ask about CentOS-friendly hardware):
As I order a lot of servers, I have a lot of vendors telling me all sorts of stuff. It gets pretty extreme sometimes. I play poker each Saturday evening with a Google engineer, but we never talk about work. One day we might, but I thought I'd ask the list, because there are a lot of vendors out there who contradict just about everything I see on the list that I consider excellent advice, given by really talented and positive-minded folks, true professionals.
One thing I remember reading about Google is that they choose commodity hardware, engineer the $%^&* out of it, and basically fold it into a hive/cluster-like existence where it metes out its daily Java/mail/database/FTP/memcache sessions. The modular nature of the hardware allows both scaling and spontaneous disaster recovery of very high quality. http://news.com.com/2100-1032_3-5875433.html
I have chosen to follow the same logic, or at least continue down the path of those before me that made this choice.
From my view, my CentOS machines that never break run happiest on AMD-friendly boards with older onboard Broadcom GbE NICs that support NIC bonding and handle excess Internet and internal SNMP (Cacti & Nagios) traffic without blowing up.
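For reference, the bonding setup I'm talking about is just the stock CentOS network scripts. A minimal sketch, assuming CentOS 4 style config files, two onboard ports (eth0/eth1), and a made-up address; the bonding mode and monitoring interval are illustrative values, and CentOS 3 uses /etc/modules.conf instead:

# /etc/modprobe.conf -- load the bonding driver (mode and miimon are illustrative)
alias bond0 bonding
options bond0 mode=active-backup miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 (address is made up)
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- same again for eth1
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

After a 'service network restart', /proc/net/bonding/bond0 shows the bond state and which slave is active.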
I can rave about Crucial (Micron) RAM, AMD Opteron CPUs, Seagate & Fujitsu SCSI SCA 80-pin drives, Chenbro 2U rackmount 6x SCA cases, Zippy redundant power supplies, and LSI Logic MegaRAID 320-2 controllers, but the basis of it all is the board. It's got all the other parts no one talks about. It's the source of many a dmesg output. With the right board, life is much, much easier.
Right now I run Tyan S2882UG3NR-D dual-core Opteron boards with SCSI, SATA, and GbE LAN.
I have a vendor that consistently says they get complaints about Tyan boards, but out of the cluster, none of mine have ever died. The Dells die and get replaced. The remaining Suns seem to never die, even though I wish they would. Not that I'm a Tyan fan, but oddly enough this particular board works great.
I've noticed that a lot of the hardware advice and fixes Johnny gives just happen to coincide with vendors saying the exact opposite thing. They say go with ATI and Broadcom right when he's actually helping someone fix something related to one of those components. Sometimes on the same day.
If we as engineers are to have any say in our industry, it's going to happen when we all talk outside the box of BS theory, FUD, over-analysis, and analysis for analysis' sake.
Right now Intel has things such that it's actually a little difficult to find a stock 2U production Linux system unless you break it down part by part and vet the whole thing. Just curious about your opinions and advice: is there a spec you follow that you like?
Way back when, you'd either order a Supermicro-type system or get a VA Linux-type machine. What do you do now? If you happen to be trying to spec out a solid Linux server, I can say that the spec I arrived at handles over 100,000,000 page views a week; that's a third of CNET. It's all CentOS, the whole thing. Some of you may well have run across them, especially if you happen to read news on the web.
Commodity is the way to go. Get 40 servers for the price of one commercial vendor machine. CentOS is very real, my friends; let's talk hardware! Maybe we can help the CentOS project out by doing so.
I have to say, I'm sticking with my own hardware choices, so please don't view this as someone trying to get a hardware spec for free. The intention here is to solidify our own base as CentOS users and sysadmins.
-krb
On 1/16/07, Karl R. Balsmeier karl@klxsystems.net wrote:
What's the best motherboard you ever ran CentOS on?
Depends on purpose. Desktop/workstation: the Iwill DP533 board has been very good to me. It's an older board, but it's been rock solid and very well supported.
Server: IBM x365s (and in the near future the IBM HS21s) are pretty much all I use at work anymore. With only a few minor gripes the ServeRAID cards are quite good, and with the exception of IBM's website sucking for locating the occasional driver disk, their server hardware hasn't let me down yet no matter what kind of load I throw at the system.
Jim Perrin wrote:
On 1/16/07, Karl R. Balsmeier karl@klxsystems.net wrote:
What's the best motherboard you ever ran CentOS on?
Depends on purpose. Desktop/workstation: the Iwill DP533 board has been very good to me. It's an older board, but it's been rock solid and very well supported.
Server: IBM x365s (and in the near future the IBM HS21s) are pretty much all I use at work anymore. With only a few minor gripes the ServeRAID cards are quite good, and with the exception of IBM's website sucking for locating the occasional driver disk, their server hardware hasn't let me down yet no matter what kind of load I throw at the system.
I have a bunch of IBM HS20 blades in two bladecenters, and for the most part, they are very decent commodity (sort of) machines.
Pros:
14 of them in 6U space.
Dual processor (Intel), hardware raid-1 that pretty much just works, all the time.
Basic RH9-friendly (soon to be upgraded to CentOS) /proc information about all kinds of groovy things.
Management console *rocks* for remote diagnostics, bios upgrades, remote full lock reboots, and a host of other things.
Initial buy-in is highish, but a full bladecenter is pretty reasonable per-machine.
Seem to run 2.4.x (old RH9 stuff) decently, I imagine they'll run CentOS fairly well, too.
Shared CD-ROM drive for media insertion, that can be switched remotely or by hand.
Really clever power redundancy options (see below, too).
Cons:
Machines use tiny little lam-o 2.5" laptop drives.
Big ass PDU that doesn't really have a place to go other than the floor.
Huge loud fans to keep that chassis cool.
Celeron processors.
Strange '2.5G RAM configuration'.
Non-individual Gb ethernet connection (*choke*). Ridiculous Gb switching module that you can install into the bladecenter.
Pseudo-clever power redundancy options! Why would I want options for power redundancy? For redundancy, all I want is that if one supply dies, the other has the capacity to run the whole thing!
So-so tech support for 'enterprise class' hardware.
I also have loads of Dell boxes - a few racks' worth.
Pros:
I like the newer PERC 5/i 1U boxes with a pair of 500GB SATA drives mirrored and 4GB of RAM.
I like being able to boot an OMSA/Knoppix CD off a random machine.
I like Dell tech support for Silver on server hardware.
I like Dell's pretty serious support of Linux on PowerEdge hardware.
I like the fact that you can talk to people who don't tell you to reformat your computer when you're having hardware issues.
I like the big chunky 8-drive backplane on the 5U 2900.
I like serial console redirection from the BIOS.
I like the extra 'management' ethernet port on the back.
I like a pair of dual-core Xeons that appear to be 8 processors in 'top' (a quick check is sketched below).
I like the completeness of the rackmount hardware, and the general sturdiness and build quality of the 2900/1950 boxes.
I even like the el-cheapo 860s.
They have people on staff who have helped develop Linux.
CentOS 4.4 runs like a *champ* on the new stuff, we'll find out in the next couple of months about all the older stuff.
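On the '8 processors in top' item above: that is most likely hyper-threading on top of two dual-core sockets, and it is easy to check from /proc/cpuinfo. A quick sketch; the exact field names vary a little between kernel versions:

grep -c '^processor' /proc/cpuinfo                   # logical CPUs the kernel sees
grep '^physical id' /proc/cpuinfo | sort -u | wc -l  # physical sockets
grep '^cpu cores' /proc/cpuinfo | sort -u            # cores per socket
grep '^siblings' /proc/cpuinfo | sort -u             # logical CPUs per socket (HT shows up here)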
Cons:
Expensive. Way too expensive for what you get.
Support contracts for 7x24 hardware replacement are ludicrously high for high-end hardware that really shouldn't break in 8-10 years, much less 5 or 3. I think they should just warranty that stuff, maintain spares locally enough to get them out fast, and not charge a fortune for it like it's a 'Best Buy' replacement warranty on a new DVD player ($39.99 on a $45 item).
Sales is a PITA to deal with. It takes forever and 3 conference calls to get everything finalized, and then when you want a really simple change to the quote, the prices are all over the map, like every go is a new round of 'Let's Make A Deal'.
Lifecycle on Dell hardware can be measured in milliseconds. They drop a line of server hardware like Paris Hilton changes tapes in a video camera.
BROADCOM Gb NICs!
No dual power supply options randomly on rackmount server hardware. Not even something little and cheesy with two plugs (anymore).
Older PERC RAID controller support under modern Linux distributions is a nightmare involving trying to graft a floppy drive onto an old-ass machine that never had one, or whose floppy died but used a proprietary interface anyway.
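For reference, the dance being complained about is the installer's driver-disk mechanism. Roughly this, where the image name is made up and finding something to write it to is exactly the problem:

boot: linux dd
(the installer then asks whether you have a driver disk and which device it is on)

dd if=percraid-driverdisk.img of=/dev/fd0   # vendor-supplied image, hypothetical name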
/RANT
:)
Peter
Machines use tiny little lam-o 2.5" laptop drives.
No denying this one, but you can boot from a SAN, eliminating the in-blade drive entirely.
Celeron processors.
They've since done away with these in the HS21s. They're solid processors now.
Non-individual Gb ethernet connection (*choke*). Ridiculous Gb switching module that you can install into the bladecenter.
This one is a toss-up. It depends on what module you get for them. IBM seems to push the HS21s and accompanying chassis with the Nortel layer 2/3 switching module; however, there are others you can get. We've ordered a patch-panel copper passthru module so that we can put our Cisco 4506 to good use. Basically each blade gets its own line to our patch panel, and from there to our switch.
So-so tech support for 'enterprise class' hardware.
This one surprises me. I guess it might depend on who your rep is and what your organization is. We've got 24x7 4-hour support with them, and they've never taken longer than 1.5 hours to show up. The tech (when we've requested more than just a replacement part) has always been reasonably intelligent, and his knuckles don't drag on the floor at all. I was quite impressed.
Jim Perrin wrote:
This one is a toss-up. It depends on what module you get for them. IBM seems to push the HS21s and accompanying chassis with the Nortel layer 2/3 switching module; however, there are others you can get. We've ordered a patch-panel copper passthru module so that we can put our Cisco 4506 to good use. Basically each blade gets its own line to our patch panel, and from there to our switch.
I think I need some of those - *scribble* *scribble*.
This one surprises me. I guess it might depend on who your rep is and what your organization is. We've got 24x7 4-hour support with them, and they've never taken longer than 1.5 hours to show up. The tech (when we've requested more than just a replacement part) has always been reasonably intelligent, and his knuckles don't drag on the floor at all. I was quite impressed.
I've only dealt with the guy on the phone. He was reasonably intelligent, and his fix will probably work; it's just being a PITA to implement, and it still smells like hardware, but he's claiming it's Celeron microcode and I have to jump through a bunch of hoops to fix it. Hopefully next week I'll deal with it while I'm spending a few days of quality time building out a bunch of new stuff down there. I'm pretty sure his knuckles don't drag on the floor though; he wasn't just reading a screen. It did take him a week plus to call me back when I called an hour after he went home that night.
Peter
I have a bunch of IBM HS20 blades in two bladecenters, and for the most part, they are very decent commodity (sort of) machines. Pros:
14 of them in 6U space.
Dual processor (Intel), hardware raid-1 that pretty much just works, all the time.
Basic RH9-friendly (soon to be upgraded to CentOS) /proc information about all kinds of groovy things.
Management console *rocks* for remote diagnostics, bios upgrades, remote full lock reboots, and a host of other things.
Initial buy-in is highish, but a full bladecenter is pretty reasonable per-machine.
Seem to run 2.4.x (old RH9 stuff) decently, I imagine they'll run CentOS fairly well, too.
Shared CD-ROM drive for media insertion, that can be switched remotely or by hand.
Really clever power redundancy options (see below, too).
The HS20s do run CentOS 3 -and- 4 quite nicely. I've got a number of HS20s (sans RAID, just single internal SCSI drives), and everything just works out of the box. I've installed both x86_64 and i686 on them.
PDU? Mine are just plugged into the rack's existing 208V dual rails.
I have 3GB of RAM each in mine. Mine have dual 2.8GHz Xeon CPUs w/ 2MB cache. You can get a GigE copper 'passthrough' module that just gives you 14 discrete ethernet cables, one per port of each machine, instead of buying into their Cisco or Nortel switches... With hindsight, I kinda wish I'd gotten the internal switches; it would have saved a lot of cable mess. Those little tiny 'lame-o' 2.5" drives aren't laptop drives; they are 10,000 RPM U320 SCSI drives, the Seagate Savvio enterprise drives that all the server folks are switching to since they can get more drives in less space. I've got 2Gb Fibre Channel adapters in my blades, so they can talk to a SAN, and they can even boot off iSCSI if so configured (that I haven't tested yet).
You can also get dual Opteron blades, PowerPC blades that run AIX, and even a Cell blade for doing Cell-based scientific number crunching.
John R Pierce wrote:
The HS20s do run CentOS 3 -and- 4 quite nicely. I've got a number of HS20s (sans RAID, just single internal SCSI drives), and everything just works out of the box. I've installed both x86_64 and i686 on them.
PDU? Mine are just plugged into the rack's existing 208V dual rails.
I have 3GB of RAM each in mine. Mine have dual 2.8GHz Xeon CPUs w/ 2MB cache. You can get a GigE copper 'passthrough' module that just gives you 14 discrete ethernet cables, one per port of each machine, instead of buying into their Cisco or Nortel switches... With hindsight, I kinda wish I'd gotten the internal switches; it would have saved a lot of cable mess. Those little tiny 'lame-o' 2.5" drives aren't laptop drives; they are 10,000 RPM U320 SCSI drives, the Seagate Savvio enterprise drives that all the server folks are switching to since they can get more drives in less space. I've got 2Gb Fibre Channel adapters in my blades, so they can talk to a SAN, and they can even boot off iSCSI if so configured (that I haven't tested yet).
You can also get dual Opteron blades, PowerPC blades that run AIX, and even a Cell blade for doing Cell-based scientific number crunching.
I didn't actually spec and order what I'm playing with at the moment, but I'll look into it going forward.
And thanks to the guys who replied to this ever so OT thread, I just learned a bunch of things I didn't know about the stuff I have, including what accessories I want to get for them going down the road the next couple of months.
Peter
Jim Perrin wrote:
On 1/16/07, Karl R. Balsmeier karl@klxsystems.net wrote:
What's the best motherboard you ever ran CentOS on?
Depends on purpose. Desktop/workstation: the Iwill DP533 board has been very good to me. It's an older board, but it's been rock solid and very well supported.
Server: IBM x365s (and in the near future the IBM HS21s) are pretty much all I use at work anymore. With only a few minor gripes the ServeRAID cards are quite good, and with the exception of IBM's website sucking for locating the occasional driver disk, their server hardware hasn't let me down yet no matter what kind of load I throw at the system.
Does it have to be Intel/AMD?
IBM has a good reputation on Power and zSeries (not good for CPU-intensive work, but disk I/O is in gigabytes/sec).
A little Zed beside one's desk would be a fine ornament to any office :-)
If you want CPU, take a look at top500.org. Even if you don't have the dollars to buy some of that kit, it's good for serious drooling ;-)
IBM has a good reputation on Power and zSeries (not good for CPU-intensive work, but disk I/O is in gigabytes/sec).
The Power 4+ CPUs are quite powerful.
A 4-way 1.9GHz IBM pSeries outperformed a 4-way 2.4GHz HP Opteron running a specific computationally intense Oracle PL/SQL workload we benchmarked at work. The IBM was running AIX 5.3L, the Opteron RHEL 3 x86_64; both machines had at least 8GB of RAM (and weren't at all RAM constrained). (RHEL 4 on the same Opteron configuration was slower!) Both systems were using 3 separate 4-spindle RAID 10 sets on Fibre Channel for the Oracle tablespaces; the disk I/O rates were quite high, almost exclusively writes, and not even close to a constraining factor.
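For anyone repeating that kind of comparison, the quickest sanity check that disk really isn't the constraint is to watch the extended device stats and the CPU numbers during the run (assuming the sysstat package is installed; the 5-second interval is arbitrary):

iostat -dx 5    # per-device read/write rates and utilization
vmstat 5        # run queue and the user/system/idle CPU split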
On Thu, 2007-01-18 at 19:42 -0800, John R Pierce wrote:
IBM has a good reputation on Power and zSeries (not good for CPU-intensive work, but disk I/O is in gigabytes/sec).
The Power 4+ CPUs are quite powerful.
A 4-way 1.9GHz IBM pSeries outperformed a 4-way 2.4GHz HP Opteron running a specific computationally intense Oracle PL/SQL workload we benchmarked at work. The IBM was running AIX 5.3L, the Opteron RHEL 3 x86_64; both machines had at least 8GB of RAM (and weren't at all RAM constrained). (RHEL 4 on the same Opteron configuration was slower!) Both systems were using 3 separate 4-spindle RAID 10 sets on Fibre Channel for the Oracle tablespaces; the disk I/O rates were quite high, almost exclusively writes, and not even close to a constraining factor.
If anyone has one of these ppc64 blades available for donation to the CentOS team for our ppc builder machine, that would be most appreciated.
We would need that one blade to be under CentOS Project control (so we could tie down the security tightly ... make it not accessible to the world and only accessible to/from the builder network, etc). But it sure would be nice to get ppc/ppc64 up to speed as a major supported distro in C5 (and backwards into C4)... and a dedicated builder that we could use would certainly make that much easier.
This machine would ONLY be used for building RPMs/SRPMs, and the traffic associated with getting those files back and forth to the main builder location ... it would not be mirroring any files to other users, so the bandwidth usage would be fairly mild most of the time.
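For anyone wondering what that workload looks like, a dedicated builder mostly just chews through source packages. A minimal sketch with plain rpmbuild, run on the ppc box itself (the actual CentOS build tooling is more involved than this, and the package name is made up):

rpmbuild --rebuild --target ppc64 example-1.0-1.src.rpm
# resulting binary RPMs end up under RPMS/ppc64/ in the build tree
# (/usr/src/redhat/RPMS/ppc64 on the default layout of that era)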
Donations anyone :P
Thanks, Johnny Hughes
Johnny Hughes wrote:
If anyone has one of these ppc64 blades available for donation to the CentOS team for our ppc builder machine, that would be most appreciated.
Well, or at least a partition on such a machine.
Ralph
If anyone has one of these ppc64 blades available for donation to the CentOS team for our ppc builder machine, that would be most appreciated.
The only Power I have access to now is an IBM pSeries 510Q, a 4 x 1.5GHz Power5+ AIX server, which is on our intranet and in no way accessible from outside. Sadly, it's sitting 99.9% idle as the development project it was bought for is on indefinite hiatus, but I can't see any way, politically or physically, to make it accessible to outside parties for any such purpose. :(
Also, the Power5 stuff isn't quite the same as the PowerPC. Same user mode, same basic instruction architecture, but the -system- level stuff is quite a bit different as I understand it... Certainly, a pSeries server isn't a PC architecture machine.
John R Pierce wrote:
If anyone has one of these ppc64 blades available for donation to the CentOS team for our ppc builder machine, that would be most appreciated.
Also, the Power5 stuff isn't quite the same as the PowerPC. Same user mode, same basic instruction architecture, but the -system- level stuff is quite a bit different as I understand it... Certainly, a pSeries server isn't a PC architecture machine.
Still, there is a need for Linux on those machines - and that might be the reason that Red Hat and SuSE (and others) offer Linux for PPC64.
Ralph
Ralph Angenendt wrote:
Still, there is a need for Linux on those machines - and that might be the reason that Red Hat and SuSE (and others) offer Linux for PPC64.
The IBM Power4 and 5 are the main targets for the upstream provider's version for PPC. There is a need for it, particularly in HPC installations. Generally AIX is a better option on them than Linux, but it depends on what software you want to run on the boxes. The hardware is in many ways similar to many PC server parts... Adaptec SCSI HBAs, Broadcom ethernet, Mylex FC HBAs, DDR2 memory, PCI slots... but a main difference is the chipsets and the I/O bandwidth available.
Oh, and the hardware hypervisor. That is sweet. When you make a partition to run an OS in, you select the PCI slots you want available to it, the amount of memory, and the number of CPUs, with lots of config options... and you have virtual I/O options for network, disk and consoles. The hypervisor provides a virtual Ethernet switch that emulates 1Gbps Ethernet, but it runs at much higher speeds :) And all allocated resources can be dynamic... so you can add and remove PCI slots, memory, and CPUs on the fly. Nice machines :)
Sadly, I don't have a Power4 or 5 box on the net either, else I would be glad to donate a partition to CentOS.
Johnny Hughes wrote:
On Thu, 2007-01-18 at 19:42 -0800, John R Pierce wrote:
IBM has a good reputation on Power and zSeries (not good for CPU-intensive work, but disk I/O is in gigabytes/sec).
The Power 4+ CPUs are quite powerful.
A 4-way 1.9GHz IBM pSeries outperformed a 4-way 2.4GHz HP Opteron running a specific computationally intense Oracle PL/SQL workload we benchmarked at work. The IBM was running AIX 5.3L, the Opteron RHEL 3 x86_64; both machines had at least 8GB of RAM (and weren't at all RAM constrained). (RHEL 4 on the same Opteron configuration was slower!) Both systems were using 3 separate 4-spindle RAID 10 sets on Fibre Channel for the Oracle tablespaces; the disk I/O rates were quite high, almost exclusively writes, and not even close to a constraining factor.
If anyone has one of these ppc64 blades available for donation to the CentOS team for our ppc builder machine, that would be most appreciated.
Johnny, have you asked IBM for help here? IBM does provide free help to folks wanting zSeries for projects such as yours; you only need to ask.
My $0.02.
On Tuesday 16 January 2007 17:12, Karl R. Balsmeier wrote:
What's the best motherboard you ever ran CentOS on? Right now I run Tyan S2882UG3NR-D dual-core Opteron boards with SCSI, SATA, and GbE LAN.
I have half a dozen systems with very, very similar hardware. Under EXTREME loads (LA > 70) they STILL NEVER DIE. I love these things!
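(For anyone not already graphing it, that load average figure is just the kernel's own accounting and is trivial to sample during a stress run:)

cat /proc/loadavg   # 1, 5 and 15 minute load averages plus run-queue counts
uptime              # same numbers, plus how long the box has been up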
I have a vendor that consistently says they get complaints about Tyan boards, but out of the cluster, none of mine have ever died. The Dells die and get replaced. The remaining Suns seem to never die, even though I wish they would. Not that I'm a Tyan fan, but oddly enough this particular board works great.
Ditto. Just tried some quad-core Supermicro systems, and they've never been able to hold their own with a sustained load above 8 or so. And these are QUAD CORE systems...
I've noticed that a lot of the hardware advice and fixes Johnny gives just happen to coincide with vendors saying the exact opposite thing. They say go with ATI and Broadcom right when he's actually helping someone fix something related to one of those components. Sometimes on the same day.
Salesmen will say whatever it takes if it means a sale.
If we as engineers are to have any say in our industry, it's going to happen when we all talk outside the box of BS theory, FUD, over-analysis, and analysis for analysis' sake.
Right now Intel has things such that it's actually a little difficult to find a stock 2U production Linux system unless you break it down part by part and vet the whole thing. Just curious about your opinions and advice: is there a spec you follow that you like?
Why 2U? I'm *all* 1U.
Way back when, you'd either order a Supermicro-type system or get a VA Linux-type machine. What do you do now? If you happen to be trying to spec out a solid Linux server, I can say that the spec I arrived at handles over 100,000,000 page views a week; that's a third of CNET. It's all CentOS, the whole thing. Some of you may well have run across them, especially if you happen to read news on the web.
Commodity is the way to go. Get 40 servers for the price of one commercial vendor machine. CentOS is very real, my friends; let's talk hardware! Maybe we can help the CentOS project out by doing so.
I'm setting up a cluster of (minimally) 6 systems over the next year.
I have to say, I'm sticking with my own hardware choices, so please don't view this as someone trying to get a hardware spec for free. The intention here is to solidify our own base as CentOS users and sysadmins.
Tyan is the way to go!