From: Kirk Bocek t004@kbocek.com
I've been planning to build a dual Opteron server for a while. I'd like to get people's suggestions on a suitable motherboard.
I assume you read 2004 November Sys Admin on the Opteron, including avoiding cheap mainboard designs? http://www.samag.com/documents/sam0411b/
Specifically figure 6 for a 2-way mainboard: http://www.samag.com/documents/s=9408/sam0411b/0411b_f6.htm
I've looked at the Tyan K8SE (S2892) and K8SRE (S2891) but would like to find more Linux-specific experiences with these boards.
On 2-way, you want one with an AMD8131 (or AMD8132) HyperTransport PCI-X 1.0 (2.0) tunnel.
On 4-way, you want one with at least 2 such tunnels.
I assume you want 2-way. The S2891 is a nice, 2-way board for 1/2U. The S2892 is nice if you want more of a traditional SSI EEB mainboard. They are true NUMA and don't cut corners on the CPU and memory.
_But_, unfortunately _both_ mainboards put _all_ I/O on CPU #1. That's not ideal.
As such, you might be better off going with the S2895, which puts the nForce Pro 2050 and 1 GbE NIC on CPU #2, at least if you have PCIe storage or NIC cards. It looks more like a workstation board, but the fact that 1 embedded GbE NIC _and_ 1 PCIe slot is on CPU #2 helps improve processor affinity for I/O (especially under Solaris, but even Linux to an extent as well).
Tyan continues to disappoint me by putting _all_ I/O on 1 CPU. The only option where they don't is the nForce Pro 2200+2050 combination on the S2895 where the 2050 goes on CPU #2. Tyan should put the AMD8131 on CPU #2 on the 2891/2892 so it doesn't contend with the nForce 2200 on CPU #1 and its slots/peripherals (and there is no 2050).
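You can see the effect of this board layout from Linux: a quick sketch for checking which CPU is servicing the interrupt load (on the S2891/S2892 you would expect nearly all device interrupt counts to pile up under CPU0's column; the IRQ number below is hypothetical):

```shell
# Per-CPU interrupt counts -- one column per CPU, one row per IRQ/device
head -20 /proc/interrupts
# As root, an IRQ can be pinned to CPU1 with a hex bitmask (02 = CPU1), e.g.:
# echo 02 > /proc/irq/24/smp_affinity    # IRQ 24 is a made-up example
```

Pinning the IRQ only moves where the handler runs; on these boards the DMA traffic still has to cross the HyperTransport link to CPU #1's tunnel regardless.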
Some features I expect are at least 4 SATA (SATA-300?) ports,
??? Why "dumb" SATA channels ???
Why not a 3Ware Escalade 8506-4+ or a LSI Logic MegaRAID 300-8X?
serial console support in the BIOS,
Any PhoenixBIOS will typically give you that.
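The BIOS redirection only covers POST and setup; to keep the console on serial after boot you also need matching entries on the Linux side. A sketch, assuming COM1/ttyS0 at 115200 (match the speed to whatever the BIOS is set to; kernel version and root device are placeholders):

```
# /boot/grub/grub.conf -- route GRUB's menu to the serial port
serial --unit=0 --speed=115200
terminal --timeout=5 serial console

# kernel line: send console output to both VGA and serial
kernel /vmlinuz-2.6.9 ro root=/dev/sda1 console=tty0 console=ttyS0,115200

# /etc/inittab -- a login prompt on the serial port
S0:2345:respawn:/sbin/agetty ttyS0 115200 vt100
```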
USB 2.0 and IEEE-1394 ports,
On a server? The last thing you want is any storage over those two. I've been ripping out a lot of external storage solutions that use USB 2.0 or IEEE-1394 because they put server stability at risk.
low-end on-board video.
Most low-footprint mainboards have video on-mainboard these days. Typically just glued to a PCI bus.
Anyone care to share their experiences?
Just curious why you want "dumb" SATA channels, let alone external storage over USB or FireWire on a _server_.
-- Bryan J. Smith mailto:b.j.smith@ieee.org
Thanks for the detailed reply, Bryan.
Bryan J. Smith b.j.smith@ieee.org wrote:
I assume you read 2004 November Sys Admin on the Opteron, including avoiding cheap mainboard designs? http://www.samag.com/documents/sam0411b/
Yea, but who *is* this guy? :)
I assume you want 2-way. The S2891 is a nice, 2-way board for 1/2U. The S2892 is nice if you want more of a traditional SSI EEB mainboard. They are true NUMA and don't cut corners on the CPU and memory.
Don't really need physical 4-way. I'm planning on 2-way dual core. 1/2U doesn't matter, it'll be a tower case. I mostly want the 'server' features like dual NICs and on-board video.
??? Why "dumb" SATA channels ???
Why not a 3Ware Escalade 8506-4+ or a LSI Logic MegaRAID 300-8X?
I mostly just want to see how two or three striped SATA drives will perform. If the on-board ports don't perform, I can always add a separate controller later. My understanding is that NCQ support isn't here yet for Linux but that it should provide a boost.
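For what it's worth, striping plain SATA drives that way is usually done with Linux software RAID (md) rather than relying on any BIOS fakeraid. A destructive sketch with hypothetical device names (adjust for your controller; this wipes both drives):

```
# Stripe two plain SATA drives into one md RAID-0 device
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
mkfs.ext3 /dev/md0        # era-appropriate filesystem
hdparm -t /dev/md0        # rough sequential-read figure for comparison
```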
serial console support in the BIOS,
Any PhoenixBIOS will typically give you that.
I did not know that. I've seen too many 'workstation' mobos without serial console support to make that assumption.
USB 2.0 and IEEE-1394 ports,
On a server? The last thing you want is any storage over those two. I've been ripping out a lot of external storage solutions that use USB 2.0 or IEEE-1394 because they put server stability at risk.
Not for primary storage. Mostly just future-proofing here. I've been meaning to set up a backup system using external hard drives instead of tape. That wouldn't fly over USB 1.1.
*******************
So, Bryan, having given me all that good information (thanks again), what specific models do you suggest? What are *you* running on?
Kirk Bocek
On Wednesday 22 June 2005 13:29, Bryan J. Smith b.j.smith@ieee.org wrote:
Tyan continues to disappoint me by putting _all_ I/O on 1 CPU. The only option where they don't is the nForce Pro 2200+2050 combination on the S2895 where the 2050 goes on CPU #2. Tyan should put the AMD8131 on CPU #2 on the 2891/2892 so it doesn't contend with the nForce 2200 on CPU #1 and its slots/peripherals (and there is no 2050).
Bryan,
there are sometimes more important things than max performance... upgradability, for instance. If you buy a complete server, that's one thing, but if you're offering parts and people build their own, they often do consider upgradability. If you buy a server that right now runs fine with 1 CPU but you want to add a second one later, then the Tyan design is for you. This way you can still run all your I/O and not have to pay extra for the second CPU up front. The Tyan website even mentions this somewhere as the reason for the design decision.
Peter.
Peter Arremann wrote:
On Wednesday 22 June 2005 13:29, Bryan J. Smith b.j.smith@ieee.org wrote:
Tyan continues to disappoint me by putting _all_ I/O on 1 CPU. The only option where they don't is the nForce Pro 2200+2050 combination on the S2895 where the 2050 goes on CPU #2. Tyan should put the AMD8131 on CPU #2 on the 2891/2892 so it doesn't contend with the nForce 2200 on CPU #1 and its slots/peripherals (and there is no 2050).
Bryan,
there are sometimes more important things than max performance... upgradability, for instance. If you buy a complete server, that's one thing, but if you're offering parts and people build their own, they often do consider upgradability. If you buy a server that right now runs fine with 1 CPU but you want to add a second one later, then the Tyan design is for you. This way you can still run all your I/O and not have to pay extra for the second CPU up front. The Tyan website even mentions this somewhere as the reason for the design decision.
This discussion is getting far afield from CentOS (heh, I should have known when I saw whose post you were following up). 8-)
If you have a task that's I/O bound, then perhaps Bryan's concerns will impact your decision on a motherboard. For my purposes, I'm mostly CPU bound, so it really doesn't mean a hell of a lot to me that one CPU is "stuck" with mundane I/O tasks. And as CPUs get faster and faster, this becomes less of an issue (if it's really an issue at all for most tasks). Also, as Bryan mentions, the S2895 splits the I/O up a bit better, and the newer revs of that board support dual-core Opterons too. So if you're in an I/O-heavy environment you could choose that board rather than the S2882.
Best regards,
C
On 6/22/05, Bryan J. Smith b.j.smith@ieee.org thebs413@earthlink.net wrote:
??? Why "dumb" SATA channels ???
Why not a 3Ware Escalade 8506-4+ or a LSI Logic MegaRAID 300-8X?
Out of curiosity..... why a 3ware 8506-4 and not a 3ware 9500 4 port card?
JC
On Wed, 2005-06-22 at 23:22 -0400, Juan Carlos wrote:
Out of curiosity..... why a 3ware 8506-4 and not a 3ware 9500 4 port card?
A lot of people are reporting issues with them.
The 9500 series adds DRAM (to the existing ASIC+SRAM design of the 7500/8500 series). The firmware still seems to have a lot of maturing to do. I remember 3Ware played with RAID-5 for quite a while on the 6000 series and eventually had to develop a new ASIC (which then appeared in the 7000 series) before they got it right.
At least a lot of people are still reporting issues, so until I hear otherwise, I've been avoiding deploying the 9500. I have only one in use, which seems to be okay for its light duties.
Besides, the 8506-4 at RAID-10 will give you about the same performance. You don't need any DRAM buffer for RAID-0, 1 or 10.
On Thursday 23 June 2005 00:21, Bryan J. Smith wrote:
On Wed, 2005-06-22 at 23:22 -0400, Juan Carlos wrote:
Out of curiosity..... why a 3ware 8506-4 and not a 3ware 9500 4 port card?
A lot of people are reporting issues with them.
The 9500 series adds DRAM (to the existing ASIC+SRAM design of the 7500/8500 series). The firmware still seems to have a lot of maturing to do. I remember 3Ware played with RAID-5 for quite a while on the 6000 series and eventually had to develop a new ASIC (which then appeared in the 7000 series) before they got it right.
At least a lot of people are still reporting issues, so until I hear otherwise, I've been avoiding deploying the 9500. I have only one in use, which seems to be okay for its light duties.
Besides, the 8506-4 at RAID-10 will give you about the same performance. You don't need any DRAM buffer for RAID-0, 1 or 10.
Actually, support for the 9500s has gotten quite a lot better over the past few months - you might want to give them another try. A colleague of mine mis-ordered a few of them when they first came out, and at the beginning the results were less than stellar. By now (FC4 and CentOS 4U1) the drivers are stable enough that the tests I ran could no longer crash the box.
Anyway, another reason to still go with the 8xxx series is that with the 9500 you'll run into driver issues if you need to run an older distribution. CentOS 4 should be fine, but if you only have the original CentOS 3 ISOs around, you'll need to find something newer.
Peter.