[CentOS] How to select a motherboard -- CPU architectures and chipsets

Thu Dec 15 15:48:29 UTC 2005
Bryan J. Smith <thebs413 at earthlink.net>

Feizhou <feizhou at graffiti.net> wrote:
> Well, the OP probably is stuck with a processor capable of
> frying eggs...which is not possible with your suggestion.
> My colleagues take their sweaters/warmers off due to the Dell
> cum heater box besides their feet in the office.

Socket-478/LGA-775 NetBurst architecture (Pentium 4) is the
absolute worst for heat generation, even at 90nm.  They've
brought it down some with the dual-core parts, but it's
still way too high.

The newer Socket-754/939/940 Athlon 64 3000+ to 3500+ and
Opteron "HE" (or 150+) parts generate only 31-55W of heat.
Dual-core versions are 70-110W.

The best is the newer, but little-known, Socket-479 Pentium
M, a descendant of the P6 (Pentium Pro/II/III) core, which
uses as little as 21W.  The 2.0-2.26GHz versions will
typically best all but the highest-clocked Pentium 4s.  Intel
even offers it with the PCIe/DDR2 i915 chipset, although
Socket-479 still carries a major mark-up (far less than it
did just a little while ago, though).

> an Intel chipset motherboard seems to be the safest bet.

Not always.  But for the most part, the new ICH7 peripherals
on the i9x5 are fairly well supported now.

Ironically enough, Intel does _not_ make good server
chipsets; they _never_ have.  The only Intel-branded server
chipsets that are worthy are the E7200/7500 series --
designed by ServerWorks (now owned by Broadcom).  The last
server chipset Intel designed itself was the 450NX -- well
over 5 years ago.

The ServerWorks ServerSet III series was a godsend back in
the P3/Xeon days.  The GrandChampion (GC) series for P4/Xeon
was also powerful until Intel came out with the E7500 series
based on it, and then the more entry-level E7200 series after
that.

The absolute best server "chips" (since AMD HyperTransport is
no longer a "fixed" chipset design) are the AMD8000 series --
especially the dual-channel AMD8131/8132 PCI-X 1.0/2.0
HyperTransport tunnels.  Paired with newer logic like the
nForce Pro series for PCIe and peripherals, the AMD8131/8132
makes for a very powerful workstation and/or server
combination for Opteron.

> Say no to VIA.
> (warning: biased opinion from a Chinese guy who has been
> burnt too many times by chipsets from said Taiwanese company
> in both consumer and server boards and therefore has not
> tried any of the latest chipsets from said company)

ViA is great for ViA C3/Eden platforms.  They typically lag
in features, so by the time the leading-edge desktop ViA
chipsets get peripheral support in Linux, they are adopted by
the low-power C3/Eden platforms.

So yes, for the desktop, ViA changes their peripheral logic
way too much.  That keeps the kernel developers busy adding
PCI IDs, tracking little variants in their ATA and other
logic, etc.  ViA also has _not_ switched to native
HyperTransport on AMD, and is still using its VLink PCI-based
interconnect.
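
As an aside: if you want to see exactly which PCI IDs a
board's peripheral logic exposes, and whether the running
kernel has bound a driver to each one, something like the
following works on any 2.6 kernel with sysfs mounted.  Just
a rough sketch, nothing chipset-specific:

  # every PCI function, with numeric vendor:device IDs
  lspci -n

  # or walk sysfs directly -- each device directory carries
  # its IDs, plus a "driver" symlink if something claimed it
  for dev in /sys/bus/pci/devices/*; do
      id="$(cat "$dev/vendor"):$(cat "$dev/device")"
      if [ -e "$dev/driver" ]; then
          drv=$(basename "$(readlink "$dev/driver")")
      else
          drv="(no driver bound)"
      fi
      echo "$(basename "$dev")  $id  driver: $drv"
  done

That's usually the quickest way to tell whether a "new"
revision of a chipset is really the same logic under a new
ID.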

But ViA has _never_ designed a server chipset either.

I was very impressed with nVidia's ability to keep PCI IDs
and other peripherals consistent from the nForce2/MCP-02
through the single-chip nForce4 (integrated MCP-04).
Unfortunately, that seems to have ended with the new
nForce4x0/GeForce61x0 (C51/NV44); it uses new PCI IDs and
other changes, so you need a recent kernel.

E.g., FC4's installer kernel 2.6.11 didn't cut it -- the
updated 2.6.14 did, however.  Although I have to hand it to
nVidia, they at least give you an installable driver set
that you can use on a minimal install.  I.e., I installed
FC4 with kernel 2.6.11, installed the nForce platform driver
with its "nvnet" module for the 10/100[/1000] NIC, ran yum
update, then switched back to the GPL "forcedeth" driver
after rebooting into kernel 2.6.14 (I did not have to
re-install the nForce platform drivers).
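
In shell terms, that dance looks roughly like this (the
nVidia installer file name is illustrative -- use whatever
package they currently ship for your board):

  # minimal FC4 install boots kernel 2.6.11; nVidia's nForce
  # platform package (name illustrative) provides the "nvnet"
  # module for the on-board NIC
  sh ./NFORCE-Linux-x86-*.run
  modprobe nvnet

  # with the NIC up, pull down the updated kernel
  # (2.6.14 at the time)
  yum update

  # after rebooting into the new kernel, drop nvnet and let
  # the in-kernel GPL driver take over
  modprobe -r nvnet
  modprobe forcedeth

  # to make it stick across reboots, FC4 reads
  # /etc/modprobe.conf:
  #   alias eth0 forcedeth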


-- 
Bryan J. Smith                | Sent from Yahoo Mail
mailto:b.j.smith at ieee.org     |  (please excuse any
http://thebs413.blogspot.com/ |   missing headers)