[CentOS] Re: Hot swap CPU -- shared memory (1 NUMA/UPA) v. clustered (4 MCH)

Bryan J. Smith <b.j.smith@ieee.org>

thebs413 at earthlink.net
Fri Jul 8 22:43:39 UTC 2005


From: Bruno Delbono <bruno.s.delbono at mail.ac>
> I'm really sorry to start this thread again but I found something very 
> interesting I thought everyone should ^at least^ have a look at:
> http://uadmin.blogspot.com/2005/06/4-dual-xeon-vs-e4500.html
> This article takes into account a comparison of 4 dual xeon vs. e4500. 
> The author (not me!) talks about "A Shootout Between Sun E4500 and a 
> Linux Redhat3.0 AS Cluster Using Oracle10g [the cluster walks away limping]"

People can play with the numbers all they want.

Shared memory systems (especially NUMA), with their multi-GBps native
interconnects, are typically going to be faster than anything that is
GbE- or FC-interconnected.  At the same time, lower-cost, clustered
systems are competitive *IF* the application scales _linearly_.  In this
test scenario, the workload was geared towards operations that scale
_linearly_.

But even then, I think this blogger pointed out some very good data.

He points out the age of the systems being compared, as well as the
"less than commodity" cluster configuration of the PC servers.  The
cluster is also a "power hungry" setup, and very costly.

The reality is that a dual-2.8GHz Xeon MCH system does _not_ have
an interconnect capable of matching even _yesteryear_ UltraSPARC II
hardware, which actually had a _drastically_lower_ overall cost.  And
even then, for its age, price, etc., the "old" 4-way UltraSPARC II
NUMA/UPA was quite "competitive" against processors with 7x the
CPU clock, precisely because the latter, "newer" P4 MCH platform has a
_far_lower_ total aggregate interconnect throughput.  Had the test
included database benchmarks that were less linear and favored
shared memory systems, the results might have been very different.

Frankly, if they wanted to make the "Linux v. Solaris" game stick,
they should have taken a SunFire V40z and compared _directly_.
Or at least pitted a SunFire V40z running Linux against the same cluster,
as the costs are very close to each other.

So, in the end, I think this was a _poor_ test overall, because apples
and oranges are being compared.  The clustered setup has _better_
failover; the shared memory system has _better_ "raw interconnect."
And it wasn't fair to use an aged Sun box; a newer, "cost equivalent"
SPARC (III/IV?) should have been used -- especially given the costs.
It's very likely that someone was just "re-using" the only Sun box
they had, which is just a piss-poor show of journalism.  The memory
of the Sun box should also have been boosted to the same total amount,
to show off the power of a shared memory system with appropriate
benchmarks.

I mean, the shared memory system has an interconnect measured in
multi-GBps, while the cluster is well sub-0.1GBps for the GbE, and
sub-0.5GBps for the FC-AL, even on PCI-X 2.0.
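
As a rough back-of-the-envelope (all figures below are assumed
nominal link rates, not measurements from the article), in Python:

    # Rough, assumed nominal figures -- not benchmark results.
    gbe_raw_GBps  = 1.0 / 8             # GbE: 1 Gbit/s wire rate ~= 0.125 GB/s
    gbe_usable    = gbe_raw_GBps * 0.8  # assume ~20% layer-3/4 overhead ~= 0.1 GB/s
    fc_2gb_GBps   = 2.0 / 8             # 2Gb FC-AL ~= 0.25 GB/s per loop
    fc_4gb_GBps   = 4.0 / 8             # 4Gb FC    ~= 0.5  GB/s per link
    shared_mem_GBps = 2.4               # assumed UPA/Gigaplane-class figure, multi-GB/s

    print(shared_mem_GBps / gbe_usable)   # ~24x the usable GbE path
    print(shared_mem_GBps / fc_4gb_GBps)  # ~5x even a 4Gb FC link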

Furthermore, I would really like to see how the 4x P4 MCH platforms
would perform versus more of a NetApp setup, beyond just the
SunFire V40z (Opteron).  Or, better yet, a new Opteron platform with
HTX InfiniBand (InfiniBand directly on the HT).  That's _exactly_ what
Sun is moving to, and something Intel doesn't have.  Especially
considering that InfiniBand on PCI-X 2.0 is only capable of maybe
0.8GBps in an "ideal" setup, and 1GbE can't get anywhere near that
(let alone the layer-3/4 overhead!), while HTX InfiniBand has broken
1.8GBps!

At 1.8GBps of InfiniBand throughput, you're starting to blur the
difference between clustered and shared memory, especially
with the HyperTransport protocol -- and especially versus
traditional GbE.  In a cluster configuration, Intel can't even
compete at 40% of what Opteron HTX delivers.
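
To put those two figures side by side, a quick sketch (the 0.8GBps
and 1.8GBps numbers are the ones quoted above; the InfiniBand link
rates are nominal 4x SDR/DDR figures):

    # Assumed nominal figures, for illustration only.
    # IB 4x SDR: 10 Gbit/s signaling, 8b/10b encoding -> 8 Gbit/s of payload.
    ib_4x_sdr_GBps = 10 * (8 / 10) / 8   # = 1.0 GB/s theoretical per direction
    pcix2_ideal    = 0.8                 # the ~0.8 GB/s "ideal" PCI-X 2.0 figure above

    # IB 4x DDR with the HCA sitting directly on HyperTransport (HTX):
    ib_4x_ddr_GBps = 20 * (8 / 10) / 8   # = 2.0 GB/s theoretical per direction
    htx_quoted     = 1.8                 # the ~1.8 GB/s HTX figure above

    print(htx_quoted / pcix2_ideal)      # roughly 2.25x the PCI-X 2.0 path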

**SIDE DISCUSSION:  SATA and the evolution to SAS (bye SCSI!)

As far as SATA v. SCSI, they were using 10K SATA disks that basically
roll off the same lines as SCSI.  Using the intelligent FC host fabric,
the SATA drives not only queue just as well as a SCSI array, their
throughput is _higher_ because of the reduced overhead in the protocol.
Study after study has shown that if [S]ATA is paired with an intelligent
storage host, the host will queue and the drives will burst.  The
combination _roasts_ traditional SCSI -- hence why parallel SCSI is
quickly being replaced by Serial Attached SCSI (SAS), a multi-target,
serial, ATA-like interconnect, in new designs.

Anyone who still thinks SCSI is "faster" versus [S]ATA is lost.  Yes,
[S]ATA traditionally doesn't have queuing, and putting NCQ (Native
Command Queuing) on the Integrated Drive Electronics (IDE) is still not
the same as having an intelligent Host Adapter (HA), which SCSI always
has.  But when you use an intelligent Host Adapter (HA) for [S]ATA,
such as the ASIC in 3Ware and other products, the game totally
changes.  That's when SCSI's command set actually becomes a
latency liability versus ATA.  Against an ASIC design like 3Ware's,
and the newer Broadcom, Intel and other solutions, SCSI can
_not_ compete.  Hence, again, the evolution to SAS.
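
A toy model of that argument (every parameter below is invented
purely for illustration): once the host adapter does the queuing and
reordering, the seek time amortizes across the queue, and the
per-command protocol overhead is what's left to separate the two
command sets:

    # Toy model, invented parameters -- illustration only, not measurements.
    def toy_throughput_mb_s(io_kb, seek_ms, cmd_overhead_ms, queue_depth):
        """Crude estimate when an intelligent host adapter keeps queue_depth
        commands in flight: reordering hides most of the seek time, but the
        per-command protocol overhead is paid on every I/O."""
        effective_seek_ms = seek_ms / queue_depth
        ms_per_io = effective_seek_ms + cmd_overhead_ms
        return io_kb / ms_per_io / 1.024        # KB/ms -> MB/s (approx)

    # Same drive mechanics, different command-set overhead (both assumed):
    heavier_cmd_set = toy_throughput_mb_s(64, 5.0, 0.4, 32)   # "SCSI-like"
    lighter_cmd_set = toy_throughput_mb_s(64, 5.0, 0.1, 32)   # "ATA-like"
    print(round(heavier_cmd_set), round(lighter_cmd_set))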

Parallel is dead, because it's much better to have a number of
point-to-point devices, each with direct pins through a PHY to a wide
ASIC, than a wide bus that is shared by all devices.

And these 10K RPM SATA models are rolling off the _exact_same_
fab lines as their SCSI equivalents, with the same vibration specs
and MTBF numbers.  They are not "commodity" [S]ATA drives -- in fact,
many 7,200rpm SCSI drives now share those commodity lines (and
share the same 0.4Mhr MTBF as commodity [S]ATA).


--
Bryan J. Smith   mailto:b.j.smith at ieee.org



