[CentOS] [OT] Memory Models and Multi/Virtual-Cores -- WAS: 4.0 -> 4.1 update failing

Sat Jun 25 22:09:31 UTC 2005
Peter Arremann <loony at loonybin.org>

On Saturday 25 June 2005 17:46, Bryan J. Smith <b.j.smith at ieee.org> wrote:
> I think I know where you and I are differing.
>
> When you talk about "heavy [network] IO," you refer to SQL-based
> applications over a primary 100mbit link.  In reality, the MCH bottleneck
> isn't much of an issue here.
Heavy network I/O for a webserver is a 100 Mbit link. 
Heavy network I/O for a database server is a trunked gigabit link or something 
proprietary like Sun's Fire Link. 
Heavy disk I/O is 4 to 8 fibre links to an EMC or NetApp box. 
I was simply using IPM as an example because it runs on exactly the kind of 
hardware we've been talking about - dual/quad Opterons... Sun Fire V40z to be exact. 
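To put rough numbers on those tiers (a sketch only - the link speeds are assumptions; I'm guessing 2 Gbit FC links and a 4-port GbE trunk, since the thread doesn't say):

```python
# Rough aggregate bandwidth for the three I/O tiers above.
# Assumed: 4-port trunked GbE for the DB tier, 8x 2 Gbit/s FC for disk.
MBIT = 1_000_000  # bits per second

tiers = {
    "webserver (100 Mbit)":       100 * MBIT,
    "database (4x GbE trunk)":    4 * 1000 * MBIT,
    "disk (8x 2 Gbit FC)":        8 * 2000 * MBIT,
}

for name, bps in tiers.items():
    # convert bits/s to MiB/s
    print(f"{name}: {bps / (8 * 1024**2):.0f} MiB/s")
```

The point of the comparison: the disk tier is two orders of magnitude past the webserver case, which is why "heavy I/O" means such different things in the two conversations.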

> When I talk about "heavy [network] IO," I'm typically referring to less
> intelligent applications (e.g., NFS or other "raw block" transfer) over
> one or even multiple GbE, possibly FC-AL links (possibly direct IP,
> link aggregated IP, maybe 1 out-of-band channel, or possibly to a
> Storage Area Network, SAN).  Although I have built financial
> transaction systems that have required GbE as well as engineering.
And part of my job deals with Sun/HP/IBM frames - things like the E10K, F15K, 
Superdome, p690 and so on :-) My team and I do the whole thing, from 
estimating hardware requirements to setup, disk provisioning, and I/O design, 
to OS patching... 

> I guess this is where my terminology really differs.  A lot of people
> are using Linux for Internet services.  I've typically been using Linux
> for high-performance LAN systems -- both "raw block" as well as
> intelligent applications.  My Internet connection is not my bottleneck.
:-) Linux is mostly for our development desktops and to play with... 

> BTW, despite popular thought, this can be done quite inexpensively
> when needed.  It really all depends.  But in these applications,
> definitely _not_ going to see Opteron doing much for you over Xeon.
> Let alone not when running Linux or Windows (Solaris might be another
> story though).
Actually, apps like the one I was referring to showed about a 50% single-thread 
performance gain when going from a 2.4 GHz Xeon to a 2.2 GHz Opteron. Full-load 
numbers can't be compared since the memory configurations were so vastly different. 
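And note that a 50% gain at a lower clock implies an even bigger per-cycle advantage - simple arithmetic on the figures above:

```python
# Normalize the measured single-thread speedup per clock cycle.
xeon_ghz, opteron_ghz = 2.4, 2.2
speedup = 1.50  # ~50% faster wall-clock, as measured above

# Work done per cycle relative to the Xeon:
per_cycle_gain = speedup * (xeon_ghz / opteron_ghz)
print(f"per-cycle advantage: {per_cycle_gain:.2f}x")  # ~1.64x
```

In other words, clock for clock the Opteron got roughly 64% more done on that workload.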

> -- Bryan
>
> P.S.  Just a follow-up, _never_ assume you're the only EE in a thread.
> [ Let alone don't assume I haven't designed memory controllers as
> part of my job, beyond just my option in computer architecture.
> That's why I had to "give up" in the other thread, because every time
> I try to explain something, you are going to a very simplified path
> and I have to stop and explain it (e.g., the fact that IA-32/x86 _can_
> address beyond 4GiB). ]
Never assume you've done more than others either :-) I've done the more 
difficult job of finding all the applicable documents, while you just put 
out hearsay and "doesn't work like that" statements without really backing them 
up. And yes, IA-32 _can_ address more than 4 GB - it's called PAE.
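For anyone following along, the PAE numbers work out like this (PAE widens physical addresses from 32 to 36 bits on most IA-32 parts, while each process still sees a 32-bit virtual space):

```python
# Physical address space on IA-32, with and without PAE.
plain_ia32 = 2 ** 32  # 32-bit physical addresses: 4 GiB
pae        = 2 ** 36  # PAE: 36-bit physical addresses: 64 GiB

print(f"without PAE: {plain_ia32 // 1024**3} GiB")
print(f"with PAE:    {pae // 1024**3} GiB")
# Note: the per-process *virtual* address space stays 4 GiB either way;
# PAE only lets the OS map more physical RAM.
```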

For now, this is really the last email I'm going to send about this, and in 
further threads I'll try not to waste my time arguing with someone who can't 
come up with a single shred of proof for their own theories while claiming 
things that go against everything, including the manufacturers' docs for the 
equipment in question. 

Glad to say that after all these differences in the details, in the end we agree 
on so many things, like 3ware controllers or bad mainboard designs with DIMM 
slots on only one CPU... :-) 

Peter.