From: Peter Arremann <loony@loonybin.org>
Then the first AnandTech benchmark article (http://www.anandtech.com/IT/showdoc.aspx?i=2447) is exactly what you want to look at. Huge amount of memory (compared to the size of the database running on the system) on a 64-bit Linux kernel... We're doing the same for one of our apps called IPM. It's a PHP app running against a quad Opteron with 16GB RAM. Heavy on network IO (during business hours it's rare that we don't saturate the main 100mbit link) but little disk activity. DB size is about 2.5GB and we end up with a couple of gig for disk buffers. CentOS 4 of course... anything specific you're looking for?
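A quick way to see how much of a box's RAM the kernel is holding as disk cache is to read /proc/meminfo. A minimal sketch in Python (the field names are the standard Linux ones; the output format is just illustrative):

    # Report how much RAM the kernel is using for buffers and page cache.
    # /proc/meminfo values are in kB.
    def meminfo():
        info = {}
        for line in open('/proc/meminfo'):
            parts = line.split(':')
            if len(parts) == 2:
                info[parts[0].strip()] = int(parts[1].split()[0])
        return info

    mi = meminfo()
    cache_mb = (mi.get('Cached', 0) + mi.get('Buffers', 0)) / 1024.0
    print("MemTotal:   %6.0f MiB" % (mi['MemTotal'] / 1024.0))
    print("Disk cache: %6.0f MiB (Cached + Buffers)" % cache_mb)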
I think I know where you and I are differing.
When you talk about "heavy [network] IO," you refer to SQL-based applications over a primary 100mbit link. In reality, the MCH bottleneck isn't much of an issue here.
When I talk about "heavy [network] IO," I'm typically referring to less intelligent applications (e.g., NFS or other "raw block" transfer) over one or even multiple GbE or FC-AL links (possibly direct IP, link-aggregated IP, maybe an out-of-band channel, or possibly to a Storage Area Network, SAN). That said, I have also built financial transaction and engineering systems that required GbE.
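To put rough numbers on that difference -- a back-of-envelope sketch in Python (nominal peak rates; the link mix is just illustrative):

    # Nominal wire rates vs. what the host interconnect has to absorb.
    # Peak figures only; real throughput is lower after protocol overhead.
    links = [
        ('100mbit Ethernet',    100e6),
        ('1x GbE',              1e9),
        ('4x GbE (aggregated)', 4e9),
        ('2Gbit FC-AL',         2e9),
    ]
    for name, bits_per_sec in links:
        print("%-22s ~%6.1f MB/s" % (name, bits_per_sec / 8 / 1e6))

At ~12.5 MB/s, a saturated 100mbit link is noise to any modern memory controller; at several hundred MB/s of aggregated GbE plus FC traffic, every byte crosses the memory controller at least twice (DMA in, copy out), which is where the interconnect design starts to matter.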
I guess this is where my terminology really differs. A lot of people are using Linux for Internet services. I've typically been using Linux for high-performance LAN systems -- both "raw block" as well as intelligent applications. My Internet connection is not my bottleneck.
BTW, despite popular thought, this can be done quite inexpensively when needed. It really all depends. But in these applications, you're definitely _not_ going to see Opteron doing much for you over Xeon -- especially not when running Linux or Windows (Solaris might be another story, though).
-- Bryan
P.S. Just a follow-up: _never_ assume you're the only EE in a thread. [ Let alone don't assume I haven't designed memory controllers as part of my job, beyond just my degree option in computer architecture. That's why I had to "give up" in the other thread: every time I try to explain something, you take a very simplified path and I have to stop and explain it (e.g., the fact that IA-32/x86 _can_ address beyond 4GiB). ]
-- Bryan J. Smith <b.j.smith@ieee.org>
On Saturday 25 June 2005 17:46, Bryan J. Smith <b.j.smith@ieee.org> wrote:
> I think I know where you and I are differing.
> When you talk about "heavy [network] IO," you refer to SQL-based applications over a primary 100mbit link. In reality, the MCH bottleneck isn't much of an issue here.
Heavy network IO for a webserver is a 100mbit link. Heavy network IO for a database server is a trunked gigabit link or something proprietary like Firelink. Heavy disk IO is 4 to 8 fibre links to an EMC or NetApp box. I was simply using IPM as an example because it runs on exactly the kind of hardware we've been talking about: dual/quad Opterons... a v40z, to be exact.
> When I talk about "heavy [network] IO," I'm typically referring to less intelligent applications (e.g., NFS or other "raw block" transfer) over one or even multiple GbE or FC-AL links (possibly direct IP, link-aggregated IP, maybe an out-of-band channel, or possibly to a Storage Area Network, SAN). That said, I have also built financial transaction and engineering systems that required GbE.
And part of my job deals with Sun/HP/IBM frames -- things like the E10K, F15K, Superdome, p690 and so on :-) My team and I do the whole thing, from estimating hardware requirements to setup, disk provisioning, IO designs, and OS patching...
> I guess this is where my terminology really differs. A lot of people are using Linux for Internet services. I've typically been using Linux for high-performance LAN systems -- both "raw block" as well as intelligent applications. My Internet connection is not my bottleneck.
:-) Linux is mostly for our development desktops and to play with...
> BTW, despite popular thought, this can be done quite inexpensively when needed. It really all depends. But in these applications, you're definitely _not_ going to see Opteron doing much for you over Xeon -- especially not when running Linux or Windows (Solaris might be another story, though).
Actually, apps like the one I was referring to showed about a 50% single-thread performance gain when going from a 2.4GHz Xeon to a 2.2GHz Opteron. Full load can't be compared since the memory configs were so vastly different.
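Taking that 50% figure at face value, the per-clock gap is even larger. A quick sanity check in Python (numbers from the paragraph above):

    # A ~50% wall-clock gain on a slower clock implies an even larger
    # per-clock advantage.
    xeon_ghz, opteron_ghz = 2.4, 2.2
    wallclock_speedup = 1.50
    per_clock = wallclock_speedup * xeon_ghz / opteron_ghz
    print("Per-clock advantage: ~%.2fx" % per_clock)   # ~1.64x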
> -- Bryan
> P.S. Just a follow-up: _never_ assume you're the only EE in a thread. [ Let alone don't assume I haven't designed memory controllers as part of my job, beyond just my degree option in computer architecture. That's why I had to "give up" in the other thread: every time I try to explain something, you take a very simplified path and I have to stop and explain it (e.g., the fact that IA-32/x86 _can_ address beyond 4GiB). ]
Never assume you've done more than others, either :-) I've done the more difficult job of finding all the applicable documents, while you just put out hearsay and "doesn't work like that" statements without really backing them up. And yes, IA-32 _can_ address more than 4GB -- it's called PAE.
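For reference, PAE widens IA-32 physical addressing from 32 to 36 bits (4GiB to 64GiB of physical memory), though each process still sees only a 32-bit virtual address space. A minimal sketch to check for it (the 'pae' flag in /proc/cpuinfo is standard on Linux):

    # Does this CPU advertise PAE, and how big is the PAE physical
    # address space?
    flags = []
    for line in open('/proc/cpuinfo'):
        if line.startswith('flags'):
            flags = line.split(':', 1)[1].split()
            break
    print("PAE supported: %s" % ('pae' in flags))
    print("PAE physical address space: %d GiB" % (2**36 // 2**30))   # 64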
For now, this is really the last email I'm going to send about this, and in further threads I'll try not to waste my time arguing with someone who can't come up with a single shred of proof for their own theories while claiming things that go against everything, including the manufacturers' docs for the equipment they talk about.
Glad to say that after all these differences in details, in the end we agree on so many things, like 3ware controllers or bad mainboard designs with DIMM slots on only one CPU... :-)
Peter.
On Sat, 2005-06-25 at 18:09 -0400, Peter Arremann wrote:
> Heavy network IO for a webserver is a 100mbit link. Heavy network IO for a database server is a trunked gigabit link or something proprietary like Firelink. Heavy disk IO is 4 to 8 fibre links to an EMC or NetApp box. I was simply using IPM as an example because it runs on exactly the kind of hardware we've been talking about: dual/quad Opterons... a v40z, to be exact.
I was just saying that "heavy [network] IO" -- in the sense of optimally leveraging Opteron NUMA/HyperTransport (or most RISC/UNIX platforms) versus your typical, desktop-PC-focused (just wider) Xeon MCH interconnect -- is rarely a factor in web apps.
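A quick way to check whether a kernel is actually treating a box as NUMA is to look under sysfs. A sketch, assuming the standard node layout of reasonably recent Linux kernels:

    # Dump the NUMA topology the kernel sees: on an Opteron box, one
    # node per CPU with its locally attached memory.
    import glob, os
    for node in sorted(glob.glob('/sys/devices/system/node/node*')):
        cpus = open(os.path.join(node, 'cpulist')).read().strip()
        print("%s: cpus %s" % (os.path.basename(node), cpus))

If allocations and interrupts for a GbE-heavy workload land on the wrong node, every access pays an extra HyperTransport hop -- exactly the class of tuning that never comes up on a 100mbit web box.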
Case-in-point: I've seen far too many consultants used to designing web servers install a piss-poor network fileserver or LAN application server. And when I've challenged them or their clients on this, they have absolutely no idea what I'm talking about. It's only when I put my money where my mouth is -- with a minimal investment and a minor system change -- that they realize what I mean.