From: John Hinton webmaster@ew3d.com
Yup, it's a shame to waste expensive horsepower if you don't need it, and most web stuff doesn't use a lot of horsepower. Buy a bit above your needs. Quality over horsepower has been my philosophy as well.
Actually, it's not about waste IMHO. It's about believing a newer processor design means the interconnect is better.
A dual-P3 on a ServerSet IIIHE-SL will slap a P4-Celeron silly when it comes to server I/O. Especially when you throw a GbE on one PCI channel, and your SATA RAID on another.
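Just to put rough numbers on it (these are the textbook theoretical bus bandwidths, so real-world sustained figures will be lower; a back-of-the-envelope sketch, not a benchmark):

    pci_32_33 = 133e6     # 32-bit/33MHz PCI: ~133MB/s shared by EVERYTHING on the bus
    pci_64_66 = 533e6     # 64-bit/66MHz PCI: ~533MB/s, per peer bus on the ServerSet
    gbe       = 125e6     # GbE at wire speed: ~125MB/s
    print((pci_32_33 - gbe) / 1e6)   # ~8   -- MB/s left for the RAID card on shared PCI
    print((pci_64_66 - gbe) / 1e6)   # ~408 -- and the RAID card sits on its own peer bus anyway

That's why the "old" dual-P3 platform wins on I/O even though the P4 core is newer.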
I would advise going to the DL380 series. These use only Ultra2 or Ultra3 SCSI drives, not the mix of Ultra and Wide-Ultra.
Actually, it wasn't until recently that most 36, 73 and 146GB SCSI drives started breaking 40MBps sustained. Ultra2/3 (aka Ultra80/160) offer LVD (low-voltage differential) for longer bus length and better signal integrity. But yes, at today's disk speeds, Ultra2 (80) is the minimum. And you typically don't want more than 2-3 drives per channel.
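The 2-3 drives per channel guideline is just arithmetic on the channel (the 50MB/s figure is a ballpark sustained rate for a current 10k/15k drive, not a spec-sheet burst number):

    bus_u160   = 160.0   # MB/s -- Ultra160 channel
    bus_u320   = 320.0   # MB/s -- Ultra320 channel
    drive_rate = 50.0    # MB/s -- roughly what a current 10k/15k drive sustains
    print(bus_u160 / drive_rate)   # ~3 drives and the channel is the bottleneck
    print(bus_u320 / drive_rate)   # ~6 in theory, less once you add command overhead

Past that point, adding spindles to the same channel buys you capacity but not speed.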
Although Ultra640 is planned, Serial Attached SCSI (SAS) is going to quickly kill it. A parallel storage bus is yesteryear. Until then, a SATA controller with queuing and "enterprise" SATA drives is the best of all worlds. The great thing is that SAS can use SATA drives too (so you can recycle your "enterprise" SATA drive investments).
A very nice speed improvement. Also, when buying, watch the drive quantity, size and speed (try to avoid the 7200rpm units and get the 10 or 15k ones).
Because many current 7200rpm SCSI drives come off the same line as ATA and SATA drives. Interface is not an indicator of reliability; the actual mechanics are. A good indicator is the vibration and other tolerances in the spec sheet. "Enterprise" drives (typically the 9, 18, 36, 73 and 146GB capacities) vibrate 3-8x less than "commodity" drives (typically the 40, 80, 120, 160, 200, 250, 300, 320 and 400GB capacities).
E.g., WD Raptor 73 and 146GB 10,000rpm SATA drives come off the same line as Hitachi 10,000rpm U320 SCSI drives.
I think that was the ending point for the 1850s. The DL380s are almost the same unit, with a lot of parts being interchangeable. The 380s are just more modern in terms of processors, circuitry, RAID, drives... and pretty much picked up, I think, at the 600MHz point and have grown from there. The dual 866s and up to about the 1.2 giggers are very reasonable on eBay.
From what I was reading, the 1850 and similar-era machines use 440 and 450 series chipsets. The 440 is _crap_. The 450NX is so-so, but bridges on extra PCI channels (for either 2 or 5 total). It still doesn't compare to a ServerWorks ServerSet III series.
A T-1 is still only about 1/6th the bandwidth of a 10BASE-T Ethernet card... and how much power does it take to deliver products at one sixth of a 10BASE-T card? Yeah, I know... it's more complex than that. Database apps can eat up a lot, and email/spam systems can eat up HUGE amounts of processing power. But if you're mostly delivering web pages, it just doesn't take much.
I guess I've been too used to having a GbE saturated. ;->
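(For the record, John's 1/6th figure works out like this -- plain line rates, ignoring protocol overhead:)

    t1_bps       = 1.544e6   # T-1 line rate
    tenbaset_bps = 10e6      # 10BASE-T
    gbe_bps      = 1e9       # GbE
    print(tenbaset_bps / t1_bps)   # ~6.5, so a T-1 really is ~1/6th of 10BASE-T
    print(gbe_bps / t1_bps)        # ~648 -- the GbE I'm used to saturating is
                                   #         roughly 650 T-1s worth of traffic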
-- Bryan J. Smith mailto:b.j.smith@ieee.org
Bryan J. Smith b.j.smith@ieee.org wrote:
From: John Hinton webmaster@ew3d.com
Yup, it's a shame to waste expensive horsepower if you don't need it, and most web stuff doesn't use a lot of horsepower. Buy a bit above your needs. Quality over horsepower has been my philosophy as well.
Actually, it's not about waste IMHO. It's about believing a newer processor design means the interconnect is better.
A dual-P3 on a ServerSet IIIHE-SL will slap a P4-Celeron silly when it comes to server I/O. Especially when you throw a GbE on one PCI channel, and your SATA RAID on another.
I mean, spending 5 to 40 grand on a server that will get a 5% load is just a waste of money. It'll sit there and devalue at 50% per year, and unless you are growing like crazy, you'll never cross that magical horsepower-vs.-money line.
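(Quick math on that 50% figure, assuming the gear simply halves in value each year, which is about what used server kit does:)

    price = 40000.0
    for year in (1, 2, 3):
        price *= 0.5
        print("year %d: $%d" % (year, price))   # $20000, $10000, $5000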
A T-1 is still only about 1/6th the bandwidth of a 10BASE-T Ethernet card... and how much power does it take to deliver products at one sixth of a 10BASE-T card? Yeah, I know... it's more complex than that. Database apps can eat up a lot, and email/spam systems can eat up HUGE amounts of processing power. But if you're mostly delivering web pages, it just doesn't take much.
I guess I've been too used to having a GbE saturated. ;->
Yeah, well, you keep throwing those bit-eaters up and I'll sit back and pick them up in a year or two and make lowly web servers out of 'em. ;)
I guess we all forget the other man's situation. Yeah, some networks have huge throughput... and then others are just struggling for more bandwidth for less money... the ones who ultimately hook into a phone system somewhere... those of us stuck in the slow world of the internet, mired in the old-school ways of the huge telcos.
Best, John Hinton
On Mon, 2005-07-18 at 19:34 -0400, John Hinton wrote:
I mean, spending 5 to 40 grand on a server that will get a 5% load is just a waste of money.
Don't confuse CPU load with interconnect and I/O load. I/O load is very difficult to measure in Linux, and interconnect load is virtually impossible. You can literally have less than 2% CPU load while your I/O cards are not only starving for bandwidth, but contending for it (which is even worse).
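About the closest you get from userspace is sampling the kernel's own block-layer counters and inferring; a quick sketch against a 2.6 kernel's /proc/diskstats (device name hard-coded purely for illustration):

    import time

    def sectors_moved(dev="sda"):
        # /proc/diskstats fields: major minor name  reads reads_merged sectors_read
        #                         ms_reading  writes writes_merged sectors_written ...
        for line in open("/proc/diskstats"):
            f = line.split()
            if f[2] == dev:
                return int(f[5]) + int(f[9])   # sectors read + sectors written
        return 0

    before = sectors_moved()
    time.sleep(5)
    after = sectors_moved()
    # sectors are 512 bytes; this gives raw disk throughput, but it still tells
    # you nothing about what the PCI segment behind the controller is doing
    print("%.1f MB/s on sda" % ((after - before) * 512 / 5.0 / 1e6))

(iostat -x from the sysstat package reads essentially the same counters and does the math for you.) And none of that shows interconnect contention, which is exactly the problem.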
I guess we all forget the other man's situation. Yeah, some networks have huge throughput... and then others are just struggling for more bandwidth for less money...
My point is that 9 times out of 10, I can design a higher-performing set of servers and network for a _lower_cost_ than what people think of as a modest server at a much higher price.
I don't know how many times I've walked into a company and found a $200-300 server mainboard paired with $15,000 of storage and other hardware. Not only could I give them 3x the performance with a $700 mainboard, I probably could have architected a storage solution that is faster, more reliable, and uses far less power, for less than 1/3rd the price.