Simon Banton wrote:
> At 13:03 -0400 2/10/07, Ross S. W. Walker wrote:
>> Have you tried calculating the performance of your current drives on paper to see if it matches your "reality"? It may just be that your disks suck...
> They're performing to spec for 7200rpm SATA II drives - your help in determining which was the appropriate elevator to use showed that.
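[As a sanity check on the "on paper" calculation mentioned above: a rough random-I/O ceiling for a single 7200 rpm spindle can be estimated from average seek time plus rotational latency. A minimal sketch; the seek figure is a typical datasheet value, not one quoted in this thread:]

```python
# Rough theoretical random-IOPS ceiling for one 7200 rpm SATA drive.
avg_seek_ms = 8.5                            # typical datasheet value (assumed)
rpm = 7200
rotational_latency_ms = (60_000 / rpm) / 2   # half a revolution ~= 4.17 ms
service_time_ms = avg_seek_ms + rotational_latency_ms

iops = 1000 / service_time_ms                # ~79 IOPS per spindle
print(f"~{iops:.0f} random IOPS per spindle")
```

[If a measured random-I/O benchmark lands in this ballpark, the drives really are "performing to spec".]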
>> What is the server going to be doing? What is the workload of your application?
> Originally, it was going to be hosting a number of VMWare installations each containing a separate self contained LAMP website (for ease of subsequent migration), but that's gone by the board in favour of dispensing with the VMWare aspect. Now the websites will be NameVhosts under a single Apache directly on the native OS.
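[For the record, the name-based setup described above looks roughly like this in Apache configuration. The site names and paths are invented for illustration, and the Apache 2.0 syntax shipped with CentOS 4 is assumed:]

```apache
# Hypothetical name-based vhost config; one <VirtualHost> per migrated site.
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName site1.example.com
    DocumentRoot /var/www/site1/htdocs
    ScriptAlias /cgi-bin/ /var/www/site1/cgi-bin/
</VirtualHost>

<VirtualHost *:80>
    ServerName site2.example.com
    DocumentRoot /var/www/site2/htdocs
    ScriptAlias /cgi-bin/ /var/www/site2/cgi-bin/
</VirtualHost>
```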
Yeah, I wouldn't do VMware guests; it'll just get way too complex way too quickly, as it will turn into more of a grid computing project than a web server project.
> The app on each website is MySQL-backed and Perl CGI intensive. DB intended to be on a separate (identical) server. All running swimmingly at present on 4 year old single 1.6GHz P4s with single IDE disks, 512MB RAM and RH7.3 - except at peak times when they're a bit CPU bound. Loadave rarely above 1 or 2 most of the time.
Sounds like the issue is more of a CPU issue than a disk issue, so just upgrading the hardware and OS should make a big difference in itself, but I would profile the SQL queries to make sure they are not trying to bite off more than they need to.
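[MySQL's slow query log (the long_query_time setting in my.cnf) is the usual first stop for this. On the application side, a thin timing wrapper around the query call can show which statements dominate. A hedged sketch, in Python rather than the site's Perl, with made-up names; run_query is a stand-in for the real DB call:]

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.WARNING)
SLOW_MS = 100.0  # hypothetical threshold; tune to taste

def log_slow(fn):
    """Log any wrapped call that takes longer than SLOW_MS milliseconds."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if elapsed_ms > SLOW_MS:
            logging.warning("slow call %s%r: %.1f ms", fn.__name__, args, elapsed_ms)
        return result
    return wrapper

@log_slow
def run_query(sql):
    # Stand-in for the real database call (e.g. a DBI/MySQLdb execute).
    time.sleep(0.15)  # simulate a slow query
    return []

run_query("SELECT * FROM big_table")  # gets logged as slow
```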
> Which is why I'm now focused on getting the non-VMWare approach up and running so I can profile it, instead of getting hung up on benchmarking the empty hardware. I'd never have started if I'd not noticed a terrific slowdown halfway through creating the filesystem when doing an initial CentOS 4.3 install many many weeks ago.
Well, when you created the file system the write cache wasn't installed yet, right?
And it may be that when you were creating the file system it was right after you created the RAID1 array, and the controller may still have been sync'ing up the disks, which will slow things down tremendously.
If you had sync'ing disks and no write-back cache, then you would have seen terrible write performance (and slower reads) until the disks were fully sync'd up.
It may be that it will work fine for what you need it to do.
> Yeah - but it's the edge cases that bite you. Can't be doing with a production server where it's possible to accidentally step on an indeterminate trigger that sends responsiveness into a nosedive.
I agree that it is the edge cases that can come back and bite you; just be sure you don't over-scope those edge cases for situations that will never arise.
-Ross