Hi all,
I am facing performance issues with our web servers: they handle 250 concurrent requests properly, but stop responding once there are more than 250 requests.
The current configuration parameters are as follows:
apachectl -version
Server version: Apache/2.0.52
Server built: Jan 30 2007 09:56:16
Kernel: 2.6.9-55.ELsmp #1 SMP Fri Apr 20 16:36:54 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux
Server hardware: 4 GB main memory, dual-core Intel 5160 processors.
httpd.conf
Timeout 300
KeepAlive On
MaxKeepAliveRequests 1000
KeepAliveTimeout 150

## Server-Pool Size Regulation (MPM specific)

# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxClients for the lifetime of the server
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule prefork.c>
StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 251
MaxClients 251
MaxRequestsPerChild 4000
</IfModule>

# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule worker.c>
StartServers 2
MaxClients 150
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 0
</IfModule>
I want to know the difference between the worker MPM and the prefork MPM, how to find out which one my Apache server will use, and which one is recommended for a heavily loaded server. If someone could provide a link that best explains this comparison, that would also be very useful.
Can anyone guide me on the tuning to be done for maximum utilization of the resources and better performance of the servers?
Regards, lingu
On 20.01.2009, at 19:39, linux-crazy wrote:
Hi all,
I am facing performance issues with our web servers: they handle 250 concurrent requests properly, but stop responding once there are more than 250 requests.
The current configuration parameters are as follows:
Only use prefork MPM (worker is for Win32, AFAIK).
If you set ServerLimit to 251, that's the limit. Set ServerLimit and MaxClients to the same value (larger than 250), OK? And take a couple of minutes to study the Apache documentation. It's actually quite good and translated into many languages...
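As a sketch of that suggestion: the value 400 below is a placeholder, and it should only be raised after doing the RAM arithmetic discussed later in this thread.

```apache
<IfModule prefork.c>
    # Keep the two values equal; MaxClients cannot exceed ServerLimit.
    ServerLimit 400
    MaxClients  400
</IfModule>
```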
BUT: what your server is able to handle also depends on what you actually serve (PHP/JSP/Servlets/Perl/whatever)!
This is usually not a problem that is easily described and solved in two sentences.
Rainer
linux-crazy wrote:
Hi all,
I am facing performance issues with our web servers: they handle 250 concurrent requests properly, but stop responding once there are more than 250 requests.
Increase your MaxClients setting. Also, if you haven't already, enable the server-status module and monitor the server via http://<your server name>/server-status; it will show how many workers are busy and what they are doing.
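A minimal sketch of that server-status setup, using Apache 2.0's Order/Allow syntax; the 127.0.0.1 address is a placeholder for whatever host you monitor from:

```apache
ExtendedStatus On

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
```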
As for which to use: prefork is the old forked method of doing things; the other uses threads. Depending on what kind of application you're running on top of Apache, you may want to use prefork if it is not thread-safe (at some point many PHP modules were not thread-safe). Otherwise you can use the threading model.
If you're not sure, I'd say stick to prefork to be safe until you can determine for sure that threading is safe.
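To find out which MPM a given httpd binary was built with, you can inspect its compiled-in modules with `httpd -l`. A sketch, where `mpm_of` is a hypothetical helper that classifies that output:

```shell
# Classify an MPM from `httpd -l` output (hypothetical helper function):
mpm_of() {
    case "$1" in
        *prefork.c*) echo prefork ;;
        *worker.c*)  echo worker ;;
        *)           echo unknown ;;
    esac
}

# On the server itself (CentOS default binary path):
#   mpm_of "$(/usr/sbin/httpd -l)"
# Sample module list from a stock prefork build:
mpm_of "core.c prefork.c http_core.c mod_so.c"   # prints: prefork
```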
nate
linux-crazy wrote:
I want to know the difference between the worker MPM and the prefork MPM, how to find out which one my Apache server will use, and which one is recommended for a heavily loaded server. If someone could provide a link that best explains this comparison, that would also be very useful.
Can anyone guide me on the tuning to be done for maximum utilization of the resources and better performance of the servers?
Most list members would likely advise sticking with the prefork configuration. Without knowing what kind of applications you are running on your webserver, I wouldn't suggest changing it.
Merely increasing the number of workers might make performance worse.
Use ps or top to figure out how much memory each Apache worker is using. Then decide how much RAM on your server you want to dedicate to Apache without going into swap. (Over-allocating and then paging out memory will only make performance much worse.) For example, if I have 2 GB of RAM, I want 1.5 GB for Apache workers, and my average Apache worker size (resident memory) is 65 MB, then I have room for 23 workers: (1024 * 1.5) / 65. (There are more accurate ways to calculate this usage, like taking shared memory into account.)
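The arithmetic above can be scripted. The `ps` invocation is a sketch that assumes GNU ps and running httpd workers; the 1.5 GB and 65 MB figures are the worked example from the paragraph, not recommendations:

```shell
# Average resident size (MB) of running httpd workers; RSS is field 8 of
# `ps -yl` output, reported in KB (prints nothing if no httpd is running):
ps -ylC httpd --sort=rss | awk '$8 ~ /^[0-9]+$/ {sum+=$8; n++} END {if (n) print sum/n/1024}'

# Worked example from above: 1.5 GB reserved for Apache, 65 MB per worker.
ram_mb=$((1024 * 3 / 2))    # 1.5 GB expressed in MB = 1536
workers=$((ram_mb / 65))
echo "$workers"             # prints: 23
```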
Upgrading the ram in your web server is a pretty fast interim solution.
Consider your application performance, too. The longer a request in your application takes, the more workers are in use on your web server, taking up more memory. If you have long-running queries in your database, take care of those first.
Good luck
Jed
Jed Reynolds wrote:
Merely increasing the number of workers might make performance worse.
Use ps or top to figure out how much memory each Apache worker is using. Then decide how much RAM on your server you want to dedicate to Apache without going into swap. (Over-allocating and then paging out memory will only make performance much worse.) For example, if I have 2 GB of RAM, I want 1.5 GB for Apache workers, and my average Apache worker size (resident memory) is 65 MB, then I have room for 23 workers: (1024 * 1.5) / 65. (There are more accurate ways to calculate this usage, like taking shared memory into account.)
Pay attention to shared memory when doing this. A freshly forked process shares virtually all of its RAM with its parent. How much and how quickly this changes varies wildly with the application type and the activity that causes the child's data to become unique. With some types of applications (especially mod_perl) you may want to tune down the number of hits each child services, to increase memory sharing. Also, if you are running external CGI programs, you must take them into account.
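On Linux, the shared-versus-private split can be read straight from /proc. A sketch, shown for the current shell's own process; substitute an httpd worker PID for `self` in practice (statm reports pages, 4 KB on x86_64):

```shell
# Fields of /proc/PID/statm: size resident shared text lib data dirty
# (all in 4 KB pages). "private" here is simply resident minus shared.
awk '{ printf "resident: %.1f MB  shared: %.1f MB  private: %.1f MB\n",
       $2*4/1024, $3*4/1024, ($2-$3)*4/1024 }' /proc/self/statm
```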
Upgrading the ram in your web server is a pretty fast interim solution.
Consider your application performance, too. The longer a request in your application takes, the more workers are in use on your web server, taking up more memory. If you have long-running queries in your database, take care of those first.
You may also have to turn off or tune down the HTTP 1.1 connection keepalives, trading the time it takes to establish a new connection for the RAM it takes to keep the associated process waiting for another request from the same client.
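A hedged example of that trade-off: against the posted KeepAliveTimeout of 150, values in this range free workers far sooner. The numbers are illustrative, not prescriptive:

```apache
KeepAlive On
MaxKeepAliveRequests 100
# 150 s keeps a worker pinned to an idle client; a few seconds is
# usually enough to cover a page's follow-up requests:
KeepAliveTimeout 5
```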
On Wed, 21 Jan 2009 00:09:38 +0530, linux-crazy <hicheerup@gmail.com> wrote:
Hi all,
I am facing performance issues with our web servers: they handle 250 concurrent requests properly, but stop responding once there are more than 250 requests.
...
Can any one guide me tuning to be done for the maximum utilization of the Resources and better performance of the Servers.
Also take a look at apache alternatives like lighttpd and nginx. In certain cases they do miracles.
On 21.01.2009, at 00:47, Jure Pečar wrote:
On Wed, 21 Jan 2009 00:09:38 +0530, linux-crazy <hicheerup@gmail.com> wrote:
Hi all,
I am facing performance issues with our web servers: they handle 250 concurrent requests properly, but stop responding once there are more than 250 requests.
...
Can any one guide me tuning to be done for the maximum utilization of the Resources and better performance of the Servers.
Also take a look at apache alternatives like lighttpd and nginx. In certain cases they do miracles.
I don't know about lighttpd (our results were mixed), but NGINX is really _very_ fast. Last time I checked, it seemed to be the fastest way to accelerate webpage delivery on generic hardware with OSS software.
But it's not a feature-monster like apache, so you still need that. ;-)
Rainer
Linux-crazy wrote on Wed, 21 Jan 2009 00:09:38 +0530:
KeepAliveTimeout 150
My god, reduce this to 10 or 5.
ServerLimit 251 MaxClients 251
There's your limit. However, you should check whether your hardware makes upping it really desirable. With 4 GB, I think you won't be able to handle much more anyway.
Kai
Kai Schaetzl wrote:
There's your limit. However, you should check whether your hardware makes upping it really desirable. With 4 GB, I think you won't be able to handle much more anyway.
That really depends. If you only shove out static pages and have one or two or three odd CGIs on the machine, you can slim down the httpd binary quite a bit by throwing out unneeded modules.
500 to 750 clients shouldn't be that much of a problem then ...
Cheers,
Ralph
Ralph Angenendt wrote on Wed, 21 Jan 2009 10:52:59 +0100:
That really depends. If you only shove out static pages and have one or two or three odd CGIs on the machine, you can slim down the httpd binary quite a bit by throwing out unneeded modules.
500 to 750 clients shouldn't be that much of a problem then ...
Sure, it depends ;-) I do serve dynamic pages, but by removing the really unnecessary modules I made my httpds much faster (especially on pipelined image downloads), and they only use some 10 MB (RES) per worker.
But I would expect that they are running the default set of modules.
Kai
Kai Schaetzl wrote:
Ralph Angenendt wrote on Wed, 21 Jan 2009 10:52:59 +0100:
That really depends. If you only shove out static pages and have one or two or three odd CGIs on the machine, you can slim down the httpd binary quite a bit by throwing out unneeded modules.
500 to 750 clients shouldn't be that much of a problem then ...
Sure, it depends ;-) I do serve dynamic pages, but by removing the really unnecessary modules I made my httpds much faster (especially on pipelined image downloads), and they only use some 10 MB (RES) per worker.
But I would expect that they are running the default set of modules.
Kai
If you can separate out the images to a specific URL (img.domain.com) and put them all in the same directory, you can use NGINX to serve them. Of course, the actual transfer speed will not increase much, but latency will go down to the absolute minimum. And latency is what makes a page appear "fast" or "slow" to customers.
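A minimal sketch of that split; the server name and the root path are placeholders:

```nginx
server {
    listen      80;
    server_name img.domain.com;
    root        /var/www/images;
    # Static files only; let browsers cache them aggressively.
    expires     30d;
}
```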
Rainer
Ralph Angenendt wrote:
Kai Schaetzl wrote:
There's your limit. However, you should check whether your hardware makes upping it really desirable. With 4 GB, I think you won't be able to handle much more anyway.
That really depends. If you only shove out static pages and have one or two or three odd CGIs on the machine, you can slim down the httpd binary quite a bit by throwing out unneeded modules.
500 to 750 clients shouldn't be that much of a problem then ...
Yeah, you've got to monitor it. I just checked in on some servers I ran at my last company, and the front-end proxies (99% mod_proxy) seem to peak out traffic-wise at about 230 workers. For no other reason than because I could, each proxy has 8 Apache instances running (4 for HTTP, 4 for HTTPS). CPU usage peaks at around 3% (dual-processor, single-core), memory usage around 800 MB, with about 100 idle workers. Keepalive was set to 300 seconds due to poor application design.
One set of back-end Apache servers peaks traffic-wise at around 100 active workers, though memory usage was higher, around 2 GB, because of mod_fcgid running Ruby on Rails. CPU usage seems to be around 25% (dual-processor, quad-core) for that one application.
Each back-end application had its own dedicated Apache instance (11 apps) for maximum stability and the best performance monitoring. The bulk of them ran on the same physical hardware, though.
Traffic routing was handled by a combination of F5 load balancers and the Apache servers previously mentioned (F5 iRules were too slow, and F5's TMM was not scalable at the time).
nate
RobertH wrote on Wed, 21 Jan 2009 11:26:41 -0800:
What do you think about the general Timeout? It is set to 300.
I've never thought much about it, but should we consider reducing that one too?
I'm using 120, but I don't think reducing this value has much impact.
Kai