You could try tuning Apache. Start with MaxRequestsPerChild, which sets the number of requests a child process serves before it is stopped. When a child stops, its memory is freed, which can protect you from running out of memory. Is KeepAlive enabled? If so, try disabling it. KeepAlive speeds up your website, but uses considerably more memory, because child processes must keep connections open while waiting for new requests on established connections.
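A minimal httpd.conf sketch of the two settings mentioned above; the values are illustrative assumptions, not recommendations, so tune them against your own traffic:

```apache
# Recycle each child after it has served 1000 requests, so any memory
# it has accumulated is returned to the OS.
MaxRequestsPerChild 1000

# Disable persistent connections so idle children do not sit holding
# memory while waiting for further requests on an open connection.
KeepAlive Off
```

Note the directive is MaxRequestsPerChild in Apache 2.2; in 2.4 it was renamed MaxConnectionsPerChild (the old name still works as an alias).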
Cheers, Barbara
On 02/03/2014 11:12 PM, m.roth@5-cent.us wrote:
Kwan Lowe wrote:
On Mon, Feb 3, 2014 at 2:59 PM, m.roth@5-cent.us wrote:
We've got a number of websites on one of our production servers, and they get hit moderately (it's not Amazon... but they are US gov't scientific research sites), and I think we've got 25 threads running, total, to serve *all* of them.
If you don't mind me asking, what are your fork/child settings for those, and what sort of workload do they see?
For a very crude estimate, in /var/log/httpd I ran grep GET access_*log | grep -c '03/Feb' and got 178388, and that's with 23 workers (as in, ps -ef | grep -c httpd).
mark
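The crude arithmetic implied above can be spelled out; this sketch just reuses the figures quoted in the message (178388 requests, 23 workers) and is only as accurate as that grep-based count:

```shell
#!/bin/sh
# Back-of-the-envelope per-worker load from the numbers quoted above.
requests=178388   # GET requests logged for 03/Feb across all access_*log files
workers=23        # httpd processes counted via ps -ef | grep -c httpd
echo "$((requests / workers)) requests per worker for the day"
```

Integer division gives 7756 requests per worker, or roughly 5-6 requests per worker per minute averaged over 24 hours, which is why a small worker pool suffices for this traffic.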
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos