Ralph Angenendt wrote:
> Kai Schaetzl wrote:
>> There's your limit. However, you should check with your hardware if upping
>> it is really desirable. With 4 GB I think you won't be able to handle much
>> more anyway.
>
> That really depends. If you only shove out static pages and have one or
> two or three odd cgis on the machine, you can flatten down the httpd
> binary quite a bit by throwing out unneeded modules.
>
> 500 to 750 clients shouldn't be that much of a problem then ...

Yeah, you've gotta monitor it. I just checked in on some servers I ran at
my last company, and the front-end proxies (99% mod_proxy) seem to peak
out, traffic-wise, at about 230 workers. For no other reason than because
I could, each proxy has 8 Apache instances running (4 for HTTP, 4 for
HTTPS). CPU usage peaks at around 3% (dual proc, single core), memory
usage around 800MB, with about 100 idle workers. Keepalive was set to 300
seconds due to poor application design.

One set of back-end Apache servers peaks, traffic-wise, at around 100
active workers, though memory usage was higher, around 2GB, because of
mod_fcgid running Ruby on Rails. CPU usage seems to be at around 25%
(dual proc, quad core) for that one application. Each back-end application
had its own dedicated Apache instance (11 apps) for maximum stability and
the best performance monitoring, though the bulk of them ran on the same
physical hardware.

Traffic routing was handled by a combination of F5 load balancers and the
Apache servers previously mentioned (F5 iRules were too slow and F5's TMM
was not scalable at the time).

nate
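P.S. If anyone wants to watch active-worker counts the way I described,
mod_status is easy to script against: with ExtendedStatus On, the ?auto
output includes a "Scoreboard" line with one character per slot ("_" =
idle with keepalive/waiting, "." = open slot, anything else = busy). A
minimal sketch; the scoreboard string below is made up for illustration:

```shell
# Hypothetical scoreboard string, as it would appear on the
# "Scoreboard:" line of /server-status?auto output:
scoreboard="__WKK_W__..W"

# Count every slot that is neither idle ("_") nor open (".").
# tr -d strips the idle/open markers; wc -c counts what's left.
# The $(( )) wrapper just normalizes wc's whitespace-padded output.
busy=$(( $(printf '%s' "$scoreboard" | tr -d '_.' | wc -c) ))
echo "busy workers: $busy"
```

In live use you'd feed it the real line, something like
`curl -s http://localhost/server-status?auto` piped through
`awk -F': ' '/^Scoreboard:/{print $2}'` (assuming mod_status is enabled
and the Location is reachable from localhost).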