Les Mikesell writes:
Sergej Kandyla wrote:
The nginx http_proxy module is a universal, comprehensive solution. Also, apache generally runs in prefork mode; I don't know whether mod_jk/mod_proxy_ajp works with the worker MPM...
In prefork mode apache creates a child for each incoming request, so it's too expensive in terms of resource usage.
Have you actually measured this? Preforking apache doesn't fork per request; it forks enough instances to handle the concurrent connection count plus a few spares. Each child will typically handle thousands of requests before exiting and requiring a new fork - the number is configurable.
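Those knobs all live in the prefork section of httpd.conf - roughly like this (the values below are just illustrative stock-looking defaults, not a tuning recommendation):

  <IfModule prefork.c>
      StartServers          8
      MinSpareServers       5
      MaxSpareServers      20
      ServerLimit         256
      MaxClients          256
      MaxRequestsPerChild 4000
  </IfModule>

With MaxRequestsPerChild in the thousands, a fork happens only once per several thousand requests, so the per-request cost of forking is tiny.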
Sorry for the bad explanation. I meant that apache creates a child (above MinSpareServers) to serve each new unique client.
I measured nginx in real life :) On one server (~15k unique hosts per day, ~100k pageviews, and 1-3k concurrent "established" tcp connections) with a frontend (nginx) / backend (apache + php fastcgi) architecture, I turned off nginx proxying and the server went down within a minute... apache forked up to MaxClients (500) and ate all the memory.
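The setup was roughly like this (a simplified sketch - the hostname and backend port are made up, and the real config had more proxy buffering/timeout tuning):

  # /etc/nginx/nginx.conf (fragment, inside the http {} block)
  server {
      listen       80;
      server_name  example.com;

      location / {
          # pass everything to apache + php listening on localhost
          proxy_pass        http://127.0.0.1:8080;
          proxy_set_header  Host             $host;
          proxy_set_header  X-Real-IP        $remote_addr;
          proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
      }
  }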
nginx also helped me fend off low-to-medium DDoS attacks. While apache was forked out to MaxClients, nginx could still serve many thousands of concurrent connections. So I wrote shell scripts to parse the nginx logs and put the bots' IPs into a firewall table.
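The scripts were nothing fancy; the idea was roughly this (just a sketch - the threshold, log path and BOTS chain are examples, and a real version needs whitelisting and duplicate checks):

  #!/bin/sh
  # Block the IPs that hammer us hardest, judging by the nginx access log.
  # Assumes a dedicated chain exists: iptables -N BOTS && iptables -I INPUT -j BOTS
  THRESHOLD=1000
  LOG=/var/log/nginx/access.log

  # count requests per client IP (first field of the combined log format)
  awk '{print $1}' "$LOG" | sort | uniq -c | sort -rn | \
  while read count ip; do
      [ "$count" -lt "$THRESHOLD" ] && break
      iptables -A BOTS -s "$ip" -j DROP
  done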
That's why I find nginx (lighttpd is also a good choice) efficient enough, at least for me. Of course you should understand what you expect from nginx, what it can do and what it can't.
If you want real-world measurements or examples of nginx on heavily loaded sites, please google for them. You could also ask on the nginx at sysoev.ru mailing list (EN).
Also, apache spends about 15-30KB of memory to serve each tcp connection, while nginx needs only 1-1.5KB. If you have, for example, about 100 concurrent connections from different IPs, that's nearly 100 apache forks... it's too expensive.
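To put those figures together: 100 connections x 15-30KB is roughly 1.5-3MB of connection-handling memory in apache versus 100 x 1-1.5KB, i.e. 100-150KB, in nginx - and that's before counting the memory of the ~100 child processes themselves.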
A freshly forked child should have nearly 100% memory shared with its parent and other child instances.
Please tell me how many resources you would need to reverse proxy, say, 1k-2k unique clients with apache? What cpu load and memory usage would you get?
I think apache is great software. It's very flexible and feature-rich, but it's especially good as a backend for dynamic applications (mod_php, mod_perl, etc.). If you need to serve many thousands of concurrent connections, you should look at nginx, lighttpd, squid, etc. IMHO.
http://www.kegel.com/c10k.html
As things change, this will decrease, but you are going to have to store the unique socket/buffer info somewhere, whether it is in a copy-on-write fork or allocated in an event-loop program. If you run something like mod_perl, the shared-memory effect degrades pretty quickly because of the way perl stores reference counts along with its variables, but I'd expect the base apache and most module code to be pretty good about retaining their inherited shared memory.