[CentOS] clustering and load balancing Apache, using nginx

nate centos at linuxpowered.net
Wed Feb 11 18:24:13 UTC 2009


Les Mikesell wrote:

> It may be, but I'd like to see some real-world  measurements.  Most of
> the discussions about more efficient approaches seem to use straw-man
> arguments that aren't realistic about the way apache works or timings of
> a few static pages under ideal conditions that don't match an internet
> web server.

In my experience apache has not been any kind of noticeable bottleneck.
At my last company we deployed a pair of apache reverse proxy nodes
that did the following (rough config sketch below):

- reverse proxy (188 rewrite rules)
- HTTP compression (compression level set to 9)
- mod_expires for some static content that we hosted on the front end
  proxy nodes
- SSL termination for the portion of the sites that needed SSL
- Header manipulation (had to remove some headers to work around
  IE browser issues with SSL)
- Serve up a "maintenance" page when we took the site down for
  software updates (this was on another dedicated apache instance)
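
To give a flavor of it, here's a rough sketch of the kind of vhost
config this amounts to (hostnames, paths, and the exact headers here
are made up, and the real thing had far more rewrite rules):

  # mod_rewrite/mod_proxy: proxy selected URLs to the back end
  RewriteEngine On
  RewriteRule ^/app/(.*)$ http://backend.example.internal/app/$1 [P,L]
  ProxyPassReverse /app/ http://backend.example.internal/app/

  # mod_deflate: HTTP compression at the maximum level
  AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml
  DeflateCompressionLevel 9

  # mod_expires for the static content hosted on the proxy itself
  ExpiresActive On
  ExpiresByType image/gif "access plus 7 days"
  ExpiresByType text/css  "access plus 1 day"

  # mod_headers: strip a header that upsets IE over SSL
  # (which header(s) depends on the particular IE bug you hit)
  Header unset Pragma

  # SSL termination lived in a separate HTTPS vhost/instance:
  # SSLEngine on
  # SSLCertificateFile    /etc/pki/tls/certs/site.crt
  # SSLCertificateKeyFile /etc/pki/tls/private/site.key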

Traffic flow was:

internet->BigIP->proxy->BigIP->front end web servers->BigIP->back end apps
(utilizing the BigIP's ability to transparently/effortlessly NAT
traffic internal to the network, and using HTTP headers to pass
the originating client IP addresses from the outside world along
to the back ends).
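
The header part is nothing fancy; mod_proxy adds X-Forwarded-For on
its own, and the back-end boxes can just log that instead of the
proxy's address, something like (header name assumed here, parts of
the setup may have used custom headers):

  # On the back-end web servers: log the original client IP passed
  # along by the proxy tier rather than the proxy's own address.
  LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
  CustomLog logs/access_log proxied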

Each proxy node had 8 copies of apache running: 4 for HTTP and 4
for HTTPS. At the moment they seem to average about 125 workers
per proxy node, with an average of 80 idle workers per node.
CPU averages 3%, memory averages about 650MB (the boxes have 3GB).
When I first started at the company they were trying to do this
with a low-end F5 BigIP load balancer, but it was not able to
provide the same level of service at low latency (and that was
when we had only a dozen proxy rules). I love BigIPs, but for
proxies I prefer apache. It wasn't until recently that F5 made
their code pseudo-multithreaded; until then, even if you had a
4-CPU load balancer, the proxy work could only use one of those
CPUs. Because of this limitation, one large local F5 customer
told me that they had to implement 5 layers of load balancers,
because their application design depended on the full proxy
support in the BigIPs to route traffic.

The proxy systems were dual-proc, single-core, hyperthreaded boxes.
They proxied requests for four dual-proc quad-core systems, which
seem to average around 25-35% CPU usage and about 5GB of memory
usage (8GB total) apiece.

At the company before that we had our stuff split out per
customer, with 3 proxy nodes in front and about 100 web servers
and application servers behind them for the biggest customers.
Having 3 was just for N+1 redundancy; 1 was able to handle the
job. And those proxies were single-processor boxes.

At my current job 99% of the load is served directly by tomcat;
the application, on the front end at least, is simple by comparison,
so there's no need for rewrite-type rules. Load balancing is
handled by F5 BigIPs, as is SSL termination. We don't do any
HTTP compression as far as I know.

I personally would not want to load balance using apache. I load
balance with BigIPs, and I do layer 7 proxying (URL inspection)
with apache. If I need to do deeper layer 7 inspection then I may
resort to F5 iRules, but the number of times I've had to do that
over the past several years is maybe two. And even today, with
the latest version of code, our dual-processor BigIPs cannot run
in multithreaded mode; it's not supported on the platform, only
on the latest & greatest (ours is one generation back from the
latest).
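
By URL inspection I mean simple path-based routing, roughly along
these lines (the back-end VIP names here are made up):

  # Send different parts of the URL space to different back-end pools
  RewriteEngine On
  RewriteRule ^/reports/(.*)$ http://reports-vip.internal/reports/$1 [P,L]
  RewriteRule ^/app/(.*)$     http://app-vip.internal/app/$1         [P,L]
  ProxyPassReverse /reports/ http://reports-vip.internal/reports/
  ProxyPassReverse /app/     http://app-vip.internal/app/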

I use apache because I've been using it for so long and know it
so well; it's rock-solid stable, at least for me, and using fewer
different platforms reduces complexity and improves manageability
for me.

If I were in a situation where apache couldn't scale to meet the
need and something else out there could handle, say, 5x the
load, then I might take a look. So far I haven't come across
that yet.

nate



