[CentOS] Load balancing...

Fri Mar 4 19:45:10 UTC 2011
Les Mikesell <lesmikesell at gmail.com>

On 3/4/2011 1:18 PM, James Nguyen wrote:
>
> You want two boxes that run both haproxy + keepalived.  This way you
> get the load balancing (HAProxy) plus the high availability
> (Keepalived) using a shared virtual IP for your two boxes.  You can do
> maintenance on either one while traffic still remains active.
>
> I don't have metrics to spec out the boxes, but given the traffic
> load you mentioned, you don't need hefty boxes at all.  Just get
> yourself a box with some Gigabit interfaces, which I'm sure they all
> have these days.  A single socket with 4 cores is more than enough;
> you can probably even do with 2 cores.  Someone can correct me on
> that if they think the solution requires a lot of CPU.  Memory-wise,
> I think machines come with at least 4GB these days.  That should do.
> You can probably get both boxes for around 2k?
>
> You already know how much F5 or any of those guys cost per device. =)
>
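
For anyone wanting a starting point, here's a minimal sketch of that
haproxy + keepalived setup.  The addresses, hostnames, and interface
name are placeholders; assume a shared virtual IP of 192.168.0.100 in
front of two web servers at 192.168.0.11 and .12:

    # /etc/haproxy/haproxy.cfg (sketch)
    global
        daemon
        maxconn 4096

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend www
        bind 192.168.0.100:80        # the shared VIP held by keepalived
        default_backend webservers

    backend webservers
        balance roundrobin
        option httpchk GET /
        server web1 192.168.0.11:80 check
        server web2 192.168.0.12:80 check

Note that the standby box has to bind an address it doesn't hold yet,
so either bind *:80 instead or set net.ipv4.ip_nonlocal_bind = 1 in
sysctl.  Keepalived then moves the VIP between the boxes, e.g. on the
primary:

    # /etc/keepalived/keepalived.conf (sketch; primary node)
    vrrp_script chk_haproxy {
        script "killall -0 haproxy"  # exits 0 while haproxy is running
        interval 2
        weight 2                     # +2 priority while the check passes
    }

    vrrp_instance VI_1 {
        state MASTER                 # the peer runs BACKUP, priority 100
        interface eth0
        virtual_router_id 51
        priority 101
        virtual_ipaddress {
            192.168.0.100
        }
        track_script {
            chk_haproxy
        }
    }

If haproxy dies on the primary, it loses the +2 bump, the backup's
102 outranks its 101, and the VIP (plus the traffic) moves over.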

F5s are one of those things where if you have to ask the price you
probably can't afford one...  But they do provide a very nice web
interface for controlling pool members and virtual interfaces,
something I haven't seen on the free alternatives.  And if you are
big enough to have multiple locations, they can propagate their
server state info to their global DNS servers (also expensive) to
control balancing and failover across sites.

For a couple of boxes that can work independently, I'd just use round
robin DNS and heartbeat to float the IPs to the backup during outages.
That way you normally share the load for performance, but if one box
fails or is shut down gracefully, the other will still handle traffic
for both IP targets.  If your application maintains any session state,
you'll need to work out a way to keep it in sync, or else redirect
clients to a specific machine after the initial connection and live
with what happens when that machine goes down (which might not be that
bad, maybe just a new login when they try to come back).
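
A minimal sketch of that arrangement, with hypothetical hosts lb1 and
lb2 at 192.168.0.101 and .102.  Round robin is just two A records for
the same name in the zone file:

    ; round robin: BIND rotates the order of these answers by default
    www   IN  A   192.168.0.101
    www   IN  A   192.168.0.102

and a heartbeat v1 haresources file, identical on both nodes, where
each line names the node that normally holds an address (a bare IP is
treated as an IPaddr resource), so the survivor picks up both on a
failure:

    # /etc/ha.d/haresources (sketch; same file on both nodes)
    lb1  192.168.0.101
    lb2  192.168.0.102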

-- 
   Les Mikesell
    lesmikesell at gmail.com