Hi all,
I am running Squid integrated with SquidGuard, DansGuardian, and ClamAV on a single standalone CentOS 5 server, with Webmin for managing Squid. This Squid server is serving 4000 clients.
Since it is serving so many users, I don't want to take the risk of running a single server; any single point of failure would bring all of my users to a standstill.
So I am planning to add one more CentOS 5 server and set up a cluster. Can anyone suggest how to design it? Should I go for common storage to hold all the global files, or should I run an instance on the local disk of each server and synchronize the two servers periodically?
Below is the setup currently running on the server:
Squid is under /etc, the SquidGuard database is under /var/lib, the DansGuardian config file is under /etc/dansguardian, and Webmin is under /usr/local.
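If I go the local-disk route, I was thinking of something like the following pushed from cron on the primary; the second server's hostname (squid2) is just a placeholder and the exact paths would need to match the locations above:

# sync the configs and the squidguard database to the standby box
rsync -az --delete /etc/squid/ squid2:/etc/squid/
rsync -az --delete /etc/dansguardian/ squid2:/etc/dansguardian/
rsync -az --delete /var/lib/squidguard/ squid2:/var/lib/squidguard/

Would something like that be enough, or is shared storage the safer way?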
Please help me design a high-availability Squid setup.
Regards, lingu
lingu wrote:
So I am planning to add one more CentOS 5 server and set up a cluster. Can anyone suggest how to design it? Should I go for common storage to hold all the global files, or should I run an instance on the local disk of each server and synchronize the two servers periodically?
You could make them peer caches of one another, and you could also use Linux Virtual Server (part of the Cluster Suite) to load balance, although you would need to make sure that you have persistence enabled.
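For the peer-cache side, a couple of lines in each box's squid.conf is roughly all it takes; proxy1/proxy2 are placeholder hostnames and 3128/3130 assume the default HTTP and ICP ports:

# on proxy1 - ask proxy2 for hits, but don't store its objects locally
cache_peer proxy2 sibling 3128 3130 proxy-only
# on proxy2 - the mirror image
cache_peer proxy1 sibling 3128 3130 proxy-only
# remember to allow the peer in icp_access if it isn't already

That way a miss on one box can still be served out of the other box's cache before going to the Internet.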
lingu wrote:
Please help me design a high-availability Squid setup.
I think you're better off using a load balancer instead of a cluster. For one, GFS requires shared storage (typically a SAN), and two, load balancing is much simpler than clustering.
LVS is a free Linux-based load balancer, so you'd have two LVS nodes and N+1 Squid nodes. I prefer F5 BigIPs as they are more flexible and much easier to manage; they aren't cheap, but with 4000 clients it shouldn't be hard to justify the cost.
Run round-robin load balancing, and if your load balancer supports it, consider enabling some sort of persistence, such as source IP persistence, so that each client IP gets pinned to the same proxy server (keeping its cache hits consistent) throughout the session. If that proxy server fails, the load balancer will fail those clients over to the other server seamlessly. Don't worry about keeping the caches in sync. With a BigIP you can configure this in about 30 seconds; with LVS I'm not sure it's even possible.
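(If it is, it would presumably be via ipvsadm's persistence timeout; a rough sketch, with a made-up VIP and real-server addresses, assuming NAT mode:)

# virtual service on the VIP, round-robin, pin each source IP for 30 min
ipvsadm -A -t 192.168.0.10:3128 -s rr -p 1800
# the two squid boxes behind it (NAT/masquerade mode)
ipvsadm -a -t 192.168.0.10:3128 -r 192.168.0.11:3128 -m
ipvsadm -a -t 192.168.0.10:3128 -r 192.168.0.12:3128 -m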
nate