[CentOS] Optimizing CentOS for gigabit firewall

Pasi Kärkkäinen pasik at iki.fi
Mon Dec 21 09:07:08 UTC 2009


On Fri, Dec 18, 2009 at 09:36:57PM +0200, sadas sadas wrote:
>    I will explain more deeply. I need to deploy a firewall(s) in front of the
>    web server farm because I need to do billing - I will use CentOS with
>    iptables + ipset to store a list of my clients, so when a client doesn't
>    pay, his server's IP is removed from the list and he can't access the web
>    server.
> 
>    Second - I know that iptables is very heavy and it's not recommended for
>    use in a gigabit firewall, but I don't have a choice; as far as I know,
>    ipset only works with iptables. I don't know whether pf can store 500 IPs
>    in one list. Ipset is written for exactly that purpose.
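
(For illustration, a rough sketch of the ipset approach described above --
the set name and addresses are made up, and the syntax below is the older
ipset style of that era; newer releases use "ipset create ... hash:ip" and
"-m set --match-set"):

    # create a set holding the IPs of paying clients
    ipset -N paying_clients iphash
    ipset -A paying_clients 192.0.2.10

    # only let traffic from IPs in the set through to the web farm
    iptables -A FORWARD -m set --set paying_clients src -j ACCEPT
    iptables -A FORWARD -j DROP

    # when a client stops paying, just remove him from the set
    ipset -D paying_clients 192.0.2.10
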
> 
>    I can't find any information on whether there is a Linux or BSD
>    distribution with an efficient firewall that uses an optimized algorithm
>    to store hundreds of IPs and to forward huge amounts of traffic. Any idea?
>

I've been using Linux (CentOS 5) on gigabit firewalls for thousands of
users. No problems.

Just make sure ip_conntrack_max is big enough, so you don't run out of
connections. 

There are other things to tune to optimize the performance, but it's
certainly doable with Linux + iptables.
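
As a rough idea of what I mean (the exact values depend on your RAM and
traffic, so treat these numbers as placeholders):

    # check and raise the conntrack table size on a CentOS 5 (2.6.18) kernel
    sysctl net.ipv4.netfilter.ip_conntrack_max
    sysctl -w net.ipv4.netfilter.ip_conntrack_max=1048576
    echo "net.ipv4.netfilter.ip_conntrack_max = 1048576" >> /etc/sysctl.conf

    # the conntrack hash table size is a module option, usually set to
    # roughly conntrack_max / 8
    echo "options ip_conntrack hashsize=131072" >> /etc/modprobe.conf
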

-- Pasi


>    regards
> 
>    I'll second damn near everything nate said, and hopefully add a tidbit or
>    two.
> 
>    If you're new to BSD, you may want to consider the pfsense project in the
>    aforementioned active-active configuration.
> 
>    It gives you a nice, intuitive GUI to manage your failover firewalls, if
>    you insist on putting a firewall in front of your web servers.
> 
>    Better to secure the box, leave only the ports you need open on the public
>    interfaces, and don't firewall them.
> 
>    Also, I'd strongly consider running your firewalls with no disk at all.  A
>    Live CD, CF card or USB flash drive to boot off of, remote syslog, and one
>    less subsystem (disks) to buy/fail makes for some mighty cheap 1U servers.
>    A single dual-core with core speeds above 3.0GHz and 4GB of RAM is enough
>    to pass a Gb at line rate, minus ethernet overhead.  Truth be told, it's
>    already being done on much less than that.  You can also load balance your
>    traffic with it, albeit somewhat primitively.  If you really want massive
>    throughput, consider toying around with extremely expensive 10G gear, size
>    RAM appropriately, and see how pf performs with multiple processors and
>    high core speeds.  But if you're handling over a Gb of traffic and you
>    can't split the application into multiple farms, that's the best move.
> 
>    Akamai, for instance, runs 10G to each rack, each rack has around 20-24
>    servers, and they run Gb to each server.
> 
>    [1]pfsense.org has extensive information about hardware requirements,
>    features, and what you're looking to do.
> 
>    [2]https://calomel.org/network_performance.html is an excellent BSD
>    firewall performance site.
> 
>    One thing to note, you are claiming to want to deploy this as a passive
>    bridge.  You cannot do what you want to do
>    running anything in bridge mode.  The packets need to route somehow.  Get
>    a /29 from your colo provider and ask
>    to have your existing block routed through it once you've tested it.
> 
>    Another option for a seamless failover is to alias a different range of
>    IPs to the server interfaces, and put a /29 and whatever netblock you want
>    to end up being your public IP block on the pfSense hardware.  When you're
>    convinced everything's working through rigorous testing, put a test domain
>    up pointing to that block, modify the virtualhost entries on the servers
>    to respond to that domain with your production web site, and test some
>    more.  Once you're convinced that's working perfectly, make the changes in
>    DNS to point your production domain at the IPs you want, and the failover
>    will happen with DNS convergence.
> 
>    Peter
> 
>    On Fri, Dec 18, 2009 at 9:06 AM, nate <[3]centos at linuxpowered.net> wrote:
> 
>      sadas sadas wrote:
>      >
>      > Hi,
>      >  I want to configure CentOS on a powerful server with gigabit
>      > adapters as a transparent bridge and deploy it in front of a server
>      > farm. Can you tell me how to optimize the OS for high packet
>      > processing? What configurations do I need to do to achieve very high
>      > speeds and thousands of packets?
> 
>      iptables makes a TERRIBLE firewall, use pf instead
> 
>      [4]http://www.openbsd.org/faq/pf/index.html
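
(A bare-bones sketch of what a pf ruleset for a web farm can look like --
the interface name and network below are made up, see the pf FAQ above for
the real details):

    # /etc/pf.conf
    ext_if="em0"
    webfarm="192.0.2.0/24"

    # block everything inbound, then allow only web traffic to the farm
    block in on $ext_if
    pass in on $ext_if proto tcp to $webfarm port { 80 443 } keep state
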
> 
>      Also consider how you're going to provide redundancy; if you have a web
>      server farm you want to protect it with at least two firewalls, not
>      one.
> 
>      [5]http://www.openbsd.org/faq/pf/carp.html
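
(Very roughly, the CARP/pfsync side on OpenBSD looks something like this --
the interface names, addresses and password are made up):

    # shared virtual IP handled by CARP on each firewall in the pair
    ifconfig carp0 create
    ifconfig carp0 vhid 1 pass mysecret carpdev em0 192.0.2.1 netmask 255.255.255.0

    # pfsync keeps the state tables in sync so a failover keeps connections
    ifconfig pfsync0 syncdev em1
    ifconfig pfsync0 up
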
> 
>      I haven't used CARP myself, but I did set up a pair of pf firewalls
>      about 5 years ago in a large network in bridging mode; the layer 3
>      fault tolerance was provided by OSPF on the core switches, and the
>      firewalls were active-active (with pfsync) since they were layer 2
>      only.
> 
>      Maybe someday Linux will replace the overly complex iptables system
>      with something that is more manageable; not holding my breath though.
> 
>      If you want really high speed (say, multi-GbE), though, you'll
>      want/need to go with an appliance-based solution.
> 
>      Also, since you're referring to a web server farm, it is perfectly
>      acceptable not to use firewalls at all these days, if you have a good
>      load balancer: it serves the same role as a firewall in that it only
>      passes traffic that you specifically configure it to pass. Also, in
>      high-traffic environments the performance of load balancers destroys
>      most firewalls, making investing in a high-end firewall a very
>      expensive proposition.
> 
>      For the better part of the last 10 years I've worked with companies
>      who did not have firewalls in front of their web servers for this
>      reason: it didn't make sense $-wise because the benefit wasn't there,
>      and the added complexity and performance implications weren't worth it
>      either. Talk to most load balancing companies and they'll tell you
>      this themselves.
>      nate
> 
> 
> References
> 
>    Visible links
>    1. http://pfsense.org/
>    2. https://calomel.org/network_performance.html
>    3. mailto:centos at linuxpowered.net
>    4. http://www.openbsd.org/faq/pf/index.html
>    5. http://www.openbsd.org/faq/pf/carp.html

> _______________________________________________
> CentOS mailing list
> CentOS at centos.org
> http://lists.centos.org/mailman/listinfo/centos



