> You may still be hosed since the bottleneck is in front of your server.
>
>   New client requests -> InternetConnection -> Router/FW -> Server
>
> If your new client requests are coming into an internet connection
> that's saturated, I'm pretty sure they won't even make it to the server
> to get rate limited. Your clients would start seeing errors/retransmits
> and you'd be effectively DOS'd. If you were running with an ISP that let
> you burst, then used a Router/FW that let you start throttling traffic,
> you might do better, but I don't think you're going to get good results
> out of that system.

True, but based on the traffic graphs so far, the uplink is usually the
first thing to saturate during normal usage. If for some reason incoming
requests arrive fast enough to saturate the downlink, I don't think
adding another inline server/router would help, would it? The ISP
controls that end of the link, so if they don't shape/police the
traffic, whatever they let through will choke out the legitimate users
regardless of what I have waiting to receive the packets.

> Any reason you don't buy a hosted solution and put your static content
> (manuals, long downloads, etc.) up there for people to pull? You could
> also get pay-as-you-go caching through limelight/level3/akamai/etc.
> for your domain.

Unfortunately, it isn't static content. Their webapp generates documents
on demand, then generates links for their clients to download them. Some
of these documents contain quite a number of images, so the size adds up
quickly even at 50-100 KB per picture (a fifty-image document is already
several megabytes). Once a few clients are pulling those at the same
time, access to the webapp slows to a crawl.

> In the past I've used tc to do testing for crappy network links. Here
> are the two links that I found helpful:
>
>   * http://www.linuxfoundation.org/collaborate/workgroups/networking/netem
>   * http://lartc.org/howto/lartc.ratelimit.single.html

Thanks for the links :)
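
Since the uplink is what goes first, shaping egress on the box itself
looks like the place to start. Roughly what I have in mind based on
those pages, using tc's token bucket filter; this is a minimal sketch,
and eth0 and the 512kbit rate are placeholders for the actual interface
and uplink speed:

    # cap outbound traffic on eth0 to 512kbit (token bucket filter);
    # burst/latency values follow the lartc examples
    tc qdisc add dev eth0 root tbf rate 512kbit burst 1540 latency 50ms

    # inspect counters, then remove the qdisc when done
    tc -s qdisc show dev eth0
    tc qdisc del dev eth0 root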
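
And for reproducing a degraded link in testing, netem (from the first
link) can add delay and loss on an interface; the numbers here are just
illustrative:

    # emulate a slow, lossy WAN: ~100ms delay with 20ms jitter, 1% loss
    tc qdisc add dev eth0 root netem delay 100ms 20ms loss 1%

    # restore the interface afterwards
    tc qdisc del dev eth0 root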