We are considering whether or not to block internal access to social networking and private entertainment web sites. This is not a policy decision as of yet, just an exploratory exercise.
Our gateways run CentOS-5.4 and use iptables to enforce firewall rules. What we wish to determine is whether it is feasible to block sites such as facebook, youtube, twitter, etc. using iptables. Is there a superior method? Is there already a generally accepted utility or method for accomplishing this?
At present we only block outgoing traffic for a handful of internal hosts that should never have any reason to generate traffic destined outside the LAN. But now we are advised by some authorities that facebook and similar sites are considered security risks to the hosts used to access them.
Without debating the merits of such claims, how would one proceed to block internal network access to specific domain names using CentOS?
Sincerely,
Without debating the merits of such claims, how would one proceed to block internal network access to specific domain names using CentOS?
Using a transparent proxy server is the best way to block this kind of service. You can use the squid package to set up a transparent proxy server.
-- Eero
On Fri, Nov 27, 2009 at 12:32, Eero Volotinen eero.volotinen@iki.fi wrote:
Without debating the merits of such claims, how would one proceed to block internal network access to specific domain names using CentOS?
Using a transparent proxy server is the best way to block this kind of service. You can use the squid package to set up a transparent proxy server.
I agree with the parent poster. Squid (or any other advanced proxy server) is probably the best way to deal with this. But for the sake of argument--say, in case you can't use a proxy for some reason--IPTables has some *limited* application here.
IPTables will accept a DNS host/domain name in place of an IP address in an 'iptables' command. But the rule it creates doesn't actually use the DNS name--it just performs a lookup when you add the rule, and then adds a rule for whatever IP address it found.
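For example (just a sketch; the FORWARD chain is an assumption about your ruleset, and the address in the listing is only a placeholder):

  # You can type a hostname, but the saved rule only holds whatever it resolved to:
  iptables -A FORWARD -d www.facebook.com -p tcp --dport 80 -j REJECT
  # Listing the chain shows one rule per numeric address, no hostname anywhere:
  iptables -L FORWARD -n
  #   REJECT  tcp  --  0.0.0.0/0  203.0.113.10  tcp dpt:80   (illustrative output, placeholder address)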
If Facebook only operated a single web server, and if the DNS hostname 'www.facebook.com' always resolved to that particular IP address, this would work OK. You could either specify 'www.facebook.com' in your IPTables blocking rule, or look up the IP address manually and specify it directly in your rule.
The unfortunate reality is that FB operates dozens (maybe hundreds) of web servers, and any given browser's HTTP request to 'www.facebook.com' might be answered by any one of those web servers. And they don't use a straightforward, static DNS mechanism. The 'facebook.com' DNS servers will respond differently depending on where the request originates and (I presume) on the current load status of their global web server pool. So, under normal conditions, clients will usually be directed to the closest (lowest-latency) web server. And if your closest web server's load rises high enough, you will instead be directed to a further-away, less busy server.
I just took a few samples from a collection of servers I operate that are scattered throughout the continental US, over the course of several minutes. I see very little stability in the DNS responses, but it appears that the pool is pretty small.
You could write a short script that runs from 'cron' every few minutes and performs a DNS lookup for 'www.facebook.com', and adds the result to a running list of FB IP addresses, and then adds another IPTables blocking rule anytime it finds a new IP. This is similar to how some popular anti-SSH-dictionary-attack-bot scripts operate. It's not perfect, but it would be pretty effective, and it doesn't require much effort.
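Something along these lines, as a rough, untested sketch (the state-file path, the FORWARD chain, and the script location are all assumptions):

  #!/bin/bash
  # Sketch only: look up www.facebook.com and block any address we haven't seen yet.
  HOST=www.facebook.com
  STATE=/var/lib/fb-block/addresses    # running list of already-blocked IPs (assumed path)

  mkdir -p "$(dirname "$STATE")"
  touch "$STATE"

  # 'dig +short' prints one record per line; keep only the A records.
  for ip in $(dig +short "$HOST" | grep -E '^[0-9.]+$'); do
      if ! grep -qx "$ip" "$STATE"; then
          iptables -I FORWARD -d "$ip" -p tcp --dport 80 -j REJECT
          echo "$ip" >> "$STATE"
      fi
  done

Run it from cron every few minutes, e.g. an /etc/crontab entry like (path assumed):

  */5 * * * * root /usr/local/sbin/fb-block.sh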
Honestly, though, you're probably better off using Squid. If I had the option, that's what I would do.
Good luck.
-Ryan
James B. Byrne wrote:
Without debating the merits of such claims, how would one proceed to block internal network access to specific domain names using CentOS?
As others have mentioned, using a proxy would work.
Other ways would be using iptables to block access to those domains' name servers so the names do not resolve at all (they could still access via IP).
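For instance (a sketch only; the name-server address is a placeholder you would have to look up yourself, and this only helps when the resolver your clients use actually sits behind this gateway):

  # Find the authoritative servers for the domain:
  dig +short NS facebook.com
  # Then block DNS traffic from the LAN to each one (placeholder address):
  iptables -A FORWARD -d 203.0.113.53 -p udp --dport 53 -j REJECT
  iptables -A FORWARD -d 203.0.113.53 -p tcp --dport 53 -j REJECT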
Also, hosting the domains on your internal name server and pointing them to some internal address, so that the real sites can't be resolved, could work as well.
Oftentimes, client antivirus/spyware programs can be configured to block things on the client side as well.
nate
----- "nate" centos@linuxpowered.net wrote:
James B. Byrne wrote:
Without debating the merits of such claims, how would one proceed to block internal network access to specific domain names using CentOS?
Also, hosting the domains on your internal name server and pointing them to some internal address, so that the real sites can't be resolved, could work as well.
I've used this many times where implementing a Squid proxy just wasn't an option. We ran an internal DNS server that was authoritative for any domains we didn't want users to access. Then we used iptables to route all DNS traffic to that DNS server. Those domains would still resolve, but to a specific IP that was configured to serve a nastygram page saying "Blocked by the filter", etc.
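Something like this, as a sketch (the LAN interface eth1, the DNS server at 192.168.1.2, and the block-page host at 192.168.1.3 are all placeholders; dnsmasq syntax is shown only for brevity, a few small fake zones in BIND work just as well):

  # On the gateway: redirect all outbound DNS from the LAN to the internal server.
  # (If the DNS server sits on the same subnet as the clients you also need a
  #  matching SNAT/MASQUERADE rule, or run the resolver on the gateway itself.)
  iptables -t nat -A PREROUTING -i eth1 -p udp --dport 53 -j DNAT --to-destination 192.168.1.2
  iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 53 -j DNAT --to-destination 192.168.1.2

  # On the internal DNS server (dnsmasq example): answer for the blocked
  # domains with the host that serves the "Blocked by the filter" page.
  #   address=/facebook.com/192.168.1.3
  #   address=/youtube.com/192.168.1.3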
Even when it isn't required at a particular installation, it's certainly fun to play with this at the office. :-)
Tim Nelson Systems/Network Support Rockbochs Inc. (218)727-4332 x105
On Fri, Nov 27, 2009 at 01:52:31PM -0800, nate wrote:
As others have mentioned, using a proxy would work.
A proxy would be the best option as it offers a lot of additional features, such as logging and the ability to see how much time people are wasting at work. Squid set up as a transparent proxy negates having to do any client-side setup and cannot be easily bypassed by clueful end-users.
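A minimal sketch, assuming the squid 2.6 that ships with CentOS 5 (which uses the 'transparent' keyword; squid 3.1+ calls it 'intercept') and a LAN interface of eth1:

  # /etc/squid/squid.conf (excerpt)
  http_port 3128 transparent
  acl blockedsites dstdomain .facebook.com .youtube.com .twitter.com
  http_access deny blockedsites

  # On the gateway, push LAN web traffic through squid
  # (note this only intercepts port 80; HTTPS passes by untouched):
  iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3128

Squid's access.log then gives you the per-user report mentioned above.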
Other ways would be using iptables to block access to those domains' name servers so the names do not resolve at all (they could still access via IP).
Not as easy as one would think; most sites in this day and age are still going to require a proper Host: header to be sent, I would think, so access by raw IP is unlikely to work anyway.
Blocking by server IP addresses, or even by the authoritative DNS servers for the domains you wish to block, is not ideal because you have *no* control over those resources: the web servers or geo-IP redirectors/load balancers may move to new public IP space, and their DNS servers are subject to the same.
Also, hosting the domains on your internal name server and pointing them to some internal address, so that the real sites can't be resolved, could work as well.
I've done this in the past with great success; point them to a "You've Been Busted Going To This Website" type page. The access logs can also be processed to see who is trying to waste company time with this solution. The only real problem is ensuring that /etc/hosts or \Windows\system32\drivers\etc\hosts (and whatever Macs use) resolution is properly locked down so that clueful users cannot resolve locally and thereby bypass your DNS server.
Oftentimes, client antivirus/spyware programs can be configured to block things on the client side as well.
While this indeed can be done, and I've seen it used to good effect, it just adds to the workload if you ever change to another AV solution down the road; the local DNS server is set-and-forget.
John