When a web site is attacked (so far only by unsuccessful hackers), my error routine adds the attacker's IP address, prefixed by 'deny', to that web site's .htaccess file. It works: on second and subsequent attacks the attacker gets a 403 error response.
I want to extend the exclusion ability to every web site hosted on a server. My preferred method is iptables. However, when shelling out from a PHP script on a web page and running a normal iptables command, for example:
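The appended entries are just Apache 2.2-style deny lines, roughly like this (the Order/Allow context is assumed to be in the .htaccess already, and the address is of course illustrative):

    Order allow,deny
    Allow from all
    deny from 1.2.3.4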
iptables -A 3temp -s 1.2.3.4 -j DROP
iptables responds with:
iptables v1.3.5: can't initialize iptables table `filter': Permission denied (you must be root)
Executing 'whoami' confirms the process runs as the apache user. Giving the apache group rw on /etc/sysconfig/iptables and ensuring /sbin/iptables is executable by all fails to resolve the problem.
Is there any method of running iptables from an Apache-originated process?
Thank you.
On Sun, 2011-08-21 at 00:09 +0100, Always Learning wrote:
Is there any method of running iptables from an Apache-originated process?
Thank you.
---- If you are determined to do that (have user apache capable of making changes to iptables), you can have your script do it as sudo and make an entry in /etc/sudoers to allow user apache to execute /sbin/iptables commands without a password.
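Roughly, the sudoers side of that could look like this (edited with visudo; if the stock sudoers has 'Defaults requiretty', that also needs relaxing for apache, since an Apache-spawned process has no tty):

    Defaults:apache !requiretty
    apache ALL=(root) NOPASSWD: /sbin/iptables

and the command the PHP script shells out to then becomes something like:

    sudo /sbin/iptables -A 3temp -s 1.2.3.4 -j DROP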
Of course automated scripts can (and likely will) go haywire, and anything that automates adding iptables blocks is capable of blocking you too, so I would highly suggest you rethink what you are doing. There's also the subjectivity of what it is that constitutes 'an attack'.
Craig
On Sat, 2011-08-20 at 17:03 -0700, Craig White wrote:
If you are determined to do that (have user apache capable of making changes to iptables), you can have your script do it as sudo and make an entry in /etc/sudoers to allow user apache to execute /sbin/iptables commands without a password.
Thank you. I will try that. Having read the file it seems ideal.
Of course automated scripts can (and likely will) go haywire, and anything that automates adding iptables blocks is capable of blocking you too, so I would highly suggest you rethink what you are doing. There's also the subjectivity of what it is that constitutes 'an attack'.
My scripts are generally well behaved, but then I usually test them extensively. The proposed iptables changes are to place IP addresses in a spare iptables table and block them. If it works well for one IP address it should work successfully for subsequent ones.
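To be clear, the 'spare table' is a user-defined chain that the INPUT chain jumps to, so the one-off setup is roughly (the position is illustrative; it sits after the static-IP allows mentioned below):

    /sbin/iptables -N 3temp
    /sbin/iptables -I INPUT 4 -j 3temp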
I am acutely conscious of being locked out. I can get in remotely via the console. However, the very first entries in every server's iptables have always been to allow 3 static IPs access. 3temp comes later in the sequence, ensuring what happens there should never lock me out.
The chain sequence is: (approved static IPs), 0banned, 1approved, 2emails, 3temp, 3web, 4permit, 5drop.
A daily reader of Logwatch, I don't like seeing the same weirdo attacking different web sites hosted on the same server. I also get an instant email for every web page error on every site. Banning an IP address from a server as soon as the first hacking attempt is detected seems a welcome improvement over writing to one web site's .htaccess file.
Thank you for your good suggestion. It is appreciated.
On Sunday, August 21, 2011 2:51 AM +0100, Always Learning <centos@u61.u22.net> wrote:
I am acutely conscious of being locked out. I can get in remotely via the console. However, the very first entries in every server's iptables have always been to allow 3 static IPs access. 3temp comes later in the sequence, ensuring what happens there should never lock me out.
To reduce the attack surface, create a script that can only update that subtable with a supplied IP address and then invoke it by sudo.
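For example, a minimal sketch of such a wrapper (the script name and chain name are illustrative):

    #!/bin/sh
    # /usr/local/sbin/block-ip -- append one validated IPv4 address to the 3temp chain
    ip="$1"
    # accept only a plain dotted-quad; anything else never reaches iptables
    echo "$ip" | grep -Eq '^[0-9]{1,3}(\.[0-9]{1,3}){3}$' || exit 1
    exec /sbin/iptables -A 3temp -s "$ip" -j DROP

Sudo is then restricted to this one command rather than to /sbin/iptables itself:

    apache ALL=(root) NOPASSWD: /usr/local/sbin/block-ip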
On 08/21/2011 01:09 AM, Always Learning wrote:
Executing 'whoami' confirms the process runs as the apache user. Giving the apache group rw on /etc/sysconfig/iptables and ensuring /sbin/iptables is executable by all fails to resolve the problem.
Is there any method of running iptables from an Apache-originated process?
Maybe SELinux blocks Apache from writing to /etc/sysconfig/iptables? Have you looked at fail2ban and denyhosts? These apps seem to offer a similar solution.
Regards, Patrick
On Sun, 2011-08-21 at 02:50 +0200, Patrick Lists wrote:
Maybe SELinux blocks Apache from writing to /etc/sysconfig/iptables? Have you looked at fail2ban and denyhosts? These apps seem to offer a similar solution.
I'm not using SELinux at the moment simply because I don't have the time to understand it. I'm a self-taught Linuxist. I believe it uses the 'labels' inherent in every file description block.
With Craig's sudo suggestion, I believe my attack detection system will successfully block the attacker's IP address on a server, and for selected ports only.
I will look at fail2ban and denyhosts and see how they can help.
Thank you.
On Sun, 2011-08-21 at 02:00 +0100, Always Learning wrote:
I'm not using SELinux at the moment simply because I don't have the time to understand it. I'm a self-taught Linuxist. I believe it uses the 'labels' inherent in every file description block.
With Craig's sudo suggestion, I believe my attack detection system will successfully block the attacker's IP address on a server, and for selected ports only.
I will look at fail2ban and denyhosts and see how they can help.
---- I'm going to present another view of what I think is a larger picture.
What you seem to want to do is to block host access (TCP, possibly UDP) based upon certain GET/POST activities on your web server. Thus you are attempting to create a curtain based upon things that have already failed, and eventually you will get a huge iptables filter that will slow down all traffic while parsing the rules. I suspect this is also the same system that is the web server, so you will slow down the very system you want to be fast. The entire predicate is reactive. You would also need a system to expire those rules after a period of time. It's all a waste of energy focused on giving you satisfaction that you are at least doing something to block script kiddies.
You should spend the time protecting the server with good system administration... SELinux, which you state 'you are not using at the moment' is a prime example.
You should ensure that known attack vectors (first place to look is the very common php programs like phpmyadmin) are either not in use or at least always kept up to date and secured via access controls.
The security issues you should be worrying about are not the things that are getting logged - that's just a record of things that already didn't work.
Craig
On Sun, Aug 21, 2011 at 05:46:18AM -0700, Craig White wrote:
What you seem to want to do is to block host access (TCP, possibly UDP) based upon certain GET/POST activities on your web server. Thus you are attempting to create a curtain based upon things that have already failed, and eventually you will get a huge iptables filter that will slow down all traffic while parsing the rules.
fail2ban handles rule expiration; firewall rules can be configured as the admin sees fit for the offending action. In fact, each trigger can have a configurable lifetime. fail2ban also ships with working Apache triggers; for example, there is one that triggers on failed auth attempts. These can be modified to fit the OP's needs with minimal work.
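As an illustration of that per-trigger lifetime, a jail entry might look something like this (the apache-auth filter is one of the stock filters; the paths, times and action are illustrative and would need adjusting):

    [apache-auth]
    enabled  = true
    filter   = apache-auth
    action   = iptables-multiport[name=apache-auth, port="http,https"]
    logpath  = /var/log/httpd/error_log
    maxretry = 3
    bantime  = 86400

A custom filter with a regex matching the OP's 'known hacking attempt' URLs could be dropped in alongside the stock ones.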
You should spend the time protecting the server with good system administration... SELinux, which you state 'you are not using at the moment' is a prime example.
There is little excuse for not having SELinux enabled. Every hacked box we've seen in #centos for the past few years has had SELinux disabled; not one that I've seen reported had it enabled.
The security issues you should be worrying about are not the things that are getting logged - that's just a record of things that already didn't work.
True, but blocking automated 5cr1p7-k1dd135 probes will reduce log volume and potentially protect you from probes further down the scan chain that haven't hit yet and that you may be vulnerable to.
John -- We cannot do everything at once, but we can do something at once.
-- Calvin Coolidge (1872-1933), 30th president of the United States
On Sunday, August 21, 2011 08:46 PM, Craig White wrote:
Thus you are attempting to create a curtain based upon things that have already failed, and eventually you will get a huge iptables filter that will slow down all traffic while parsing the rules.
Is ipset stable yet? Maybe he is better off with two redundant OpenBSD boxes using pf to protect his boxes, with his Apache instances scripting the firewall rules on those BSD boxen.
/me loses the 'simple and works' challenge
On Sun, 2011-08-21 at 05:46 -0700, Craig White wrote:
I'm going to present another view of what I think is a larger picture.
What you seem to want to do is to block host access (TCP, possibly UDP) based upon certain GET/POST activities on your web server.
Yes. In this instance the annoying attacks are things like 200 attempts to break in via phpMyAdmin, or the stupid prats suffixing a correct web page name with things like ...login, ...forgotten_password, ...execute, ...sql and so on. I don't want that crap.
Thus you are attempting to create a curtain based upon things that have already failed, and eventually you will get a huge iptables filter that will slow down all traffic while parsing the rules.
Yes, create a curtain, but wrong about 'huge'. Attempts are made via compromised IP addresses around the world by the same person or a group of like-minded people. It is my intention to delete the contents of the temporary iptables chain often, to prevent it becoming a liability.
I could probably achieve this by having two temporary chains (for blocked IP addresses) and, after a week or two, delete the contents of one chain and then at another interval delete the contents of the second chain. This would provide a useful overlap and ensure an IP blocked today is not 'freed' tomorrow when a temporary chain's contents are deleted.
Persistent offenders would have their IP address, or their IP block if a data centre, permanently stored in another chain (3web).
I suspect this is also the same system that is the web server, so you will slow down the very system you want to be fast. The entire predicate is reactive. You would also need a system to expire those rules after a period of time.
I can run a cron job at a regular interval to flush the first temporary chain and a second cron job to flush the second. So not too much effort is involved.
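The two flush jobs in root's crontab could be as simple as this (the chain names and the staggered dates are illustrative):

    # flush one temporary chain on the 1st of the month and the other on the 15th,
    # so an address blocked just before one flush survives in the other chain
    0 4 1  * *  /sbin/iptables -F 3temp1
    0 4 15 * *  /sbin/iptables -F 3temp2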
It's all a waste of energy focused on giving you satisfaction that you are at least doing something to block script kiddies.
It is a good programming and Linux-learning exercise. I gain personally from doing it. The ultimate objective is a smooth-running system, although I am certain there will be other issues arising.
You should spend the time protecting the server with good system administration... SELinux, which you state 'you are not using at the moment' is a prime example.
Yes, you are correct. I may have a look at it in a week or two. In the past SELinux seemed to stop things running, which is not what I want.
You should ensure that known attack vectors (first place to look is the very common php programs like phpmyadmin) are either not in use or at least always kept up to date and secured via access controls.
phpMyAdmin is definitely not available to the public. Absolutely not. That was one of my very first priorities. I do not follow the /var/www convention for locating public web pages. Every hosted web site is a virtual host, and entrance through the front door (the server's IP addresses) is blocked and monitored.
The security issues you should be worrying about are not the things that are getting logged - that's just a record of things that already didn't work.
I have introduced additional logging on things that work as well as do not work.
It is the things I am unaware of that present a danger. That is why I try to block everything and specifically permit authorised things through the firewall. Obviously I am still learning, and SELinux needs some experimentation after I discover exactly how it works, the logic behind it, and the Linux 'labelling'.
Your /etc/sudoers suggestion is uppermost in my thoughts.
Thank you.
On Sun, Aug 21, 2011 at 03:07:51PM +0100, Always Learning wrote:
I could probably achieve this by having two temporary chains (for blocked IP addresses) and, after a week or two, delete the contents of one chain and then at another interval delete the contents of the second chain. This would provide a useful overlap and ensure an IP blocked today is not 'freed' tomorrow when a temporary chain's contents are deleted.
What I do (for SMTP) is nightly check the rules for those that don't have any packets associated with them, delete those, then reset the count on the remainder. This means that entries stay in the firewall while they're still making attempts, but get removed a day after they've stopped.
Code extracts:
getlist()
{
    # list the INPUT rules (with line numbers and packet counts) that match SMTP;
    # $n is set elsewhere in the full script, and $1 lets the caller pass an
    # extra iptables option such as -Z
    /sbin/iptables --line-numbers -L INPUT -v$n $1 |
        awk '/dpt:25|dpt:smtp/ {printf("Rule=%d Count=%d source=%s\n", $1,$2,$9)}'
}

# reverse the list with tac so higher-numbered rules are deleted first,
# then keep only the rules whose packet count is zero
lst=$(getlist | /usr/bin/tac | sed -n 's/^Rule=\(.* Count=0\)/\1/p')

if [ -n "$lst" ]
then
    echo "$lst" | while read rule details
    do
        /sbin/iptables -D INPUT $rule
        echo Clearing Rule=$rule $details
    done
else
    echo No Rules to clear
fi

# zero the packet counters on the rules that remain
getlist -Z
On Sun, 2011-08-21 at 02:50 +0200, Patrick Lists wrote:
Maybe SELinux blocks Apache from writing to /etc/sysconfig/iptables? Have you looked at fail2ban and denyhosts? These apps seem to offer a similar solution.
---- fail2ban and denyhosts center on failed logins - I don't think this is what he is dealing with.
Craig
On 08/21/2011 02:34 PM, Craig White wrote:
Maybe SELinux blocks Apache from writing to /etc/sysconfig/iptables? Have you looked at fail2ban and denyhosts? These apps seem to offer a similar solution.
fail2ban and denyhosts center on failed logins - I don't think this is what he is dealing with.
Afaik both are configurable as to what you want them to listen for and how you want them to react. I agree that their popular use is listening for failed logins and then blocking the originating IP address, but with a little regex creativity perhaps Paul could use them for his purpose.
Regards, Patrick
When a web site is attacked (so far only by unsuccessful hackers), my error routine adds the attacker's IP address, prefixed by 'deny', to that web site's .htaccess file. It works: on second and subsequent attacks the attacker gets a 403 error response.
Have you looked at mod_evasive? http://www.zdziarski.com/blog/?page_id=442
Barry
On Sat, 20 Aug 2011, Barry Brimer wrote:
Have you looked at mod_evasive? http://www.zdziarski.com/blog/?page_id=442
There is also another application that reads the Apache log file and then, IIRC, writes iptables rules to deal with these sorts of attacks. It was written for a university thesis several years ago, but I just do not remember the name of that particular guy or the project.
Kind Regards,
Keith Roberts
On Sun, 2011-08-21 at 08:26 +0100, Keith Roberts wrote:
There is also another application that reads the Apache log file and then, IIRC, writes iptables rules to deal with these sorts of attacks. It was written for a university thesis several years ago, but I just do not remember the name of that particular guy or the project.
That is probably too slow for me. My present system is immediate, usually effective within the same second. I just want to expand per-site .htaccess blocking to whole-server iptables blocking and will, when I have a spare minute, implement Craig's /etc/sudoers suggestion.
- With best regards,
Paul. England, EU.
On Sat, 2011-08-20 at 22:43 -0500, Barry Brimer wrote:
Have you looked at mod_evasive? http://www.zdziarski.com/blog/?page_id=442
Thank you for the suggestion. I have just looked at it and see:-
* Requesting the same page more than a few times per second
* Making more than 50 concurrent requests on the same child per second
* Making any requests while temporarily blacklisted ...
My requirement, based on observations, is to instantly cut off the IP's access as soon as a wrong URL is entered. When a web page error occurs it is handled by a PHP routine. Two sets of checks show whether it was an 'innocent' mistake or a known hacking attempt. Currently, known hacking attempts are blocked in the web site's .htaccess file.
mod_evasive lacks the ability to compare the erroneous page request against my check list and then take action. Craig's helpful /etc/sudoers suggestion from overnight seems ideal because (if it works for my routine) it will let me block an IP address in iptables and limit that blocking to a port.
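Limiting the block to the web port only just means adding a protocol/port match to the rule, e.g. (address and chain illustrative):

    sudo /sbin/iptables -A 3temp -s 1.2.3.4 -p tcp --dport 80 -j DROP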
My check list has 104 'words' which cause an IP address to be blocked. When my revised system is working satisfactorily with whole-server blocking I will publish the details on the web.
From: Always Learning centos@u61.u22.net
Executing 'whoami' confirms the process runs as the apache user. Giving the apache group rw on /etc/sysconfig/iptables and ensuring /sbin/iptables is executable by all fails to resolve the problem. Is there any method of running iptables from an Apache-originated process?
I would be wary of letting the apache user control iptables... Better to have another, independent script read the list-of-IPs file, filter it, and then call iptables.
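A rough sketch of that separation, under the assumption that the web application merely appends suspect addresses to a spool file it owns, and a root cron job validates and applies them (file name, chain and schedule are illustrative):

    #!/bin/sh
    # /usr/local/sbin/apply-blocks -- run from root's crontab every minute or so
    SPOOL=/var/spool/webblock/pending
    [ -s "$SPOOL" ] || exit 0
    # take the current batch aside so the web side can keep appending
    mv "$SPOOL" "$SPOOL.work"
    sort -u "$SPOOL.work" | while read ip
    do
        # only well-formed IPv4 addresses are passed on to iptables
        echo "$ip" | grep -Eq '^[0-9]{1,3}(\.[0-9]{1,3}){3}$' || continue
        /sbin/iptables -A 3temp -s "$ip" -j DROP
    done
    rm -f "$SPOOL.work"

That way the apache user never touches sudo or iptables at all.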
JD