(Sorry, third time -- last one, promise, just giving it a subject line!)
OK, a second machine hosted at the same hosting company has also apparently been hacked. Since 2 out of 3 machines hosted at that company have now been hacked, but this hasn't happened to any of the other 37 dedicated servers that I've got hosted at other hosting companies (also CentOS, the same version or nearly), this makes me wonder if there's a security breach at this company -- for example, if they store customers' passwords somewhere that has itself been hacked. (Of course it could also be that whatever attacker found an exploit was just scanning that company's address space for hackable machines, and didn't happen to scan the address space of the other hosting companies.)
So, following people's suggestions, the machine is disconnected and hooked up to a KVM so I can still examine the files. I've found this file:
-rw-r--r-- 1 root root 1358 Oct 21 17:40 /home/file.pl
which appears to be a copy of this exploit script:
http://archive.cert.uni-stuttgart.de/bugtraq/2006/11/msg00302.html
Note the last-mod date of October 21.
No other files on the system were last modified on October 21st. However, there was a security advisory dated October 20th which affected httpd: http://mailinglist-archive.com/centos-announce/2011-10/00035-CentOSannounce+... https://rhn.redhat.com/errata/RHSA-2011-1392.html
and a large number of files on the machine, including lots of files in /usr/lib64/httpd/modules/ and /lib/modules/2.6.18-274.7.1.el5/kernel/, have a last-mod date of October 20th. So I assume that these are files which were updated automatically by yum as a result of the patch that goes with this advisory -- does that sound right?
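(One way to double-check that assumption, I think, is to ask rpm and yum directly -- a rough sketch, assuming yum's own logs are still intact:)

# rpm records when each package was installed or updated
rpm -q --last httpd kernel | head
# yum logs every transaction; entries for Oct 20 would confirm an automatic update
grep "Oct 20" /var/log/yum.log
# and verify the on-disk files still match what the package shipped
rpm -V httpd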
So a couple of questions that I could use some help with:
1) The last patch affecting httpd was released on October 20th, and the earliest evidence I can find of the machine being hacked is a file dated October 21st. This could be just a coincidence, but could it also suggest that the patch on October 20th introduced a new vulnerability, which the attacker then used to get in on October 21st? (Another possibility: I think that when yum installs updates, it doesn't actually restart httpd. So maybe even after the patch was installed, my old httpd instance kept running and was still vulnerable? As for why it got hacked the very next day, maybe the attacker looked at the newly released patch and reverse-engineered it to figure out which vulnerabilities it fixed?)
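(On a box that hasn't been rebooted since the update, a rough way to check that theory is to compare when the running httpd processes were started with when the package was last updated -- a sketch:)

# start times of the running apache processes
ps -eo pid,lstart,comm | grep '[h]ttpd'
# most recent install/update time of the httpd package
rpm -q --last httpd | head -1
# if the processes are older than the package, the patched binaries
# were never actually loaded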
2) Since the /var/log/httpd/* and /var/log/secure* logs only go back 4-5 weeks by default, it looks like any log entries related to how the attacker got in on or before October 21st are gone. (The secure* logs do show multiple successful logins as "root" within the last 4 weeks, mostly from IP addresses in Asia, but that's to be expected once the machine was compromised -- it doesn't help track down how they originally got in.) Is there anywhere else that the logs might contain useful data?
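(For what it's worth, here are the other places I know to check -- just a sketch, the timestamps are the ones relevant to my dates, and some of these files may have been rotated or wiped:)

last -f /var/log/wtmp        # login/reboot history, if wtmp hasn't been rotated away
lastb                        # failed logins from /var/log/btmp, if that file exists
lastlog                      # most recent login recorded for each account
cat /root/.bash_history      # in case the attacker didn't bother to clear it
cat /var/log/yum.log         # package installs/updates around the break-in date
# files whose mtime falls on October 21st; this old find has no -newermt,
# so use reference files:
touch -t 201110210000 /tmp/ref1
touch -t 201110220000 /tmp/ref2
find / -xdev -newer /tmp/ref1 ! -newer /tmp/ref2 -ls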
2012/1/2 Bennett Haselton bennett@peacefire.org:
sshd with root login enabled and a very bad password?
-- Eero
On Sun, Jan 1, 2012 at 2:55 PM, Eero Volotinen eero.volotinen@iki.fi wrote:
sshd with root login enabled and a very bad password?
Forgot to mention: the root password was: 1WyJstJZnQ!j (I have since changed it).
(I have already practically worn out my keyboard explaining the math behind why I think a 12-character alphanumeric password is secure enough :) )
Bennett
On 01/02/2012 12:27 AM, Bennett Haselton wrote:
The Errata you posted does not explain (to me at least) how they would get in. They would still need to brute-force the password. So if it was not the password, they used something else.
And that VBulletin exploit looks like something they used from your server to attack other sites running old VBulletin versions.
Doesn't your hosting company keep connection tracking from their routers for 6 months, in case the police request it? You could trace what happened from those logs.
Also, have you used Logwatch to send daily log summaries to your email? Those could have contained interesting data.
Good practice is to set up remote logging, sending logs in real time to a separate server running the rsyslog daemon, with separation per IP/host. Because the entries are shipped as they happen, all an attacker can do is stop or reconfigure the log daemon on the compromised system; everything already sent stays intact on the log server.
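A minimal sketch of what I mean (hostname is a placeholder, and this is the legacy rsyslog syntax shipped with EL5/EL6):

# on each monitored server, in /etc/rsyslog.conf: forward everything over TCP
# (a single @ would mean UDP)
*.*  @@loghost.example.com:514

# on the central log server, in /etc/rsyslog.conf: accept TCP and file
# each sender's messages under its own hostname
$ModLoad imtcp
$InputTCPServerRun 514
$template PerHost,"/var/log/remote/%HOSTNAME%/messages"
*.*  ?PerHost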
One of the reasons I use the Virtualmin package is that all servers (httpd, postfix, ...) are run under the user that owns that domain. Even if you have only one domain, all services run with lowered privileges, so the damage is less than when they are run from the root account.
On Sunday, January 01, 2012 06:27:32 PM Bennett Haselton wrote:
(I have already practically worn out my keyboard explaining the math behind why I think a 12-character alphanumeric password is secure enough :) )
Also see: https://lwn.net/Articles/369703/
On 1/3/2012 2:13 PM, Lamar Owen wrote:
Also see: https://lwn.net/Articles/369703/
The focus of this article seems to be on systems with multiple users where the admin can't necessarily trust all the users to make smart decisions. I've already said that I can see why in that case it might be desirable to require users to use ssh keys instead of passwords, since you can't force users to use good passwords. My point was that if you're the only user and you can make yourself use a 12-char password with enough entropy, that's good enough.
Bennett
On Jan 1, 2012, at 5:23 PM, Bennett Haselton bennett@peacefire.org wrote:
Do you have SELinux enabled? If not, you might need to turn that on, as that would have prevented that exploit.
On Sun, Jan 1, 2012 at 4:57 PM, Rilindo Foster rilindo@me.com wrote:
Do you have SELinux enabled? If not, you might need to turn that on, as that would have prevented that exploit.
I don't, but I'm not sure what you mean by "that would have prevented that exploit". How do you know what exploit they used, much less that SELinux would have prevented it?
Or are you assuming that my guess was correct that they got in because of a vulnerability that was opened up by the patch being auto-installed on 10/20, and that if *that* is the case, that SELinux would have prevented that? How does that work, how would SELinux have prevented a vulnerability being opened up by the patch install?
Bennett
On Jan 1, 2012, at 8:24 PM, Bennett Haselton wrote:
The script in question is an exploit from a web board which is apparently designed to pull outside traffic. If you had SELinux, it would put httpd in its own context and by default, it will NOT allow connections from that context to another. You have to enable it with:
setsebool -P httpd_can_network_connect on
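(To see the current value first, and the other httpd-related booleans:)

getsebool httpd_can_network_connect
getsebool -a | grep httpd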
- Rilindo
On Sun, Jan 1, 2012 at 5:33 PM, RILINDO FOSTER rilindo@me.com wrote:
I'm not sure what you mean by "an exploit from a web board which is apparently designed to pull outside traffic". Like Ljubomir said, it looks like a script that is used from machine X to DoS-attack machine Y, if machine Y has the VBulletin bulletin board software installed on it. Is that what you meant? :)
But anyway, since the file was located at /home/file.pl (and since the attacker had console access), presumably it wasn't being invoked by the web server, only from the command line. So how would it have made any difference if httpd was running in its own context, if that script was not being invoked by httpd?
Bennett
On Jan 1, 2012, at 8:50 PM, Bennett Haselton wrote:
How do you know that the attacker had console access? If you mean ssh access, there may not be a need for the attacker to use that -- the attacker can simply use an exploit in a known web application and install their own copy of ssh or some other remote access daemon, and do it in a way that is not detectable even in the system logs.
Which brings up an important point: most intrusions on web hosts come through unpatched web applications, which attackers then use to attempt further privilege escalation. That is why you have mechanisms like SELinux to lock down the app, so that the worst they can do is break the site, not become root.
- Rilindo
On 01/02/2012 02:50 AM, Bennett Haselton wrote:
None of us really knows how they got in. All you will get from this mailing list is speculation, apart from useful instructions on how to gather as much information as possible. So there are many possible ways they got in, including brute force. As I understood you, you use neither fail2ban, denyhosts, nor logwatch, and you haven't checked those two servers much in recent months.
What Rilindo is saying is that SELinux might detect exploits as they try to push processes outside their normal routine (what SELinux allows), and all of this (if it happened via exploits) might have been prevented by SELinux. You really do have a lot of gaps in your security. If I were you, I would follow all the advice given to you here and secure the rest of your servers (SELinux, fail2ban/denyhosts, logwatch, rsyslog, etc.).
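For example, with the fail2ban package from EPEL, a minimal jail for sshd looks roughly like this (just a sketch -- section and action names differ a bit between fail2ban versions):

# /etc/fail2ban/jail.conf (or jail.local), 0.8.x style
[ssh-iptables]
enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
logpath  = /var/log/secure
maxretry = 5
bantime  = 3600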
On Sun, Jan 1, 2012 at 6:04 PM, Ljubomir Ljubojevic office@plnet.rs wrote:
None of us really knows how they got in. All you will get from this mailing list is speculation, apart from useful instructions on how to gather as much information as possible.
Speculation is fine; I'm interested in any method that the attacker *could* have used to get into the server, if it's *logically possible* that the attack could have had a non-trivial chance of working. (If they could have gotten in by method A, and they could have gotten in by method B, then regardless of which they DID use, I should still try to fix both A and B.)
But that still excludes things like "they got in because you used a password instead of an ssh key", as I have said many times why a 12-character random alphanumeric password (with 10^24 possible values) is secure enough even with the most pessimistic assumptions about botnets and GPUs.
So there are many possible ways they got in, including brute force.
Knowing that it was a 12-character random alphanumeric password, do you still think it could have been done by brute force?
As I understood you, you use neither fail2ban, denyhosts, nor logwatch, and you haven't checked those two servers much in recent months.
What Rilindo is saying is that SELinux might detect exploits as they try to push processes outside their normal routine (what SELinux allows), and all of this (if it happened via exploits) might have been prevented by SELinux. You really do have a lot of gaps in your security.
A couple of people have pointed out "gaps" in what I was doing for *detection and analysis* after the fact (don't wipe a machine after it's hacked, have logs going back further than 4 weeks, have the logs backed up off-site, etc.), and that's useful. But as far as I can tell nobody has really pointed out any "gap" in what I'm doing on the *prevention* side.
To focus on one exact question: What is a *preventive* measure that you think I am not doing, that you think would reduce the chance of a break-in, in some specific scenario?
Take fail2ban and denyhosts, which block connections from IPs that make too many invalid ssh logins. In the scenario where the attacker is trying to brute-force the login, if your ssh password is already un-bruteforceable, then blocking connections from IPs that make too many invalid attempts will not reduce the chance of a successful brute-force (which is already essentially zero).
Do you think I'm incorrect that fail2ban and/or denyhosts (and switching from passwords to ssh keys) do not reduce the chance of a breakin, if your ssh password is a truly random mixed-case-alphanumeric 12-character string? If you think this is incorrect, why?
If I were you, I would follow all the advice given to you here and secure the rest of your servers (SELinux, fail2ban/denyhosts, logwatch, rsyslog, etc.).
I tried SELinux but it broke so much needed functionality on the server that it was not an option. There are many, many issues where all the forum discussions about how to solve the problem, just end with "Just turn off selinux", and nobody apparently has any other idea how to solve it. But in any case, what exactly does it do for httpd security, that isn't already accomplished by having httpd run as the unprivileged "apache" user? You said SELinux could prevent an exploit from "breaking a process from its routine". But even without SELinux, an attacker who found an exploit that could take control of httpd and make it try any action he wanted, still wouldn't be able to actually do anything while running as "apache", would they?
On Mon, Jan 2, 2012 at 6:03 AM, Bennett Haselton bennett@peacefire.org wrote:
I tried SELinux but it broke so much needed functionality on the server that it was not an option.
Pretty much all of the stock programs work with SELinux, so this by itself implies that you are running 3rd party or local apps that have write access in non-standard places. Which is a good start at what you need to break in. What apps are those (i.e. the ones that SELinux would have broken) and if they are open source, have those projects updated the app or the underlying language(s)/libraries since you have?
You said SELinux could prevent an exploit from "breaking a process from its routine". But even without SELinux, an attacker who found an exploit that could take control of httpd and make it try any action he wanted, still wouldn't be able to actually do anything while running as "apache", would they?
There have been many, many vulnerabilities that permit local user privilege escalation to root (in the kernel, glibc, suid programs, etc.) and there are probably many we still don't know about. They often require writing to the filesystem. For example, one fixed around 5.4 just required the ability to make a symlink somewhere. The published exploit script (which I've seen in the wild) tries to use /tmp. If the httpd process can't write in /tmp, it would fail.
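For a concrete sense of what the confinement looks like, with the stock targeted policy loaded httpd runs in its own domain rather than unconfined -- a rough sketch:

# apache processes carry the httpd_t domain when SELinux is on
ps -eZ | grep httpd
# and the policy, not just the unix permission bits, decides whether
# httpd_t may write to tmp_t, make outbound connections, execute what
# it wrote, and so on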
On 1/2/2012 9:18 AM, Les Mikesell wrote:
Pretty much all of the stock programs work with SELinux, so this by itself implies that you are running 3rd party or local apps that have write access in non-standard places. Which is a good start at what you need to break in. What apps are those (i.e. the ones that SELinux would have broken) and if they are open source, have those projects updated the app or the underlying language(s)/libraries since you have?
So here's a perfect example. I installed squid on one machine and changed it to listen to a non-standard port instead of 3128. It turns out that SELinux blocks this. (Which I still don't see the reasoning behind. Why would it be any less secure to run a service on a non-standard port? A lot of sources say it's *more* secure to run services on a non-standard port if you don't want people poking around! In general I don't think it's any more secure to run a service on a non-standard port -- all it buys you is time, as it's trivial for an attacker to scan all your ports, especially with a botnet -- but even if it's not more secure, it certainly shouldn't be *less* secure.)
But here's the real problem. Since I didn't know it was caused by SELinux, all the cache.log file said was:
2012/01/02 17:40:40| commBind: Cannot bind socket FD 13 to *:[portnum redacted]: (13) Permission denied
Nothing indicating why. Even worse, if you Google +squid +"cannot bind socket" +"permission denied", *none* of the first ten pages that come up mention SELinux as a possible source of the problem. (One FAQ mentions SELinux, but only as the source of a different problem.) All they do is recommend other workarounds, each of which takes time to test out and may have other side effects.
In other words, when SELinux causes a problem, it can take hours or days to find out that SELinux is the cause -- and even then you're not done, because you have to figure out a workaround if you want to fix the problem while keeping SELinux turned on.
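(For what it's worth, I gather the SELinux-sanctioned fix would have been to add the port to squid's allowed set -- a sketch, with a placeholder port since I'm not posting the real one:)

# list the ports the policy already allows squid to bind
semanage port -l | grep squid
# add the non-standard port (3129 here is just a placeholder); the type
# may be squid_port_t or http_cache_port_t depending on the policy version
semanage port -a -t squid_port_t -p tcp 3129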
-Bennett
On 01/03/2012 03:30 AM, Bennett Haselton wrote:
In other words, when SELinux causes a problem, it can take hours or days to find out that SELinux is the cause -- and even then you're not done, because you have to figure out a workaround if you want to fix the problem while keeping SELinux turned on.
You can always set SELinux to permissive mode for testing purposes; it will then allow the action but report that it would have been blocked.
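For example:

getenforce      # shows Enforcing, Permissive, or Disabled
setenforce 0    # permissive until the next reboot
# ...reproduce the problem, check the denials in /var/log/audit/audit.log...
setenforce 1    # back to enforcing
# to make permissive the default at boot, set SELINUX=permissive
# in /etc/selinux/config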
On 1/2/2012 9:41 PM, Ljubomir Ljubojevic wrote:
You can always set SELinux to permissive mode for testing purposes; it will then allow the action but report that it would have been blocked.
Then, reboot back into enforcing mode and run "audit2allow", and it will tell you how to set up a module which can be installed so that SELinux will allow the operation.
Here is a little file I keep in my /root directory to remind me of some basic SELinux stuff:
--------------------------------------------------------------------------
[root@monstro selinux]# more README
Procedure to make an SELinux policy named localtmp...

cd /root
mkdir tmp
cd tmp
chcon -R -t usr_t .
ln -s /usr/share/selinux/devel/Makefile .
audit2allow -m mickey1 -i /var/log/audit/audit.log -o mickey1.te
make -f /usr/share/selinux/devel/Makefile
mv filename.te filename.pp ../selinux/
cd ../selinux
semodule -i filename.pp

Commands to fix sshd binding to non-standard ports...
semanage port -a -t ssh_port_t -p tcp 2244
semanage port -l | grep 22

Needed by samba
setsebool -P samba_export_all_ro 1
setsebool -P samba_enable_home_dirs 1
setsebool -P samba_export_all_rw 1

[root@monstro selinux]#
--------------------------------------------------------------------------
Harold
On Mon, Jan 2, 2012 at 8:30 PM, Bennett Haselton bennett@peacefire.org wrote:
So here's a perfect example. I installed squid on one machine and changed it to listen to a non-standard port instead of 3128. It turns out that SELinux blocks this. (Which I still don't see the reasoning behind. Why would it be any less secure to run a service on a non-standard port?
Standard/non-standard isn't the point. The point is to control what an app can do even if some unexpected flaw lets it execute arbitrary code.
But here's the real problem. Since I didn't know it was caused by SELinux, all the cache.log file said was:
Things blocked by SELinux show up in audit.log. I agree that the SELinux implementation is confusing, but it does give more control over what apps are allowed to do.
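For example, something along these lines pulls out the recent denials and explains them:

# recent AVC denials (ausearch comes with the audit package)
ausearch -m avc -ts recent
# or feed the raw log to audit2why for an explanation of each denial
audit2why < /var/log/audit/audit.log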
In other words, when SELinux causes a problem, it can take hours or days to find out that SELinux is the cause
Errr, no - all you have to do is disable SELinux and see if the behavior changes. On your test machine where you should be testing changes, this shouldn't be a big risk.
-- and even then you're not done,
because you have to figure out a workaround if you want to fix the problem while keeping SELinux turned on.
You do have a point there.
On 1/2/2012 7:48 PM, Les Mikesell wrote:
Standard/non-standard isn't the point. The point is to control what an app can do even if some unexpected flaw lets it execute arbitrary code.
What's the scenario where this port restriction would make a difference? Suppose an attacker does find a way to make squid execute arbitrary code. Then if part of their plan is to make squid start listening on a second port in addition to the one it's already using (3128, the default), they could just make it listen on another port like 8080 which is permitted by SELinux.
Errr, no - all you have to do is disable SELinux and see if the behavior changes. On your test machine where you should be testing changes, this shouldn't be a big risk.
Well I meant, if you didn't happen to know enough about SELinux to realize that it was the cause of many non-standard system behaviors. If you knew about SELinux as one of those things that frequently gets in the way, then you'd probably think of it a lot faster :)
One could easily say, "Hey, you should just know about SELinux", but the problem is that there can be dozens of things on a machine that could potentially cause failures (without giving a useful error message), and each additional thing that you "should just know about", decreases the chances that any one person would actually know to check them all, if they're not a professional admin.
Again, you don't have to take my word for it -- in the first 10 Google hits of pages with people posting about the problem I ran into, none of the people helping them thought to suggest SELinux as the cause of the problem. (By contrast, when iptables causes a problem, people usually figure out that's what's going on.)
Bennett
On Mon, Jan 02, 2012 at 10:41:15PM -0800, Bennett Haselton wrote:
Again, you don't have to take my word for it -- in the first 10 Google hits of pages with people posting about the problem I ran into, none of the people helping them thought to suggest SELinux as the cause of the problem. (By contrast, when iptables causes a problem, people usually figure out that's what's going on.)
There's a lot of FUD going around in this thread. If people would bother to spend some time _reading_ _documentation_ on the systems they are attempting to admin they might find that subsystems such as selinux aren't quite as complex as they make them out to be. Oh, and like all other aspects of the internet, google is just as susceptible to indexing idiots as it is to indexing pertinent and accurate results.
selinux is fully integrated into the system auditing facilities and even provides multiple tools to aid an administrator in problem isolation and remediation. These tools are, of course, fully documented.
There is _ample_ documentation on the web, starting with the CentOS wiki site, that covers selinux in great detail. I would urge you and anyone else not familiar with the facilities that selinux offers, both from an enforcement standpoint and also from a management standpoint, to spend some quality time reading up on it.
Blaming selinux itself for creating what you perceive as a "problem" because you won't make a rudimentary attempt at learning to properly manage it is ludicrous.
Anyone that has an internet facing box that does not take advantage of each and every security technology at their disposal should be in a different line of work. And taking advantage of such technologies requires one to read associated documentation.
And while this response seems to single you out I mean to point a finger at anyone out there that doesn't bother to take time to learn about systems / data that they are responsible for.
John
On 1/2/2012 11:01 PM, John R. Dennison wrote:
There's a lot of FUD going around in this thread. If people would bother to spend some time _reading_ _documentation_ on the systems they are attempting to admin they might find that subsystems such as selinux aren't quite as complex as they make them out to be.
Well for one thing, much of the documentation for Linux tools is missing, unclear, or sometimes just wrong. Here's a message I posted two years ago: http://forums.mysql.com/read.php?11,280886,280886#msg-280886 about how when you first install MySQL, it tells you -- shouting in all caps, no less -- to set a password by running a pair of commands, where the second command is always guaranteed to fail (for reasons explained in the post). I verified this on every dedicated server I had access to. But the message never got answered, and for all I know MySQL still shouts the wrong information at every new user who installs it.
However, completely wrong documentation is actually rare; the problem with most documentation is that it's unclear or ambiguous, because it's written or judged with the mindset that users "should" know enough to resolve the ambiguities and the errors, and if the user doesn't know enough to figure out the errors, it's a failing on the user's part. Or the documentation is extremely long-winded and doesn't contain a short version that is good enough for 99% of users' purposes, because it's assumed that if users don't want to read the 50-page version, that's also a failing on the user's part.
I think most unclear documentation could easily be made better. I'd be happy to volunteer for a project where someone writing documentation wanted to check to see if it made sense to people who didn't already know what the author was trying to say. But the authors have to want to do that. The main obstacle is the mindset that problems with documentation are presumed to be the user's fault.
(To be fair, this isn't a problem specific to Linux documentation; many instructions out there are pretty bad, because it's notoriously difficult to judge the quality of instructions in a field in which you're an expert, since your brain automatically fixes errors and ambiguities if you know what the author was trying to say. Recipes written by cooking experts are pretty bad.)
Oh, and like all other aspects of the internet, google is just as susceptible to indexing idiots as it is to indexing pertinent and accurate results.
selinux is fully integrated into the system auditing facilities and even provides multiple tools to aid an administrator in problem isolation and remediation. These tools are, of course, fully documented.
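For reference, a minimal sketch of the kind of diagnostic tools being referred to there, assuming the audit and setroubleshoot-server packages are installed (these are the standard tool names; check your own install):

  # show recent SELinux denials (AVC messages) from the audit log
  ausearch -m avc -ts recent
  # explain why each denial happened
  ausearch -m avc -ts recent | audit2why
  # human-readable analysis of the whole audit log (setroubleshoot)
  sealert -a /var/log/audit/audit.log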
On Tue, Jan 3, 2012 at 12:41 AM, Bennett Haselton bennett@peacefire.org wrote:
Standard/non-standard isn't the point. The point is to control what an app can do even if some unexpected flaw lets it execute arbitrary code.
What's the scenario where this port restriction would make a difference? Suppose an attacker does find a way to make squid execute arbitrary code. Then if part of their plan is to make squid start listening on a second port in addition the one it's already using (3128, the default), they could just make it listen on another port like 8080 which is permitted by SELinux.
You are thinking in the wrong direction. No one can anticipate everything someone might do to take advantage of a vulnerability. Instead think in terms of the minimum the application should be allowed to do under any circumstance. Then you'll also have firewalls blocking every port except where you know your own application is listening anyway.
In other words, when SELinux causes a problem, it can take hours or days to find out that SELinux is the cause
Errr, no - all you have to do is disable SELinux and see if the behavior changes. On your test machine where you should be testing changes, this shouldn't be a big risk.
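A minimal sketch of that test -- switching to permissive mode rather than disabling SELinux outright, so denials are still logged while you reproduce the problem:

  getenforce     # shows Enforcing / Permissive / Disabled
  setenforce 0   # permissive: log denials but don't block
  # ... reproduce the problem ...
  setenforce 1   # back to enforcing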
Well I meant, if you didn't happen to know enough about SELinux to realize that it was the cause of many non-standard system behaviors. If you knew about SELinux as one of those things that frequently gets in the way, then you'd probably think of it a lot faster :)
Well, the other security rules for linux/unix are trivial, so SELinux should pop to mind immediately for surprising behavior, especially if you have changed configurations from the expected defaults.
One could easily say, "Hey, you should just know about SELinux", but the problem is that there can be dozens of things on a machine that could potentially cause failures (without giving a useful error message), and each additional thing that you "should just know about", decreases the chances that any one person would actually know to check them all, if they're not a professional admin.
OK, the point here is that you have some unknown vulnerability that the stock linux security mechanisms aren't handling. I'm more inclined to think it is a software bug rather than brute-forcing a password, but that's speculation. So, which do you think will be more difficult - tracking down some unknown bug in millions of lines of code with no real evidence, or adding another layer of security that is mostly pre-configured in the distro anyway?
Again, you don't have to take my word for it -- in the first 10 Google hits of pages with people posting about the problem I ran into, none of the people helping them thought to suggest SELinux as the cause of the problem.
Most distributions don't include SELinux, so don't expect to find it always mentioned in random googling.
On 1/2/2012 11:04 PM, Les Mikesell wrote:
On Tue, Jan 3, 2012 at 12:41 AM, Bennett Haseltonbennett@peacefire.org wrote:
Standard/non-standard isn't the point. The point is to control what an app can do even if some unexpected flaw lets it execute arbitrary code.
What's the scenario where this port restriction would make a difference? Suppose an attacker does find a way to make squid execute arbitrary code. Then if part of their plan is to make squid start listening on a second port in addition the one it's already using (3128, the default), they could just make it listen on another port like 8080 which is permitted by SELinux.
You are thinking in the wrong direction. No one can anticipate everything someone might do to take advantage of a vulnerability. Instead think in terms of the minimum the application should be allowed to do under any circumstance. Then you'll also have firewalls blocking every port except where you know your own application is listening anyway.
I agree about minimum permissions, but my argument here is that permitting squid to "listen" on any arbitrary port it wants is just as "minimum", in terms of security implications, as permitting it to "listen" on port 3128. Either way, once the attacker has connected to it, the scenarios are identical from that point on (either the attacker knows an exploit to take control of squid, or they don't; either the squid process runs with sufficiently high privileges that the exploit can be used to do damage, or it doesn't).
In other words, when SELinux causes a problem, it can take hours or days to find out that SELinux is the cause
Errr, no - all you have to do is disable SELinux and see if the behavior changes. On your test machine where you should be testing changes, this shouldn't be a big risk.
Well I meant, if you didn't happen to know enough about SELinux to realize that it was the cause of many non-standard system behaviors. If you knew about SELinux as one of those things that frequently gets in the way, then you'd probably think of it a lot faster :)
Well, the other security rules for linux/unix are trivial, so SELinux should pop to mind immediately for surprising behavior, especially if you have changed configurations from the expected defaults.
One could easily say, "Hey, you should just know about SELinux", but the problem is that there can be dozens of things on a machine that could potentially cause failures (without giving a useful error message), and each additional thing that you "should just know about", decreases the chances that any one person would actually know to check them all, if they're not a professional admin.
OK, the point here is that you have some unknown vulnerability that the stock linux security mechanisms aren't handling. I'm more inclined to think it is a software bug rather than brute-forcing a password, but that's speculation. So, which do you think will be more difficult - tracking down some unknown bug in millions of lines of code with no real evidence, or adding another layer of security that is mostly pre-configured in the distro anyway?
Well I'm trying to weigh the costs of using it -- with all of the silent failures for operations that have no security implications -- against the reduced risks. If many exploits against httpd, for example, could have been prevented by SELinux, that may make it worth it. (And of course the suggestions about how to diagnose problems caused by SELinux are helpful.)
Bennett
On Jan 2, 2012, at 9:30 PM, Bennett Haselton wrote:
On 1/2/2012 9:18 AM, Les Mikesell wrote:
On Mon, Jan 2, 2012 at 6:03 AM, Bennett Haseltonbennett@peacefire.org wrote:
I tried SELinux but it broke so much needed functionality on the server that it was not an option.
Pretty much all of the stock programs work with SELinux, so this by itself implies that you are running 3rd party or local apps that have write access in non-standard places. Which is a good start at what you need to break in. What apps are those (i.e. the ones that SELinux would have broken) and if they are open source, have those projects updated the app or the underlying language(s)/libraries since you have?
So here's a perfect example. I installed squid on one machine and changed it to listen on a non-standard port instead of 3128. It turns out that SELinux blocks this. (Which I still don't see the reasoning behind. Why would it be any less secure to run a service on a non-standard port? A lot of sources say it's *more* secure to run services on a non-standard port if you don't want people poking around! In general I don't think it's any more secure to run a service on a non-standard port -- all it buys you is time, as it's trivial for an attacker to scan all your ports, especially with a botnet -- but even if it's not more secure, it certainly shouldn't be *less* secure.)
But here's the real problem. Since I didn't know it was caused by SELinux, all the cache.log file said was:
2012/01/02 17:40:40| commBind: Cannot bind socket FD 13 to *:[portnum redacted]: (13) Permission denied
Nothing indicating why. Even worse, if you Google +squid +"cannot bind socket" +"permission denied", *none* of the first ten pages that come up mention SELinux as a possible source of the problem. (One FAQ mentions SELinux but only as the source of a different problem.) All they do is recommend other workarounds, each of which takes time to test out and may have other side-effects.
In other words, when SELinux causes a problem, it can take hours or days to find out that SELinux is the cause -- and even then you're not done, because you have to figure out a workaround if you want to fix the problem while keeping SELinux turned on.
How is it SELinux's problem? It's like running iptables with the default http ports open and then blaming iptables when you change Apache to a non-standard port.
SELinux's list of open ports is pretty extensive. You can use the semanage command (part of the policycoreutils-python package) to get the list of allowed ports and grep for specific items that are open by default (like http and squid):

[root@kerberos kvm]# semanage port -l | grep http
http_cache_port_t              tcp      3128, 8080, 8118, 8123, 10001-10010
http_cache_port_t              udp      3130
http_port_t                    tcp      80, 443, 488, 8008, 8009, 8443
pegasus_http_port_t            tcp      5988
pegasus_https_port_t           tcp      5989
- Rilindo
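As a follow-up to that listing: if you do want squid on a port that isn't in its allowed set, the usual fix under the targeted policy is to add the port rather than turn SELinux off (a sketch; 3129 is just a placeholder port):

  semanage port -a -t http_cache_port_t -p tcp 3129
  semanage port -l | grep http_cache_port_t   # verify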
On 1/2/2012 8:11 PM, RILINDO FOSTER wrote:
On Jan 2, 2012, at 9:30 PM, Bennett Haselton wrote:
On 1/2/2012 9:18 AM, Les Mikesell wrote:
On Mon, Jan 2, 2012 at 6:03 AM, Bennett Haseltonbennett@peacefire.org wrote:
I tried SELinux but it broke so much needed functionality on the server that it was not an option.
Pretty much all of the stock programs work with SELinux, so this by itself implies that you are running 3rd party or local apps that have write access in non-standard places. Which is a good start at what you need to break in. What apps are those (i.e. the ones that SELinux would have broken) and if they are open source, have those projects updated the app or the underlying language(s)/libraries since you have?
So here's a perfect example. I installed squid on one machine and changed it to listen on a non-standard port instead of 3128. It turns out that SELinux blocks this. (Which I still don't see the reasoning behind. Why would it be any less secure to run a service on a non-standard port? A lot of sources say it's *more* secure to run services on a non-standard port if you don't want people poking around! In general I don't think it's any more secure to run a service on a non-standard port -- all it buys you is time, as it's trivial for an attacker to scan all your ports, especially with a botnet -- but even if it's not more secure, it certainly shouldn't be *less* secure.)
But here's the real problem. Since I didn't know it was caused by SELinux, all the cache.log file said was:
2012/01/02 17:40:40| commBind: Cannot bind socket FD 13 to *:[portnum redacted]: (13) Permission denied
Nothing indicating why. Even worse, if you Google +squid +"cannot bind socket" +"permission denied", *none* of the first ten pages that come up mention SELinux as a possible source of the problem. (One FAQ mentions SELinux but only as the source of a different problem.) All they do is recommend other workarounds, each of which takes time to test out and may have other side-effects.
In other words, when SELinux causes a problem, it can take hours or days to find out that SELinux is the cause -- and even then you're not done, because you have to figure out a workaround if you want to fix the problem while keeping SELinux turned on.
How is it SELinux's problem? It's like running iptables with the default http ports open and then blaming iptables when you change Apache to a non-standard port.
Well for one thing, if you knew iptables was running and you knew what it was, you might suspect that it's the cause of the problem, since that's the kind of thing iptables is supposed to block. On the other hand, if you knew that SELinux is a security enforcement system, it's much less likely to come to mind as the reason why squid can't run on a different port, since this achieves nothing for the security of your system.
Don't take my word for it -- you can google for people asking about such problems. If someone reports a problem with port blockage that is caused by iptables, at least some users will usually suggest iptables as the issue. On the other hand, when I googled for my error which was caused by SELinux, nobody in the first ten pages of suggestions knew that SELinux could cause this.
So I stand by the statement that SELinux is more likely to cause problems that are hard to figure out for people who aren't professional admins.
And then there's the fact that solving a problem caused by iptables is usually trivial, while every problem caused by SELinux has a different solution.
Bennett
On Tue, Jan 3, 2012 at 12:23 AM, Bennett Haselton bennett@peacefire.org wrote:
So I stand by the statement that SELinux is more likely to cause problems that are hard to figure out for people who aren't professional admins.
Don't think anyone claims otherwise. Or that security is easy.
And then there's the fact that solving a problem caused by iptables is usually trivial, while every problem caused by SELinux has a different solution.
Yes, but the system comes configured so the standard services work. You only have to deal with changes you make to the system yourself or for 3rd party apps that aren't packaged properly.
On 3 January 2012 02:30, Bennett Haselton bennett@peacefire.org wrote:
In other words, when SELinux causes a problem, it can take hours or days to find out that SELinux is the cause -- and even then you're not done, because you have to figure out a workaround if you want to fix the problem while keeping SELinux turned on.
Unfortunately, good security is hard. I didn't understand SELinux a few years back and turned it off but didn't realise that a php application on my webserver left me vulnerable. Sure enough, one day I was attacked but luckily I had set the permissions up very tightly and they were unable to cause any damage.
These days, I wouldn't leave it to chance and would keep SELinux as an additional layer of security; yes it's annoying at times, yes it can be difficult to get right but investing a few hours now is better than taking your critical systems down for days in the future. There are lots of resources out there to help you understand it - ones I have used in the past include:
http://www.amazon.co.uk/SELinux-Source-Security-Enhanced-Linux/dp/0596007167... http://www.ibm.com/developerworks/linux/library/l-selinux/ http://www.ibm.com/developerworks/linux/library/l-rbac-selinux/
SELinux isn't a panacea and should be combined with other security precautions, but it will help you when the attackers come knocking on your server if you take the time to configure it properly.
Ben
On 1/2/2012 9:18 AM, Les Mikesell wrote:
There have been many, many vulnerabilities that permit local user privilege escalation to root (in the kernel, glibc, suid programs, etc.) and there are probably many we still don't know about. They often require writing to the filesystem. For example, one fixed around 5.4 just required the ability to make a symlink somewhere. The published exploit script (which I've seen in the wild) tries to use /tmp. If the httpd process can't write in /tmp, it would fail.
So are you saying that SELinux is supposed to prevent httpd from writing to /tmp ?
Because I just tested that and SELinux didn't appear to stop it. I set selinux to "enforcing", rebooted just to make sure, and put this perl script on my webserver:
#!/usr/bin/perl
use IO::File;
use strict;
my $fh = IO::File->new("> /tmp/foo.txt");
close($fh);
print "Content-type: text/html\n\nDone.\n";
then invoked it from the web, and this file was created:
[root@g6950-21025 ~]# ls -l /tmp/foo.txt
-rw-r--r-- 1 apache apache 0 Jan 2 16:47 /tmp/foo.txt
[root@g6950-21025 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#   enforcing - SELinux security policy is enforced.
#   permissive - SELinux prints warnings instead of enforcing.
#   disabled - SELinux is fully disabled.
SELINUX=enforcing
# SELINUXTYPE= type of policy in use. Possible values are:
#   targeted - Only targeted network daemons are protected.
#   strict - Full SELinux protection.
SELINUXTYPE=targeted
So it looks like SELinux in this case does not prevent httpd from writing to /tmp , in which case it would not have prevented the exploit you're referring to. Or am I missing something?
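One way to check what the loaded policy actually allows, rather than testing empirically, is sesearch from the setools package (a sketch; httpd_sys_script_t is the domain CGI scripts normally run in under the targeted policy):

  sesearch --allow -s httpd_t -t tmp_t -c file
  sesearch --allow -s httpd_sys_script_t -t tmp_t -c file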
On Jan 2, 2012, at 9:37 PM, Bennett Haselton wrote:
On 1/2/2012 9:18 AM, Les Mikesell wrote:
There have been many, many vulnerabilities that permit local user privilege escalation to root (in the kernel, glibc, suid programs, etc.) and there are probably many we still don't know about. They often require writing to the filesystem. For example, one fixed around 5.4 just required the ability to make a symlink somewhere. The published exploit script (which I've seen in the wild) tries to use /tmp. If the httpd process can't write in /tmp, it would fail.
So are you saying that SELinux is supposed to prevent httpd from writing to /tmp ?
Because I just tested that and SELinux didn't appear to stop it. I set selinux to "enforcing", rebooted just to make sure, and put this perl script on my webserver:
#!/usr/bin/perl
use IO::File;
use strict;
my $fh = IO::File->new("> /tmp/foo.txt");
close($fh);
print "Content-type: text/html\n\nDone.\n";
then invoked it from the web, and this file was created:
[root@g6950-21025 ~]# ls -l /tmp/foo.txt
-rw-r--r-- 1 apache apache 0 Jan 2 16:47 /tmp/foo.txt
[root@g6950-21025 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#   enforcing - SELinux security policy is enforced.
#   permissive - SELinux prints warnings instead of enforcing.
#   disabled - SELinux is fully disabled.
SELINUX=enforcing
# SELINUXTYPE= type of policy in use. Possible values are:
#   targeted - Only targeted network daemons are protected.
#   strict - Full SELinux protection.
SELINUXTYPE=targeted
Actually, SELinux needs to let httpd write to /tmp, otherwise scripts such as those written in PHP will fail.
What it WON'T do is let scripts execute from that directory. You have to explicitly enable it with:
setsebool -P httpd_tmp_exec on
So yes, it would have prevented attackers from launching exploit scripts in /tmp. Of course, mounting /tmp as non-executable would work the same, but that requires that /tmp be a separate file system, which may not be an option if the server is already partitioned.*
- Rilindo
*It could work if you use the --bind option, but I can't confirm that, unfortunately.
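For completeness, a sketch of checking the boolean mentioned above and of the --bind idea -- the bind-remount part is untested here, and older kernels may not honor every flag on a bind remount:

  getsebool httpd_tmp_exec             # is script execution from /tmp currently allowed?
  mount --bind /tmp /tmp               # remount /tmp on top of itself
  mount -o remount,noexec,nosuid /tmp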
On Mon, Jan 2, 2012 at 9:33 AM, RILINDO FOSTER rilindo@me.com wrote:
The script in question is an exploit from a web board which is apparently designed to pull outside traffic. If you had SELinux, it would put httpd in its own context and, by default, it would NOT allow connections from that context to another. You have to enable that with:
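The command itself appears to have been cut off in the archive; presumably it is the boolean that allows the httpd domain to make outbound network connections, something like:

  setsebool -P httpd_can_network_connect on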
The only time my server got hacked was because of phpBB. Using cross-site scripting, the hacker managed to put a .pl file on the server, and when I ran it, it opened a console. Apparently you are running one of the web boards. Please follow the security advisories for that product and any addons/modules closely.
If you are really curious how yours got hacked, you can set up a similar system and put up a bounty (maybe $1000) in one of the underground communities for anyone to hack it and tell you how they did it.
On Sun, Jan 1, 2012 at 6:03 PM, Fajar Priyanto fajarpri@arinet.org wrote:
On Mon, Jan 2, 2012 at 9:33 AM, RILINDO FOSTER rilindo@me.com wrote:
The script in question is an exploit from a web board which is apparently designed to pull outside traffic. If you had SELinux, it would put httpd in its own context and, by default, it would NOT allow connections from that context to another. You have to enable that with:
The only time my server got hacked was because of phpBB. Using cross-site scripting, the hacker managed to put a pl file and when I ran it, it opened a console. Apparently you are running one of the web boards.
I'm not running phpBB or vBulletin. The script apparently runs on machine X to attack a *different* machine Y that has vBulletin installed on it.
Pls follow up any security advisories of that product and any addon/module closely.
If you are really curious how yours got hacked, you can set up a similar system and put up a bounty (maybe $1000) in one of the underground communities for anyone to hack it and tell you how they did it.
Is there a non-"underground" place to post such requests? It's not illegal to offer a bounty to someone for finding a security hole in your system -- Facebook, Google, and Mozilla all do it.
Bennett
On Sun, Jan 1, 2012 at 4:23 PM, Bennett Haselton bennett@peacefire.org wrote:
So, following people's suggestions, the machine is disconnected and hooked up to a KVM so I can still examine the files. I've found this file: -rw-r--r-- 1 root root 1358 Oct 21 17:40 /home/file.pl which appears to be a copy of this exploit script: http://archive.cert.uni-stuttgart.de/bugtraq/2006/11/msg00302.html Note the last-mod date of October 21.
Did you do an rpm -Va to see if any installed files were modified besides your own changes? Even better if you have an old backup that you can restore somewhere and run an rsync -avn against the old/new instances.
Anywhere else that the logs would contain useful data?
/root/.bash_history might be interesting. Obviously this would be after the fact, but maybe they are trying to repeat the exploit with this machine as a base.
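A minimal sketch of both checks (the backup path is a placeholder; the grep just hides the config-file noise that rpm -Va normally reports):

  # verify installed packages; ignore config files, which commonly differ
  rpm -Va 2>/dev/null | grep -v ' c /'
  # dry-run comparison of a restored backup against the live tree (placeholder path)
  rsync -avn /mnt/restored-backup/ / | less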
On Sun, Jan 1, 2012 at 5:01 PM, Les Mikesell lesmikesell@gmail.com wrote:
On Sun, Jan 1, 2012 at 4:23 PM, Bennett Haselton bennett@peacefire.org wrote:
So, following people's suggestions, the machine is disconnected and hooked up to a KVM so I can still examine the files. I've found this file: -rw-r--r-- 1 root root 1358 Oct 21 17:40 /home/file.pl which appears to be a copy of this exploit script: http://archive.cert.uni-stuttgart.de/bugtraq/2006/11/msg00302.html Note the last-mod date of October 21.
Did you do an rpm -Va to see if any installed files were modified besides your own changes? Even better if you have an old backup that you can restore somewhere and run an rsync -avn against the old/new instances.
rpm -Va gives:
....L...  c /etc/pam.d/system-auth
S.5....T  c /etc/httpd/conf.d/ssl.conf
SM5...GT  c /etc/squid/squid.conf
S.5....T  c /etc/login.defs
S.5....T  c /etc/ssh/sshd_config
S.5....T  c /etc/httpd/conf.d/welcome.conf
S.5....T  c /etc/httpd/conf/httpd.conf
S.5.....  c /etc/ldap.conf
S.5.....  c /etc/openldap/ldap.conf
....L...  c /etc/pam.d/system-auth
.......T  c /etc/audit/auditd.conf
S.5....T  c /etc/printcap
S.5....T  c /etc/yum/yum-updatesd.conf
S.5.....  c /etc/ldap.conf
S.5.....  c /etc/openldap/ldap.conf
According to http://www.rpm.org/max-rpm/s1-rpm-verify-output.html many config files do not verify successfully (and I recognize some of them from modifying them manually, and others presumably could have been modified by the hosting company).
I don't have a backup since there is no data stored only on the machine, so if anything is lost on the machine I just ask the host to re-format it and then re-upload everything.
Anywhere else that the logs would contain useful data?
/root/.bash_history might be interesting. Obviously this would be after the fact, but maybe they are trying to repeat the exploit with this machine as a base.
Good idea but it only shows the commands that I've run since logging in to try and find out what happened. Perhaps the attacker wiped /root/.bash_history after getting in.
On Sun, 2012-01-01 at 14:23 -0800, Bennett Haselton wrote:
[original post snipped]
The particular issue which was patched by this httpd (apache) update was to fix a problem with reverse proxying, so the first question is: did this server actually have a reverse proxy configured?
My next thought is that since this particular hacker managed to get access to more than one of your machines, is it possible that there is a mechanism (ie a pre-shared public key) that would allow them access to the second server from the first server they managed to crack? The point being that this computer may not have been the one that they originally cracked and there may not be evidence of cracking on this computer.
The script you identified would seem to be a script for attacking other systems and by the time it landed on your system, it was already broken into.
There are some tools to identify a hacker's access, though the traces are often obscured by the hacker...
last # reads /var/log/wtmp and provides a list of users, login date/time, login duration, etc. Read the man page for last to get other options on its usage, including the '-f' option to read older wtmp log files if needed.
lastb # reads /var/log/btmp much as above but lists 'failed' logins, though this requires pro-active configuration, and if you didn't do that, you probably will want to going forward.
Look at /etc/passwd to see what users are on your system and then search their $HOME directories carefully for any evidence that their account was the first one compromised. Very often, a single user with a weak password has his account cracked, and then a hacker can get a copy of /etc/shadow and brute-force the root password.
Consider that this type of activity is often done with 'hidden' files & directories. This hacker was apparently brazen enough to operate openly in /home so it's likely that he wasn't very concerned about his cracking being discovered.
The most important thing to do at this point is to figure out HOW they got into your systems in the first place, and discussions of SELinux and yum updates are not useful to that end. Yes, you should always update and always run SELinux, but that's not useful in determining what actually happened.
Make a list of all open ports on this system, check the directories, files, data from all daemons/applications that were exposed (Apache? PHP?, MySQL?, etc.) and be especially vigilant to any directories where user apache had write access.
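A minimal sketch of getting that list of open ports and the processes behind them:

  netstat -tulpn   # listening TCP/UDP sockets with owning process
  lsof -i -nP      # every open network file, per process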
Again though, I am concerned that your first action on your first discovered hacked server was to wipe it out, and I have a notion that it's entirely possible that the actual cracking occurred on that system and this (and perhaps other servers) are simply free gifts to the hacker because they had pre-shared keys or the same root password.
Craig
On Mon, Jan 2, 2012 at 12:04 AM, Craig White craigwhite@azapple.com wrote:
[original post snipped]
the particular issue which was patched by this httpd (apache) update was to fix a problem with reverse proxy so the first question is did this server actually have a reverse proxy configured?
I'm not exactly sure how to tell, but httpd.conf contains the lines:
LoadModule proxy_module modules/mod_proxy.so
LoadModule cache_module modules/mod_cache.so
so I think that means the answer is yes.
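Loading mod_proxy by itself doesn't necessarily mean a reverse proxy was actually configured; a quick way to check (a sketch) is to grep the active config for proxy directives:

  grep -riE 'ProxyPass|ProxyRequests|RewriteRule.*\[P' /etc/httpd/conf /etc/httpd/conf.d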
My next thought is that since this particular hacker managed to get access to more than one of your machines, is it possible that there is a mechanism (ie a pre-shared public key) that would allow them access to the second server from the first server they managed to crack? The point being that this computer may not have been the one that they originally cracked and there may not be evidence of cracking on this computer.
OK, no I don't have pre-shared keys or any other link between them.
The script you identified would seem to be a script for attacking other systems and by the time it landed on your system, it was already broken into.
Yes. That's what I've been saying to some apparently confused people who thought that the script was used to break into our server :)
There are some tools to identify a hackers access though they are often obscured by the hacker...
last # reads /var/log/wtmp and provides a list of users, login date/time login duration, etc. Read the man page for last to get other options on its usage including the '-f' option to read older wtmp log files if needed.
Tried that -- looks like this has been "obscured by the hacker" as you said, since the output says "wtmp begins Sun Jan 1 03:03:28 2012".
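If logrotate kept any older copies of wtmp, they may still be readable (a sketch; the .1 suffix is just the usual rotation name):

  ls -l /var/log/wtmp*
  last -f /var/log/wtmp.1   # read a rotated copy, if one exists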
lastb # reads /var/log/btmp much as above but lists 'failed' logins, though this requires pro-active configuration, and if you didn't do that, you probably will want to going forward.
looking at /etc/passwd to see what users are on your system and then search their $HOME directories carefully for any evidence that their account was the first one compromised. Very often, a single user with a weak password has his account cracked and then a hacker can get a copy of /etc/shadow and brute force the root password.
In the secure* logs going back the last four weeks I've seen "successful" logins for users called "ssh" and "bash". As far as I can tell those are not actually standard users on the system, so I assume the attacker created them to give themselves a backdoor in case I changed the root password, so I deleted those users.
But when the server was first set up, the only user was root.
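Two quick checks along those lines (a sketch):

  # any account besides root with UID 0?
  awk -F: '$3 == 0 {print $1}' /etc/passwd
  # accounts that still have a usable login shell
  grep -vE '(nologin|false)$' /etc/passwd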
Consider that this type of activity is often done with 'hidden' files & directories. This hacker was apparently brazen enough to operate openly in /home so it's likely that he wasn't very concerned about his cracking being discovered.
The most important thing to do at this point is to figure out HOW they got into your systems in the first place and discussions of SELinux and yum updates are not useful to that end. Yes, you should always update and always run SELinux but not useful in determining what actually happened.
Make a list of all open ports on this system, check the directories, files, data from all daemons/applications that were exposed (Apache? PHP?, MySQL?, etc.) and be especially vigilant to any directories where user apache had write access.
portmap and rpc.statd are listening, and I can turn those off. (That's an easy change to make since it reduces complexity and hence the number of things that can go wrong, rather than increasing it.) Although I don't know how much safer that makes me -- I don't know of any data showing how frequently exploits are discovered (whether published or not) in those services, compared to others like httpd that cannot be turned off. (If there are more exploits discovered in httpd than there are in portmap and rpc.statd combined, then presumably turning off portmap and rpc.statd reduces the chance of a break-in by less than 50% -- not trivial, but not reassuring, either.)
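For what it's worth, on CentOS 5 those are normally the portmap and nfslock init scripts (rpc.statd is started by nfslock); a sketch of turning them off:

  service nfslock stop && chkconfig nfslock off
  service portmap stop && chkconfig portmap off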
Again though, I am concerned that your first action on your first discovered hacked server was to wipe it out, and I have a notion that it's entirely possible that the actual cracking occurred on that system and this (and perhaps other servers) are simply free gifts to the hacker because they had pre-shared keys or the same root password.
OK that's something to keep in mind as a reason never to use pre-shared keys or a common root password :) but I didn't do that in this case so the two servers should have been as independent of each other as any other two servers on the Internet.
On 01/02/2012 02:04 AM, Craig White wrote:
On Sun, 2012-01-01 at 14:23 -0800, Bennett Haselton wrote:
[original post snipped]
At this point, I would like to point out that if the machine initially had ssh access only from the subnets you would have logged in from, then the Asian subnets would have been excluded. (Assuming that the original attack came from there.)
You can also set up openvpn on the server and control ports like ssh to only be open to you if you are using an openvpn client to connect to the machine.
the particular issue which was patched by this httpd (apache) update was to fix a problem with reverse proxy so the first question is did this server actually have a reverse proxy configured?
My next thought is that since this particular hacker managed to get access to more than one of your machines, is it possible that there is a mechanism (ie a pre-shared public key) that would allow them access to the second server from the first server they managed to crack? The point being that this computer may not have been the one that they originally cracked and there may not be evidence of cracking on this computer.
The script you identified would seem to be a script for attacking other systems and by the time it landed on your system, it was already broken into.
I agree with Craig -- the fact that the script is owned by root.root indicates that it was likely not put there by the apache daemon itself (unless you have the User and Group variables for your apache set to root and root).
This would tend to support Craig's assertion that the script was placed on the system after the fact and was not the source of the attack or placed there through an apache flaw.
You can look in your apache config file locations to see if /home/ is listed as a "DocumentRoot" location or if there is an alias that contains /home/.
You are also going to need to be able to execute CGI scripts ... so there is probably an "Options +ExecCGI" or "ScriptAlias" somewhere in a config file (either in /etc/httpd/conf/httpd.conf or a *.conf file in /etc/httpd/conf.d/ ... assuming they did not totally replace apache and set up a different location to read in config files).
This does not help you with the breakin ... it does help you find if any of your other machines are being used for the same thing.
There are some tools to identify a hackers access though they are often obscured by the hacker...
last # reads /var/log/wtmp and provides a list of users, login date/time login duration, etc. Read the man page for last to get other options on its usage including the '-f' option to read older wtmp log files if needed.
lastb # reads /var/log/btmp much as above but list 'failed' logins though this requires pro-active configuration and if you didn't do that, you probably will do that going forward.
looking at /etc/passwd to see what users are on your system and then search their $HOME directories carefully for any evidence that their account was the first one compromised. Very often, a single user with a weak password has his account cracked and then a hacker can get a copy of /etc/shadow and brute force the root password.
Consider that this type of activity is often done with 'hidden' files & directories. This hacker was apparently brazen enough to operate openly in /home so it's likely that he wasn't very concerned about his cracking being discovered.
Right, but you can find other directories in the apache config files if those are found.
The most important thing to do at this point is to figure out HOW they got into your systems in the first place and discussions of SELinux and yum updates are not useful to that end. Yes, you should always update and always run SELinux but not useful in determining what actually happened.
Make a list of all open ports on this system, check the directories, files, data from all daemons/applications that were exposed (Apache? PHP?, MySQL?, etc.) and be especially vigilant to any directories where user apache had write access.
Again though, I am concerned that your first action on your first discovered hacked server was to wipe it out and of a notion that it's entirely possible that the actual cracking occurred on that system and this (and perhaps other servers) are simply free gifts to the hacker because they had pre-shared keys or the same root password.
I would also look at that 3rd server very closely since they probably also had access there if they knew you owned it.
On 1/2/2012 7:29 AM, Johnny Hughes wrote:
On 01/02/2012 02:04 AM, Craig White wrote:
On Sun, 2012-01-01 at 14:23 -0800, Bennett Haselton wrote:
[original post snipped]
At this point, I would like to point out that if the machine initially had ssh access only from subnets where you would have logged in from then the Asian subnets would have been excluded. (Assuming that the original attack came from there)
You can also set up openvpn on the server and control ports like ssh to only be open to you if you are using an openvpn client to connect to the machine.
True but I travel a lot and sometimes need to connect to the machines from subnets that I don't know about in advance.
If I used openvpn to connect, and then connected via ssh over openvpn, this seems like essentially security through obscurity :) by just replacing the public ssh daemon with a different public daemon (with a different connection protocol) which an attacker could try to brute-force the same way they could try to brute-force sshd.
However it still seems that this would only matter if the attacker got in by brute-forcing the login. If they obtained the ability to run privileged commands any other way, then (1) they could continue to run privileged commands that way anyway, or (2) as their first action they could just remove all the IP address restrictions on ssh connections at which point they could connect normally via ssh from anywhere.
So this only matters when the attacker is trying to brute-force the login, and I still think that a 12-character random password is un-bruteforceable, which makes the IP restrictions moot.
If I'm wrong, then why? What do you think -- if my password is already a 12-character random string, do think it adds additional security to restrict ssh logins to only subnets that I'm logging in from? If so, then what's a specific scenario where the attacker would be able to break in (or would have a larger chance of breaking in) if I'm not restricting ssh logins by IP, but would not be able to break in if I were restricting ssh logins?
the particular issue which was patched by this httpd (apache) update was to fix a problem with reverse proxy so the first question is did this server actually have a reverse proxy configured?
My next thought is that since this particular hacker managed to get access to more than one of your machines, is it possible that there is a mechanism (ie a pre-shared public key) that would allow them access to the second server from the first server they managed to crack? The point being that this computer may not have been the one that they originally cracked and there may not be evidence of cracking on this computer.
The script you identified would seem to be a script for attacking other systems and by the time it landed on your system, it was already broken into.
I agree with Craig, the fact that the script is owned by root.root indicates that it was likely not put there by the apache daemon itself (unless you have the User and Group variable for your apache set to root and root).
This would tend to support Craig's assertion that the script was placed on the system after the fact and was not the source of the attack or placed there through an apache flaw.
Yes I agreed with Craig too :) Maybe my first post was not clear but I always assumed that they broke in first and put that file there afterwards, not that the file somehow enabled the break-in.
You can look in your apache config file locations to see if /home/ is listed as a "DocumentRoot" location or if there is a alias that contains /home/.
You are also going to need to be able to execute CGI scripts ... so there is probably a "Options +ExecCGI" or "ScriptAlias" somewhere in a config file (either in /etc/httpd/conf/http.conf or a *.conf file in /etc/httpd/conf.d/ ... assuming they did not totally replace apache and setup a different location to read in config files).
OK I checked. No "/home/" in httpd.conf or in any of the .conf files under conf.d.
This does not help you with the breakin ... it does help you find if any of your other machines are being used for the same thing.
There are some tools to identify a hackers access though they are often obscured by the hacker...
last # reads /var/log/wtmp and provides a list of users, login date/time login duration, etc. Read the man page for last to get other options on its usage including the '-f' option to read older wtmp log files if needed.
lastb # reads /var/log/btmp much as above but list 'failed' logins though this requires pro-active configuration and if you didn't do that, you probably will do that going forward.
looking at /etc/passwd to see what users are on your system and then search their $HOME directories carefully for any evidence that their account was the first one compromised. Very often, a single user with a weak password has his account cracked and then a hacker can get a copy of /etc/shadow and brute force the root password.
Consider that this type of activity is often done with 'hidden' files& directories. This hacker was apparently brazen enough to operate openly in /home so it's likely that he wasn't very concerned about his cracking being discovered.
Right, but you can find other directories in the apache config files if those are found.
The most important thing to do at this point is to figure out HOW they got into your systems in the first place and discussions of SELinux and yum updates are not useful to that end. Yes, you should always update and always run SELinux but not useful in determining what actually happened.
Make a list of all open ports on this system, check the directories, files, data from all daemons/applications that were exposed (Apache? PHP?, MySQL?, etc.) and be especially vigilant to any directories where user apache had write access.
Again though, I am concerned that your first action on your first discovered hacked server was to wipe it out and of a notion that it's entirely possible that the actual cracking occurred on that system and this (and perhaps other servers) are simply free gifts to the hacker because they had pre-shared keys or the same root password.
I would also look at that 3rd server very closely since they probably also had access there if they knew you owned it.
Well with all of the break-ins happening at one company, and other problems with them, I've been thinking that I might as well cancel all the servers hosted there anyway. Since I don't have any data stored on them and I can re-create the systems from scratch, might as well re-create them from scratch somewhere else.
There's always the possibility that the hosting company stored all of the root passwords somewhere where an attacker found them.
Bennett
On 01/02/2012 10:48 PM, Bennett Haselton wrote:
True but I travel a lot and sometimes need to connect to the machines from subnets that I don't know about in advance.
You could secure another system somewhere on the internet (could be a $20/month virtual host), leave no pointers to your production systems on that system, and allow remote logins on your production systems from that other host. It's called a back door. You could also take a look at something like fwknop. That in combination with some type of back door for the situation where you don't have your keys available should cover any situation where you need to get to your system. But access using the key authentication should be preferred and only use the back door for emergencies.
If I used openvpn to connect, and then connected via ssh over openvpn, this seems like essentially security through obscurity :) by just replacing the public ssh daemon with a different public daemon (with a different connection protocol) which an attacker could try to brute-force the same way they could try to brute-force sshd.
Pretty much all security is based on something that you know/have that the attacker doesn't know/have. This is true for computer access, the locks on your front door and the safe at the bank. What you're getting from the people on this list is their experience -- comments based on what they did that worked and what they did that didn't. Check the past 10 years of cert advisories and count the number of security advisories for sshd, and then count the number for openvpn.
However it still seems that this would only matter if the attacker got in by brute-forcing the login. If they obtained the ability to run privileged commands any other way, then (1) they could continue to run privileged commands that way anyway, or (2) as their first action they could just remove all the IP address restrictions on ssh connections at which point they could connect normally via ssh from anywhere.
The more security mechanisms you have in place, the greater the probability that, even if they make a partial compromise of your system, they will fail when they try to get through to the next level; and if you have warning systems, such as daily reports or even alerts sent to your cell phone, you might be able to stop them first.
So this only matters when the attacker is trying to brute-force the login, and I still think that a 12-character random password is un-bruteforceable, which makes the IP restrictions moot.
Experience has shown that passwords can be cracked much more easily than private/public keys. You're the one telling us that your system has been compromised. Others sharing this fact may not have their systems compromised, or if they did, they learned from it.
If I'm wrong, then why? What do you think -- if my password is already a 12-character random string, do you think it adds additional security to restrict ssh logins to only the subnets that I'm logging in from? If so, then what's a specific scenario where the attacker would be able to break in (or would have a larger chance of breaking in) if I'm not restricting ssh logins by IP, but would not be able to break in if I were restricting ssh logins?
That's a straight probability calculation. How many billion systems are on the Internet? If I allow logins from even 100,000 systems versus several billion, I've substantially reduced the probability of a successful brute force attack.
I have had problems with password guessing attacks on my pop and ftp servers (my ssh port is totally closed). Since I'm providing services to users, I can't just close those ports. I've been running fail2ban now for some time and it has helped, but I wanted to further reduce having even a handful of guesses. I discovered that the majority of attacks are coming from Asia, Russia, eastern Europe, South America and the middle east. Well I don't have any ftp users in those areas, so I blocked access to these countries and in fact now only allow access from regions where I have users. Things have been pretty darn quiet since I did that.
By allowing access from only a handful of systems that you might be familiar with, you probably won't have bot attacks.
Nataraj
On 1/3/2012 12:50 AM, Nataraj wrote:
On 01/02/2012 10:48 PM, Bennett Haselton wrote:
True but I travel a lot and sometimes need to connect to the machines from subnets that I don't know about in advance.
You could secure another system somewhere on the internet (could be a $20/month virtual host), leave no pointers to your production systems on that system, and allow remote logins on your production systems from that other host. It's called a back door.
But assuming the attacker is targeting my production system, suppose they find a vulnerability and obtain the ability to run commands as root on the system. Then wouldn't their first action be to remove restrictions on where you can log in from? (Or, they could just continue to run root commands using whatever trick they'd discovered?)
You could also take a look at something like fwknop. That in combination with some type of back door for the situation where you don't have your keys available should cover any situation where you need to get to your system. But access using the key authentication should be preferred and only use the back door for emergencies.
If I used openvpn to connect, and then connected via ssh over openvpn, this seems like essentially security through obscurity :) by just replacing the public ssh daemon with a different public daemon (with a different connection protocol) which an attacker could try to brute-force the same way they could try to brute-force sshd.
Pretty much all security is based on something that you know/have that the attacker doesn't know/have. This is true for computer access, the locks on your front door and the safe at the bank.
Yes, but the argument for "security over obscurity" is that the "secret" should reside in something that cannot be obtained even in trillions of trillions of guesses (i.e. a strong password), not in something that could be obtained in a few dozen or a few thousand guesses (i.e. finding OpenVPN listening on a given port). The reason is that if something is obtainable in a few thousand guesses, then it will create the illusion of being unguessable, but an attacker could still get it.
What you're getting from the people on this list is their experience, comments based on what they did that worked and what they did that didn't.
Unfortunately it may not be possible to tell that a particular safeguard ever actually "worked" or made a difference. How could you ever know, for example, that an attacker was stopped because you used an ssh key instead of a 12-character truly random password?
One way you can know is if you have two barriers one behind the other, and attackers get through the first barrier but not the second one, then you know the second barrier mattered. That's why the argument for SELinux sounds persuasive, because of identified instances where attackers circumvented the first barrier (finding a way to make Apache create executables in /tmp/) but were stopped by the second (SELinux prevented those executables from being run).
Check the past 10 years of cert advisories and count the number of security advisories for sshd and then count the number for openvpn.
OK, that's different from the obscurity factor (since the difference in exploit frequency, would still be a reason to use openvpn instead of sshd, even if the attacker knew that you were using it). However that also has to be weighed against the side effects of using a non-standard setup. I assume, for example, that most security audit tools would look at IP addresses that attempted to log in via ssh. That wouldn't work if your gateway is OpenVPN instead of sshd. (In my experience, everything you're doing differently from everyone else, makes it harder to get help when things go wrong.)
However it still seems that this would only matter if the attacker got in by brute-forcing the login. If they obtained the ability to run privileged commands any other way, then (1) they could continue to run privileged commands that way anyway, or (2) as their first action they could just remove all the IP address restrictions on ssh connections at which point they could connect normally via ssh from anywhere.
The more security mechanisms you have in place, the greater the probability that, even if they make a partial compromise of your system, they will fail when they try to get through to the next level; and if you have warning systems, such as daily reports or even alerts sent to your cell phone, you might be able to stop them first.
For partial compromises that makes sense.
However for total compromises (i.e. if attacker is running root commands on your system), then presumably this would only work if your tripwire warning system was obscure enough that the attacker didn't know to look for it. Otherwise their first action would be to disable the tripwire before it warned you.
So this only matters when the attacker is trying to brute-force the login, and I still think that a 12-character random password is un-bruteforceable, which makes the IP restrictions moot.
Experience has shown that passwords can be cracked much more easily than private/public keys.
"passwords", generally, can be. I don't think 12-character truly random passwords can be. (If once you hit the "takes 100,000 years to crack" mark, you don't care any more.)
You're the one telling us that your system has been compromised. Others sharing this fact may not have their systems compromised, or if they did, they learned from it.
If I'm wrong, then why? What do you think -- if my password is already a 12-character random string, do you think it adds additional security to restrict ssh logins to only the subnets that I'm logging in from? If so, then what's a specific scenario where the attacker would be able to break in (or would have a larger chance of breaking in) if I'm not restricting ssh logins by IP, but would not be able to break in if I were restricting ssh logins?
That's a straight probability calculation. How many billion systems are on the Internet? If I allow logins from even 100,000 systems versus several billion, I've substantially reduced the probability of a successful brute force attack.
Others have pointed out that the limiting factor is the rate at which the ssh daemon can accept or refuse logins. With that being the case, it doesn't matter if the attacker has 1 billion systems or 100,000 -- they're throttled by the ssh daemon. And with 10^24 possible truly-random-12-char passwords there's no way they can brute force your password that way in either case.
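To make that concrete, here is the back-of-the-envelope arithmetic as a rough Python sketch; the 10 guesses per second figure is an assumption about how fast sshd will answer, not a measurement:

# Rough brute-force arithmetic for a 12-character password drawn from a
# 62-character alphabet. The guess rate is an assumed throttle at the ssh
# daemon, not a measured number.
keyspace = 62 ** 12                  # about 3.2e21 possible passwords
guesses_per_second = 10              # assumption: total rate sshd will answer
seconds_per_year = 365 * 24 * 3600

expected_years = keyspace / 2.0 / guesses_per_second / seconds_per_year
print("expected years to hit the password: %.2e" % expected_years)
# roughly 5e12 years; the daemon's answer rate, not the size of the botnet,
# is the limiting factor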
I mean, I don't want to miss anything here, but if you think I'm wrong about this, then why?
I have had problems with password guessing attacks on my pop and ftp servers (my ssh port is totally closed). Since I'm providing services to users, I can't just close those ports. I've been running fail2ban now for some time and it has helped, but I wanted to further reduce having even a handful of guesses. I discovered that the majority of attacks are coming from Asia, Russia, eastern Europe, South America and the middle east. Well I don't have any ftp users in those areas, so I blocked access to these countries and in fact now only allow access from regions where I have users. Things have been pretty darn quiet since I did that.
OK but those are *users* who have their own passwords that they have chosen, presumably. User-chosen passwords cannot be assumed to be secure against a brute-force attack. What I'm saying is that if you're the only user, by my reasoning you don't need fail2ban if you just use a 12-character truly random password.
Bennett
On Tue, Jan 3, 2012 at 4:28 AM, Bennett Haselton bennett@peacefire.org wrote:
But assuming the attacker is targeting my production system, suppose they find a vulnerability and obtain the ability to run commands as root on the system. Then wouldn't their first action be to remove restrictions on where you can log in from? (Or, they could just continue to run root commands using whatever trick they'd discovered?)
No, they'd probably replace your ssh binary with one that makes a hidden outbound connection to their own control center. And replace netstat with one that doesn't show that connection. If anyone has ever gotten root access, all other bets are off - those tools are a dime a dozen.
Yes, but the argument for "security over obscurity" is that the "secret" should reside in something that cannot be obtained even in trillions of trillions of guesses (i.e. a strong password), not in something that could be obtained in a few dozen or a few thousand guesses (i.e. finding OpenVPN listening on a given port). The reason is that if something is obtainable in a few thousand guesses, then it will create the illusion of being unguessable, but an attacker could still get it.
Openvpn runs over UDP. With the tls-auth option it won't respond to an unsigned packet. So without the key you can't tell the difference between a listening openvpn or a firewall that drops packets silently. That is, you can't 'find' it.
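To illustrate the idea (a toy sketch only; this is not OpenVPN's real wire format, just the concept of an HMAC-gated UDP listener that stays silent toward anyone who lacks the pre-shared key):

# Toy sketch: a UDP service that silently ignores any datagram that is not
# authenticated with a pre-shared key. To a scanner it is indistinguishable
# from a firewall that drops packets. (Illustrative only.)
import hashlib
import hmac
import socket

PSK = b"pre-shared-key-distributed-out-of-band"   # hypothetical key

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 1194))

while True:
    data, addr = sock.recvfrom(4096)
    if len(data) < 32:
        continue                                   # drop silently, no reply
    tag, payload = data[:32], data[32:]
    expected = hmac.new(PSK, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        continue                                   # unsigned packet: drop silently
    sock.sendto(b"ok: " + payload, addr)           # only signed packets get a reply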
Unfortunately it may not be possible to tell that a particular safeguard ever actually "worked" or made a difference. How could you ever know, for example, that an attacker was stopped because you used an ssh key instead of a 12-character truly random password?
If you look at your logs under normal conditions, you'll know if someone has tried and failed a password login. After someone has gotten in, it may be too late because they can destroy the evidence. Logging remotely to a more protected machine can help a bit.
OK but those are *users* who have their own passwords that they have chosen, presumably. User-chosen passwords cannot be assumed to be secure against a brute-force attack. What I'm saying is that if you're the only user, by my reasoning you don't need fail2ban if you just use a 12-character truly random password.
But you aren't exactly an authority when you are still guessing about the cause of your problem, are you? (And haven't mentioned what your logs said about failed attempts leading up to the break in...).
On Tuesday 03 January 2012 07:57:47 Les Mikesell wrote:
On Tue, Jan 3, 2012 at 4:28 AM, Bennett Haselton bennett@peacefire.org wrote:
But assuming the attacker is targeting my production system, suppose they find a vulnerability and obtain the ability to run commands as root on the system. Then wouldn't their first action be to remove restrictions on where you can log in from? (Or, they could just continue to run root commands using whatever trick they'd discovered?)
No, they'd probably replace your ssh binary with one that makes a hidden outbound connection to their own control center. And replace netstat with one that doesn't show that connection. If anyone has ever gotten root access, all other bets are off - those tools are a dime a dozen.
Those kinds of things can be detected with SHA512 checksums, for example.
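Something along these lines, for example (a minimal sketch; the paths and digests are placeholders, and in practice you would use rpm -V or a dedicated tool such as AIDE or tripwire, with the known-good digests stored on a separate machine):

# Minimal sketch: compare SHA-512 digests of key binaries against a baseline
# recorded while the system was known-good. The digests below are placeholders.
import hashlib

BASELINE = {
    "/usr/sbin/sshd": "ab12...placeholder...",
    "/bin/netstat": "cd34...placeholder...",
}

def sha512_of(path, chunk=1 << 20):
    h = hashlib.sha512()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

for path, good in BASELINE.items():
    status = "OK" if sha512_of(path) == good else "MODIFIED (or baseline not set)"
    print("%-20s %s" % (path, status))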
Yes, but the argument for "security over obscurity" is that the "secret" should reside in something that cannot be obtained even in trillions of trillions of guesses (i.e. a strong password), not in something that could be obtained in a few dozen or a few thousand guesses (i.e. finding OpenVPN listening on a given port). The reason is that if something is obtainable in a few thousand guesses, then it will create the illusion of being unguessable, but an attacker could still get it.
Openvpn runs over UDP. With the tls-auth option it won't respond to an unsigned packet. So without the key you can't tell the difference between a listening openvpn or a firewall that drops packets silently. That is, you can't 'find' it.
We are not going to argue drop vs reject, are we? :P
Unfortunately it may not be possible to tell that a particular safeguard ever actually "worked" or made a difference. How could you ever know, for example, that an attacker was stopped because you used an ssh key instead of a 12-character truly random password?
If you look at your logs under normal conditions, you'll know if someone has tried and failed a password login. After someone has gotten in, it may be too late because they can destroy the evidence. Logging remotely to a more protected machine can help a bit.
Remote syslog? that way they'll have to hack into two different machines...
Regards
On Tue, Jan 3, 2012 at 9:31 AM, Marc Deop damnshock@gmail.com wrote:
Openvpn runs over UDP. With the tls-auth option it won't respond to an unsigned packet. So without the key you can't tell the difference between a listening openvpn or a firewall that drops packets silently. That is, you can't 'find' it.
We are not going to argue drop vs reject, are we? :P
It follows the usual pattern: dropping is more secure in that you can't tell if anything is there at all, whereas rejecting is more convenient because attempts to open a connection don't have to wait for timeouts. Pick the one that meets your specific need.
Having been on vacation, I'm coming in very late in this....
Les Mikesell wrote:
On Tue, Jan 3, 2012 at 4:28 AM, Bennett Haselton bennett@peacefire.org wrote:
<snip>
OK but those are *users* who have their own passwords that they have chosen, presumably. User-chosen passwords cannot be assumed to be secure against a brute-force attack. What I'm saying is that if you're the only user, by my reasoning you don't need fail2ban if you just use a 12-character truly random password.
But you aren't exactly an authority when you are still guessing about the cause of your problem, are you? (And haven't mentioned what your logs said about failed attempts leading up to the break in...).
Further, that's a ridiculous assumption. Without fail2ban, or something like it, they'll keep trying. You, instead, Bennett, are presumably generating that "truly random" password[1] and assigning it to all your users[2], and not allowing them to change their passwords, and you will be changing it occasionally and informing them of the change.[3]
Right?
mark
1. How will you generate "truly random"? Clicks on a Geiger counter? There is no such thing as a random number generator.
2. Which, being "truly random", they will write down somewhere, or store it on a key, labelling the file "mypassword" or some such.
3. How will you notify them of their new password - in plain text?
On 01/03/2012 04:47 PM, m.roth@5-cent.us wrote:
Having been on vacation, I'm coming in very late in this....
Les Mikesell wrote:
On Tue, Jan 3, 2012 at 4:28 AM, Bennett Haseltonbennett@peacefire.org wrote:
<snip>
OK but those are *users* who have their own passwords that they have chosen, presumably. User-chosen passwords cannot be assumed to be secure against a brute-force attack. What I'm saying is that if you're the only user, by my reasoning you don't need fail2ban if you just use a 12-character truly random password.
But you aren't exactly an authority when you are still guessing about the cause of your problem, are you? (And haven't mentioned what your logs said about failed attempts leading up to the break in...).
Further, that's a ridiculous assumption. Without fail2ban, or something like it, they'll keep trying. You, instead, Bennett, are presumably generating that "truly random" password[1] and assigning it to all your users[2], and not allowing them to change their passwords, and you will be changing it occasionally and informing them of the change.[3]
Right?
mark
1. How will you generate "truly random"? Clicks on a Geiger counter? There is no such thing as a random number generator.
2. Which, being "truly random", they will write down somewhere, or store it on a key, labelling the file "mypassword" or some such.
3. How will you notify them of their new password - in plain text?
Bennett was/is the only one using those systems, and only as root. No additional users existed prior to the breach. And he is very persistent in placing his own opinion/belief above those of the people he asks for help. That is why we have such a long long long thread. It has come to the point where I am starting to believe he is a troll. Not sure yet, but it is getting there.
I am writing this for your sake, not his. I decided to just watch from now on. This thread WAS very informative, I did learn A LOT, but enough is enough, and I have spent far too much time reading this thread.
Ljubomir,
Ljubomir Ljubojevic wrote:
On 01/03/2012 04:47 PM, m.roth@5-cent.us wrote:
Having been on vacation, I'm coming in very late in this....
Les Mikesell wrote:
On Tue, Jan 3, 2012 at 4:28 AM, Bennett Haseltonbennett@peacefire.org wrote:
<snip>
OK but those are *users* who have their own passwords that they have chosen, presumably. User-chosen passwords cannot be assumed to be secure against a brute-force attack. What I'm saying is that if you're the only user, by my reasoning you don't need fail2ban if you just use a 12-character truly random password.
But you aren't exactly an authority when you are still guessing about the cause of your problem, are you? (And haven't mentioned what your logs said about failed attempts leading up to the break in...).
Further, that's a ridiculous assumption. Without fail2ban, or something like it, they'll keep trying. You, instead, Bennett, are presumably generating that "truly random" password[1] and assigning it to all your users[2], and not allowing them to change their passwords, and you will be changing it occasionally and informing them of the change.[3]
Right?
1. How will you generate "truly random"? Clicks on a Geiger counter? There is no such thing as a random number generator.
2. Which, being "truly random", they will write down somewhere, or store it on a key, labelling the file "mypassword" or some such.
3. How will you notify them of their new password - in plain text?
Bennett was/is the only one using those systems, and only as root. No
Ohhhh....
additional users existed prior to the breach. And he is very persistent in placing his own opinion/belief above those of the people he asks for help. That is why
So he's not only not wanting to accept that he blew it, but wants "validation" for that wrongheadedness.
we have such a long long long thread. It has come to the point where I am starting to believe he is a troll. Not sure yet, but it is getting there.
As long as no one's giving him support in his ideas, he's now got someone outside himself (and the intruder) to be against. Just like the US right wing....
I am writing this for your sake, not his. I decided to just watch from now on. This thread WAS very informative, I did learn A LOT, but enough is enough, and I have spent far too much time reading this thread.
Thanks for the offlist email. Happy new year to you.
mark
Whoops, sorry, thought this was offlist.
mark, not reading closely enough.
On 1/3/2012 11:36 AM, Ljubomir Ljubojevic wrote:
On 01/03/2012 04:47 PM, m.roth@5-cent.us wrote:
Having been on vacation, I'm coming in very late in this....
Les Mikesell wrote:
On Tue, Jan 3, 2012 at 4:28 AM, Bennett Haseltonbennett@peacefire.org wrote:
<snip>
OK but those are *users* who have their own passwords that they have chosen, presumably. User-chosen passwords cannot be assumed to be secure against a brute-force attack. What I'm saying is that if you're the only user, by my reasoning you don't need fail2ban if you just use a 12-character truly random password.
But you aren't exactly an authority when you are still guessing about the cause of your problem, are you? (And haven't mentioned what your logs said about failed attempts leading up to the break in...).
Further, that's a ridiculous assumption. Without fail2ban, or something like it, they'll keep trying. You, instead, Bennett, are presumably generating that "truly random" password[1] and assigning it to all your users[2], and not allowing them to change their passwords, and you will be changing it occasionally and informing them of the change.[3]
Right?
mark
1. How will you generate "truly random"? Clicks on a Geiger counter? There is no such thing as a random number generator.
2. Which, being "truly random", they will write down somewhere, or store it on a key, labelling the file "mypassword" or some such.
3. How will you notify them of their new password - in plain text?
Bennett was/is the only one using those systems, and only as root. No additional users existed prior to the breach. And he is very persistent in placing his own opinion/belief above those of the people he asks for help.
That there are 10^21 possible random 12-character alphanumeric passwords -- making it secure against brute-forcing -- is a fact, not an opinion.
To date, *nobody* on this thread has ever responded when I said that there are 10^21 possible such passwords and as such I don't think that the password can be brute-forced in that way. Almost every time I said this, I added, "If you think this is incorrect, why do you think it's incorrect?", because I did genuinely want to know. When people didn't reply, I thought maybe they hadn't realized before that I was using actually long, actually random passwords, and maybe they no longer thought that was insecure after all.
Again: Do you think I'm wrong that if you use a 12-character mixed-case alphanumeric password, then switching to sshkeys or using fail2ban will not make the system any more secure? If you think I'm wrong, why? What is the exact scenario that you think those would prevent?
That is why we have such a long long long thread. It has come to the point where I am starting to believe he is a troll. Not sure yet, but it is getting there.
The thread grew so long partly because few people were offering suggestions on *preventive* measures; most were about what to do differently next time to diagnose after the fact, which was fine and useful, but I kept trying to steer the discussion back to prevention. The two preventive measures that did come up the most were using ssh keys and using fail2ban to stop people brute-forcing the login, and I kept explaining why I did not think those would make me any safer the next time around.
Note that after over 100 messages had been posted on the subject, someone did mention SELinux and the specific scenario (which has come up in the real world) in which SELinux can stop a break-in (exploit is found where attacker takes control of Apache, Apache writes to /tmp dir and tries to execute a program there).
If I had accepted the "advice" offered at the beginning to use keys instead of passwords, the discussion might never have gotten past that. It was because I stood my ground, insisting that brute-forcing a 12-character random password is not realistically possible, that the discussion eventually turned to something which *might* at least reduce the chance of a future break-in.
I am writing this for your sake, not his. I decided to just watch from now on. This thread WAS very informative, I did learn A LOT, but enough is enough, and I have spent far too much time reading this thread.
Bennett Haselton wrote:
mark wrote:
<snip>
1. How will you generate "truly random"? Clicks on a Geiger counter?
There is no such thing as a random number generator.
<snip>
That there are 10^21 possible random 12-character alphanumeric passwords -- making it secure against brute-forcing -- is a fact, not an opinion.
To date, *nobody* on this thread has ever responded when I said that there are 10^21 possible such passwords and as such I don't think that the password can be brute-forced in that way. Almost every time I said
Ok, I'll answer, here and now: YOU IGNORED MY QUESTION: HOW WILL YOU "RANDOMLY" GENERATE THE PASSWORDS? All algorithmic ones are pseudo-random. If someone has any idea what the o/s is, they can guess which pseudo-random generator you're using, and can try different salts. Someone here posted a link to the Rainbow tables, and precomputed partial lists. <snip>
Again: Do you think I'm wrong that if you use a 12-character mixed-case alphanumeric password, then switching to sshkeys or using fail2ban will not make the system any more secure? If you think I'm wrong, why? What is the exact scenario that you think those would prevent?
Without fail2ban, or something like it, they'll hit your system thousands of times an hour, at least. Sooner or later, they'll get lucky.
But I suppose you'll ignore this, as well.
mark
On 1/3/2012 12:32 PM, m.roth@5-cent.us wrote:
Bennett Haselton wrote:
mark wrote:
<snip>
1. How will you generate "truly random"? Clicks on a Geiger counter? There is no such thing as a random number generator.
<snip>
That there are 10^21 possible random 12-character alphanumeric passwords -- making it secure against brute-forcing -- is a fact, not an opinion.
To date, *nobody* on this thread has ever responded when I said that there are 10^21 possible such passwords and as such I don't think that the password can be brute-forced in that way. Almost every time I said
Ok, I'll answer, here and now: YOU IGNORED MY QUESTION: HOW WILL YOU "RANDOMLY" GENERATE THE PASSWORDS? All algorithmic ones are pseudo-random. If someone has any idea what the o/s is, they can guess which pseudo-random generator you're using, and can try different salts.
I generally change them from the values assigned by the hosting company, and just bang my fingers around on the keyboard, with the shift key randomly on and off for good measure :) This also removes the possibility that an incompetent hosting company will store their own copy of the password somewhere that it can be compromised. Even when that possibility is very unlikely, it's still astronomically more likely than the attacker guessing the password by brute force.
But even if someone did not do that, don't most Linux distros use a good crypto-random number generator for generating new passwords, when they're picked by the machine and not the user? You can use salts that depend on the low bits of high-precision performance counters, and other values that are impossible for an attacker to predict. If any Linux implementation is using anything less than a cryptographically strong generator for creating passwords, then like I said it's not my problem, but I would take that up with the developers.
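For what it's worth, here is a sketch of what I mean by a machine-generated password (assuming Python; random.SystemRandom reads from the kernel's CSPRNG, /dev/urandom on Linux, rather than from a guessable seeded generator):

# Generate a 12-character password from a 62-character alphabet using the
# OS CSPRNG rather than a seeded pseudo-random generator.
import random
import string

alphabet = string.ascii_letters + string.digits   # 62 characters
rng = random.SystemRandom()                       # backed by /dev/urandom
password = "".join(rng.choice(alphabet) for _ in range(12))
print(password)
# log2(62**12) is about 71.5 bits of entropy, assuming each character really
# is chosen independently and uniformly.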
Someone here posted a link to the Rainbow tables, and precomputed partial lists.
<snip>
Again: Do you think I'm wrong that if you use a 12-character mixed-case alphanumeric password, then switching to sshkeys or using fail2ban will not make the system any more secure? If you think I'm wrong, why? What is the exact scenario that you think those would prevent?
Without fail2ban, or something like it, they'll hit your system thousands of times an hour, at least. Sooner or later, they'll get lucky.
OK do you *literally mean that* -- that with 10^21 possible passwords that an attacker has to search, I have to worry about the attacker "getting lucky" if they're trying "thousands of times per hour"?
But I suppose you'll ignore this, as well.
mark
Bennett Haselton wrote:
On 1/3/2012 12:32 PM, m.roth@5-cent.us wrote:
Bennett Haselton wrote:
mark wrote:
<snip>
1. How will you generate "truly random"? Clicks on a Geiger counter? There is no such thing as a random number generator.
<snip>
To date, *nobody* on this thread has ever responded when I said that there are 10^21 possible such passwords and as such I don't think that the password can be brute-forced in that way. Almost every time I said
Ok, I'll answer, here and now: YOU IGNORED MY QUESTION: HOW WILL YOU "RANDOMLY" GENERATE THE PASSWORDS? All algorithmic ones are pseudo-random. If someone has any idea what the o/s is, they can guess which pseudo-random generator you're using, and can try different salts.
I generally change them from the values assigned by the hosting company, and just bang my fingers around on the keyboard, with the shift key randomly on and off for good measure :) This also removes the
Real random, there. Do you also use a Dvorak keyboard, or a std. qwerty? You want to bet there aren't algorithms out there for guessing that? Certainly, until this minute, I hadn't thought of it, but I'll bet there is.
possibility that an incompetent hosting company will store their own
Hosting co? You're hosted somewhere? And an admin there can't get into your snapshot and add a back door?
copy of the password somewhere that it can be compromised. Even when that possibility is very unlikely, it's still astronomically more likely than the attacker guessing the password by brute force.
Question 1: why is it that brute force attacks go on, day and night, everywhere? I see plenty of them here, when fail2ban tells me it's banning an IP.
But even if someone did not do that, don't most Linux distros use a good crypto-random number generator for generating new passwords, when they're picked by the machine and not the user? You can use salts that
They're all pseudo-random. Unless, maybe, you can get truly random with quantum computing, all you can ever do is pseudo-random. <snip>
Without fail2ban, or something like it, they'll hit your system thousands of times an hour, at least. Sooner or later, they'll get lucky.
OK do you *literally mean that* -- that with 10^21 possible passwords that an attacker has to search, I have to worry about the attacker "getting lucky" if they're trying "thousands of times per hour"?
But I suppose you'll ignore this, as well.
Oh, and your system wasn't compromised, so all of us are wrong, and you're correct.
This thread's killfiled for me - it's pointless.
mark
On Tuesday, January 03, 2012 03:24:34 PM Bennett Haselton wrote:
That there are 10^21 possible random 12-character alphanumeric passwords -- making it secure against brute-forcing -- is a fact, not an opinion.
To date, *nobody* on this thread has ever responded when I said that there are 10^21 possible such passwords and as such I don't think that the password can be brute-forced in that way.
Hmm, methinks you need to rethink the number. The number of truly random passwords available from a character set with N characters and of a length L is N^L. (see https://en.wikipedia.org/wiki/Password_strength#Random_passwords )
If L=12, then:
Numerals only: 10^12
Uppercase alphabet only: 26^12 (9.5x10^16)
Uppers and lowers: 52^12 (3.9x10^20)
Numerals plus uppers and lowers: 62^12 (3.2x10^21)
Base64: 64^12 (4.7x10^21)
Full ASCII printables, minus space: 94^12 (4.76x10^23)
This of course includes repeating characters. NIST recommends 80-bit entropy-equivalent passwords for secure systems; 12 characters chosen at random from the full 95 ASCII printable characters doesn't quite make that (at a calculated 78 bits or so).
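The figures above are easy to reproduce; here is a quick sanity check, nothing more:

# Reproduce the keyspace and entropy figures for 12-character passwords.
import math

charsets = [
    ("numerals only", 10),
    ("uppercase only", 26),
    ("upper + lower", 52),
    ("upper + lower + digits", 62),
    ("base64 alphabet", 64),
    ("ASCII printables minus space", 94),
]

L = 12
for name, n in charsets:
    combos = float(n ** L)
    bits = L * math.log(n, 2)
    print("%-30s %.2e combinations, %5.1f bits" % (name, combos, bits))
# 94**12 works out to about 78.6 bits, i.e. just short of the 80-bit figure.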
Having said all of that, please see and read http://security.stackexchange.com/questions/3887/is-using-a-public-key-for-l...
The critical thing to remember is that in key auth the authenticating key never leaves the client system, rather an encrypted 'nonce' is sent (the nonce is encrypted by the authenticating key), which only the server, possessing the matching key to the authenticating key, can successfully decrypt. This effectively gives full bit-strength for the whole key; 1024 bits of entropy, if you will, for 1024 bit keys. This would appear to require a 157 character random password chosen from all 95 ASCII printable characters to match, in terms of bit entropy. Obviously, the authenticating key's protection is paramount, and the passphrase is but one part of that. But that key never travels over the wire.
In stark contrast, in password auth the password has to leave the system and traverse the connection, even if it's over an encrypted channel (it's hashed on the server end and compared to the server's stored hash, plus salt (always did like a little salt with my hash....!), not on the client, right? After all, the client may not possess the algorithm used to generate the hash, but password auth still works.). This leaves a password vulnerable to a 'man in the middle' attack if a weakness is found in the negotiated stream cipher used in the channel.
Even with a full man-in-the-middle 'sniff' going on, the key pair authentication is as strong as the crypto used to generate the key pairs, which can be quite a bit stronger than the stream cipher. (56 bit DES, for instance, can be directly brute-forced in 'reasonable' amounts of time now). Pfft, if I understand the theory correctly (and I always reserve the right to be wrong!), you could, in theory, securely authenticate with keys over a nonencrypted session with a long enough nonce plaintext and the nonce's ciphertext traversing the wire in the clear. Of course, you encrypt the actual session for other reasons (primarily to prevent connection hijacking, since that defeats the Authorization and Accountability portions of triple-A), but the session encryption and the key auth are two distinct processes.
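Here is a bare-bones illustration of that challenge/response idea (a sketch only: it uses signatures rather than decryption, and the third-party 'cryptography' Python package; it is not the actual SSH protocol):

# Sketch of key-based challenge/response: the server checks a signature over
# a random nonce, so the private key itself never crosses the wire.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Client side: generate a key pair; only the public half is ever handed out.
client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
client_pub = client_key.public_key()

# Server side: issue a random challenge (the nonce).
nonce = os.urandom(32)

# Client side: prove possession of the private key by signing the nonce.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = client_key.sign(nonce, pss, hashes.SHA256())

# Server side: verify with the stored public key; raises InvalidSignature
# if the signature (and hence the key) doesn't match.
client_pub.verify(signature, nonce, pss, hashes.SHA256())
print("client proved possession of the private key")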
I've been reading this thread with some amusement; while brute-forcing is an interesting problem it is far from the most important such problem, and simple measures, like not allowing random programs to listen on just any port (or for that matter make random outbound connections to just any port!) is just basic stuff. Machines are hacked into almost routinely these days without any initial knowledge of the authentication method or credentials, but by security vulnerabilities in the system as set up by the admin. SELinux and iptables are two of the most useful technologies for helping combat this problem, but they (and any other tools to improve security) must be properly configured to do any good at all.
And there are many many more Best Practices which have been (and continue to be) refined by many years of experience out in the field. Reality is reality, whether the individual admin likes it or not, sorry.
On 1/3/2012 2:04 PM, Lamar Owen wrote:
On Tuesday, January 03, 2012 03:24:34 PM Bennett Haselton wrote:
That there are 10^21 possible random 12-character alphanumeric passwords -- making it secure against brute-forcing -- is a fact, not an opinion.
To date, *nobody* on this thread has ever responded when I said that there are 10^21 possible such passwords and as such I don't think that the password can be brute-forced in that way.
Hmm, methinks you need to rethink the number. The number of truly random passwords available from a character set with N characters and of a length L is N^L. (see https://en.wikipedia.org/wiki/Password_strength#Random_passwords )
If L=12, then:
Numerals only: 10^12
Uppercase alphabet only: 26^12 (9.5x10^16)
Uppers and lowers: 52^12 (3.9x10^20)
Numerals plus uppers and lowers: 62^12 (3.2x10^21)
Base64: 64^12 (4.7x10^21)
This is the figure I'm using (because I actually use some chars that aren't letters or numbers so I rounded up to 64). You got on the order of 10^21, same as me.
Full ASCII printables, minus space: 94^12 (4.76x10^23)
This of course includes repeating characters. NIST recommends 80-bit entropy-equivalent passwords for secure systems; 12 characters chosen at random from the full 95 ASCII printable characters doesn't quite make that (at a calculated 78 bits or so).
I'm not sure what their logic is for recommending 80. But 72 bits already means that any attack is so improbable that you'd *literally* have to be more worried about the sun going supernova.
Having said all of that, please see and read http://security.stackexchange.com/questions/3887/is-using-a-public-key-for-l...
The critical thing to remember is that in key auth the authenticating key never leaves the client system, rather an encrypted 'nonce' is sent (the nonce is encrypted by the authenticating key), which only the server, possessing the matching key to the authenticating key, can successfully decrypt.
Actually, the top answer at that link appears to say that the server sends the nonce to the client, and only the client can successfully decrypt it. (Is that what you meant?)
This effectively gives full bit-strength for the whole key; 1024 bits of entropy, if you will, for 1024 bit keys. This would appear to require a 157 character random password chosen from all 95 ASCII printable characters to match, in terms of bit entropy.
Yes, I've acknowledged that whether you think 1024 bits is more secure than 72 bits depends on how literally you mean "more secure". If the odds of an attack working in the next year are 1 in 10^10, you can reduce the odds to 1 in 10^20, which in a strict mathematical sense may make you more secure, but -- not really.
Furthermore, when you're dealing with probabilities that ridiculously small, they're overwhelmed by the probability that an attack will be found against the actual algorithm (which I think is your point about possible weaknesses in the stream cipher).
However, *then* you have to take into account the fact that, similarly, the odds of a given machine being compromised by a man-in-the-middle attack combined with cryptanalysis of the stream cipher, is *also* overwhelmed by the probability of a break-in via an exploit in the software it's running. I mean, do you think I'm incorrect about that? Of the compromised machines on the Internet, what proportion do you think were hacked via MITM-and-advanced-crypto, compared to exploits in the services?
It was that calculation I was making when I kept insisting that there must be something more probable than brute-forcing the login or decrypting the session -- and if I hadn't stood my ground about that, the discussion never would have gotten around to SELinux, which, if it works in the manner described, may actually help.
Obviously, the authenticating key's protection is paramount, and the passphrase is but one part of that. But that key never travels over the wire.
In stark contrast, in password auth the password has to leave the system and traverse the connection, even if it's over an encrypted channel (it's hashed on the server end and compared to the server's stored hash, plus salt (always did like a little salt with my hash....!), not on the client, right? After all, the client may not possess the algorithm used to generate the hash, but password auth still works.). This leaves a password vulnerable to a 'man in the middle' attack if a weakness is found in the negotiated stream cipher used in the channel.
Even with a full man-in-the-middle 'sniff' going on, the key pair authentication is as strong as the crypto used to generate the key pairs, which can be quite a bit stronger than the stream cipher. (56 bit DES, for instance, can be directly brute-forced in 'reasonable' amounts of time now). Pfft, if I understand the theory correctly (and I always reserve the right to be wrong!), you could, in theory, securely authenticate with keys over a nonencrypted session with a long enough nonce plaintext and the nonce's ciphertext traversing the wire in the clear. Of course, you encrypt the actual session for other reasons (primarily to prevent connection hijacking, since that defeats the Authorization and Accountability portions of triple-A), but the session encryption and the key auth are two distinct processes.
I've been reading this thread with some amusement; while brute-forcing is an interesting problem it is far from the most important such problem, and simple measures, like not allowing random programs to listen on just any port (or for that matter make random outbound connections to just any port!) is just basic stuff. Machines are hacked into almost routinely these days without any initial knowledge of the authentication method or credentials, but by security vulnerabilities in the system as set up by the admin. SELinux and iptables are two of the most useful technologies for helping combat this problem, but they (and any other tools to improve security) must be properly configured to do any good at all.
The problem with such "basic stuff" is that in any field, if there's no way to directly test whether something has the desired effect or not, it can become part of accepted "common sense" even if it's ineffective. (Consider how many doctors recommended stomach-sleeping for babies before a more sophisticated study was done, or how many health advocates used to recommend margarine over butter...) And once something has become "standard", admins sometimes have a legal incentive to use it -- reinforcing its de facto status as "common sense" -- even if deep down they realize that it's ineffective. If your server does get broken into and a customer sues you for compromising their data, and they find that you used passwords instead of keys for example, they can hire an "expert" to say that was a foolish choice that put the customer's data at risk. Even if the odds of that being the true cause of the break-in are less than 1 in 10^10, the odds of fooling the judge are considerably greater. (This is true of many fields, where some professionals say they feel they have to go along with the conventional wisdom even if they think it's wrong, because if something blows up for unrelated reasons, they could be sued by someone who thinks the catastrophe happened because they went against the herd.)
Case in point: in the *entire history of the Internet*, do you think there's been a single attack that worked because squid was allowed to listen on a non-standard port, that would have been blocked if squid had been forced to listen on a standard port?
What's unique about security advice is that some of it can be rejected just on logical grounds, to move the discussion on to something more likely to help. If I hadn't argued the point about passwords vs. keys (not to mention about prevention vs. detection), the thread would have been over before it got to SELinux and what kinds of attacks it can mitigate by controlling where a compromised web server can write and run files.
And there are many many more Best Practices which have been (and continue to be) refined by many years of experience out in the field. Reality is reality, whether the individual admin likes it or not, sorry.
On Tue, Jan 3, 2012 at 5:12 PM, Bennett Haselton bennett@peacefire.org wrote:
The critical thing to remember is that in key auth the authenticating key never leaves the client system, rather an encrypted 'nonce' is sent (the nonce is encrypted by the authenticating key), which only the server, possessing the matching key to the authenticating key, can successfully decrypt.
Actually, the top answer at that link appears to say that the server sends the nonce to the client, and only the client can successfully decrypt it. (Is that what you meant?)
This effectively gives full bit-strength for the whole key; 1024 bits of entropy, if you will, for 1024 bit keys. This would appear to require a 157 character random password chosen from all 95 ASCII printable characters to match, in terms of bit entropy.
Yes, I've acknowledged that whether you think 1024 bits is more secure than 72 bits depends on how literally you mean "more secure". If the odds of an attack working in the next year are 1 in 10^10, you can reduce the odds to 1 in 10^20, which in a strict mathematical sense may make you more secure, but -- not really.
You are still speculating about the wrong things, though. Is your password written down? Has anyone ever been in the same room when you typed it? Could a key logger have been installed anywhere you typed it? And for the brute-force side of things, these may be done from a large number of sources to a large number of targets. They may be too slow to break any specific target but repeat it enough and you'll match something, somewhere. Maybe you were just the lucky one that day - and if you used the same password on the other(s) they would be easy targets.
However, *then* you have to take into account the fact that, similarly, the odds of a given machine being compromised by a man-in-the-middle attack combined with cryptanalysis of the stream cipher, is *also* overwhelmed by the probability of a break-in via an exploit in the software it's running. I mean, do you think I'm incorrect about that? Of the compromised machines on the Internet, what proportion do you think were hacked via MITM-and-advanced-crypto, compared to exploits in the services?
Proportions don't matter. Unless you have something extremely valuable to make this machine a target or someone captured your password and connection destination it was probably a random hit of a random probe. It doesn't matter if they are likely to work or not, some do.
The problem with such "basic stuff" is that in any field, if there's no way to directly test whether something has the desired effect or not, it can become part of accepted "common sense" even if it's ineffective.
If you have multiple layers of protection and look at your logs, you'll see what people are trying. And they keep trying it because it works... If you aren't looking at your logs there's not much use in speculating about what might be happening.
Case in point: in the *entire history of the Internet*, do you think there's been a single attack that worked because squid was allowed to listen on a non-standard port, that would have been blocked if squid had been forced to listen on a standard port?
Generalize that question to 'do you think attacks are helped by permitting applications to use ports the administrator didn't expect them to use' and the answer is clearly yes. There are certainly rogue trojans around that do who-knows-what on other connections while pretending to be your normal applications.
On 1/3/2012 4:21 PM, Les Mikesell wrote:
On Tue, Jan 3, 2012 at 5:12 PM, Bennett Haseltonbennett@peacefire.org wrote:
The critical thing to remember is that in key auth the authenticating key never leaves the client system, rather an encrypted 'nonce' is sent (the nonce is encrypted by the authenticating key), which only the server, possessing the matching key to the authenticating key, can successfully decrypt.
Actually, the top answer at that link appears to say that the server sends the nonce to the client, and only the client can successfully decrypt it. (Is that what you meant?)
This effectively gives full bit-strength for the whole key; 1024 bits of entropy, if you will, for 1024 bit keys. This would appear to require a 157 character random password chosen from all 95 ASCII printable characters to match, in terms of bit entropy.
Yes, I've acknowledged that whether you think 1024 bits is more secure than 72 bits depends on how literally you mean "more secure". If the odds of an attack working in the next year are 1 in 10^10, you can reduce the odds to 1 in 10^20, which in a strict mathematical sense may make you more secure, but -- not really.
You are still speculating about the wrong things, though. Is your password written down? Has anyone ever been in the same room when you typed it? Could a key logger have been installed anywhere you typed it?
Right, but these are all valid concerns if you use keys as well. If someone's in the room when you type in the passphrase for your key, they might come back in later and take the key and use the passphrase. If they install malware, they can capture the passphrase and the key as well.
And for the brute-force side of things, these may be done from a large number of sources to a large number of targets. They may be too slow to break any specific target but repeat it enough and you'll match something, somewhere.
Well yes but that doesn't make *my* password any less secure if it's chosen from a space of 10^21 possible passwords. The attacker will just move on.
Maybe you were just the lucky one that day - and if you used the same password on the other(s) they would be easy targets.
However, *then* you have to take into account the fact that, similarly, the odds of a given machine being compromised by a man-in-the-middle attack combined with cryptanalysis of the stream cipher, is *also* overwhelmed by the probability of a break-in via an exploit in the software it's running. I mean, do you think I'm incorrect about that? Of the compromised machines on the Internet, what proportion do you think were hacked via MITM-and-advanced-crypto, compared to exploits in the services?
Proportions don't matter. Unless you have something extremely valuable to make this machine a target or someone captured your password and connection destination it was probably a random hit of a random probe. It doesn't matter if they are likely to work or not, some do.
I either disagree or I'm not sure what you're saying. What do you mean that "proportions don't matter"? If attack A is 1,000 times more likely to work than attack B, you don't think it's more important to guard against attack A? Wasn't that exactly what you were advising when you said to worry more about someone capturing my password with a keylogger, than brute-forcing it?
The problem with such "basic stuff" is that in any field, if there's no way to directly test whether something has the desired effect or not, it can become part of accepted "common sense" even if it's ineffective.
If you have multiple layers of protection and look at your logs, you'll see what people are trying. And they keep trying it because it works...
Well they might be "trying" it because it works against some other systems (in particular, brute-forcing a weak password). That doesn't mean it's any more likely to work on a system with a 12-char random password.
If you aren't looking at your logs there's not much use in speculating about what might be happening.
Case in point: in the *entire history of the Internet*, do you think there's been a single attack that worked because squid was allowed to listen on a non-standard port, that would have been blocked if squid had been forced to listen on a standard port?
Generalize that question to 'do you think attacks are helped by permitting applications to use ports the administrator didn't expect them to use' and the answer is clearly yes. There are certainly rogue trojans around that do who-knows-what on other connections while pretending to be your normal applications.
Well that seems like it would be trivial for the trojan to circumvent -- just listen on the standard port, and if you receive a connection that contains the "secret handshake", switch that connection over into trojan mode, while continuing to serve other users' standard requests on the same port. Wouldn't that work? In that case it seems like a case of a restriction that might work until it becomes widely deployed enough for trojan authors to take it into account, at which point it becomes obsolete.
Bennett
On Tue, Jan 3, 2012 at 6:49 PM, Bennett Haselton bennett@peacefire.org wrote:
Of the compromised machines on the Internet, what proportion do you think were hacked via MITM-and-advanced-crypto, compared to exploits in the services?
Proportions don't matter. Unless you have something extremely valuable to make this machine a target or someone captured your password and connection destination it was probably a random hit of a random probe. It doesn't matter if they are likely to work or not, some do.
I either disagree or I'm not sure what you're saying. What do you mean that "proportions don't matter"?
I mean, if you get hit by lightning, did it really matter that you didn't have the more likely heart attack?
If attack A is 1,000 times more likely to work than attack B, you don't think it's more important to guard against attack A?
It's not either/or here. You could be the guy who gets hit by lightning.
Case in point: in the *entire history of the Internet*, do you think there's been a single attack that worked because squid was allowed to listen on a non-standard port, that would have been blocked if squid had been forced to listen on a standard port?
Generalize that question to 'do you think attacks are helped by permitting applications to use ports the administrator didn't expect them to use' and the answer is clearly yes. There are certainly rogue trojans around that do who-knows-what on other connections while pretending to be your normal applications.
Well that seems like it would be trivial for the trojan to circumvent -- just listen on the standard port, and if you receive a connection that contains the "secret handshake", switch that connection over into trojan mode, while continuing to serve other users' standard requests on the same port. Wouldn't that work? In that case it seems like a restriction that might work only until it becomes widely deployed enough for trojan authors to take it into account, at which point it becomes obsolete.
Do you lock your doors or just leave them open because anyone who wants in can break a window anyway?
On Wed, Jan 4, 2012 at 11:40 AM, Les Mikesell lesmikesell@gmail.com wrote:
Do you lock your doors or just leave them open because anyone who wants in can break a window anyway?
Hi Bennett, In conclusion, IMHO, I think you are worrying too much :) Don't be afraid just because it's a dangerous world out there.
- Subscribe to security advisories
- Read best practice docs
- Follow the suggestions made on this list
And chances are high that you will be fine :)
If attack A is 1,000 times more likely to work than attack B, you don't think it's more important to guard against attack A?
It's not either/or here. You could be the guy who gets hit by lightning.
I'm not sure I entirely agree with you there, Les.
I'm not going to delve into the intricacies of cost/benefit analysis (it made my head spin in my accounting school days), but basically, protecting against threats is in part a case of weighing the costs of setting up the protection against the benefits of being 'immune' to such an attack, adding in a dash of probability and stirring the whole mess in a black cauldron. What comes out is what the bean counters consider an acceptable cost for that protection.
Case in point: I have several web servers sitting in a rack in our server room. I'm more likely to suffer an attack on my key infrastructure through a compromised web server than I am through someone breaking down the door and entering the room. If I asked for a security system that included biometric access control, I'd be laughed at and denied. OTOH, I have a firewall with a DMZ that is both physically and logically isolated from the internal network and has IDS/IPS running on all traffic passing through.
At the end of the day, there are finite resources anyone can spend protecting their organization, and sometimes hard choices have to be made. We have threats X, Y, and Z but only enough resources to protect against two of them. Which ones would you choose to protect against?
On Tuesday, January 03, 2012 06:12:10 PM Bennett Haselton wrote:
I'm not sure what their logic is for recommending 80. But 72 bits already means that any attack is so improbable that you'd *literally* have to be more worried about the sun going supernova.
I'd be more worried about Eta Carinae than our sun; with its mass it's likely to produce a GRB. The probability of it happening in our lifetime is quite low; yet, if it does happen in our lifetime (actually, if it happened about 7,500 years ago!) it will be an extinction event. So we watch it over time (and we have plates of it going back to the late 1800s).
Likewise for security; the Gaussian curve does have outliers, after all, and while it is highly unlikely for a brute-force attack to actually come up with anything against a single server, it is still possible, partially due to the number of servers out there coupled with the sheer number of brute-forcers running. The odds are not 1 out of 4.7x10^21; they're much better than that, since there isn't just a single host attempting the attack. If I have a botnet of 10,000,000 infected PCs available to attack 100,000,000 servers (close to the actual number), what are the odds of one of those attacks succeeding? (The fact is that it has happened already; see my excerpted 'in the wild' brute-forcer dictionary below.)
The critical thing to remember is that in key auth the authenticating key never leaves the client system,...
Actually, the top answer at that link appears to say that the server sends the nonce to the client, and only the client can successfully decrypt it. (Is that what you meant?)
That's session setup, not authentication. The server has to auth to the client first for session setup, but then client auth is performed. But either way the actual client authenticating key never traverses the wire and is unsniffable.
Furthermore, when you're dealing with probabilities that ridiculously small, they're overwhelmed by the probability that an attack will be found against the actual algorithm (which I think is your point about possible weaknesses in the stream cipher).
This has happened; read some SANS archives. There have been and are exploits in the wild against SSH and SSL; they even caused OpenBSD to have to back down from its claim of never having had a remotely exploitable root attack.
However, *then* you have to take into account the fact that, similarly, the odds of a given machine being compromised by a man-in-the-middle attack combined with cryptanalysis of the stream cipher are *also* overwhelmed by the probability of a break-in via an exploit in the software it's running. I mean, do you think I'm incorrect about that?
What you're missing is that low probability is not a preventer of an actual attack succeeding; people do win the lottery even with the odds stacked against them.
Of the compromised machines on the Internet, what proportion do you think were hacked via MITM-and-advanced-crypto, compared to exploits in the services?
I don't have sufficient data to speculate. SANS or CERT may have that information.
and if I hadn't stood my ground about that, the discussion never would have gotten around to SELinux, which, if it works in the manner described, may actually help.
The archives of this list already had the information about SELinux contained in this thread. Not to mention the clear and easily accessible documentation from the upstream vendor linked to from the CentOS website.
The problem with such "basic stuff" is that in any field, if there's no way to directly test whether something has the desired effect or not, it can become part of accepted "common sense" even if it's ineffective.
Direct testing of both SELinux and iptables effectiveness is doable, and is done routinely by pen-testers. EL6 has the tools necessary to instrument and control both, and by adding third-party repositories (in particular there is a security repo out there
If your server does get broken into and a customer sues you for compromising their data, and they find that you used passwords instead of keys for example, they can hire an "expert" to say that was a foolish choice that put the customer's data at risk.
There is this concept called due diligence. If an admin ignores known industry standards and then gets compromised because of that, then that admin is negligent. Thus, risk analysis and management is done to weigh the costs of the security against the costs of an exploit; or, to put it in the words of a security consultant we had here (the project is, unfortunately, under NDA, so I can't drop the name of that consultant): "You will be or are compromised now; you must think and operate that way to mitigate your risks." Regardless of the security you think you have, you will be compromised at some point.
The due diligence is being aware of that and being diligent enough so that a server won't have been compromised for two months or longer before you find it. Secure passwords by themselves are not enough. Staying patched, by itself, is not enough. IPtables and other network firewalls by themselves are not enough. SELinux and other access controls (such as TOMOYO and others) by themselves are not enough. And none of the technologies are 'set it and forget it' technologies. You need awareness, multiple layers, and diligent monitoring.
And I would change your sentence slightly; it's not a matter of 'if' your server is going to get broken into, it's 'when.'
Case in point: in the *entire history of the Internet*, do you think there's been a single attack that worked because squid was allowed to listen on a non-standard port, that would have been blocked if squid had been forced to listen on a standard port?
I'm sure there has been, whether it's documented or not. I'm not aware of a comprehensive study into all the possible avenues of access; but the various STIGs exist for valid reasons (see Johnny's post pointing to those standards and best practices). Best practices are things learned in the field by looking to see what is being used and has been used to break in; they're not just made up out of whole cloth. And if the theory says it shouldn't be possible, but in reality it has happened, then the theory has issues; empirical data trumps all. It was impossible for the Titanic to sink, but it did anyway.
But it's not just squid; SELinux implements mandatory access controls; this means that the policy is default deny for all services, all files, and all ports. It has nothing to do with squid only; it's about a consistently implemented security policy where programs only get the access that they have to have to do the job they need to do. Simply enforcing that rule consistently can help eliminate backdoor processes (which can easily be implemented over encrypted GRE tunnels through a loopback device, at least the couple I've seen were).
You don't seem to have much experience with dealing with today's metasploit-driven multilayer attacks.
What's unique about security advice is that some of it can be rejected just on logical grounds, to move the discussion on to something more likely to help.
You need to learn more about what you speak before you say such unfounded speculative things, really. The attacks mentioned are real; we're not making this stuff up off the top of our heads, Bennett. Nor am I making up the contents of a brute-forcer dictionary I retrieved, about three years ago, from a compromised machine here which contains the following passwords in it:
... root:LdP9cdON88yW root:u2x2bz root:6e51R12B3Wr0 root:nb0M4uHbI6M root:c3qLzdl2ojFB root:LX5ktj root:34KQ root:8kLKwwpPD root:Bl95X1nU root:3zSlRG73r17 root:fDb8 root:cAeM1KurR root:MXf3RX7 root:4jpk root:j00U3bG1VuA root:HYQ9jbWbgjz3 root:Ex4yI8 root:k9M0AQUVS5D root:0U9mW4Wh root:2HhF19 root:EmGKf4 root:8NI877k8d5v root:K539vxaBR root:5gvksF8g55b root:TO553p9E root:7LX66rL7yx1F root:uOU8k03cK2P root:l9g7QmC9ev0 root:E8Ab root:98WZ4C55 root:kIpfB0Pr3fe2 ...
How do you suppose those passwords (along with 68,887 others) got in that dictionary? Seriously: the passwords are there and they didn't just appear out of thin air. The people running those passwords thought they were secure enough, too, I'm sure.
The slow brute-forcers are at work, and are spreading. This is a fact; and some of those passwords are 12-character alphanumerics (some with punctuation symbols) with 72-bits of entropy, yet there they are, and not just one of them, either.
Facts are stubborn things.
On Thu, Jan 5, 2012 at 1:32 AM, Lamar Owen lowen@pari.edu wrote:
root:LdP9cdON88yW root:u2x2bz root:6e51R12B3Wr0 root:nb0M4uHbI6M root:c3qLzdl2ojFB root:LX5ktj root:34KQ root:8kLKwwpPD root:Bl95X1nU root:3zSlRG73r17 root:fDb8 root:cAeM1KurR root:MXf3RX7 root:4jpk root:j00U3bG1VuA root:HYQ9jbWbgjz3 root:Ex4yI8 root:k9M0AQUVS5D root:0U9mW4Wh root:2HhF19 root:EmGKf4 root:8NI877k8d5v root:K539vxaBR root:5gvksF8g55b root:TO553p9E root:7LX66rL7yx1F root:uOU8k03cK2P root:l9g7QmC9ev0 root:E8Ab root:98WZ4C55 root:kIpfB0Pr3fe2 ...
I bet someone in this list will say surprisingly "Damnit. That's my password!" :)
On 1/4/2012 9:32 AM, Lamar Owen wrote:
On Tuesday, January 03, 2012 06:12:10 PM Bennett Haselton wrote:
I'm not sure what their logic is for recommending 80. But 72 bits already means that any attack is so improbable that you'd *literally* have to be more worried about the sun going supernova.
I'd be more worried about Eta Carinae than our sun; with its mass it's likely to produce a GRB. The probability of it happening in our lifetime is quite low; yet, if it does happen in our lifetime (actually, if it happened about 7,500 years ago!) it will be an extinction event. So we watch it over time (and we have plates of it going back to the late 1800s).
Likewise for security; the Gaussian curve does have outliers, after all, and while it is highly unlikely for a brute-force attack to actually come up with anything against a single server, it is still possible, partially due to the number of servers out there coupled with the sheer number of brute-forcers running. The odds are not 1 out of 4.7x10^21; they're much better than that, since there isn't just a single host attempting the attack. If I have a botnet of 10,000,000 infected PCs available to attack 100,000,000 servers (close to the actual number), what are the odds of one of those attacks succeeding? (The fact is that it has happened already; see my excerpted 'in the wild' brute-forcer dictionary below.)
(1) Someone already raised the issue of what if you have 10 million infected machines instead of just 1; multiple people pointed out that it doesn't matter because the limiting factor is the speed at which sshd can accept/reject login requests, so it doesn't matter if the attacker has 10 million machines or 1. (2) If there are 100 million machines being attacked, that still doesn't make a brute force attack any more likely for my machine. It's not correct to say that if 10 million of those 100 million machines are likely to get compromised, then mine has a 10% chance of being compromised, because with a 12-char random password the odds are much lower for me than for others in the sample.
If *everyone* used a 12-char random password, then the odds are that *none* of the 10 million machines attacking 100 million servers would hit on a success, not when there are 10^21 possible passwords to choose from.
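(For what it's worth, the arithmetic behind that is easy to check. A back-of-the-envelope sketch in Python, where the sshd rate and the server count are assumptions for illustration, not measurements:)

# Online (login-based) brute force against random 12-char passwords.
keyspace = 62 ** 12                  # [a-zA-Z0-9], ~3.2e21 possibilities

guesses_per_sec_per_server = 10      # assumed sshd accept/reject ceiling;
                                     # 1 bot or 10,000,000 bots, this cap
                                     # per server is the bottleneck
servers = 100_000_000                # servers under attack (assumed)
seconds_per_year = 3600 * 24 * 365

# Chance that ONE server with a random 12-char password falls in a year:
p_one = guesses_per_sec_per_server * seconds_per_year / keyspace

# Expected number of cracked servers per year if ALL of them used
# independent random 12-char passwords:
expected = p_one * servers

print(f"keyspace           : {keyspace:.2e}")
print(f"P(one server/year) : {p_one:.2e}")
print(f"expected hits/year : {expected:.2e}")

With these assumed numbers the expected count comes out around 10^-5 servers per year, i.e. effectively none -- which is the point being argued about login-rate-limited guessing, and says nothing about offline hash cracking or stolen password lists.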
The critical thing to remember is that in key auth the authenticating key never leaves the client system,...
Actually, the top answer at that link appears to say that the server sends the nonce to the client, and only the client can successfully decrypt it. (Is that what you meant?)
That's session setup, not authentication.
The paragraph I'm reading appears to say that the server sends the nonce to the client, even for *authentication* (after session setup): http://security.stackexchange.com/questions/3887/is-using-a-public-key-for-l... "After the channel is functional and secure... the server has the public key of the user stored. What happens next is that the server creates a random value (nonce), encrypts it with the public key and sends it to the user. If the user is who is supposed to be, he can decrypt the challenge and send it back to the server".
So that's what I meant... you'd said the client sends the nonce to the server whereas the page said the server sends the nonce to the client... just wanted to make sure I wasn't missing anything.
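(For reference, the challenge-response idea that answer describes can be sketched in a few lines of Python using the third-party 'cryptography' package. This is a conceptual illustration only -- real SSH public-key auth has the client sign session data rather than decrypt a nonce -- but it shows why the private key never needs to cross the wire:)

# Conceptual challenge-response: the server encrypts a random nonce to the
# client's public key; only the holder of the private key can answer.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Client side: key pair; only the public half is ever given to the server.
client_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
client_pub = client_priv.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Server side: create a nonce and encrypt it to the stored public key.
nonce = os.urandom(32)
challenge = client_pub.encrypt(nonce, oaep)

# Client side: decrypt the challenge and send the answer back.
answer = client_priv.decrypt(challenge, oaep)

# Server side: grant access only if the answer matches the nonce.
print("authenticated:", answer == nonce)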
The server has to auth to the client first for session setup, but then client auth is performed. But either way the actual client authenticating key never traverses the wire and is unsniffable.
Furthermore, when you're dealing with probabilities that ridiculously small, they're overwhelmed by the probability that an attack will be found against the actual algorithm (which I think is your point about possible weaknesses in the stream cipher).
This has happened; read some SANS archives. There have been and are exploits in the wild against SSH and SSL; they even caused OpenBSD to have to back down from its claim of never having had a remotely exploitable root attack.
However, *then* you have to take into account the fact that, similarly, the odds of a given machine being compromised by a man-in-the-middle attack combined with cryptanalysis of the stream cipher are *also* overwhelmed by the probability of a break-in via an exploit in the software it's running. I mean, do you think I'm incorrect about that?
What you're missing is that low probability is not a preventer of an actual attack succeeding; people do win the lottery even with the odds stacked against them.
Of the compromised machines on the Internet, what proportion do you think were hacked via MITM-and-advanced-crypto, compared to exploits in the services?
I don't have sufficient data to speculate. SANS or CERT may have that information.
Well, what would you guess, based on what you think is likely? If I bet you that more machines were compromised by exploits in services, and I offered you 100-to-1 odds in your favor, would you take it? :)
and if I hadn't stood my ground about that, the discussion never would have gotten around to SELinux, which, if it works in the manner described, may actually help.
The archives of this list already had the information about SELinux contained in this thread. Not to mention the clear and easily accessible documentation from the upstream vendor linked to from the CentOS website.
Well every one of the thousands of features and functions of Linux is indexed by Google on the web *somewhere* :) The question is whether you'll get pointed to it if you ask for help.
The problem with such "basic stuff" is that in any field, if there's no way to directly test whether something has the desired effect or not, it can become part of accepted "common sense" even if it's ineffective.
Direct testing of both SELinux and iptables effectiveness is doable, and is done routinely by pen-testers. EL6 has the tools necessary to instrument and control both, and by adding third-party repositories (in particular there is a security repo out there
I didn't doubt that SELinux or iptables both do what they say they do, or that they reduce the risk of a break-in. My point was that other pieces of "lore" (like "ssh keys reduce the chance of a break-in more than 12-char passwords") have the potential to become part of "folk wisdom" despite not having been tested directly and despite not actually making any difference.
If your server does get broken into and a customer sues you for compromising their data, and they find that you used passwords instead of keys for example, they can hire an "expert" to say that was a foolish choice that put the customer's data at risk.
There is this concept called due diligence. If an admin ignores known industry standards and then gets compromised because of that, then that admin is negligent.
If they get compromised *because* they ignored industry standards, yes! But if they ignored industry standards that had no bearing on security, and they then get compromised some other way (which perhaps was no more likely to happen to them than to anyone else, but they were unlucky), then they're not negligent.
Thus, risk analysis and management is done to weigh the costs of the security against the costs of an exploit; or, to put it in the words of a security consultant we had here (the project is, unfortunately, under NDA, so I can't drop the name of that consultant): "You will be or are compromised now; you must think and operate that way to mitigate your risks." Regardless of the security you think you have, you will be compromised at some point.
The due diligence is being aware of that and being diligent enough so that a server won't have been compromised for two months or longer before you find it. Secure passwords by themselves are not enough. Staying patched, by itself, is not enough. IPtables and other network firewalls by themselves are not enough. SELinux and other access controls (such as TOMOYO and others) by themselves are not enough. And none of the technologies are 'set it and forget it' technologies. You need awareness, multiple layers, and diligent monitoring.
And I would change your sentence slightly; it's not a matter of 'if' your server is going to get broken into, it's 'when.'
Case in point: in the *entire history of the Internet*, do you think there's been a single attack that worked because squid was allowed to listen on a non-standard port, that would have been blocked if squid had been forced to listen on a standard port?
I'm sure there has been, whether it's documented or not. I'm not aware of a comprehensive study into all the possible avenues of access; but the various STIGs exist for valid reasons (see Johnny's post pointing to those standards and best practices). Best practices are things learned in the field by looking to see what is being used and has been used to break in; they're not just made up out of whole cloth. And if the theory says it shouldn't be possible, but in reality it has happened, then the theory has issues; empirical data trumps all. It was impossible for the Titanic to sink, but it did anyway.
But it's not just squid; SELinux implements mandatory access controls; this means that the policy is default deny for all services, all files, and all ports. It has nothing to do with squid only; it's about a consistently implemented security policy where programs only get the access that they have to have to do the job they need to do. Simply enforcing that rule consistently can help eliminate backdoor processes (which can easily be implemented over encrypted GRE tunnels through a loopback device, at least the couple I've seen were).
Yes, the totality of SELinux restrictions sounds like it could make a system more secure if it helps to guard against exploits in the services and the OS. My point was that some individual restrictions may not make sense. But it's not a big deal if there's an easy way to disable individual restrictions.
You don't seem to have much experience with dealing with today's metasploit-driven multilayer attacks.
What's unique about security advice is that some of it can be rejected just on logical grounds, to move the discussion on to something more likely to help.
You need to learn more about what you speak before you say such unfounded speculative things, really. The attacks mentioned are real; we're not making this stuff up off the top of our heads, Bennett. Nor am I making up the contents of a brute-forcer dictionary I retrieved, about three years ago, from a compromised machine here which contains the following passwords in it:
... root:LdP9cdON88yW root:u2x2bz root:6e51R12B3Wr0 root:nb0M4uHbI6M root:c3qLzdl2ojFB root:LX5ktj root:34KQ root:8kLKwwpPD root:Bl95X1nU root:3zSlRG73r17 root:fDb8 root:cAeM1KurR root:MXf3RX7 root:4jpk root:j00U3bG1VuA root:HYQ9jbWbgjz3 root:Ex4yI8 root:k9M0AQUVS5D root:0U9mW4Wh root:2HhF19 root:EmGKf4 root:8NI877k8d5v root:K539vxaBR root:5gvksF8g55b root:TO553p9E root:7LX66rL7yx1F root:uOU8k03cK2P root:l9g7QmC9ev0 root:E8Ab root:98WZ4C55 root:kIpfB0Pr3fe2 ...
How do you suppose those passwords (along with 68,887 others) got in that dictionary?
Well in that case it looks like we have the answer: http://pastebin.com/bnu6ZSvj Someone broke into a League of Legends server and posted a bunch of user account passwords. So someone added them to a dictionary hoping that if just one of the users they were attacking happened to be a League of Legends player and happened to use the same password on another system they were targeting, it would work. But to avoid that problem, just don't re-use your root password anywhere else.
Seriously: the passwords are there and they didn't just appear out of thin air. The people running those passwords thought they were secure enough, too, I'm sure.
The slow brute-forcers are at work, and are spreading. This is a fact; and some of those passwords are 12-character alphanumerics (some with punctuation symbols) with 72-bits of entropy, yet there they are, and not just one of them, either.
Well yes of course an attacker can try *particular* 12-character passwords, I never said they couldn't :) And in this case the attacker stole the passwords by grabbing them en masse from a database, not by brute-forcing them.
To be absolutely clear: Do you, personally, believe there is more than a 1 in a million chance that the attacker who got into my machine, got it by brute-forcing the password? As opposed to, say, using an underground exploit?
Bennett
[Distilling to the core matter; everything else is peripheral.]
On Jan 4, 2012, at 2:58 PM, Bennett Haselton wrote:
To be absolutely clear: Do you, personally, believe there is more than a 1 in a million chance that the attacker who got into my machine, got it by brute-forcing the password? As opposed to, say, using an underground exploit?
Here's how I see it breaking down:
1.) Attacker uses apache remote exploit (or other means) to obtain your /etc/shadow file (not a remote shell, just GET the file without that fact being logged);
2.) Attacker runs cloud-based (and/or CUDA accelerated) brute-forcer on 10,000,000 machines against your /etc/shadow file without your knowledge;
3.) Some time passes;
4.) Attacker obtains your password using distributed brute forcing of the hash in the window of time prior to you resetting it;
5.) Attacker logs in since you allow password login. You're pwned by a non-login brute-force attack.
In contrast, with ssh keys and no password logins allowed:
1.) Attacker obtains /etc/shadow and cracks your password after some time;
2.) Attacker additionally obtains /root/.ssh/*;
3.) Attacker now has your public key. Good for them; public keys don't have to be kept secure, since it is vastly more difficult to reverse known plaintext, known ciphertext, and the public key into a working private key than it is to brute-force the /etc/shadow hash (part of the difficulty is getting all three required components to successfully reverse your private key; the other part boils down to factoring and hash brute-forcing);
4.) Attacker also has root's public and private keys, if there is a pair in root's ~/.ssh, which may or may not help them. If there's a passphrase on the private key, it's quite difficult to obtain that from the key;
5.) Attacker can't leverage either your public key or root's key pair (or the machine key); even if they can leverage that to do MitM (which they can and likely will), that doesn't help them obtain your private key for authentication;
6.) Attacker still can't get in because you don't allow password login, even though the attacker has root's password.
This only requires an apache httpd exploit that allows reading of any file; no files have to be modified and no shells have to be acquired through any exploits. Those make it faster, for sure; but even then the attacker is going to acquire your /etc/shadow as one of the first things they do; the next thing they're going to do is install a rootkit with a backdoor password.
Brute-forcing by hash-cracking, not by attempting to login over ssh, is what I'm talking about.
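(To make the distinction concrete: once the hash line from /etc/shadow has been read, candidates can be tested locally with nothing ever touching sshd or the victim's logs. A minimal sketch using Python's Unix-only 'crypt' module -- the salt, hash, and wordlist below are made up for illustration:)

# Offline dictionary attack against a leaked MD5-crypt shadow entry.
# Purely illustrative: the hash, salt, and wordlist are invented.
import crypt

# A shadow password field looks like $1$<salt>$<hash> for MD5-crypt.
leaked_hash = crypt.crypt("hunter2", "$1$Ta0bd5Ej$")   # stand-in for the stolen field

wordlist = ["password", "letmein", "Patricia", "hunter2"]

for candidate in wordlist:
    # No sshd, no login delay, no log entry on the victim -- just local hashing,
    # as fast as the attacker's hardware allows.
    if crypt.crypt(candidate, leaked_hash) == leaked_hash:
        print("cracked:", candidate)
        break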
This is what I mean when I say 'multilayer metasploit-driven attacks.'
The weakest link is the security of /etc/shadow on the server for password auth (unless you use a different auth method on your server, like LDAP or other, but that just adds a layer, making the attacker work harder to get that all-important password). Key-based auth is superior, since an attacker who can read any file on your server still cannot compromise the security.
Kerberos is better still.
Now, the weakest link for key auth is the private key itself. But it's better protected than any password is (if someone can swipe your private key off of your workstation you have bigger problems, and they will have your /etc/shadow for your workstation, and probably a backdoor.....). The passphrase is also better protected than the typical MD5 hash password, too.
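(A small sketch of the "passphrase protects the key at rest" point, using the third-party 'cryptography' package; the passphrase and filename are placeholders. The private key is only ever written out encrypted, so the stolen file by itself still has to have its passphrase brute-forced:)

# Generate a key pair and serialize the private half encrypted under a
# passphrase, so the on-disk file alone is not enough to authenticate.
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"correct horse battery staple"),
)

with open("id_rsa_demo.pem", "wb") as f:   # hypothetical filename
    f.write(pem)

# Loading it back requires the passphrase; a wrong one raises an error.
reloaded = serialization.load_pem_private_key(pem, password=b"correct horse battery staple")
print("key reloaded with passphrase OK:", reloaded.key_size == 2048)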
It is the consensus of the security community that key-based authentication with strong private key passphrases is better than any password-only authentication, and that consensus is based on facts derived from evidence of actual break-ins. While login-based brute-forcing of a password that is long enough (based upon sshd/login/hashing speed) is impractical for passwords of sufficient strength, login-based brute forcing is not the 'state of the art' in brute-forcing of passwords. Key-based auth with a passphrase is still not the ultimate, but it is better than only a password, regardless of the strength of that password.
If your password was brute-forced, it really doesn't matter how the attacker did it; you're pwned either way.
It is a safe assumption that there are httpd exploits in the wild, that are not known by the apache project, that specifically attempt to grab /etc/shadow and send to the attacker. It's also a safe assumption that the attacker will have sufficient horsepower to crack your password from /etc/shadow in a 'reasonable' timeframe for an MD5 hash. So you don't allow password authentication and you're not vulnerable to a remote /etc/shadow brute-forcing attack regardless of how much horsepower the attacker can throw your way, and regardless of how the attacker got your /etc/shadow (you could even post it publicly and it wouldn't help them any!).
On 01/04/2012 10:59 PM, Lamar Owen wrote:
[Distilling to the core matter; everything else is peripheral.]
<snip>
It is a safe assumption that there are httpd exploits in the wild, that are not known by the apache project, that specifically attempt to grab /etc/shadow and send to the attacker. It's also a safe assumption that the attacker will have sufficient horsepower to crack your password from /etc/shadow in a 'reasonable' timeframe for an MD5 hash. So you don't allow password authentication and you're not vulnerable to a remote /etc/shadow brute-forcing attack regardless of how much horsepower the attacker can throw your way, and regardless of how the attacker got your /etc/shadow (you could even post it publicly and it wouldn't help them any!).
Excellent text. This should be published on some blog, or maybe on the CentOS wiki.
Thank you for this. Concise and practical. Wow. Thanks again!
On 1/4/2012 1:59 PM, Lamar Owen wrote:
[Distilling to the core matter; everything else is peripheral.]
On Jan 4, 2012, at 2:58 PM, Bennett Haselton wrote:
To be absolutely clear: Do you, personally, believe there is more than a 1 in a million chance that the attacker who got into my machine, got it by brute-forcing the password? As opposed to, say, using an underground exploit?
<snip>
Brute-forcing by hash-cracking, not by attempting to login over ssh, is what I'm talking about.
I acknowledged that the first time I replied to someone's post saying a 12-char password wasn't secure enough. I hypothesized an attacker with the fastest GPU-driven password cracker in the world (even allowing for factor-of-100 improvements in coming years), and it would still take centuries to break. I understand about brute-forcing the hash vs. brute-forcing the login, but some others had posted about brute-forcing the login specifically and I was commenting on how ridiculous that was.
<snip>
It is the consensus of the security community that key-based authentication with strong private key passphrases is better than any password-only authentication, and that consensus is based on facts derived from evidence of actual break-ins.
Well yes, on average, password-authentication is going to be worse because it includes people in the sample who are using passwords like "Patricia". Did they compare the break-in rate for systems with 12-char passwords vs. systems with keys?
I have nothing in particular against ssh keys - how could anybody be "against ssh keys"? :) My point was that when I asked "How did attackers probably get in, given that the password was a random 12-character string?" people pounced on the fact that I was using a password at all, and kept insisting that that had a non-trivial likelihood of being the cause (rather than the less-than-one-in-a-billion it actually was), even to the point of making ridiculous statements like Mark saying that an attacker trying "thousands of times per hour" would get in "sooner or later". This was to the exclusion of what was vastly more likely to be the correct answer, which was "Apache, sshd, and CentOS have enough exploits that it's far more likely an attacker got in by finding one of those (and tools like SELinux help mitigate that)."
Again, if I hadn't stood behind the math on the issue of long passwords vs. keys, I probably never would have gotten the answer that was actually useful.
Do you think it's possible that people focused so much on the use of a "password" as a possible cause, vs. the existence of exploits, despite the former being literally about 1 billion times less probable than the latter, because the former puts more of the blame on the user? (Not that anyone is "to blame" that CentOS and Apache have bugs -- everything does -- but that password security would be an issue even with a perfect operating system.)
<snip>
It is a safe assumption that there are httpd exploits in the wild, that are not known by the apache project, that specifically attempt to grab /etc/shadow and send to the attacker. It's also a safe assumption that the attacker will have sufficient horsepower to crack your password from /etc/shadow in a 'reasonable' timeframe for an MD5 hash.
Well I disagree that that's a "safe assumption". If you think that 12-character passwords are within striking distance, try 20-char passwords -- 10^36 possible values to search, so with a botnet of over 1 billion infected computers each checking 10 billion passwords per second (both orders of magnitude beyond what's in play today, and unrealistically assuming that every resource in the world is focused on this one password), it would take on the order of 10 billion years to crack.
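(Spelling out that arithmetic with the same deliberately generous assumed rates:)

# Offline hash-cracking time for a 20-char random password, using
# deliberately generous assumed rates (not measurements).
keyspace_20 = 62 ** 20                 # ~7e35 for [a-zA-Z0-9]
bots = 1_000_000_000                   # assumed one billion machines
rate = 10_000_000_000                  # assumed 1e10 guesses/sec each

seconds = keyspace_20 / (bots * rate)
years = seconds / (3600 * 24 * 365)
print(f"{years:.2e} years to exhaust the space")

Even with these inflated numbers the exhaustive search comes out to billions of years.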
So you don't allow password authentication and you're not vulnerable to a remote /etc/shadow brute-forcing attack regardless of how much horsepower the attacker can throw your way, and regardless of how the attacker got your /etc/shadow (you could even post it publicly and it wouldn't help them any!).
On 01/04/2012 07:47 PM, Bennett Haselton wrote:
<snip>
It is a safe assumption that there are httpd exploits in the wild, that are not known by the apache project, that specifically attempt to grab /etc/shadow and send to the attacker. It's also a safe assumption that the attacker will have sufficient horsepower to crack your password from /etc/shadow in a 'reasonable' timeframe for an MD5 hash.
Well I disagree that that's a "safe assumption". If you think that 12-character passwords are within striking distance, try 20-char passwords -- 10^36 possible values to search, so with a botnet of over 1 billion infected computers each checking 10 billion passwords per second (both orders of magnitude beyond what's in play today, and unrealistically assuming that every resource in the world is focused on this one password), it would take on the order of 10 billion years to crack.
Then we get back to rainbow tables and hashes that have been generated by someone else (with supercomputer access) and published for "X"-sized passwords (you pick the size: 8, 12, 20, 24, etc.). Then they don't need to calculate anything, just do an SQL lookup against a database with what they get from the shadow file. Someone else already cracked all "X"-size logins for all possible iterations.
On 1/5/2012 6:53 AM, Johnny Hughes wrote:
On 01/04/2012 07:47 PM, Bennett Haselton wrote:
<snip>
Well I disagree that that's a "safe assumption". If you think that 12-character passwords are within striking distance, try 20-char passwords -- 10^36 possible values to search, so with a botnet of over 1 billion infected computers each checking 10 billion passwords per second (both orders of magnitude beyond what's in play today, and unrealistically assuming that every resource in the world is focused on this one password), it would take on the order of 10 billion years to crack.
Then we get back to rainbow tables and hashes that have been generated by someone else (with supercomputer access) and published for "X"-sized passwords (you pick the size: 8, 12, 20, 24, etc.). Then they don't need to calculate anything, just do an SQL lookup against a database with what they get from the shadow file. Someone else already cracked all "X"-size logins for all possible iterations.
But if the system adds a salt to the password before taking the hash, then the size of the precomputed rainbow table grows exponentially, because it has to store the hash of all possible passwords to test, with all possible salts. If a 12-char random password (72 bits of randomness) is salted with a 32-bit salt, you now have to precompute about 2^104, or roughly 10^31, values in your rainbow table instead of "only" 2^72, or about 10^21.
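(A quick sketch of why the salt defeats table lookups -- this uses a generic salted hash for illustration, not the exact glibc crypt() format: the same password hashes differently under every salt, so a precomputed table has to cover the password space multiplied by the salt space.)

# Same password, different salts -> completely different hashes, so a
# precomputed (rainbow) table must effectively be built per salt value.
import hashlib, os

def salted_hash(password: str, salt: bytes) -> str:
    # Generic illustration; real /etc/shadow uses crypt()-style schemes.
    return hashlib.sha256(salt + password.encode()).hexdigest()

password = "s8Kq2mVd91Lx"            # hypothetical 12-char random password
for _ in range(3):
    salt = os.urandom(4)             # 32-bit salt
    print(salt.hex(), salted_hash(password, salt))

# Table size needed to cover every (salt, password) pair:
passwords = 62 ** 12                 # ~3.2e21
salts = 2 ** 32
print(f"entries required: {passwords * salts:.2e}")   # ~1.4e31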
Bennett
On 01/05/2012 02:51 PM, Bennett Haselton wrote:
On 1/5/2012 6:53 AM, Johnny Hughes wrote:
On 01/04/2012 07:47 PM, Bennett Haselton wrote:
On 1/4/2012 1:59 PM, Lamar Owen wrote:
[Distilling to the core matter; everything else is peripheral.]
On Jan 4, 2012, at 2:58 PM, Bennett Haselton wrote:
To be absolutely clear: Do you, personally, believe there is more than a 1 in a million chance that the attacker who got into my machine, got it by brute-forcing the password? As opposed to, say, using an underground exploit?
Here's how I see it breaking down:
1.) Attacker uses apache remote exploit (or other means) to obtain your /etc/shadow file (not a remote shell, just GET the file without that fact being logged); 2.) Attacker runs cloud-based (and/or CUDA accelerated) brute-forcer on 10,000,000 machines against your /etc/shadow file without your knowledge; 3.) Some time passes; 4.) Attacker obtains your password using distributed brute forcing of the hash in the window of time prior to you resetting it; 5.) Attacker logs in since you allow password login. You're pwned by a non-login brute-force attack.
In contrast, with ssh keys and no password logins allowed:
1.) Attacker obtains /etc/shadow and cracks your password after some time; 2.) Attacker additionally obtains /root/.ssh/* 3.) Attacker now has your public key. Good for them; public keys don't have to be kept secure since it is vastly more difficult to reverse known plaintext, known ciphertext, and the public key into a working private key than it is to brute-force the /etc/shadow hash (part of the difficulty is getting all three required components to successfully reverse your private key; the other part boils down to factoring and hash brute-forcing); 4.) Attacker also has root's public and private keys, if there is a pair in root's ~/.ssh, which may or may not help them. If there's a passphrase on the private key, it's quite difficult to obtain that from the key; 5.) Attacker can't leverage either your public key or root's key pair (or the machine key; even if they can leverage that to do MitM (which they can and likely will) that doesn't help them obtain your private key for authentication; 6.) Attacker still can't get in because you don't allow password login, even though attacker has root's password.
This only requires an apache httpd exploit that allows reading of any file; no files have to be modified and no shells have to be acquired through any exploits. Those make it faster, for sure; but even then the attacker is going to acquire your /etc/shadow as one of the first things they do; the next thing they're going to do is install a rootkit with a backdoor password.
Brute-forcing by hash-cracking, not by attempting to login over ssh, is what I'm talking about.
I acknowledged that the first time I replied to someone's post saying a 12-char password wasn't secure enough. I hypothesized an attacker with the fastest GPU-driven password cracker in the world (even allowing for 100-factor improvements in coming years) and it would still take centuries to break. I understand about brute-forcing the hash vs. brute-forcing the login, but some others had posted about brute-forcing the login specifically and I was commenting on how ridiculous that was.
This is what I mean when I say 'multilayer metasploit-driven attacks.'
The weakest link is the security of /etc/shadow on the server for password auth (unless you use a different auth method on your server, like LDAP or other, but that just adds a layer, making the attacker work harder to get that all-import password). Key based auth is superior, since the attacker reading any file on your server cannot compromise the security.
Kerberos is better still.
Now, the weakest link for key auth is the private key itself. But it's better protected than any password is (if someone can swipe your private key off of your workstation you have bigger problems, and they will have your /etc/shadow for your workstation, and probably a backdoor.....). The passphrase is also better protected than the typical MD5 hash password, too.
It is the consensus of the security community that key-based authentication with strong private key passphrases is better than any password-only authentication, and that consensus is based on facts derived from evidence of actual break-ins.
Well yes, on average, password-authentication is going to be worse because it includes people in the sample who are using passwords like "Patricia". Did they compare the break-in rate for systems with 12-char passwords vs. systems with keys?
I have nothing in particular against ssh keys - how could anybody be "against ssh keys"? :) My point was that when I asked "How did attackers probably get in, given that the password was a random 12-character string?" people pounced on the fact that I was using a password at all, and kept insisting that that had a non-trivial likelihood of being the cause (rather than the less-than-one-in-a-billion it actually was), even to the point of making ridiculous statements like Mark saying that an attacker trying "thousands of times per hour" would get in "sooner or later". This was to the exclusion of what was vastly more likely to be the correct answer, which was "Apache, sshd, and CentOS have enough exploits that it's far more likely an attacker got in by finding one of those (and tools like SELinux help mitigate that)."
Again, if I hadn't stood behind the math on the issue of long passwords vs. keys, I probably never would have gotten the answer that was actually useful.
Do you think it's possible that people focused so much on the use of a "password" as a possible cause, vs. the existence of exploits, despite the former being literally about 1 billion times less probable than the latter, because the former puts more of the blame on the user? (Not that anyone is "to blame" that CentOS and Apache have bugs -- everything does -- but that password security would be an issue even with a perfect operating system.)
While login-based brute-forcing of a password that is long-enough (based upon sshd/login/hashing speed) is impractical for passwords of sufficient strength, login-based brute forcing is not the 'state of the art' in brute-forcing of passwords. Key-based auth with a passphrase is still not the ultimate, but it is better than only a password, regardless of the strength of that password.
If your password was brute-forced, it really doesn't matter how the attacker did it; you're pwned either way.
It is a safe assumption that there are httpd exploits in the wild, that are not known by the apache project, that specifically attempt to grab /etc/shadow and send to the attacker. It's also a safe assumption that the attacker will have sufficient horsepower to crack your password from /etc/shadow in a 'reasonable' timeframe for an MD5 hash.
Well I disagree that that's a "safe assumption". If you think that 12-character passwords are within striking distance, try 20-char passwords -- 10^36 possible values to search, so with a botnet of over 1 billion infected computers each checking 10 billion passwords per second (both orders of magnitude beyond what's in play today, and unrealistically assuming that every resource in the world is focused on this one password), it would take on the order of 10 billion years to crack.
Then we get back to rainbow tables and hashes that have been generated by someone else (with supercomputer access) and published for "X"-sized passwords (you pick the size: 8, 12, 20, 24, etc). Then they don't need to calculate anything, just do a SQL lookup against a database with what they get from the shadow file. Someone else already cracked all "X"-size logins for all possible combinations.
But if the system adds a salt to the password before taking the hash, then the size of the precomputed rainbow table grows with the number of possible salts, because it has to store the hash of every candidate password under every possible salt. If a 12-char random password (72 bits of randomness) is salted with a 32-bit salt you now have to precompute 10^31 values in your rainbow table instead of "only" 10^21.
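A minimal sketch of that salting point, using a simplified hash(salt + password) construction (real md5-crypt also iterates the hash, but the effect of the salt on a precomputed table is the same either way):

    import hashlib
    import os

    def hash_pw(password: str, salt: bytes) -> str:
        # simplified for illustration; crypt()-style schemes iterate the digest
        return hashlib.md5(salt + password.encode()).hexdigest()

    salt = os.urandom(4)                   # a 32-bit salt, as in the example above
    print(hash_pw("example password", salt))

    candidates = 64 ** 12                  # ~5e21 possible 12-char passwords
    salts = 2 ** 32                        # possible 32-bit salt values
    print(f"a universal rainbow table would need ~{candidates * salts:.1e} entries")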
OK ... you continue to use passwords on your servers. I'll use keys and require a vpn to access mine.
On Jan 5, 2012, at 6:34 PM, Johnny Hughes johnny@centos.org wrote:
On 01/05/2012 02:51 PM, Bennett Haselton wrote:
On 1/5/2012 6:53 AM, Johnny Hughes wrote:
On 01/04/2012 07:47 PM, Bennett Haselton wrote:
On 1/4/2012 1:59 PM, Lamar Owen wrote:
[Distilling to the core matter; everything else is peripheral.]
On Jan 4, 2012, at 2:58 PM, Bennett Haselton wrote:
To be absolutely clear: Do you, personally, believe there is more than a 1 in a million chance that the attacker who got into my machine, got it by brute-forcing the password? As opposed to, say, using an underground exploit?
Keys are definitely more secure than passwords, but a VPN is probably a step back as it provides a pseudo-layer2 gateway to your trusted network.
I might use a key-based authenticating reverse proxy (client cert) that then allows one to ssh in and authenticate using a password. Basically, authenticate to a landing page using a client cert; the proxy then adds your IP to a list of authenticated clients, which opens the ssh port for that client. As long as your browser has the page open and the proxy re-authenticates every X minutes, the ssh port remains open for the client.
-Ross
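A minimal sketch of the idea Ross describes, assuming the proxy has already validated the client certificate and hands the client's IP to a helper like this (the rule layout, timeout, and example address are illustrative):

    import subprocess
    import time

    def allow_ssh(client_ip: str, minutes: int = 10) -> None:
        """Temporarily accept ssh from one authenticated client, then revoke."""
        rule = ["-s", client_ip, "-p", "tcp", "--dport", "22", "-j", "ACCEPT"]
        subprocess.run(["iptables", "-I", "INPUT"] + rule, check=True)
        try:
            # in a real daemon the lease would be renewed each time the proxy
            # re-authenticates the client, rather than a single fixed sleep
            time.sleep(minutes * 60)
        finally:
            subprocess.run(["iptables", "-D", "INPUT"] + rule, check=True)

    allow_ssh("203.0.113.7")               # example client address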
1.) Attacker uses apache remote exploit (or other means) to obtain your /etc/shadow file (not a remote shell, just GET the file without that fact being logged);
I don't mean to thread-hijack, but I'm curious, if apache runs as its own non-root user and /etc/shadow is root-owned and 0400, then how could any exploit of software not running as root ever have access to that file??
On 1/5/2012 9:13 PM, email builder wrote:
I don't mean to thread-hijack, but I'm curious, if apache runs as its own non-root user and /etc/shadow is root-owned and 0400, then how could any exploit of software not running as root ever have access to that file??
It's possible if the kernel is vulnerable to a local root exploit, and the attacker who gained entry to the system via apache was able to use it to elevate privileges.
On Thu, Jan 5, 2012 at 10:13 PM, email builder emailbuilder88@yahoo.com wrote:
I don't mean to thread-hijack, but I'm curious, if apache runs as its own non-root user and /etc/shadow is root-owned and 0400, then how could any exploit of software not running as root ever have access to that file??
Apache starts as root so it can open port 80. Certain bugs might happen before it switches to a non-privileged user. But a more likely scenario would be to get the ability to run some arbitrary command through an apache, app, or library vulnerability, and that command would use a different kernel, library, or suid program vulnerability to get root access. Look back through the update release notes and you'll find an assortment of suitable bugs that have been there...
That makes sense - but in that scenario it seems like the vulnerability is more in some third-party application or tool that happens to be executable by apache. Seems like the best defense against that is not running things like WordPress ;-p :-)
On Fri, Jan 6, 2012 at 1:52 PM, email builder emailbuilder88@yahoo.com wrote:
That makes sense - but in that scenario it seems like the vulnerability is more in some third-party application or tool that happens to be executable by apache. Seems like the best defense against that is not running things like WordPress ;-p :-)
There have been bugs in just about everything - apache itself, php or other modules, or the applications that use them. And in java/struts, etc. if you prefer java web services. You just can't get away from the theme of trading security against convenience - whatever you run that has useful features is probably also going to have vulnerabilities.
On Jan 5, 2012, at 11:13 PM, email builder wrote:
I don't mean to thread-hijack, but I'm curious, if apache runs as its own non-root user and /etc/shadow is root-owned and 0400, then how could any exploit of software not running as root ever have access to that file??
To listen on the default port 80, httpd requires running as root. According to the Apache httpd site, the main httpd process continues running as root, with the child processes dropping privileges to the other user. See: http://httpd.apache.org/docs/2.1/invoking.html
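A toy model of that start-as-root, then drop-privileges sequence (the "apache" account name is an assumption; it is what the CentOS httpd package uses, other distributions differ):

    import os
    import pwd
    import socket

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", 80))                # binding below port 1024 requires root
    listener.listen(16)

    unpriv = pwd.getpwnam("apache")        # assumed unprivileged account name
    os.setgid(unpriv.pw_gid)               # drop the group first, then the user
    os.setuid(unpriv.pw_uid)

    # from this point on the already-bound socket still works, but
    # open("/etc/shadow") fails with a permission error, since we are no longer root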
On Wednesday, January 04, 2012 08:47:47 PM Bennett Haselton wrote:
Well yes, on average, password-authentication is going to be worse because it includes people in the sample who are using passwords like "Patricia". Did they compare the break-in rate for systems with 12-char passwords vs. systems with keys?
And this is where the rubber meets the road. Keys are uniformly secure (as long as the attacker cannot get physical access to the private key); passwords are not.
It is a best practice to not run password auth on a public facing server running ssh on port 22. Simple as that. Since this is such a basic best practice, it will get mentioned anytime anyone mentions using a password to log in remotely over ssh as root; the other concerns and possible exploits are more advanced than this.
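A quick way to check those two directives on a box (this just reads the config file; running `sshd -T` as root shows the effective values and is the authoritative check):

    def audit_sshd(path: str = "/etc/ssh/sshd_config") -> dict:
        """Report PasswordAuthentication and PermitRootLogin as configured."""
        wanted = {"passwordauthentication": None, "permitrootlogin": None}
        with open(path) as cfg:
            for raw in cfg:
                line = raw.split("#", 1)[0].strip()
                parts = line.split(None, 1)
                if len(parts) == 2 and parts[0].lower() in wanted:
                    wanted[parts[0].lower()] = parts[1].strip()
        return wanted

    # None means the directive isn't set and the compiled-in default applies;
    # the goal here is PasswordAuthentication no, PermitRootLogin no (or without-password)
    print(audit_sshd())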
Addressing that portion of this thread: it's been my experience that once an attacker gains root on your server you have a very difficult job on your hands determining how they got in; specialized forensics tools that analyze more than just logs can be required to find this adequately. That is, this is a job for a forensics specialist.
Now, anyone (yes, anyone) can become a forensics specialist, and I encourage every admin to know enough about forensics to at least be able to take a forensics-quality image of a disk and do some simple forensics-quality read-only analysis (simply mounting an ext3/4 filesystem, even read-only, breaks full forensics, for instance). But when it comes to analyzing today's advanced persistent threats and break-ins related to them, you should at least read up on experts in this field like Mandiant's Kevin Mandia (there's a slashdot story about him and exactly this sort of thing; see http://it.slashdot.org/story/12/01/04/0630203/cleaning-up-the-mess-after-a-m... for details). He's a nice guy, too.
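One small, scriptable piece of that discipline: hash the image when it is taken and again before and after analysis, so you can show it was never modified (the path below is just an example):

    import hashlib

    def sha256_of(path: str, chunk: int = 1 << 20) -> str:
        """Chunked SHA-256 of a disk image, for chain-of-custody verification."""
        digest = hashlib.sha256()
        with open(path, "rb") as img:
            while True:
                block = img.read(chunk)
                if not block:
                    break
                digest.update(block)
        return digest.hexdigest()

    print(sha256_of("/forensics/images/sda.img"))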
I suspect that no one on this list would be able or willing to provide a full analysis on-list; privately, perhaps, and/or for a fee.
In conclusion, as I am done with this branch of this thread, I'd recommend you read http://www.securelist.com/en/blog/625/The_Mystery_of_Duqu_Part_Six_The_Comma...
What is the sentiment about having a dedicated box with only ssh on it, and then using that one to raise ssh tunnels to the inside systems? That way there are no other exploits to be used, with denyhosts in effect?
On Thursday, January 05, 2012 02:25:50 PM Ljubomir Ljubojevic wrote:
What is the sentiment about having a dedicated box with only ssh on it, and then using that one to raise ssh tunnels to the inside systems? That way there are no other exploits to be used, with denyhosts in effect?
Without being too specific, I already do this sort of thing, but with two 'bastion' hosts in a failover/load-balanced scenario on physical server hardware.
I use a combination of firewalling to keep incoming port 22 traffic away from the other hosts: NAT rules, Cisco incoming and outgoing ACLs on the multiple routers between the servers and the 'outside' world, iptables, and other means. In particular, Cisco's NAT 'extendable' feature enables interesting layer 4 switching possibilities.
I'm not going to say that it's perfectly secure and won't ever allow a penetration, but it seems to be doing a pretty good job at the moment.
Improvements I could make would include:
1.) Boot and run the bastion hosts from a customized LiveCD or LiveDVD on real DVD-ROM read-only drives with no persistent storage (updating the LiveCD/DVD image periodically with updates and with additional authentication users/data as needed; DVD+RW works very well for this as long as the boot drive is a DVD-ROM and not an RW drive!);
2.) Scheduled rolling reboots of the bastion hosts using a physical power timer (rebooting each machine at a separate time once every 24 hours during hours remote use wouldn't happen; best time is during local lunchtime, actually; the boxes are set to power on automatically upon power restoration after loss);
3.) Port knocking and similar techniques for the bastion hosts in addition to the layered ssh solution in place (I'm using NX, which logs in as the nx user via keys first, then authenticates the user, either with keys or with a password) -- a bare-bones sketch of the knock sequencing follows below;
4.) Packetfence or a similar Snort IDS box sitting on the ethernet VLANs of these boxes with custom rules designed to detect intrusions in progress and dynamically add ACLs to the border routers upon detection (this one will take a while).
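For item 3, a bare-bones sketch of the knock-sequencing logic. A real knock daemon such as knockd watches packets arriving at closed ports; this toy version simply listens on the knock ports, which is enough to show the state machine. The ports and time window are arbitrary examples:

    import socket
    import time

    KNOCK_PORTS = [7001, 8002, 9003]       # example secret sequence
    WINDOW = 10                            # seconds to complete the sequence
    progress = {}                          # client ip -> (next step, deadline)

    listeners = []
    for port in KNOCK_PORTS:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", port))
        s.listen(5)
        s.settimeout(0.5)
        listeners.append(s)

    while True:
        for step_wanted, s in enumerate(listeners):
            try:
                conn, (ip, _) = s.accept()
            except socket.timeout:
                continue
            conn.close()
            step, deadline = progress.get(ip, (0, None))
            if step_wanted == step and (step == 0 or time.time() < deadline):
                if step == len(KNOCK_PORTS) - 1:
                    print(f"{ip} completed the knock; open ssh for it here")
                    progress.pop(ip, None)
                else:
                    progress[ip] = (step + 1, time.time() + WINDOW)
            else:
                progress.pop(ip, None)     # wrong port or too late: start over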
I'm still thinking of unusual ways of securing; I've looked at tarpits and honeypots, too, and have really enjoyed some of the more arcane advice I've seen on this list in the past. I still want the device used to remotely fry the computer in the movie 'Electric Dreams' personally..... :-)
Lamar Owen wrote: <snip>
I'm still thinking of unusual ways of securing; I've looked at tarpits and honeypots, too, and have really enjoyed some of the more arcane advice I've seen on this list in the past. I still want the device used to remotely fry the computer in the movie 'Electric Dreams' personally..... :-)
Wimp. Never read Neuromancer? Don't want black ICE?
mark, hoping for a neural interface without a jack behind the ear....
On 01/05/2012 08:58 PM, Lamar Owen wrote:
1.) Boot and run the bastion hosts from customized LiveCD or LiveDVD on real DVD-ROM read-only drives with no persistent storage (updating the LiveCD/DVD image periodically with updates and with additional authentication users/data as needed; DVD+RW works very well for this as long as the boot drive is a DVD-ROM and not an RW drive!);
How about using a stateless CentOS system as described at http://plone.lucidsolutions.co.nz/linux/io/using-centos-5.2-stateless-linux-..., then mounting the KVM guest's system as read-only, shutting it down and then setting the KVM guest's virtual drive file as read-only for KVM. That way a change from read-only to write would have no effect on the HDD/image.
But I do not know if this is possible from KVM "read-only" point of view.
On 4.1.2012 20:58, Bennett Haselton wrote:
On 1/4/2012 9:32 AM, Lamar Owen wrote:
The slow brute-forcers are at work, and are spreading. ...
Well yes of course an attacker can try *particular* 12-character passwords, I never said they couldn't :) ...
If you enforce the use of ssh keys, an attacker can try passwords but cannot succeed, because he does not have the private key.
You are free, however, to put a 12-character password on your private key; then you have to know your 12-character password plus you have to possess the private key. So the whole argument about brute force becomes moot. More secure or not?
To be absolutely clear: Do you, personally, believe there is more than a 1 in a million chance that the attacker who got into my machine, got it by brute-forcing the password?
I think it was Lamar trying to point out that statistics and probabilities are not applicable to the single individual (at least not to lottery players or captains of big vessels)
On Wed, Jan 4, 2012 at 4:13 PM, Markus Falb markus.falb@fasel.at wrote:
To be absolutely clear: Do you, personally, believe there is more than a 1 in a million chance that the attacker who got into my machine, got it by brute-forcing the password?
I think it was Lamar trying to point out that statistics and probabilities are not applicable to the single individual (at least not to lottery players or captains of big vessels)
And the last post was more to the point that there have been earlier exploits that could have permitted access to the shadow file even if those are currently fixed with updates. And there are lots of other ways to steal a password. Whether it was brute-forced or not is mostly irrelevant. It is reusable and you don't know if someone else has it.
On 1/4/2012 3:01 PM, Marko Vojinovic wrote:
On Wednesday 04 January 2012 11:58:07 Bennett Haselton wrote:
If *everyone* used a 12-char random password, then the odds are that *none* of the 10 million machines attacking 100 million servers would hit on a success, not when there are 10^21 possible passwords to choose from.
It is too naive to identify the statement "something has very low probability" with the statement "it will not happen".
There are processes in nature that have 1 / 10^21 (or any other) probability of happening, but they are detected to actually happen every couple of seconds or so (hint: ask any nuclear physicist).
That's because they are observing quantities of particles on the order of 10^21, so the odds of the event occurring are realistic. (Recall Avogadro's number is 6 x 10^23, the number of particles in one mole of a substance.)
In a security-related context, relying on low probability is always a risk (regardless of how small), and it should be avoided if feasible. IOW, chances of "10^<insert any number here> to one" are *infinitely* bigger than zero. Proof --- divide that number by zero to find out how many times it is bigger. ;-)
You should never rely on any probability count if you have critical security concerns. Yes, I also use a strong password rather than an ssh key (mostly for the same reason you do --- convenience), but I understand the risk of doing so, I don't have any valuable data on the machine, and I never claim that any password is as effective as an ssh key.
Well as I've said it depends on how literally you mean "as effective". If your password is strong enough that there's only a 1 in 10^10 chance of it being broken by an attacker in the next year, then if an alternative method reduces that chance to 1 in 10^20, you could do that, but I wouldn't bother.
Again, I would have been perfectly happy to use ssh keys -- it would have been less work to switch to ssh keys than to write all the messages defending 12-char passwords :) The reason I wrote all those messages about 12-char passwords was not because I wanted to avoid switching to ssh keys. It was because I wanted some alternative suggestions for how an attacker could have gotten in, given that the chance of brute-forcing the password (even if the attacker had obtained the password hash) was so astronomically small!
Btw, I am also one of the "lucky" people who managed to get hacked by ssh brute-forcing. The password was as "random" as it can get, but the attacker just got lucky
Not sure what you mean by "as random as it can get", but -- I can write this in my sleep by now -- if you have a 12-character password, with 10^21 possibilities to search from, the odds of an attacker getting "lucky" and guessing it, are less probable than you being hit by a meteorite tomorrow. I can absolutely guarantee you that either the password was shorter and less random, or else the attacker got it some other way (possibly your machine got infected with malware that captured your password and uploaded it to a botnet).
(he didn't get root, though, just my user password, so I could mitigate the damage). After that I installed fail2ban, but I still don't keep anything valuable on that machine...
However, *then* you have to take into account the fact that, similarly, the odds of a given machine being compromised by a man-in-the-middle attack combined with cryptanalysis of the stream cipher, is *also* overwhelmed by the probability of a break-in via an exploit in the software it's running. I mean, do you think I'm incorrect about that?
Are you basically saying that this is a premature optimization problem? If I understand your argument correctly, some attack vectors are much more probable than others, so guarding against a low-probability attack vector is superfluous, given that there are more probable ones still unguarded. Is that what you are saying here?
If yes, let me stress --- the premature optimization issue is *void* in a security-related context. The main guideline is rather the "cover all bases" principle. The fact that something is unlikely to happen does not mean you should not guard against it, if you can. You may find the pain/gain ratio too high sometimes, and you are welcome to ignore some obvious security holes for the sake of convenience if you like, but you cannot argue that low-probability holes are safe to ignore *in* *principle*. That is where the cover-all-bases always wins over avoiding premature optimization.
It depend on what you mean by "low probability". As I said, if it's less likely than being hit with a meteor, I don't care.
The archives of this list already had the information about SELinux contained in this thread. Not to mention the clear and easily accessible documentation from the upstream vendor linked to from the CentOS website.
Well every one of the thousands of features and functions of Linux is indexed by Google on the web *somewhere* :) The question is whether you'll get pointed to it if you ask for help.
No, this is not the right question. SELinux is enabled by *default* in CentOS, and for a good reason. You had to make a conscious choice to disable it, and if you are a security-aware admin, you should have *first* gotten yourself educated on what you would lose by doing so.
It was disabled by default in every dedicated server and VPS that I've leased from a hosting company (except one time).
So you were already pointed to SELinux (and iptables and some other stuff) by the very fact that you installed CentOS. The real question is why did you disable SELinux without looking at the documentation or asking on this list whether it is useful for you?
If you are ignorant about security software to begin with, you have no right to bitch about relevant information not being available at a glance.
I didn't doubt that SELinux or iptables both do what they say they do, or that they reduce the risk of a break-in. My point was that other pieces of "lore" (like "ssh keys reduce the chance of a break-in more than 12-char passwords") have the potential to become part of "folk wisdom" despite not having been tested directly and despite not actually making any difference.
It's not folk wisdom. The probability of someone guessing your password is nonzero (regardless of how small). The probability of someone "guessing" your ssh key is still much smaller than that. There is an extremely big difference there.
Both methods can be considered "reasonably safe", and at the same time "not completely safe", but one *can* compare *relative* safeness, and conclude that keys are much safer than passwords. Why do you think people invented keys in the first place? Because they were too stupid to see that a good password is "good enough"? I doubt it.
Again, it is the cover-all-bases principle.
Yes, the totality of SELinux restrictions sounds like it could make a system more secure if it helps to guard against exploits in the services and the OS. My point was that some individual restrictions may not make sense.
There is a wrong premise here as well. The idea of SELinux is "if it is not known to be safe/necessary, restrict it", regardless of whether that restriction "makes sense" or not.
If some particular app is known to be safe to use a random port, it will still be restricted because *in* *the* *future* the situation may change for the worse (the app may introduce an unknown exploit via an update). SELinux guards you against such situations, even if *now* there are no known exploits against that app, and it doesn't make sense to restrict it. It may make sense in the future, so it is again the cover-all-bases principle at work.
Security is all about being paranoid, and SELinux is paranoid by design.
Of course, if you have a need to change its behavior, you also have the means to do so. But it should be *your* decision, while the *default* setting should be as paranoid as possible.
Well yes of course an attacker can try *particular* 12-character passwords, I never said they couldn't :) And in this case the attacker stole the passwords by grabbing them en masse from a database, not by brute-forcing them.
You should also be aware that there is no such thing as "random in general". Rather, there is only the "random with respect to"-type of randomness. Give me any 12-character password that is "completely random" by any standards of yours, and I will give you 10 guessing algorithms that will generate precisely that password with just 1/1000 probability.
Um... even if you believe passwords are not completely random, this is still not correct. Even under the most pessimistic assumptions about random password generation, the chance of guessing a randomly generated 12-char password is not 1 in 1000.
I assume you were just using sample numbers, but the trouble is that if you adjust your sample numbers to closer to the real values, they prove the opposite point from the one you're arguing with your original numbers. Even if my random password generator has nonrandomness which takes away 20 bits of randomness from the result, your odds of guessing it are still only 1 in 10^15 -- not so worrisome anymore.
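The arithmetic behind that, plus one way to get a password that really does carry about 72 bits (the 64-symbol alphabet is an assumption; Python's secrets module draws from the OS CSPRNG):

    import math
    import secrets
    import string

    alphabet = string.ascii_letters + string.digits + "+/"   # 64 symbols
    password = "".join(secrets.choice(alphabet) for _ in range(12))

    bits = 12 * math.log2(len(alphabet))    # 72 bits
    print(password)
    print(f"full keyspace:  {2 ** bits:.1e}")          # ~4.7e+21
    print(f"minus 20 bits:  {2 ** (bits - 20):.1e}")   # ~4.5e+15, still enormous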
Look, people are perfectly free to believe that 12-char passwords are insecure if they want. Nobody's stopping you, and it certainly won't make you *less* secure, if it motivates you to use ssh keys. Again, my problem was that the "passwords" mantra virtually shut down the discussion, and I had to keep pressing the point for over 100 messages in the thread before someone offered a suggestion that addressed the real problem, which is exploits in the web server and the operating system.
As ssh passwords get ever more "random" as perceived by users, the attacker scripts will evolve more and more to become efficient at guessing precisely those passwords. Excluding all dictionary-based passwords is just a first step --- an algorithm can be "hardened" to be more and more effective in guessing the passwords generated by some other algorithm (even against the "blindly banging at the keyboard" algorithm). Its efficiency is of course never perfect, but it can get much higher than you would expect.
Consequently, in the future there will be a race between the efficiency of password-guessing algorithms and password-generating algorithms, for every n-character class of passwords. This is a typical use case of expert systems, machine learning and AI in general (you may want to get informed a bit on these topics). ;-)
Therefore, any given password may *look* random (to you, to humans, to some algorithms), but it cannot actually *be* random. This is because in principle there always exist algorithms which will generate that password with higher probability than some other passwords.
You cannot hide behind randomness.
HTH, :-) Marko
On Wed, Jan 4, 2012 at 8:12 PM, Bennett Haselton bennett@peacefire.org wrote:
Yes, the totality of SELinux restrictions sounds like it could make a system more secure if it helps to guard against exploits in the services and the OS. My point was that some individual restrictions may not make sense.
There is a wrong premise here as well. The idea of SELinux is "if it is not known to be safe/necessary, restrict it", regardless of whether that restriction "makes sense" or not.
Even if my random password generator has nonrandomness which takes away 20 bits of randomness from the result, your odds of guessing it are still only 1 in 10^15 -- not so worrisome anymore.
Look, people are perfectly free to believe that 12-char passwords are insecure if they want. Nobody's stopping you, and it certainly won't make you *less* secure, if it motivates you to use ssh keys. Again, my problem was that the "passwords" mantra virtually shut down the discussion, and I had to keep pressing the point for over 100 messages in the thread before someone offered a suggestion that addressed the real problem, which is exploits in the web server and the operating system.
The real point, which you don't seem to have absorbed yet, is that it doesn't work to count on some specific difficulty in the path of an expected attack. The attacker will use a method you didn't expect. You are right that there is a low probability of a single attacker succeeding starting from scratch with brute force network password guessing on a single target. But that doesn't matter, does it?
On 1/3/2012 12:31 PM, Pete Travis wrote:
On Jan 3, 2012 12:36 PM, "Ljubomir Ljubojevic"office@plnet.rs wrote:
On 01/03/2012 04:47 PM, m.roth@5-cent.us wrote:
Having been on vacation, I'm coming in very late in this....
Les Mikesell wrote:
On Tue, Jan 3, 2012 at 4:28 AM, Bennett Haseltonbennett@peacefire.org wrote:
<snip>
OK but those are *users* who have their own passwords that they have chosen, presumably. User-chosen passwords cannot be assumed to be secure against a brute-force attack. What I'm saying is that if you're the only user, by my reasoning you don't need fail2ban if you just use a 12-character truly random password.
But you aren't exactly an authority when you are still guessing about the cause of your problem, are you? (And haven't mentioned what your logs said about failed attempts leading up to the break in...).
Further, that's a ridiculous assumption. Without fail2ban, or something like it, they'll keep trying. You, instead, Bennett, are presumably generating that "truly random" password[1] and assigning it to all your users[2], and not allowing them to change their passwords, and you will be changing it occasionally and informing them of the change.[3]
Right?
mark
1. How will you generate "truly random"? Clicks on a Geiger counter? There is no such thing as a random number generator.
2. Which, being "truly random", they will write down somewhere, or store it on a key, labelling the file "mypassword" or some such.
3. How will you notify them of their new password - in plain text?
Bennett was/is the only one using those systems, and only as root. No additional users existed prior to the breach. And he is very persistent in placing his own opinion/belief above those he asks for help. That is why we have such a long long long thread. It came to the point where I am starting to believe he is a troll. Not sure yet, but it is getting there.
I am writing this for your sake, not his. I decided to just watch from now on. This thread WAS very informative, I did learn A LOT, but enough is enough, and I spent far too much time reading this thread.
--
Ljubomir Ljubojevic (Love is in the Air) PL Computers Serbia, Europe
Google is the Mother, Google is the Father, and traceroute is your trusty Spiderman... StarOS, Mikrotik and CentOS/RHEL/Linux consultant
I'm subscribed to this list just because of threads like this. I want to thank you all for exposing me to knowledge and discussion that reveals far more than manpages or readmes - it helps a lot to know where to start reading, and about what.
I am not a statistician, but I feel an observation should be made on the idea of an 'unguessable password.' A 12 character string may have 12^42 possible permutations,
I'm not sure where you got the 12^42 figure from. My assumption was that each character has about 64 = 2^6 possible values, so there are 2^72 possible passwords, which is on the order of 10^21.
but you are assuming that the correct guess will be the last possible guess.
Actually I was using the fact that the average time to break the password would be the time required to check half of the possible passwords, not that they won't find it until the last possible one. That's still on the order of 10^21.
Simplistic probability puts the odds of success at 50% - either the attacker gets it right, or they don't.
I can't make sense of this statement at all. Just because there are two possible outcomes doesn't mean that each possibility is equally likely -- you might as well say that either the sun comes up tomorrow, or it doesn't, so the odds of success are 50% :) The only time the "50%" figure comes up is that the average time to guess a password is the time taken to check half of all possible passwords, and hence is 50% of the worst-case time to guess a password (which would be the time taken to check all of them).
An intelligent brute forcing tool could be making some assumptions about the minimum length and complexity of your password, and ruling out the dictionary words and strings based on them happens quickly. The next guess has the same rough odds of being correct as the 100563674th guess.
Actually, each time you make a guess and it's wrong, the probability of success goes up slightly for your next guess. Imagine having 10 cups with a ball under one of them. The probability of turning over the right cup on the first try is 1/10. If you're wrong, though, then the probability of getting it right on the next cup goes up to 1/9, and so on.
But it's all a moot point if there are 10^21 possible passwords and the odds of finding the right one in any conceivable length of time are essentially zero.
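For what it's worth, both readings of the cup example are consistent: the conditional chance rises as cups are eliminated, but the unconditional chance that any particular guess is the winning one stays 1/N. A quick check with N = 10:

    N = 10
    p_still_guessing = 1.0
    for k in range(1, N + 1):
        conditional = 1 / (N - k + 1)                 # 1/10, 1/9, ..., given the earlier misses
        unconditional = p_still_guessing * conditional
        print(k, round(conditional, 4), round(unconditional, 4))   # unconditional is always 0.1
        p_still_guessing *= 1 - conditional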
Of course, no amount of guessing will succeed on a system that doesn't accept passwords. System security, in terms of probability, seems to be an 'every little bit helps' sort of endeavour.
Well it depends on how literally you mean "every little bit" :) If the chance of a break-in occurring in the next year from a given attack is 1 in 10^10, you can reduce it to 1 in 10^20, but it's already less likely than your data center being hit by a meteorite. The real problem is that it takes away from time that can be used for things that have a greater likelihood of reducing the chance of a break-in. If I had taken the advice about ssh keys at the beginning of the thread, I never would have gotten to the suggestion about SELinux.
Bennett
Here's the qualifying statement I made, in an attempt to preempt pedantic squabbles over my choice of arbitrary figures and oversimplified math:
I am not a statistician, but
Here is a statement intended to startle you into re-examining your position:
Simplistic probability puts the odds of success at 50% - either the attacker gets it right, or they don't.
Here's the intended take home message:
The next guess has the same rough odds of being correct as the 100563674th guess.
Yes, you have to worry about a brute force attack succeeding, every hour of every day that you give it a window to knock on.
Here is you nitpicking over figures; acknowledging the opportunity for an improvement of several orders of magnitude and disregarding it, stuck in your misconceptions; and wholly missing the point.
Actually, each time you make a guess and it's wrong, the probability of success goes up slightly for your next guess. Imagine having 10 cups with a ball under one of them. The probability of turning over the right cup on the first try is 1/10. If you're wrong, though, then the probability of getting it right on the next cup goes up to 1/9, and so on.
But it's all a moot point if there are 10^24 possible passwords and the odds of finding the right one in any conceivable length of time are essentially zero.
Of course, no amount of guessing will succeed on a system that doesn't accept passwords. System security, in terms of probability, seems to be an 'every little bit helps' sort of endeavour.
Well it depends on how literally you mean "every little bit" :) If the chance of a break-in occurring in the next year from a given attack is 1 in 10^10, you can reduce it to 1 in 10^20, but it's already less likely than your data center being hit by a meteorite. The real problem is that it takes away from time that can be used for things that have a greater likelihood of reducing the chance of a break-in. If I had taken the advice about ssh keys at the beginning of the thread, I never would have gotten to the suggestion about SELinux.
Bennett
I'm moving on from this - much better men than I have tried and failed here.
On 1/3/2012 2:10 PM, Pete Travis wrote:
Here's the qualifying statement I made, in an attempt to preempt pedantic squabbles over my choice of arbitrary figures and oversimplified math:
I am not a statistician, but
Here is a statement intended to startle you into re-examining your position:
Simplistic probability puts the odds of success at 50% - either the attacker gets it right, or they don't.
Oh, did you mean something like, "Let's pick any value p as the probability of the attacker getting in by brute force in a given hour"? OK, that's different. But it's still missing the point that if the odds of an event happening in the next year are lower than the odds of the Earth crashing into the Sun, then it's not worth worrying about.
There's a more basic error in your math. If there are two ways X and Y to attack a server, and X has a 1 in 100 chance of succeeding while Y has a 1 in 10^10 chance of succeeding, then reducing the chance of Y succeeding to 1 in 10^20 is a ten-orders-of-magnitude improvement in Y alone, *not* in the overall chance of a break-in, which is dominated by X and changes by a negligible amount.
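To put some made-up numbers on that (the 1-in-100 and 1-in-10^10 figures are just the hypothetical ones from the paragraph above, and the two attacks are assumed independent):

awk 'BEGIN {
  px  = 1e-2     # attack X: 1 in 100 chance of succeeding
  py1 = 1e-10    # attack Y before hardening
  py2 = 1e-20    # attack Y after hardening
  printf "overall risk before: %.10f\n", 1 - (1 - px) * (1 - py1)
  printf "overall risk after:  %.10f\n", 1 - (1 - px) * (1 - py2)
}'

Both lines print essentially 1 in 100: the ten-orders-of-magnitude improvement in Y is invisible in the overall risk because X dominates it.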
Here's the intended take home message: The next guess has the same rough odds of being correct as the 100563674th guess.
Yes, you have to worry about a brute force attack succeeding, every hour of every day that you give it a window to knock on.
Here is you nitpicking over figures; acknowledging the opportunity for an improvement of several orders of magnitude and disregarding it, stuck in your misconceptions; and wholly missing the point.
Actually, each time you make a guess and it's wrong, the probability of success goes up slightly for your next guess. Imagine having 10 cups with a ball under one of them. The probability of turning over the right cup on the first try is 1/10. If you're wrong, though, then the probability of getting it right on the next cup goes up to 1/9, and so on.
But it's all a moot point if there are 10^24 possible passwords and the odds of finding the right one in any conceivable length of time are essentially zero.
Of course, no amount of guessing will succeed on a system that doesn't accept passwords. System security, in terms of probability, seems to be an 'every little bit helps' sort of endeavour.
Well it depends on how literally you mean "every little bit" :) If the chance of a break-in occurring in the next year from a given attack is 1 in 10^10, you can reduce it to 1 in 10^20, but it's already less likely than your data center being hit by a meteorite. The real problem is that it takes away from time that can be used for things that have a greater likelihood of reducing the chance of a break-in. If I had taken the advice about ssh keys at the beginning of the thread, I never would have gotten to the suggestion about SELinux.
Bennett
I'm moving on from this - much better men than I have tried and failed here.
On Tue, Jan 3, 2012 at 12:48 AM, Bennett Haselton bennett@peacefire.org wrote:
You can also set up openvpn on the server and control ports like ssh to only be open to you if you are using an openvpn client to connect to the machine.
True but I travel a lot and sometimes need to connect to the machines from subnets that I don't know about in advance.
Have you ever typed your password on a machine you didn't control? Or even one that was not completely secure (i.e. could have had a hardware keylogger attached, or a software key logger installed by a trojan, virus, or wifi hack)? If so, you might be missing the most likely possibility for someone having your password: simply grabbing it as you type before ssh gets a chance to encrypt it.
Hello, just in case it helps, please find below the steps I have used to analyze several suspicious machines at customers' sites, to check whether they have been compromised or not:
* chkrootkit && rkhunter -> to search for known rootkits, trojans and common Linux malware
* unhide (http://www.unhide-forensics.info/) -> to check for hidden processes and TCP sockets
* rpm -Va -> to check binary integrity against the installed RPMs
* If the netstat binary looks sane, check the listening sockets
* If the ps binary looks sane, check the running processes it shows
* Check console connections with the "last" and "lastb" commands
* tcpdump on the network interfaces, filtering out traffic for known running services (80, 25, 21, etc., depending on the role of the machine), to check for weird traffic
* grep -i segfault /var/log/* -> to check the logs for buffer overflows
* grep -i auth /var/log/* | grep -i failed -> to check for failed authentication attempts
* lsmod -> to check the loaded kernel modules (it is very difficult to spot something wrong here, but just to be sure nothing weird appears)
* lsof -> to check currently open files
* Check xinetd -> to find out if someone has added a new "service"
* Have a look at /tmp, /opt, /usr/bin, /usr/local/bin, /usr/sbin and the .bash_history files...
* Check /etc/passwd and verify that the accounts listed are legitimate
* Check the crontab for every user to make sure nothing unexpected has been scheduled
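Here's a rough sketch pulling several of those checks into one place, in case it saves someone some typing -- adjust the paths and steps to taste. chkrootkit, rkhunter and unhide are separate packages you would need to install, ideally from known-good media, and strictly speaking nothing run from a compromised box (including these binaries) can be fully trusted:

#!/bin/bash
# Collect the output of the basic checks into one directory for later review.
out=/root/forensics-$(date +%Y%m%d)
mkdir -p "$out"

rpm -Va                       > "$out/rpm-verify.txt"     2>&1
netstat -plnt                 > "$out/listening-tcp.txt"  2>&1
ps auxww                      > "$out/processes.txt"      2>&1
last                          > "$out/last.txt"           2>&1
lastb                         > "$out/lastb.txt"          2>&1
lsmod                         > "$out/lsmod.txt"          2>&1
lsof -n                       > "$out/lsof.txt"           2>&1
grep -i segfault /var/log/*   > "$out/segfaults.txt"      2>&1
grep -i auth /var/log/* | grep -i failed > "$out/auth-failures.txt" 2>&1
for u in $(cut -d: -f1 /etc/passwd); do
    echo "== crontab for $u =="; crontab -l -u "$u"
done                          > "$out/crontabs.txt"       2>&1

# Optional third-party scanners, only run if they are installed
command -v chkrootkit >/dev/null 2>&1 && chkrootkit            > "$out/chkrootkit.txt" 2>&1
command -v rkhunter   >/dev/null 2>&1 && rkhunter --check --sk > "$out/rkhunter.txt"   2>&1
command -v unhide     >/dev/null 2>&1 && unhide proc sys       > "$out/unhide.txt"     2>&1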
Hope the checklist helps... Regards,
On 02/01/12 09:04, Craig White wrote:
On Sun, 2012-01-01 at 14:23 -0800, Bennett Haselton wrote:
(Sorry, third time -- last one, promise, just giving it a subject line!)
OK, a second machine hosted at the same hosting company has also apparently been hacked. Since 2 of out of 3 machines hosted at that company have now been hacked, but this hasn't happened to any of the other 37 dedicated servers that I've got hosted at other hosting companies (also CentOS, same version or almost), this makes me wonder if there's a security breach at this company, like if they store customers' passwords in a place that's been hacked. (Of course it could also be that whatever attacker found an exploit, was just scanning that company's address space for hackable machines, and didn't happen to scan the address space of the other hosting companies.)
So, following people's suggestions, the machine is disconnected and hooked up to a KVM so I can still examine the files. I've found this file: -rw-r--r-- 1 root root 1358 Oct 21 17:40 /home/file.pl which appears to be a copy of this exploit script: http://archive.cert.uni-stuttgart.de/bugtraq/2006/11/msg00302.html Note the last-mod date of October 21.
No other files on the system were last modified on October 21st. However there was a security advisory dated October 20th which affected httpd: http://mailinglist-archive.com/centos-announce/2011-10/00035-CentOSannounce+... https://rhn.redhat.com/errata/RHSA-2011-1392.html
and a large number of files on the machine, including lots of files in */ usr/lib64/httpd/modules/* and */lib/modules/2.6.18-274.7.1.el5/kernel/* , have a last-mod date of October 20th. So I assume that these are files which were updated automatically by yum as a result of the patch that goes with this advisory -- does that sound right?
So a couple of questions that I could use some help with:
- The last patch affecting httpd was released on October 20th, and the
earliest evidence I can find of the machine being hacked is a file dated October 21st. This could be just a coincidence, but could it also suggest that the patch on October 20th introduced a new exploit, which the attacker then used to get in on October 21st? (Another possibility: I think that when yum installs updates, it doesn't actually restart httpd. So maybe even after the patch was installed, my old httpd instance kept running and was still vulnerable? As for why it got hacked the very next day, maybe the attacker looked at the newly released patch and reverse-engineered it to figure out where the vulnerabilities were, that the patch fixed?)
- Since the */var/log/httpd/* and /var/log/secure* logs only go back 4-5
weeks by default, it looks like any log entries related to how the attacker would have gotten in on or before October 21st, are gone. (The secure* logs do show multiple successful logins as "root" within the last 4 weeks, mostly from IP addresses in Asia, but that's to be expected once the machine was compromised -- it doesn't help track down how they originally got in.) Anywhere else that the logs would contain useful data?
The particular issue which was patched by this httpd (Apache) update was a problem with the reverse proxy, so the first question is: did this server actually have a reverse proxy configured?
My next thought: since this particular hacker managed to get access to more than one of your machines, is it possible that there is a mechanism (e.g. a pre-shared public key) that would allow them access to the second server from the first server they managed to crack? The point being that this computer may not have been the one they originally cracked, and there may not be evidence of the cracking on this computer.
The script you identified would seem to be a script for attacking other systems; by the time it landed on your system, that system had already been broken into.
There are some tools to identify a hacker's access, though the traces are often obscured by the hacker...
last # reads /var/log/wtmp and provides a list of users, login date/time, login duration, etc. Read the man page for last to get other options on its usage, including the '-f' option to read older wtmp log files if needed.
lastb # reads /var/log/btmp much as above but lists 'failed' logins, though this requires proactive configuration, and if you didn't do that, you probably will going forward.
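For example (the rotated filename depends on what logrotate kept around, if anything):

last -f /var/log/wtmp        # current login history
last -f /var/log/wtmp.1      # previous rotation, if it exists
lastb | less                 # failed logins, only if btmp logging was enabled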
Look at /etc/passwd to see what users are on your system and then search their $HOME directories carefully for any evidence that their account was the first one compromised. Very often, a single user with a weak password has his account cracked and then a hacker can get a copy of /etc/shadow and brute force the root password.
Consider that this type of activity is often done with 'hidden' files & directories. This hacker was apparently brazen enough to operate openly in /home, so it's likely that he wasn't very concerned about his cracking being discovered.
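A couple of find invocations can help turn that sort of thing up. The paths and dates below are only illustrative, keyed to the Oct 21 timestamp on /home/file.pl mentioned earlier:

# dot-files and dot-directories in places they don't normally belong
find /tmp /var/tmp /dev /home -name ".*" -ls 2>/dev/null

# anything modified on the same day as the exploit script
touch -t 201110210000 /tmp/marker.start
touch -t 201110220000 /tmp/marker.end
find / -xdev -newer /tmp/marker.start ! -newer /tmp/marker.end -ls 2>/dev/null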
The most important thing to do at this point is to figure out HOW they got into your systems in the first place, and discussions of SELinux and yum updates are not useful to that end. Yes, you should always update and always run SELinux, but that's not useful in determining what actually happened.
Make a list of all open ports on this system, check the directories, files and data from all daemons/applications that were exposed (Apache? PHP? MySQL? etc.) and be especially vigilant about any directories where the user apache had write access.
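Something along these lines should produce that list -- /var/www is just an example DocumentRoot, and 'apache' is the default httpd user on CentOS:

netstat -plnt    # listening TCP sockets and the processes that own them
netstat -plnu    # the same for UDP

# directories writable by the apache user, its group, or everyone
find /var/www -type d \( \( -user apache -perm -u+w \) -o \( -group apache -perm -g+w \) -o -perm -o+w \) -ls 2>/dev/null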
Again though, I am concerned that your first action on the first hacked server you discovered was to wipe it out, and I have a notion that the actual cracking may well have occurred on that system, with this (and perhaps other servers) simply being free gifts to the hacker because they had pre-shared keys or the same root password.
Craig
Hello Craig,
On Mon, 2012-01-02 at 01:04 -0700, Craig White wrote:
Very often, a single user with a weak password has his account cracked and then a hacker can get a copy of /etc/shadow and brute force the root password.
This is incorrect. The whole reasoning behind /etc/shadow is to hide the actual hashes from normal system users. /etc/shadow is chown root.root and chmod 0400. Without root access /etc/shadow is not accessible.
Regards, Leonard.
On Tue, Jan 3, 2012 at 11:08 AM, Leonard den Ottolander leonard@den.ottolander.nl wrote:
Hello Craig,
On Mon, 2012-01-02 at 01:04 -0700, Craig White wrote:
Very often, a single user with a weak password has his account cracked and then a hacker can get a copy of /etc/shadow and brute force the root password.
This is incorrect. The whole reasoning behind /etc/shadow is to hide the actual hashes from normal system users. /etc/shadow is chown root.root and chmod 0400. Without root access /etc/shadow is not accessible.
Regards, Leonard.
-- mount -t life -o ro /dev/dna /genetic/research
So, explain this then:
How does something like c99shell allow a local user (not root) to read the /etc/shadow file?
On 01/03/12 1:14 AM, Rudi Ahlers wrote:
How does something like c99shell allow a local user (not root) to read the /etc/shadow file?
presumably it uses a suid utility? I'm not familiar with c99shell, but that's classically how you elevate privileges.
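On that note, it's worth dumping the setuid/setgid binaries on a suspect box and comparing the list against a known-clean install of the same release; anything not owned by an RPM deserves a very close look:

find / -xdev -type f \( -perm -4000 -o -perm -2000 \) -ls 2>/dev/null

# which package, if any, claims each setuid binary
find / -xdev -type f -perm -4000 2>/dev/null | xargs rpm -qf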
Hello Rudi,
On Tue, 2012-01-03 at 11:14 +0200, Rudi Ahlers wrote:
How does something like c99shell allow a local user (not root) to read the /etc/shadow file?
I do not vouch for every app that is written to break good security practices. Try $ ls -l /etc/shadow
If the tool you are using allows normal users access to /etc/shadow it is using some sort of root privileges: either it's a suid tool (ouch) or it needs entries in /etc/sudoers (visudo). In either case, I cannot think of a valid reason to allow normal users access to this file.
http://tldp.org/HOWTO/Shadow-Password-HOWTO.html for more information on shadow passwords.
Regards, Leonard.
On Tue, Jan 3, 2012 at 3:14 AM, Rudi Ahlers Rudi@softdux.com wrote:
Very often, a single user with a weak password has his account cracked and then a hacker can get a copy of /etc/shadow and brute force the root password.
This is incorrect. The whole reasoning behind /etc/shadow is to hide the actual hashes from normal system users. /etc/shadow is chown root.root and chmod 0400. Without root access /etc/shadow is not accessible.
So, explain this then:
How does something like c99shell allow a local user (not root) to read the /etc/shadow file?
The description from here: http://jigneshdhameliya.blogspot.com/2010/03/backdoorphpc99shellw.html and other places says "16. exploit a range of Linux kernel and bash command interpreter vulnerabilities"
In other words, all those things that get listed as 'local' vulnerabilities become remote vulnerabilities as soon as any app or service allows running an arbitrary command.