There is a lot of talk about the vulnerable Linux kernel. I'm simply wondering about the telltale signs that a given system has been hacked. What, specifically, does a person look for?
Thanks.
Scott
Scott Ehrlich wrote:
There is a lot of talk about the vulnerable Linux kernel. I'm simply wondering about the telltale signs that a given system has been hacked. What, specifically, does a person look for?
rpm -Va is a good start for modified binaries/libraries. Rootkit detectors are another thing you can try.
Other than that, it's a matter of checking your logs and looking for odd files lying around...
Christopher Chan wrote:
Scott Ehrlich wrote:
There is a lot of talk about the vulnerable Linux kernel. I'm simply wondering about the telltale signs that a given system has been hacked. What, specifically, does a person look for?
rpm -Va is a good start for modified binaries/libraries. Rootkit detectors are another thing you can try.
Other than that, it's a matter of checking your logs and looking for odd files lying around...
Also: processes running that you don't recognize. Users you don't recognize. Logged-in sessions that you don't recognize. Free space shrinking abnormally. An unexpected increase in bandwidth usage.
Ryan
Ryan Pugatch wrote:
Christopher Chan wrote:
Scott Ehrlich wrote:
There is a lot of talk about the vulnerable Linux kernel. I'm simply wondering about the telltale signs that a given system has been hacked. What, specifically, does a person look for?
rpm -Va is a good start for modified binaries/libraries. Rootkit detectors are another thing you can try.
Other than that, it's a matter of checking your logs and looking for odd files lying around...
Also: processes running that you don't recognize. Users you don't recognize. Logged-in sessions that you don't recognize. Free space shrinking abnormally. An unexpected increase in bandwidth usage.
Yeah... one should not assume that those will be hidden by rogue libraries/binaries. Not every case will be taken that far, or will go unspotted before it gets that far.
On Tue, Aug 18, 2009, Scott Ehrlich wrote:
There is a lot of talk about the vulnerable Linux kernel. I'm simply wondering about the telltale signs that a given system has been hacked. What, specifically, does a person look for?
To really know whether a system has been hacked, it's necessary to use something like Tripwire or Aide, taking a baseline before the system is put on-line, and continually monitoring for changes.
By using the 6 P's (Prior Planning Prevents Piss-Poor Performance) it's possible to detect cracks, and even to restore a system without a complete reinstall, as good intrusion detection tools find changed files, new files that crackers have added, and files that have gone missing.
It's also a good idea to check for executables in places they normally shouldn't be: /tmp, /dev/shm on SuSE systems, /var/tmp, and similar directories where crackers like to hide their work. Often these executables will be in directories with names like ``.. '' (note the trailing space) that look legitimate.
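A sweep like that is easy to automate. Here is a minimal Python sketch; the directory list and the trailing-space trick come from the advice above, while the function and variable names are my own illustration (and, as noted elsewhere in this thread, a hacked kernel could still lie to os.walk):

```python
import os
import stat

# common hiding spots named above
SUSPECT_DIRS = ["/tmp", "/var/tmp", "/dev/shm"]

def find_suspicious(paths):
    """Return (executables, odd_names): regular files with an execute bit,
    and directory names that only look legitimate, e.g. ``.. `` with a
    trailing space, or names made entirely of dots."""
    executables, odd_names = [], []
    for top in paths:
        if not os.path.isdir(top):
            continue
        for dirpath, dirnames, filenames in os.walk(top):
            for d in dirnames:
                # a name that changes when stripped has hidden whitespace
                if d != d.strip() or d.strip(".") == "":
                    odd_names.append(os.path.join(dirpath, d))
            for f in filenames:
                full = os.path.join(dirpath, f)
                try:
                    st = os.lstat(full)
                except OSError:
                    continue
                if stat.S_ISREG(st.st_mode) and st.st_mode & stat.S_IXUSR:
                    executables.append(full)
    return executables, odd_names
```

Run `find_suspicious(SUSPECT_DIRS)` from cron and mail yourself anything it turns up.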
There's one crack that adds lines to /etc/inittab to run something called ``ttymon'' that looks reasonable if (a) you don't notice that the file has changed, and (b) you don't have a backup to compare it to.
You cannot trust tools like ``ps'', ``find'', ``netstat'', and ``lsof'' as these are frequently replaced by ones that are modified to hide the cracker's work.
Bill
On Wed, Aug 19, 2009 at 1:57 AM, Bill Campbell <centos@celestial.com> wrote:
You cannot trust tools like ``ps'', ``find'', ``netstat'', and ``lsof'' as these are frequently replaced by ones that are modified to hide the cracker's work.
As a corollary, the only safe way to audit a suspect system is to boot your diagnostic tools from known-good media (e.g. a security live-CD distro).
Check for failed logins in /var/log/messages.
Check whether the /etc/passwd file has been changed.
Use commands like last, w and uptime.
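For the failed-login check, a small Python sketch; the log line shown is typical sshd output, but the exact format varies by distro and syslog configuration, so treat the regex as illustrative:

```python
import re
from collections import Counter

# a typical sshd failure line (format varies by distro and syslog config):
# Aug 18 22:14:05 host sshd[1234]: Failed password for invalid user oracle from 10.0.0.5 port 4022 ssh2
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logins(lines):
    """Tally failed ssh logins per source address.  A trickle is normal
    internet background noise; a flood from one address is a brute-force
    run worth feeding to something like fail2ban."""
    per_ip = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            per_ip[m.group(2)] += 1
    return per_ip
```

Point it at /var/log/messages (or /var/log/secure, where CentOS puts sshd logs) and look at `failed_logins(fh).most_common(10)`.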
2009/8/19 Eduardo Grosclaude eduardo.grosclaude@gmail.com
On Wed, Aug 19, 2009 at 1:57 AM, Bill Campbell <centos@celestial.com> wrote:
You cannot trust tools like ``ps'', ``find'', ``netstat'', and ``lsof'' as these are frequently replaced by ones that are modified to hide the cracker's work.
As a corollary, the only safe way to audit a suspect system is to boot your diagnostic tools from known-good media (e.g. a security live-CD distro).
--
Eduardo Grosclaude
Universidad Nacional del Comahue
Neuquen, Argentina
_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
On Tue, Aug 18, 2009 at 3:53 PM, Scott Ehrlich <srehrlich@gmail.com> wrote:
There is a lot of talk about the vulnerable Linux kernel. I'm simply wondering about the telltale signs that a given system has been hacked. What, specifically, does a person look for?
This is an interesting and frustrating question. Perfect security is impossible, but maybe we can achieve 'good enough'.
On Tue, Aug 18, 2009 at 5:14 PM, Christopher Chan <christopher.chan@bradbury.edu.hk> wrote:
rpm -Va is a good start for modified binaries/libraries.
Problems with rpm:
1) Many files on a system did not come from an rpm, or have good reasons to change, so dealing with false positives is a problem.
2) Some packages are sloppy and no longer verify cleanly even immediately after installation.
3) rpm cannot address memory-only attacks or BIOS attacks.
Still, if rpm tells you some binary has changed since installation, you know you're in trouble. Or you're using prelink.
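If you do run rpm -Va, the noise can be filtered mechanically. A hedged Python sketch: the verify-line layout follows rpm's documented attribute string (width varies slightly between rpm versions), while the filtering policy is my own illustration:

```python
import re

# An rpm verify line: an 8- or 9-character attribute string (e.g. "S.5....T."),
# an optional file-type marker ("c" config, "d" doc, ...), then the path.
LINE_RE = re.compile(r"^(\S{8,9})\s+(?:([cdglr])\s+)?(/.*)$")

def parse_verify(output, ignore_config=True):
    """Return paths whose checksum ('5' column) differs from the rpm database.
    mtime-only changes and edited config files are usually noise; a changed
    checksum on a binary is the alarming case (unless prelink touched it)."""
    suspicious = []
    for line in output.splitlines():
        m = LINE_RE.match(line)
        if not m:
            continue
        attrs, ftype, path = m.groups()
        if ignore_config and ftype == "c":
            continue  # locally edited config files are expected to change
        if "5" in attrs:
            suspicious.append(path)
    return suspicious
```

Feed it the captured output of `rpm -Va`, ideally run with tools from known-good media rather than the suspect system's own binaries.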
Rootkit detectors are another thing you can try.
The problem with the rootkit detectors I have used: many false positives. At least, I hope they were false! Googling around, I found that once you had a positive result, it was a rather involved process to figure out whether you'd really been hacked, or were just having timing problems, or had an unusual configuration.
Other than that, it's a matter of checking your logs and looking for odd files lying around...
And prayer.
On Tue, Aug 18, 2009 at 5:22 PM, Ryan Pugatch <rpug@tripadvisor.com> wrote:
Also, processes running that you don't recognize.
Unfortunately, if you're like me, there are a lot of processes running even on a virgin Linux box (one that has never touched the internet) that you don't recognize. I once tried just making a big file of them and having a cron job send me email when the list changed significantly. This could have caught an unlucky or inept cracker who launched some process named "meEvilCrackhead", but wouldn't have done much to catch someone using an innocuous name, like, say, 'grep'.
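The snapshot-and-diff idea above might look like this in Python; it assumes a Linux /proc with per-process comm files, and, as noted, a kernel-level rootkit can defeat it:

```python
import os

def process_names():
    """Snapshot command names of running processes straight from /proc
    (Linux-specific; assumes a kernel recent enough to have /proc/PID/comm).
    A kernel-level rootkit can hide entries even here, so this only
    catches the careless intruder."""
    names = set()
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open(os.path.join("/proc", pid, "comm")) as fh:
                names.add(fh.read().strip())
        except (IOError, OSError):
            continue  # process exited between listdir and open
    return names

def diff_snapshots(old, new):
    """Return (appeared, vanished) process names relative to a baseline."""
    return sorted(new - old), sorted(old - new)
```

A cron job can pickle the baseline set once, then mail you whatever `diff_snapshots` reports each night.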
Users you don't recognize.
Again, it is possible to catch someone who doesn't bother to get rid of the smoking gun. Someone who has root on your system can create a new user, or they could use a pre-existing user. You can keep an eye out for strange users, but the real problem is spotting familiar users doing stuff they ought not. Even that can be covered if the cracker replaces your tools or hacks the kernel.
Logged in sessions that you don't recognize.
I'm not sure what Ryan means here, unless he is assuming only one person (you) has authorized access to your machine, and you see sessions logged in as you that you know nothing about. Yeah, that would tip you off. If lots of users can log in, there's not much point.
Free space shrinking abnormally.
Again, I'm not really sure what this would mean in practice. Too high a load, too many programs running? Someone with root access could hack the tools you use to monitor this, or even the kernel, and make it really hard to see. That also assumes you really know how much free space you ought to have at a given moment, which, for me, I am ashamed to admit, would be quite rare.
An increase in bandwidth usage that is unexpected.
Now we're talking! Well, I am still pretty ignorant of what a system's bandwidth demand ought to be, but at least you could see the traffic actually happening and make a reasonable investigation of 'what do I have running that would possibly want to talk to IP xxx.yyy.zzz.aaa?' And for once, no matter how good the intruder is, they won't be able to get your own system to lie to you for them (assuming you're using a different system to do the network analysis). But while you analyze the traffic, the bad guys have more time to damage your data.
On Tue, Aug 18, 2009 at 5:58 PM, Christopher Chan <christopher.chan@bradbury.edu.hk> wrote:
Yeah... one should not assume that those will be hidden by rogue libraries/binaries. Not every case will be taken that far, or will go unspotted before it gets that far.
Every intrusion is vulnerable to detection for a while at least, while the intruder is trying to get in and get root. After that they will probably try to cover their tracks.
On Tue, Aug 18, 2009 at 6:57 PM, Bill Campbell <centos@celestial.com> wrote:
To really know whether a system has been hacked, it's necessary to use something like Tripwire or Aide,
And use them very carefully. Even then, they won't help you with memory-only attacks, BIOS tampering, etc. These tools concentrate on verifying that your disk files have not been altered. I don't think they would help with an attack that uses free space (guessing here). Also, they are a pain unless your system stays absolutely static, which in effect means you never use it. Have them ignore your data space, and the hacker can exploit that. On top of that, Linux is constantly updating various files in the background, and of course you need to update software to keep up with security patches. You need to track every change of every file. I doubt many people have the patience.
[snip]
It's also a good idea to check for executables in places they normally shouldn't be: /tmp, /dev/shm on SuSE systems, /var/tmp, and similar directories where crackers like to hide their work. Often these executables will be in directories with names like ``.. '' (note the trailing space) that look legitimate.
I like this, because it might actually be automated. Of course, you're trusting stat or whatever.
[snip]
You cannot trust tools like ``ps'', ``find'', ``netstat'', and ``lsof'' as these are frequently replaced by ones that are modified to hide the cracker's work.
Naturally we are running aide and tripwire from a CD or other read-only medium, why not toss in a copy of these tools as well? Of course, if the kernel has been hacked, even that won't save us, but we have to take what we can get.
On Wed, Aug 19, 2009 at 1:03 AM, Eduardo Grosclaude <eduardo.grosclaude@gmail.com> wrote:
As a corollary, the only safe way to audit a suspect system is to boot your diagnostic tools from known-good media (e.g. a security live-CD distro).
Safe in what sense? When you rebooted, you may have erased from memory the only remaining trace of the intrusion and you are still vulnerable. But at least you *can* trust what the tools are telling you - I hope.
On Thu, Aug 20, 2009 at 12:59 PM, Magnus Holmström <magnus.holmstrom@gmail.com> wrote:
Oops, someone is going to scold you for top-posting.
Check for failed logins in /var/log/messages
Kind of useless, don't you think? Failed logins I can live with. It's successful ones that hurt.
Check whether the /etc/passwd file has been changed.
Certainly, no decent detection system should overlook changes to /etc/passwd. But that's just one of many important files.
Use commands like last, w and uptime.
These tools are kind of weak. If you have a single-user system or the intruder is doing something really obvious (what? rm -rf /? wall 'I am an intruder!'?) they would help. With multiple users they won't tell you enough, and if the intruder is good they will lie to you.
Thanks to all for stimulating thoughts. I've been thinking about this for a while. I need to think some more. xoxo, Dave
On Fri, Aug 21, 2009, Dave wrote:
On Tue, Aug 18, 2009 at 3:53 PM, Scott Ehrlich <srehrlich@gmail.com> wrote:
... stuff deleted
On Tue, Aug 18, 2009 at 6:57 PM, Bill Campbell <centos@celestial.com> wrote:
To really know whether a system has been hacked, it's necessary to use something like Tripwire or Aide,
And use them very carefully. Even then, they won't help you with memory-only attacks, BIOS tampering, etc. These tools concentrate on verifying that your disk files have not been altered. I don't think they would help with an attack that uses free space (guessing here). Also, they are a pain unless your system stays absolutely static, which in effect means you never use it. Have them ignore your data space, and the hacker can exploit that. On top of that, Linux is constantly updating various files in the background, and of course you need to update software to keep up with security patches. You need to track every change of every file. I doubt many people have the patience.
One of the problems I've found with tripwire in particular, and aide to a lesser extent, is that they (a) tend to be very verbose even when nothing has changed, and (b) updating their database is fairly complex. I have developed a system that we use here and at our client sites which uses the tripwire-formatted configuration files but maintains its own database and produces minimal reports of changes (none if nothing has changed). Updating its database after changes have been checked and verified is a simple ``mv'' command.
I review daily reports from over 50 systems every morning, checking the changes found, usually taking no more than 10 minutes a day. The key is to keep the reports simple and to make updating easy (and to have procedures that monitor systems to be sure they're still alive and reporting in).
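The core of such a baseline-and-compare scheme can be sketched in a few lines of Python. To be clear, this is an illustration of the idea, not Bill's actual tool: it hashes with SHA-256 and reports only the differences:

```python
import hashlib
import os

def baseline(paths):
    """Hash every regular file under the given paths.  Save the result
    somewhere an intruder can't reach (read-only media, another host)
    before the box goes on the network."""
    db = {}
    for top in paths:
        for dirpath, _dirnames, filenames in os.walk(top):
            for name in filenames:
                full = os.path.join(dirpath, name)
                try:
                    with open(full, "rb") as fh:
                        db[full] = hashlib.sha256(fh.read()).hexdigest()
                except (IOError, OSError):
                    continue  # unreadable or vanished mid-walk
    return db

def compare(old, new):
    """Report only the differences: changed, added, and missing files."""
    changed = sorted(p for p in old if p in new and old[p] != new[p])
    added = sorted(set(new) - set(old))
    missing = sorted(set(old) - set(new))
    return changed, added, missing
```

Updating the database after a verified change is then just replacing the saved file, in the same spirit as the single ``mv'' described above.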
We also remove prelink from our kickstart installs on CentOS systems because I think the benefits of prelinking are marginal compared with the problems it creates tracking system changes. The changes prelink makes on a system can be removed by turning it off in the appropriate /etc/sysconfig file and waiting a day for the daily maintenance to restore things to their original condition.
[snip]
It's also a good idea to check for executables in places they normally shouldn't be: /tmp, /dev/shm on SuSE systems, /var/tmp, and similar directories where crackers like to hide their work. Often these executables will be in directories with names like ``.. '' (note the trailing space) that look legitimate.
I like this, because it might actually be automated. Of course, you're trusting stat or whatever.
Actually I'm trusting the python os.path.walk and the ``file'' command to check for executables.
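A hedged sketch of that walk-plus-file(1) approach, using os.walk (the modern Python successor to os.path.walk); the `identify` hook is my own addition so the file(1) dependency can be swapped out or stubbed:

```python
import os
import subprocess

def identify_with_file(path):
    """Ask the file(1) command what a path contains (-b suppresses the
    leading filename).  Assumes a standard GNU/BSD ``file'' binary."""
    out = subprocess.check_output(["file", "-b", path])
    return out.decode("utf-8", "replace").strip()

def find_executables(top, identify=identify_with_file):
    """Walk a tree and report files that file(1) identifies as executables
    or scripts, regardless of their permission bits or extension."""
    hits = []
    for dirpath, _dirs, files in os.walk(top):
        for name in files:
            full = os.path.join(dirpath, name)
            kind = identify(full)
            if "executable" in kind or "ELF" in kind:
                hits.append((full, kind))
    return hits
```

Checking content rather than the execute bit catches payloads parked on disk with innocent-looking modes and names.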
[snip]
You cannot trust tools like ``ps'', ``find'', ``netstat'', and ``lsof'' as these are frequently replaced by ones that are modified to hide the cracker's work.
Naturally we are running aide and tripwire from a CD or other read-only medium, why not toss in a copy of these tools as well? Of course, if the kernel has been hacked, even that won't save us, but we have to take what we can get.
We create a file system initially, the same size as ``/'', and make a copy of ``/'' in it identical except for the /etc/fstab entry. This is not mounted in normal operations, but the system can be booted from it to get to a clean system. Of course this must be updated using rsync after significant changes in the root file system.
The key to all of this is to plan for security and intrusion detection at the outset.
... Bill
On Sat, Aug 22, 2009 at 6:49 AM, Bill Campbell <centos@celestial.com> wrote:
I review daily reports from over 50 systems every morning, checking the changes found, usually taking no more than 10 minutes a day. The key is to keep the reports simple and to make updating easy (and to have procedures that monitor systems to be sure they're still alive and reporting in).
So how do you track the inevitable changes? Not saying you can't, just curious. For me, when I look at a batch of changes, some of them are obviously stuff I've done, other stuff not so obvious. I also filter reports through a script that sort of does a diff and makes an attempt to limit the boilerplate. Sometimes it is a bit too terse.
We create a file system initially, the same size as ``/'', and make a copy of ``/'' in it identical except for the /etc/fstab entry. This is not mounted in normal operations, but the system can be booted from it to get to a clean system.
Wow, elaborate. How do you protect this file system from intruders? External and powered off?
Dave
On Sat, Aug 22, 2009, Dave wrote:
On Sat, Aug 22, 2009 at 6:49 AM, Bill Campbell <centos@celestial.com> wrote:
I review daily reports from over 50 systems every morning, checking the changes found, usually taking no more than 10 minutes a day. The key is to keep the reports simple and to make updating easy (and to have procedures that monitor systems to be sure they're still alive and reporting in).
So how do you track the inevitable changes? Not saying you can't, just curious. For me, when I look at a batch of changes, some of them are obviously stuff I've done, other stuff not so obvious. I also filter reports through a script that sort of does a diff and makes an attempt to limit the boilerplate. Sometimes it is a bit too terse.
First off, we don't allow automatic updates on most systems, much preferring to do them manually, which makes it pretty easy to refresh the comparison database immediately after the update is complete. The odds that a cracker will get in and do their dirty deeds while this is going on are pretty low, and can probably be ignored.
We handle pretty much all server software under the OpenPKG portable package management system, so things like spamassassin, amavisd, clamav, and postfix are not the distribution versions but those from OpenPKG (which are generally updated more quickly than the distribution's). A typical occurrence: we get an e-mail from the nightly freshclam update saying that clamav is out of date; I pick up the new sources, update the OpenPKG SRPM, deploy it to the 40 or so systems running it, and expect to see a corresponding set of notices the next morning that files under clamav have changed.
The clusterssh program makes this sort of thing much more efficient as one can execute shell commands on multiple systems simultaneously.
We create a file system initially, the same size as ``/'', and make a copy of ``/'' in it identical except for the /etc/fstab entry. This is not mounted in normal operations, but the system can be booted from it to get to a clean system.
Wow, elaborate. How do you protect this file system from intruders? External and powered off?
That's one way to do it. We also run a fair number of Linux servers under VMware so periodic snapshots and backups simplify the task.
I have not seen many successful cracks of Linux boxes that we have configured from scratch. Some basic things can be done to minimize the chances of cracks.
+ Create the baseline for intrusion detection tools before putting the system online, and monitor it daily.
+ Configure openssh to refuse password authentication, requiring authorized_keys access.
+ Configure openssh with tcp_wrappers support, restricting access by IP address and/or domain names. I consider this absolutely mandatory if one needs to allow username and password authentication.
+ Use fail2ban or similar techniques to quickly block IP addresses that are found probing the system (don't forget to look at POP and IMAP logs for failed login attempts).
+ Use /bin/false as the standard shell for accounts that don't have a good reason for shell access. This does not affect e-mail or most services that a typical ISP customer needs.
+ Use OpenVPN for access. This works well even in hotels with NAT firewalls, and is not easily hacked anonymously.
+ Restrict access to webmin and usermin to local networks so they are not vulnerable to outside attack; these services remain available to outside users who connect with OpenVPN.
+ Restrict webmail, pop, and imap access to secure connections using https, tls, ssl. We have never been able to get the average ISP customer to use good passwords, but every little bit helps.
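A sketch of the openssh-related items above as configuration. The directives are standard OpenSSH and tcp_wrappers syntax, but the addresses and policy are illustrative; check sshd_config(5) and hosts_access(5), and confirm your sshd was built with tcp_wrappers support (as the stock CentOS package was), before deploying:

```
# /etc/ssh/sshd_config -- keys only, no passwords
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
PermitRootLogin no

# /etc/hosts.deny -- default deny for wrapped services
sshd: ALL

# /etc/hosts.allow -- then allow only known sources (examples)
sshd: 192.168.1. .example.com
```

Test from a second session before logging out, so a typo doesn't lock you out of the box.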
Bill
On Sat, Aug 22, 2009 at 6:07 PM, Bill Campbell <centos@celestial.com> wrote:
On Sat, Aug 22, 2009, Dave wrote:
On Sat, Aug 22, 2009 at 6:49 AM, Bill Campbell <centos@celestial.com> wrote:
I review daily reports from over 50 systems every morning, checking the changes found, usually taking no more than 10 minutes a day. The key is to keep the reports simple and to make updating easy (and to have procedures that monitor systems to be sure they're still alive and reporting in).
So how do you track the inevitable changes? Not saying you can't, just curious. For me, when I look at a batch of changes, some of them are obviously stuff I've done, other stuff not so obvious. I also filter reports through a script that sort of does a diff and makes an attempt to limit the boilerplate. Sometimes it is a bit too terse.
First off, we don't allow automatic updates on most systems, much preferring to do them manually, which makes it pretty easy to refresh the comparison database immediately after the update is complete. The odds that a cracker will get in and do their dirty deeds while this is going on are pretty low, and can probably be ignored.
We handle pretty much all server software under the OpenPKG portable package management system, so things like spamassassin, amavisd, clamav, and postfix are not the distribution versions but those from OpenPKG (which are generally updated more quickly than the distribution's). A typical occurrence: we get an e-mail from the nightly freshclam update saying that clamav is out of date; I pick up the new sources, update the OpenPKG SRPM, deploy it to the 40 or so systems running it, and expect to see a corresponding set of notices the next morning that files under clamav have changed.
The clusterssh program makes this sort of thing much more efficient as one can execute shell commands on multiple systems simultaneously.
We create a file system initially, the same size as ``/'', and make a copy of ``/'' in it identical except for the /etc/fstab entry. This is not mounted in normal operations, but the system can be booted from it to get to a clean system.
Wow, elaborate. How do you protect this file system from intruders? External and powered off?
That's one way to do it. We also run a fair number of Linux servers under VMware so periodic snapshots and backups simplify the task.
I have not seen many successful cracks of Linux boxes that we have configured from scratch. Some basic things can be done to minimize the chances of cracks.
+ Create the baseline for intrusion detection tools before putting the system online, and monitor it daily.
+ Configure openssh to refuse password authentication requiring authorized_keys access.
+ Configure openssh with tcp_wrappers support, restricting access by IP address and/or domain names. I consider this absolutely mandatory if one needs to allow username and password authentication.
+ Use fail2ban or similar techniques to quickly block IP addresses that are found probing the system (don't forget to look at POP and IMAP logs for failed login attempts).
+ Use /bin/false as the standard shell for accounts that don't have good reason for shell access. This does not affect e-mail or most services that a typical ISP customer needs.
+ Use OpenVPN for access. This works well even when in hotels with NAT firewalls, and is not easily hacked anonymously.
+ Restrict access to webmin and usermin to local networks so they are not vulnerable to outside attack; these services remain available to outside users who connect with OpenVPN.
Cross-site attacks (CSRF, XSS) make webmin very vulnerable in this scenario. It is a bad idea to use a single browser for everything: if you are already logged in to webmin in Firefox and browse to a malicious site (many reputable sites unknowingly host malicious JavaScript -- see the HoneyNet Project), the malicious site could do nasty things via webmin or any other internal web server. Yes, NoScript may help, but NoScript has to be updated constantly and Firefox restarted.
A better practice is to install three separate browser applications: use something like Epiphany or Dillo only for internal websites, Firefox for email, and Chrome for everything else. The idea is to have completely separate processes using completely separate memory and hard-drive locations.
I don't think there are many malicious variants of Invisible Things' Blue Pill or Blue Chicken, but if a malicious variant can elevate itself to become the hypervisor, then all of your virtual machines could be monitored by a 'hyperkit', a rootkit in the hypervisor. Again, I don't know how many malicious in-the-wild versions of Blue Pill there are, but if just one malicious VMware image is uploaded to Amazon EC2, then every other VM on that same hardware at Amazon could be controlled by a hyperkit. Invisible Things is a professional security research lab in Poland, so the original Blue Pill isn't malicious. Blue Chicken evades detection by moving Blue Pill back and forth between the hypervisor and a VM at will. Others will know more about what is actually in the wild versus in the lab.
+ Restrict webmail, pop, and imap access to secure connections using https, tls, ssl. We have never been able to get the average ISP customer to use good passwords, but every little bit helps.
Bill
INTERNET: bill@celestial.com
Bill Campbell; Celestial Software LLC
URL: http://www.celestial.com/
PO Box 820; 6641 E. Mercer Way
Mercer Island, WA 98040-0820
Voice: (206) 236-1676  Fax: (206) 232-9186
Skype: jwccsllc (206) 855-5792
bad economics will sink any economy no matter how much they believe this time things are different. They aren't. -- Arthur Laffer
On Sat, Aug 22, 2009 at 10:49 AM, Bill Campbell centos@celestial.com wrote:
On Fri, Aug 21, 2009, Dave wrote:
On Tue, Aug 18, 2009 at 3:53 PM, Scott Ehrlich <srehrlich@gmail.com> wrote:
... stuff deleted
On Tue, Aug 18, 2009 at 6:57 PM, Bill Campbell <centos@celestial.com> wrote:
To really know whether a system has been hacked, it's necessary to use something like Tripwire or Aide,
One of the problems I've found with tripwire in particular, and aide to a lesser extent, is that they (a) tend to be very verbose even when nothing has changed, and (b) updating their database is fairly complex. I have developed a system that we use here and at our client sites which uses the tripwire-formatted configuration files but maintains its own database and produces minimal reports of changes (none if nothing has changed). Updating its database after changes have been checked and verified is a simple ``mv'' command.
Another open source tool you might want to consider.
http://ftimes.sourceforge.net/FTimes/index.shtml
-- Drew Einhorn
On 08/19/2009 02:53 AM, Scott Ehrlich wrote:
There is a lot of talk about the vulnerable Linux kernel. I'm simply wondering about the telltale signs that a given system has been hacked. What, specifically, does a person look for?
There have been some really good ideas in this conversation. Would someone like to take ownership of a wiki page that puts all this together, perhaps in the Security section?