Hi
I am using rsyslog to get logs to a central box, and they are stored in the format of
/<hostname>/<year>/<month>/<day>/<logfilename>
I need a solution that can trawl through these directories and pick up exceptions: failed logons, sudo usage, that sort of thing.
Has anyone got any clues as to what might help achieve this? I am looking into logsurfer, but I'm not sure it handles the directory structure nicely.
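For concreteness, the kind of scan I have in mind looks something like this rough Python sketch, which walks the tree above and prints lines matching a few typical auth patterns. The root path and the regexes (sshd "Failed password", PAM "authentication failure", sudo COMMAND= lines) are illustrative assumptions, not anything I actually run:

#!/usr/bin/env python3
# Rough sketch only: walk a /<hostname>/<year>/<month>/<day>/<logfile>
# tree and print lines matching a few auth-related patterns.
# LOG_ROOT and the regexes below are illustrative assumptions.
import os
import re

LOG_ROOT = "/var/log/hosts"          # assumed root of the rsyslog tree

PATTERNS = re.compile(
    r"Failed password"               # typical sshd failed-logon line
    r"|authentication failure"       # typical PAM failure line
    r"|sudo: .*COMMAND="             # typical sudo invocation line
)

for dirpath, dirnames, filenames in os.walk(LOG_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        with open(path, errors="replace") as fh:
            for line in fh:
                if PATTERNS.search(line):
                    print(f"{path}: {line.rstrip()}")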
Thanks for any tips.
Good question. How many servers do you have to collect logs from?
I'd like to hear from people who have used Splunk and/or Prelude in an environment with, say, 500 < x < 1000 servers for log collection, and who can voice a few opinions.
The problem, as the author recognizes, is not collection but retrieval and processing (a cron job that deletes the logs periodically does not qualify as "processing"...).
Rainer
Good question. How many servers do you have to collect logs from?
A few thousand, ultimately.
I'd like to hear from people who have used Splunk and/or Prelude in an environment with, say, 500 < x < 1000 servers for log collection, and who can voice a few opinions.
In the long term I might use LogLogic or something similar, but in the interim I'd like to know if there are people out there doing similar things with a tool I can evaluate in the short term.
I recently ran across the Octopussy project, which looks interesting. I haven't tried it out yet, though. Can't say that I like the URL too much either. http://www.8pussy.org/doku.php -- David
D'oh, sorry for the top post. Need to pay more attention to that with Gmail. -- David
I recently ran across the Octopussy project, which looks interesting.
Interesting, thanks.
I haven't tried it out yet, though. Can't say that I like the URL too much either. http://www.8pussy.org/doku.php
;-) They should _really_ never, ever let that domain name expire....
rainer@ultra-secure.de wrote:
I'd like to hear from people who have used Splunk and/or Prelude in an environment with, say, 500 < x < 1000 servers for log collection, and who can voice a few opinions.
I use Splunk with a few hundred systems, and it works all right. Using it well can take some time, though, creating the reports and so on, but it does make searching and reporting very easy.
Splunk licenses based on the amount of indexed data it collects per day, so you should know how much data you're going to index before you buy, and of course allow plenty of headroom.
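As a back-of-the-envelope way to get that number, a sketch like the one below sums yesterday's raw log sizes under a /<hostname>/<year>/<month>/<day>/ tree like the one described earlier in the thread. The root path and headroom factor are made-up placeholders, and raw syslog volume is only a rough proxy for what Splunk will actually index:

#!/usr/bin/env python3
# Back-of-the-envelope sizing sketch, not a Splunk tool: sum the sizes of
# yesterday's log files in a /<hostname>/<year>/<month>/<day>/ tree to
# estimate daily volume. LOG_ROOT and HEADROOM are assumed placeholders.
import os
from datetime import date, timedelta

LOG_ROOT = "/var/log/hosts"    # assumed root of the central log tree
HEADROOM = 1.5                 # 50% growth margin; pick your own

yesterday = date.today() - timedelta(days=1)
suffix = os.path.join(f"{yesterday:%Y}", f"{yesterday:%m}", f"{yesterday:%d}")

total_bytes = 0
for host in os.listdir(LOG_ROOT):
    day_dir = os.path.join(LOG_ROOT, host, suffix)
    if not os.path.isdir(day_dir):
        continue
    for name in os.listdir(day_dir):
        total_bytes += os.path.getsize(os.path.join(day_dir, name))

gib = total_bytes / 2**30
print(f"yesterday: {gib:.2f} GiB; license for roughly {gib * HEADROOM:.2f} GiB/day")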
I have a friend who works over at T-Mobile, one of the biggest Splunk customers in the world; they do something well over 1 TB of new data per day, and it works OK for them (off the record it sucks, but it sucks FAR less than everything else they have tried).
nate
We will most likely go with LogLogic in the future, but I need something in the interim.
On 16-04-2010 16:38, rainer@ultra-secure.de wrote:
I'd like to hear from people who have used Splunk and/or Prelude in an environment with, say, 500 < x < 1000 servers for log collection, and who can voice a few opinions.
I've recently set up syslog-ng to collect syslog from about 60 machines (and counting); I don't know if I'll reach that range.
I'd like to know of good Free Software replacements for Splunk, oriented to log analysis, if anyone can speak of any.
Right now, another absolutely crappy solution from a famous 3-letter-acronym company is being used, even though the users would prefer Splunk.
I'd like to show off something about as good as Splunk for log analysis.
Rui