On Sat, Aug 04, 2012 at 06:19:39PM -0400, Tim Dunphy wrote:
hello list,
I'm trying to write a script that will search through a directory of trace logs for an Oracle database. From what I understand, new files are always being created in the directory, and it's not possible to know the exact names of the files before they are created. The purpose of this is to create service checks in Nagios. Because you don't know the names of the files ahead of time, traditional plugins like check_logs or check_logfiles.pl won't work.
Here's what I was able to come up with:
#!/bin/bash

# all trace files in the udump directory
log1='/u01/app/oracle/admin/ecom/udump/*'

# capture any lines containing each ORA error code
crit1=($(grep 'ORA-00600' $log1))
crit2=($(grep 'ORA-04031' $log1))
crit3=($(grep 'ORA-07445' $log1))

if [ $crit1 ]; then
    echo "$crit1 on ecom1"
    status=2
elif [ $crit2 ]; then
    echo "$crit2 on ecom1"
    status=2
elif [ $crit3 ]; then
    echo "$crit3 on ecom1"
    status=2
fi

echo $status
exit $status
This is a very early version of the script, so as you can see I'm echoing a test message at the end to show the exit status.
The problem with this script is that it only ever detects one error in the logs. If you echo more than one test phrase into a log file, or into multiple log files, it still only picks up one error message.
I was just wondering if anyone on the list might have a suggestion on how best to accomplish this task?
Thanks,
Tim
I'm not sure I understand the problem well, but perhaps something like this:
#!/bin/sh
for log in /u01...../udump/*
do
    egrep -e 'ORA-00600|ORA-04031|ORA-07445' ${log}
done
This will find any line matching any of the ORA- keys. You can capture the return code if you wish.
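For instance (untested, and the "on ecom1" label and messages are just placeholders), egrep's exit status can drive a Nagios-style exit code directly:

#!/bin/sh
# egrep exits 0 when at least one line matched, 1 when none did
egrep -e 'ORA-00600|ORA-04031|ORA-07445' /u01...../udump/* > /dev/null
if [ $? -eq 0 ]; then
    # 2 is the Nagios CRITICAL status
    echo "ORA errors found on ecom1"
    exit 2
else
    echo "no ORA errors on ecom1"
    exit 0
fi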
The output of egrep could be piped to wc -l to echo a count of the errors instead. Filenames could be produced too, with a bit more scripting, which you can obviously handle.
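Something along these lines, say (again untested; the variable names are arbitrary):

#!/bin/sh
pattern='ORA-00600|ORA-04031|ORA-07445'

# total number of matching lines across all trace files
count=`egrep -e "$pattern" /u01...../udump/* | wc -l`

# -l lists only the names of the files that contain a match
files=`egrep -l -e "$pattern" /u01...../udump/*`

echo "$count ORA errors found in: $files"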
Dave