...of a shell script for rsync that won't start again if it is already running? I thought of using a lock file, but what if it is killed mid script or bombs?
Actually, if you use a language that supports flock(), do the following:
Create the lock file at installation of the script. In the script, call flock(2) in a non-blocking manner against this file. If you don't acquire the lock, then exit with a suitable message. If you do get the lock, do your stuff and exit.
Locks acquired by flock() automatically go away when the file handle is closed (and the file handle automatically gets closed, like it or not, after you exit a process ... also, in various languages, scoping will apply).
This is easily done in Perl (man perlfunc and look up flock()), and if you're comfortable in C it's pretty trivial to create said wrapper.
The key is that the file must pre-exist before your script is ever called: there is a race condition on creating the file, but if you guarantee the file's pre-existence, the race condition is removed. Typically I would make the "lock" file owned by the package that delivers the software. No fuss, no muss.
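The same idea can be sketched in plain shell with the flock(1) utility (a sketch, not the Perl/C wrapper described above; the lock file path and the fd number 9 are arbitrary choices here):

```shell
#!/bin/sh
# Non-blocking flock() from shell, via the flock(1) utility.
# /tmp/myscript.lock is a placeholder; ideally it pre-exists
# (delivered by the package) to avoid the creation race.
LOCKFILE=/tmp/myscript.lock

# open the lock file on fd 9 and try a non-blocking exclusive lock
exec 9>"$LOCKFILE"
if ! flock -n 9; then
    echo "Another instance is already running; exiting." >&2
    exit 1
fi

# ... do your stuff (the rsync run) ...

# no cleanup needed: the kernel drops the lock when the process
# exits, even if the script is killed or bombs
```

Because the lock dies with the process, the "killed mid script" case from the original question takes care of itself.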
Good luck...james
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
James Olin Oden spake the following on 8/29/2006 11:48 AM:
I was hoping to throw this together in bash, but I guess I'll dig out my perl book. I need some practice anyways.
Scott Silva wrote:
You write your PID to the lock file; then at startup, you check whether the lock file exists, and if the process with that PID is running, then it is still active.
It could be that the process is just hung, so this method will only help ensure that you do not start a second copy.
Something like this works alright:
# Check if process is running already
SCRIPT_NAME=do_stuff
pidfile=/tmp/$SCRIPT_NAME.pid

if [ -f $pidfile ]; then
    runpid=`cat $pidfile`
    if ps -p $runpid; then
        echo "$SCRIPT_NAME is already running ... stopping"
        exit 1
    else
        echo "$SCRIPT_NAME pid found but process dead, cleaning up and starting ..."
        rm -f $pidfile
    fi
else
    echo "No $SCRIPT_NAME Process Detected ... starting"
fi

echo $$ > $pidfile
On 8/29/06, mike.redan@bell.ca mike.redan@bell.ca wrote:
As long as you don't have multiple scripts starting at approximately the same time. I use this same algorithm sometimes, but I always use flock(), because otherwise there is a race condition. Even using flock() does not remove the race on the creation of the file, though, which is why, if you really want to remove the race condition, you make sure the file is pre-existent (via package delivery or something to that effect).
At least this is how I understand the problem....james
James Olin Oden spake the following on 8/29/2006 12:42 PM:
Basically it is for an rsync script that runs once an hour, but occasionally, the process takes more than an hour, and another one starts. I also thought of using the --timeout parameter for rsync, but wanted to do a little option checking.
Yep. Definitely. If multiple scripts start up at the same time, then that pid file will get smashed with multiple values. This works nicely if you are just looking to make a long-running script a little more polite.
Mike
On Tue, 2006-08-29 at 15:25 -0400, mike.redan@bell.ca wrote:
Maybe I am being dense here ... BUT ...
Doesn't the "echo $$" only happen AFTER the else process is finished?
If you make the "else" process be the rsync script, then it will not create $pidfile until after the rsync is done ... which does not help you.
If you leave the else process as is and kick off the rsync after the echo $$, then it is not the same PID that you wrote to $pidfile, and you will start more than one rsync process ... as the PID that you wrote to $pidfile was the echo process ... that already finished ... or am I mistaken?
The idea is to place that bit of code at or near the beginning of your script, then have the rsync process start after the "echo $$". That will put the PID of your script into that file, the rsync process will be started in the script, and the script will not end until the rsync one does ... so you are fairly safe that two instances of your script will not run at the same time.
Mike
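Putting that ordering together, the whole script might look like this (a sketch; the script name and the rsync arguments are placeholders):

```shell
#!/bin/sh
# Pid-file check first, then record our own PID, then the long work.
SCRIPT_NAME=do_stuff
pidfile=/tmp/$SCRIPT_NAME.pid

if [ -f "$pidfile" ]; then
    runpid=`cat "$pidfile"`
    if ps -p "$runpid" >/dev/null 2>&1; then
        echo "$SCRIPT_NAME is already running ... stopping"
        exit 1
    fi
    echo "$SCRIPT_NAME pid found but process dead, cleaning up ..."
    rm -f "$pidfile"
fi

echo $$ > "$pidfile"   # $$ is the script's own PID, not the echo's

# the long-running work; the script (and its PID) lives until it exits
# rsync -a /source/ /backup/   <- placeholder
sleep 1

rm -f "$pidfile"
```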
On Tue, 2006-08-29 at 15:59 -0400, mike.redan@bell.ca wrote:
OK ... I see.
The rsync process is a second PID ... but the first PID is also still open until after the script closes.
Johnny Hughes spake the following on 8/29/2006 1:02 PM:
I tried the above, but was able to start multiple copies of the script. So I will have to see what is not catching the runs. The lockfile is there, but not stopping another execution of the script.
On Tue, August 29, 2006 5:07 pm, Scott Silva wrote:
I use the lockfile utility. Take a look at its man page, it is well written. Here is how I use it (Bourne shell):
# try to create the lock and check the outcome
lockfile -r 0 /var/run/myapp.lock 1>/dev/null 2>&1
status=$?

if [ ${status} -ne 0 ]; then
    echo "Another instance already running. Aborting."
    exit 1
fi

# body of the script here

# remove the lock at the end of the script
rm -f /var/run/myapp.lock
I would run it from inittab, with a sleep at the end to make up the right delay; this way only one copy at a time will ever run....
P.
Also...
running it from inittab will ensure that even if the process crashes, it will run again next time, unlike if you use a self-created lockfile....
P.
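For reference, the inittab entry might look something like this (the id rs1, the runlevels, and the script path are all made-up values; respawn restarts the script each time it exits, and the sleep inside the script paces the runs):

```
# /etc/inittab entry (hypothetical id, runlevels, and path):
rs1:345:respawn:/usr/local/bin/mirror.sh

# and at the end of mirror.sh, to pace the runs:
sleep 3600
```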
On Tue, Aug 29, 2006 at 11:13:43AM -0700, Scott Silva wrote:
1. Use a directory for locking instead of a file, because testing for existence and creating the directory is an atomic operation.
2. Use trap to remove the lockdir.
so basically:
if ! mkdir $LOCKDIR; then
    echo "Lockdir exists; aborting."
    exit 1
fi
trap "rmdir $LOCKDIR; echo 'mirror script exiting'" EXIT TERM INT QUIT

rsync $RSYNCOPTS $REMOTE $LOCAL
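Filled out with example values, the mkdir-and-trap approach becomes a complete script (the LOCKDIR path is illustrative, and the rsync line is left as a placeholder; substitute your own):

```shell
#!/bin/sh
# mkdir is an atomic test-and-create, so there is no race;
# the trap removes the lock dir on normal exit and on TERM/INT/QUIT
# (a kill -9 will still leave it behind).
LOCKDIR=/tmp/mirror.$$.lock

if ! mkdir "$LOCKDIR" 2>/dev/null; then
    echo "Lockdir exists; aborting."
    exit 1
fi
trap 'rmdir "$LOCKDIR"; echo "mirror script exiting"' EXIT TERM INT QUIT

# placeholder for the real transfer:
# rsync $RSYNCOPTS $REMOTE $LOCAL
sleep 1
```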