Hi,
I am wondering if there is a CentOS command that runs another command and ensures the second command doesn't take more than x seconds, where x is given on the command line.
If the second command is not "done" in time, the first command will just kill it and both will exit.
Does such a method or command exist?
I just need to ensure the second command doesn't just continue to run and run and run.
Thanks,
Jerry
Jerry Geis wrote:
<snip>
Here's my admittedly kludgey quick and dirty way of doing this .... write a shell script that does the following:
1. takes two arguments -- the command to run (in quotes) and then the drop-dead time (in seconds? or minutes?)
2. start the command in the background, saving its PID in a var (say $pid).
3. create an "at" job to kill the pid at the appointed time, as in:
echo kill -TERM $pid | at now + 15 minutes
If the job has already finished, the kill -TERM will hopefully be harmless (assuming the PIDs haven't cycled around so that a new, different job now has the same PID).
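In script form, that recipe might look something like this (untested sketch; argument conventions are assumed: $1 = the command, quoted, $2 = an at-style delay such as "15 minutes"):

#!/bin/bash
# Sketch of the at-based approach: background the command,
# then schedule a kill through atd. Note at's granularity is minutes.
$1 &
pid=$!
echo "kill -TERM $pid" | at now + $2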
Peter Gross wrote:
<snip>
3. create an "at" job to kill the pid at the appointed time, as in:
echo kill -TERM $pid | at now + 15 minutes
I think the best way to do it would be with the sleep command since the 'at' command does not allow you to specify seconds.
In a script (which I presume is your first command) start the second command in the background, get the pid of that second command, then sleep for 5 seconds, and kill it.
Michael
On Saturday 07 April 2007, Michael Velez wrote:
<snip>
I think the best way to do it would be with the sleep command since the 'at' command does not allow you to specify seconds.
In a script (which I presume is your first command) start the second command in the background, get the pid of that second command, then sleep for 5 seconds, and kill it.
Michael
Just building off Michael's idea:

killafter.sh <command> <time>

#!/bin/bash
$1 &
pid=$!
sleep $2
kill -TERM $pid
On 2007-04-07, Shawn Everett shawn@tandac.com wrote:
Just building off Michael's idea:

killafter.sh <command> <time>

#!/bin/bash
$1 &
pid=$!
sleep $2
kill -TERM $pid

Just in case it might have died and the PID been recycled, refer to the job (%1), not the pid:

killafter.sh <command> <time>

#!/bin/bash
$1 &
sleep $2
kill -TERM %1
Another way of doing it might be to fork a sub-shell with limits:
(ulimit -t 1 ; top)
But this is cputime, not walltime...
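A quick demo of the distinction (the 2-second limit is arbitrary): sleep burns almost no CPU, so it outlives the limit, while a busy loop hits it and is killed:

( ulimit -t 2; sleep 10; echo "sleep survived" )          # prints after 10s
( ulimit -t 2; while :; do :; done; echo "not reached" )  # killed by SIGXCPU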
-jf
On Sat, Apr 07, 2007 at 10:30:10AM +0200, Jan-Frode Myklebust wrote:
<snip>
Another way of doing it might be to fork a sub-shell with limits:
(ulimit -t 1 ; top)
But this is cputime, not walltime...
You guys are forgetting 2 things:
1) Maybe he doesn't want to run the process in the background.
This is somewhat easy to solve with a little C program. You trap SIGALRM, fork, execvp, and set an alarm. Another option would be to use wait/wait3/wait4 and alarm, which might be simpler. Or a combination of both.
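For illustration, a shell approximation of that wrapper shape (untested sketch, not the C program; argument names assumed) - the target is technically still a background job, but the wrapper blocks on wait, so the caller sees a single synchronous command:

#!/bin/bash
# $1 = command, $2 = timeout in seconds (argument names assumed)
$1 &
pid=$!
( sleep "$2"; kill -TERM "$pid" 2>/dev/null ) &   # the "alarm"
watchdog=$!
wait "$pid"                    # block until the target exits or is killed
status=$?
kill "$watchdog" 2>/dev/null   # cancel the watchdog if the target finished early
exit $status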
but the main issue is
2) You are forgetting to take startup time into consideration
Say a program takes 2 seconds to start up, for whatever reason (disk I/O, memory, swap, etc.). If you specify 4 seconds as the limit, the program will actually run for only 2 seconds.
I have no idea how to solve this second issue.
[]s
-- Rodrigo Barbosa
"Quid quid Latine dictum sit, altum viditur"
"Be excellent to each other ..." - Bill & Ted (Wyld Stallyns)
On Sat, 2007-04-07 at 12:47 -0300, Rodrigo Barbosa wrote:
<snip>
You are forgetting to take startup time into consideration
Say a program takes 2 seconds to start up, for whatever reason (disk I/O, memory, swap, etc.). If you specify 4 seconds as the limit, the program will actually run for only 2 seconds.
I have no idea how to solve this second issue.
Run the pre-defined script with a nice command (maybe -20, etc.), start the process in the background (with a somewhat smaller nice?), capture its start time and the current time (both in seconds since epoch), take the difference between them, subtract that from the desired run duration, round to seconds/minutes, ..., and use that value.
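A minimal sketch of that arithmetic (untested; argument names assumed; how to detect "startup finished" is the open question):

#!/bin/bash
# $1 = command, $2 = total allowed seconds (names assumed)
begin=$(date +%s)
nice -n 10 $1 &                  # target at a somewhat lower priority
pid=$!
# ... wait here until startup is deemed finished ...
now=$(date +%s)
remaining=$(( $2 - (now - begin) ))
[ "$remaining" -gt 0 ] && sleep "$remaining"
kill -TERM "$pid" 2>/dev/null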
<snip sig stuff>
HTH -- Bill
On Sat, Apr 07, 2007 at 12:00:14PM -0400, William L. Maltby wrote:
<snip>
Run the pre-defined script with a nice command (maybe -20, etc.), start the process in the background (with a somewhat smaller nice?), capture its start time and the current time (both in seconds since epoch), take the difference between them, subtract that from the desired run duration, round to seconds/minutes, ..., and use that value.
Hummm, I still don't see how that can work.
my_prio = getpriority(PRIO_PROCESS, 0);

if (!(cpid = fork())) {
    setpriority(PRIO_PROCESS, 0, -20);
    execvp(...);
}

some_magic();
setpriority(PRIO_PROCESS, cpid, my_prio);
alarm(limit_time);
So here I have the cpid, which is the PID of the process. How can I know when the startup is finished? What is this some_magic() function you are proposing?
One last question: how off-topic are we? :) ehehehe
-- Rodrigo Barbosa
On Sat, 2007-04-07 at 13:15 -0300, Rodrigo Barbosa wrote:
<snip>
So here I have the cpid, which is the PID of the process. How can I know when the startup is finished? What is this some_magic() function you are proposing?
No. I envision the controlling shell started at some hard-coded "nice" value and the other process kicked off with nice at a "lower" hard-coded priority. The shell returns the PID of children - it is captured.
Capture the target process PID from the shell commands (no 'C' envisioned here - otherwise I'd solve it all with sigaction calls - "sleep" and/or "trap" in BASH), and the target process can be assumed to start up reasonably quickly (we *do* "nice" it), or it may have desirable attributes, like leaving a PID file in /var, ...
Also, we could do a small loop looking for CPU time used via "ps" (if a significant enough load is imposed by the target) or such "rabbit tracks" as PID files or whatever. If those are not available, well, a SWAG (Scientific Wild Ass Guess) may suffice for the OP's needs (he didn't sound like precision was an issue at all).
One last question: how off-topic are we? :) ehehehe
:-) Based on past list postings for *many* other threads? Not at all! ;->
<snip sig stuff>
-- Bill
On Sat, Apr 07, 2007 at 12:43:52PM -0400, William L. Maltby wrote:
<snip>
No. I envision the controlling shell started at some hard-coded "nice" value and the other process kicked off with nice at a "lower" hard-coded priority. The shell returns the PID of children - it is captured.
So we can have something like: setpriority(PRIO_PROCESS,0,my_prio-10)
that should have about the same effect.
Capture the target process PID from the shell commands (no 'C' envisioned here - otherwise I'd solve it all with sigaction calls - "sleep" and/or "trap" in BASH)
sigaction is fine by me. I proposed it earlier. I still don't think you can do anything even moderately reliable (in this particular case) using shell only.
If I've got your meaning right, you are proposing something like:
( sleep $TIMER; kill -WHATEVER $PID )&
Which is a pretty interesting way to do it, one I hadn't thought about until now. I had never tried (...)&, and after you said this, I decided to see if it worked. And it did. It is a pretty neat way to create alarms inside a shell script.
and the target process can be assumed to start up reasonably quickly (we *do* "nice" it)
Well, maybe at runlevel 1. Otherwise, we really can't make that assumption. Process scheduling is pretty weird these days (not to mention delays due to I/O).
or it may have desirable attributes, like leaving a PID file in /var, ...
You mean like using libfam/gamin to monitor that file? It _could_ work, but we have no way to guarantee that this will always be the case.
Also, we could do a small loop looking for CPU time used via "ps" (if a significant enough load is imposed by the target) or such "rabbit tracks" as PID files or whatever.
If you want to go as far as that, we can always ptrace the child. But I'm not willing to do something as weird as that.
If those are not available, well, a SWAG (Scientific Wild Ass Guess) may suffice for the OP's needs (he didn't sound like precision was an issue at all).
I know. We can always run a calibration cycle beforehand. We can even dlopen all the libraries the program will need (you can get them either by decoding the ELF header, or even by popen()ing ldd against it and reading the results). Actually, we could even start a child specifically for dlopening the libs (this way we don't affect the calling process) and wait for it to signal back. This child, of course, won't _exit, but enter an idle loop, maybe using a select() where we can communicate with it (and so tell it when to finish).
If you add times(2) to the mix, you can get a somewhat precise calibration (barring sudden usage spikes).
And yes, I think you are right about the OP not needing that much precision. But this is a question that has occurred to me from time to time over the past years, and I never got to discuss it with anyone.
One last question: how off-topic are we? :) ehehehe
:-) Based on past list postings for *many* other threads? Not at all! ;->
Well, this one is at least interesting ;-)
-- Rodrigo Barbosa
On Sat, 2007-04-07 at 14:20 -0300, Rodrigo Barbosa wrote:
<snip>
So we can have something like: setpriority(PRIO_PROCESS,0,my_prio-10)
that should have about the same effect.
Yes. My goal though is to use shell scripts because I assume no "C" expertise from most admins and users, although, based on my experience, it is certain many of them have some.
Capture the target process PID from the shell commands (no 'C' envisioned here - otherwise I'd solve it all with sigaction calls - "sleep" and/or "trap" in BASH)
sigaction is fine by me. I proposed it earlier. I still don't think you can do anything even moderately reliable (in this particular case) using shell only.
Without more info from the OP, you are right about "reliable". Since he specified no precision needs, we are free to assume high precision is needed, as you seem to, or low precision is needed, as I do.
My background leads me to presume the latter in cases of insufficient information for the task. It allows me to use the KISS principle more frequently. He-he! That reminds me of the "inverse KISS" principle (which I *think* I coined) - a large number of my former clients used to "Keep It STUPID, Simpleton" :-)
Needless to say, I garnered much more revenue when fixing up their "inverse KISS" implementations.
If I've got your meaning right, you are proposing something like:
( sleep $TIMER; kill -WHATEVER $PID )&
Which is a pretty interesting way to do it, one I hadn't thought about until now. I had never tried (...)&, and after you said this, I decided to see if it worked. And it did. It is a pretty neat way to create alarms inside a shell script.
Yep. My first *IX experience was with the original shell in PWB V6/7 and subsequent versions. Later I also taught myself some C and such. I guess that's why I'm ambivalent about "bash". What used to be a nice compact "shell" is now a *relative* hog. I think I like "ash" for that reason.
and the target process can be assumed to start up reasonably quickly (we *do* "nice" it)
Well, maybe at runlevel 1. Otherwise, we really can't make that assumption. Process scheduling is pretty weird these days (not to mention delays due to I/O).
As I say above, we can assume a lot since insufficient needs specification was provided. I still stick with low precision until told otherwise.
or it may have desirable attributes, like leaving a PID file in /var, ...
You mean like using libfam/gamin to monitor that file? It _could_ work, but we have no way to guarantee that this will always be the case.
Ummm... I would use the plain old "test" command or bash's "if [ -f ..." stuff, in a small loop with something like "sleep 1". Of course, the bash "-s", or a read of a few bytes from the known file followed by a test using -N, might do the job. Again, in an appropriate loop. It all depends on knowing what the target process is going to do and what leeway we have, such as erasing/removing some file beforehand. If we can do that, I'm sure some "while ... sleep ..; if [ -{s|N} <file> ] ; then ..." construct would do admirably, as sketched below.
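Something along these lines, say (the PID-file path and the target's behavior are assumptions):

# Poll for the target's "rabbit tracks" before starting the countdown.
pidfile=/var/run/target.pid      # assumed marker written by the target
rm -f "$pidfile"
$1 &
while [ ! -s "$pidfile" ]; do    # -s: file exists and is non-empty
    sleep 1
done
# startup observed; start the timing window here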
Also, we could do a small loop looking for CPU time used via "ps" (if a significant enough load is imposed by the target) or such "rabbit tracks" as PID files or whatever.
If you want to go as far as that, we can always ptrace the child. But I'm not willing to do something as weird as that.
If those are not available, well, a SWAG (Scientific Wild Ass Guess) may suffice for the OP's needs (he didn't sound like precision was an issue at all).
I know. We can always run a calibration cycle beforehand. We can even dlopen all the libraries the program will need (you can get them either by decoding the ELF header, or even by popen()ing ldd against it and reading the results). Actually, we could even start a child specifically for dlopening the libs (this way we don't affect the calling process) and wait for it to signal back. This child, of course, won't _exit, but enter an idle loop, maybe using a select() where we can communicate with it (and so tell it when to finish).
That requires that we be able to require the target to have this facility. All of the above are valid strategies but, at the risk of seeming rude, seem a little overkill for the task originally presented to us.
If you add times(2) to the mix, you can get a somewhat precise calibration (barring sudden usage spikes).
As you already know, as soon as we rely on that, there will be a "spike". Something about a joker named Murphy predicting this.
And yes, I think you are right about the OP not needing that much precision. But this is a question that has occurred to me from time to time over the past years, and I never got to discuss it with anyone.
I enjoy this sort of stuff. Don't get to do it often enough. And in the last umpteen years, I haven't kept my shell/C skills up to snuff. Essentially retired from the race after enough decades of it.
One last question: how off-topic are we? :) ehehehe
:-) Based on past list postings for *many* other threads? Not at all! ;->
Well, this one is at least interesting ;-)
Yep. BTW ...
If we knew how much of a cpu hog the app was, we could "ulimit -t" the target and forget all this other stuff.
<snip sig stuff>
-- Bill
On Sat, 2007-04-07 at 14:03 -0400, William L. Maltby wrote:
On Sat, 2007-04-07 at 14:20 -0300, Rodrigo Barbosa wrote:
<snip>
If I've got your meaning right, you are proposing something like:
( sleep $TIMER; kill -WHATEVER $PID )&
Which is a pretty interesting way to do it, one I hadn't thought about until now. I had never tried (...)&, and after you said this, I decided to see if it worked. And it did. It is a pretty neat way to create alarms inside a shell script.
Almost forgot - you might enjoy looking at the "trap" command in bash (and other shells). It allows entry to arbitrary routines based on asynchronous events, similar to signal handling in "C". The receiving function has the choice of resetting the trap, issuing arbitrary actions and signals, setting other traps, etc. Fun stuff.
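A small illustration (the scratch file is just something to clean up):

#!/bin/bash
tmp=$(mktemp)
# Jump into cleanup() on INT or TERM, wherever the script happens to be.
cleanup() { rm -f "$tmp"; echo "cleaned up"; exit 1; }
trap cleanup INT TERM
echo "working..." > "$tmp"
sleep 30    # press Ctrl-C here and cleanup() still runs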
<snip>
-- Bill
On Sat, Apr 07, 2007 at 02:08:35PM -0400, William L. Maltby wrote:
<snip>
Almost forgot - you might enjoy looking at the "trap" command in bash (and other shells). <snip>
Yup, used it in the past.
Actually, my idea for using (...)& is because of the trap. This way, you can trap SIGALRM as you would with sigaction, and use (...)& as you would alarm().
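Spelled out, that pattern might look like this (untested; argument names and the choice of TERM are assumptions):

#!/bin/bash
# The (sleep; kill -ALRM)& subshell plays alarm(); trap plays sigaction.
# $1 = command, $2 = seconds.
$1 &
pid=$!
trap 'echo "timed out"; kill -TERM "$pid" 2>/dev/null' ALRM
( sleep "$2"; kill -ALRM $$ ) &   # $$ still names the parent inside ( )
wait "$pid"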
[]s
-- Rodrigo Barbosa
On Sat, 2007-04-07 at 15:24 -0300, Rodrigo Barbosa wrote:
<snip>
Actually, my idea for using (...)& is because of the trap. This way, you can trap SIGALRM as you would with sigaction, and use (...)& as you would alarm().
Since the (...)& and {...} are in-line, they have somewhat reduced usefulness compared to trap. Because setting the trap, and processing it when a signal is received, returns to whatever script position happened to be executing when the signal arrived, you don't need something like an alarm followed directly by the kill. You can set the alarm, let the shell do some other things, and the signal will cause entry to the "trap code" regardless of what else is going on. And then execution can continue at the place the shell/process was interrupted.
BUT, its use is not nearly as straightforward. And first-time users will have to work a little harder at understanding how to use it effectively (or... at all) because there are no samples in the docs. But they know how to Google, I think. That'll fill in the missing pieces for them.
<snip sig stuff>
Well, end of thread? We might have overstayed our welcome on this already.
I've enjoyed it.
-- Bill
On Sat, Apr 07, 2007 at 02:37:57PM -0400, William L. Maltby wrote:
<snip>
Since the (...)& and {...} are in-line, they have somewhat reduced usefulness compared to trap. <snip>
The idea is to use (...)& to trigger the trap. Just like alarm().
BUT, its use is not nearly as straightforward. <snip>
For sure.
Well, end of thread? We might have overstayed our welcome on this already.
I've enjoyed it.
Humm, maybe. I'll contact you again (off-list, of course) if I decide to go ahead and code this application. You do have a sourceforge account, don't you :-D.
[]s
-- Rodrigo Barbosa
On Sat, Apr 07, 2007 at 12:13:17PM -0700, Akemi Yagi wrote:
On Sat, 07 Apr 2007 14:08:35 -0400, William L. Maltby wrote:
Almost forgot - you might enjoy looking at the "trap" command in bash (and other shells).
Is there a C-shell equivalent of trap?
Doubt it.
-- Rodrigo Barbosa
On Sat, 2007-04-07 at 12:13 -0700, Akemi Yagi wrote:
On Sat, 07 Apr 2007 14:08:35 -0400, William L. Maltby wrote:
Almost forgot - you might enjoy looking at the "trap" command in bash (and other shells).
Is there a C-shell equivalent of trap?
Never used C-shell. "man csh" shows a section "Signal handling" that mentions "onintr". In the builtins later on, it has "hup" processing and an expanded description of "onintr". It doesn't seem to be quite as powerful as bash's "trap", but without testing, one never knows what was omitted from the docs.
The onintr seems to handle the special condition of traps for interrupts of a general nature, and hang-ups specifically. Whether it *really* handles general interrupts, as implied by the use of that word, or just interrupts from the keyboard, is unclear to me.
Regardless, from a quick scan of its narrative, it seems less powerful than "trap". But that may be only due to my ignorance.
<snip sig stuff>
-- Bill
Almost forgot - you might enjoy looking at the "trap" command in bash (and other shells).
Is there a C-shell equivalent of trap?
Sure. Sub-shell the job out to a sane shell environment and work there. :-P Friends don't let friends use csh.
On Sat, 07 Apr 2007 17:16:32 -0400, Jim Perrin wrote:
Is there a C-shell equivalent of trap?
Sure. Sub-shell the job out to a sane shell environment and work there. :-P Friends don't let friends use csh.
.....I'd better stay home where seldom is heard a discouraging word.....
Akemi
On Sat, Apr 07, 2007 at 02:03:10PM -0400, William L. Maltby wrote:
So we can have something like: setpriority(PRIO_PROCESS,0,my_prio-10)
that should have about the same effect.
Yes. My goal though is to use shell scripts because I assume no "C" expertise from most admins and users, although, based on my experience, it is certain many of them have some.
Oh, we can just create a nice little GPLed C application, generic enough to suit most cases.
Capture the target process PID from the shell commands (no 'C' envisioned here - otherwise I'd solve it all with sigaction calls - "sleep" and/or "trap" in BASH)
sigaction is fine by me. I proposed it earlier. I still don't think you can do anything even moderately reliable (in this particular case) using shell only.
Without more info from the OP, you are right about "reliable". Since he specified no precision needs, we are free to assume high precision is needed, as you seem to, or low precision is needed, as I do.
Actually, I'm assuming low precision too. I just got fascinated by this issue.
My background leads me to presume the latter in cases of insufficient information for the task. It allows me to use the KISS principle more frequently. He-he! That reminds me of the "inverse KISS" principle (which I *think* I coined) - a large number of my former clients used to "Keep It STUPID, Simpleton" :-)
LOL.
Needless to say, I garnered much more revenue when fixing up their "inverse KISS" implementations.
Tell me about it. I have spent a lot of time lately migrating SERVERS built with Kurumin (ick!) to CentOS.
If I've got your meaning right, you are proposing something like:
( sleep $TIMER; kill -WHATEVER $PID )&
Which is a pretty interesting way to do it, one I hadn't thought about until now. I had never tried (...)&, and after you said this, I decided to see if it worked. And it did. It is a pretty neat way to create alarms inside a shell script.
Yep. My first *IX experience was with the original shell in PWB V6/7 and subsequent versions. Later I also taught myself some C and such. I guess that's why I'm ambivalent about "bash". What used to be a nice compact "shell" is now a *relative* hog. I think I like "ash" for that reason.
Oh, I did a lot of shell coding too. Even though, I have to confess, I usually cheat with a lot of sed and awk :)
or it may have desirable attributes, like leaving a PID file in /var, ...
You mean like using libfam/gamin to monitor that file? It _could_ work, but we have no way to guarantee that this will always be the case.
Ummm... I would use the plain old "test" command or bash's "if [ -f ..." stuff, in a small loop with something like "sleep 1". <snip>
Your solution would have a precision of 5 to 10 seconds, I estimate. If that is good enough, it is a simple way to do it. That should give higher than 95% precision, usually higher than 98%. Not bad for a small script.
Also, we could do a small loop looking for CPU time used via "ps" (if a significant enough load is imposed by the target) or such "rabbit tracks" as PID files or whatever.
If you want to go as far as that, we can always ptrace the child. But I'm not willing to do something as weird as that.
If those are not available, well, a SWAG (Scientific Wild Ass Guess) may suffice for the OP's needs (he didn't sound like precision was an issue at all).
I know. We can always run a calibration cycle beforehand. We can even dlopen all the libraries the program will need (you can get them either by decoding the ELF header, or even by popen()ing ldd against it and reading the results). Actually, we could even start a child specifically for dlopening the libs (this way we don't affect the calling process) and wait for it to signal back. This child, of course, won't _exit, but enter an idle loop, maybe using a select() where we can communicate with it (and so tell it when to finish).
That requires that we be able to require the target to have this facility. All of the above are valid strategies but, at the risk of seeming rude, seem a little overkill for the task originally presented to us.
It IS overkill :) I'm just considering a generic implementation, not the needs of the OP. Actually, I'm considering creating a small GPLed program to do this, so I have to cover as many situations as possible.
I think you misunderstood me when you say the target has to have this facility. I was talking about a calibration child, not the exec() one.
If you add times(2) to the mix, you can get a somewhat precise calibration (barring sudden usage spikes).
As you already know, as soon as we rely on that, there will be a "spike". Something about a joker named Murphy predicting this.
I know. The only (somewhat) reliable way to do this is to trap the ELF loader, not the target program itself. And that would be nasty to do.
I was even considering using some kind of LD_PRELOAD trick for this. Now, THAT would be overkill :)
And yes, I think you are right about the OP not needing that much precision. But this is a question that has occurred to me from time to time over the past years, and I never got to discuss it with anyone.
I enjoy this sort of stuff. Don't get to do it often enough. And in the last umpteen years, I haven't kept my shell/C skills up to snuff. Essentially retired from the race after enough decades of it.
I second that. Since I started my own company, the business side of it is taking so much time I'm getting really rusty on shell/C skills.
One last question: how off-topic are we? :) ehehehe
:-) Based on past list postings for *many* other threads? Not at all! ;->
Well, this one is at least interesting ;-)
Yep. BTW ...
If we knew how much of a cpu hog the app was, we could "ulimit -t" the target and forget all this other stuff.
Well, it IS possible to run the target a few times and gather the information using either times(2) (in C) or time(1) (in a shell). That would give us the values we need. It would be kind of precise, too, since we could look past all those I/O and scheduler issues.
Which might be exactly what the OP needs.
I imagine a calibration loop, where we would run the program a few times, with different (...)& alarms and maybe a few "ulimit -t" passes, gathering time(1) values.
Then, with that data in hand, we could save that value inside a cache/db for further use. From that point on, the script would only need to check that database for the appropriate value when executing a given program.
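A back-of-the-envelope version of that loop (untested; the cache location and run count are assumptions; bash's time builtin with TIMEFORMAT='%U' prints user CPU seconds):

#!/bin/bash
# Calibrate: run the target a few times, cache the average user CPU time.
prog="$1"
db="$HOME/.timeout-cache"        # assumed format: "<command> <seconds>"
runs=3
TIMEFORMAT='%U'                  # make the time builtin print user CPU only
samples=""
for i in $(seq $runs); do
    t=$( { time $prog >/dev/null 2>&1; } 2>&1 )
    samples="$samples $t"
done
avg=$(echo "$samples" | awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; print s / NF }')
echo "$prog $avg" >> "$db"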
-- Rodrigo Barbosa
On Sat, 2007-04-07 at 15:37 -0300, Rodrigo Barbosa wrote:
<snip>
Your solution would have a precision of 5 to 10 seconds, I estimate. If that is good enough, it is a simple way to do it. That should give higher than 95% precision, usually higher than 98%. Not bad for a small script.
5 - 10 seconds =:-O I think it would be better than that... if we have the right "trigger". Knowing, for instance, that the last "setup" issue would be some distinct event (like opening a new output file - probably not /var/pid, because that should be early) would then allow us to consider all remaining activity to be "processing". Then, if wall clock were the only criterion, we s/b pretty accurate. Naturally, on heavily loaded servers, some other mile-marker, like user CPU time, would be better. But on single-user workstations, the simple remedies we have touched on will certainly do better than an 8 to 10 second window. That's betting the user doesn't run the stuff during his nightly full-copy-backup operation.
<snip>
It IS overkill :) I'm just considering a generic implementation, not the needs of the OP. Actually, I'm considering creating a small GPLed program to do this, so I have to cover as many situations as possible.
I think you misunderstood me when you say the target has to have this facility. I was talking about a calibration child, not the exec() one.
Yes, I misunderstood. But along that vein, it seems to me then that the calibration should be a concurrent process, so that it can determine in "real time" when enough "wall clock" time has passed. As long as wall clock is the criterion, we are stuck with using any pre-acquired benchmarks as reference data only. The current processing environment/load would need to be blended in *throughout* the life of the target application, and the "calibrator" would then decide in real time that mayhem was due.
If we get to that point, it *seems* to me that the only reliable metric becomes user CPU time and/or I/O completions, depending on the application's profile.
And that would tend to indicate that relatively high precision (I hope my understanding of your concept of that is "good enough") can only be approached (ignoring real-time acquisition of the target's accounting, I/O, CPU, ...) by the calibrator running concurrently and seeing the current deviation from the database of captured test runs of the past.
<snip>
I enjoy this sort of stuff. Don't get to do it often enough. And in the last umpteen years, I haven't kept my shell/C skills up to snuff. Essentially retired from the race after enough decades of it.
I second that. Since I started my own company, the business side of it is taking so much time I'm getting really rusty on shell/C skills.
Ditto when I had my own biz. Plus, I found I didn't like "managing" others. They weren't enough like me! =:O
<snip>
Well, it IS possible to run the target a few times and gather the information using either times(2) (in C) or time(1) (in a shell). That would give us the values we need. It would be kind of precise, too, since we could look past all those I/O and scheduler issues.
Which might be exactly what the OP needs.
I imagine a calibration loop, where we would run the program a few times, with different (...)& alarms and maybe a few "ulimit -t" passes, gathering time(1) values.
Then, with that data in hand, we could save that value inside a cache/db for further use. From that point on, the script would only need to check that database for the appropriate value when executing a given program.
<snip sig stuff>
-- Bill
On Sat, Apr 07, 2007 at 03:21:45PM -0400, William L. Maltby wrote:
On Sat, 2007-04-07 at 15:37 -0300, Rodrigo Barbosa wrote:
Your solution would have a precision of 5 to 10 seconds, I estimate. If that is good enough, it is a simple way to do it. That should give higher than 95% precision, usually higher than 98%. Not bad for a small script.
5 - 10 seconds =:-O I think it would be better than that... if we have the right "trigger". <snip>
My chief worry is other processes generating I/O, not the target itself. That is why I assign that kind of precision.
It IS overkill :) I'm just considering a generic implementation, not the needs of the OP. <snip>
I think you misunderstood me when you say the target has to have this facility. I was talking about a calibration child, not the exec() one.
Yes, I misunderstood. But along that vein, it seems to me then that the calibration should be a concurrent process, so that it can determine in "real time" when enough "wall clock" time has passed. <snip>
The idea is for the calibrator to get a general feeling of the machine load before starting the target. The process I meant to keep open is the one that dlopen()s for preloading.
If we get to that point, it *seems* to me that the only reliable metric becomes user CPU time and/or I/O completions, depending on the application's profile.
And the general system state.
And that would tend to indicate that relatively high precision can only be approached by the calibrator running concurrently and seeing the current deviation from the database of captured test runs of the past.
That would be the ideal case. And coupled with the pre-calibration I proposed earlier, it would yield even more precise results.
I enjoy this sort of stuff. Don't get to do it often enough. And in the last umpteen years, I haven't kept my shell/C skills up to snuff. Essentially retired from the race after enough decades of it.
I second that. Since I started my own company, the business side of it is taking so much time I'm getting really rusty on shell/C skills.
Ditto when I had my own biz. Plus, I found I didn't like "managing" others. They weren't enough like me! =:O
TELL ME ABOUT IT :) ehehehehe
[]s
-- Rodrigo Barbosa
William L. Maltby wrote:
Yes. My goal though is to use shell scripts because I assume no "C" expertise from most admins and users, although, based on my experience, it is certain many of them have some.
(sleep 5 && kill $$) &
exec program_that_better_work_fast
On Sat, 2007-04-07 at 13:37 -0500, Les Mikesell wrote:
William L. Maltby wrote:
Yes. My goal though is to use shell scripts because I assume no "C" expertise from most admins and users, although, based on my experience, it is certain many of them have some.
(sleep 5 && kill $$) &
exec program_that_better_work_fast
That works, of course. But the majority of the thread has been oriented toward greater precision when considering "startup" times. This has the flaw that Rodrigo (sp?) was trying to address. But I sure didn't think of the "$$". That saves a couple of lines.
-- Bill
On 07/04/07, Jerry Geis geisj@pagestation.com wrote:
<snip>
I found something on the web ages ago that does just this - see:
http://www.splode.com/~friedman/software/scripts/src/with-timeout
James Pearson