Anyone know how to submit jobs to at or anything else that allows jobs submitted to a queue to be executed consecutively?
I have a series of servers that submit a job via an ssh background job, but I can only have one execute at any given time.
Possibly some clever bash work?
Thanks! jlc
On Wed, Apr 07, 2010 at 03:57:14AM +0000, Joseph L. Casale wrote:
Anyone know how to submit jobs to at or anything else that allows jobs submitted to a queue to be executed consecutively?
I have a series of servers that submit a job via an ssh background job, but I can only have one execute at any given time.
Are you looking for a real job scheduler? If so, it might be overkill, but you might want to look into software like Sun Grid Engine, Torque, or Condor (there are quite a few other schedulers out there). If those aren't what you're hoping for, perhaps you could give more details about what you're trying to accomplish.
--keith
Are you looking for a real job scheduler? If so, it might be overkill, but you might want to look into software like Sun Grid Engine, Torque, or Condor (there are quite a few other schedulers out there). If those aren't what you're hoping for, perhaps you could give more details about what you're trying to accomplish.
Based on John's and Les's recommendations for a different cluster need, I am reading up on Torque etc. now, but this is just a trivial need I want to plug ASAP. It really is as simple as I wrote out earlier: multiple servers ssh a background job, consisting of an rsync command, at nearly the same time. The destination host gets overwhelmed with more than one at a time...
Thanks, jlc
On Wed, 2010-04-07 at 05:17 +0000, Joseph L. Casale wrote:
Are you looking for a real job scheduler? If so, it might be overkill, but you might want to look into software like Sun Grid Engine, Torque, or Condor (there are quite a few other schedulers out there). If those aren't what you're hoping for, perhaps you could give more details about what you're trying to accomplish.
Based on John's and Les's recommendations for a different cluster need, I am reading up on Torque etc. now, but this is just a trivial need I want to plug ASAP. It really is as simple as I wrote out earlier: multiple servers ssh a background job, consisting of an rsync command, at nearly the same time. The destination host gets overwhelmed with more than one at a time...
--- Not too hard to do... Here's one way. Say you have 10 compute nodes. They exec rsync at a selected time interval, pointing at the target machine. SSH is your key here. You can write a wrapper script to ssh to all nodes at once with one command and then exec the rsync command. It will only work with key authentication.
In fact I know this method works; I use it weekly to generate reports or run updates on all boxes at once. The key here also is that it has to be the same command executed on all nodes from one head controller.
If you could explain it in a bit more detail, it would be a lot clearer what you really want. The drawback is that if you cron the job on every node, you must have a precise NTP time server on the local subnet, or you're effectively PPPing in the wind.
John
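A minimal sketch of the wrapper John describes, assuming ssh key auth is already in place. The node names and command are illustrative, and the leading echo makes it a dry run:

```shell
#!/bin/sh
# Dry-run sketch of a one-command fan-out over ssh with key auth.
# Node names and the default command are illustrative; drop the leading
# "echo" (and background each ssh with "&" followed by a final "wait")
# to actually run the command on all nodes at once.
NODES="node01 node02 node03"
CMD=${1:-uptime}
for n in $NODES; do
    echo ssh -o BatchMode=yes "$n" "$CMD"
done
```

BatchMode=yes makes ssh fail fast instead of prompting for a password, which is what you want when this runs unattended from cron.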
If you could explain it in a bit more detail, it would be a lot clearer what you really want. The drawback is that if you cron the job on every node, you must have a precise NTP time server on the local subnet, or you're effectively PPPing in the wind.
John, They all mirror to this one file server based on a snapshot they take at that time as the data is constantly changing.
Once they all complete their mirror to this file server, that file server is then instructed by ssh with key auth to rsync that data set remotely.
The file server can handle any number of local dumps onto it, but if more than one local machine instructs it to rsync remotely, that is where the trouble starts.
Dropping the jobs via ssh into 'at', for example, would be nice, if only 'at' executed the job queue sequentially.
Batch still might execute more than one job at a time, correct? The qjob script looks promising...
Thanks guys! jlc
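One lightweight way to get that one-at-a-time behavior on the file server, sketched here under the assumption that it runs Linux with util-linux flock(1). The lock path is illustrative, and the echo stands in for the real rsync:

```shell
#!/bin/sh
# Sketch: each ssh'd-in job calls this wrapper on the file server.
# flock(1) blocks on the lock file until it is free, so concurrent
# callers wait their turn and the remote rsync runs strictly one at a
# time. The echo is a stand-in for the real rsync invocation; the lock
# path, source path, and destination host are all illustrative.
LOCK=/var/tmp/rsync-remote.lock
flock "$LOCK" echo "would run: rsync -a /srv/dump/ remote.example.com:/srv/mirror/"
```

Note flock serializes but does not guarantee FIFO ordering among waiters; if strict submission order matters, a queue (like the FIFO idea below in the thread) is a better fit.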
On Wed, Apr 07, 2010 at 11:47:10AM +0000, Joseph L. Casale wrote:
If you could explain it in a bit more detail, it would be a lot clearer what you really want. The drawback is that if you cron the job on every node, you must have a precise NTP time server on the local subnet, or you're effectively PPPing in the wind.
John, They all mirror to this one file server based on a snapshot they take at that time as the data is constantly changing.
Once they all complete their mirror to this file server, that file server is then instructed by ssh with key auth to rsync that data set remotely.
The file server can handle any number of local dumps onto it, but if more than one local machine instructs it to rsync remotely, that is where the trouble starts.
Looks like a FIFO. It may work by feeding the commands through a named pipe to a script that just waits for them and executes one at a time.
Mihai
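A minimal sketch of that FIFO idea. The queue path, sentinel, and submitted command are illustrative; each submitted line should stay under PIPE_BUF so concurrent writers' lines aren't interleaved:

```shell
#!/bin/sh
# Consumer: read one command line at a time from a named pipe and finish
# it before reading the next. Opening the FIFO read-write (3<>) keeps the
# loop alive between writers. QUIT is an illustrative shutdown sentinel.
QUEUE=$(mktemp -u)        # e.g. /var/tmp/rsync.queue in real use
mkfifo "$QUEUE"
(
    exec 3<> "$QUEUE"
    while read -r cmd <&3; do
        [ "$cmd" = "QUIT" ] && break
        sh -c "$cmd"      # jobs run strictly one at a time, in order
    done
) &
# A client (the ssh'd-in job) submits work by writing a single line:
echo 'echo job done' > "$QUEUE"
echo 'QUIT' > "$QUEUE"
wait
rm -f "$QUEUE"
```

In the real setup the consumer loop would run persistently on the file server, and each local machine's ssh job would just echo its rsync command line into the pipe.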
jlc wrote:
John wrote:
If you could explain it in a bit more detail, it would be a lot clearer what you really want. The drawback is that if you cron the job on every node, you must have a precise NTP time server on the local subnet, or you're effectively PPPing in the wind.
They all mirror to this one file server based on a snapshot they take at that time as the data is constantly changing.
<snip>
Dropping the jobs via ssh into 'at', for example, would be nice, if only 'at' executed the job queue sequentially.
Batch still might execute more than one job at a time, correct? The qjob script looks promising...
Coming into this late in the thread... you can, or cannot, do more than one rsync from each machine at a time?
'Bout a year ago, I wrote a Perl script that had threads, and the number of concurrent jobs was merely a variable that I set. A cron job that ran that would do you, I think.
mark
Joseph L. Casale wrote:
Are you looking for a real job scheduler? If so, it might be overkill, but you might want to look into software like Sun Grid Engine, Torque, or Condor (there are quite a few other schedulers out there). If those aren't what you're hoping for, perhaps you could give more details about what you're trying to accomplish.
Based on John's and Les's recommendations for a different cluster need, I am reading up on Torque etc. now, but this is just a trivial need I want to plug ASAP. It really is as simple as I wrote out earlier: multiple servers ssh a background job, consisting of an rsync command, at nearly the same time. The destination host gets overwhelmed with more than one at a time...
If you don't mind introducing a single point of failure, pick a control host and ssh all the commands in a shell script that loops over the list of targets to run it.
On Wed, 2010-04-07 at 07:33 -0500, Les Mikesell wrote:
Joseph L. Casale wrote:
Are you looking for a real job scheduler? If so, it might be overkill, but you might want to look into software like Sun Grid Engine, Torque, or Condor (there are quite a few other schedulers out there). If those aren't what you're hoping for, perhaps you could give more details about what you're trying to accomplish.
Based on John's and Les's recommendations for a different cluster need, I am reading up on Torque etc. now, but this is just a trivial need I want to plug ASAP. It really is as simple as I wrote out earlier: multiple servers ssh a background job, consisting of an rsync command, at nearly the same time. The destination host gets overwhelmed with more than one at a time...
If you don't mind introducing a single point of failure, pick a control host and ssh all the commands in a shell script that loops over the list of targets to run it.
--- And yes, that is what I shared that I currently do in the previous thread. But I do not use ssh_cluster now. Actually the one controller host is at 99.999% uptime at present (IBM AIX, knock on wood). My next thing is to do job queues through the MRG platform, utilizing the Python modules on two machines, and dump costly AIX.
John
JohnS wrote:
On Wed, 2010-04-07 at 07:33 -0500, Les Mikesell wrote:
Joseph L. Casale wrote:
Are you looking for a real job scheduler? If so, it might be overkill, but you might want to look into software like Sun Grid Engine, Torque, or Condor (there are quite a few other schedulers out there). If those aren't what you're hoping for, perhaps you could give more details about what you're trying to accomplish.
Based on John's and Les's recommendations for a different cluster need, I am reading up on Torque etc. now, but this is just a trivial need I want to plug ASAP. It really is as simple as I wrote out earlier: multiple servers ssh a background job, consisting of an rsync command, at nearly the same time. The destination host gets overwhelmed with more than one at a time...
If you don't mind introducing a single point of failure, pick a control host and ssh all the commands in a shell script that loops over the list of targets to run it.
And yes, that is what I shared that I currently do in the previous thread. But I do not use ssh_cluster now. Actually the one controller host is at 99.999% uptime at present (IBM AIX, knock on wood). My next thing is to do job queues through the MRG platform, utilizing the Python modules on two machines, and dump costly AIX.
A very long time ago I did some generic job queueing by hooking scripts into the Unix lpr print spooler. That had the advantage that jobs could be submitted from Windows boxes through Samba by going through the motions of printing them, but things have changed quite a bit since then and I don't know if that would still be easy. If the commands are always the same, or the variable parts can be read as a list from a file, a shell loop is probably the easiest approach. You can also make a shell script read from a named pipe, which will wait for something to be written, but you have to be careful about the size of the writes if there are concurrent writers, to keep them from being interleaved.
On Thu, 2010-04-08 at 07:34 -0500, Les Mikesell wrote:
A very long time ago I did some generic job queueing by hooking scripts into the Unix lpr print spooler. That had the advantage that jobs could be submitted from Windows boxes through Samba by going through the motions of printing them, but things have changed quite a bit since then and I don't know if that would still be easy. If the commands are always the same, or the variable parts can be read as a list from a file, a shell loop is probably the easiest approach. You can also make a shell script read from a named pipe, which will wait for something to be written, but you have to be careful about the size of the writes if there are concurrent writers, to keep them from being interleaved.
--- I thought I was the only one who had that idea... I guess not. Using the spooler.
John
On Wed, Apr 07, 2010 at 03:57:14AM +0000, Joseph L. Casale wrote:
Anyone know how to submit jobs to at or anything else that allows jobs submitted to a queue to be executed consecutively?
I have a series of servers that submit a job via an ssh background job, but I can only have one execute at any given time.
Probably batch will do it:
batch executes commands when system load levels permit; in other words, when the load average drops below 0.8, or the value specified in the invocation of atd.
Mihai
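For reference, submitting to batch is just a pipe, though note it gates on load average rather than guaranteeing one-at-a-time execution. A sketch (host and paths are illustrative; the submission line is commented out since it needs atd running):

```shell
# Sketch: stage the job in a script and hand it to batch(1).
# Host and paths are illustrative.
job=$(mktemp)
cat > "$job" <<'EOF'
rsync -a /srv/dump/ remote.example.com:/srv/mirror/
EOF
# batch < "$job"    # atd starts it once the load average permits
```

Because the gate is load average, two queued rsyncs can still overlap on a lightly loaded box, which is exactly the failure mode described earlier in the thread.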
Anyone know how to submit jobs to at or anything else that allows jobs submitted to a queue to be executed consecutively?
This could be of some help: http://www.theillien.com/Sys_Admin_v12/html/v14/i08/a8.htm
Chris
This could be of some help: http://www.theillien.com/Sys_Admin_v12/html/v14/i08/a8.htm
Chris, after looking everything over, I'd love to use a high-powered scheduler for this, but it's such overkill. The FIFO idea fits perfectly, and it seems this script has done the heavy lifting already.
Thanks for all the comments by everyone, jlc
Greetings,
On Thu, Apr 8, 2010 at 12:38 AM, Joseph L. Casale jcasale@activenetwerx.com wrote:
cssh (ClusterSSH) anybody?
Regards,
Rajagopal