Joseph L. Casale wrote:
>> This isn't a complete answer, but a possible approach: I'd use a named
>> pipe on the destination host. Here's a little experiment to demonstrate.
>>
>> [user@dhost temp]$ mkfifo pipe
>> [user@dhost temp]$ while true ; do `< pipe` ; done
>>
>> Now, in another xterm:
>>
>> [user@dhost temp]$ echo date > pipe
>> [user@dhost temp]$ echo 'ls -l' > pipe
>>
>> You can 'echo' commands to the pipe in multiple windows; the while loop
>> will read and execute only one command at a time off the pipe. You can
>> see where this is going - now your multiple servers just ssh those echo
>> commands to the destination host and the corresponding commands are
>> executed.
>>
>> I'll leave it to you to make it suitably robust if you go this way;
>> you'll need to add some error handling, possibly signal handling, etc.,
>> but that's just standard shell scripting.
>>
>> Best,
>>
>> --- Les Bell
>
> Thank you all very much! This is the exact approach I am trying out now,
> since this is such a small-scale, quick need. Once I roll out Torque or
> something for my earlier problem, I could apply it here as well.
>
> The command I execute is always the same shell script with a unique
> parameter passed into it. I'll look into tuning this up so it accepts
> the single word passed in via ssh, then executes the bash script with
> this as $1.

Two things to keep in mind here.

1) The 'echo' command above will block until something on the other side
reads the input. This means that when you are trying to queue a command,
the session will simply sit there until the server is ready to process
it. This may be a problem if there is a firewall or something in the
middle that kills inactive connections.

2) In the above example, "echo exit > pipe" will cause the terminal
session reading the pipe to close. It would be safer to run the command
in a subshell:

$ while true; do (`< pipe`); done

--
Bowie
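
For the single-word case described above, the reading side can be fleshed
out into a small script. This is only a sketch: the pipe path
(/var/tmp/jobqueue) and worker name (/usr/local/bin/worker.sh) are made-up
placeholders, not anything from this thread. Because each queued word is
passed to a fixed script as $1 rather than executed directly, a stray
"exit" in the queue becomes a harmless argument, which also sidesteps
point 2:

#!/bin/bash
# Sketch of a queue listener: read one word per line from a named
# pipe and hand it to a worker script as its first argument.
PIPE=/var/tmp/jobqueue            # placeholder path
WORKER=/usr/local/bin/worker.sh   # placeholder script; takes the word as $1

# Create the FIFO if it is not already there.
[ -p "$PIPE" ] || mkfifo "$PIPE" || exit 1
trap 'rm -f "$PIPE"' EXIT         # remove the FIFO when the listener dies

while true ; do
    # Opening the pipe blocks until a writer connects; the inner loop
    # then drains every line written before the writers close, so
    # nothing buffered in the pipe is lost between opens.
    while IFS= read -r word ; do
        # Light validation: accept a single plain word, skip the rest.
        case $word in
            ''|*[!A-Za-z0-9._-]*) continue ;;
        esac
        "$WORKER" "$word"         # one job at a time, in arrival order
    done < "$PIPE"
done

Jobs still run strictly one at a time because the worker is invoked in
the foreground of the loop.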
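
On each of the queuing servers, submitting work is then the kind of
echo-over-ssh Les described; 'dhost' and 'job42' below are stand-ins for
the real host and parameter:

[user@shost ~]$ ssh dhost 'echo job42 > /var/tmp/jobqueue'

Per point 1, if the listener is not running (or is between opens of the
pipe), that echo - and with it the ssh session - will sit and wait, so
idle-connection timeouts along the path are worth checking.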