Hi
I have a process that creates 'some data' and outputs this to standard out, and I want to shift this data over ssh to a remote box without ever writing anything locally. I have been experimenting with tar to create the archive, as I don't know what the contents of 'some data' might be, so I just need to capture it and output it on the other side.
I have been trying with
$ tar czf - . | ssh -q 192.168.122.2 "tar xzf -"
and this works fine to create an archive of the '.' directory and pipe it over to the other side, but I want to take standard out, so....
$ tar czf - `the thing that generates standard out here` | ssh -q 192.168.122.2 "tar xzf -"
Would that work, or is there a better way to get this over to the other side? It needs to be a data stream though, so things like scp and rsync are no good, and I need to know what command is being run on the remote side so that I can restrict it in the ssh public key on the remote side.
thanks
Tom Brown wrote:
> I have a process that creates 'some data' and outputs this to standard
> out and I want to shift this data over ssh to a remote box without ever
> writing anything locally. [...] It needs to be a data stream though, so
> things like scp and rsync are no good. [...]
Why do you need any other process involved to work with a data stream? If you want to collect it to a remote file, you can pipe it: | ssh remotehost 'cat > path_to_file'. Just be sure to quote the redirection so it happens on the remote side.
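For example, a minimal sketch (the generator command and the remote path are hypothetical placeholders):

$ some-data-generator | ssh remotehost 'cat > /tmp/some-data.out'

Without the single quotes, the remote side would run a bare cat and your local shell would redirect ssh's own output into a local file instead, which defeats the point.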
On 08/12/2010 05:33 AM, Les Mikesell wrote:
> If you want to collect it to a remote file, you can | ssh remotehost
> 'cat > path_to_file'. [...]
At a guess it's the compression he is after. Over a slow link it could make a substantial difference.
At Thu, 12 Aug 2010 06:05:25 -0700 CentOS mailing list centos@centos.org wrote:
> At a guess it's the compression he is after. Over a slow link it could
> make a substantial difference.
Just add gzip (or bzip2) to the pipeline:
program | bzip2 | ssh -q remote-host 'bunzip2 | remote-program'
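A concrete sketch of the same idea, collecting to a remote file rather than feeding a remote program (the generator command and target path are placeholders):

$ some-data-generator | gzip -c | ssh -q 192.168.122.2 'gunzip -c > /tmp/some-data.out'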
Robert Heller wrote, On 08/12/2010 09:18 AM:
> Just add gzip (or bzip2) to the pipeline:
> program | bzip2 | ssh -q remote-host 'bunzip2 | remote-program'
or even easier (though maybe not as good compression as bzip2 would get, if dealing with text only):
program | ssh -C -q remote-host 'remote-program'
On Thu, Aug 12, 2010 at 09:18:31AM -0400, Robert Heller wrote:
> program | bzip2 | ssh -q remote-host 'bunzip2 | remote-program'
If you're gonna put a compression tool in the pipeline then I recommend you ensure ssh's own on-the-wire compression is turned off, 'cos otherwise you're potentially wasting CPU cycles:
ssh -q -o 'Compression no' remote-host
Yes; this may be the default value but it's always a good thing to ensure sane values are used in cases like this :-)
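Putting the two suggestions together, a minimal sketch (the generator command and output path are hypothetical placeholders):

$ some-data-generator | bzip2 | ssh -q -o 'Compression no' 192.168.122.2 'bunzip2 > /tmp/some-data.out'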
At Thu, 12 Aug 2010 13:11:21 +0100 CentOS mailing list centos@centos.org wrote:
> I have a process that creates 'some data' and outputs this to standard
> out and I want to shift this data over ssh to a remote box without ever
> writing anything locally. [...]
Why not just do:
`the thing that generates standard out here` | ssh -q 192.168.122.2 dd of=something
e.g.
find . | ssh -q 192.168.122.2 dd of=find.out
You don't need tar for anything.
> `the thing that generates standard out here` | ssh -q 192.168.122.2 dd of=something
> [...] You don't need tar for anything.
alas the thing that generates the output creates 5 or 6 separate streams in sequence that generate 5 or 6 log files, but I don't know in advance the names of these logs.
On 8/12/2010 8:46 AM, Tom Brown wrote:
> alas the thing that generates the output creates 5 or 6 separate
> streams in sequence that generate 5 or 6 log files, but I don't know
> in advance the names of these logs.
You'll have to explain how the streams get sorted out locally before anyone can help you do it remotely. Maybe the program itself could pipe each stream through a separate ssh instance. Or, if you can wait for the output to complete, collect all the files in an otherwise empty directory and rsync the whole thing to the remote, as in the sketch below.
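A sketch of that second approach, assuming 'thing' writes its log files into the current directory (the directory names here are placeholders):

$ mkdir /tmp/thing-logs && cd /tmp/thing-logs && thing
$ rsync -a /tmp/thing-logs/ 192.168.122.2:/tmp/thing-logs/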
At Thu, 12 Aug 2010 14:46:49 +0100 CentOS mailing list centos@centos.org wrote:
> alas the thing that generates the output creates 5 or 6 separate
> streams in sequence that generate 5 or 6 log files, but I don't know
> in advance the names of these logs.
So the thing (program) does not write to stdout itself? Does it do '5 or 6' fopen("<random>.log","w")s? Well, then you need to do:
(mkdir temp && cd temp && thing && tar czvf - . | ssh -q 192.168.122.2 tar xzvf -) && rm -rf temp
And yes, the log files will be written to the local disk before being transferred. There is not really any way around this, unless you were willing/able to rewrite 'thing' to work differently.
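A slightly safer variant of the same sketch, using a throwaway directory from mktemp instead of a fixed name ('thing' remains the placeholder program):

$ T=$(mktemp -d) && (cd "$T" && thing && tar czf - . | ssh -q 192.168.122.2 'tar xzf -'); rm -rf "$T"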
Rsync works fine for this, keeping group and user ownership.
Regards
2010/8/12, Robert Heller heller@deepsoft.com:
> So the thing (program) does not write to stdout itself? [...] And yes,
> the log files will be written to the local disk before being
> transferred. [...]
On 08/12/2010 06:46 AM, Tom Brown wrote:
> alas the thing that generates the output creates 5 or 6 separate
> streams in sequence that generate 5 or 6 log files but I don't know
> in advance the names of these logs.
If "the thing" is generating log files, then it's not using "standard out". Perhaps you are using that term incorrectly.
On a unix-like system, each process has three standard file descriptors when it starts: standard input (stdin), standard output (stdout), and standard error (stderr). These three are inherited from the parent process, which means that your shell normally sets them up for the commands that you run. If you do not redirect any of those three, they will normally be connected to the controlling terminal (/dev/tty is the controlling terminal for any process). You can use the shell's redirection operators to connect those file descriptors to files rather than to the terminal, or pipe them to another command.
If your application is writing its data to a file without your specific redirection, then it's not using stdout, and you cannot pipe it to another system without writing the data to disk.
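For illustration, the standard shell redirections for those three descriptors ('thing' stands in for the program):

$ thing > out.log        # stdout (fd 1) to a file
$ thing 2> err.log       # stderr (fd 2) to a file
$ thing < in.dat         # stdin (fd 0) from a file
$ thing 2>&1 | ssh remotehost 'cat > /tmp/out'   # both stdout and stderr into the pipe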