On Tue, Jan 31, 2012 at 1:50 PM, <m.roth(a)5-cent.us> wrote:
> <snip>
>>> That's odd, try rsync -e 'ssh -v' to get some more details.
>>> Also you will want to use some parameters to rsync (like -av or maybe
>>> even -z for compression, etc.).
>>
>> I'm getting a
>> usage: ssh [bunch of ssh options]
>> as if rsync is passing the wrong command line to ssh. Or maybe the lack of
>> an /etc/ssh/ssh_config in this environment is breaking it, although my
>> own ssh commands and connections seem to work. I haven't done the
>> chroot into the installed mount since I was planning to overwrite it.
>
> Any chance that either your system or the rescued system is blocking it
> because it doesn't recognize your host, or your host doesn't recognize
> the rescued host?
>
> I'd try chrooting and restarting sshd.
No, I'm trying to have rsync make an outbound connection over ssh from
the rescue environment, and I'm getting what looks like an argument
error from ssh. ssh itself works: I can connect to the same target if I
run it directly, and the exact same rsync command lines work from a
normal host. Either rsync isn't setting up the remote command
correctly, or ssh isn't allowing it and is giving a misleading error.
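
For reference, the form I'm using is roughly this (the hostname and
target mount point are just placeholders, not the real ones):

  # run from the rescue shell on the new VM, pulling the running source
  # box's filesystem into the mounted target disk
  rsync -avz -e 'ssh -v' --exclude /proc --exclude /sys --exclude /dev \
      root@sourcehost:/ /mnt/sysimage/

and it's the remote-command handoff from that -e option that seems to
blow up; ssh just prints its usage line instead of connecting.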
The goal here was basically to clone a running machine into a new
VMware image booted into rescue mode. Maybe there's a better way to do
that anyway. The source is using RAID1 drives in a layout that the
VMware converter won't handle. The tar | tar copy seems to be mostly
OK with a little tweaking of fstab, etc. I'm going to give 'ReaR'
(from EPEL) a try too; it is supposed to do most of the grunt work for
you, but you have to intervene manually if you want to change the disk
layout.
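
For the record, the tar | tar copy I mean is roughly along these lines
(the hostname and mount point are illustrative, and you'd repeat it per
filesystem if /boot, /home, etc. are separate mounts):

  # from the rescue shell on the new VM, after partitioning and
  # mounting the target under /mnt/sysimage
  ssh root@sourcehost 'tar -C / --one-file-system -cf - .' \
      | tar -C /mnt/sysimage -xpf -

followed by fixing up /mnt/sysimage/etc/fstab (and grub/initrd if the
device layout changed) before rebooting.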
--
Les Mikesell
lesmikesell(a)gmail.com