[CentOS] Best practices for copying lots of files machine-to-machine

Thu May 18 07:57:23 UTC 2017
Julius Tchanque <tnjulius at gmail.com>

On 17 May 2017 at 22:27, <m.roth at 5-cent.us> wrote:

> Vanhorn, Mike wrote:
> > On 5/17/17, 12:03 PM, "CentOS on behalf of ken" <
> centos-bounces at centos.org
> > on behalf of gebser at mousecar.com> wrote:
> >
> >>An entire filesystem (~180g) needs to be copied from one local linux
> >>machine to another.  Since both systems are on the same local subnet,
> >>there's no need for encryption.
> >>
> >>I've done this sort of thing before a few times in the past in different
> >>ways, but wanted to get input from others on what's worked best for
> >> them.
> >
> > If shutting the machines down is feasible, I’d put the source hard drive
> > into the destination machine and use rsync to copy it from one drive to
> > the other (rather than using rsync to copy from one machine to the other
> > over the network).
> Why? I just rsync'd 159G in less than one workday from one server to
> another. Admittedly, we allegedly have a 1G network, but....
>        mark
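
(Aside, for scale: a rough back-of-envelope for that 159G-over-gigabit figure. This is an illustrative calculation, assuming the theoretical ~125 MB/s line rate of gigabit Ethernet and ignoring protocol, disk, and per-file overhead, which is why real transfers of many small files take far longer:)

```shell
# Ideal time to move 159 GB over gigabit Ethernet, ignoring all overhead.
bytes=$((159 * 1024 * 1024 * 1024))
rate=$((125 * 1000 * 1000))    # bytes/second, theoretical 1 Gbit/s line rate
secs=$((bytes / rate))
echo "ideal transfer time: ~$((secs / 60)) minutes"
# → ideal transfer time: ~22 minutes
```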
Hi,
you can parallelize rsync with xargs's -P (max-procs) option (man xargs):

rsync -a -f"+ */" -f"- *" source/ server:/destination/   # sync the directory structure only
cd source/; find . -type f | xargs -n1 -P0 -I% rsync -az % \
    server:/destination/%   # -P0 lets xargs decide the number of procs