Is there a way to speed up rsync transfers? I tested the bandwidth with iperf (recommended to me in an earlier post, and it worked well); it's as advertised by my ISP, around 740KB/sec. When I manually run my rsync script with the --progress switch, the transfers are around 100KB/sec. I googled this and the only thing I found had to do with the TCP window, which I understand to be the limiting factor. But if this is true, how can I ftp stuff at 300KB/sec? (Someone please enlighten me.)
I'm backing up jpg files, and some days they add 5+GB of images. My goal was to back up the images nightly, but at the 100KB/sec rate that's not possible. Dan
Dan Carl wrote:
Is there a way to speed up rsync transfers? I tested the bandwidth with iperf (recommended to me in an earlier post, and it worked well); it's as advertised by my ISP, around 740KB/sec. When I manually run my rsync script with the --progress switch, the transfers are around 100KB/sec. I googled this and the only thing I found had to do with the TCP window, which I understand to be the limiting factor. But if this is true, how can I ftp stuff at 300KB/sec? (Someone please enlighten me.)
I'm backing up jpg files, and some days they add 5+GB of images. My goal was to back up the images nightly, but at the 100KB/sec rate that's not possible.
What options are you using with rsync? Stay away from the -z option if you're copying jpegs; that'll slow things down quite a bit. I have no problem using rsync to copy at 20+ MBytes/second with the default TCP window size on gigabit networks (using rsync over SSH). And I can achieve ~700 KBytes/s on a 10Mbit link over a VPN between two sites (about 40 miles apart), again with default TCP settings across the board.
Also note that some ISPs, such as Comcast, have features built into their services that provide an initial "power boost" for a few seconds to speed up file transfers, then quickly throttle down to "normal" levels. In that case your bandwidth test with the ISP may not have lasted long enough to show your true, sustained available bandwidth.
The options I use:
rsync -ave ssh (local file) remote_server:(remote file)
or rsync -ave ssh --progress (rest of command)
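For illustration, a filled-in version of that command (the host and paths here are hypothetical; note there is no -z, since the jpegs are already compressed):
rsync -av -e ssh --progress /data/images/ backuphost:/backup/images/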
nate
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org]On Behalf Of nate Sent: Friday, February 22, 2008 11:06 AM To: centos@centos.org Subject: Re: [CentOS] How to speed up Rsync transfers
Dan Carl wrote:
Is there a way to speed up rsync transfers? I tested the bandwidth with iperf (recommended to me in an earlier post, and it worked well); it's as advertised by my ISP, around 740KB/sec. When I manually run my rsync script with the --progress switch, the transfers are around 100KB/sec. I googled this and the only thing I found had to do with the TCP window, which I understand to be the limiting factor. But if this is true, how can I ftp stuff at 300KB/sec? (Someone please enlighten me.)
I'm backing up jpg files, and some days they add 5+GB of images. My goal was to back up the images nightly, but at the 100KB/sec rate that's not possible.
What options are you using with rsync? Stay away from the -z option if you're copying jpegs; that'll slow things down quite a bit. I have no problem using rsync to copy at 20+ MBytes/second with the default TCP window size on gigabit networks (using rsync over SSH). And I can achieve ~700 KBytes/s on a 10Mbit link over a VPN between two sites (about 40 miles apart), again with default TCP settings across the board.
Also note that some ISPs, such as Comcast, have features built into their services that provide an initial "power boost" for a few seconds to speed up file transfers, then quickly throttle down to "normal" levels. In that case your bandwidth test with the ISP may not have lasted long enough to show your true, sustained available bandwidth.
The options I use:
rsync -ave ssh (local file) remote_server:(remote file)
or rsync -ave ssh --progress (rest of command)
nate
I don't do anything that special:
rsync -ae 'ssh -p 2112' --progress $SOURCEPATH $DESTUSER@$DESTHOST:$DESTPATH
or
rsync -a --progress --rsh='ssh -p 2112' $SOURCEPATH $DESTUSER@$DESTHOST:$DESTPATH
No real speed difference either way.
I just ran a test from one local box to another on a 100Mbit link, and the fastest transfer I got was 10MBytes/second, but most were between 1MByte/second and 4MBytes/second.
Dan
On Fri, Feb 22, 2008 at 1:43 PM, Dan Carl danc@bluestarshows.com wrote:
I just ran a test from one local box to another on a 100Mbit link, and the fastest transfer I got was 10MBytes/second, but most were between 1MByte/second and 4MBytes/second.
I could be wrong (it happens), but the best performance you are likely to get from a 100 Mbit/s link would be right around 10 MByte/s.
I think the rest of the time would have to be attributed to memory and file consumption from the way rsync works, as discussed previously on this list.
Just my $0.02.
mhr
On Fri, 2008-02-22 at 16:36 -0800, MHR wrote:
On Fri, Feb 22, 2008 at 1:43 PM, Dan Carl danc@bluestarshows.com wrote:
I just ran a test from one local box to another on a 100Mbit link, and the fastest transfer I got was 10MBytes/second, but most were between 1MByte/second and 4MBytes/second.
I could be wrong (it happens), but the best performance you are likely to get from a 100 Mbit/s link would be right around 10 MByte/s.
I think the rest of the time would have to be attributed to memory and file consumption from the way rsync works, as discussed previously on this list.
Just my $0.02.
I agree. I just calc'd it out at roughly *maximum* 12.5 MB/s, assuming standard packet sizes. Plus, IIRC, the OP said the link was 750KB/s. I don't recall if the OP was trying to rsync over that or over an internal 10 or 100 Mbit LAN. If over a LAN, I can't figure out what the ISP's 750KB/s link had to do with anything.
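For the record, the arithmetic behind that ceiling (the overhead percentage is a rough estimate for standard 1500-byte frames):
100 Mbit/s / 8 bits per byte = 12.5 MByte/s raw
12.5 MByte/s - roughly 5% Ethernet/IP/TCP header overhead = about 11.8 MByte/s of payload, best case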
mhr
<snip sig stuff>
On Fri, 22 Feb 2008 20:01:25 -0500 "William L. Maltby" CentOS4Bill@triad.rr.com wrote:
I just ran a test from one local box to another on a 100Mbit link, and the fastest transfer I got was 10MBytes/second, but most were between 1MByte/second and 4MBytes/second.
I could be wrong (it happens), but the best performance you are likely to get from a 100 Mbit/s link would be right around 10 MByte/s.
rsync's purpose is to speed up file transfers by NOT re-transferring files that already exist on both sides. From how you describe your problem, it doesn't look like there are identical files on both computers, so rsync will not speed anything up.
In that case, you will get higher throughput with scp.
man scp
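A minimal sketch of the scp alternative (host and paths are hypothetical); note that scp re-copies everything it is pointed at, so it only wins when the files are all new:
scp -r /data/images backuphost:/backup/images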
On 23/02/2008, centos@911networks.com centos@911networks.com wrote:
On Fri, 22 Feb 2008 20:01:25 -0500 "William L. Maltby" CentOS4Bill@triad.rr.com wrote:
I just ran a test from one local box to another on a 100Mbit link, and the fastest transfer I got was 10MBytes/second, but most were between 1MByte/second and 4MBytes/second.
I could be wrong (it happens), but the best performance you are likely to get from a 100 Mbit/s link would be right around 10 MByte/s.
That is where the higher-end network cards show why they are higher-end ;-) On an rtl8139 I could get at most 8 Mbps, but on the same network with a 3Com card I could get 50 Mbps!
Not much of a test matrix, but an eye-opener all the same.
--- MHR mhullrich@gmail.com wrote:
On Fri, Feb 22, 2008 at 1:43 PM, Dan Carl danc@bluestarshows.com wrote:
I just ran a test from one local box to another on a 100Mbit link, and the fastest transfer I got was 10MBytes/second, but most were between 1MByte/second and 4MBytes/second.
I could be wrong (it happens), but the best
^^^^^^^^^^^^^^^^^^^^^^=====No Comment :-D
performance you are likely to get from a 100 Mbit/s link would be right around 10 MByte/s.
I think the rest of the time would have to be attributed to memory and file consumption from the way rsync works, as discussed previously on this list.
Just my $0.02.
mhr
On Sat, Feb 23, 2008 at 2:05 PM, Steven Vishoot sir_funzone@yahoo.com wrote:
--- MHR mhullrich@gmail.com wrote:
On Fri, Feb 22, 2008 at 1:43 PM, Dan Carl danc@bluestarshows.com wrote:
I just ran a test from one local box to another on a 100Mbit link, and the fastest transfer I got was 10MBytes/second, but most were between 1MByte/second and 4MBytes/second.
I could be wrong (it happens), but the best
^^^^^^^^^^^^^^^^^^^^^^=====No Comment :-D
Really?! So, this isn't a comment.
Do you jump on everyone, or am I your special case here?
mhr
--- MHR mhullrich@gmail.com wrote:
On Sat, Feb 23, 2008 at 2:05 PM, Steven Vishoot sir_funzone@yahoo.com wrote:
--- MHR mhullrich@gmail.com wrote:
On Fri, Feb 22, 2008 at 1:43 PM, Dan Carl danc@bluestarshows.com wrote:
I just ran a test from one local box to another on a 100Mbit link, and the fastest transfer I got was 10MBytes/second, but most were between 1MByte/second and 4MBytes/second.
I could be wrong (it happens), but the best
^^^^^^^^^^^^^^^^^^^^^^=====No Comment :-D
Really?! So, this isn't a comment.
Do you jump on everyone, or am I your special case here?
mhr
Be proud you're a special case.
On 2/24/08, Steven Vishoot sir_funzone@yahoo.com wrote:
Be proud you're a special case.
Yes, I'm SO glad that this adds to the substantive content of the list.
If I'm right, don't say anything.
If I'm wrong, jump all over me (and make sure you say why).
If I make a comment (like, "I could be wrong"), well, now that's important.
Anyone else want to take a swipe at me?
Oh, yeah, and maybe you don't need to wonder any more why newbies on the list are afraid to post questions when they need help - look at me, the poster boy for what happens when you do.
Thanks a lot.
mhr
On 2/24/08, Dan Dansereau ddansereau@hydropoint.com wrote:
Let's grow folks - use the list for what it is made for!
Yes, please. Thank you.
mhr
Dan Carl wrote:
I just ran a test from one local box to another on a 100Mbit link, and the fastest transfer I got was 10MBytes/second, but most were between 1MByte/second and 4MBytes/second.
Pardon the stupid suggestions, but:
- Did the test you ran with iperf include a bi-directional test? Are you sure you weren't testing your inbound speed at ~700KByte/s rather than your outbound speed? With most connections, inbound speed is several times faster than outbound speed.
- Do you know what sort of bandwidth you're supposed to have from your ISP?
- Are you aware of any sort of filtering or throttling that your ISP does? Do you have a business-class connection, or are you on one of the big generic consumer ISPs (much more likely to do strange things with your packets)?
My guess is that the test you did only tested your inbound bandwidth and not your outbound. 100KByte/second is fairly typical outbound connectivity for consumer broadband links. Of course, if you're lucky enough to have a really good connection such as metro Ethernet or a direct fiber connection, then that's another story and you should have plenty of upstream bandwidth.
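For what it's worth, a two-way iperf test could look like this (hostname is hypothetical; with classic iperf, -r runs the transfer in each direction, one after the other):
On the destination box: iperf -s
On the source box: iperf -c dest.example.com -r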
nate
----- Original Message ----- From: "nate" centos@linuxpowered.net To: centos@centos.org Sent: Friday, February 22, 2008 10:33 PM Subject: RE: [CentOS] How to speed up Rsync transfers
Dan Carl wrote:
I just ran a test from one local box to another on a 100Mbit link, and the fastest transfer I got was 10MBytes/second, but most were between 1MByte/second and 4MBytes/second.
Pardon the stupid suggestions, but:
- Did the test you ran with iperf include a bi-directional test? Are you sure you weren't testing your inbound speed at ~700KByte/s rather than your outbound speed? With most connections, inbound speed is several times faster than outbound speed.
Not a stupid question - I'm the stupid one. I only tested it one way; I guess I just assumed iperf was testing both up and down. I didn't have remote access to the source server's firewall, so I had iperf listening on the destination server. Monday I'll run the test the other way.
- Do you know what sort of bandwidth you're supposed to have from your ISP?
Source server: business DSL, 1.5M down / 878K up. Destination server: T1 colo at a large ISP.
nate
Sounds like I'll be stuck with the transfer rate I'm getting. Thanks, Nate. Dan
On Sat, 2008-02-23 at 10:38 -0600, Dan Carl wrote:
----- Original Message ----- From: "nate" centos@linuxpowered.net To: centos@centos.org Sent: Friday, February 22, 2008 10:33 PM Subject: RE: [CentOS] How to speed up Rsync transfers
Dan Carl wrote:
<snip>
- Do you know what sort of bandwidth you're supposed to have from your ISP?
Source server: business DSL, 1.5M down / 878K up. Destination server: T1 colo at a large ISP.
nate
Sounds like I'll be stuck with the transfer rate I'm getting.
In that case, it sounds like you need a local staging copy that can be made quickly before the upload sync starts. Then the upload can run 24/7. How you might want to deal with new updates that happen before the previous upload finishes is going to be an interesting problem.
Thanks, Nate. Dan
<snip sig stuff>
William L. Maltby wrote:
- Do you know what sort of bandwidth you're supposed to have from your ISP?
Source server: business DSL, 1.5M down / 878K up. Destination server: T1 colo at a large ISP.
nate
Sounds like I'll be stuck with the transfer rate I'm getting.
In that case, it sounds like you need a local staging copy that can be made quickly before the upload sync starts. Then the upload can run 24/7. How you might want to deal with new updates that happen before the previous upload finishes is going to be an interesting problem.
Since disk space is relatively cheap these days, it might work out to keep a local copy - even on the same machine if necessary - where you can make a quick snapshot, then rsync from that for the offsite copy. That will also be handy if you need a restore, and you can reserve the offsite copy for disaster recovery.
If it is simply impossible to keep up with the remote transfer, you can always do your local snapshot to an external disk and hand-carry it elsewhere.
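As a rough sketch of that staging idea (paths are made up, and it assumes the live tree and the staging copy sit on the same filesystem so cp -al can hardlink instead of duplicating data):
rm -rf /backup/staging
cp -al /data/images /backup/staging
rsync -ave ssh /backup/staging/ remote_server:/backup/images/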
----- Original Message ----- From: "William L. Maltby" CentOS4Bill@triad.rr.com To: "CentOS General List" centos@centos.org Sent: Saturday, February 23, 2008 12:32 PM Subject: Re: [CentOS] How to speed up Rsync transfers
On Sat, 2008-02-23 at 10:38 -0600, Dan Carl wrote:
----- Original Message ----- From: "nate" centos@linuxpowered.net To: centos@centos.org Sent: Friday, February 22, 2008 10:33 PM Subject: RE: [CentOS] How to speed up Rsync transfers
Dan Carl wrote:
<snip>
- Do you know what sort of bandwidth you're supposed to have from your ISP?
Source server: business DSL, 1.5M down / 878K up. Destination server: T1 colo at a large ISP.
nate
Sounds like I'll be stuck with the transfer rate I'm getting.
In that case, it sounds like you need a local staging copy that can be made quickly before the upload sync starts. Then the upload can run 24/7. How you might want to deal with new updates that happen before the previous upload finishes is going to be an interesting problem.
This is exactly the situation I'm trying to avoid. Right now it's less than 2GB of new/edited images a day, so the rsync backup finishes before the script runs again. But I can't take it for granted that this will always be the case. Any ideas would be appreciated. What do you mean by local staging? I'd like the backup to run from 7pm to 7am and then, if it didn't finish, resume again the next night. That way, when nothing was added/edited on the weekends, the backup can catch up. Dan
--- Dan Carl danc@bluestarshows.com wrote:
----- Original Message ----- From: "William L. Maltby" CentOS4Bill@triad.rr.com To: "CentOS General List" centos@centos.org Sent: Saturday, February 23, 2008 12:32 PM Subject: Re: [CentOS] How to speed up Rsync transfers
On Sat, 2008-02-23 at 10:38 -0600, Dan Carl wrote:
----- Original Message ----- From: "nate" centos@linuxpowered.net To: centos@centos.org Sent: Friday, February 22, 2008 10:33 PM Subject: RE: [CentOS] How to speed up Rsync transfers
Dan Carl wrote:
<snip>
- Do you know what sort of bandwidth you're supposed to have from your ISP?
Source server: business DSL, 1.5M down / 878K up. Destination server: T1 colo at a large ISP.
nate
Sounds like I'll be stuck with the transfer rate I'm getting.
In that case, it sounds like you need a local staging copy that can be made quickly before the upload sync starts. Then the upload can run 24/7. How you might want to deal with new updates that happen before the previous upload finishes is going to be an interesting problem.
This is exactly the situation I'm trying to avoid. Right now it's less than 2GB of new/edited images a day, so the rsync backup finishes before the script runs again. But I can't take it for granted that this will always be the case. Any ideas would be appreciated. What do you mean by local staging? I'd like the backup to run from 7pm to 7am and then, if it didn't finish, resume again the next night. That way, when nothing was added/edited on the weekends, the backup can catch up. Dan
Maybe an idea, but couldn't you do a cron job at 07:00 that checks whether an rsync is still running and, if it is, kills it? Whatever was not finished will be picked up the next time.
my .000000000002 cents
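That could be as simple as a pair of cron entries like these (the times and script name are hypothetical; pkill -f matches against the full command line):
# /etc/crontab
0 19 * * * root /usr/local/bin/nightly-rsync.sh
0 7 * * * root pkill -f nightly-rsync.sh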
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Dan Carl Sent: Sunday, February 24, 2008 18:54 To: CentOS mailing list Subject: Re: [CentOS] How to speed up Rsync transfers
This is exactly the situation I'm trying to avoid. Right now it's less than 2GB of new/edited images a day, so the rsync backup finishes before the script runs again. But I can't take it for granted that this will always be the case. Any ideas would be appreciated. What do you mean by local staging?
Well, we use lock files.
I'd like the backup to run from 7pm to 7am and then, if it didn't finish, resume again the next night.
We run it every 20 hours after completion.
That way, when nothing was added/edited on the weekends, the backup can catch up.
From my atq:
#!/bin/sh
# atrun uid=0 gid=0
# mail root 0
umask 22
interval=20\ hours; export interval
limit=--bwlimit=64; export limit
cd /var/spool/mirrors || {
    echo 'Execution directory inaccessible' >&2
    exit 1
}
./centos.sh
From centos.sh
[root@host67 ~]# cd /var/spool/mirrors
[root@host67 mirrors]# cat centos.sh
lockfile=$0.lck
unset interactive
unset verbose
interval="20 hours"
verbose="-q"
while [ "$1" != "" ] do case $1 in
--bwlimit=*) limit="--bwlimit=64" ;; --interactive) interactive=true ;; -t) interval="$2" shift ;; --prune) export prune="--delete-excluded" ;; -v) verbose="-v -v --progress" interactive=true ;; *) echo "Usage: $0" exit 1 esac
shift
done
if ( set -o noclobber; echo "$$" > "$lockfile" ) 2> /dev/null
then
    # we hold the lock; make sure it is removed if we are interrupted
    trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT

    # removing the '#' from the echo line turns this into a dry run
    # echo \
    rsync -azH \
        --delete \
        rsync://mirrors.kernel.org/centos/ \
        --exclude-from=centos.excludes \
        $verbose \
        $limit \
        $prune \
        centos/

    rm -f "$lockfile"
    trap - INT TERM EXIT

    if [ "$interactive" != "true" ]
    then
        export limit
        export interval
        # re-queue this script to run again 20 hours from now
        echo $0 | at now + 20 hours > /dev/null 2> /dev/null
    fi
fi
On Sun, 2008-02-24 at 17:53 -0600, Dan Carl wrote:
----- Original Message ----- From: "William L. Maltby" CentOS4Bill@triad.rr.com
<snip list header and now irrelevant stuff>
In that case, it sounds like you need a local staging copy that can be made quickly before the upload sync starts. Then the upload can run 24/7. How you might want to deal with new updates that happen before the previous upload finishes is going to be an interesting problem.
This is exactly the situation I'm trying to avoid. Right now it's less than 2GB of new/edited images a day, so the rsync backup finishes before the script runs again.
General strategy: 1) maximize local operations, using out-of-band available resources, to minimize intrusion into the time-constrained resource window, and 2) minimize in-band demands.
But I can't take it for granted that this will always be the case. Any ideas would be appreciated. What do you mean by local staging?
1) E.g., if local HD space is available, do a local rsync from the live copy to a backup copy. This can be done even during normal hours while users are making files (at low priority - see "man nice"), *prior* to the communications window, *if* something like an LVM snapshot is available. That way you can be sure that activity starting in the live environment after the local copy begins doesn't get included, although partials started prior to the copy can still get in there - but they will be "corrected" on the next cycle.
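A rough sketch of that snapshot-then-local-rsync step, assuming LVM2 (the volume group, volume names, snapshot size, and paths are all made up):
lvcreate -L 5G -s -n images_snap /dev/vg0/images
mount -o ro /dev/vg0/images_snap /mnt/snap
nice -n 19 rsync -a /mnt/snap/ /backup/images/
umount /mnt/snap
lvremove -f /dev/vg0/images_snap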
In this scenario, it may be easier to use one of the "canned" utilities like Amanda or BackupPC that have been extensively discussed on this list. I've never used these things, so I don't really know if they are appropriate in this scheme. However, there's nothing wrong with hand-crafted stuff if you have the inclination and need.
Keep in mind that over time the local rsync will tend to take longer as directory numbers and sizes grow, unless there is also a significant amount of file deletion by the users going on. So you may want to schedule several low-priority snapshot/rsync runs throughout the workday.
Don't be afraid to seek/request some kind of raid/NAS/SAN resource if the data is mission-critical, growing constantly and volatile. It may not be needed now, but look down the road so you don't get into a constant cycle of scrambling to keep up with needs.
Ditto for additional bandwidth to the remote. It should be cheaper in the long run if resource demand is certain to grow significantly.
I'd like the backup to run from 7pm to 7am and then, if it didn't finish, resume again the next night.
2) You mention images, so I'm not sure much can be gained by compression, because many types of image files are already compressed to a great degree. But if there is a large number that can be (further) compressed for significant gain, compress them *prior* to the start of the communication window. You may need to do some testing to tell which file types are suited for further compression.
The downside to this is that you no longer have an rsync-amenable image on the local backup side. Additional scripting would be needed, and hand-crafted copy operations instead of rsync. However, this is easily overcome using a time-stamp file in conjunction with find's "time" parameters to select only things which have been modified since the previous local copy started.
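For illustration, the time-stamp trick might look like this (paths are hypothetical, cp --parents is GNU-specific, and the stamp file has to exist before the first run):
touch /backup/stamp.new   # record the start time first, so files modified mid-copy are caught next run
cd /data/images && find . -type f -newer /backup/stamp -print0 | xargs -0 -r -I{} cp --parents {} /backup/staging
mv /backup/stamp.new /backup/stamp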
Another downside is that to restore from either the local or remote copies, decompression would be needed. This is quite fast, though. But, again, some additional hand-crafting would be needed - and thorough testing, too.
That way, when nothing was added/edited on the weekends, the backup can catch up.
In conjunction with the "lock" files mentioned in another reply, you may be able to gain something by segmenting the local and remote rsyncs. This allows 1) concurrent *local* compression and rsync (if CPU/memory resources are sufficient to avoid unduly slowing the users' activities - again, "man nice" to reduce the effects on users) and 2) easier management of the remote rsync start/stop on directory boundaries as the window is entered/exited. This may not be needed at all, or may be of limited benefit.
Lastly, see if it's possible to run the rsync during normal hours. If your site has upload of 750KB/sec and during 90% of the normal workday only a small percentage is consumed, take advantage by doing some of the rsync (maybe in small chunks) during those hours at low priority, throttled appropriately. Presuming that most of your activity during the normal workday is download, not upload, and knowing that most of the rsync activity will be upload, not download, there is an opportunity there.
Testing this scheme before opting for it would be advised.
Finally ...
"Some assembly required". 8-0
Dan
<snip sig stuff>
HTH
On Sun, 2008-02-24 at 17:53 -0600, Dan Carl wrote:
----- Original Message ----- From: "William L. Maltby" CentOS4Bill@triad.rr.com
<snip list header and now irrelevant stuff>
Don't be afraid to seek/request some kind of raid/NAS/SAN resource if the data is mission-critical, growing constantly and volatile. It may not be needed now, but look down the road so you don't get into a constant cycle of scrambling to keep up with needs.
I have LVM/RAID already. I just want an automated offsite backup to be on the safe side. I've learned the hard way that RAID is not a substitute for a backup.
<snip>
In conjunction with the "lock" files mentioned in another reply, you may be able to gain something by segmenting the local and remote rsyncs. This allows 1) concurrent *local* compression and rsync (if CPU/memory resources are sufficient to avoid unduly slowing the users' activities - again, "man nice" to reduce the effects on users) and 2) easier management of the remote rsync start/stop on directory boundaries as the window is entered/exited. This may not be needed at all, or may be of limited benefit.
Lock files sound like the way I'll go. I'm going to stick with the hand-crafted stuff.
Lastly, see if it's possible to run the rsync during normal hours. If your site has upload of 750KB/sec and during 90% of the normal workday only a small percentage is consumed, take advantage by doing some of the rsync (maybe in small chunks) during those hours at low priority, throttled appropriately. Presuming that most of your activity during the normal workday is download, not upload, and knowing that most of the rsync activity will be upload, not download, there is an opportunity there.
Something I'll look at down the road.
-- Bill
Thanks, Bill. Lots of good information, and thanks to all. Dan
Are you using an ssh tunnel for your transfer? If so, have a look at the HPN-SSH patches to dramatically improve OpenSSH performance:
http://www.psc.edu/networking/projects/hpn-ssh/
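Short of patching, a lighter cipher can also help when the ssh encryption itself is the bottleneck; for example (arcfour was a stock cipher in the OpenSSH of that era, though it trades away cryptographic strength, and the host and paths here are hypothetical):
rsync -av -e 'ssh -c arcfour' /data/images/ remote_server:/backup/images/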
Ciao, Ste.
Dan Carl wrote:
Is there a way to speed up rsync transfers? I tested the bandwidth with iperf (recommended to me in an earlier post, and it worked well); it's as advertised by my ISP, around 740KB/sec. When I manually run my rsync script with the --progress switch, the transfers are around 100KB/sec. I googled this and the only thing I found had to do with the TCP window, which I understand to be the limiting factor. But if this is true, how can I ftp stuff at 300KB/sec? (Someone please enlighten me.)
I'm backing up jpg files, and some days they add 5+GB of images. My goal was to back up the images nightly, but at the 100KB/sec rate that's not possible. Dan