[CentOS] Fail Transfer of Large Files

Nico Kadel-Garcia nkadel at gmail.com
Sun Nov 21 11:47:04 UTC 2010


On Sat, Nov 20, 2010 at 10:28 PM, Michael D. Berger
<m_d_berger_1900 at yahoo.com> wrote:
> On Sat, 20 Nov 2010 18:17:23 -0600, Jay Leafey wrote:
>
>> Les Mikesell wrote:
> [...]
>> This does ring a bell, but the circumstances were a bit different.  In
>> our case we were transferring large files between "home" and a remote
>> site.  SFTP/SCP transfers were stalling part-way through in an
>> unpredictable manner.  It turned out to be a bug in the selective
>> acknowledgment functionality in the TCP stack. Short story, adding the
>> following line to /etc/sysctl.conf fixed the issue:
>>
>>> net.ipv4.tcp_sack = 0
>>
>> Of course, you can set it on-the-fly using the sysctl command:
>>
>>> sysctl -w net.ipv4.tcp_sack=0
>>
>> It helped in our case, no way of telling if it will help you.  As usual,
>> your mileage may vary.
>
> Googling around, I get the impression that disabling SACK might
> lead to other problems.  Any thoughts on this?
>
> Thanks,
> Mike.

From decades of experience in many environments, I can tell you that
reliably transferring large files with protocols that require an
uninterrupted transfer is awkward. The larger the file, the greater the
chance that an interruption at any point between the repository and the
client will break the transfer, and with a lot of ISPs over-subscribing
their available bandwidth, such large transfers are, by their nature,
unreliable.
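
Even so, a transfer tool that can resume where it left off helps a lot.
As a rough sketch (the host and file names are only placeholders):

  # Keep the partial file on the receiving side if the connection drops,
  # so a re-run continues instead of resending everything over the wire.
  rsync --partial --progress -e ssh bigfile.iso user@remote.example.com:/data/

  # Re-running the same rsync after an interruption picks up from the
  # partial copy rather than starting from scratch.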

Consider fragmenting the large file. BitTorrent transfers do this
automatically; the old "shar" and "split" tools also work well, and
tools like "rsync" and the lftp "mirror" command are very good at
mirroring directories of such split-up contents quite efficiently.
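
For example, something along these lines (chunk size, paths, and the
remote host are just placeholders to adjust to taste):

  # Split the file into 100 MB pieces and record a checksum per piece.
  mkdir chunks
  split -b 100M bigfile.iso chunks/bigfile.iso.part-
  ( cd chunks && sha256sum bigfile.iso.part-* > bigfile.iso.sha256 )

  # Mirror the directory; a re-run after an interruption only transfers
  # the pieces that are missing or incomplete.
  rsync -av --partial chunks/ user@remote.example.com:/data/chunks/

  # On the receiving end (inside the chunks directory), verify and
  # reassemble; split names the pieces so they sort back into order.
  sha256sum -c bigfile.iso.sha256
  cat bigfile.iso.part-* > bigfile.iso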


