[CentOS] Fail Transfer of Large Files

Nico Kadel-Garcia nkadel at gmail.com
Sun Nov 21 20:28:04 UTC 2010

On Sun, Nov 21, 2010 at 11:51 AM, Les Mikesell <lesmikesell at gmail.com> wrote:
> On 11/21/10 9:02 AM, Michael D. Berger wrote:
>> On Sun, 21 Nov 2010 06:47:04 -0500, Nico Kadel-Garcia wrote:
>>> On Sat, Nov 20, 2010 at 10:28 PM, Michael D. Berger
>>> <m_d_berger_1900 at yahoo.com>  wrote:
>> [...]
>>> From decades of experience in many environments, I can tell you that
>>> reliable transfer of large files with protocols that require
>>> uninterrupted transfer is awkward. The larger the file, the larger the
>>> chance that any interruption at any point between the repository and the
>>> client will break things, and with a lot of ISPs over-subscribing their
>>> available bandwidth, such large transfers are, by their nature,
>>> unreliable.
>>> Consider fragmenting the large file: BitTorrent transfers do this
>>> automatically, the old "shar" and "split" tools also work well, and
>>> tools like "rsync" and the lftp "mirror" utility are very good at
>>> mirroring directories of such split-up contents quite efficiently.
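
(As a concrete sketch of the splitting approach above: the file name
and part size here are just examples, but split, cat, and md5sum are
all standard coreutils commands.)

    # On the server: record a checksum, then cut the ISO into 100 MB pieces
    md5sum centos.iso > centos.iso.md5
    split -b 100M centos.iso centos.iso.part-

    # On the client: fetch the pieces by any means, reassemble, and verify
    cat centos.iso.part-* > centos.iso
    md5sum -c centos.iso.md5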
>> What, then, is the largest file size that you would consider
>> appropriate?
> There's no particular limit with rsync since if you use the -P option it will be
> able to restart a failed transfer with just a little extra time to verify it
> with a block-checksum transfer.  With methods that don't restart, an appropriate
> size would depend on the reliability and speed of the connections since it
> relates to the odds of a connection problem during the time it takes to complete
> the transfer.
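
For example (the mirror host and path here are made up), a resumable
pull looks something like:

    # -P = --partial --progress: keep the partial file if the link drops,
    # then simply re-run the same command to resume where it stopped.
    rsync -avP rsync://mirror.example.com/isos/centos.iso .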

Rsync is wonderful, but it isn't supported by typical web browsers or
by many of the file managers that can speak FTP and HTTP. I like rsync
because it understands symlinks and hardlinks, scripts well, and
allows sophisticated exclude options without getting overwhelmed.
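
Something like the following, say (the host and paths are invented,
but the flags are standard rsync):

    # -a preserves symlinks, permissions, and times; -H adds hardlinks;
    # --exclude skips scratch data; --delete keeps the mirror exact.
    rsync -aH --delete --exclude='*.tmp' --exclude='cache/' \
        /srv/repo/ backup@mirror.example.com:/srv/repo/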
