[CentOS] Fail Transfer of Large Files

Michael D. Berger m_d_berger_1900 at yahoo.com
Sun Nov 21 15:02:48 UTC 2010

On Sun, 21 Nov 2010 06:47:04 -0500, Nico Kadel-Garcia wrote:

> On Sat, Nov 20, 2010 at 10:28 PM, Michael D. Berger
> <m_d_berger_1900 at yahoo.com> wrote:
> From decades of experience in many environments, I can tell you that
> reliable transfer of large files with protocols that require
> uninterrupted transfer is awkward. The larger the file, the larger the
> chance that any interruption at any point between the repository and the
> client will break things, and with many ISPs over-subscribing their
> available bandwidth, such large transfers are, by their nature,
> unreliable.
> Consider fragmenting the large file: BitTorrent transfers do this
> automatically; the old "shar" and "split" tools also work well; and
> tools like "rsync" and the lftp "mirror" utility are very good at
> mirroring directories of such split-up contents quite efficiently.

What, then, is the largest file size that you would consider

