On Sun, Nov 21, 2010 at 10:02 AM, Michael D. Berger <m_d_berger_1900@yahoo.com> wrote:
On Sun, 21 Nov 2010 06:47:04 -0500, Nico Kadel-Garcia wrote:
On Sat, Nov 20, 2010 at 10:28 PM, Michael D. Berger <m_d_berger_1900@yahoo.com> wrote:
[...]
From decades of experience in many environments, I can tell you that
reliably transferring large files with protocols that require an uninterrupted transfer is awkward. The larger the file, the larger the chance that an interruption at any point between the repository and the client will break things, and with a lot of ISPs over-subscribing their available bandwidth, such large transfers are, by their nature, unreliable.
Consider fragmenting the large file: BitTorrent transfers do this automatically, the old "shar" and "split" tools also work well, and tools like "rsync" and lftp's "mirror" command are very good at efficiently mirroring directories of such split-up contents. A rough sketch of what I mean is below.
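For instance, something along these lines to mirror a directory of pre-split pieces (the host and path names here are just placeholders):

# Mirror a directory of pieces; rsync only re-sends the pieces
# (or the partial piece) that didn't make it on the last attempt:
rsync -av --partial rsync://mirror.example.com/pub/images/pieces/ ./pieces/

# The same idea with lftp's mirror command, resuming where it left off:
lftp -e 'mirror --continue /pub/images/pieces ./pieces; quit' ftp://mirror.example.com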
What, then, is the largest file size that you would consider appropriate?
Good question. I don't have a hard rule of thumb, but I'd estimate that any one file that takes more than 10 minutes to transfer is too big. So transferring CD images over a high-bandwidth local connection at 1 MByte/second, sure, no problem! But on a DSL line that may have only 80 KB/second, 80 KB/second * 60 seconds/minute * 10 minutes = 48 MB. So splitting a CD down to lumps of, say, 50 MB seems reasonable.
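In practice that might look something like this (the image name is just a placeholder):

# Split a CD image into 50 MB pieces and record checksums:
split -b 50M centos-disc1.iso centos-disc1.iso.part.
md5sum centos-disc1.iso.part.* > centos-disc1.iso.md5

# On the receiving side, verify the pieces and put the image back together:
md5sum -c centos-disc1.iso.md5
cat centos-disc1.iso.part.* > centos-disc1.iso

Any piece that fails the checksum or gets interrupted can be fetched again on its own, instead of restarting the whole image.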
If you look at how BitTorrent works, or at the old "shar" utilities used for sending binaries as compressed text lumps over Usenet and email, you'll see what I mean. Even commercial tools from the Windows world like WinRAR do something like this.
Thanks, Mike.