On my intranet, I sometimes transfer large files, about 4 GB, to an old CentOS box that I use as a web server. I transfer with ftp or sftp. Usually, before the file is complete, the transfer "stalls". At that point, ping from the destination box to the router fails. I then deactivate the network interface on the destination box and reactivate it. Ping then succeeds, and the transfer completes. The transferred file is correct, as verified with sha1sum.
All connections are via cat6 wire.
So what do you think? Should I try changing the net card? Any tests to run? Any other suggestions?
Thanks for your help.
Mike.
On Fri, Nov 19, 2010 at 4:16 PM, Michael D. Berger m_d_berger_1900@yahoo.com wrote:
[...]
It could be buffering the transfer and then writing it out to disk; I notice this on a small Xen image I use as a file server.
On 11/19/10 3:16 PM, Michael D. Berger wrote:
[...]
I haven't seen anything like that, at least not in many years, so it is probably hardware-related - but make sure your software is up to date. As a workaround, you might try rsync with the --bwlimit option to limit the speed of the transfer, and the -P option so you can restart a failed transfer from the point where it stalled on the last attempt.
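For example, something along these lines - the host name, destination path, and the KB/s limit are just placeholders to adjust for your own setup:
rsync -avP --bwlimit=2000 bigfile.tar user@webbox:/var/tmp/
# --bwlimit is in KBytes/second; -P (--partial --progress) keeps a partial file around so a re-run can resume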
On 11/20/2010 06:35 PM, Les Mikesell wrote:
[...]
If you have a managed switch, check its counters for errors (CRC, giants, runts, etc.) and check whether the speed and duplex settings are appropriate for all of the connected machines.
You should also check whether all of the devices involved can handle the MTU you use. I had a similar issue recently with Cisco gear that wouldn't play nicely with the MTUs I had set on some of my machines.
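On the Linux boxes themselves you can check what the NIC actually negotiated and whether a full-size frame makes it through unfragmented; for example (eth0, the router address 192.168.1.1, and the 1472-byte payload for a standard 1500-byte MTU are just the usual placeholders):
ethtool eth0                      # shows negotiated speed and duplex
ping -M do -s 1472 192.168.1.1    # -M do forbids fragmentation; 1472 bytes of data + 28 bytes of headers = 1500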
Cheers,
Timo
Les Mikesell wrote:
On 11/19/10 3:16 PM, Michael D. Berger wrote:
On my intranet, I sometimes transfer large files, about 4G, to an CentOS old box that I use for a web server. I transfer with ftp or sftp. Usually, before the file is complete, the transfer "stalls". At that point, ping from the destination box to the router fails. I then deactivate the net interface on the destination box and then activate it. Ping is then successful, and the transfer is completed. The transferred file is correct, as verified with sha1sum.
All connections are via cat6 wire.
So what do you think? Should I try changing the net card? Any tests to run? Any other suggestions?
I haven't seen anything like that, at least in many years so it probably is hardware related - but make sure your software is up to date. As a workaround, you might try using rsync with the --bwlimit option to limit the speed of the transfer - and the -P option so you can restart a failed transfer from the point it stalled on the last attempt.
This does ring a bell, but the circumstances were a bit different. In our case we were transferring large files between "home" and a remote site. SFTP/SCP transfers were stalling part-way through in an unpredictable manner. It turned out to be a bug in the selective acknowledgment functionality in the TCP stack. Short story, adding the following line to /etc/sysctl.conf fixed the issue:
net.ipv4.tcp_sack = 0
Of course, you can set it on-the-fly using the sysctl command:
sysctl -w net.ipv4.tcp_sack=0
It helped in our case, no way of telling if it will help you. As usual, your mileage may vary.
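If you want to see what you are currently running before changing anything, or to re-apply /etc/sysctl.conf after editing it:
sysctl net.ipv4.tcp_sack    # current value; 1 means SACK is enabled
sysctl -p                   # reloads the settings from /etc/sysctl.conf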
On Sat, 20 Nov 2010 18:17:23 -0600, Jay Leafey wrote:
[...]
Googling around, I get the impression that disabling SACK might lead to other problems. Any thoughts on this?
Thanks, Mike.
On Sat, Nov 20, 2010 at 10:28 PM, Michael D. Berger m_d_berger_1900@yahoo.com wrote:
[...]
From decades of experience in many environments, I can tell you that reliable transfer of large files with protocols that require an uninterrupted transfer is awkward. The larger the file, the larger the chance that an interruption anywhere between the repository and the client will break things, and with a lot of ISPs over-subscribing their available bandwidth, such large transfers are, by their nature, unreliable.
Consider fragmenting the large file: BitTorrent transfers do this automatically; the old "shar" and "split" tools also work well; and tools like "rsync" and the lftp "mirror" command are very good at mirroring directories of such split-up content quite efficiently.
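A minimal sketch of the split-and-reassemble approach, using placeholder file names and a 50 MB piece size:
sha1sum bigfile.tar > bigfile.tar.sha1     # checksum before splitting
split -b 50m bigfile.tar bigfile.tar.part-
# ...transfer the pieces however you like, then on the receiving end:
cat bigfile.tar.part-* > bigfile.tar
sha1sum -c bigfile.tar.sha1                # verify the reassembled file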
On Sun, 21 Nov 2010 06:47:04 -0500, Nico Kadel-Garcia wrote:
[...]
What, then, is the largest file size that you would consider appropriate?
Thanks, Mike.
On Sun, Nov 21, 2010 at 10:02 AM, Michael D. Berger m_d_berger_1900@yahoo.com wrote:
[...]
What, then, is the largest file size that you would consider appropriate?
Good question. I don't have a hard rule of thumb, but I'd estimate that any single file that takes more than 10 minutes to transfer is too big. So transferring CD images over a high-bandwidth local connection at 1 MByte/second - sure, no problem! But for DSL that may give you only 80 KB/second: 80 KB/second * 60 seconds/minute * 10 minutes = 48 MB. So splitting a CD image down into lumps of, say, 50 MB seems reasonable.
If you look at how BitTorrent works, and at the old "shar" utilities used for sending binaries as compressed text lumps over Usenet and email, you'll see what I mean. Even commercial tools from the Windows world, like WinRAR, do something like this.
On Sun, 21 Nov 2010 11:49:29 -0500, Nico Kadel-Garcia wrote:
[...]
The file I was having trouble with was a tar file of a complex directory tree containing mostly jpg files under 15 MB in size. So instead I ran rsync -rv on the unpacked directory tree, and it worked just fine. PROBLEM SOLVED.
Thanks, Mike.
On Sun, Nov 21, 2010 at 10:13 PM, Michael D. Berger m_d_berger_1900@yahoo.com wrote:
[...]
The file I was having trouble with was a tar file of a complex directory tree containing mostly jpg files under 15M in size. So instead I did rsync -rv on the unpacked directory tree, and it worked just fine. PROBLEM SOLVED.
Good for you. Next time, use "rsync -avH". "-H" preserves hardlinks, "-a" preserves lots of other useful characteristics, such as symlinks and full ownership and permissions.
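In other words, something like this - the host and paths are placeholders, and the trailing slash on the source copies the directory's contents rather than creating an extra directory level:
rsync -avH /path/to/picture-tree/ user@webbox:/var/www/html/pictures/
# -a implies -r plus permissions, times, owner/group and symlinks; -H adds hardlink preservation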
On 11/21/10 9:02 AM, Michael D. Berger wrote:
[...]
What, then, is the largest file size that you would consider appropriate?
There's no particular limit with rsync, since if you use the -P option it can restart a failed transfer, needing only a little extra time to verify the partial file via a block-checksum comparison. With methods that can't restart, an appropriate size would depend on the reliability and speed of the connection, since that determines the odds of a connection problem during the time it takes to complete the transfer.
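The resume workflow is simply to run the identical command again after a stall; for instance, with placeholder names:
rsync -avP bigfile.tar user@webbox:/var/tmp/
# if it stalls, rerun the same command: --partial keeps what was already sent,
# and the block-checksum comparison skips over it instead of resending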
On Sun, Nov 21, 2010 at 11:51 AM, Les Mikesell lesmikesell@gmail.com wrote:
[...]
There's no particular limit with rsync since if you use the -P option it will be able to restart a failed transfer with just a little extra time to verify it with a block-checksum transfer. With methods that don't restart, an appropriate size would depend on the reliability and speed of the connections since it relates to the odds of a connection problem during the time it takes to complete the transfer.
Rsync is wonderful, but it is not supported by typical web browsers or by a lot of the file managers that can speak FTP and HTTP. I like rsync because it comprehends symlinks and hardlinks, has good scripting, and allows sophisticated exclude options without getting overwhelmed.
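For example, the exclude handling might look like this (the patterns and paths are just placeholders):
rsync -avH --exclude='*.tmp' --exclude='.cache/' /src/dir/ user@webbox:/dest/dir/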