Greetings.
When I "pull" data from server A to server B, with nfs, server B reports a high load when viewed via uptime. The load is upwards of 6.x. Server A's load remains light. When I "push" the data, server A's and B's loads are relatively light.
When I change the mount from NFS to CIFS, I don't see this problem. I also don't have the problem when using scp.
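The CIFS test is the same copy with the export shared via Samba instead, something along these lines (share name and credentials are made up for the example):

    mount -t cifs //serverA/export /mnt/a -o username=backup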
I don't recall seeing these excessive load averages when these servers ran CentOS 4.x. I have noticed the problem with 5.1 and 5.2; I updated the servers from 5.1 to 5.2 yesterday.
I actually have 3 servers set up the same way, and all 3 exhibit the same problem.
iostat reports nothing unusual; the CPU idle percentage hovers in the high 80s to 90s.
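I'm just watching plain iostat output while a copy runs, roughly:

    iostat -x 5

mainly looking at the CPU line and the per-device numbers.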
I am not seeing any network errors or collisions when viewed via ifconfig. These servers are on a gigabit network, connected via a good 3Com switch. I am getting transfer rates averaging around 55 MB/sec using scp, so I don't think I have a network/wiring issue. The NICs are Intel 82541GI NICs on a Supermicro X6DH8G serverboard. I am not currently bonding, although I'd like to if it will actually increase the bandwidth.
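The error/collision check is nothing fancier than looking at the interface counters, e.g. (eth0 is a placeholder for whichever interface the copies go over):

    ifconfig eth0
    ethtool eth0
    ethtool -S eth0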
Both servers' filesystems are ext3, 9 TB, running RAID 50 on a 3Ware 9550SX RAID card. I have performed the tuning procedures listed on 3Ware's website. The "third" server has an XFS filesystem and shows the same problem.
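Going from memory, that tuning was along these lines; the values and device name are just what I recall applying, not necessarily 3Ware's exact recommendations:

    blockdev --setra 16384 /dev/sda                    # larger readahead on the RAID unit
    echo deadline > /sys/block/sda/queue/scheduler     # I/O scheduler
    echo 512 > /sys/block/sda/queue/nr_requests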
I am using the e1000 driver (the default on install). The servers are 64-bit and have 2 GB of memory; adding memory didn't help the situation.
I have tried turning autoneg off on the NICs; they are running 1000baseT, full duplex. I have varied the NFS rsize and wsize parameters to no avail (current mount options are shown below).
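The fstab entry is along these lines (hostname and paths are placeholders, and the rsize/wsize values shown are just one of the combinations I tried):

    serverA:/export  /mnt/a  nfs  rw,hard,intr,nfsvers=3,rsize=32768,wsize=32768  0 0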
I am using NFS v3. NFSv4's configuration seems quite a bit different, and I haven't had a chance to familiarize myself with the changes.
Any ideas or suggestions would be greatly appreciated.
TIA,
Monty