Jed Reynolds wrote:
> Ugo Bellavance wrote:
>>
>> Can you send us the output of vmstat -n 5 5
>> when you're doing a backup?
>>
>
> This is with rsync at bwlimit=2500
>

This is doing the same transfer with SSH. The load still climbs... and then
drops. I think NFS is the issue. I wonder if my NFS mount settings in the
client fstabs are unwise? I figured that with a beefy machine and fast
networking, I could take advantage of large packet sizes. Are these bad
packet sizes?

    rw,hard,intr,rsize=16384,wsize=16384

top - 23:04:35 up 3 days, 10:34,  4 users,  load average: 4.08, 3.06, 2.81
Tasks: 132 total,   1 running, 131 sleeping,   0 stopped,   0 zombie
Cpu0  :  5.7% us,  1.7% sy,  0.0% ni, 72.0% id, 19.3% wa,  0.7% hi,  0.7% si
Cpu1  :  1.3% us,  3.0% sy,  0.0% ni, 38.4% id, 51.0% wa,  0.7% hi,  5.6% si
Mem:   8169712k total,  8149288k used,    20424k free,   162628k buffers
Swap:  4194296k total,      160k used,  4194136k free,  6374960k cached

then

top - 23:08:49 up 3 days, 10:39,  4 users,  load average: 0.89, 1.86, 2.38
Tasks: 129 total,   1 running, 128 sleeping,   0 stopped,   0 zombie
Cpu0  :  5.2% us,  2.8% sy,  0.0% ni, 63.7% id, 23.4% wa,  1.2% hi,  3.8% si
Cpu1  :  1.2% us,  3.2% sy,  0.0% ni, 65.9% id, 27.3% wa,  1.0% hi,  1.4% si
Mem:   8169712k total,  8149512k used,    20200k free,   141388k buffers
Swap:  4194296k total,      160k used,  4194136k free,  6388856k cached

$ vmstat -n 5 5
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo    in    cs us sy id wa
 0  0    160  18712 155060 6383956    0    0    96    45    42    70  0  2 89  9
 0  0    160  20128 154328 6382988    0    0   421  2578  7622  2433  3  4 64 29
 0  0    160  18192 153920 6384076    0    0   126  2498  7116  2238  3  6 72 19
 0  1    160  22872 153684 6380640    0    0   110  2451  7065  2063  3  4 64 28
 0  0    160  23880 153416 6379752    0    0    34  2520  7091  2506  3  4 68 25
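For comparison, this is roughly the entry I'm tempted to try next in the
client fstab: smaller 8k buffers and forcing TCP. The server name and mount
point below are placeholders, and treating 8k-over-TCP as gentler on this
link is just my guess, not a known fix:

    # hypothetical /etc/fstab entry -- fileserver:/export and /mnt/backup are placeholders
    fileserver:/export  /mnt/backup  nfs  rw,hard,intr,tcp,rsize=8192,wsize=8192  0 0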
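And to test whether the 16k reads/writes are actually the problem, I figure
the client-side RPC retransmission counters should show it. Something like
this while a backup is running (my assumption being that a climbing retrans
count points at the buffer sizes or the transport, while a flat one points
elsewhere):

    $ nfsstat -rc    # client RPC stats; watch the "retrans" column during the transfer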