# iostat -m -x 10
Linux 2.6.18-8.1.6.el5 (data1.iol)      09/10/2009
. . .
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.20    0.00    0.31    8.79    0.00   90.70

Device:       rrqm/s   wrqm/s      r/s      w/s    rMB/s    wMB/s avgrq-sz avgqu-sz    await    svctm    %util
cciss/c0d0      0.00     0.20     0.92     0.51     0.00     0.00    10.29     0.04    25.29    25.93     3.71
cciss/c0d1      0.00     0.20     3.68     2.45     0.02     0.01    11.07     0.06     9.87     7.27     4.45
cciss/c0d2      0.00     0.20     0.41     2.76     0.00     0.01     8.52     0.03     9.97     2.81     0.89
cciss/c0d3      1.23     0.51     3.98     1.53     0.03     0.01    14.52     0.05     9.69     8.07     4.45
cciss/c0d4      0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00     0.00
cciss/c0d5      0.00     0.00     1.02     0.10     0.00     0.00     8.00     0.01     9.36     9.36     1.05
cciss/c0d6      2.45     0.20     0.92     0.51     0.06     0.00    87.43     0.01     9.64     7.21     1.03
cciss/c0d7      0.00     0.00     0.31     0.10     0.00     0.00     8.00     0.01    14.25    14.25     0.58
cciss/c0d8      0.00     0.00     0.10     1.84     0.00     0.01     8.00     0.01     7.26     1.42     0.28
cciss/c0d9      0.00     0.10     0.41     3.78     0.00     0.02     9.56     0.05    12.24     1.59     0.66
cciss/c1d0      0.00     6.03     0.00     1.74     0.00     0.02    26.94     0.04    25.35    12.59     2.19

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.05    0.00    0.36   25.77    0.00   73.82

Device:       rrqm/s   wrqm/s      r/s      w/s    rMB/s    wMB/s avgrq-sz avgqu-sz    await    svctm    %util
cciss/c0d0      0.00     0.82     4.60     0.20     0.03     0.00    13.11     0.06    13.55    13.00     6.25
cciss/c0d1      1.23     1.43     1.94     0.20     0.02     0.01    29.33     0.02    11.48    11.48     2.46
cciss/c0d2      0.00     0.00     0.82     0.00     0.00     0.00     8.00     0.01    14.00    14.00     1.15
cciss/c0d3      2.45     1.43    11.25     0.20     0.07     0.01    14.43     0.12    10.62    10.52    12.04
cciss/c0d4      0.00     1.64     7.98     0.20     0.03     0.01    10.00     0.08     9.24     9.10     7.44
cciss/c0d5      5.93     0.20    22.09     0.20     2.19     0.00   201.03     0.58    26.06     1.88     4.19
cciss/c0d6      2.45     1.12     5.62     0.41     0.08     0.01    29.15     0.06    10.66     9.66     5.83
cciss/c0d7      0.00     1.23     5.42     0.20     0.02     0.01    10.76     0.05     8.87     8.73     4.91
cciss/c0d8      0.00     0.72     3.17     0.20     0.02     0.00    13.09     0.04    12.33    12.21     4.12
cciss/c0d9      0.92     0.82     3.17     0.20     0.03     0.00    18.67     0.04    13.12    12.67     4.27
cciss/c1d0      0.00     2.66     0.41     3.07     0.00     0.01     8.29     0.28    81.91    10.94     3.80

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.10    0.00    0.20   15.95    0.00   83.74

Device:       rrqm/s   wrqm/s      r/s      w/s    rMB/s    wMB/s avgrq-sz avgqu-sz    await    svctm    %util
cciss/c0d0      0.00     4.19    14.72     0.51     0.10     0.02    15.52     0.17    10.98     9.05    13.79
cciss/c0d1      0.00     0.20     0.31     0.20     0.00     0.00    11.20     0.01    10.20    10.20     0.52
cciss/c0d2      0.00     0.31     0.41     0.41     0.00     0.00    13.00     0.01     7.88     7.88     0.64
cciss/c0d3      0.00     0.31     4.50     0.20     0.02     0.00     9.74     0.05    10.61    10.37     4.88
cciss/c0d4      1.23     0.31     2.76     0.41     0.28     0.00   182.71     0.06    19.97     4.00     1.27
cciss/c0d5      0.00     0.82     3.17     0.20     0.02     0.00    11.64     0.04    11.30    11.30     3.81
cciss/c0d6      2.45     0.51     3.68     0.41     0.28     0.00   143.60     0.07    16.27     5.30     2.17
cciss/c0d7      1.23     0.10     1.94     0.20     0.01     0.00    15.24     0.03    13.33    13.33     2.86
cciss/c0d8      0.00     0.31     0.51     0.41     0.00     0.00    11.56     0.01     8.00     8.00     0.74
cciss/c0d9      0.00     0.10     0.61     0.20     0.00     0.00    10.00     0.01    11.88    11.75     0.96
cciss/c1d0      0.00     3.07     0.00     1.33     0.00     0.01    19.08     0.02    16.77    13.31     1.77
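(For reference: the columns worth watching in this output are await, the average time in ms a request spends queued plus being serviced; svctm, the service time alone; and %util, the fraction of the interval the device was busy. A rough way to pick the loaded spindles out of a long run, assuming the same -m -x column layout and cciss device names as above, and with purely illustrative thresholds of 20% util and 50 ms await, is something like:

# iostat -m -x 10 | awk '/^cciss/ && ($NF+0 > 20 || $(NF-2)+0 > 50)'

With this layout $NF is %util and $(NF-2) is await, so only the device lines that look busy get printed.)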
I tried nmon but did not see anything out of the ordinary... but when I try to view the NFS stats, nmon (11f-1.el5.rf) core dumps. I tried the iostat NFS option (kernel 2.6.18-8.1.6.el5 should support it) but it did not show anything, and nfsstat shows no apparent activity... NFS is normally only used to put files or modify small files (<1K) "from time to time", while HTTP is used to get files.
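For completeness, the other standard places to look for NFS activity on a box of that vintage (nothing here is specific to this setup, just the usual tools) are the RPC counters and the sysstat NFS report:

# nfsstat -s                  (server-side NFS/RPC call counters)
# nfsstat -c                  (client-side counters)
# cat /proc/net/rpc/nfsd      (raw server counters, if nfsd is loaded)
# iostat -n 10                (the per-mount NFS report; needs kernel 2.6.17+ and a recent sysstat)

If all of those stay flat while %iowait climbs, NFS is probably not the culprit and the load is coming from local disk.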
From: Les Mikesell lesmikesell@gmail.com
I'd usually blame disk seek time first. If your RAID level requires several drives to move their heads together, and/or the data layout lands on the same drive set, consider what happens when your 10 users all want the disk heads to be in different places at once. Disk drives allow random access, but they really aren't that good at it when they spend most of their time seeking.
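As a rough illustration with typical datasheet numbers (not measured on this box): a 7200 RPM drive spends about 8-9 ms on an average seek plus roughly 4 ms of rotational latency, so each random I/O costs on the order of 12-13 ms and a single spindle tops out around 75-100 random IOPS. Ten clients seeking to different places on the same spindle set therefore get only a handful of operations per second each, which is consistent with the output above: await/svctm around 10 ms and %iowait climbing while rMB/s stays tiny.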
That could be it... It's always a question of space vs speed...
Thanks for all the answers, JD