On Monday, May 09, 2011 02:02:08 PM Ali Ahsan wrote:
On 05/09/2011 10:51 PM, Lamar Owen wrote:
iostat -x 1
I am a little new to iostat, please guide me on this
[snip]
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          34.79    0.00    1.25    6.11    0.00   57.86

Device:  rrqm/s  wrqm/s    r/s    w/s   rsec/s  wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00    0.00   0.00   0.00     0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda1       0.00    0.00   0.00   0.00     0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda2       0.00    0.00   0.00   0.00     0.00    0.00     0.00     0.00    0.00   0.00   0.00
sdb       14.00    0.00  49.00   0.00  2768.00    0.00    56.49     2.63   79.80   6.98  34.20
sdb1      14.00    0.00  49.00   0.00  2768.00    0.00    56.49     2.63   79.80   6.98  34.20
dm-0       0.00    0.00   0.00   0.00     0.00    0.00     0.00     0.00    0.00   0.00   0.00
dm-1       0.00    0.00  53.00   0.00  2040.00    0.00    38.49     4.63  111.42   6.45  34.20
Ok, on this particular frame of data you have about 6% iowait (which isn't bad for a busy server), and the awaits, 79.80 milliseconds for sdb and 111.42 milliseconds for the device-mapper device (that's the LVM piece), aren't too terrible. That's slower than some busy servers I've seen.
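If you want to cut down the noise and watch just the two devices that matter here, I believe sysstat's iostat will take device names on the command line; something along these lines should do it (sdb and dm-1 are just the names from your paste, substitute whatever yours are called):

    iostat -dx sdb dm-1 1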
In my case with the WD15EADS drive (in the same family as the WD10EARS), I had seen awaits in the 27,000 millisecond range (27 seconds!) during intensive I/O; an svn update on a copy of the Plone collective, which is a really good stress test if you want to bring a box to its knees, would take ten to fifteen times longer than it should have.
Watch the output, in particular the await column (you'll want to widen your terminal so each sample fits on a single line), for 'spikes' to see whether this is the issue affecting you.
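If eyeballing the whole table gets old, a rough sketch of a filter (assuming the same column layout as your paste above, where await is the 10th field) is to let awk pull out just the awaits for the interesting devices:

    iostat -x 1 | awk '$1 == "sdb" || $1 == "dm-1" { print $1, "await:", $10 }'

You could add a test like $10 > 500 if you only want to see the really bad samples; the number is arbitrary, pick whatever counts as 'slow' for you. If the output seems to lag, iostat may be buffering when its output goes to a pipe; prefixing the command with stdbuf -oL usually takes care of that.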
And it may not be the problem; but, then again, on my box with the WD15EADS drive, it would run for hours and then slow to an absolute crawl for minutes at a time, and then run smoothly for hours again.