On 05/25/2016 09:54 AM, Kelly Lesperance wrote:

> What we're seeing is that when the weekly raid-check script executes,
> performance nose dives, and I/O wait skyrockets. The raid check starts
> out fairly fast (20000K/sec - the limit that's been set), but then
> quickly drops down to about 4000K/sec. dev.raid.speed sysctls are at
> the defaults:

It looks like some pretty heavy writes are going on at the time. I'm not
sure what you mean by "nose dives", but I'd expect *some* performance
impact from running a read-intensive process like a RAID check at the
same time as a write-intensive workload. Do the same write-heavy
processes run on the other clusters, where you aren't seeing performance
issues?

> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            9.24    0.00    1.32   20.02    0.00   69.42
>
> Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
> sda              50.00       512.00     20408.00        512      20408
> sdb              50.00       512.00     20408.00        512      20408
> sdc              48.00       512.00     19984.00        512      19984
> sdd              48.00       512.00     19984.00        512      19984
> sdf              50.00       704.00     19968.00        704      19968
> sdg              47.00       512.00     19968.00        512      19968
> sdh              47.00       512.00     19968.00        512      19968
> sde              50.00       704.00     19968.00        704      19968
> sdj              48.00       512.00     19972.00        512      19972
> sdi              48.00       512.00     19972.00        512      19972
> sdk              48.00       512.00     19980.00        512      19980
> sdl              48.00       512.00     19980.00        512      19980
> md127           241.00       0.00      120280.00          0     120280
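
For reference, the throttling you describe is governed by the md
sync-rate sysctls. A minimal sketch of how to inspect them and, if you
decide the check should keep priority over application I/O, raise the
floor (the 4000 value below is purely illustrative, not a tuned
recommendation):

    # Show the current md sync-rate limits (KB/s, per device):
    sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max

    # md only sustains speed_limit_max while the array is otherwise
    # idle; under competing I/O it backs off toward speed_limit_min
    # (default 1000). Raising the floor keeps the check moving, at the
    # cost of application I/O latency:
    sysctl -w dev.raid.speed_limit_min=4000

    # Watch the check's progress and current speed:
    cat /proc/mdstat

That back-off toward speed_limit_min would be roughly consistent with
the drop from 20000K/sec to ~4000K/sec you're seeing once the heavy
writes kick in.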