[CentOS] Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2

Mon Jun 13 23:57:46 UTC 2016
cpolish at surewest.net <cpolish at surewest.net>

On 2016-06-01 20:07, Kelly Lesperance wrote:
> Software RAID 10.  Servers are HP DL380 Gen 8s, with 12x4 TB 7200 RPM drives.
> 
> On 2016-06-01, 3:52 PM, "centos-bounces at centos.org on behalf of m.roth at 5-cent.us" wrote:
> 
> >Kelly Lesperance wrote:
> >> I did some additional testing - I stopped Kafka on the host, and kicked
> >> off a disk check, and it ran at the expected speed overnight. I started
> >> Kafka this morning, and the raid check's speed immediately dropped to
> >> ~2000K/Sec.
> >>
> >> I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*).
> >> The raid check is now running between 100000K/Sec and 200000K/Sec, and has
> >> been for several hours (it fluctuates, but seems to stay within that
> >> range). Write-back cache is NOT enabled for the drives on the hosts we
> >> haven't upgraded yet, but the speeds are similar (I kicked off a raid
> >> check on one of our CentOS 6 hosts as well, the window seems to be 150000
> >> - 200000K/Sec on that host).

Hi Kelly,

I hope this is relevant -- the patch below was just posted to
linux-raid; you might want to try the most recent kernel from git to
see whether your problem is fixed.
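
In the meantime, if you want to watch how hard md is throttling the
check against your Kafka traffic, here is a minimal sketch (my own
illustration, not something from the kernel tree; the file name
mdwatch.c is made up) that snapshots the dev.raid speed-limit sysctls
next to /proc/mdstat:

#include <stdio.h>

/* Print one proc file verbatim, with a header line. */
static void dump(const char *path)
{
        char buf[4096];
        size_t n;
        FILE *f = fopen(path, "r");

        if (!f) {
                perror(path);
                return;
        }
        printf("== %s ==\n", path);
        while ((n = fread(buf, 1, sizeof(buf) - 1, f)) > 0) {
                buf[n] = '\0';
                fputs(buf, stdout);
        }
        fclose(f);
}

int main(void)
{
        /* md's floor/ceiling for resync/check throughput, in K/sec */
        dump("/proc/sys/dev/raid/speed_limit_min");
        dump("/proc/sys/dev/raid/speed_limit_max");
        /* the running "check" line here shows the same K/sec figure
         * you quoted */
        dump("/proc/mdstat");
        return 0;
}

Build with "cc -o mdwatch mdwatch.c" and run it while the check is
underway; if the observed rate sits near speed_limit_min, md is backing
off for competing I/O rather than the disks being saturated.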

Best regards,
-- 
Charles Polisher

Date: Mon, 13 Jun 2016 15:51:19 +0200
From: Tomasz Majchrzak <tomasz.majchrzak at intel.com>
To: linux-raid at vger.kernel.org
Subject: [PATCH] raid1/raid10: slow down resync if there is non-resync activity pending

A performance drop of mkfs has been observed on RAID10 during resync
since commit 09314799e4f0 ("md: remove 'go_faster' option from
->sync_request()"). Resync issues so many IOs that it slows non-resync
IOs down significantly (by a factor of a few). Add a short delay to
resync. The previous long sleep (1s) proved unnecessary; even a very
short delay restores performance.

The change is also applied to raid1. The problem has not been observed
on raid1; however, raid1 shares the barrier code with raid10, so it
might affect some setups too.

Suggested-by: NeilBrown <neilb at suse.com>
Link: http://lkml.kernel.org/r/20160609134555.GA9104@proton.igk.intel.com
Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak at intel.com>
---
 drivers/md/raid1.c  | 7 +++++++
 drivers/md/raid10.c | 7 +++++++
 2 files changed, 14 insertions(+)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 39fb21e..03c5349 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2535,6 +2535,13 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
                return sync_blocks;
        }

+       /*
+        * If there is non-resync activity waiting for a turn,
+        * then let it through before starting on this new sync request.
+        */
+       if (conf->nr_waiting)
+               schedule_timeout_uninterruptible(1);
+
        /* we are incrementing sector_nr below. To be safe, we check against
         * sector_nr + two times RESYNC_SECTORS
         */
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index e3fd725..8a4791e 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2912,6 +2912,13 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
            max_sector > (sector_nr | chunk_mask))
                max_sector = (sector_nr | chunk_mask) + 1;

+       /*
+        * If there is non-resync activity waiting for a turn,
+        * then let it through before starting on this new sync request.
+        */
+       if (conf->nr_waiting)
+               schedule_timeout_uninterruptible(1);
+
        /* Again, very different code for resync and recovery.
         * Both must result in an r10bio with a list of bios that
         * have bi_end_io, bi_sector, bi_bdev set,
--
1.8.3.1
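
A note for anyone reading the patch outside the kernel tree:
schedule_timeout_uninterruptible(1) sleeps for a single jiffy (1-10 ms
depending on CONFIG_HZ), and conf->nr_waiting counts normal I/O held
up behind the resync barrier. A rough userspace analogue of the
pattern -- hypothetical names, purely to show the shape of the
throttle:

#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

static atomic_int nr_waiting;   /* foreground I/O queued for a turn */

/* Roughly one jiffy at HZ=250; stands in for
 * schedule_timeout_uninterruptible(1). */
static void yield_one_tick(void)
{
        struct timespec ts = { 0, 4 * 1000 * 1000 };
        nanosleep(&ts, NULL);
}

/* Analogue of one raid1/raid10 sync_request() call: if foreground
 * I/O is waiting, let it through first, then do one resync chunk. */
static void sync_request(void)
{
        if (atomic_load(&nr_waiting))
                yield_one_tick();
        /* ... issue one resync window of I/O here ... */
}

int main(void)
{
        atomic_store(&nr_waiting, 1);   /* pretend a writer is queued */
        sync_request();
        puts("yielded one tick before the resync step");
        return 0;
}

The asymmetry is the point: resync pays a small fixed cost only when
foreground I/O is actually queued, which matches Kelly's observation
that the check runs at full speed once Kafka is stopped.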