Hi,
I have two 320 GB SATA disks (/dev/sda, /dev/sdb) in a server running
CentOS release 5.
They both have three partitions, set up as RAID1 using md (boot, swap,
and an LVM data partition).
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
      4192896 blocks [2/2] [UU]
md2 : active raid1 sdb3[1] sda3[0]
      308271168 blocks [2/2] [UU]
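(If the exact array parameters matter, I can post the detailed metadata
as well; roughly the commands I would run for that:
# mdadm --detail /dev/md2
# mdadm --examine /dev/sda3 /dev/sdb3
)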
When I run tests, though, I find that the md RAID1 read performance is
no better than that of either disk on its own:
# hdparm -tT /dev/sda3 /dev/sdb3 /dev/md2
/dev/sda3:
 Timing cached reads: 4160 MB in 2.00 seconds = 2080.92 MB/sec
 Timing buffered disk reads: 234 MB in 3.02 seconds = 77.37 MB/sec
/dev/sdb3:
 Timing cached reads: 4148 MB in 2.00 seconds = 2074.01 MB/sec
 Timing buffered disk reads: 236 MB in 3.01 seconds = 78.46 MB/sec
/dev/md2:
 Timing cached reads: 4128 MB in 2.00 seconds = 2064.04 MB/sec
 Timing buffered disk reads: 230 MB in 3.02 seconds = 76.17 MB/sec
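(As a cross-check on hdparm, I can also do a plain sequential read with
dd; the block size and count here are just arbitrary values I would pick:
# dd if=/dev/md2 of=/dev/null bs=1M count=1024
)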
If I fail and remove one of the disks in /dev/md2:
# mdadm /dev/md2 -f /dev/sda3
# mdadm /dev/md2 -r /dev/sda3
# cat /proc/mdstat
...
md2 : active raid1 sdb3[1]
308271168 blocks [2/1] [_U]
# hdparm -tT /dev/md2
/dev/md2:
 Timing cached reads: 4184 MB in 2.00 seconds = 2092.65 MB/sec
 Timing buffered disk reads: 240 MB in 3.01 seconds = 79.70 MB/sec
So with only one disk in the array, the performance is pretty much the same.
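(Afterwards the disk goes back in and resyncs with something like:
# mdadm /dev/md2 -a /dev/sda3
)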
At first I thought the bottleneck might be the SATA controller, but if
I do simultaneous reads from both disks:
# mkfifo /tmp/sync
# cat /tmp/sync; hdparm -tT /dev/sda3
(and in another terminal, to make sure they start simultaneously)
# > /tmp/sync; hdparm -tT /dev/sdb3
/dev/sda3:
 Timing cached reads: 2248 MB in 2.00 seconds = 1123.83 MB/sec
 Timing buffered disk reads: 234 MB in 3.00 seconds = 77.91 MB/sec
/dev/sdb3:
 Timing cached reads: 2248 MB in 2.00 seconds = 1123.74 MB/sec
 Timing buffered disk reads: 236 MB in 3.01 seconds = 78.30 MB/sec
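(Starting both from a single shell with & and wait should work just as
well for making them run simultaneously, e.g.:
# hdparm -tT /dev/sda3 & hdparm -tT /dev/sdb3 & wait
)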
So the total cached read bandwidth seems to be limited to about 2250 MB/s,
which is only slightly higher than the cached read figure for /dev/md2,
but I'm not too worried about that. More concerning is that I am still
getting close to 80 MB/s from each disk simultaneously on the buffered
reads. Since both disks can clearly be read at full speed at the same
time, I would expect /dev/md2 to give buffered read speeds of at least
120 MB/s (if not 150 MB/s) if reads were being balanced across both
halves of the mirror.
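(If it would help, I can also run two readers against the array at the
same time to see whether md spreads them over both disks; a rough sketch,
with the offsets picked arbitrarily so the two reads hit different regions:
# dd if=/dev/md2 of=/dev/null bs=1M count=1024 &
# dd if=/dev/md2 of=/dev/null bs=1M count=1024 skip=102400 &
# wait
while watching the per-disk traffic with iostat -x 1 in another terminal.)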
What can I do to track down this issue? Any help would be really appreciated.
Thanks,
Kieran Clancy.