On Tuesday 16 January 2007 16:37, Ross S. W. Walker wrote: ...
To follow up on this (even if it is a little late), how is this affected by LVM use? I'm curious to know how (or if) this math changes with ext3 sitting on LVM on the raid array.
Depends is the best answer. It really depends on LVM and the other block layer devices. As the I/O requests descend through the different layers they pass through multiple request_queues, and each request_queue has an I/O scheduler assigned to it: either the system default, one of the alternatives, or the block device's own. So it is hard to say; only by testing can you know for sure. In my tests LVM is very good, with unnoticeable overhead going to hardware RAID, but if you use MD RAID your experience might be different.
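For reference, on a 2.6 kernel with sysfs mounted you can inspect and change the scheduler of a real device's request_queue at runtime. This is only a sketch; the device name and the output shown are just examples, not from the test setup above:

# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]
# echo deadline > /sys/block/sda/queue/scheduler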
I don't think that is quite correct. AFAICT only the "real" devices (such as /dev/sda) have an io-scheduler. See the difference in ls /sys/block/..:

# ls /sys/block/dm-0
dev  range  removable  size  stat

# ls /sys/block/sdc
dev  device  queue  range  removable  size  stat
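To make the same point another way: the scheduler only shows up under the real device's queue directory. A quick check (output is illustrative, device names as above):

# cat /sys/block/sdc/queue/scheduler
noop anticipatory deadline [cfq]
# cat /sys/block/dm-0/queue/scheduler
cat: /sys/block/dm-0/queue/scheduler: No such file or directory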
As for read-ahead, it's the reverse: in my tests it has no effect when applied to the underlying device (such as sda) and has to be set on the LVM device instead. Here are some performance numbers:
sdc:256,dm-0:256 and sdc:8192,dm-0:256 give:

# time dd if=file10G of=/dev/null bs=1M
real    0m59.465s

sdc:256,dm-0:8192 and sdc:8192,dm-0:8192 give:

# time dd if=file10G of=/dev/null bs=1M
real    0m24.163s
This is on an 8-disk 3ware RAID6 (hardware RAID) with fully updated CentOS-4.4 x86_64. The file dd read was 1000 MiB. 256 is the default read-ahead, and blockdev --setra was used to change it.
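For anyone wanting to reproduce this, the read-ahead settings above can be applied with blockdev roughly like this (a sketch; /dev/sdc and /dev/dm-0 are the device names from my setup, adjust to match yours):

# blockdev --getra /dev/sdc
256
# blockdev --setra 8192 /dev/sdc
# blockdev --setra 8192 /dev/dm-0
# blockdev --getra /dev/dm-0
8192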
/Peter