OK. Now I'm a bit confused. RAID 1 read performance is not what I expected.

CentOS 4.4, kernel 2.6.9-42.0.2.ELsmp

=====
[root at hagar ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb2[1] sda2[0]
      244035264 blocks [2/2] [UU]
=====

=====
[root at hagar scsi]# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 06 Lun: 00
  Vendor: SEAGATE  Model: DAT DAT72-052   Rev: A16E
  Type:   Sequential-Access               ANSI SCSI revision: 03
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: Maxtor 7L250S0  Rev: BACE   (This is /dev/sda)
  Type:   Direct-Access                   ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: Maxtor 7L250S0  Rev: BACE   (This is /dev/sdb)
  Type:   Direct-Access                   ANSI SCSI revision: 05
=====

=====
[root at hagar ~]# hdparm -t /dev/sda2 /dev/sdb2

/dev/sda2:
 Timing buffered disk reads:  154 MB in  3.01 seconds = 51.15 MB/sec

/dev/sdb2:
 Timing buffered disk reads:  162 MB in  3.03 seconds = 53.47 MB/sec
=====

Then I run this script:

=====
# flush the cache
dd if=/dev/md1 bs=32M count=64 of=/dev/null

# sync the data
sync

# Run two read operations on different parts of /dev/md1 simultaneously.
# This reads a total of 1 GB of data.
time dd if=/dev/md1 bs=4k count=131072 of=/dev/null &
time dd if=/dev/md1 skip=262144 bs=4k count=131072 of=/dev/null &
=====

The results show about 58 MB/sec transferred, which is roughly what hdparm shows for each drive individually. Running the same thing, but reading the whole 1 GB with a single dd process in the foreground, gives identical results.

Why am I not seeing higher numbers?

Thanks,
Steve