I just installed the CentOS 5.4 64-bit release on a 1.9GHz CPU with 8GB of RAM. It has two Western Digital 1.5TB SATA2 drives in RAID1.
[root@server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              1.4T  1.4G  1.3T   1% /
/dev/md0               99M   19M   76M  20% /boot
tmpfs                 4.0G     0  4.0G   0% /dev/shm
[root@server ~]#
It's barebones right now, nothing really running. I intend to move our current email server over to it eventually. The thing is slow as mud due to disk I/O, though, and I have no idea what's going on.
Here is a bit of iostat -x output.
[root@server ~]# iostat -x
Linux 2.6.18-164.9.1.el5 (server.x.us)   01/05/2010

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.13    0.10    0.03   38.23    0.00   61.51

Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz    await   svctm  %util
sda        0.18    0.33  0.04  0.20   27.77    4.22   132.30     3.70 15239.91 2560.21  61.91
sda1       0.00    0.00  0.00  0.00    0.34    0.00   654.72     0.01 12358.84 3704.24   0.19
sda2       0.18    0.00  0.04  0.00   27.30    0.00   742.93     0.30  8065.37 1930.35   7.09
sda3       0.00    0.33  0.01  0.20    0.14    4.22    21.29     3.40 16537.41 3008.03  61.52
sdb        0.19    0.33  0.06  0.20   28.39    4.22   126.29     2.51  9676.49  862.49  22.27
sdb1       0.00    0.00  0.00  0.00    0.34    0.00   606.29     0.00  2202.03  643.13   0.04
sdb2       0.18    0.00  0.04  0.00   27.30    0.00   724.25     0.10  2579.45  745.76   2.81
sdb3       0.01    0.33  0.02  0.20    0.75    4.22    22.61     2.41 10913.16  988.40  21.74
md2        0.00    0.00  0.04  0.48    0.89    3.87     9.04     0.00     0.00    0.00   0.00
md1        0.00    0.00  0.00  0.00    0.00    0.00     8.00     0.00     0.00    0.00   0.00
md0        0.00    0.00  0.00  0.00    0.00    0.00     7.20     0.00     0.00    0.00   0.00
Does anyone have an idea what I have wrong here? This is my first software RAID install. I've built a number of CentOS servers without RAID and they have all worked fine.
Matt
Matt wrote:
Does anyone have an idea what I have wrong here? This is my first software RAID install. Built a number of Centos servers without RAID and they have all worked fine.
Wild guess: md is still striping. Try...
# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Thu Dec 10 12:58:00 2009
     Raid Level : raid10
     Array Size : 286743936 (273.46 GiB 293.63 GB)
    Device Size : 143371968 (136.73 GiB 146.81 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Dec 21 16:48:16 2009
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2, far=1
     Chunk Size : 64K

           UUID : efd5289d:577a741b:16302cca:5b33303f
         Events : 0.237213

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
       2       8       65        2      active sync   /dev/sde1
       3       8       81        3      active sync   /dev/sdf1
Does anyone have an idea what I have wrong here? This is my first software RAID install. I've built a number of CentOS servers without RAID and they have all worked fine.
Wild guess: md is still striping. Try...
I get this:
[root@server ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Tue Dec 22 23:35:17 2009
     Raid Level : raid1
     Array Size : 104320 (101.89 MiB 106.82 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Jan 3 04:38:15 2010
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : fc5658f0:7ed7c190:9bc8468e:abc6d2ae
         Events : 0.16

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

[root@server ~]# mdadm -D /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Tue Dec 22 22:01:47 2009
     Raid Level : raid1
     Array Size : 1456645568 (1389.17 GiB 1491.61 GB)
  Used Dev Size : 1456645568 (1389.17 GiB 1491.61 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Tue Jan 5 16:44:38 2010
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 94f6487b:cb068544:241217e2:db0f5e8a
         Events : 0.11

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
On 1/5/2010 4:44 PM, Matt wrote:
I just installed the CentOS 5.4 64-bit release on a 1.9GHz CPU with 8GB of RAM. It has two Western Digital 1.5TB SATA2 drives in RAID1.
[root@server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              1.4T  1.4G  1.3T   1% /
/dev/md0               99M   19M   76M  20% /boot
tmpfs                 4.0G     0  4.0G   0% /dev/shm
[root@server ~]#
It's barebones right now, nothing really running. I intend to move our current email server over to it eventually. The thing is slow as mud due to disk I/O, though, and I have no idea what's going on.
Here is a bit of iostat -x output.
[...]
Does anyone have an idea what I have wrong here? This is my first software RAID install. I've built a number of CentOS servers without RAID and they have all worked fine.
Did you just create the RAIDs? It will take something that size a few hours to complete the initial sync. Try 'cat /proc/mdstat' to see when the sync completes. Until then, expect head contention with anything else that might be trying to use the drives.
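If you want to keep an eye on it without re-running the command by hand, something like this works (plain 'watch' from procps; the 5-second interval is arbitrary):

# watch -n 5 cat /proc/mdstat

While a resync is running, each affected array gets a progress line with a percentage and an estimated finish time; once those lines disappear the sync is done.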
I just installed the CentOS 5.4 64-bit release on a 1.9GHz CPU with 8GB of RAM. It has two Western Digital 1.5TB SATA2 drives in RAID1.
[root@server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              1.4T  1.4G  1.3T   1% /
/dev/md0               99M   19M   76M  20% /boot
tmpfs                 4.0G     0  4.0G   0% /dev/shm
[root@server ~]#
It's barebones right now, nothing really running. I intend to move our current email server over to it eventually. The thing is slow as mud due to disk I/O, though, and I have no idea what's going on.
Here is a bit of iostat -x output.
[...]
Does anyone have an idea what I have wrong here? This is my first software RAID install. I've built a number of CentOS servers without RAID and they have all worked fine.
Did you just create the RAIDs? It will take something that size a few hours to complete the initial sync. Try 'cat /proc/mdstat' to see when the sync completes. Until then, expect head contention with anything else that might be trying to use the drives.
Here is what I got there. md1 is swap, I believe:
[root@server ~]# uptime
 17:20:54 up 7 days, 3:26, 2 users, load average: 3.41, 2.93, 2.89
[root@server ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      8385856 blocks [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[0]
      1456645568 blocks [2/2] [UU]

unused devices: <none>
[root@server ~]# uptime
 17:20:54 up 7 days, 3:26, 2 users, load average: 3.41, 2.93, 2.89
I think you need to investigate what is causing the load to be high. In my experience, software RAID causes some CPU load, but it should not be sustained unless you have something doing continuous disk I/O.
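For instance, a couple of intervals of vmstat will show whether the time is going to the CPU or to waiting on the disks (2-second samples, ten of them):

# vmstat 2 10

Watch the wa column: a high iowait percentage with us/sy near zero means the load average is coming from processes stuck waiting on disk, not from computation.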
Neil
-- Neil Aggarwal, (281)846-8957, http://UnmeteredVPS.net CentOS 5.4 VPS with unmetered bandwidth only $25/month! No overage charges, 7 day free trial, PayPal, Google Checkout
On 1/5/2010 5:29 PM, Matt wrote:
I just installed the CentOS 5.4 64-bit release on a 1.9GHz CPU with 8GB of RAM. It has two Western Digital 1.5TB SATA2 drives in RAID1.
[root@server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              1.4T  1.4G  1.3T   1% /
/dev/md0               99M   19M   76M  20% /boot
tmpfs                 4.0G     0  4.0G   0% /dev/shm
[root@server ~]#
Did you just create the RAIDs? It will take something that size a few hours to complete the initial sync. Try 'cat /proc/mdstat' to see when the sync completes. Until then, expect head contention with anything else that might be trying to use the drives.
Here is what I got there. md1 is swap, I believe:
[root@server ~]# uptime
 17:20:54 up 7 days, 3:26, 2 users, load average: 3.41, 2.93, 2.89
[root@server ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      8385856 blocks [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[0]
      1456645568 blocks [2/2] [UU]
That's working normally with the sync completed. Mirrors should run at close to the speed of a single drive. Maybe you are just expecting too much from SATA or you have something like an updatedb running. An 'hdparm -tT' against the partitions and md devices should give you an idea of what the drives can do. And top might show what's causing the load (but maybe not if it is mostly waiting on disk seeks).
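For instance (device names taken from your df output above; run against both raw drives and the mirror):

# hdparm -tT /dev/sda
# hdparm -tT /dev/sdb
# hdparm -tT /dev/md2

-T measures cached reads and -t buffered reads from the platter. If one raw drive is far slower than the other, suspect that drive or its cable/port; if both are fine but md2 is slow, look at the array.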
That's working normally with the sync completed. Mirrors should run at close to the speed of a single drive. Maybe you are just expecting too much from SATA or you have something like an updatedb running. An 'hdparm -tT' against the partitions and md devices should give you an idea of what the drives can do. And top might show what's causing the load (but maybe not if it is mostly waiting on disk seeks).
There should currently be virtually NO disk I/O on this box right now. It's a bare install and I have not installed any apps yet. Is there an easy way to tell what's using all the I/O? The drive activity light never goes off. I ran torch on my firewall router and there is no network activity going to it.
Matt
Matt wrote:
That's working normally with the sync completed. Mirrors should run at close to the speed of a single drive. Maybe you are just expecting too much from SATA or you have something like an updatedb running. An 'hdparm -tT' against the partitions and md devices should give you an idea of what the drives can do. And top might show what's causing the load (but maybe not if it is mostly waiting on disk seeks).
There should currently be virtually NO disk I/O on this box right now. It's a bare install and I have not installed any apps yet. Is there an easy way to tell what's using all the I/O? The drive activity light never goes off. I ran torch on my firewall router and there is no network activity going to it.
top, atop, lsof
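Rough invocations, for what it's worth (atop isn't in the base CentOS 5 repos as far as I know; it's in the third-party ones like RPMforge):

# top              # check the load and the %wa (iowait) figure
# atop 2           # DSK lines show per-disk busy%, 2-second refresh
# lsof +D /var     # list open files under a tree (can be slow on big trees)

With nothing installed yet, whatever shows up near the top of those lists is your culprit.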
Matt wrote:
That's working normally with the sync completed. Mirrors should run at close to the speed of a single drive. Maybe you are just expecting too much from SATA or you have something like an updatedb running. An 'hdparm -tT' against the partitions and md devices should give you an idea of what the drives can do. And top might show what's causing the load (but maybe not if it is mostly waiting on disk seeks).
There should currently be virtually NO disk I/O on this box right now. It's a bare install and I have not installed any apps yet. Is there an easy way to tell what's using all the I/O? The drive activity light never goes off. I ran torch on my firewall router and there is no network activity going to it.
Have you turned off the mlocate cron job?
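If not, that's an easy one to rule out. On CentOS 5 the updatedb run should live in /etc/cron.daily/mlocate.cron (path from memory, check your box):

# chmod -x /etc/cron.daily/mlocate.cron

That stops run-parts from executing it; chmod +x puts it back when you're done testing.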
On 1/5/2010 5:44 PM, Matt wrote:
I just installed the CentOS 5.4 64-bit release on a 1.9GHz CPU with 8GB of RAM. It has two Western Digital 1.5TB SATA2 drives in RAID1.
[root@server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              1.4T  1.4G  1.3T   1% /
/dev/md0               99M   19M   76M  20% /boot
tmpfs                 4.0G     0  4.0G   0% /dev/shm
[root@server ~]#
It's barebones right now, nothing really running. I intend to move our current email server over to it eventually. The thing is slow as mud due to disk I/O, though, and I have no idea what's going on.
Here is a bit of iostat -x output.
[root@server ~]# iostat -x
Linux 2.6.18-164.9.1.el5 (server.x.us)   01/05/2010

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.13    0.10    0.03   38.23    0.00   61.51

Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz    await   svctm  %util
sda        0.18    0.33  0.04  0.20   27.77    4.22   132.30     3.70 15239.91 2560.21  61.91
sda1       0.00    0.00  0.00  0.00    0.34    0.00   654.72     0.01 12358.84 3704.24   0.19
sda2       0.18    0.00  0.04  0.00   27.30    0.00   742.93     0.30  8065.37 1930.35   7.09
sda3       0.00    0.33  0.01  0.20    0.14    4.22    21.29     3.40 16537.41 3008.03  61.52
sda3 is running "hot" at 62% utilization.
sdb        0.19    0.33  0.06  0.20   28.39    4.22   126.29     2.51  9676.49  862.49  22.27
sdb1       0.00    0.00  0.00  0.00    0.34    0.00   606.29     0.00  2202.03  643.13   0.04
sdb2       0.18    0.00  0.04  0.00   27.30    0.00   724.25     0.10  2579.45  745.76   2.81
sdb3       0.01    0.33  0.02  0.20    0.75    4.22    22.61     2.41 10913.16  988.40  21.74
sdb3 is running at 22% utilization.
Does anyone have an idea what I have wrong here? This is my first software RAID install. I've built a number of CentOS servers without RAID and they have all worked fine.
As mentioned by others, atop & lsof are good for figuring out what is touching the disk. Something is writing to the disk (wsec/s) fairly evenly while also reading from the 2nd partition.
You could try hdparm (hdparm -tT /dev/sda) and see what speeds you get for the individual drives, then compare that to the speed you get off of, say, /dev/md2. Or if you want a stronger load that lasts for a minute or two, try using dd and dumping to /dev/null (dd if=/dev/sda of=/dev/null bs=1M count=1000). The individual drive speeds should match up pretty closely with the software RAID speed.
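To spell out the dd comparison (the raw-drive read is the command above; the read off the mirror is my addition for contrast, and count/bs are just a convenient ~1GB):

# dd if=/dev/sda of=/dev/null bs=1M count=1000
# dd if=/dev/md2 of=/dev/null bs=1M count=1000

dd prints an MB/s figure when it finishes; on a healthy RAID1 the md2 number should land close to the single-drive number.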
A pair of 1.5TB 7200RPM SATA drives should be able to handle up to a few hundred thousand to a million or so messages per month (Postfix / Dovecot), as long as that's at least a dual-core 1.9GHz CPU. YMMV, of course, and watching server performance over the long term will be your best bet.