Is there any way to enable NCQ on CentOS 4.6? I guess another question is: on a somewhat high-activity mail server with a single SATA2 drive, will it do any good?
Matt
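(For what it's worth, one rough way to check is to look at the queue depth the kernel reports for the drive in sysfs; a depth of 1 means commands are issued to the drive one at a time, i.e. no NCQ in effect. The device name below is only an assumption, and the 2.6.9-based CentOS 4 kernels generally predate libata NCQ support, so this may well just report 1:)

    # Sketch only; assumes the disk shows up through libata as /dev/sda.
    # A queue_depth of 1 means no NCQ; NCQ-capable setups usually report up to 31.
    cat /sys/block/sda/device/queue_depth

    # On kernels that support it, the depth can be raised (as root):
    echo 31 > /sys/block/sda/device/queue_depth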
on 7-24-2008 6:51 AM Matt spake the following:
Is there any way to enable NCQ on CentOS 4.6? I guess another question is: on a somewhat high-activity mail server with a single SATA2 drive, will it do any good?
Matt
I doubt it will make a BIG difference, but you might be looking for trouble running a highly active server of any kind on a single spindle.
Is there any way to enable NCQ on CentOS 4.6? I guess another question is: on a somewhat high-activity mail server with a single SATA2 drive, will it do any good?
Matt
I doubt it will make a BIG difference, but you might be looking for trouble running a highly active server of any kind on a single spindle.
Some decisions made in the past are hard to undo down the road. I would like to move to 64-bit CentOS 5.x with RAID, but moving everything to a new box is a very severe pain. I have to move it to new IP space shortly and I hate the idea of that.
Matt
on 7-24-2008 9:35 AM Matt spake the following:
Is there any way to enable NCQ on CentOS 4.6? I guess another question is: on a somewhat high-activity mail server with a single SATA2 drive, will it do any good?
Matt
I doubt it will make a BIG difference, but you might be looking for trouble running a highly active server of any kind on a single spindle.
Some decisions made in the past are hard to undo down the road. I would like to move to 64-bit CentOS 5.x with RAID, but moving everything to a new box is a very severe pain. I have to move it to new IP space shortly and I hate the idea of that.
Matt
You could easily move to a software RAID 1 with very little downtime and a few reboots (rough sketch below). It might also help slightly with read latencies, but it wouldn't help with writes.
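The usual outline is to build a degraded RAID 1 on the new disk, copy everything across, then add the original disk as the second half. A rough sketch, assuming the current disk is /dev/sda and the new one is /dev/sdb (device names and partition layout are assumptions here, and the fstab/grub/initrd details are glossed over, so rehearse on scratch hardware first):

    # Partition /dev/sdb to match /dev/sda, then create a degraded RAID 1
    # with only the new disk in it ("missing" holds the second slot open)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

    # Put a filesystem on the array and copy the running system over
    mkfs.ext3 /dev/md0
    mount /dev/md0 /mnt
    rsync -avx / /mnt/

    # After fixing /etc/fstab and grub on the array and rebooting onto it,
    # add the old disk's partition as the second mirror half and let it resync
    mdadm --add /dev/md0 /dev/sda1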
I was monitoring it with sar and there was a very high level of iowait, meaning the drive was having a hard time.
How bad does the below look? Dual-core AM2 5600+ with 4 GB DDR2 and a single SATA2 drive.
Matt
             CPU  %user  %nice  %system  %iowait  %idle
10:10:01 AM  all  20.54   0.00     4.09    15.85  59.52
10:20:01 AM  all  19.96   0.00     4.17    16.81  59.06
10:30:01 AM  all  15.94   0.00     2.48    14.78  66.80
10:40:01 AM  all  14.15   0.00     3.62    13.13  69.11
10:50:01 AM  all  13.77   0.00     3.53    13.52  69.18
11:00:02 AM  all  13.71   0.00     2.24    12.33  71.73
11:10:01 AM  all  14.42   0.00     3.51    13.19  68.88
11:20:01 AM  all  12.32   0.00     3.37    11.74  72.57
11:30:01 AM  all  13.93   0.00     2.54    16.23  67.30
11:40:01 AM  all  14.59   0.00     3.68    11.27  70.46
11:50:01 AM  all  12.74   0.00     3.27    11.18  72.81
12:00:01 PM  all  10.75   0.00     1.71     9.68  77.86
12:10:01 PM  all  13.32   0.00     3.26    12.21  71.21
12:20:01 PM  all  11.91   0.00     3.86    12.84  71.40
12:30:01 PM  all  13.89   0.00     2.33    11.74  72.04
12:40:01 PM  all  10.78   0.00     3.13    10.23  75.86
12:50:01 PM  all  10.78   0.00     3.07     9.47  76.68
01:00:01 PM  all   9.44   0.00     1.72    10.24  78.60
01:10:01 PM  all  12.45   0.00     3.34    10.54  73.66
01:20:01 PM  all  12.14   0.00     3.46    11.44  72.97
01:30:01 PM  all  18.43   0.00     2.64    13.01  65.92
01:40:01 PM  all  14.97   0.00     3.54    11.34  70.14
01:50:02 PM  all  11.42   0.00     3.16     9.36  76.06
02:00:01 PM  all  10.74   0.00     1.81     8.49  78.96
02:10:01 PM  all  10.37   0.00     3.02     7.47  79.14
02:20:02 PM  all  11.69   0.00     3.49     9.86  74.96
02:30:01 PM  all   8.71   0.00     1.62     7.40  82.27
02:40:01 PM  all   9.92   0.00     3.21     8.60  78.26
02:50:02 PM  all   9.11   0.00     3.08     9.20  78.60
03:00:01 PM  all  12.38   0.00     2.12    11.28  74.22
03:10:01 PM  all  12.17   0.00     3.38     9.51  74.94
03:20:01 PM  all  10.21   0.00     3.17    10.73  75.90
03:30:01 PM  all  13.11   0.00     2.51    16.24  68.13
03:40:01 PM  all  13.11   0.00     3.22     8.39  75.28
03:50:01 PM  all   9.08   0.00     2.92     7.47  80.54
04:00:02 PM  all   7.51   0.00     1.43     5.94  85.12
04:10:02 PM  all  11.25   0.00     3.23    10.89  74.63
04:20:01 PM  all  12.52   0.00     3.62     9.57  74.29
04:30:01 PM  all   9.07   0.00     1.61     6.85  82.47
04:40:02 PM  all   8.86   0.00     2.80     7.74  80.61
04:50:01 PM  all   8.37   0.00     2.80     6.67  82.16
05:00:01 PM  all   9.10   0.00     1.38     6.17  83.34
05:10:01 PM  all  10.93   0.00     3.01     6.24  79.81
05:20:01 PM  all   8.60   0.00     3.48     7.52  80.40
05:30:01 PM  all   7.37   0.00     1.44     6.14  85.04
05:40:01 PM  all  10.30   0.00     3.03     6.79  79.88
05:50:01 PM  all   7.20   0.00     2.49     5.08  85.23
Average:     all  14.13   0.00     3.12    14.85  67.90
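(For reference, a CPU report like the above is sar's -u view; sysstat keeps the history in daily files named saDD, so it can be replayed after the fact. The file name below is only an example:)

    # Live view: CPU utilization every 60 seconds, 10 samples
    sar -u 60 10

    # Replay a collected daily file (saDD = day of the month; sa24 is an example)
    sar -u -f /var/log/sa/sa24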
Matt wrote:
How bad does the below look? Dual-core AM2 5600+ with 4 GB DDR2 and a single SATA2 drive.
[..]
05:40:01 PM  CPU  %user  %nice  %system  %iowait  %idle
05:50:01 PM  all   7.20   0.00     2.49     5.08  85.23
Average:     all  14.13   0.00     3.12    14.85  67.90
The average looks average. With stats only being collected once every 10 minutes it's hard to say; I collect my stats with sar and pipe them into Cacti every minute.
If you get, say, into the 25-30% iowait range, I'd be concerned.
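(For anyone wanting to do the same, the collection interval is just a cron entry. A sketch, assuming the stock sysstat layout; the path and default entry can differ between versions, e.g. /usr/lib64/sa on x86_64:)

    # /etc/cron.d/sysstat -- the default samples once every 10 minutes:
    */10 * * * * root /usr/lib/sa/sa1 1 1

    # change the schedule to sample once a minute instead:
    * * * * * root /usr/lib/sa/sa1 1 1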
What kind of mail system are you running? Is it simply an SMTP relay, or is mail being stored on the system with users accessing it? If users are accessing it, what type of mail storage (mbox, maildir, Cyrus, etc.)?
nate
on 7-24-2008 4:21 PM Matt spake the following:
I was monitoring it with sar and there was a very high level of iowait, meaning the drive was having a hard time.
How bad does the below look? Dual-core AM2 5600+ with 4 GB DDR2 and a single SATA2 drive.
Matt
<snip>
05:40:01 PM  CPU  %user  %nice  %system  %iowait  %idle
05:50:01 PM  all   7.20   0.00     2.49     5.08  85.23
Average:     all  14.13   0.00     3.12    14.85  67.90
User time and iowait are a little high. It's not going to stop the server, but it keeps it from hitting its full potential. How many users? I have a server with about 80 users and my averages are:
Average:     all   6.00   0.00     3.65     3.95  86.40
That is with two dual-core Xeons at 2.2 GHz, 4 GB of RAM, and a 3ware SATA RAID 5. It was on a single drive for a month or so after a recovery, and I had iowaits like yours. The system was sluggish. Users whined. It wasn't pretty. The single drive was the fastest way to recover while I replaced some drives that tanked. I have one spare, but not three.
Matt wrote:
I was monitoring it with sar and there was a very high level of iowait, meaning the drive was having a hard time.
How bad does the below look? Dual-core AM2 5600+ with 4 GB DDR2 and a single SATA2 drive.
Matt
10:10:01 AM  all  20.54   0.00     4.09    15.85  59.52
05:40:01 PM  CPU  %user  %nice  %system  %iowait  %idle
05:50:01 PM  all   7.20   0.00     2.49     5.08  85.23
...
Use `iostat -x 5` for 5-second intervals with a lot more detail on disk I/O operations. Ignore the first output sample; that's the average since the last boot. iostat is part of the sysstat package (yum install sysstat) if you don't already have it.
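The columns worth watching there are await (average milliseconds per request, queue time included) and %util (how busy the device is); something like:

    # Extended per-device stats every 5 seconds; ignore the first sample
    # (it is the average since boot).  Near-100% util on a single spindle
    # means the disk is saturated.
    iostat -x 5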
Scott Silva wrote:
on 7-24-2008 6:51 AM Matt spake the following:
Is there any way to enable NCQ on CentOS 4.6? I guess another question is: on a somewhat high-activity mail server with a single SATA2 drive, will it do any good?
Matt
I doubt it will make a BIG difference, but you might be looking for trouble running a highly active server of any kind on a single spindle.
I have a CommuniGate mail server that was running on a single WD 200 GB IDE drive (7200 RPM). CommuniGate uses separate mbox files for storage, which makes backup/restore very easy.
I was monitoring it with sar and there was a very high level of iowait, meaning the drive was having a hard time.
Context:
========
35 users running Outlook with the CommuniGate MAPI plugin
80 GB dataset (total)
Public folder with about 50 GB of stuff
We replaced the server with a Tyan Transport TA-26. We put in an Adaptec 3405 (4-port unified SATA/SAS adapter) and 4 Seagate 15K SAS 73 GB drives (RAID 10). No more iowaits, even though there are now 50 users.
I don't know about your context, but running on a single drive is asking for data loss anyway.
Hope this helped!
Guy Boisvert, ing. IngTegration inc.
Is there any way to enable NCQ on CentOS 4.6? I guess another question is: on a somewhat high-activity mail server with a single SATA2 drive, will it do any good?
In most of the tests I've seen, the overhead of SATA NCQ exceeds any gains and it ends up slower :-/
Isn't there something like a disk I/O elevator in the Linux kernel that does about the same thing anyway? But how could it tell what physical position the head is at, and which is the closest next place to go?
Matt
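(That would be the I/O elevator, i.e. the scheduler. It sorts and merges requests by logical block address, betting that LBA order roughly tracks the physical layout, rather than knowing the true head position. On newer 2.6 kernels the elevator can be inspected and switched per device at runtime; the 2.6.9-based CentOS 4 kernels generally take it as a boot parameter instead. The device name below is an assumption:)

    # Show the elevator in use for sda (the bracketed name is active)
    cat /sys/block/sda/queue/scheduler

    # Switch at runtime where supported (as root):
    echo deadline > /sys/block/sda/queue/scheduler

    # CentOS 4 era: pick it at boot on the kernel command line instead:
    #   elevator=deadline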