On Dec 18, 2008, at 12:24 PM, "John" <jses27 at gmail.com> wrote:
>
>> -----Original Message-----
>> From: centos-bounces at centos.org
>> [mailto:centos-bounces at centos.org] On Behalf Of Matt
>> Sent: Thursday, December 18, 2008 10:37 AM
>> To: CentOS mailing list
>> Subject: Re: [CentOS] Adding RAM
>>
>>>> A bit of a bottleneck.
>>>>
>>>> Device:  rrqm/s  wrqm/s    r/s    w/s  rsec/s   wsec/s   rkB/s    wkB/s avgrq-sz avgqu-sz  await  svctm  %util
>>>> sda        0.38  176.63  70.32  78.26  813.46  2044.82  406.73  1022.41    19.24     0.40  19.17   4.04  60.07
>>>> sda1       0.00    0.00   0.00   0.00    0.00     0.00    0.00     0.00     5.28     0.00  23.61  19.33   0.00
>>>> sda2       0.38  176.63  70.32  78.26  813.45  2044.82  406.73  1022.41    19.24     0.40  19.17   4.04  60.07
>>>> dm-0       0.00    0.00  70.71 255.60  813.45  2044.82  406.73  1022.41     8.76     2.90   8.87   1.84  60.10
>>>> dm-1       0.00    0.00   0.00   0.00    0.00     0.00    0.00     0.00     8.00     0.00  64.20  11.38   0.00
>>>
>>> Try setting the scheduler to 'deadline' and see if the queue sizes
>>> shrink.
>>>
>>> No raid1? Besides adding redundancy, it can help with read
>>> performance. I would probably put the mail on a raid 10, though, if I
>>> had 4 disks to do so.
>>>
>>> -Ross
>>
>>> Like this:
>>>
>>> kernel /vmlinuz-2.6.9-78.0.8.ELsmp ro root=/dev/VolGroup00/LogVol00 elevator=deadline
>>>
>>> And this change will be system-wide.
>>
>> Doing this, among other things, helped quite a lot. My iostat -x %util
>> dropped from ~60% to ~22%. Of course, at the same time I upgraded from
>> a dual-core 2 GHz CPU to a quad-core 2.4 GHz one. I also swapped the
>> motherboard from one with an Intel ICH9R southbridge to one with an
>> Intel ICH7R, since I heard the CentOS 4.x kernel did not have drivers
>> for the ICH9R.
>
> That's why I asked in a previous post what kind of controller the board
> had, and said that RAM was not the suspect problem. IMO, if you had kept
> the dual-core proc and just switched to an ICH7 board, you would have
> saved money.
> Your utilization rate would probably have stayed the same, or gone no
> higher than 30%. Just to keep things in balance, you will probably want
> to try the cfq scheduler under a high user load, so everything gets its
> fair share of time. Some people will counter that it's about making the
> users happy: when access time for one user takes longer than another's,
> the complaints start coming in. I would like to know the proc
> utilization per core, or are you running it single-core?

CFQ accounting is only good for interactive processes, so unless this is
a terminal server, mucking with CFQ will have little effect. The deadline
and anticipatory schedulers work best for non-interactive servers such as
file, mail, and database servers. Noop works for special apps that like
to do their own I/O scheduling, mostly database systems with their own
I/O schedulers.

So it isn't one scheduler to rule them all, but one for each situation.

-Ross
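
[Editor's note: since picking a scheduler per situation is the point above, it may help to know the elevator can also be switched per device at runtime through sysfs, with no reboot needed, unlike the `elevator=deadline` kernel line quoted earlier, which sets it system-wide at boot. A rough sketch, assuming the disk is sda; the helper function is illustrative, not a standard tool:]

```shell
# Show the schedulers available for sda; the active one appears in
# brackets, e.g. "noop anticipatory deadline [cfq]".
[ -r /sys/block/sda/queue/scheduler ] && cat /sys/block/sda/queue/scheduler

# Illustrative helper: pull the active (bracketed) scheduler out of
# such a line, so it can be checked from a script.
active_sched() {
    printf '%s\n' "$1" | sed 's/.*\[\([^]]*\)\].*/\1/'
}

# Example of parsing a typical scheduler line:
active_sched 'noop anticipatory deadline [cfq]'   # prints: cfq

# Switch sda to deadline at runtime (needs root; takes effect
# immediately and applies only to this device):
# echo deadline > /sys/block/sda/queue/scheduler
```

This makes it easy to flip between deadline and cfq under real load and watch `iostat -x` for the effect on avgqu-sz, await, and %util before committing the choice to the boot line.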