I have a server running CentOS 4.7 32-bit. Will moving from 4Gig of RAM to 8Gig do any good? Since it's 32-bit I assume it will only be able to address the first 4Gig, no? When I installed CentOS I did not do anything special to enable using more than 4Gig, if that's required. Exim, SpamAssassin and clamd seem to be the biggest load on this machine. My biggest bottleneck is disk I/O anyway.
Wish I had installed CentOS 5.x 64-bit way back when, but some of the software I was using at the time listed support for it as beta.
Matt
2008/12/5 Matt lm7812@gmail.com:
I have a server running Centos 4.7 32bit. Will moving from 4Gig of RAM to 8Gig do any good? Since its 32bit I assume it will only be able to address the first 4Gig not?
As long as you are using an SMP kernel you can use up to 64GB of RAM (though each process can only address 4GB of this). So if you can find any trace of "SMP" in the uname (grep is your friend) then it should work fine.
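A quick way to check what is already in place (a rough sketch; the package names assume a stock CentOS 4 install):

uname -r | grep -i smp                # is the running kernel an SMP build?
grep pae /proc/cpuinfo                # does the CPU advertise PAE at all?
rpm -q kernel-smp kernel-hugemem      # which kernel flavours are installed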
On Fri, 2008-12-05 at 23:57 +0000, Michael Holmes wrote:
2008/12/5 Matt lm7812@gmail.com:
I have a server running Centos 4.7 32bit. Will moving from 4Gig of RAM to 8Gig do any good? Since its 32bit I assume it will only be able to address the first 4Gig not?
As long as you are using a SMP kernel you can use up to 64GB of RAM (though each proccess can only address 4GB of this). So if you can find any trace of "SMP" in the uname (grep is your friend) then it should work fine.
PAE, not SMP.
On 06.12.2008 at 01:02, Ignacio Vazquez-Abrams wrote:
On Fri, 2008-12-05 at 23:57 +0000, Michael Holmes wrote:
2008/12/5 Matt lm7812@gmail.com:
I have a server running Centos 4.7 32bit. Will moving from 4Gig of RAM to 8Gig do any good? Since its 32bit I assume it will only be able to address the first 4Gig not?
As long as you are using a SMP kernel you can use up to 64GB of RAM (though each proccess can only address 4GB of this). So if you can find any trace of "SMP" in the uname (grep is your friend) then it should work fine.
PAE, not SMP.
He should be able to replace the kernel via rpm -e and rpm -i
That said, I doubt he'll actually see a benefit. PAE is slow. If you want to see a real performance-gain, install 5.2 x86-64.
cheers, Rainer
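If a different kernel flavour is what's wanted, a gentler route than rpm -e / rpm -i is to install it alongside the running kernel and keep the old grub entry as a fallback (a sketch, assuming CentOS 4's kernel-hugemem package and a working yum setup):

yum install kernel-hugemem
# reboot and pick the new entry in grub; remove the old kernel only once the new one has proven itself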
2008/12/6 Rainer Duffner rainer@ultra-secure.de:
On 06.12.2008 at 01:02, Ignacio Vazquez-Abrams wrote:
PAE, not SMP.
Gah, my mistake, sorry.
He should be able to replace the kernel via rpm -e and rpm -i That said, I doubt he'll actually see a benefit. PAE is slow. If you want to see a real performance-gain, install 5.2 x86-64.
If this server is not mission-critical, and you therefore wouldn't mind the downtime, I'd recommend this too. Of course it could be harder if the server is colo'd, but for >4GB RAM x86-64 is the best way to go.
HTH, Mike
2008/12/5 Matt lm7812@gmail.com:
I have a server running Centos 4.7 32bit. Will moving from 4Gig of RAM to 8Gig do any good? Since its 32bit I assume it will only be able to address the first 4Gig not?
As long as you are using a SMP kernel you can use up to 64GB of RAM (though each proccess can only address 4GB of this). So if you can find any trace of "SMP" in the uname (grep is your friend) then it should work fine.
PAE, not SMP
He should be able to replace the kernel via rpm -e and rpm -i That said, I doubt he'll actually see a benefit. PAE is slow.
It isn't slow; it does have a performance penalty. A 32bit box with 8GB serving many clients (as an IMAP server, file server, etc...) will be faster than a 32bit box with 3GB of RAM.
On Dec 5, 2008, at 7:18 PM, Rainer Duffner rainer@ultra-secure.de wrote:
On 06.12.2008 at 01:02, Ignacio Vazquez-Abrams wrote:
On Fri, 2008-12-05 at 23:57 +0000, Michael Holmes wrote:
2008/12/5 Matt lm7812@gmail.com:
I have a server running Centos 4.7 32bit. Will moving from 4Gig of RAM to 8Gig do any good? Since its 32bit I assume it will only be able to address the first 4Gig not?
As long as you are using a SMP kernel you can use up to 64GB of RAM (though each proccess can only address 4GB of this). So if you can find any trace of "SMP" in the uname (grep is your friend) then it should work fine.
PAE, not SMP.
He should be able to replace the kernel via rpm -e and rpm -i
That said, I doubt he'll actually see a benefit. PAE is slow. If you want to see a real performance-gain, install 5.2 x86-64.
He'll see a benefit. PAE slowdown is humanly unnoticeable for short-term transactions; its difference is in the high nanoseconds or low microseconds.
All 5.0 kernels are PAE by default.
So are WinXP_SP2/Win2K3/Vista/Win2k8 and Mac OS X kernels.
Only go 64-bit if it's completely necessary.
-Ross
On Dec 6, 2008, at 10:44 AM, Ross Walker wrote:
On Dec 5, 2008, at 7:18 PM, Rainer Duffner rainer@ultra-secure.de wrote:
On 06.12.2008 at 01:02, Ignacio Vazquez-Abrams wrote:
On Fri, 2008-12-05 at 23:57 +0000, Michael Holmes wrote:
2008/12/5 Matt lm7812@gmail.com:
I have a server running Centos 4.7 32bit. Will moving from 4Gig of RAM to 8Gig do any good? Since its 32bit I assume it will only be able to address the first 4Gig not?
As long as you are using a SMP kernel you can use up to 64GB of RAM (though each proccess can only address 4GB of this). So if you can find any trace of "SMP" in the uname (grep is your friend) then it should work fine.
PAE, not SMP.
He should be able to replace the kernel via rpm -e and rpm -i
That said, I doubt he'll actually see a benefit. PAE is slow. If you want to see a real performance-gain, install 5.2 x86-64.
He'll see a benefit. PAE slowdown is humanly unnoticeable for short- term transactions, it's difference is in high nano seconds or low micro seconds.
All 5.0 kernels are PAE by default.
So are WinXP_SP2/Win2K3/Vista/Win2k8 and Mac OS X kernels.
Only go 64-bit if it's completely necessary.
I see that I had misremembered some changes in 5.
At least with regard to the upstream provider, on X86 the desktop version has a limit of 4GB of RAM, regardless of how much more memory you have. And they removed the hugemem version, so instead of up to 64GB of RAM on 32 bit, you can only get to 16GB for server versions.
I was combining the bunch, and thinking that you could now only get to 4GB on 32 bit.
Of course, I believe that XP SP2 is now limited to 4GB of RAM on 32 bit OS's, less when you factor in device space, due to problems with some device drivers.
Kevin Krieser wrote:
At least with regard to the upstream provider, on X86 the desktop version has a limit of 4GB of RAM, regardless of how much more memory you have. And they removed the hugemem version, so instead of up to 64GB of RAM on 32 bit, you can only get to 16GB for server versions.
With PAE you can access up to 64GB memory. It works much the same way as XMS memory in DOS, where "high mem" is mapped to a low mem window. It is just addresses that are mapped, there is no physical copying of memory that you had with EMS memory.
Generally, PAE would not make much sense on >16GB memory machines, as you still need the space in the 4GB range to address it. Personally I would use PAE on machines with up to 8-12GB memory (assuming x86_64 wasn't an option). With more than 16GB I would recommend against it, as you get a lot of remapping and/or limited space in the 4GB range.
YMMV depending on specific workload of course.
Morten Torstensen wrote:
With PAE you can access up to 64GB memory. It works much the same way as XMS memory in DOS, where "high mem" is mapped to a low mem window. It is just addresses that are mapped, there is no physical copying of memory that you had with EMS memory.
That's not at all an accurate description (other than the 64GB part).
ALL virtual memory systems use page tables to map virtual addresses to physical addresses. x86 systems (and many others) use a 2-level page table, where the high bits of a virtual address are used to look up a page table entry in the page directory; this in turn is used with the middle address bits to look up the actual physical page address. In 32-bit x86, this scheme allows addressing 4GB of physical memory for as many 4GB virtual address spaces as you care to maintain tables for.

The page directory and each page table occupy a single 4K page of memory, which holds 1024 entries of 32 bits each. A process that uses a full 4GB of virtual address space would require the one 4K page directory and 1024 4K page tables (although on Linux the top 1GB of the 4GB address space is kernel space, which is shared by all processes). In practice, most page directories and page tables are only partially populated, as most processes use only a small part of their address space.
PAE uses a modified page table where each page table instead has 512 x 64-bit entries, which provide the larger physical address bits, and it adds a third-level page directory, so each page-table walk has to go through three levels rather than two.
(Side note: this assumes 4K pages. x86 also supports 4M pages, which reduce the lookups by one level; however, I don't think this is used much.)
see http://en.wikipedia.org/wiki/Physical_Address_Extension#Page_table_structure... for a pretty good summary of this.
John R Pierce wrote:
thats not at all an accurate description (other than the 64GB part)
It was a simplified description of how PAE works. The point was that PAE works at the page table level and just remaps memory pages to fit within the virtual 32-bit/4GB address space of a 32-bit process. There are still constraints in PAE on how much memory one single process can use, and adding memory to a machine where you use PAE does not automagically solve all your memory bottlenecks.
On Dec 7, 2008, at 1:37 PM, Morten Torstensen wrote:
Kevin Krieser wrote:
At least with regard to the upstream provider, on X86 the desktop version has a limit of 4GB of RAM, regardless of how much more memory you have. And they removed the hugemem version, so instead of up to 64GB of RAM on 32 bit, you can only get to 16GB for server versions.
With PAE you can access up to 64GB memory. It works much the same way as XMS memory in DOS, where "high mem" is mapped to a low mem window. It is just addresses that are mapped, there is no physical copying of memory that you had with EMS memory.
Generally, PAE would not make much sense on >16GB memory machines, as you still need the space in the 4GB range to address it. Personally I would use PAE on machines with up to 8-12GB memory (assuming x86_64 wasn't an option). With more than 16GB I would recommend against it, as you get a lot of remapping and/or limited space in the 4GB range.
YMMV depending on specific workload of course.
I'm just going by what the redhat site says for EL 5. On this version, they don't provide the hugemem version for server anymore, on the assumption that if you really need to use more than 16GB of RAM you should be running 64 bits. I assume that this also helps with reducing sizes of page tables, and testing.
on 12-5-2008 4:18 PM Rainer Duffner spake the following:
On 06.12.2008 at 01:02, Ignacio Vazquez-Abrams wrote:
On Fri, 2008-12-05 at 23:57 +0000, Michael Holmes wrote:
2008/12/5 Matt lm7812@gmail.com:
I have a server running Centos 4.7 32bit. Will moving from 4Gig of RAM to 8Gig do any good? Since its 32bit I assume it will only be able to address the first 4Gig not?
As long as you are using a SMP kernel you can use up to 64GB of RAM (though each proccess can only address 4GB of this). So if you can find any trace of "SMP" in the uname (grep is your friend) then it should work fine.
PAE, not SMP.
He should be able to replace the kernel via rpm -e and rpm -i
That said, I doubt he'll actually see a benefit. PAE is slow. If you want to see a real performance-gain, install 5.2 x86-64.
You will only see a gain if the machine is swapping; otherwise more RAM through PAE could even be slower. Maybe up to 5% slower, depending on the machine's BIOS and memory bus speed.
He should be able to replace the kernel via rpm -e and rpm -i
That said, I doubt he'll actually see a benefit. PAE is slow. If you want to see a real performance-gain, install 5.2 x86-64.
You will only see a gain if the machine is swapping, otherwise more ram through PAE could even be slower. Maybe up to 5% slower depending on the machines bios and memory bus speed.
I don't think I am swapping very much.
Linux server.XXXX.net 2.6.9-78.0.8.ELsmp #1 SMP Wed Nov 19 20:05:04 EST 2008 i686 i686 i386 GNU/Linux
top - 09:11:33 up 16 days, 14:52, 2 users, load average: 10.19, 18.29, 23.17
Tasks: 158 total, 2 running, 152 sleeping, 0 stopped, 4 zombie
Cpu0 :  5.3% us,  4.0% sy,  0.0% ni, 81.5% id,  9.3% wa,  0.0% hi,  0.0% si
Cpu1 : 11.3% us,  8.3% sy,  0.0% ni,  1.7% id, 78.1% wa,  0.7% hi,  0.0% si
Mem:   8309188k total,  4761352k used,  3547836k free,   451464k buffers
Swap:  2031608k total,      192k used,  2031416k free,  1564316k cached
Matt
Matt wrote:
Cpu0 : 5.3% us, 4.0% sy, 0.0% ni, 81.5% id, 9.3% wa, 0.0% hi, 0.0% si Cpu1 : 11.3% us, 8.3% sy, 0.0% ni, 1.7% id, 78.1% wa, 0.7% hi, 0.0% si Mem: 8309188k total, 4761352k used, 3547836k free, 451464k buffers Swap: 2031608k total, 192k used, 2031416k free, 1564316k cached
If this is typical, you don't need more RAM... cpu1 is waiting a lot, I/O bottleneck?
Cpu0 : 5.3% us, 4.0% sy, 0.0% ni, 81.5% id, 9.3% wa, 0.0% hi, 0.0% si Cpu1 : 11.3% us, 8.3% sy, 0.0% ni, 1.7% id, 78.1% wa, 0.7% hi, 0.0% si Mem: 8309188k total, 4761352k used, 3547836k free, 451464k buffers Swap: 2031608k total, 192k used, 2031416k free, 1564316k cached
If this is typical, you don't need more RAM... cpu1 is waiting a lot, I/O bottleneck?
A bit of a bottleneck.
Device:   rrqm/s  wrqm/s    r/s     w/s   rsec/s    wsec/s   rkB/s     wkB/s  avgrq-sz  avgqu-sz   await   svctm   %util
sda         0.38  176.63  70.32   78.26   813.46   2044.82  406.73   1022.41     19.24      0.40   19.17    4.04   60.07
sda1        0.00    0.00   0.00    0.00     0.00      0.00    0.00      0.00      5.28      0.00   23.61   19.33    0.00
sda2        0.38  176.63  70.32   78.26   813.45   2044.82  406.73   1022.41     19.24      0.40   19.17    4.04   60.07
dm-0        0.00    0.00  70.71  255.60   813.45   2044.82  406.73   1022.41      8.76      2.90    8.87    1.84   60.10
dm-1        0.00    0.00   0.00    0.00     0.00      0.00    0.00      0.00      8.00      0.00   64.20   11.38    0.00
Matt
On Dec 8, 2008, at 4:55 PM, Matt lm7812@gmail.com wrote:
Cpu0 : 5.3% us, 4.0% sy, 0.0% ni, 81.5% id, 9.3% wa, 0.0% hi, 0.0% si Cpu1 : 11.3% us, 8.3% sy, 0.0% ni, 1.7% id, 78.1% wa, 0.7% hi, 0.0% si Mem: 8309188k total, 4761352k used, 3547836k free, 451464k buffers Swap: 2031608k total, 192k used, 2031416k free, 1564316k cached
If this is typical, you don't need more RAM... cpu1 is waiting a lot, I/O bottleneck?
A bit of bottle neck.
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.38 176.63 70.32 78.26 813.46 2044.82 406.73 1022.41 19.24 0.40 19.17 4.04 60.07 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 5.28 0.00 23.61 19.33 0.00 sda2 0.38 176.63 70.32 78.26 813.45 2044.82 406.73 1022.41 19.24 0.40 19.17 4.04 60.07 dm-0 0.00 0.00 70.71 255.60 813.45 2044.82 406.73 1022.41 8.76 2.90 8.87 1.84 60.10 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 64.20 11.38 0.00
Try setting the scheduler to 'deadline' and see if the queue sizes shrink.
No raid1? Besides adding redundancy, it can help with read performance. I would probably put the mail on a raid 10 though if I had 4 disks to do so.
-Ross
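One way to tell whether the change helps is to watch avgqu-sz and await before and after (a sketch; this assumes the sysstat package is installed and the disk is sda):

iostat -x sda 5      # refresh every 5 seconds; watch avgqu-sz, await and %util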
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.38 176.63 70.32 78.26 813.46 2044.82 406.73 1022.41 19.24 0.40 19.17 4.04 60.07 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 5.28 0.00 23.61 19.33 0.00 sda2 0.38 176.63 70.32 78.26 813.45 2044.82 406.73 1022.41 19.24 0.40 19.17 4.04 60.07 dm-0 0.00 0.00 70.71 255.60 813.45 2044.82 406.73 1022.41 8.76 2.90 8.87 1.84 60.10 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 64.20 11.38 0.00
Try setting the scheduler to 'deadline' and see if the queue sizes shrink.
I have googled this and am having a bit of trouble figuring out how to change it under CentOS 4.
http://www.wlug.org.nz/LinuxIoScheduler
Does not seem to work on CentOS 4.
No raid1? Besides adding redundancy, it can help with read performance. I would probably put the mail on a raid 10 though if I had 4 disks to do so.
I plan on moving to faster disks and RAID 1 down the road. Just has not happened yet.
Matt
On Dec 8, 2008, at 6:51 PM, Matt lm7812@gmail.com wrote:
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.38 176.63 70.32 78.26 813.46 2044.82 406.73 1022.41 19.24 0.40 19.17 4.04 60.07 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 5.28 0.00 23.61 19.33 0.00 sda2 0.38 176.63 70.32 78.26 813.45 2044.82 406.73 1022.41 19.24 0.40 19.17 4.04 60.07 dm-0 0.00 0.00 70.71 255.60 813.45 2044.82 406.73 1022.41 8.76 2.90 8.87 1.84 60.10 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 64.20 11.38 0.00
Try setting the scheduler to 'deadline' and see if the queue sizes shrink.
I have googled this and having a bit of trouble figuring how to change it under CentOS 4.
http://www.wlug.org.nz/LinuxIoScheduler
Does not seem to work on CentOS 4.
No raid1? Besides adding redundancy, it can help with read performance. I would probably put the mail on a raid 10 though if I had 4 disks to do so.
I plan on moving to faster disks and RAID 1 down the road. Just has not happened yet.
Setting the scheduler is global in C4; it can be set as a kernel option with scheduler=deadline in grub.
-Ross
On Tuesday 09 December 2008, Ross Walker wrote:
On Dec 8, 2008, at 6:51 PM, Matt lm7812@gmail.com wrote:
...
Try setting the scheduler to 'deadline' and see if the queue sizes shrink.
I have googled this and having a bit of trouble figuring how to change it under CentOS 4.
http://www.wlug.org.nz/LinuxIoScheduler
Does not seem to work on CentOS 4.
No raid1? Besides adding redundancy, it can help with read performance. I would probably put the mail on a raid 10 though if I had 4 disks to do so.
I plan on moving to faster disks and RAID 1 down the road. Just has not happened yet.
Setting scheduler is global in C4 it can be set as a kernel option with a scheduler=deadline in grub.
Is that an alias for "elevator=deadline" (which I know works)?
/Peter
On Dec 9, 2008, at 3:30 AM, Peter Kjellstrom cap@nsc.liu.se wrote:
On Tuesday 09 December 2008, Ross Walker wrote:
On Dec 8, 2008, at 6:51 PM, Matt lm7812@gmail.com wrote:
...
Try setting the scheduler to 'deadline' and see if the queue sizes shrink.
I have googled this and having a bit of trouble figuring how to change it under CentOS 4.
http://www.wlug.org.nz/LinuxIoScheduler
Does not seem to work on CentOS 4.
No raid1? Besides adding redundancy, it can help with read performance. I would probably put the mail on a raid 10 though if I had 4 disks to do so.
I plan on moving to faster disks and RAID 1 down the road. Just has not happened yet.
Setting scheduler is global in C4 it can be set as a kernel option with a scheduler=deadline in grub.
Is that an alias for "elevator=deadline" (which I know works)?
No, that was me forgetting the option name.
Thanks Peter, it's elevator=, not scheduler=.
-Ross
Setting scheduler is global in C4 it can be set as a kernel option with a scheduler=deadline in grub.
Is that an alias for "elevator=deadline" (which I know works)?
No that was me forgetting the option name.
Thanks Peter, it's elevator= not scheduler=
Does this mean I need to add "elevator=deadline" to grub.conf? Is there a way to make the change without rebooting?
Matt
On Dec 9, 2008, at 10:59 AM, Matt lm7812@gmail.com wrote:
Setting scheduler is global in C4 it can be set as a kernel option with a scheduler=deadline in grub.
Is that an alias for "elevator=deadline" (which I know works)?
No that was me forgetting the option name.
Thanks Peter, it's elevator= not scheduler=
Does this mean I need to add "elevator=deadline" to grub.conf? Is there a way to make the change without rebooting?
I'm afraid not, so possibly a late night or weekend event with the option for a mid day reboot to recover if things turn out badly.
Virtualize things and you can minimize downtime with snapshots.
-Ross
On Tue, 2008-12-09 at 12:07 -0500, Ross Walker wrote:
On Dec 9, 2008, at 10:59 AM, Matt lm7812@gmail.com wrote:
Setting scheduler is global in C4 it can be set as a kernel option with a scheduler=deadline in grub.
Is that an alias for "elevator=deadline" (which I know works)?
No that was me forgetting the option name.
Thanks Peter, it's elevator= not scheduler=
Does this mean I need to add "elevator=deadline" to grub.conf? Is there a way to make the change without rebooting?
I'm afraid not, so possibly a late night or weekend event with the option for a mid day reboot to recover if things turn out badly.
Virtualize things and you can minimize downtime with snapshots.
[root@blah ~]# cat /sys/block/hda/queue/scheduler
noop anticipatory deadline [cfq]

The scheduler is CFQ and can be changed on the fly for whatever block device you want. It does not matter what load the system is under. Changing to deadline on his specific block device will improve the I/O of the system unless it is really being hammered. Keep in mind these changes will be gone on reboot. You can put these in rc.local to activate at boot time. Substitute "hda" for your device. This does not work on iSCSI or SAN mounts.

[root@blah ~]# echo 'deadline' > /sys/block/hda/queue/scheduler
[root@blah ~]# cat /sys/block/hda/queue/scheduler
noop anticipatory [deadline] cfq    -- Changed to deadline.
JohnStanley
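To keep that setting across reboots on a system where the /sys knob exists, the same line can go at the end of /etc/rc.d/rc.local (a sketch, assuming the disk really is hda):

echo deadline > /sys/block/hda/queue/scheduler    # re-apply the chosen scheduler at boot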
On Wednesday 10 December 2008, John wrote:
On Tue, 2008-12-09 at 12:07 -0500, Ross Walker wrote:
On Dec 9, 2008, at 10:59 AM, Matt lm7812@gmail.com wrote:
Setting scheduler is global in C4 it can be set as a kernel option with a scheduler=deadline in grub.
Is that an alias for "elevator=deadline" (which I know works)?
No that was me forgetting the option name.
Thanks Peter, it's elevator= not scheduler=
Does this mean I need to add "elevator=deadline" to grub.conf? Is there a way to make the change without rebooting?
I'm afraid not, so possibly a late night or weekend event with the option for a mid day reboot to recover if things turn out badly.
Virtualize things and you can minimize downtime with snapshots.
[root@blah ~]# cat /sys/block/hda/queue/scheduler noop anticipatory deadline [cfq] The Schedular is CFQ and can be changed on the fly to whatever Block Device you want it.
...
[root@blah ~]# echo 'deadline' > /sys/block/hda/queue/scheduler [root@blah ~]# cat /sys/block/hda/queue/scheduler noop anticipatory [deadline] cfq -- Changed to Deadline.
...this is correct on CentOS-5. On CentOS-4 you need to do it via grub and a reboot.
/Peter
On Wed, 2008-12-10 at 17:02 +0100, Peter Kjellstrom wrote:
On Wednesday 10 December 2008, John wrote:
On Tue, 2008-12-09 at 12:07 -0500, Ross Walker wrote:
On Dec 9, 2008, at 10:59 AM, Matt lm7812@gmail.com wrote:
> Setting scheduler is global in C4 it can be set as a kernel option > with a scheduler=deadline in grub.
Is that an alias for "elevator=deadline" (which I know works)?
No that was me forgetting the option name.
Thanks Peter, it's elevator= not scheduler=
Does this mean I need to add "elevator=deadline" to grub.conf? Is there a way to make the change without rebooting?
I'm afraid not, so possibly a late night or weekend event with the option for a mid day reboot to recover if things turn out badly.
Virtualize things and you can minimize downtime with snapshots.
[root@blah ~]# cat /sys/block/hda/queue/scheduler noop anticipatory deadline [cfq] The Schedular is CFQ and can be changed on the fly to whatever Block Device you want it.
...
[root@blah ~]# echo 'deadline' > /sys/block/hda/queue/scheduler [root@blah ~]# cat /sys/block/hda/queue/scheduler noop anticipatory [deadline] cfq -- Changed to Deadline.
...this is correct on CentOS-5. On CentOS-4 you need to do it via grub and a reboot.
Errr... it can be done on any 2.6 kernel system. See the Kbase Knowledge Section at kbase.redhat.com. If he chooses to do it in grub, the correct way is elevator=deadline. I know this to be fact because I myself have a 4.x system with high I/O with Samba and use rc.local to change it upon boot. My personal opinion of this OP's thread is that RAM is not going to help in any way. What chipset, north and south bridge does the server have? One thing I've never understood is why admins want to throw RAM at a problem that does not exist. It seems to me the solution is always to throw some RAM in it??? That is from experience and my opinion. There is more to it than just that.
JohnStanley
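For reference, the chipset and disk controller can usually be read straight from lspci output (a rough sketch; the grep patterns are just a starting point):

lspci | grep -i bridge              # north/south bridge
lspci | grep -iE 'ide|sata|raid'    # disk controller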
On Wed, 2008-12-10 at 17:02 +0100, Peter Kjellstrom wrote:
On Wednesday 10 December 2008, John wrote:
On Tue, 2008-12-09 at 12:07 -0500, Ross Walker wrote:
On Dec 9, 2008, at 10:59 AM, Matt lm7812@gmail.com wrote:
> Setting scheduler is global in C4 it can be set as a kernel option > with a scheduler=deadline in grub.
Is that an alias for "elevator=deadline" (which I know works)?
No that was me forgetting the option name.
Thanks Peter, it's elevator= not scheduler=
Does this mean I need to add "elevator=deadline" to grub.conf? Is there a way to make the change without rebooting?
I'm afraid not, so possibly a late night or weekend event with the option for a mid day reboot to recover if things turn out badly.
Virtualize things and you can minimize downtime with snapshots.
[root@blah ~]# cat /sys/block/hda/queue/scheduler noop anticipatory deadline [cfq] The Schedular is CFQ and can be changed on the fly to whatever Block Device you want it.
...
[root@blah ~]# echo 'deadline' > /sys/block/hda/queue/scheduler [root@blah ~]# cat /sys/block/hda/queue/scheduler noop anticipatory [deadline] cfq -- Changed to Deadline.
...this is correct on CentOS-5. On CentOS-4 you need to do it via grub and a reboot.
Yes, a follow-up to my previous mail. That would be correct for those two. My opinions do not change; however, I just saw the mail where it did not work because he has v4.x and needs to use grub.conf.
Yes follow up to previous mail. That would be correct for those two. My opinions do not change however, I just saw the mail where it did not work because he has V4.x and needs to use grub.conf.
In grub.conf I have this:
---
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.9-78.0.8.ELsmp)
        root (hd0,0)
        kernel /vmlinuz-2.6.9-78.0.8.ELsmp ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.9-78.0.8.ELsmp.img
title CentOS (2.6.9-78.0.8.EL)
        root (hd0,0)
        kernel /vmlinuz-2.6.9-78.0.8.EL ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.9-78.0.8.EL.img
---
I just add elevator=deadline above default or something?
Matt
On Wed, 2008-12-10 at 10:56 -0600, Matt wrote:
Yes follow up to previous mail. That would be correct for those two. My opinions do not change however, I just saw the mail where it did not work because he has V4.x and needs to use grub.conf.
In grub.conf I have this:
#boot=/dev/sda default=0 timeout=5 splashimage=(hd0,0)/grub/splash.xpm.gz hiddenmenu title CentOS (2.6.9-78.0.8.ELsmp) root (hd0,0) kernel /vmlinuz-2.6.9-78.0.8.ELsmp ro root=/dev/VolGroup00/LogVol00 initrd /initrd-2.6.9-78.0.8.ELsmp.img title CentOS (2.6.9-78.0.8.EL) root (hd0,0) kernel /vmlinuz-2.6.9-78.0.8.EL ro root=/dev/VolGroup00/LogVol00 initrd /initrd-2.6.9-78.0.8.EL.img
I just add elevator=deadline above default or something?
Like this:
kernel /vmlinuz-2.6.9-78.0.8.ELsmp ro root=/dev/VolGroup00/LogVol00 elevater=deadline
And this change will be system-wide.
On Wed, Dec 10, 2008 at 9:25 AM, John jses27@gmail.com wrote:
Like this:
kernel /vmlinuz-2.6.9-78.0.8.ELsmp ro root=/dev/VolGroup00/LogVol00 elevater=deadline
The above should be all on one line.
mhr
MHR wrote:
On Wed, Dec 10, 2008 at 9:25 AM, John jses27@gmail.com wrote:
Like this:
kernel /vmlinuz-2.6.9-78.0.8.ELsmp ro root=/dev/VolGroup00/LogVol00 elevater=deadline
The above should be all on one line.
And my dictionary tells me that it should be elevator.
Ralph
On Thu, 2008-12-11 at 10:40 +0100, Ralph Angenendt wrote:
MHR wrote:
On Wed, Dec 10, 2008 at 9:25 AM, John jses27@gmail.com wrote:
Like this:
kernel /vmlinuz-2.6.9-78.0.8.ELsmp ro root=/dev/VolGroup00/LogVol00 elevater=deadline
The above should be all on one line.
And my dictionary tells me that it should be elevator.
Mine gives me elevate instead of elevator! F7 in Evolution. Maybe I need to train it more. "or" is right, though.
The Schedular is CFQ and can be changed on the fly to whatever Block Device you want it. It does not matter what load the system is under. Changing to Deadline on his specific block device will improve the I/O of the system unless it really being hammered. Keep in mind these changes will be gone on Reboot. You can put these in rc.local to activate at boot time. Substitute "hda" for your device. This does not work on iscsi or SAN mounts.
[root@blah ~]# echo 'deadline' > /sys/block/hda/queue/scheduler [root@blah ~]# cat /sys/block/hda/queue/scheduler noop anticipatory [deadline] cfq -- Changed to Deadline.
echo 'deadline' > /sys/block/sda/queue/scheduler
-bash: /sys/block/sda/queue/scheduler: No such file or directory

ls -l /sys/block/sda/queue/
total 0
drwxr-xr-x 2 root root    0 Dec  8 17:45 iosched
-r--r--r-- 1 root root 4096 Dec 10 10:10 max_hw_sectors_kb
-rw-r--r-- 1 root root 4096 Dec 10 10:10 max_sectors_kb
-rw-r--r-- 1 root root 4096 Dec 10 10:10 nr_requests
-rw-r--r-- 1 root root 4096 Dec 10 10:10 read_ahead_kb
No go.
Matt
On Wed, Dec 10, 2008 at 8:11 AM, Matt lm7812@gmail.com wrote:
echo 'deadline' > /sys/block/sda/queue/scheduler -bash: /sys/block/sda/queue/scheduler: No such file or directory
ls -l /sys/block/sda/queue/ total 0 drwxr-xr-x 2 root root 0 Dec 8 17:45 iosched -r--r--r-- 1 root root 4096 Dec 10 10:10 max_hw_sectors_kb -rw-r--r-- 1 root root 4096 Dec 10 10:10 max_sectors_kb -rw-r--r-- 1 root root 4096 Dec 10 10:10 nr_requests -rw-r--r-- 1 root root 4096 Dec 10 10:10 read_ahead_kb
No go.
What is in /sys/block/sda/queue/iosched? Or you could do a find /sys/block -name scheduler to see if the file exists at all.
Don't give up so soon.
mhr
A bit of bottle neck.
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.38 176.63 70.32 78.26 813.46 2044.82 406.73 1022.41 19.24 0.40 19.17 4.04 60.07 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 5.28 0.00 23.61 19.33 0.00 sda2 0.38 176.63 70.32 78.26 813.45 2044.82 406.73 1022.41 19.24 0.40 19.17 4.04 60.07 dm-0 0.00 0.00 70.71 255.60 813.45 2044.82 406.73 1022.41 8.76 2.90 8.87 1.84 60.10 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 64.20 11.38 0.00
Try setting the scheduler to 'deadline' and see if the queue sizes shrink.
No raid1? Besides adding redundancy, it can help with read performance. I would probably put the mail on a raid 10 though if I had 4 disks to do so.
-Ross
Like this:
kernel /vmlinuz-2.6.9-78.0.8.ELsmp ro root=/dev/VolGroup00/LogVol00 elevator=deadline
And this change will be for System Wide.
Doing this, among other things, helped quite a lot. My iostat -x %util dropped from ~60% to ~22% now. Of course at the same time I updated from a dual-core 2GHz CPU to a quad-core 2.4GHz. I also swapped the motherboard from one with an "Intel ICH9R" southbridge to an "Intel ICH7R", since I heard CentOS 4.x did not have in-kernel drivers for the ICH9R.
Matt
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Matt Sent: Thursday, December 18, 2008 10:37 AM To: CentOS mailing list Subject: Re: [CentOS] Adding RAM
A bit of bottle neck.
Device:   rrqm/s  wrqm/s    r/s     w/s   rsec/s    wsec/s   rkB/s     wkB/s  avgrq-sz  avgqu-sz   await   svctm   %util
sda         0.38  176.63  70.32   78.26   813.46   2044.82  406.73   1022.41     19.24      0.40   19.17    4.04   60.07
sda1        0.00    0.00   0.00    0.00     0.00      0.00    0.00      0.00      5.28      0.00   23.61   19.33    0.00
sda2        0.38  176.63  70.32   78.26   813.45   2044.82  406.73   1022.41     19.24      0.40   19.17    4.04   60.07
dm-0        0.00    0.00  70.71  255.60   813.45   2044.82  406.73   1022.41      8.76      2.90    8.87    1.84   60.10
dm-1        0.00    0.00   0.00    0.00     0.00      0.00    0.00      0.00      8.00      0.00   64.20   11.38    0.00
Try setting the scheduler to 'deadline' and see if the queue sizes shrink.
No raid1? Besides adding redundancy, it can help with read performance. I would probably put the mail on a raid 10 though if I had 4 disks to do so.
-Ross
Like this:
kernel /vmlinuz-2.6.9-78.0.8.ELsmp ro
root=/dev/VolGroup00/LogVol00 elevator=deadline
And this change will be for System Wide.
Doing this among other things helped quite a lot. My iostat -x %util dropped from ~ 60% to ~ 22% now. Of course at same time I updated from dual-core 2Ghz CPU to quad-core 2.4Ghz. Also swapped motherboard from one with an "Intel ICH9R" Southbridge to a "Intel ICH7R" since I heard CentOS 4.x did not have drivers in kernel for ICH9R.
That's why I asked what kind of controller the board had on it in a previous post to you, and stated RAM was not the suspect problem. IMO if you had kept the dual-core proc and just switched to an ICH7 board you would have saved money. Your utilization rate would probably have stayed the same, or no higher than 30%. Just to keep things in balance you will probably want to try the cfq scheduler with a high user load, so everything gets its fair share in time_wait. Some people will contradict that; it's about making the users happy. When access time for one user takes longer than for another, the complaints start coming in. I would like to know the proc utilization per core, or are you running it in single core?
JohnStanley
That's why I asked what kind of Controler the board had on it in a previous post to you and stated ram was not the suspect problem. IMO if you keeped the dual core proc and just switched to ICH7 Board you would have saved money. Your utilization rate would probly stayed the same or no higher than %30. Just to keep things in balance you will probly want to try the cfq schedular with a high user load so every thing gets it fair share in time_wait. Some people will contradict that it's about making the users happy. When access time for one user takes longer than another then the complaints start coming in. I would like to know the Proc Utilization per core or are you running it in Single Core?
top - 11:53:39 up 2 days, 9:47, 1 user, load average: 8.27, 13.66, 29.82
Tasks: 188 total, 1 running, 183 sleeping, 0 stopped, 4 zombie
Cpu0 : 10.6% us,  2.3% sy,  0.0% ni, 87.0% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu1 :  8.0% us,  2.0% sy,  0.0% ni, 88.3% id,  1.7% wa,  0.0% hi,  0.0% si
Cpu2 : 11.6% us,  2.0% sy,  0.0% ni, 86.4% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu3 :  6.0% us,  0.7% sy,  0.0% ni, 92.4% id,  1.0% wa,  0.0% hi,  0.0% si
Mem:   4151316k total,  3438536k used,   712780k free,   428520k buffers
Swap:  2031608k total,        8k used,  2031600k free,  1839452k cached
I do see on occasion CPU load jump up to 30-60% across the board, but that is rare. Mostly it looks like the above.

I will try switching back to CFQ this evening to see what that does for 24 hours. I suspected it was either the SATA controller or the scheduler that fixed/helped things. With the quad-core I figure that at least I will never have to worry about the CPU being a bottleneck. I still see the load average jump up at peak times to as much as 60 percent, but it's a rare event now. When I did a grep on a 450Mbyte file it jumped to 90 earlier.
Matt
On Dec 18, 2008, at 12:59 PM, Matt lm7812@gmail.com wrote:
That's why I asked what kind of Controler the board had on it in a previous post to you and stated ram was not the suspect problem. IMO if you keeped the dual core proc and just switched to ICH7 Board you would have saved money. Your utilization rate would probly stayed the same or no higher than %30. Just to keep things in balance you will probly want to try the cfq schedular with a high user load so every thing gets it fair share in time_wait. Some people will contradict that it's about making the users happy. When access time for one user takes longer than another then the complaints start coming in. I would like to know the Proc Utilization per core or are you running it in Single Core?
top - 11:53:39 up 2 days, 9:47, 1 user, load average: 8.27, 13.66, 29.82 Tasks: 188 total, 1 running, 183 sleeping, 0 stopped, 4 zombie Cpu0 : 10.6% us, 2.3% sy, 0.0% ni, 87.0% id, 0.0% wa, 0.0% hi, 0.0% si Cpu1 : 8.0% us, 2.0% sy, 0.0% ni, 88.3% id, 1.7% wa, 0.0% hi, 0.0% si Cpu2 : 11.6% us, 2.0% sy, 0.0% ni, 86.4% id, 0.0% wa, 0.0% hi, 0.0% si Cpu3 : 6.0% us, 0.7% sy, 0.0% ni, 92.4% id, 1.0% wa, 0.0% hi, 0.0% si Mem: 4151316k total, 3438536k used, 712780k free, 428520k buffers Swap: 2031608k total, 8k used, 2031600k free, 1839452k cached
Yup, that puppy is all iowait; see how the CPU idle is high and CPU wait is low, but the load avg is high.

Probably lots of random ops going on there and the SATA drive can't keep up.
If you have a UPS there you can try to enable the drive's write cache which should also help some.
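For example (a sketch, assuming a plain SATA disk at /dev/sda; some controllers ignore or override this setting):

hdparm -W /dev/sda       # show the current write-cache setting
hdparm -W1 /dev/sda      # enable the drive's write cache (-W0 turns it back off)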
-Ross
I do see on occasion CPU load jump up to 30-60% accross the board but that is rare. Mostly it looks like above.
I will try switching back too CFQ this evening to see what that does for 24 hours. I suspected it was either the SATA controller or the scheduler that fixed/helped things. With the quad-core I figure that at least I will never have to worry about the CPU being a bottle neck. I still see the load average jump up at peak times to as much as 60 percent but its a rare event now. When I did a grep on a 450Mbyte file it jumped to 90 earlier.
Matt
On Dec 18, 2008, at 12:24 PM, "John" jses27@gmail.com wrote:
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Matt Sent: Thursday, December 18, 2008 10:37 AM To: CentOS mailing list Subject: Re: [CentOS] Adding RAM
A bit of bottle neck.
Device:   rrqm/s  wrqm/s    r/s     w/s   rsec/s    wsec/s   rkB/s     wkB/s  avgrq-sz  avgqu-sz   await   svctm   %util
sda         0.38  176.63  70.32   78.26   813.46   2044.82  406.73   1022.41     19.24      0.40   19.17    4.04   60.07
sda1        0.00    0.00   0.00    0.00     0.00      0.00    0.00      0.00      5.28      0.00   23.61   19.33    0.00
sda2        0.38  176.63  70.32   78.26   813.45   2044.82  406.73   1022.41     19.24      0.40   19.17    4.04   60.07
dm-0        0.00    0.00  70.71  255.60   813.45   2044.82  406.73   1022.41      8.76      2.90    8.87    1.84   60.10
dm-1        0.00    0.00   0.00    0.00     0.00      0.00    0.00      0.00      8.00      0.00   64.20   11.38    0.00
Try setting the scheduler to 'deadline' and see if the queue sizes shrink.
No raid1? Besides adding redundancy, it can help with read performance. I would probably put the mail on a raid 10 though if I had 4 disks to do so.
-Ross
Like this:
kernel /vmlinuz-2.6.9-78.0.8.ELsmp ro
root=/dev/VolGroup00/LogVol00 elevator=deadline
And this change will be for System Wide.
Doing this among other things helped quite a lot. My iostat -x %util dropped from ~ 60% to ~ 22% now. Of course at same time I updated from dual-core 2Ghz CPU to quad-core 2.4Ghz. Also swapped motherboard from one with an "Intel ICH9R" Southbridge to a "Intel ICH7R" since I heard CentOS 4.x did not have drivers in kernel for ICH9R.
That's why I asked what kind of Controler the board had on it in a previous post to you and stated ram was not the suspect problem. IMO if you keeped the dual core proc and just switched to ICH7 Board you would have saved money. Your utilization rate would probly stayed the same or no higher than %30. Just to keep things in balance you will probly want to try the cfq schedular with a high user load so every thing gets it fair share in time_wait. Some people will contradict that it's about making the users happy. When access time for one user takes longer than another then the complaints start coming in. I would like to know the Proc Utilization per core or are you running it in Single Core?
CFQ accounting is only good for interactive processes, so unless it is a terminal server, mucking with CFQ will have little effect.

Deadline and anticipatory schedulers work best for non-interactive servers such as file, mail, and database servers. Noop works for special apps that like to do their own I/O scheduling, mostly database systems with their own I/O schedulers.
So it isn't one scheduler to rule them all, but one for each situation.
-Ross
I have a server running Centos 4.7 32bit. Will moving from 4Gig of RAM to 8Gig do any good? Since its 32bit I assume it will only be able to address the first 4Gig not?
As long as you are using a SMP kernel you can use up to 64GB of RAM (though each proccess can only address 4GB of this). So if you can find any trace of "SMP" in the uname (grep is your friend) then it should work fine.
I have this:
[root@srvr ~]# rpm -qa | grep kern
kernel-smp-2.6.9-78.0.8.EL
glibc-kernheaders-2.4-9.1.103.EL
kernel-utils-2.4-14.1.117
kernel-2.6.9-78.0.8.EL
[root@srvr ~]#

[root@server ~]# uname -a
Linux XXX 2.6.9-78.0.8.ELsmp #1 SMP Wed Nov 19 20:05:04 EST 2008 i686 i686 i386 GNU/Linux
Matt
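Once the extra RAM is in, a quick sanity check that the SMP kernel actually sees and uses it (a sketch; the config path assumes a stock CentOS kernel package):

free -m                                    # total memory the kernel sees
grep HIGHMEM /boot/config-$(uname -r)      # which highmem model the kernel was built with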
Hi,
Tell me how much swap space you assigned. Also, you can use the commands below to trace the cause of such heavy I/O. Are you using SAN or local storage? I don't think you need an explanation of the commands below. Run all the commands, redirect the output to some file, and send it to the list.
Normally there is no need to fine-tune any parameter to upgrade memory on CentOS 4.7 32-bit. I am running CentOS in production with 8GB of physical memory but with only 4GB of swap space (not the twice-the-physical-RAM people normally use, which can cause heavy I/O when swap is in use, because the system reading across 8GB of swap cylinders and tracks takes longer than across 4GB).
while true ; do (ps -eo pcpu,pid,user,args |sort -k1 -r |head -10 >> /root/sys-reports/top10-cpu-utilzn) ; sleep 2 ; done
sar -u 2 10000000 > /root/sys-reports/sar.txt
mpstat -P ALL 5 | tee mpstat.txt
top -b -i |tee top.txt
vmstat -m 5 > vmstat.txt
iostat -x 5 >> iostat.txt
Regards, pap
On Sat, Dec 6, 2008 at 5:22 AM, Matt lm7812@gmail.com wrote:
I have a server running Centos 4.7 32bit. Will moving from 4Gig of RAM to 8Gig do any good? Since its 32bit I assume it will only be able to address the first 4Gig not? When I installed CentOS I did not do anything special to enable using more then 4Gig if thats required. Exim, spamassassin and Clamd seem to be the biggest load on this machine. My biggest bottle neck is disk I/O anyway.
Wish I had installed CentOS 5.x 64bit way back when but some of the software I was using at time listed support for it as beta.
Matt