Is there a way to nice the I/O on a process such as dd? If not, what could be a way to keep the I/O level of such a process from bogging down a server too severely?
Thanks, jlc
Joseph L. Casale wrote:
Is there a way to nice the I/O on a process such as dd? If not, what could be a way to keep the I/O level of such a process from bogging down a server too severely?
As I was told a few days ago, you could nice the whole process, e.g.
nice 19 if=/xxx of=/xxx bs=nnn
This should give all the other processes priority over dd.
Hope this helps
-- Regards Lorenzo Quatrini
Lorenzo Quatrini wrote:
Joseph L. Casale wrote:
Is there a way to nice the I/O on a process such as dd? If not, what could be a way to keep the I/O level of such a process from bogging down a server too severely?
As I was told a few days ago, you could nice the whole process, e.g.
nice 19 if=/xxx of=/xxx bs=nnn
Obviously there is a typo...
nice -n 19 dd if=/xxx of=/xxx bs=nnn
This should give all the other processes priority over dd.
Hope this helps
-- Regards Lorenzo Quatrini
Lorenzo Quatrini wrote:
As I was told a few days ago, you could nice the whole process, e.g.
nice 19 if=/xxx of=/xxx bs=nnn
This should give all the other processes priority over dd.
nice doesn't really do anything with respect to I/O.
The best way to control I/O in this manner is to physically isolate it from the rest of the system (be it on a different controller, connected to different disks, etc.).
There's no real way in software that I'm aware of to prevent one process from bogging down the whole system by consuming all of the I/O capacity.
This would likely require process-level information on I/O transactions, which I've never seen in Linux. I've heard it's available in Solaris, though I'm not sure whether any sort of I/O QoS is offered there; I haven't used it in years.
nate
nice doesn't really do anything with respect to I/O.
Yes, I tried it and it never made a difference from one end of the spectrum to the other :)
The best way to control I/O in this manner is to physically isolate it from the rest of the system (be it on a different controller, connected to different disks, etc.).
Well, not always possible!
I am going to try Peter's suggestion of ionice tonight.
Thanks everyone! jlc
On Tue, 2008-09-02 at 16:28 +0200, Lorenzo Quatrini wrote:
Joseph L. Casale wrote:
Is there a way to nice the I/O on a process such as dd? If not, what could be a way to keep the I/O level of such a process from bogging down a server too severely?
As I was told a few days ago, you could nice the whole process, e.g.
nice 19 if=/xxx of=/xxx bs=nnn
This should give all the other processes priority over dd.
Saw the typo fix. Just want to mention that the "bs=" setting can have a substantial beneficial effect. By increasing the blocksize to a relatively large value, the number of system calls and I/O operations is reduced. This *may* reduce the adverse effects that you see on overall system responsiveness.
I often use 8192, 16384, and even 8MB, a "cylinder size".
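Something like this, for example (just a sketch; if=/xxx and of=/xxx are the same placeholders as above, and the best blocksize depends on your hardware):

dd if=/xxx of=/xxx bs=8M    # one 8MB request instead of thousands of small ones

nice -n 19 dd if=/xxx of=/xxx bs=8M    # same thing, with its CPU priority lowered too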
Give it a try. YMMV.
Hope this helps
-- Regards Lorenzo Quatrini
<snip sig stuff>
On Tuesday 02 September 2008, Joseph L. Casale wrote:
Is there a way to nice the I/O on a process such as dd? If not, what could be a way to keep the I/O level of such a process from bogging down a server too severely?
There is ionice (assuming CentOS-5) in the util-linux package. It's by no means perfect, but unlike nice it at least tries to do what you want :-)
Try it out.
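For example, something along these lines should work (a sketch, untested here; see "man ionice" for details, and the dd arguments are just the placeholders from earlier in the thread):

ionice -c3 dd if=/xxx of=/xxx bs=nnn       # idle class: dd only gets disk time nothing else wants

ionice -c2 -n7 dd if=/xxx of=/xxx bs=nnn   # or best-effort class at the lowest priority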
/Peter
Peter Kjellstrom wrote:
On Tuesday 02 September 2008, Joseph L. Casale wrote:
Is there a way to nice the I/O on a process such as dd? If not, what could be a way to keep the I/O level of such a process from bogging down a server too severely?
There is ionice (assuming CentOS-5) in the util-linux package. It's by no means perfect, but unlike nice it at least tries to do what you want :-)
If that doesn't do it for you then maybe choosing a different scheduler than cfq can help. Something like 'deadline' may work better for the workload.
AFAIK ionice will only work with the cfq scheduler for now.
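You can check and change the scheduler per device on the fly, along these lines (sda is just an example device, and you need to be root):

cat /sys/block/sda/queue/scheduler    # the active scheduler is shown in brackets
echo deadline > /sys/block/sda/queue/scheduler

That only lasts until reboot; to make it permanent you would boot with elevator=deadline on the kernel command line.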
-Ross
If that doesn't do it for you then maybe choosing a different scheduler than cfq can help. Something like 'deadline' may work better for the workload.
AFAIK ionice will only work with the cfq scheduler for now.
Appreciate that info, I have just been reading about the difference but can't say I understand what the real-life difference between deadline and cfq is. I will try changing it on the fly and running my tests.
Thanks! jlc
On Tue, Sep 2, 2008 at 9:51 AM, Joseph L. Casale JCasale@activenetwerx.com wrote:
Appreciate that info, I have just been reading about the difference but can't say I understand what the real-life difference between deadline and cfq is. I will try changing it on the fly and running my tests.
The CFQ elevator algorithm attempts to be fair to all I/O requests, without specific regard to performance. The deadline elevator is more aggressive in scheduling for minimal latency per device.
For example, if you have one process that is doing more or less random I/O and another that is doing large block sequential I/O, the deadline elevator will pander to the latter, whereas the cfq elevator will try to be fair in scheduling the I/Os between the processes.
Here's a decent, short write up on them: http://www.redhat.com/magazine/008jun05/features/schedulers/
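If you want to see the difference yourself, a crude test along these lines might do (completely unscientific sketch; sdb and the /mnt/test paths are placeholders for a disk you can safely hammer as root):

echo cfq > /sys/block/sdb/queue/scheduler
echo 3 > /proc/sys/vm/drop_caches                         # drop caches so the runs are comparable
dd if=/mnt/test/bigfile of=/dev/null bs=8M &              # large sequential reader in the background
time grep -r foo /mnt/test/many-small-files > /dev/null   # small scattered reads competing with it
wait
echo deadline > /sys/block/sdb/queue/scheduler
# ...then repeat the drop_caches/dd/grep steps and compare the 'time' output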
HTH
mhr
On Tue, 2 Sep 2008 10:21:31 -0700 MHR mhullrich@gmail.com wrote:
On Tue, Sep 2, 2008 at 9:51 AM, Joseph L. Casale JCasale@activenetwerx.com wrote:
Appreciate that info, I have just been reading about the difference but can't say I understand what the real-life difference between deadline and cfq is. I will try changing it on the fly and running my tests.
The CFQ elevator algorithm attempts to be fair to all I/O requests, without specific regard to performance. The deadline elevator is more aggressive in scheduling for minimal latency per device.
For example, if you have one process that is doing more or less random I/O and another that is doing large block sequential I/O, the deadline elevator will pander to the latter, whereas the cfq elevator will try to be fair in scheduling the I/Os between the processes.
Here's a decent, short write up on them: http://www.redhat.com/magazine/008jun05/features/schedulers/
HTH
mhr
ionice? Has nobody mentioned this?
Here's a decent, short write up on them: http://www.redhat.com/magazine/008jun05/features/schedulers/
Yup, I found that, but I remembered stumbling across this issue when reading about Xen in Todd Deshane's book, and finally found the article I had come across: http://www.linuxjournal.com/article/6931
Thanks, jlc