On Tue, Oct 02, 2007 at 08:57:28PM +0300, Pasi Kärkkäinen wrote:
On Tue, Oct 02, 2007 at 09:39:09AM -0400, Ross S. W. Walker wrote:
Simon Banton wrote:
At 12:30 +0200 2/10/07, matthias platzer wrote:
What I did to work around them was basically switching to XFS for everything except / (3ware say their cards are fast, but only on XFS) AND using very low nr_requests for every blockdev on the 3ware card.
Hi Matthias,
Thanks for this. In my CentOS 5 tests nr_requests turned out to default to 128, rather than the 8192 of CentOS 4.5. I'll have a go at reducing it still further.
Yes, nr_requests should be a realistic reflection of what the card itself can handle. If it's set too high, you'll see io_waits stack up.
64 or 128 are good numbers; I've rarely seen a card that can handle a queue depth larger than 128 (some older SCSI cards did 256, I think).
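A minimal sketch of checking and lowering nr_requests through sysfs, assuming the 3ware-backed device shows up as sdb (the device name is just an assumption, adjust to your setup):

#!/usr/bin/env python
# Sketch only: read and lower nr_requests for one block device via sysfs.
# "sdb" is an assumed device name, not taken from this thread.
import sys

dev = "sdb"
path = "/sys/block/%s/queue/nr_requests" % dev

with open(path) as f:
    print("%s nr_requests is currently %s" % (dev, f.read().strip()))

try:
    # Writing needs root; 64-128 matches the depths discussed above.
    with open(path, "w") as f:
        f.write("128\n")
    print("%s nr_requests set to 128" % dev)
except IOError as e:
    sys.exit("could not set nr_requests on %s: %s" % (dev, e))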
Hmm.. let's say you have a Linux software md-raid array made of SATA drives.. what kind of nr_requests values should you use for that for optimal performance?
Or let's put it this way:
You have an md-raid array in dom0. What kind of nr_requests values should you use for normal 7200 rpm SATA NCQ disks on an Intel ICH8 (NCQ) controller?
And then this md-array is seen as xvdb by domU.. what kind of nr_requests values should you use in domU?
The io-scheduler/elevator should be deadline in domU, I assume.. how about in dom0? Deadline there too?
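To make the question concrete, here's a minimal sketch of checking and switching the elevator through sysfs; sda/sdb as the md member disks in dom0 and xvdb in domU are just my example names (as far as I know the md device itself has no elevator of its own, so it would be set on the member disks):

#!/usr/bin/env python
# Sketch only: show the available elevators and select deadline per device.
# Device names are assumptions: sda/sdb = md member disks in dom0,
# xvdb = the exported device in domU. Run the relevant part in each domain.
devices = ["sda", "sdb", "xvdb"]

for dev in devices:
    path = "/sys/block/%s/queue/scheduler" % dev
    try:
        with open(path) as f:
            # Output looks like: noop anticipatory [deadline] cfq
            print("%s: %s" % (dev, f.read().strip()))
        with open(path, "w") as f:   # needs root
            f.write("deadline\n")
    except IOError as e:
        print("skipping %s: %s" % (dev, e))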
Thanks!
-- Pasi