On Wed, 9 Mar 2011, Ross Walker wrote:
On Mar 8, 2011, at 12:25 PM, John Hodrien J.H.Hodrien@leeds.ac.uk wrote:
I think you're right that this is how it should work; I'm just not entirely sure it's generally the case in practice (whether that's because typical applications deliberately do sync writes or for other reasons, I don't know).
As always YMMV, but on the whole it's how it works.
ESX is an exception: it does O_FSYNC on each write because it needs to know for certain that each one completed.
But what I was saying is that most applications benefit from async exports, which suggests most applications do noticeable amounts of sync writes. It doesn't much matter that they *can* choose to write async if hardly any of them do.
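As a rough illustration of what forcing sync semantics costs (sizes here are placeholders, not the test below), you can compare an ordinary buffered write with one that sets O_SYNC on every write via dd's oflag=sync:

```shell
# Rough sketch: the same write done twice. The first goes through the
# page cache (asynchronous from the application's point of view); the
# second opens the output with O_SYNC, so each 1M block must reach
# stable storage before dd moves on.
dd if=/dev/zero of=testfile-async bs=1M count=100
dd if=/dev/zero of=testfile-sync bs=1M count=100 oflag=sync
```

On local disk the second run is typically much slower; over an NFS mount it mirrors the sync-export behaviour being discussed.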
sync; time (dd if=/dev/zero of=testfile bs=1M count=10000; sync)

async: 78.8 MB/s
sync:  65.4 MB/s
That seems like a big enough performance hit to me to at least consider the merits of running async.
Yes, disabling the safety feature will make it run faster. Just as disabling the safety on a gun will make it faster in a draw.
And if you're happy with that (in the case of this render farm, a fault at the NFS level is non-fatal) then there's no problem. I don't have a safety on my water pistol.
That said, running dd with oflag=direct appears to bring the performance up to async levels:
oflag=direct with sync NFS export:  81.5 MB/s
oflag=direct with async NFS export: 87.4 MB/s
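For reference, the direct-I/O run is just the earlier test with oflag=direct added; something like this (count reduced here for illustration; the figures above came from the count=10000 run):

```shell
# Same shape of test as before, but with O_DIRECT so dd bypasses the
# client page cache and each write is pushed straight out. Note that
# O_DIRECT needs filesystem support and can fail with EINVAL on
# filesystems that don't provide it.
sync; time (dd if=/dev/zero of=testfile bs=1M count=100 oflag=direct; sync)
```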
But if you've not got control over how your application writes out to disk, that's no help.
Most apps unfortunately don't let you configure how they handle I/O reads/writes, so you're stuck with how they behave.
A good-sized battery-backed write-back cache will often negate the O_FSYNC penalty.
All those figures *were* with a 256 MB battery-backed write-back cache. It's really not hard to make those figures look a whole lot more skewed in favour of async...
jh