[CentOS] connection speeds between nodes

Tue Mar 8 17:25:03 UTC 2011
John Hodrien <J.H.Hodrien at leeds.ac.uk>

On Tue, 8 Mar 2011, Ross Walker wrote:

> Well on my local disk I don't cache the data of tens or hundreds of clients,
> and a server can have a memory fault and oops just as easily as any client.
>
> Also I believe it doesn't sync every single write (unless the client mounts
> with 'sync', which is only for special cases and not what I am talking about),
> only when the client issues a sync or when the file is closed.  The client is
> free to use async I/O if it wants, but the server SHOULD respect the client's
> wishes for synchronous I/O.
>
> If you set the server 'async' then all I/O is async whether the client wants
> it or not.

I think you're right that this is how it should work; I'm just not entirely
sure that's actually the case in general (whether that's because typical
applications try to do sync writes, or for other reasons, I don't know).

Figures for changing only the server export to 'sync', with everything else
identical.  The client does not have 'sync' set as a mount option.  Both
machines are attached to the same gigabit switch (so favouring sync as far as
you reasonably can with gigabit):

sync;time (dd if=/dev/zero of=testfile bs=1M count=10000;sync)

async: 65.4 MB/s was not observed; async: 78.8 MB/s
 sync: 65.4 MB/s

That seems like a big enough performance hit to me to at least consider the
merits of running async.
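For what it's worth, the server-side difference is a single export option.  A
sketch of the two /etc/exports entries (the path and network here are made up,
not from my setup):

```
# async: server acknowledges writes before they reach stable storage
/export/data  192.168.1.0/24(rw,async,no_subtree_check)

# sync: server commits writes to stable storage before replying
#/export/data  192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing, 'exportfs -ra' reloads the export table without restarting the
NFS server.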

That said, running dd with oflag=direct appears to bring the performance up to
async levels:

oflag=direct with  sync nfs export: 81.5 MB/s
oflag=direct with async nfs export: 87.4 MB/s
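(For reference, that's the same command as above with oflag=direct added;
bs=1M keeps the writes block-aligned, which O_DIRECT requires:)

```shell
# O_DIRECT bypasses the client page cache, so each 1MB write goes to
# the server immediately instead of being buffered by the kernel
sync; time (dd if=/dev/zero of=testfile bs=1M count=10000 oflag=direct; sync)
```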

But if you've not got control over how your application writes out to disk,
that's no help.

jh