[CentOS] connection speeds between nodes

Tue Mar 8 21:14:49 UTC 2011
wessel van der aart <wessel at postoffice.nl>

Thanks for all the responses, they really give me a good idea of what to pay
attention to.
The software we're using to distribute our renders is RoyalRender; I'm not
sure if any optimization is possible there, I'll check it out.
So far it seems that the option of using NFS stands or falls with the use
of sync.
Does anyone here use NFS without sync in production? Does data corrupt
often?
All the data sent from the nodes can be reproduced, so I would think an
error is acceptable if it happens once a month or so.
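Just to make sure I understand what's being compared: I assume the async
export would look something like this (path and subnet are made up, not
tested here):

    # /etc/exports on the file server -- async instead of the default sync
    /srv/renders  192.168.0.0/24(rw,async,no_root_squash)

    # re-export without restarting the nfs service
    exportfs -ra

and the nodes would mount it normally, without any special sync option on
the client side.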
Are there any other options more suitable in this situation? I thought
about GFS with iSCSI, but I'm not sure if that will work if the filesystem
to be shared already exists in production. Roughly what I was picturing on
a node is sketched below.
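(All names and addresses here are hypothetical, just to show the idea.)

    # on each render node: discover and log in to the iSCSI target
    iscsiadm -m discovery -t sendtargets -p 192.168.0.10
    iscsiadm -m node -T iqn.2011-03.nl.postoffice:renders -p 192.168.0.10 --login

    # then mount the shared block device as GFS2
    # (this assumes the device was formatted with mkfs.gfs2 and the cluster
    #  stack is running on every node -- which is exactly the part I'm
    #  unsure about for a filesystem that already exists)
    mount -t gfs2 /dev/sdb1 /mnt/renders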

Thanks,
Wessel

On Tue, 8 Mar 2011 17:25:03 +0000 (GMT), John Hodrien
<J.H.Hodrien at leeds.ac.uk> wrote:
> On Tue, 8 Mar 2011, Ross Walker wrote:
> 
>> Well on my local disk I don't cache the data of tens or hundreds of
>> clients
>> and a server can have a memory fault and oops just as easily as any
>> client.
>>
>> Also I believe it doesn't sync every single write (unless mounted on the
>> client with sync, which is only for special cases and not what I am
>> talking about), only when the client issues a sync or when the file is
>> closed. The client is free to use async io if it wants, but the server
>> SHOULD respect the client's wishes for synchronous io.
>>
>> If you set the server 'async' then all io is async whether the client
>> wants it or not.
> 
> I think you're right that this is how it should work, I'm just not
> entirely sure that's actually generally the case (whether that's because
> typical applications try to do sync writes or if it's for other reasons,
> I don't know).
> 
> Figures for just changing sync/async on the server export, everything
> else identical.  Client does not have 'sync' set as a mount option.  Both
> attached to the same gigabit switch (so favouring sync as far as you
> reasonably could with gigabit):
> 
> sync;time (dd if=/dev/zero of=testfile bs=1M count=10000;sync)
> 
> async: 78.8MB/sec
>   sync: 65.4MB/sec
> 
> That seems like a big enough performance hit to me to at least consider
> the merits of running async.
> 
> That said, running dd with oflag=direct appears to bring the performance
> up to async levels:
> 
> oflag=direct with  sync nfs export: 81.5 MB/s
> oflag=direct with async nfs export: 87.4 MB/s
> 
> But if you've not got control over how your application writes out to
> disk, that's no help.
> 
> jh
> _______________________________________________
> CentOS mailing list
> CentOS at centos.org
> http://lists.centos.org/mailman/listinfo/centos
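
PS: spelling out the direct-io variant for my own notes -- I assume it was
roughly the same dd as above with oflag=direct added, something like:

    time dd if=/dev/zero of=testfile bs=1M count=10000 oflag=direct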