[CentOS] htdocs on NFS share / any pitfalls?

Thu Oct 24 10:15:14 UTC 2013
Leon Fauster <leonfauster at googlemail.com>

Am 23.10.2013 um 17:18 schrieb Les Mikesell <lesmikesell at gmail.com>:
> On Wed, Oct 23, 2013 at 4:01 AM, Leon Fauster
> <leonfauster at googlemail.com> wrote:
>> Am 23.10.2013 um 07:52 schrieb James A. Peltier <jpeltier at sfu.ca>:
>>> | i have a new setup where the htdocs directory for the webserver
>>> | is located on a nfs share. Client has cachefilesd configured.
>>> | Compared to the old setup (htdocs directory is on the local disk)
>>> | the performance is not so gratifying. The disk is "faster" compared
>>> | to the ethernet link but the cache should at least compensate this
>>> | a bit. Do they exist more pitfalls for such configurations?
>>> |
>>> 
>>> The best thing to do with respect to NFS shares is to make extensive use of caching
>>> in front of the web servers.  This will hide the latencies that the NFS protocol will
>>> bring.  You can try to scale NFS through use of channel bonding or pNFS/Gluster but
>>> setting up a reverse proxy or memcached instance is going to be your best bet to making
>>> the system perform well.
>> 
>> 
>> All web-frontends (multiple) have the filesystem caching already in
>> place (bottom layer). The application uses a key-value-store in memory (top layer) to
>> accelerate the webapp (php). Nevertheless the performance is not satisfying. I was looking
>> at some caching by the httpd daemon (middle layer). Any experiences with such apache
>> cache out there?
> 
> What kind of throughput and latency are you talking about here?   NFS
> shouldn't add that much overhead to reads compared to disk head
> latency and if you enable client caching  might be considerably
> faster.   If you are writing over NFS you don't get the same options,
> though and sync mounts are going to be slow.



bonded (just for failover) interface with speed: 1000Mb/s duplex: Full

ping gives me a round-trip echo packet with ~ 0.139 ms

writes ~ 57,9 MB/s (dd test)
reads ~ 59,7 MB/s (uncached), 3,9 GB/s (cached) (dd test)

nfsstat -m output:

rw,nosuid,noexec,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,soft, \
nosharecache,proto=tcp,timeo=20,retrans=4,sec=sys,mountaddr=xxx.xxx.xxx.xxx, \
mountvers=3,mountport=102734,mountproto=tcp,fsc,local_lock=none,addr=xxx.xxx.xxx.xxx

async is implicit. UDP would perform better but we voted for reliability 
but even TCP should perform a bit better, i guess :)  

the webapp caches also stat calls.

so far

--
LF