Hi all,
I have a new setup where the htdocs directory for the webserver is located on an NFS share. The client has cachefilesd configured. Compared to the old setup (htdocs directory on the local disk) the performance is not so gratifying. The disk is "faster" than the ethernet link, but the cache should at least compensate for this a bit. Are there more pitfalls with such configurations?
Thanks
LF
PS: checking httpd's caching system now ...
Leon Fauster wrote:
I have a new setup where the htdocs directory for the webserver is located on an NFS share. The client has cachefilesd configured. Compared to the old setup (htdocs directory on the local disk) the performance is not so gratifying. The disk is "faster" than the ethernet link, but the cache should at least compensate for this a bit. Are there more pitfalls with such configurations?
If I needed to serve files from an NFS share, I'd use rsync to pull down a local copy of the files to the webserver. This would give you the benefits of using the NFS location without the network latency.
c
On 2013-10-22, Carl T. Miller carl@carltm.com wrote:
If I needed to serve files from an NFS share, I'd use rsync to pull down a local copy of the files to the webserver. This would give you the benefits of using the NFS location without the network latency.
That's not serving files from an NFS share, that's serving files from a local mirror of an NFS share. It may not seem like much, but it is an important distinction: perhaps the OP has a user who can modify the NFS files directly but for whatever reason is not authorized to modify the local directory. As a result the user either needs to bother the sysadmin for every change, or the sysadmin needs to set up a cron job to do the rsync, or needs some other way of keeping the directory synchronized. It's possible that none of these configurations is acceptable to the OP, which is why he moved to serving them directly off of NFS.
There are many other reasons why the OP may wish to serve off of NFS: perhaps the directory became too big for the web server, for example. Your idea is a good one, but the OP will need to either verify that it's workable or explain in what way it's not.
--keith
remove
2013/10/22 Leon Fauster leonfauster@googlemail.com
Hi all,
I have a new setup where the htdocs directory for the webserver is located on an NFS share. The client has cachefilesd configured. Compared to the old setup (htdocs directory on the local disk) the performance is not so gratifying. The disk is "faster" than the ethernet link, but the cache should at least compensate for this a bit. Are there more pitfalls with such configurations?
Thanks
LF
PS: checking httpd's caching system now ...
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
remove
--
On Tue, Oct 22, 2013 at 9:39 PM, Gerente de Sistemas viabsb@gmail.com wrote:
remove
This is bad list etiquette. Per the bottom of the following page: http://lists.centos.org/mailman/listinfo/centos
"To unsubscribe from CentOS, get a password reminder, or change your subscription options enter your subscription email address:"
If you need more assistance, please ask someone - but don't reply (being off-topic) to a thread like that. Start your own thread if you have to. If the unsubscribe feature is broken, start a thread saying so.
This behavior makes me wonder how people even subscribed to the list in the first place!?
remove
----- Original Message -----
| Hi all,
|
| I have a new setup where the htdocs directory for the webserver
| is located on an NFS share. The client has cachefilesd configured.
| Compared to the old setup (htdocs directory on the local disk)
| the performance is not so gratifying. The disk is "faster" than
| the ethernet link, but the cache should at least compensate for
| this a bit. Are there more pitfalls with such configurations?
|
| Thanks
|
| LF
|
| PS: checking httpd's caching system now ...
The best thing to do with respect to NFS shares is to make extensive use of caching in front of the web servers. This will hide the latencies that the NFS protocol brings. You can try to scale NFS through channel bonding or pNFS/Gluster, but setting up a reverse proxy or memcached instance is going to be your best bet for making the system perform well.
Am 23.10.2013 um 07:52 schrieb James A. Peltier jpeltier@sfu.ca:
| I have a new setup where the htdocs directory for the webserver
| is located on an NFS share. The client has cachefilesd configured.
| Compared to the old setup (htdocs directory on the local disk)
| the performance is not so gratifying. The disk is "faster" than
| the ethernet link, but the cache should at least compensate for
| this a bit. Are there more pitfalls with such configurations?
The best thing to do with respect to NFS shares is to make extensive use of caching in front of the web servers. This will hide the latencies that the NFS protocol brings. You can try to scale NFS through channel bonding or pNFS/Gluster, but setting up a reverse proxy or memcached instance is going to be your best bet for making the system perform well.
All web frontends (multiple) already have the filesystem caching in place (bottom layer). The application uses an in-memory key-value store (top layer) to accelerate the webapp (PHP). Nevertheless the performance is not satisfying. I was looking at some caching by the httpd daemon (middle layer). Any experiences with such an Apache cache out there?
Thanks -- LF
On Wed, Oct 23, 2013 at 4:01 AM, Leon Fauster leonfauster@googlemail.com wrote:
Am 23.10.2013 um 07:52 schrieb James A. Peltier jpeltier@sfu.ca:
| I have a new setup where the htdocs directory for the webserver
| is located on an NFS share. The client has cachefilesd configured.
| Compared to the old setup (htdocs directory on the local disk)
| the performance is not so gratifying. The disk is "faster" than
| the ethernet link, but the cache should at least compensate for
| this a bit. Are there more pitfalls with such configurations?
The best thing to do with respect to NFS shares is to make extensive use of caching in front of the web servers. This will hide the latencies that the NFS protocol brings. You can try to scale NFS through channel bonding or pNFS/Gluster, but setting up a reverse proxy or memcached instance is going to be your best bet for making the system perform well.
All web frontends (multiple) already have the filesystem caching in place (bottom layer). The application uses an in-memory key-value store (top layer) to accelerate the webapp (PHP). Nevertheless the performance is not satisfying. I was looking at some caching by the httpd daemon (middle layer). Any experiences with such an Apache cache out there?
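For what it's worth, on httpd 2.2 (the CentOS 6 era) the middle layer would be mod_cache with mod_disk_cache; in 2.4 the module was renamed mod_cache_disk. A sketch of what such a config fragment might look like — the file name /etc/httpd/conf.d/cache.conf and the 300-second expiry are assumptions, and the example writes to a temp file so it is self-contained:

```shell
# Write a sample disk-cache config fragment (httpd 2.2 directive names).
# Real target would be /etc/httpd/conf.d/cache.conf (hypothetical).
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
LoadModule cache_module modules/mod_cache.so
LoadModule disk_cache_module modules/mod_disk_cache.so
# Cache everything under /; cached copies are served without
# touching the NFS share until they expire.
CacheEnable disk /
CacheRoot /var/cache/mod_proxy
CacheDirLevels 2
CacheDirLength 1
# Fallback freshness lifetime when the origin sends no Expires header.
CacheDefaultExpire 300
EOF
cat "$CONF"
```

Note that mod_disk_cache honours Cache-Control headers from the application, so a PHP app that sends no-cache on every response will defeat it.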
What kind of throughput and latency are you talking about here? NFS shouldn't add that much overhead to reads compared to disk head latency, and if you enable client caching it might be considerably faster. If you are writing over NFS you don't get the same options, though, and sync mounts are going to be slow.
Am 23.10.2013 um 17:18 schrieb Les Mikesell lesmikesell@gmail.com:
On Wed, Oct 23, 2013 at 4:01 AM, Leon Fauster leonfauster@googlemail.com wrote:
Am 23.10.2013 um 07:52 schrieb James A. Peltier jpeltier@sfu.ca:
| I have a new setup where the htdocs directory for the webserver
| is located on an NFS share. The client has cachefilesd configured.
| Compared to the old setup (htdocs directory on the local disk)
| the performance is not so gratifying. The disk is "faster" than
| the ethernet link, but the cache should at least compensate for
| this a bit. Are there more pitfalls with such configurations?
The best thing to do with respect to NFS shares is to make extensive use of caching in front of the web servers. This will hide the latencies that the NFS protocol brings. You can try to scale NFS through channel bonding or pNFS/Gluster, but setting up a reverse proxy or memcached instance is going to be your best bet for making the system perform well.
All web frontends (multiple) already have the filesystem caching in place (bottom layer). The application uses an in-memory key-value store (top layer) to accelerate the webapp (PHP). Nevertheless the performance is not satisfying. I was looking at some caching by the httpd daemon (middle layer). Any experiences with such an Apache cache out there?
What kind of throughput and latency are you talking about here? NFS shouldn't add that much overhead to reads compared to disk head latency, and if you enable client caching it might be considerably faster. If you are writing over NFS you don't get the same options, though, and sync mounts are going to be slow.
Bonded interface (failover only): speed 1000Mb/s, full duplex.
ping gives a round-trip time of ~0.139 ms.
writes ~57.9 MB/s (dd test); reads ~59.7 MB/s uncached, 3.9 GB/s cached (dd test)
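The uncached vs. cached read figures above can be measured with something along these lines. The file path is a local stand-in so the sketch runs anywhere; on the real setup FILE would live under the NFS mount, and flushing the page cache needs root:

```shell
# Local stand-in for a file on the NFS share (hypothetical path there).
FILE=$(mktemp)
dd if=/dev/zero of="$FILE" bs=1M count=8 2>/dev/null

# Uncached read: flush the page cache first (root only, so shown
# commented out here):
#   echo 3 > /proc/sys/vm/drop_caches
dd if="$FILE" of=/dev/null bs=1M 2>&1 | tail -1

# A second read is served from the page cache and shows the cached rate.
dd if="$FILE" of=/dev/null bs=1M 2>&1 | tail -1
```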
nfsstat -m output:
rw,nosuid,noexec,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,soft, \ nosharecache,proto=tcp,timeo=20,retrans=4,sec=sys,mountaddr=xxx.xxx.xxx.xxx, \ mountvers=3,mountport=102734,mountproto=tcp,fsc,local_lock=none,addr=xxx.xxx.xxx.xxx
async is implicit. UDP would perform better, but we opted for reliability; even over TCP it should perform a bit better, I guess :)
The webapp also caches stat() calls.
so far
-- LF
On Thu, Oct 24, 2013 at 5:15 AM, Leon Fauster leonfauster@googlemail.com wrote:
What kind of throughput and latency are you talking about here? NFS shouldn't add that much overhead to reads compared to disk head latency, and if you enable client caching it might be considerably faster. If you are writing over NFS you don't get the same options, though, and sync mounts are going to be slow.
Bonded interface (failover only): speed 1000Mb/s, full duplex.
ping gives a round-trip time of ~0.139 ms.
writes ~57.9 MB/s (dd test); reads ~59.7 MB/s uncached, 3.9 GB/s cached (dd test)
How do those compare to the native disk speed on your NFS server (if it is a host where you can access the disks locally)? And does the dd speed improve if you use a very large block size?
Am 24.10.2013 um 17:43 schrieb Les Mikesell lesmikesell@gmail.com:
On Thu, Oct 24, 2013 at 5:15 AM, Leon Fauster leonfauster@googlemail.com wrote:
Bonded interface (failover only): speed 1000Mb/s, full duplex.
ping gives a round-trip time of ~0.139 ms.
writes ~57.9 MB/s (dd test); reads ~59.7 MB/s uncached, 3.9 GB/s cached (dd test)
How do those compare to the native disk speed on your NFS server (if it is a host where you can access the disks locally)? And does the dd speed improve if you use a very large block size?
I had access to the NFS server now.
bs=128 -> 89 MB/s bs=512 -> 272 MB/s bs=1024 -> 421 MB/s bs=2048 -> 622 MB/s
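A sweep like the one above can be scripted; block sizes are written with explicit units here (the figures above presumably used KB blocks), and a local stand-in file replaces the server's disk so the sketch is self-contained. Larger blocks mean fewer syscalls per byte, which is why the rate climbs:

```shell
# Local stand-in; on the NFS server this would be a file on the
# PERC-backed RAID volume (hypothetical path).
FILE=$(mktemp)
dd if=/dev/zero of="$FILE" bs=1M count=16 2>/dev/null

# Sequential-read rate for increasing block sizes.
for BS in 128k 512k 1M 2M; do
    RATE=$(dd if="$FILE" of=/dev/null bs=$BS 2>&1 | tail -1)
    echo "bs=$BS -> $RATE"
done
```

Note these reads come from the page cache after the first pass; for disk figures you would drop caches (as root) between iterations.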
RAID with a Dell PERC controller.
-- LF