----- Original Message -----
On Wed, Jul 11, 2012 at 11:29 AM, Colin Simpson Colin.Simpson@iongeo.com wrote:
But think yourself lucky, BTRFS on Fedora 16 was much worse. This was the time it took me to untar a vlc tarball.
F16 to RHEL5       - 0m28.170s
F16 to F16 (ext4)  - 4m12.450s
F16 to F16 (btrfs) - 14m31.252s
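The numbers above come from timing an untar over the NFS mount. A self-contained sketch of that kind of test (with a small synthetic tarball standing in for the vlc source; all paths and file counts here are placeholders, and on a real run you would point the extraction at the NFS-mounted directory) might look like:

```shell
# Hypothetical timing harness; run the extraction step against an
# NFS mount to reproduce the comparison above.
workdir=$(mktemp -d)
cd "$workdir"

# Build a small source tree standing in for the vlc tarball.
mkdir -p src
for i in $(seq 1 50); do echo "data $i" > "src/file$i"; done
tar czf src.tar.gz src

# Time the extraction. On an NFS export with the default "sync"
# option, each file creation can force a commit to stable storage,
# which is where the large wall-clock differences come from.
mkdir out
time tar xzf src.tar.gz -C out

# Sanity-check that everything extracted.
test "$(ls out/src | wc -l)" -eq 50 && echo OK
```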
A quick test suggests this is better in F17 (3m7.240s on btrfs); it still looks like we are hitting NFSv4 issues, but btrfs itself has improved.
I wonder if the real issue is that NFSv4 waits for a directory change to sync to disk, but Linux wants to flush the whole disk cache before reporting the sync as complete.
-- Les Mikesell lesmikesell@gmail.com
I think you are right that it is the forcing of the sync operation for all writes in NFSv4 that is making it slow. I just tested on a server and client both running RHEL 6.3. I exported a directory containing an old tar.gz of the OpenOffice 3.0 distribution for Linux (175MB). Exported with the default sync option, extraction from the client mount took 26 seconds. Exported with the async option, the extraction took only 4 seconds.

To be clear about the test setup: this is over 1GbE. The NFS server has an Intel Core i3-2125 CPU @ 3.3GHz and 16GB RAM, and the export directory is on a 22-drive Linux RAID6 connected via a 6Gb/s SAS HBA. The client is an Intel Core 2 Duo E8400 @ 3GHz with 4GB RAM.
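For reference, a sketch of the two export configurations being compared; the export path and client subnet are placeholders, and the other options shown are just common defaults:

```
# /etc/exports
# sync (the default): the server replies to a write only after the
# data has been committed to stable storage -- safe, but slower.
/srv/export  192.168.1.0/24(rw,sync,no_subtree_check)

# async: the server may reply before data reaches disk -- much
# faster, but data can be lost if the server crashes mid-write.
#/srv/export  192.168.1.0/24(rw,async,no_subtree_check)
```

After editing /etc/exports, re-export with `exportfs -ra` and re-run the timing from the client mount.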
Mark,
Have you tried using async in your export options yet? Any difference?
David.