On Thu, Oct 24, 2013 at 01:59:15PM -0700, John R Pierce wrote:
> On 10/24/2013 1:41 PM, Lists wrote:
>> Was wondering if anybody here could weigh in with real-life experience? Performance/scalability?
> I've only used ZFS on Solaris and FreeBSD. Some general observations...
> - you need a LOT of RAM for decent performance on large zpools. 1GB of RAM
> above your basic system/application requirements per terabyte of zpool is not unreasonable.
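A note on the RAM point: in practice the knob is usually the ARC cap. On ZFS on Linux, for example, that's the zfs_arc_max module parameter; the 8GB value below is purely illustrative, size it to your own box:

    # cap the ARC at 8GB (value in bytes)
    echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
    # or apply it immediately at runtime:
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max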
> - don't go overboard with snapshots. A few hundred are probably OK, but
> thousands (*) will really drag down the performance of operations that enumerate file systems.
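If you suspect snapshot sprawl, counting them is cheap:

    # how many snapshots is this host carrying?
    zfs list -H -t snapshot -o name | wc -l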
> - NEVER let a zpool fill up above about 70% full, or the performance
> really goes downhill.
Have run into this one (again -- with Nexenta) as well. It can be pretty dramatic. We tend to set quotas to ensure we don't exceed 75% or so max, but...
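A sketch of the quota approach, with a hypothetical dataset name (tank/data on a 10T pool) and the 75% line from above:

    # watch utilization; performance sags well before the pool is actually full
    zpool list -H -o name,capacity
    # cap the working dataset at ~75% of a 10T pool
    zfs set quota=7.5T tank/data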
...at least on the Solaris side, there's a tunable you can set that keeps the metaslab space maps (which get fragmented and inefficient when pool utilization is high) entirely in memory. This completely resolves our throughput issue, but it does require sufficient memory to hold them all...

    echo "metaslab_debug/W 1" | mdb -kw

There may be a ZoL (ZFS on Linux) equivalent.
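Two follow-ups, neither of which I've verified on our boxes: the mdb write above only lasts until reboot, so on Solaris you'd persist it with the usual /etc/system line:

    set zfs:metaslab_debug = 1

And recent ZoL appears to split the same idea into two module parameters; treat the names as an assumption and check your version:

    # load all space maps at pool import, and keep them from unloading
    echo 1 > /sys/module/zfs/parameters/metaslab_debug_load
    echo 1 > /sys/module/zfs/parameters/metaslab_debug_unload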
> - I prefer using striped mirrors (aka raid10) over raidz/z2, but my
> applications are primarily database.
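In zpool terms that layout is just a stripe of mirror vdevs; a sketch with hypothetical device names:

    # four disks as two mirrored pairs, striped (raid10-style)
    zpool create tank mirror sda sdb mirror sdc sdd
    # grow the stripe a pair at a time later
    zpool add tank mirror sde sdf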
> (*) ran into a guy who had hundreds of ZFS 'file systems' (mount points) for per-user home directories, with nightly snapshots going back several years, and his zfs commands were taking a long, long time to do anything; he couldn't figure out why. I think he had over 10,000 filesystem-snapshot combinations.
Ray