But ... I've been reading about some of the issues with ZFS performance, and it turns out it needs a *lot* of RAM to support decent caching ... the usual recommendation is a GByte of RAM per TByte of storage just for the metadata, which adds up quickly on larger pools. Maybe cache-memory starvation is one reason why so many disappointing benchmark results are showing up.
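If I understand it right, one way to check whether that's actually what's happening is to look at the ARC kstats directly. Just a sketch, assuming a Solaris/OpenSolaris box; the values are in bytes:

    # current ARC size vs. its configured ceiling
    kstat -p zfs:0:arcstats:size
    kstat -p zfs:0:arcstats:c_max

    # cache hits vs. misses; a poor ratio can hint at a starved cache
    kstat -p zfs:0:arcstats:hits
    kstat -p zfs:0:arcstats:misses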
Yes, it uses most of any available RAM as a cache (the "ARC"). Newer implementations can also use SSDs as a kind of second-level cache ("L2ARC"), and the intent log (ZIL) can be written to a dedicated NVRAM or SSD device, speeding up synchronous writes even more. Compared with the cache RAM on RAID controllers, RAM for servers is dirt-cheap.
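Adding those devices is just a pool operation. Roughly something like this (just a sketch; "tank" and the device names are placeholders):

    # attach an SSD as a second-level read cache (L2ARC)
    zpool add tank cache c2t0d0

    # put the intent log on a separate fast NVRAM/SSD device
    zpool add tank log c3t0d0

    # both show up as their own vdevs
    zpool status tank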
The philosophy is: why put a tiny, expensive amount of RAM into the RAID controller and have it guess what should be cached and what shouldn't, when we can add RAM to the server directly at a fraction of the cost and let the OS handle _everything_ short of moving the disk heads over the platters?
IMO, it's a brilliant concept.
Do you know if there is much of a performance penalty with KVM/VBox compared to Solaris Zones?