On 03.07.2014 at 21:19, John R Pierce <pierce at hogranch.com> wrote:
> On 7/2/2014 12:53 PM, Lists wrote:
>> I'm trying to streamline a backup system using ZFS. In our situation,
>> we're writing pg_dump files repeatedly, each file being highly similar
>> to the previous file. Is there a file system (e.g. ext4? xfs?) that, when
>> re-writing a similar file, will write only the changed blocks and not
>> rewrite the entire file to a new set of blocks?
>>
>> Assume that we're writing a 500 MB file with only 100 KB of changes.
>> Other than a utility like diff, is there a file system that would only
>> write 100 KB and not 500 MB of data? In concept, this would work
>> similarly to using the 'diff' utility...
>
> You do realize that adding, removing, or even changing the length of a
> single line in that pg_dump file will change every block after it, as
> the data will be offset?
>
> May I suggest that instead of pg_dump, you use pg_basebackup and WAL
> archiving... this is the best way to do delta backups of a SQL database
> server.

Additionally, I'd be extremely careful with ZFS dedup. It uses much more memory than "normal" ZFS and tends to consume more I/O, too.
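For reference, a minimal sketch of the pg_basebackup + WAL-archiving setup John suggests. The paths are placeholders, and the settings assume a PostgreSQL 9.x-era server; check the documentation for your version:

```shell
# postgresql.conf -- enable WAL archiving (archive directory is hypothetical)
wal_level = archive        # use 'replica' on PostgreSQL 9.6 and later
archive_mode = on
archive_command = 'test ! -f /mnt/backup/wal/%f && cp %p /mnt/backup/wal/%f'

# Then take one full base backup; afterwards only WAL segments ship:
#   pg_basebackup -D /mnt/backup/base -Ft -z
```

With this in place, the incremental data written per backup cycle is roughly proportional to the change volume (the WAL), rather than the full dump size.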
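John's point about offsets is easy to demonstrate. The sketch below (block size and sample data are my own assumptions, not from the thread) splits a buffer into fixed-size filesystem blocks and shows that inserting a single byte near the start leaves essentially no block unchanged at its original offset, which is why block-level copy-on-write or dedup gains little on a re-written pg_dump file:

```python
BLOCK = 4096  # assume a 4 KiB filesystem block size

def blocks(data: bytes):
    """Split data into fixed-size filesystem blocks."""
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

old = bytes(range(256)) * 1024        # 256 KiB of sample "dump" data
new = old[:100] + b"X" + old[100:]    # one byte inserted near the start

old_blocks, new_blocks = blocks(old), blocks(new)

# Count blocks that are still identical at the same file offset.
unchanged = sum(a == b for a, b in zip(old_blocks, new_blocks))
print(f"{unchanged} of {len(old_blocks)} blocks unchanged")  # → 0 of 64
```

Every block after the insertion point sees its contents shifted by one byte, so none of them matches its old on-disk counterpart.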