On Thu, Jul 3, 2014 at 2:48 PM, Lists <lists at benjamindsmith.com> wrote:
>
> On 07/03/2014 12:23 PM, Les Mikesell wrote:
>> But, since this is about postgresql, the right way is probably just to
>> set up replication and let it send the changes itself instead of doing
>> frequent dumps.
>
> Whatever we do, we need the ability to create a point-in-time history.
> We commonly use our archival dumps for audit, testing, and debugging
> purposes. I don't think PG + WAL provides this type of capability.

I think it does. You should be able to take a base backup plus some
number of incremental WAL segments that you can replay to get to a
point in time (rough sketches in the P.S. below). Recovery might take
longer than loading a single dump, though.

Depending on your data, you might be able to export it as tables in
sorted order for snapshots that would diff nicely, but it is painful to
maintain anything that breaks whenever the data schema changes.

> So at the moment we're down to:
>
> A) run PG on a ZFS partition and snapshot ZFS.
> B) Keep making dumps (as now) and use lots of disk space.
> C) Cook something new and magical using diff, rdiff-backup, or related
> tools.

Disk space is cheap - and pg_dumps usually compress pretty well. But if
you have time to experiment, I'd like to know how rdiff-backup or
BackupPC 4 performs.

--
Les Mikesell
lesmikesell at gmail.com
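
P.S. For what it's worth, a rough, untested sketch of the
base-backup-plus-WAL setup, assuming a 9.x-era server; the paths are
placeholders you would adjust:

  # postgresql.conf: archive each completed WAL segment
  #   wal_level = archive
  #   archive_mode = on
  #   archive_command = 'cp %p /backup/wal/%f'

  # take a periodic base backup of the whole cluster
  pg_basebackup -D /backup/base-$(date +%F) -Ft -z

  # to recover to a point in time: untar the base backup into an empty
  # data directory, add a recovery.conf containing
  #   restore_command = 'cp /backup/wal/%f %p'
  #   recovery_target_time = '2014-07-01 12:00:00'
  # then start postgres and it replays archived WAL up to that timestamp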
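
The sorted-export idea would look something like this per table (the
database and table names are made up; the diffs only stay small while
the schema and sort key stay stable):

  # dump each table in primary-key order so snapshots diff well
  psql -d mydb -c \
    "COPY (SELECT * FROM accounts ORDER BY id) TO STDOUT" \
    > snapshots/accounts.tsv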
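
On compression, the custom dump format is already compressed by
default, or you can gzip a plain dump:

  pg_dump -Fc mydb > mydb.dump        # custom format, compressed
  pg_dump mydb | gzip > mydb.sql.gz   # plain SQL through gzip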
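
And if you do try rdiff-backup, the experiment can be as simple as
pointing it at the directory the nightly dumps land in (paths again
placeholders):

  # keeps a full current copy plus reverse deltas for history
  rdiff-backup /var/backups/pgdumps /mnt/archive/pgdumps
  rdiff-backup --list-increments /mnt/archive/pgdumps

One caveat: uncompressed plain-format dumps should delta far better
than gzipped ones, since compression scrambles the byte stream from
run to run.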