On Wed, Dec 18, 2013 at 3:13 PM, Lists <lists at benjamindsmith.com> wrote:
>
> I would differentiate BackupBuddy in that there is no "incremental" and
> "full" distinction. All backups are "full" in the truest sense of the
> word,

For the people who don't know, backuppc builds a directory tree for each
backup run, where the full runs are complete and the incrementals normally
contain only the changed files. However, when you access the incremental
backups through the web interface or the command line tools, the backing
full is automatically merged in, so you don't have to deal with the
difference - and when using rsync as the xfer method, deletions are
tracked correctly. As far as the rsync-based xfer goes, the difference
between a full and an incremental run is that the fulls add the
--ignore-times option to force a full block-checksum compare of the file
data, while the incrementals quickly skip files where the directory
timestamp and length match.

> and all backups are stored as native files on the backup server.
> This works using rsync's hard-link option to minimize wasted disk space.

Backuppc normally compresses the files for even more disk savings, and it
hard-links all files with identical content through its hash-based pooling
mechanism. This works across targets, not just for the unchanged files in
a single run, so it is great where you have copies of the same files on
many hosts.

> I'm evaluating ZFS and will likely include some features of ZFS into
> BBuddy as we integrate these capabilities into our backup processes.
> We're free to do this in part because we have redundant backup sets, so
> a single failure wouldn't be catastrophic in the short/medium term.

By the way, there is a new version of backuppc (4.0) in alpha testing that
does not use hardlinks for the pooling, plus some other changes that
should make it easier to rsync the whole archive to an offsite mirror. I
haven't tried it myself yet and am not sure off the top of my head whether
it chunks up large files for better pooling of the unchanged portions.

--
  Les Mikesell
  lesmikesell at gmail.com
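P.S. In case the merge idea isn't obvious, here is a rough Python sketch
of the concept. This is not backuppc's actual code - the tree layout and
the delete-marker convention are made up purely to illustrate how an
incremental can be overlaid on its backing full at read time:

import os

DELETED = ".deleted"  # hypothetical marker for a file removed since the full

def resolve(relpath, incr="incr", full="full"):
    """Return the path to read for relpath, checking the incremental first."""
    cand = os.path.join(incr, relpath)
    if os.path.exists(cand + DELETED):
        return None               # deleted since the full run
    if os.path.exists(cand):
        return cand               # changed/new file lives in the incremental
    cand = os.path.join(full, relpath)
    return cand if os.path.exists(cand) else None

The web interface and command line tools do that kind of lookup for you,
which is why an incremental browses just like a complete tree.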
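P.P.S. And a similarly made-up sketch of the pooling idea: hash the file
content, and make every file with the same content a hard link to a single
pooled copy. Backuppc's real pool layout, hash choice, and collision
handling all differ - this only shows the principle of why identical files
across many hosts cost the disk space of one:

import hashlib, os, shutil

POOL = "pool"  # hypothetical pool directory

def store(src, dest):
    """Store src at dest, hard-linked to a content-addressed pool entry."""
    h = hashlib.sha1()
    with open(src, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    digest = h.hexdigest()
    pooled = os.path.join(POOL, digest[:2], digest)  # fan out by hash prefix
    os.makedirs(os.path.dirname(pooled), exist_ok=True)
    if not os.path.exists(pooled):
        shutil.copy2(src, pooled)  # first time this content has been seen
    os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
    os.link(pooled, dest)          # every later copy is just another link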