On 01/17/2012 07:31 PM, Les Mikesell wrote:
Nothing will fix a file if the disk underneath goes bad and you aren't running RAID. And in my case I run RAID1 and regularly swap disks out for offsite copies and resync. But BackupPC makes the links based on an actual comparison, so if an old copy is somehow corrupted, the next full will be stored separately, not linked.
ZFS has an option to turn on full data comparison instead of just checksums.
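If I'm reading the ZFS docs right, that's the "verify" flavor of the dedup property, set per dataset with something like (the dataset name here is just a placeholder):

    zfs set dedup=sha256,verify tank/backuppc

With that set, ZFS does a byte-for-byte comparison of any two blocks whose checksums collide before it links them, rather than trusting the hash alone.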
At this point I am only reading about the experience of others, but I am inclined to try it. I back up a MediaWiki/MySQL database, and new records are added to the database largely by appending. Even with compression, it's a pain to back up the whole thing every day. Block-level dedup seems like it would be a good solution for that.
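As a sketch of what I have in mind (pool, dataset, and database names are made up), assuming the dumps land on a ZFS dataset with dedup turned on:

    # create a deduplicated, compressed dataset for the nightly dumps
    zfs create -o dedup=on -o compression=on tank/mysql-backups

    # dump the wiki database; since the data mostly grows by appending,
    # most blocks of tonight's dump should dedup against last night's
    mysqldump --single-transaction wikidb > /tank/mysql-backups/wikidb-$(date +%F).sql

That only pays off if the unchanged parts of the dump stay block-aligned, which should mostly be the case for an append-heavy database.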
You are still going to have to go through the motions of copying the whole thing and letting the receiving filesystem do hash comparisons on each block to accomplish the dedup.
I'm not sure about that. ZFS supports deduplication over the network. There is a command, something like 'zfs send', but maybe it requires that the filesystem you are backing up is also ZFS.
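As far as I can tell it works on snapshots, so the source would have to be ZFS as well. If I'm reading the man page right, an incremental, deduplicated send looks roughly like this (pool, dataset, and host names are made up):

    # snapshot, then send only the blocks changed since the previous
    # snapshot; -D deduplicates the stream itself on the way out
    zfs snapshot tank/wikidb@2012-01-17
    zfs send -D -i tank/wikidb@2012-01-16 tank/wikidb@2012-01-17 | \
        ssh backuphost zfs receive backuppool/wikidb

So the receiving side wouldn't have to re-read and re-hash the whole thing; only the changed blocks go over the wire.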
Nataraj