[CentOS] CentOS and LessFS

Les Mikesell lesmikesell at gmail.com
Wed Jan 18 04:02:47 UTC 2012


On Tue, Jan 17, 2012 at 9:43 PM, Nataraj <incoming-centos at rjl.com> wrote:
>
>>> At this point I am only reading the experience of others, but I am
>>> inclined to try it.  I backup a mediawiki/mysql database and the new
>>> records are added to the database largely by appending.  Even with
>>> compression, it's a pain to backup the whole thing every day.  Block
>>> level dedup seems like it would be a good solution for that.
>> You are still going to have to go through the motions of copying the
>> whole thing and letting the receiving filesystem do hash comparisons
>> on each block to accomplish the dedup.
> I'm not sure about that.  They support deduplication over the network.
> There is a command something like 'zfs send', but maybe it requires that
> the filesystem you are backing up is also zfs.

Yes, you can make a filesystem snapshot on zfs and do an incremental
'send' to a remote copy of the previous snapshot, where the receive
operation will merge the changed blocks.  That does sound efficient in
terms of bandwidth, but it would require a one-to-one setup for every
filesystem you want to back up, and I'm not sure what kind of
contortions it would take to get the whole snapshot back and restore
it as the live filesystem.
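
Roughly, the incremental cycle looks something like this (the pool,
dataset, snapshot and host names here are just placeholders):

  # first pass: send a full snapshot to the backup host
  zfs snapshot tank/wiki@2012-01-17
  zfs send tank/wiki@2012-01-17 | ssh backuphost zfs receive backup/wiki

  # later passes: send only the blocks changed since the last snapshot
  zfs snapshot tank/wiki@2012-01-18
  zfs send -i tank/wiki@2012-01-17 tank/wiki@2012-01-18 | \
      ssh backuphost zfs receive backup/wiki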

If you run backuppc over low bandwidth connections, you might come out
ahead copying an uncompressed database dump with rsync as the
transport, because rsync can match unchanged data against the existing
copy and avoid resending it over the network.  However, the way
backuppc works, if the file has changed at all the server side will
end up reconstructing the whole file and saving a complete new copy.
On a fast local connection you are probably better off compressing the
db dump (and database dumps usually compress a lot) and letting it
copy the whole thing.
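
Outside of backuppc, the standalone rsync version of that idea would
be something along these lines (database name and paths are made up):

  # dump uncompressed so rsync's delta transfer can match the unchanged
  # regions against yesterday's copy on the backup host
  mysqldump --single-transaction wikidb > /var/backups/wikidb.sql
  rsync -av --inplace /var/backups/wikidb.sql backuphost:/srv/dumps/

  # on a fast local link, compress instead and just copy the whole thing
  mysqldump --single-transaction wikidb | gzip > /var/backups/wikidb.sql.gz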

-- 
   Les Mikesell
      lesmikesell at gmail.com


