Les Mikesell wrote:
> On Tue, Nov 5, 2013 at 10:48 AM, <m.roth at 5-cent.us> wrote:
>>
>>> I'm not quite at that scale in a single instance myself, but I'm
>>> fairly sure many users on the backuppc mail list are, so it is not
>>> necessarily a problem, although there are some tradeoffs with extra
>>> overhead for compression and the extra pool hardlink. In any case it
>>> is trivial to install and test with the package in EPEL. Even if it
>>> doesn't replace your server backup system, you might find it useful to
>>> point at some workstations or windows boxes (it can use smb as well as
>>> rsync or tar to gather the files).
>>>
>> Heh. We don't do Windows. That's desktop support.... (As a side note, I
>> work for a federal contractor at a non-defense site, so scale is, um,
>> larger than many.)
>
> The scaling side of things just trades a little more CPU for
> compression and rsync-in-perl in return for vastly less disk
> consumption, so it's not a sure bet either way in that respect. I
> think you've mentioned some subsequent off-line archiving scheme for
> your data sets that wouldn't mesh very well, though. But everyone
> has lots of other stuff where backups would be nice to have, and
> backuppc makes it trivial to have, say, daily copies of all of /etc
> from all machines going back months - or your own home directory.
> And it doesn't blow up if you point it at a bunch of home directories
> where developers have checked out copies of the same big source trees.

Still not sounding like we need it. We back up /etc from all our servers
(except for the compute cluster nodes) every night, and keep about 5
weeks of history. Home directories are 100% NFS-mounted from servers, and
those are backed up every night onto a handful of backup servers, as are
various project directories.

mark
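
[A quick illustration of the pool-hardlink idea mentioned above: BackupPC
stores each unique file once in a pool and hardlinks every backup's copy to
it, so identical files (the same /etc files across hosts, or the same
checked-out source trees) consume disk space only once. This is a minimal
sketch of the mechanism, not BackupPC's actual pool layout; all paths here
are made up.]

```shell
set -e
tmp=$(mktemp -d)
# Illustrative layout: one pool file, two "backups" that reference it.
mkdir -p "$tmp/pool" "$tmp/backup-host1" "$tmp/backup-host2"
echo "identical /etc/hosts contents" > "$tmp/pool/0a1b"
# Hardlink, not copy: each backup gets a directory entry, not new data.
ln "$tmp/pool/0a1b" "$tmp/backup-host1/etc_hosts"
ln "$tmp/pool/0a1b" "$tmp/backup-host2/etc_hosts"
# Three names, one inode: the file's data is on disk exactly once.
echo "link count: $(stat -c '%h' "$tmp/pool/0a1b")"
rm -rf "$tmp"
```

The tradeoff Les mentions is that maintaining that extra pool link (and
optionally compressing pool files) costs some CPU and I/O per backup in
exchange for the disk savings.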