John R Pierce wrote:
On 11/5/2013 7:52 AM, m.roth@5-cent.us wrote:
I don't think that's going to happen. First, we have an in-house developed backup system that works just fine. Second, we *are* backing up something over a hundred servers and workstations to a few backup servers. Third, we are talking, in some cases, of terabytes....
my backuppc server is doing a couple dozen VMs and physical machines, Linux+Solaris+AIX+Windows Server. I have backups going back to March, and the whole pool is just a few terabytes, with an effective compression of something like 50:1 due to the amount of duplication in those backups. the database server machines run a pre-backup script that either does a database dump or a freeze (for instance, PostgreSQL has pg_start_backup()), then the backup runs as normal, and a post-backup script cleans up (pg_stop_backup()).
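A minimal sketch of what such pre/post hooks might look like for a PostgreSQL host. The label, user, and the commented-out dump path are made up for illustration, and this uses the older pg_start_backup()/pg_stop_backup() API that was current at the time of this thread:

```shell
#!/bin/sh
# Hypothetical pre/post-backup hooks, along the lines described above.

pre_backup() {
    # Option 1: dump to a file that the file-level backup will then pick up.
    # pg_dumpall -U postgres > /var/backups/pg_dumpall.sql

    # Option 2: put the cluster into backup mode so the data directory
    # can be copied consistently while the server keeps running.
    psql -U postgres -c "SELECT pg_start_backup('nightly');"
}

post_backup() {
    # Take the cluster back out of backup mode once the file copy finishes.
    psql -U postgres -c "SELECT pg_stop_backup();"
}
```

BackupPC can be pointed at scripts like these via its $Conf{DumpPreUserCmd} and $Conf{DumpPostUserCmd} settings, so the freeze brackets exactly the window in which files are read.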
it's worked remarkably well for me, using a fraction of the disk space I'd planned for (both primary and mirror backup servers have room for 36 3.5" SAS drives, although they are only stuffed with 20x3TB in RAID 6+0 configurations).
As I noted, we make sure rsync uses hard links... but we have a good number of individual people and projects who *each* have a good number of terabytes of data and generated data. Some of our 2TB drives are over 90% full, and then there's the honkin' huge RAID, and at least one 14TB partition is over 9TB full....
mark