Tuptus writes:
None of this would have happened if Agnello George hadn't written:
We have multiple servers (approx. 10), each with about 100 GB of data in the /var/lib/mysql dir. Excluding tar, mysqldump, and replication, how do we back these databases up to a remote machine and store them datewise? (The remote machine has a 2 TB HDD.)
Currently tar is not feasible as the data is too large, and the same goes for mysqldump.
Any suggestion would be a great help.
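One option the question doesn't rule out is a filesystem-level copy with rsync over SSH, with each day stored in a dated directory and `--link-dest` hard-linking files unchanged since the previous run, so the 2 TB disk doesn't fill up with full copies. This is only a hedged sketch: the hostname and paths are examples, and copying a live /var/lib/mysql is only consistent if you rsync from an LVM snapshot or with the server stopped/flushed. The script below prints the commands it would run; drop the `echo`s to execute them for real.

```shell
#!/bin/sh
# Sketch of a datewise rsync backup, run from each DB server.
# BACKUP_HOST and the /backup path are assumptions, not real hosts.
# NOTE: rsync of a live /var/lib/mysql is only consistent if taken
# from an LVM snapshot or with mysqld stopped/flushed.
BACKUP_HOST=backup.example.com            # assumed remote 2 TB machine
TODAY=$(date +%F)                         # e.g. 2011-05-31
YESTERDAY=$(date -d yesterday +%F)        # previous run, for --link-dest
DEST=/backup/$(hostname -s)/$TODAY        # per-host, per-date directory

# --link-dest hard-links files unchanged since yesterday's run, so each
# dated directory looks like a full copy but only changed files consume
# new space on the backup disk.
echo rsync -a --delete \
  --link-dest=../"$YESTERDAY" \
  /var/lib/mysql/ \
  "$BACKUP_HOST:$DEST/"
```

A nightly cron entry per server pointing at this script would give one dated, browsable directory per day on the remote machine.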
Why not mysqldump? I suggest mysqldump to a local disk, then backing that up to the remote machine. I use it with __bacula__.
-- Tuptus
AFAIK mysqldump locks the tables, and having the tables locked while you dump 100 GB of data is very annoying, if not unacceptable. The best solution by far in this circumstance (yes, I know he said he doesn't want replication) is to set up master/slave replication and perform the dump on the slave.
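Worth noting: the locking concern mostly applies to MyISAM. If the tables are InnoDB, mysqldump's `--single-transaction` option takes a consistent snapshot without holding table locks. A hedged sketch, with an assumed dump path (the script prints the commands rather than running them, since it would need a live server and credentials):

```shell
#!/bin/sh
# Sketch: lock-free datewise dump for an InnoDB-only setup.
# The /var/backups path is an example, not a required location.
DUMPDIR=/var/backups/mysql/$(date +%F)    # datewise directory
echo mkdir -p "$DUMPDIR"
# --single-transaction: consistent read without LOCK TABLES (InnoDB
#   only; any MyISAM tables are still locked while they are copied)
# --quick: stream rows instead of buffering each table in memory
# -r: write the dump to a file instead of stdout
echo mysqldump --all-databases --single-transaction --quick \
  -r "$DUMPDIR/all-databases.sql"
```

Even so, on 100 GB the dump and the eventual restore both take a long time, which is why doing it on a slave remains attractive.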
My 2 pence.
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos