Keith Roberts wrote:
On Sun, 15 Aug 2010, Agnello George wrote:
We have multiple servers, approximately 10, and each has about 100 GB of data in the /var/lib/mysql directory. Excluding tar, mysqldump, and replication, how do we back up these databases to a remote machine and store them datewise? (The remote machine has a 2 TB HDD.)

Currently tar is not feasible as the data is too large, and the same goes for mysqldump.

Suggestions will be of great help.
Would there be some way of tee-ing off the SQL statements to a remote file in real time, so that in effect you are creating a text-file dump of the databases as they change?
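Something like this might approximate it, assuming a MySQL new enough for mysqlbinlog --stop-never (5.6 and later, so newer than the servers in this thread may be running): stream the master's binary log, which records every change, to the backup host, and decode it to SQL text afterwards. The host name, user, backup path, and log file name below are placeholders only:

    # Follow the master's binary log from the backup machine, in near real time.
    # --raw writes the binlog files verbatim into the current directory;
    # --stop-never keeps the connection open and tails new events as they arrive.
    cd /backup/binlogs
    mysqlbinlog --read-from-remote-server --host=db-master \
        --user=backup -p --raw --stop-never mysql-bin.000001

    # Later, decode the captured binlogs into an SQL text dump:
    mysqlbinlog /backup/binlogs/mysql-bin.* > statements.sql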
Kind Regards,
Keith Roberts
For uninterrupted delivery of dynamic content from the database, i.e. no downtime, replication to a slave is the way to go. This is 'sort of' a tee-ing effect, except that it goes to another database. That slave database, however, can be stopped, a mysqldump done to a backup, and then restarted, at which point replication resumes and the slave database is updated to match the master. It works really well without huge overhead increases.
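On the slave, that stop/dump/restart cycle might look roughly like this sketch; the /backup path and the date-stamped filename are assumptions for illustration, not anything from this thread:

    #!/bin/sh
    # Sketch: back up a replication slave without touching the master.
    # Assumes MySQL credentials are available, e.g. via ~/.my.cnf.
    DATE=$(date +%F)
    mysql -e "STOP SLAVE;"            # pause replication; master is unaffected
    mysqldump --all-databases > /backup/mysql-all-$DATE.sql
    mysql -e "START SLAVE;"           # slave catches back up to the master

Run nightly from cron, this yields one datewise dump per server; compressing or pruning old dumps would keep ten servers within the 2 TB disk.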
Google MySQL replication for lots of info about setting it up.
John Hinton