We have multiple servers, approximately 10, and each has about 100 GB of data in the /var/lib/mysql directory. Excluding tar, mysqldump and replication, how do we back up these databases to a remote machine and store them date-wise? (The remote machine has a 2 TB HDD.)
Currently tar is not feasible as the data is too large, and the same goes for mysqldump.
Any suggestions would be of great help.
On 08/14/2010 12:51 PM, Agnello George wrote:
[original question snipped]
Assuming you installed using LVM partitions (and that you left space for snapshots ;) ): stop the database, take an LVM snapshot, restart the database, rsync the MySQL data directory to the other machine, then release the snapshot.
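A minimal sketch of that sequence, assuming the data directory sits on a logical volume /dev/vg0/mysql and the backup host is reachable as backuphost (the volume, snapshot and host names are all illustrative):

    # stop MySQL briefly so the on-disk files are consistent
    service mysqld stop
    # copy-on-write snapshot; needs unallocated space in the volume group
    lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql
    service mysqld start    # downtime ends here, after a few seconds
    # mount the frozen snapshot read-only and ship it off date-wise
    mkdir -p /mnt/mysql-snap
    mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap
    rsync -a /mnt/mysql-snap/ backuphost:/backups/$(hostname)/$(date +%F)/
    # release the snapshot so it stops consuming copy-on-write space
    umount /mnt/mysql-snap
    lvremove -f /dev/vg0/mysql-snap

The only downtime is the stop/snapshot/start window, typically seconds rather than the hours a full copy would take.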
On 08/14/2010 12:59 PM, Benjamin Franz wrote:
On 08/14/2010 12:51 PM, Agnello George wrote:
[original question snipped]
Assuming you installed using LVM partitions (and that you left space for snapshots ;) ): stop the database, take an LVM snapshot, restart the database, rsync the MySQL data directory to the other machine, then release the snapshot.
Correction: rsync the *snapshot* of the mysql data directory to the other machine.
On 14/08/2010 21:51, Agnello George wrote:
[original question snipped]
The problem I have encountered with this kind of setup is that huge backups take a lot of time, which can translate (depending on the backend used) into long downtime for the production instance. I generally don't find it feasible to run backups on production instances.
Instead I suggest setting up a slave instance on another machine. The backup procedure would roughly be:
* stop the slave instance
* take an LVM snapshot
* start the slave instance again
* do whatever you want with the snapshot data (sorry!)
The advantages, as I see them:
* Backup peak I/O is moved away from the production instance. Of course you still have the constant replication overhead; as a variation, you could start the replication thread only periodically (see the sketch below).
* The production instance does not have to be shut down, i.e. it is not directly affected by backups.
* The slave instance could act as a failover.
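A sketch of the "replicate only periodically" variation as a cron job on the slave, assuming credentials live in ~/.my.cnf and the slave's data directory is on /dev/vg0/mysql (all names illustrative):

    #!/bin/bash
    # let the slave catch up with the master (crude readiness check)
    mysql -e "START SLAVE;"
    until mysql -e "SHOW SLAVE STATUS\G" | grep -q "Seconds_Behind_Master: 0$"; do
        sleep 60
    done
    # pause replication again and snapshot the now-idle slave
    mysql -e "STOP SLAVE;"
    service mysqld stop
    lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql
    service mysqld start
    # with skip-slave-start in my.cnf, replication stays paused until the next run
    # ... mount, rsync and lvremove the snapshot as in the earlier sketch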
Regards, Markus
On Sun, 15 Aug 2010, Agnello George wrote:
[original question snipped]
Would there be some way of tee-ing the SQL statements off to a remote file in real time? In effect you would be creating a text-file dump of the databases in real time.
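MySQL's binary log is essentially that record already, if log-bin is enabled in my.cnf. A hypothetical sketch of pulling it over the network as replayable SQL text (host, user and log-file names are illustrative, and the user needs the REPLICATION SLAVE privilege):

    # decode the server's binary log into SQL statements, over the network
    mysqlbinlog --read-from-remote-server --host=dbserver1 \
        --user=backup --password mysql-bin.000001 \
        >> /backups/dbserver1-statements.sql

Run periodically this approximates a real-time text dump, though restoring from it still requires a full base dump to replay against.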
Kind Regards,
Keith Roberts
Keith Roberts wrote:
Would there be some way of tee-ing the SQL statements off to a remote file in real time? In effect you would be creating a text-file dump of the databases in real time.
For uninterrupted delivery of dynamic content from the database, i.e. no downtime, replication to a slave is the way to go. This is 'sort of' a tee-ing effect, except that it goes to another database. That slave database, however, can be stopped, a mysqldump taken for the backup, and then restarted, at which point replication resumes and the slave database is updated to match the master. It works really well without a huge increase in overhead.
Google MySQL replication for lots of info about setting it up.
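A rough sketch of that stop/dump/restart cycle on the slave, assuming credentials live in ~/.my.cnf (the backup path is illustrative):

    # pause the replication threads so the dump sees a fixed position
    mysql -e "STOP SLAVE;"
    mysqldump --all-databases | gzip > /backups/all-$(date +%F).sql.gz
    # resume; the slave replays whatever it missed and catches up
    mysql -e "START SLAVE;"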
John Hinton
On 08/14/2010 03:58 PM, John Hinton wrote:
[John Hinton's replication suggestion snipped]
I didn't include this since the OP specified _other_ than tar, replication and mysqldump.
But the most efficient, lowest-downtime approach is to combine replication with the LVM snapshot and rsync. Since the OP has around 100 GB of data, rsyncing the snapshotted data directory will be substantially more efficient than using mysqldump for the transfer to the remote system (assuming he doesn't churn most of the contents of the database between syncs).
Agnello George wrote:
[original question snipped]
Why not mysqldump? I suggest running mysqldump to a local disk and backing that up to the remote machine. I use it with __bacula__.
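A sketch of that approach, writing one compressed, date-stamped dump per database into a directory that bacula (or any file-based backup tool) already backs up; the path is illustrative:

    for db in $(mysql -N -e "SHOW DATABASES;" | grep -v '^information_schema$'); do
        mysqldump "$db" | gzip > "/var/backups/mysql/${db}-$(date +%F).sql.gz"
    done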
Tuptus writes:
[original question snipped]
Why not mysqldump? I suggest running mysqldump to a local disk and backing that up to the remote machine. I use it with __bacula__.
AFAIK mysqldump locks the tables, and having the tables locked while you dump 100 GB of data is very annoying, if not unacceptable. The best solution by far in this circumstance (yes, I know he said he doesn't want replication) is to set up master/slave replication and perform the dump on the slave.
My 2 pence.
On 08/15/2010 09:49 AM, nux@li.nux.ro wrote:
[earlier quoting snipped]
AFAIK mysqldump locks the tables, and having the tables locked while you dump 100 GB of data is very annoying, if not unacceptable. The best solution by far in this circumstance (yes, I know he said he doesn't want replication) is to set up master/slave replication and perform the dump on the slave.
No, the best solution in this circumstance is to use master/slave replication and use LVM to take a static snapshot of the slave. That lets you use rsync efficiently to sync it to the remote box.
A straight dump doesn't transfer efficiently because there is no good way to 'null out' unchanged data during the remote sync of a regular dump. With an LVM snapshot the file structure is preserved between runs, allowing rsync to send only *changes* instead of *everything* after the first sync. That could easily be the difference between the remote sync taking a day and taking 1 to 2 hours (assuming you can read from your drives at around 40 MB/s sustained but can only get 10 Mbit/s sustained over the network).
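Not mentioned in the thread, but rsync's --link-dest option also fits the OP's date-wise requirement on the 2 TB box: files unchanged since the previous day's run are hard-linked rather than stored again. A sketch, run on the backup host (host names and directory layout are illustrative):

    TODAY=$(date +%F)
    YESTERDAY=$(date -d yesterday +%F)
    # each dated directory looks like a full copy, but unchanged files
    # cost no additional disk space
    rsync -a --link-dest="/backups/db1/$YESTERDAY" \
        db1:/mnt/mysql-snap/ "/backups/db1/$TODAY/"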
Why not mysqldump + binlog + rsync?
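One hypothetical way those three fit together (assumes log-bin is enabled; host names and paths are illustrative): take a periodic full dump, then rotate and ship the binary logs in between, so a restore is the last dump plus a replay of the later logs.

    # weekly: full dump, rotating the binlog so the dump marks a clean cutoff
    mysqldump --all-databases --flush-logs | gzip > /backups/full-$(date +%F).sql.gz
    # daily: close the current binlog and ship the finished ones
    mysqladmin flush-logs
    rsync -a /var/lib/mysql/mysql-bin.[0-9]* backuphost:/backups/$(hostname)/binlogs/
    # restore: load the newest full dump, then replay the binlogs after it
    #   gunzip < full-DATE.sql.gz | mysql
    #   mysqlbinlog mysql-bin.000123 | mysql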
Tang Jianwei
Currently tar is not feasible as the data is too large, and the same goes for mysqldump.
Any suggestions would be of great help.
Not really an answer, but a couple of good books on the subject:
http://oreilly.com/catalog/9780596807290/
http://oreilly.com/catalog/9780596101718/
Matt