Hi all.
I usually make backups of my MySQL databases.
I back up a whole database, for example:
mysqldump -u user -ppassword name_db > backups.sql
I also back up just its schema, for example:
mysqldump -u user -ppassword name_db --no-data > backups.sql
But now I need incremental backups, because the database is very big (500 MB).
How can I do this?
Regards
Check out MySQL-zrm. Handles all types of MySQL backups locally or remotely.
http://www.zmanda.com/download-zrm.php
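Under the hood, incremental MySQL backups like ZRM's are driven by the server's binary log. If you want to see the mechanism it automates, here is a rough manual sketch (user, password, database and log file names are just examples, and binary logging must be enabled with log-bin in my.cnf):

# Full dump; --flush-logs starts a fresh binary log, and --master-data=2
# records the log name/position in the dump as a comment:
mysqldump -u user -ppassword --single-transaction --flush-logs --master-data=2 name_db > full.sql

# Later, each "incremental" is just the binary logs written since the dump:
mysqladmin -u user -ppassword flush-logs
cp /var/lib/mysql/mysql-bin.000001 /backups/incrementals/   # example path and name

# Restore = load the full dump, then replay the logs on top:
mysql -u user -ppassword name_db < full.sql
mysqlbinlog /backups/incrementals/mysql-bin.000001 | mysql -u user -ppassword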
Chris
On Mon, Nov 10, 2014 at 12:25 PM, Rodrigo Pichiñual Norin <rodrigo.pichinual@gmail.com> wrote:
> [original question and signature snipped]
On Mon, November 10, 2014 1:25 pm, Rodrigo Pichiñual Norin wrote:
> [original question snipped]
I've seen somewhere a script that does the following: it dumps all databases, then commits the dumps into some version control system. CVS and Subversion come to mind. Since these store diffs of text files (and this way keep the changes from version to version), and a database dump is ASCII text, this fulfills pretty well what you need. I must confess I never got to the point of setting it up that way myself: with the database sizes we have, and the space we can devote, we can handle a week's worth of daily full dumps, and a couple of months of backups of those... but one day I'll do it this way. The following may not be what I originally saw, but it seems to do exactly this:
http://rocketmodule.com/blog/database-backup-dump-and-svn-commit-script-drup...
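For the record, a minimal sketch of such a script, using Subversion like the link above (paths, credentials and file names are hypothetical; --skip-extended-insert writes one INSERT per row, which keeps the stored diffs small):

#!/bin/sh
# Hypothetical nightly dump-and-commit job.
WC=/var/backups/db-svn    # an existing svn working copy
mysqldump -u user -ppassword --skip-extended-insert name_db > "$WC/name_db.sql"
cd "$WC" || exit 1
svn add -q --force name_db.sql              # no-op if already under version control
svn commit -q -m "nightly dump $(date +%F)"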
Valeri
++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++
On Mon, Nov 10, 2014 at 8:25 PM, Rodrigo Pichiñual Norin <rodrigo.pichinual@gmail.com> wrote:
> [original question snipped]
Hola,
If size is a concern, just export your DBs gzipped:
mysqldump -u user database | gzip > backup_database.sql.gz
Also, if you are concerned about the time to compress, you can enable multithreaded parallel gzip compression with pigz (available in EPEL):
mysqldump -u user database | pigz > backup_database.sql.gz
TBH, 500 MB databases aren't big enough to warrant a more complex approach (unless you have more ambitious RTO/RPO requirements).
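And if you want several versions on hand, a date-stamped dump plus a retention prune does it; a minimal cron-style sketch (the path and the 30-day retention are just examples):

# Nightly: one compressed, date-stamped dump per database
mysqldump -u user -ppassword name_db | gzip > /var/backups/name_db-$(date +%F).sql.gz
# Prune anything older than 30 days
find /var/backups -name 'name_db-*.sql.gz' -mtime +30 -delete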
HTH
On Tue, November 11, 2014 9:10 am, Fran Garcia wrote:
> On Mon, Nov 10, 2014 at 8:25 PM, Rodrigo Pichiñual Norin <rodrigo.pichinual@gmail.com> wrote:
> > [original question snipped]
> Hola,
> If size is a concern, just export your DBs gzipped:
> mysqldump -u user database | gzip > backup_database.sql.gz
I would stay away from compression. Compression results in a binary file. Even though the compressed result is smaller, when you try to keep several versions back you will have multiple compressed file versions hitting your backup storage. The original poster's intent is better: to keep diffs of ASCII dump files. If one commits dumps into some version control system, then even though its files may be treated as binary, you only need the latest version on backup (as it contains all versions of the database).
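To make that concrete: plain-text dumps taken with --skip-extended-insert (one INSERT per row) diff down to just the changed rows, which is essentially what the version control system stores. File names below are only illustrative:

mysqldump -u user -ppassword --skip-extended-insert name_db > dump-2014-11-10.sql
# ... a day later ...
mysqldump -u user -ppassword --skip-extended-insert name_db > dump-2014-11-11.sql
diff -u dump-2014-11-10.sql dump-2014-11-11.sql    # only the changed rows show up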
Just my $0.02.
Valeri
> Also, if you are concerned about the time to compress, you can enable multithreaded parallel gzip compression with pigz (available in EPEL):
> mysqldump -u user database | pigz > backup_database.sql.gz
> TBH, 500 MB databases aren't big enough to warrant a more complex approach (unless you have more ambitious RTO/RPO requirements).
++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++
On Nov 10, 2014 9:25 PM, "Rodrigo Pichiñual Norin" <rodrigo.pichinual@gmail.com> wrote:
> [original question snipped]
Rodrigo, if your storage engine is InnoDB, I would advise using Percona's innobackupex tool.
Take a look here:
[1] http://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/incremental_b...
[2] http://www.slideshare.net/mobile/kastauyra/bitmap-33283809
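Roughly, the incremental cycle with innobackupex looks like this (directories, credentials and the timestamped backup names are examples; see [1] for the full procedure):

# Full base backup (creates a timestamped directory under /data/backups)
innobackupex --user=bkpuser --password=secret /data/backups

# Incremental: copies only the InnoDB pages changed since the base
innobackupex --user=bkpuser --password=secret --incremental /data/backups/inc \
    --incremental-basedir=/data/backups/2014-11-10_20-00-00

# Prepare for restore: base with --redo-only, then the last increment without it
innobackupex --apply-log --redo-only /data/backups/2014-11-10_20-00-00
innobackupex --apply-log /data/backups/2014-11-10_20-00-00 \
    --incremental-dir=/data/backups/inc/2014-11-11_20-00-00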