[CentOS] best ways to do mysql backup

Jerry Franz jfranz at freerun.com
Sun Aug 15 21:24:24 UTC 2010


On 08/15/2010 09:49 AM, nux at li.nux.ro wrote:
> Tuptus writes:
>
>    
>> Nothing would have been lost if Agnello George had not written:
>>      
>>> We have roughly 10 servers, each with about 100 GB of data in
>>> /var/lib/mysql. Excluding tar, mysqldump and replication, how do we
>>> back up these databases to a remote machine and store them by date?
>>> (The remote machine has a 2 TB HDD.)
>>>
>>> Currently tar is not feasible because the data is too large, and the
>>> same goes for mysqldump.
>>>
>>> Any suggestions would be a great help.
>>>        
>> Why not mysqldump?
>> I suggest running mysqldump to a local disk and backing that up to the remote.
>> I use it with __bacula__.
>>
>> -- 
>> Tuptus
>>      
> AFAIK mysqldump locks the tables, and having the tables locked while
> you dump 100 GB of data is very annoying, if not unacceptable. The
> best solution by far in this circumstance (yeah, I know he said he
> doesn't want replication) is to set up master/slave replication and
> perform the dump on the slave.
>    

No, the best solution in this circumstance is to use master/slave
replication and use LVM to take a static snapshot of the slave's data
directory. That lets you then use rsync efficiently to sync it to the
remote box.
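
Something along these lines works (an untested sketch -- the volume
group, LV names, snapshot size, mount point and remote path below are
placeholders for whatever your setup actually uses):

    #!/usr/bin/env python
    # Sketch: stop the slave briefly, snapshot its datadir with LVM,
    # restart it, then rsync the snapshot to the remote box.
    # Must run as root. All names here are made-up examples.
    import subprocess

    VG, LV, SNAP = "vg0", "mysql", "mysqlsnap"
    REMOTE = "backuphost:/backups/db1/current/"

    def sh(*cmd):
        subprocess.check_call(list(cmd))

    sh("service", "mysqld", "stop")      # quiesce so the copy is consistent
    sh("lvcreate", "-s", "-L", "5G",     # 5G of copy-on-write room
       "-n", SNAP, "/dev/%s/%s" % (VG, LV))
    sh("service", "mysqld", "start")     # slave catches back up on its own

    sh("mkdir", "-p", "/mnt/snap")
    sh("mount", "-o", "ro", "/dev/%s/%s" % (VG, SNAP), "/mnt/snap")
    try:
        # --delete keeps the remote copy an exact mirror; after the
        # first run only the changed parts of files go over the wire.
        sh("rsync", "-a", "--delete", "/mnt/snap/", REMOTE)
    finally:
        sh("umount", "/mnt/snap")
        sh("lvremove", "-f", "/dev/%s/%s" % (VG, SNAP))

If you also want the date-wise copies he asked about, rsync's
--link-dest option gives you one dated directory per run on the 2 TB
box while hardlinking all the unchanged files, so the dated copies
cost almost nothing in space.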

A straight dump doesn't work efficiently because there is no good way
to keep unchanged data from being copied again on every remote sync of
a regular dump. With an LVM snapshot the on-disk file structure is
preserved between runs, allowing rsync to send only the *changes*
instead of *everything* after the first sync. It could easily be the
difference between taking a day to do the remote sync and taking 1 to
2 hours to do it (assuming you can read from your drives at around 40
MByte/s sustained but can only get 10 Mbit/s sustained over the
network).
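
To put rough numbers on that (back-of-envelope only; the 5% change
rate between runs is just a guess to illustrate the point):

    # Rough math: full 100 GB copy vs. rsync of changes only.
    data_gb   = 100.0   # per server
    link_mbit = 10.0    # sustained network throughput, Mbit/s
    disk_mbs  = 40.0    # sustained local read, MByte/s
    churn     = 0.05    # guessed fraction of data changed between runs

    full_copy_h = data_gb * 8 * 1000 / link_mbit / 3600   # ~22 hours
    delta_h     = full_copy_h * churn                     # ~1.1 hours
    scan_h      = data_gb * 1000 / disk_mbs / 3600        # ~0.7 hours to
                                                          # re-read locally
    print("full copy: %.1f hours" % full_copy_h)
    print("rsync: ~%.1f hours transfer + %.1f hours local scan"
          % (delta_h, scan_h))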

-- 
Benjamin Franz


