Hi,
I have a CFML application running on a MySQL database. Can someone suggest a realtime backup solution via FTP, say every 5 minutes, without damaging the database?
Thanks,
Regards
From: CentOS List centoslist@gmail.com
I have a CFML application running on a MySQL database. Can someone suggest a realtime backup solution via FTP, say every 5 minutes, without damaging the database?
Wouldn't a simple mysqldump work? http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
JD
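A minimal sketch of what a mysqldump-based approach could look like. Everything here is a placeholder (database name, credentials, paths); `--single-transaction` is the flag that lets InnoDB tables be dumped consistently without locking them:

```shell
#!/bin/sh
# Hypothetical 5-minute dump script; db name, user, password, and paths
# are placeholders, not anything from this thread.
DB=mydb
STAMP=$(date +%Y%m%d-%H%M)
OUT=/var/backups/${DB}-${STAMP}.sql.gz

# --single-transaction takes a consistent snapshot of InnoDB tables
# without holding table locks for the duration of the dump.
mysqldump --single-transaction --user=backup --password=secret "$DB" \
    | gzip > "$OUT"

# Scheduled from cron, e.g.:
# */5 * * * * /usr/local/bin/mysql-backup.sh
```

The resulting file could then be pushed to the remote host by FTP; whether a full dump every 5 minutes is sensible depends entirely on the database size, as others note below.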
----- Original Message ----
From: John Doe jdmls@yahoo.com
To: CentOS mailing list centos@centos.org
Sent: Wednesday, February 18, 2009 5:40:48 PM
Subject: Re: [CentOS] realtime backup
Wouldn't a simple mysqldump work? http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
JD
Not if there are InnoDB tables?
CentOS List wrote:
I have a CFML application running on a MySQL database. Can someone suggest a realtime backup solution via FTP, say every 5 minutes, without damaging the database?
Sorry, but I see two conflicting ideas in the same sentence: 'realtime' and 'every 5 mins' ;-) Why not use MySQL replication (master/slave) between the two nodes? And if you don't feel confident about configuring that, why not use DRBD between those two nodes? Realtime replication, whatever runs on top of the filesystem (DB, files, mailboxes, etc.).
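For what it's worth, the master/slave setup suggested here is mostly a my.cnf change on each box; a rough sketch, with the server ids as placeholder values:

```ini
; master's my.cnf: enable the binary log and give the server an id
[mysqld]
server-id = 1
log-bin   = mysql-bin

; slave's my.cnf: only needs a distinct id
[mysqld]
server-id = 2
```

On the master you would then create a replication account (GRANT REPLICATION SLAVE), and on the slave point it at the master with CHANGE MASTER TO ... followed by START SLAVE; the MySQL 5.0 replication chapter has the exact statements.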
CentOS List wrote on Wed, 18 Feb 2009 14:30:16 +0800:
every 5mins
How big is the db to be secured, and what engine does it use? Backing up that often doesn't make sense to me. If you need it that frequently, you'd better go for a slave or write to two backends.
Kai
At Wed, 18 Feb 2009 14:31:18 +0100 CentOS mailing list centos@centos.org wrote:
How big and what engine is the db to be secured? Backing up that often doesn't make sense to me. If you need to have it that frequently you better go for a slave or write to two backends.
Or just use a RAID array (e.g. software RAID in mirror mode: RAID1).
Kai
On Wed, Feb 18, 2009 at 8:58 AM, Robert Heller heller@deepsoft.com wrote:
Or just use a RAID array (eg software RAID in mirror mode: RAID1).
RAID IS NOT BACKUP RAID IS NOT BACKUP RAID IS NOT BACKUP
To the OP: It would be helpful if you were more descriptive about what you are trying to accomplish. Are you worried about disk failures? About the whole system failing? What about the case where invalid data is added to the database (or the database gets corrupted in general)? Do you want a hot backup of the database standing by, so you can switch to it if the main one goes down? Each of these has a different solution.
1. === RAID IS NO BACKUP! RAID only lets you survive hardware failure of the hard disk(s) (and only if you don't use RAID0!).
Other people are mentioning the master/slave setup. This will do if you just need one up-to-date backup for the case where the complete master server fails, but not for the case where someone or something runs the wrong SQL query and deletes the wrong data! The slave server will execute that SQL query too and delete the data as well!
If you want the option to go back in time, then you have to make dumps with mysqldump.
2. === The bigger the data, the slower the backup will be, the longer the tables will be locked, and the greater the chance that your users will notice it while using the application.
Best regards,
Joost Waversveld wrote:
=== The bigger the data, the slower the backup will be, the longer the tables will be locked, and the greater the chance that your users will notice it while using the application.
For a speedy backup, you could put the db on LVM. Then your procedure would be: shutdown/freeze db, make an LV snapshot, startup/unfreeze db, rsync/backup the data, remove the snapshot.
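A rough sketch of that procedure, assuming the MySQL data directory lives on a logical volume; the volume group, LV names, snapshot size, and backup host are all placeholders:

```shell
#!/bin/sh
# Hypothetical LVM snapshot backup; VG/LV names, sizes, paths, and the
# backup host are placeholders.
/etc/init.d/mysqld stop     # or hold FLUSH TABLES WITH READ LOCK open
                            # in another session while the snapshot is made

# The snapshot is nearly instant, so the db is only down/frozen briefly.
lvcreate --size 1G --snapshot --name mysql-snap /dev/vg0/mysql
/etc/init.d/mysqld start

# Copy the frozen image at leisure, then throw the snapshot away.
mkdir -p /mnt/mysql-snap
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a /mnt/mysql-snap/ backuphost:/backups/mysql/
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap
```

The snapshot size only needs to hold the writes that happen while the snapshot exists, not a full copy of the volume.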
On Wed, 2009-02-18 at 15:35 -0500, Toby Bluhm wrote:
For a speedy backup, could put the db on LVM. Then your procedure would be shutdown/freeze db, make lv snapshot, startup/unfreeze db, rsync/backup data, remove snapshot.
That's what I'd suggest too, but be warned that performance on that database (if it gets to be of any size to be useful) would completely suck... not unlike driving at 90mph with the e-brake on and constantly up- and down-shifting...
-I
on 2-18-2009 1:36 PM Ian Forde spake the following:
Would a decent alternative would be a master/slave, with the dumps being done from the slave. That way if the slave bogs down during the dump, it can catch up afterwards. The master shouldn't slow down at all, or very minimally as it is caching the slave transactions.
on 2-18-2009 1:45 PM Scott Silva spake the following:
Would a decent alternative be a master/slave, with the dumps being done from the slave. That way if the slave bogs down during the dump, it can catch up afterwards. The master shouldn't slow down at all, or very minimally as it is caching the slave transactions.
One too many "would's"...
On Wed, 2009-02-18 at 13:57 -0800, Scott Silva wrote:
Would a decent alternative be a master/slave, with the dumps being done from the slave. That way if the slave bogs down during the dump, it can catch up afterwards. The master shouldn't slow down at all, or very minimally as it is caching the slave transactions.
One too many "would's"...
;) That would work, and I've done that (though not at the 5-minute interval) in production environments. But since the OP hasn't responded to this thread with any type of follow-up detail (like the size of the db), I'm wondering how much time I want to spend putting out possible solutions...
-I
Thanks everyone. At present I am looking at about 150 MB worth of database. I stumbled across Zmanda. Has anyone tried it? Is it suitable for my case?
CentOS List schrieb:
Thanks everyone. At present I am looking at about 150 MB worth of database. I stumbled across Zmanda. Has anyone tried it? Is it suitable for my case?
I'm still not sure what you want to achieve by backing up every 5 minutes.
I *think* you are looking for something like PostgreSQL's Point-in-Time recovery feature.
Maybe it's time to change databases...
Rainer
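MySQL can get reasonably close to point-in-time recovery too, by combining a periodic full dump with the binary log. A hedged sketch; the dump file, binlog names, and the cutoff time are placeholders:

```shell
#!/bin/sh
# Hypothetical point-in-time restore with the binary log.
# Requires log-bin = mysql-bin in my.cnf before the incident.
# Dump file, binlog names, db name, and timestamp are placeholders.

# 1. Restore the last full dump.
mysql mydb < /var/backups/mydb-nightly.sql

# 2. Replay binlog events up to just before the bad query was run.
mysqlbinlog --stop-datetime="2009-02-18 14:25:00" \
    /var/lib/mysql/mysql-bin.000042 \
    /var/lib/mysql/mysql-bin.000043 | mysql mydb
```

This covers the "someone deleted the wrong data" case that replication alone does not, since the replay simply stops before the destructive statement.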
On Wed, 18 Feb 2009, Robert Heller wrote:
How big and what engine is the db to be secured?
every 5mins
Backing up that often doesn't make sense to me. If you need to have it that frequently you better go for a slave or write to two backends.
I have to wonder when I see these kinds of 'weird' requirements:
Who is the dummy here? The person who did not specify AMQ journalling to a very well protected unit, or the admin who does not step back a bit and point out the defective design?
-- Russ herrold
CentOS List wrote:
I have a CFML application running on a MySQL database. Can someone suggest a realtime backup solution via FTP, say every 5 minutes, without damaging the database?
Using FTP every 5 minutes implies a pretty small database. Like others have suggested, I would set up a slave DB, run mysqldump against it as often as you need, and FTP the results to the remote host.
nate
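A sketch of that, run on the slave so the master never slows down. The db name, FTP host, and credentials are placeholders, and curl is assumed to be available for the upload:

```shell
#!/bin/sh
# Hypothetical: dump the slave, push the result over FTP, clean up.
# DB name, FTP host, and credentials are placeholders.
DB=mydb
OUT=/tmp/${DB}-$(date +%Y%m%d-%H%M).sql.gz

# Dumping the slave leaves the master untouched; the slave catches
# up on replication once the dump finishes.
mysqldump --single-transaction "$DB" | gzip > "$OUT"

# curl can upload straight to an FTP directory.
curl -T "$OUT" "ftp://backup.example.com/dumps/" --user backup:secret

rm -f "$OUT"
```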
CentOS List wrote:
I have a CFML application running on a MySQL database. Can someone suggest a realtime backup solution via FTP, say every 5 minutes, without damaging the database?
Do your backups have to have some level of history? Or do you just need one backup as of the last snapshot interval, such that any bad data is replicated too?