On 3/23/2012 10:50 PM, Karl Vogel wrote:
>>> On Fri, 23 Mar 2012 20:19:41 -0400,
>>> Bob Hoffman <bob at bobhoffman.com> said:
>
> B> I am down to my last hurdle of my project, backups. Not asking for 'how
> B> to' but more of 'what is best in your experience'.
>
> Some questions:
>
> * What's the hardest stuff for you to recreate? I'd have that on both
>   DVD and something network-accessible.
>
> * What's your biggest PITA problem (for me, it would be bare-metal restore)
>   vs. your most likely one (I'd assume loss of a MySQL table or a VM)?
>   You mentioned being able to rebuild the host quickly, so if the bare-metal
>   thing isn't a big problem, concentrate on the VMs instead.
>
> * What are your priorities? If it's speed of the restore, and you have
>   the IO/network bandwidth and room, then do like another poster said
>   and rsync the VM files after shutting them down. If it's more like
>   history, where you want to go back in time to lots of versions, something
>   finer-grained would be in order.
>
> B> The scenario... CentOS server acting as a virtual host. Virtual
> B> machines are webservers and DNS servers. All on one machine, all running
> B> CentOS 6. Virtual machines are KVM, sitting in LVM storage. What I
> B> want to do... auto backups of the virtual machines to be stored on the
> B> virtual host's extra drives for later download to my home computer.
>
> Your VMs sound like they start out identical, and then you add stuff to
> specialize each one. If so, I'd keep these backups:
>
>   a. one generic bare-bones VM that can be installed with as few commands
>      as possible.
>   b. each change-set you use to specialize for basic DNS, web, etc.
>   c. smaller groups of individual files like DB schemas, web content,
>      mailboxes, etc.
>
> This way, any given restore breaks down to (a) plus (one or more b) plus
> (whatever's appropriate from c). When you get to the individual file
> backups within a VM, something like this might be all you need:
>
>   # cd /
>   # find . -depth -type f -newer /etc/BKUP -print | pax -x cpio -wd |
>       gzip -c > /path/to/$(date '+%Y/%m%d/%H%M').pax.gz
>   # touch /etc/BKUP
>
> B> 1- Amanda. I do not know much about it or how it would deal with mysql
> B> databases, but it looks promising.
>
> I set it up once, but it wasn't a close enough match to what we needed
> for me to craft an entire backup strategy around it. It's not a trivial
> thing to install or run, so you'll be spending time finding out how Amanda
> wants to do things and matching that to your goals.
>
> B> 2- rsync - some kind of rsync going from the host to each machine,
> B> putting it on the host's backup drives.
>
> That's what I use at work, but we're closer to the "networked fileservers
> with remote shares" setup. I use the find/pax/touch setup above to handle
> hourly backups for 800-1000 users, and they're happy little campers when
> they find out the spreadsheet they created at 6am and mangled around noon
> isn't completely gone.

I am not looking to back up the VMs for an easy reinstall; I can rebuild
each one in less than half an hour. The backup is for the webservers, so
the database, the html, and some other folders are continually backed up
in case of a hack or whatever. Still thinking Amanda or Bacula as first
choice, rsync second, kpartx somehow third.
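For the "rsync second" option, a minimal sketch of the "database and html
continually backed up" idea might be a small cron job inside each web VM,
something like the script below. The paths, the "virthost" name, and the
/backup layout on the host are assumptions, not anything from the thread;
Amanda or Bacula would replace all of this with their own scheduling.

  #!/bin/bash
  # Hypothetical nightly backup run from cron inside a web VM.
  # Assumes ssh keys to the virtualization host ("virthost") and MySQL
  # credentials in root's ~/.my.cnf so no password sits on the command line.
  set -e

  STAMP=$(date '+%Y%m%d-%H%M')
  DEST="virthost:/backup/$(hostname -s)"

  mkdir -p /var/backups/mysql

  # Dump every database to a dated, compressed file so old copies remain.
  mysqldump --all-databases --single-transaction \
      | gzip -c > /var/backups/mysql/all-$STAMP.sql.gz

  # Push the dumps plus the web content to the host's backup drive over ssh.
  # No --delete here, so the dated dumps accumulate on the host as history.
  rsync -az /var/backups/mysql /var/www/html /etc/httpd "$DEST/"

Dropped into root's crontab as something like
"0 3 * * * /usr/local/sbin/web-backup.sh", that covers the hack/mistake
scenario without touching the VM images at all.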
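On the "kpartx somehow third" idea: since the guests sit in LVM, the host can
snapshot a guest's logical volume and copy that to the extra drives without
shutting the VM down. A rough sketch, assuming a volume group vg0 with a
guest LV named web1 (both names are made up). A snapshot of a running guest
is only crash-consistent, so the in-guest database dump above is still what
really protects MySQL.

  # On the virtualization host; vg0/web1 is a hypothetical guest LV.
  lvcreate --snapshot --size 5G --name web1-snap /dev/vg0/web1

  # Whole-image copy of the frozen snapshot onto the backup drive:
  dd if=/dev/vg0/web1-snap bs=4M | gzip -c > /backup/web1-$(date +%Y%m%d).img.gz

  # Or map the snapshot's partitions with kpartx and mount one read-only
  # to pull individual files (the /dev/mapper names vary by setup):
  #   kpartx -av /dev/vg0/web1-snap
  #   mount -o ro /dev/mapper/vg0-web1--snap1 /mnt/web1
  #   ... copy files, then umount /mnt/web1 and kpartx -d /dev/vg0/web1-snap

  # Remove the snapshot when done, before its copy-on-write space fills up.
  lvremove -f /dev/vg0/web1-snap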