Hi,
Up until recently I've hosted all my stuff (web & mail) on a handful of bare metal servers. Web applications (WordPress, OwnCloud, Dolibarr, GEPI, Roundcube) as well as mail and a few other things were hosted mostly on one big machine.
Backups for this setup were done using rsnapshot, a nifty utility that combines rsync over SSH with hard links to create incremental backups.
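Roughly, the relevant part of the rsnapshot config looked something like this (hostnames, paths and retain counts are illustrative, not my actual values):

```
# rsnapshot.conf sketch -- fields must be separated by tabs, not spaces.
snapshot_root	/backup/snapshots/
retain	daily	7
retain	weekly	4
# Pull each host over SSH; files unchanged since the last run become
# hard links to the previous snapshot, so only changes consume space.
backup	root@server.example.com:/etc/	server/
backup	root@server.example.com:/var/www/	server/
```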
This approach has become problematic for several reasons. First, web applications have increasingly specific and sometimes mutually exclusive requirements. Second, last month I had a server crash, and even though I had backups of everything, restoring it all meant significant downtime.
So I've opted to go for KVM-based solutions, with everything split up over a series of KVM guests. I wrapped my head around KVM, played around with it (a lot) and now I'm more or less ready to go.
One detail is nagging me though: backups.
Let's say I have one VM that handles only DNS (base installation + BIND) and another VM that handles mail (base installation + Postfix + Dovecot).
Under the hood that's two QCOW2 images stored in /var/lib/libvirt/images.
With the old "bare metal" approach I could perform remote backups using rsync, but what's the recommended way to back up these KVM guests: the QCOW2 images themselves, or the filesystems inside them?
We're doing rsnapshot-based backups for everything, VMs and bare metal systems alike. We don't back up the KVM image files at all; we back up the filesystems inside the guests, just as with a physical host.
When a new host is added to the backup, we first make a hard-link-based copy, on the backup server, of the backup of another, similar server. That way most of the OS is already present on the backup server, and the first real backup consumes only a little extra space.
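Sketched with plain coreutils (paths hypothetical), the seeding step looks like this:

```shell
# Hypothetical seeding of a new host's backup from a similar existing one.
# /tmp/bk stands in for the backup server's snapshot root.
set -e
rm -rf /tmp/bk
mkdir -p /tmp/bk/hostA
echo "shared OS file" > /tmp/bk/hostA/os-file
echo "hostA-specific" > /tmp/bk/hostA/hostname

# Hard-link copy: hostB's tree appears complete but shares every inode
# with hostA's, so it consumes almost no extra disk space.
cp -al /tmp/bk/hostA /tmp/bk/hostB

# The follow-up rsync from the real hostB then replaces only the files
# that differ (rsync writes a new file and renames it into place, which
# breaks the shared hard link); simulated here for one differing file.
rm /tmp/bk/hostB/hostname
echo "hostB-specific" > /tmp/bk/hostB/hostname
```

After the first real rsync run, identical files still share inodes with hostA's backup, and only the host-specific files occupy new space.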
The only problem we had with rsnapshot is that rsync, with its default memory-allocation limit, can't handle a large number of hard links. We're now using our own build of rsync 3.2.3 with --max-alloc=0 (which lifts that limit), and multi-million hard links are no longer a problem.
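For reference, one way to pass that flag through rsnapshot is via its rsync_long_args parameter; a sketch (the flags shown before --max-alloc=0 are rsnapshot's documented defaults, and --max-alloc requires rsync >= 3.2.0):

```
# rsnapshot.conf: override the rsync flags to lift the allocation cap.
# Fields must be tab-separated.
rsync_long_args	--delete --numeric-ids --relative --delete-excluded --max-alloc=0
```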
Regards, Simon