Firstly, let me thank all of the individuals who assisted me in various matters regarding find, tar and ssh last week. Your contributions were truly invaluable and I learned a great deal. The reason for that brief flurry of activity was an administrative and technical move from store-to-tape to store-to-disc for our server backups.
We presently use a mix of store-to-disc and store-to-tape on various servers, depending upon the point of origin: store-to-disc for MS-Windows and Linux, store-to-tape for our HP3000 mid-range. These disparate backup strategies are now being marshalled onto a pair of Linux servers, one on-site and one at a remote location. At this juncture we are contemplating methods of creating off-line archives as well. In this regard I seek your advice regarding the suitability of employing an LG model GSA-4163B multi-format DVD drive as a backup device using DVD-RAM 120mm media.
Specifically, I wish to know what software is available under CentOS 4 to record to this drive, which of those packages are preferred for this sort of application, and how suitable each is for unattended backups. I understand that DVD-RAM has a ~4GB limit and that filesets in excess of that naturally require a disc change.
We also are looking at tape systems. The idea is that we will end up with a four-stage backup process:

1. Daily automated backups of each server over the network to a local high-capacity Linux server devoted to hosting the backup volumes.

2. Daily automated moving of certain critical, volatile backup sets from those on-line local stores via ssh to an off-site server performing the same function.

3. Daily semi-automated (somebody has to change the disc) stores of less critical and only somewhat volatile data onto DVD kept on-site in a fireproof vault.

4. Semi-automated archiving of the rest onto tape on a weekly basis, with the media moved off-site (the backup job is automatically scheduled but the media has to be handled manually).
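To make the schedule concrete, the four stages might be driven by a crontab on the backup host along these lines (the script names, hostname and paths here are purely illustrative, not anything we have in place):

```
# Stage 1: nightly pull of every server to the local backup host
0 2 * * *   /usr/local/bin/pull-backups.sh
# Stage 2: push the critical, volatile sets off-site over ssh
0 5 * * *   rsync -az -e ssh /backup/critical/ backup@offsite:/backup/critical/
# Stage 3: build the daily DVD image; an operator burns and swaps the disc
0 7 * * *   /usr/local/bin/make-dvd-image.sh
# Stage 4: weekly tape archive (Sunday night); tapes rotated off-site by hand
0 1 * * 0   /usr/local/bin/archive-to-tape.sh
```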
We currently use an internal DDS3 tape unit for store-to-tape and are considering changing to an external DLT drive. I would welcome comments regarding a suitable choice of external DLT tape drives for this application. We have used an external SCSI DLT device in the past, but that device needs to be replaced and we are open to alternatives.
The total volume of data handled by all these processes is not prohibitively large. We are speaking of tens of gigabytes (~40GB) over all.
Regards, Jim
On Mon, 2005-08-15 at 09:05, James B. Byrne wrote:
> We also are looking at tape systems. The idea is that we will end up
> with a four stage backup process. 1. Daily automated backups of each
> server over the network to a local high capacity linux server devoted
> to hosting the backup volumes.
BackupPC (http://backuppc.sourceforge.net/) is excellent for this, and its compression and duplicate-file-linking scheme will greatly increase the on-line capacity.
> 2. Daily automated moving of certain critical, volatile backup sets
> from those on-line local stores via ssh to an off-site server
> performing the same function.
Depending on network capacity and your backup window, you might simply run an independent copy of BackupPC off-site, using rsync over ssh for the copy. This has the advantage of continuing to work if your main backup server has a problem.
> 3. Daily semi-automated (somebody has to change the disc) stores of
> less critical and only somewhat volatile data onto DVD kept on-site
> in a fireproof vault.
BackupPC has a manual command to write the last backup of a single host out as a normal compressed tar image, either to a tape drive or as files split to a certain size.
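The same split-to-media-size effect can be had by hand with standard tar and split, which is roughly what that archive function produces (paths and the piece size here are illustrative):

```shell
#!/bin/sh
# Sketch: write one host's backup as a compressed tar stream, split
# into pieces small enough to fit on ~4GB DVD-RAM media.
set -e
tar czf - /backup/somehost | split -b 4000m - somehost.tar.gz.

# To restore, reassemble the pieces and untar:
#   cat somehost.tar.gz.* | tar xzf -
```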
> 4. Semi-automated archiving of the rest onto tape on a weekly basis
> and moving these off-site (the backup job is automatically scheduled
> but the media has to be manually handled).
If you are archiving these for a long time, using the archive function to write tapes is probably the best approach with BackupPC. However, if the object is just to get copies off-site, you might be able to use external drives. The huge number of hardlinks makes file-based copying impractical, so you may have to RAID-mirror, or take LVM snapshots and make image copies of the snapshot, to produce the copy in a reasonable amount of time. The space savings of the storage scheme make this a nice approach. I'm doing it with a 250GB drive that typically has around 100GB used, but it holds a week's backup runs of 28 machines, and the uncompressed data would total over 600GB. I can plug the external drive copy into my laptop, which also has BackupPC installed, and have instant access to any of the files from any of that week's runs.
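The pooling that makes file-based copying so slow can be seen in miniature: every identical file across backup runs is a hardlink to a single pooled copy, so the data occupies disk space once, but a file-by-file copy must track millions of links. A small demonstration (directory names invented for the example):

```shell
#!/bin/sh
# Miniature of BackupPC-style pooling: a file that is unchanged between
# two backup runs is stored once and hardlinked from both runs.
set -e
mkdir -p pool/run1 pool/run2
echo "unchanged file contents" > pool/run1/data

# The second run links to the pooled copy instead of storing it again.
ln pool/run1/data pool/run2/data

stat -c %h pool/run1/data   # link count is now 2
du -s pool                  # the blocks are counted once, not twice
```

A block-level copy (RAID mirror or an image of an LVM snapshot) sidesteps the link-walking entirely, which is why it finishes in reasonable time where cp or rsync of the pool does not.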
> The total volume of data handled by all these processes is not
> prohibitively large. We are speaking of tens of gigabytes (~40GB)
> over all.
Depending on the rate of change and your bandwidth to your off-site location, you might be able to let an off-site instance of BackupPC run a complete independent copy and not bother swapping anything around. Using rsync as the copy method means that only the changes are moved on each run. Five of the 28 machines I mentioned are in remote offices.
-- 
Les Mikesell
lesmikesell@gmail.com