Is there a standard way of copying a working system from one machine to another with different partitions?
I have two CentOS-5.6 machines, say A and B, and I thought I would copy / on sdb10 on machine A to an unused partition sda7 on machine B with rsync. I made the appropriate changes to /etc/fstab and grub.conf, as well as /etc/sysconfig/network-scripts, but found that there were innumerable errors when I booted machine B into the new system, mostly to do with creating device nodes. Also the ethernet connection, which had been eth1 on A, was now eth0 on B, and this did not work.
This was only a kind of experiment. There is a problem with the partition table on machine A, and I thought it would be useful to have a backup machine with exactly the same setup.
Is this a hopeless enterprise, or can it be done easily?
At Thu, 05 May 2011 12:13:18 +0100 CentOS mailing list centos@centos.org wrote:
Is there a standard way of copying a working system from one machine to another with different partitions?
I have two CentOS-5.6 machines, say A and B, and I thought I would copy / on sdb10 on machine A to an unused partition sda7 on machine B with rsync. I made the appropriate changes to /etc/fstab and grub.conf, as well as /etc/sysconfig/network-scripts, but found that there were innumerable errors when I booted machine B into the new system, mostly to do with creating device nodes. Also the ethernet connection, which had been eth1 on A, was now eth0 on B, and this did not work.
It sounds like you have problems *other than* the 'copy' part.
After copying the system, you will likely need to remake the initrd on the target system. Oh, you will need to edit /etc/modprobe.conf: different SATA driver, different ethernet driver, etc.
This was only a kind of experiment. There is a problem with the partition table on machine A, and I thought it would be useful to have a backup machine with exactly the same setup.
Is this a hopeless enterprise, or can it be done easily?
It is easy enough to do. There are just a few more things involved besides copying the data and diddling with grub.conf, /etc/fstab, and /etc/sysconfig/network-scripts. You just forgot about /etc/modprobe.conf and forgot to remake the initrd.
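On CentOS 5 that comes down to something like the following sketch; the driver names are only examples, you would use whatever matches machine B's actual hardware:

    # /etc/modprobe.conf on machine B -- the aliases must name B's
    # drivers (e1000 and ata_piix here are illustrative only):
    alias eth0 e1000
    alias scsi_hostadapter ata_piix

    # then rebuild the initrd for the installed kernel so it loads the
    # right storage driver at boot; pass the kernel version explicitly
    # if you run this from a chroot instead of the running kernel:
    mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)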
Robert Heller wrote:
Is there a standard way of copying a working system from one machine to another with different partitions?
After copying the system, you will likely need to remake the initrd on the target system. Oh, you will need to edit /etc/modprobe.conf: different SATA driver, different ethernet driver, etc.
Is this a hopeless enterprise, or can it be done easily?
It is easy enough to do. There are just a few more things involved besides copying the data and diddling with grub.conf, /etc/fstab, and /etc/sysconfig/network-scripts. You just forgot about /etc/modprobe.conf and forgot to remake the initrd.
Thanks for your response (and the many others).
Looking back, I think most of my problems did actually arise from not re-making the initrd. (I actually used a kernel and initrd from machine B.) I looked at /etc/modprobe.d/ but not /etc/modprobe.conf.
I'll try again, and run kudzu as someone suggested. Actually, I've ordered a huge new disk for machine B (an HP MicroServer), so I probably could use clonezilla.
But the whole thing is just an experiment, as I said. The problem is that the partition table on machine A, which is my home server, has been destroyed (through folly on my part), and my hope was to have a substitute machine which could just be plugged in to replace machine A.
But I suspect I'll have to get down to re-making the partition table on machine A. I back up everything useful on it each night with backuppc, so hopefully I'm not looking at a disaster in any case.
On 05/05/2011 07:13 AM, Timothy Murphy wrote:
Is there a standard way of copying a working system from one machine to another with different partitions?
You could also utilize cloning software, such as the client version of drbl, or the clonezilla livecd.
You could also do a direct copy with dd onto a connected drive.
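If you go the dd route, the basic form is something like the sketch below; sda and sdb are purely illustrative, so triple-check the device names, since dd will cheerfully overwrite the wrong disk:

    # whole-disk copy from the source drive to the attached target;
    # noerror keeps going past read errors, sync pads the bad blocks:
    dd if=/dev/sda of=/dev/sdb bs=1M conv=noerror,sync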
You may be well served by looking into drbl, or clonezilla.
At Thu, 05 May 2011 07:44:52 -0400 CentOS mailing list centos@centos.org wrote:
On 05/05/2011 07:13 AM, Timothy Murphy wrote:
Is there a standard way of copying a working system from one machine to another with different partitions?
You could also utilize cloning software, such as the client version of drbl, or the clonezilla livecd.
You could also do a direct copy with dd onto a connected drive.
Warning: dd is not a good choice if the source and destination drives/partitions are *different* sizes.
You may be well served by looking into drbl, or clonezilla.
centos-bounces@centos.org wrote:
At Thu, 05 May 2011 07:44:52 -0400 CentOS mailing list centos@centos.org wrote:
On 05/05/2011 07:13 AM, Timothy Murphy wrote:
Is there a standard way of copying a working system from one machine to another with different partitions?
You could also utilize cloning software, such as the client version of drbl, or the clonezilla livecd.
You could also do a direct copy with dd onto a connected drive.
Warning: dd is not a good choice if the source and destination drives/partitions are *different* sizes.
Different block mappings will also give you grief; therefore, the drives must be identical manufacturer and model, down to the firmware revision. dd is not a backup tool in the general sense.
On 05/05/2011 08:01 AM Brunner, Brian T. wrote:
centos-bounces@centos.org wrote:
At Thu, 05 May 2011 07:44:52 -0400 CentOS mailing list centos@centos.org wrote:
On 05/05/2011 07:13 AM, Timothy Murphy wrote:
Is there a standard way of copying a working system from one machine to another with different partitions?
You could also utilize cloning software, such as the client version of drbl, or the clonezilla livecd.
You could also do a direct copy with dd onto a connected drive.
Warning: dd is not a good choice if the source and destination drives/partitions are *different* sizes.
Different block mappings will also give you grief; therefore, the drives must be identical manufacturer and model, down to the firmware revision. dd is not a backup tool in the general sense.
I had doubts about dd also. But last year, when I needed to upgrade to a larger drive, I used it and it worked fine. I bought a new drive (of course of larger size... different manufacturer too), put it into a drive enclosure, plugged that new drive into my USB port, and ran dd to copy the entirety of hda to hdb. Shutting down the machine, I swapped the hard drives and booted with the new drive and -- voilà! -- new bigger drive with everything running just like on the old drive. I didn't have to reconfigure anything; even the networking worked on the new drive without touching anything. The only thing I did on the new drive was to create a new partition from all the extra new hd space I had. Indeed, this is a multi-boot machine and all OSs on it copied over just fine. In addition, all my linux partitions are encrypted, and all that copied over perfectly as well.
One tip: Use dd's smallest block size (the bs= argument). I did this copy using dd several times, starting with 4k, then 2k block sizes, and the new disk had problems when I tried to use it. IIRC, I had to ratchet down to 256 to get a working drive. And this took eight or ten hours to copy an 80G drive.
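For reference, that would look something like this (device names illustrative):

    # same whole-drive copy, but with a small block size:
    dd if=/dev/hda of=/dev/sdb bs=256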
Another tip: in your BIOS the parameter for the hard drive should probably be Auto-Detect if your source and destination drives aren't identical. That's generally the default anyway.
Final tip (I think): For me, my machine A and machine B were the same machine... so of course the hardware was absolutely identical. Using dd might not work if the hardware on A and B are too different from one another.
hth, ken
At Thu, 05 May 2011 10:10:52 -0400 CentOS mailing list centos@centos.org wrote:
On 05/05/2011 08:01 AM Brunner, Brian T. wrote:
centos-bounces@centos.org wrote:
At Thu, 05 May 2011 07:44:52 -0400 CentOS mailing list centos@centos.org wrote:
On 05/05/2011 07:13 AM, Timothy Murphy wrote:
Is there a standard way of copying a working system from one machine to another with different partitions?
You could also utilize cloning software, such as the client version of drbl, or the clonezilla livecd.
You could also do a direct copy with dd onto a connected drive.
Warning: dd is not a good choice if the source and destination drives/partitions are *different* sizes.
Different block mappings will also give you grief; therefore, the drives must be identical manufacturer and model, down to the firmware revision. dd is not a backup tool in the general sense.
I had doubts about dd also. But last year, when I needed to upgrade to a larger drive, I used it and it worked fine. I bought a new drive (of course of larger size... different manufacturer too), put it into a drive enclosure, plugged that new drive into my USB port, and ran dd to copy the entirety of hda to hdb. Shutting down the machine, I swapped the hard drives and booted with the new drive and -- voilà! -- new bigger drive with everything running just like on the old drive. I didn't have to reconfigure anything; even the networking worked on the new drive without touching anything. The only thing I did on the new drive was to create a new partition from all the extra new hd space I had. Indeed, this is a multi-boot machine and all OSs on it copied over just fine. In addition, all my linux partitions are encrypted, and all that copied over perfectly as well.
One tip: Use dd's smallest block size (the bs= argument). I did this copy using dd several times, starting with 4k, then 2k block sizes, and the new disk had problems when I tried to use it. IIRC, I had to ratchet down to 256 to get a working drive. And this took eight or ten hours to copy an 80G drive.
Hmmm.... Using dump & restore (or tar or rsync or cpio, etc.) would likely be a lot faster. Esp. if the disk is not 100% full. Remember, dd will copy even the unused free blocks (which is a total waste of time). And dump & restore will likely use a more optimal block size, which will copy the data faster as well...
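As a sketch of that approach, assuming the target filesystem is already created and mounted at /mnt/target:

    # level-0 dump of the root filesystem to stdout, restored straight
    # into the mounted target in one pipeline:
    dump -0f - / | ( cd /mnt/target && restore -rf - )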
Another tip: in your BIOS the parameter for the hard drive should probably be Auto-Detect if your source and destination drives aren't identical. That's generally the default anyway.
Final tip (I think): For me, my machine A and machine B were the same machine... so of course the hardware was absolutely identical. Using dd might not work if the hardware on A and B are too different from one another.
hth, ken
On 05/05/2011 10:41 AM Robert Heller wrote:
At Thu, 05 May 2011 10:10:52 -0400 CentOS mailing list centos@centos.org wrote:
On 05/05/2011 08:01 AM Brunner, Brian T. wrote:
centos-bounces@centos.org wrote:
At Thu, 05 May 2011 07:44:52 -0400 CentOS mailing list centos@centos.org wrote:
On 05/05/2011 07:13 AM, Timothy Murphy wrote:
Is there a standard way of copying a working system from one machine to another with different partitions?
You could also utilize cloning software, such as the client version of drbl, or the clonezilla livecd.
You could also do a direct copy with dd onto a connected drive.
Warning: dd is not a good choice if the source and destination drives/partitions are *different* sizes.
Different block mappings will also give you grief; therefore, the drives must be identical manufacturer and model, down to the firmware revision. dd is not a backup tool in the general sense.
...
Hmmm.... Using dump & restore (or tar or rsync or cpio, etc.) would likely be a lot faster. Esp. if the disk is not 100% full. Remember, dd will copy even the unused free blocks (which is a total waste of time). And dump & restore will likely use a more optimal block size, which will copy the data faster as well...
Speed is good sometimes. But I was probably either sleeping or watching TV during those eight to ten hours, so the length of time to do the copy didn't matter at all.
The most time-consuming part of the job was finding the particular command with the correct args that actually worked-- not the command or utility that "should" work or that "theoretically ought to" work-- but one which in fact *did* work. So if anyone actually finds a faster way to clone a system-- meaning they've run the command(s), and done the testing to determine that it was successful-- I'm all ears. The other possibilities are interesting, but given what my schedule is like, unless success with something else is 99.9% assured, I'll probably do it the same way again next time. Hey, what can I say...? I like success.
On 5/5/2011 10:36 AM, ken wrote:
The most time-consuming part of the job was finding the particular command with the correct args that actually worked-- not the command or utility that "should" work or that "theoretically ought to" work-- but one which in fact *did* work. So if anyone actually finds a faster way to clone a system-- meaning they've run the command(s), and done the testing to determine that it was successful-- I'm all ears.
That's what clonezilla is all about. And it is released frequently on both debian and ubuntu (the 'alternative' version) live bases so it has pretty good hardware handling.
Les Mikesell wrote:
That's what clonezilla is all about. And it is released frequently on both debian and ubuntu (the 'alternative' version) live bases so it has pretty good hardware handling.
I did look at clonezilla, briefly, but had to discard the idea, as my setup violated two of its rules:
# The destination partition must be equal or larger than the source one.
In my case the / partition on machine A is larger than the disk on machine B.
# The partition to be imaged or cloned has to be unmounted.
I can't stop machine A, as the partition table has been destroyed and it would not re-boot.
--On Thursday, May 05, 2011 10:41:04 AM -0400 Robert Heller heller@deepsoft.com wrote:
Hmmm.... Using dump & restore (or tar or rsync or cpio, etc.) would likely be a lot faster.
+1 for dump & restore. It's been around for years, is lightweight (in terms of minimal dependencies), and is absolutely solid. I've had good success in moving dump images to new hardware as long as your hardware is similar (i.e. not mixing Intel/AMD), and those aren't problems with dump/restore but rather with the OS that you're copying. For a straight clone, the recovery steps would generally be:
- partition your new drive and create the new filesystems
- use restore to extract your data
- reinitialize your boot blocks (MBR or whatever)
- boot the system
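Concretely, that might look like the following sketch on an EL5 box; the device names, filesystem type and dump file location are all assumptions:

    # 1. partition the new drive and create the filesystem:
    fdisk /dev/sdb
    mkfs.ext3 /dev/sdb1

    # 2. mount it and extract your dump image into it:
    mount /dev/sdb1 /mnt/target
    cd /mnt/target && restore -rf /path/to/root.dump

    # 3. reinitialize the boot blocks, then reboot:
    grub-install --root-directory=/mnt/target /dev/sdb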
I don't know of any UNIX that doesn't ship with it (although there are variations among the UNIX flavours).
The assumption is that you're backing up on a per-filesystem basis, as file exclusion for dump is rudimentary.
With file-based copy schemes like tar, rsync, cpio, etc, you have better control for file exclusions, but you need to make sure you're paying attention to how you handle symlinks, hard links and other "unusual" file setups.
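With rsync, for instance, that attention comes down to flags along these lines (a sketch; the target mount point is an assumption):

    # -a keeps symlinks, ownership and permissions; -H preserves hard
    # links; -x stays on one filesystem; --numeric-ids avoids uid/gid
    # remapping between machines:
    rsync -aHx --numeric-ids / /mnt/target/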
No opinion on clonezilla.
Devin
On Thursday, May 05, 2011 08:01:57 AM Brunner, Brian T. wrote:
centos-bounces@centos.org wrote:
At Thu, 05 May 2011 07:44:52 -0400 CentOS mailing list centos@centos.org wrote:
Warning: dd is not a good choice if the source and destination drives/partitions are *different* sizes.
Different block mappings will also give you grief; therefore, the drives must be identical manufacturer and model, down to the firmware revision. dd is not a backup tool in the general sense.
I do dd imaging quite frequently, and as long as everything is LBA48-capable and set up, I don't have problems copying partitions or whole drives between multiple drives of different sizes and manufacturers, even between different interface technologies. This gets better once you're on an OS rev that treats ATA drives as SCSI and CHS is no longer in play at all, which is the case in EL6 and Fedora revs around EL6. (At least I think that's correct; but it has been an awfully long time since I've done a CentOS 4 or 5 install on an ATA/IDE system, as all of my server systems are either SCSI or FibreChannel, physical or virtual.)
Having said that, I quarterly rotate two identical drives in this laptop; each quarter, I clone the currently operating drive to the secondary and to a dated whole-disk image file, and then swap the drives, putting the previous primary back into the fire safe for storage. This both wear-levels and tests the backup drives.
I use a three-tiered approach to backups of my own laptop:
1.) Quarterly swapping drive clones as described above, using dd (which is faster than the slightly more friendly ddrescue, unless a bad sector is found) booted from rescue or live media of the OS that's installed; this provides a fast bare-metal base recovery that I can then update and restore from the rolling rsync in item 3.
2.) Three quarters of kept images, along with the partition mapping, on multiple disks. (I use GPT, and thus use gdisk for this, which works better in my particular case than parted does; parted puts an inappropriate partition type on one of my partitions when recreating the partition map.)
3.) Frequent rsyncs of /home and /etc to multiple drives, in rotation; a sketch follows below. This does mean an SELinux relabel might be required on a restore, but that's ok.
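The rolling rsync in item 3 is nothing fancy; a minimal sketch, with the destination path as an assumption:

    # frequent rsync of /home and /etc to the rotated backup drive:
    rsync -aHx --delete /home /etc /mnt/backupdrive/rolling/

    # after restoring from it, force the SELinux relabel mentioned above:
    touch /.autorelabel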
For servers I do the same, but with annual images and more rigorous scheduling of tarballs of important files, along with rolling rsyncs (I've looked at rsnapshot, and backing up the backup can be somewhat interesting in that case). Dump/restore has its advantages, too, however.
On 5/5/2011 9:37 AM, Lamar Owen wrote:
Different block mappings will also give you grief; therefore, the drives must be identical manufacturer and model, down to the firmware revision. dd is not a backup tool in the general sense.
I do dd imaging quite frequently, and as long as everything is LBA48-capable and set up, I don't have problems copying partitions or whole drives between multiple drives of different sizes and manufacturers, even between different interface technologies. This gets better once you're on an OS rev that treats ATA drives as SCSI and CHS is no longer in play at all, which is the case in EL6 and Fedora revs around EL6. (At least I think that's correct; but it has been an awfully long time since I've done a CentOS 4 or 5 install on an ATA/IDE system, as all of my server systems are either SCSI or FibreChannel, physical or virtual.)
Clonezilla-live is a handy, faster way to do this. It boots from cd/usb into a menu and generally uses partclone to do the work so on most filesystems it only copies the blocks that are actually used. It also has a mode to resize the partitions on the new disk but it isn't all that useful because you can't control them individually.
Having said that, I quarterly rotate two identical drives in this laptop; each quarter, I clone the currently operating drive to the secondary and to a dated whole-disk image file, and then swap the drives, putting the previous primary back into the fire safe for storage. This both wear-levels and tests the backup drives.
Besides disk->disk, clonezilla can save/restore compressed image copies over the network to space mounted via nfs/samba/sshfs so if you are making the copy as a backup or the source for future clones you can drop it on some other filesystem instead of needing a matching disk.
I use a three-tiered approach to backups of my own laptop:
1.) Quarterly swapping drive clones as described above, using dd (which is faster than the slightly more friendly ddrescue, unless a bad sector is found) booted from rescue or live media of the OS that's installed; this provides a fast bare-metal base recovery that I can then update and restore from the rolling rsync in item 3.
2.) Three quarters of kept images, along with the partition mapping, on multiple disks. (I use GPT, and thus use gdisk for this, which works better in my particular case than parted does; parted puts an inappropriate partition type on one of my partitions when recreating the partition map.)
3.) Frequent rsyncs of /home and /etc to multiple drives, in rotation. This does mean an SELinux relabel might be required on a restore, but that's ok.
For servers I do the same, but with annual images and more rigorous scheduling of tarballs of important files, along with rolling rsyncs (I've looked at rsnapshot, and backing up the backup can be somewhat interesting in that case). Dump/restore has its advantages, too, however.
I always recommend backuppc for scheduled backups. It's pretty much configure and forget and it compresses and pools all identical content so you can keep much more history online than you would expect.
On Thursday, May 05, 2011 11:35:01 AM Les Mikesell wrote:
On 5/5/2011 9:37 AM, Lamar Owen wrote:
I do dd imaging quite frequently, and as long as everything is LBA48-capable and set up, [snippage] .... using dd .... booted from rescue or live media of the OS that's installed...
Clonezilla-live is a handy, faster way to do this.
I've recast my original message slightly, as you've missed a critical point: I use the cloning tool from the rescue or live media of the OS that's installed. There are a number of reasons for this, not the least of which is that LVM, RAID, and some other things behave differently depending upon the kernel, lvm tools, etc, that's running the clone.
I'm familiar with and have used clonezilla numerous times, but not for this purpose. The 'using dd ... booted from rescue or live media of the OS that's installed' part isn't as important during backup as it can be during restore. And I have been bit by that, using F12 (or 13) live media to do a C4 backup/restore; some metadata got farkled and the restore didn't 'take' until I did the restore with C4 media.
Also, well, there are uses for manually-marked badblocks other than drive errors.... :-)
[snip]
I always recommend backuppc for scheduled backups. It's pretty much configure and forget and it compresses and pools all identical content so you can keep much more history online than you would expect.
I've actually thought about using DragonFly BSD and its HAMMER filesystem for the backup storage device...... quick restores rely on quickly finding what is needed, and many times I get requests like 'please restore the file that has the stuff about the instrument we built for grant so-and-so' rather than an exact filename; greppability of the backup set is a must for us. Complete, straight-dd, clones are mountable (RO, of course) and searchable, and rolling rsyncs and tarballs are searchable without a whole lot of effort. Deduplication would be nice, but it's secondary, as is the time and space spent on the backup, for our purposes.
On 5/5/2011 11:11 AM, Lamar Owen wrote:
I do dd imaging quite frequently, and as long as everything is LBA48-capable and set up, [snippage] .... using dd .... booted from rescue or live media of the OS that's installed...
Clonezilla-live is a handy, faster way to do this.
I've recast my original message slightly, as you've missed a critical point: I use the cloning tool from the rescue or live media of the OS that's installed. There are a number of reasons for this, not the least of which is that LVM, RAID, and some other things behave differently depending upon the kernel, lvm tools, etc, that's running the clone.
I generally try to avoid layers that are likely to have breakage between different versions. Backwards compatibility is a good thing, as is the ability to move disks around among different hosts.
That said, Clonezilla doesn't deal with software raid in the disk image mode - even raid1 where it should be simple. You can do single partitions at a time though, and then it is agnostic about the underlying layers but you have to deal with making it bootable yourself.
I'm familiar with and have used clonezilla numerous times, but not for this purpose. The 'using dd ... booted from rescue or live media of the OS that's installed' part isn't as important during backup as it can be during restore. And I have been bit by that, using F12 (or 13) live media to do a C4 backup/restore; some metadata got farkled and the restore didn't 'take' until I did the restore with C4 media.
Yeah, I avoid fedora too...
But, how would you deal with a dual-boot disk with different OS's on the same drive?
[snip]
I always recommend backuppc for scheduled backups. It's pretty much configure and forget and it compresses and pools all identical content so you can keep much more history online than you would expect.
I've actually thought about using DragonFly BSD and its HAMMER filesystem for the backup storage device...... quick restores rely on quickly finding what is needed, and many times I get requests like 'please restore the file that has the stuff about the instrument we built for grant so-and-so' rather than an exact filename; greppability of the backup set is a must for us. Complete, straight-dd, clones are mountable (RO, of course) and searchable, and rolling rsyncs and tarballs are searchable without a whole lot of effort. Deduplication would be nice, but it's secondary, as is the time and space spent on the backup, for our purposes.
With backuppc, just give them a login to the web side with access to their own machine and let them pick any/all versions they want (you can download through the browser or restore it back where it came from). If you really need to manage versioning based on content/differences/context the stuff should live in subversion or git with an associated status tracking system. But then you have the opposite problem of how to get rid of it when you really don't need it any more...
On Thu, 5 May 2011, Les Mikesell wrote:
On 5/5/2011 11:11 AM, Lamar Owen wrote:
I do dd imaging quite frequently, and as long as everything is LBA48-capable and set up, [snippage] .... using dd .... booted from rescue or live media of the OS that's installed...
Clonezilla-live is a handy, faster way to do this.
I've recast my original message slightly, as you've missed a critical point: I use the cloning tool from the rescue or live media of the OS that's installed. There are a number of reasons for this, not the least of which is that LVM, RAID, and some other things behave differently depending upon the kernel, lvm tools, etc, that's running the clone.
I generally try to avoid layers that are likely to have breakage between different versions. Backwards compatibility is a good thing, as is the ability to move disks around among different hosts.
That said, Clonezilla doesn't deal with software raid in the disk image mode - even raid1 where it should be simple. You can do single partitions at a time though, and then it is agnostic about the underlying layers but you have to deal with making it bootable yourself.
I can recommend ReaR (Relax and Recover) for migrations and cloning systems. I have been working with the Relax and Recover project for the past few months together with a colleague and it now covers a lot of situations:
- HWRAID (SmartArray), SWRAID, DRBD, partitions, encrypted partitions, LVM
- It supports bootable tapes (OBDR), ISO images and USB media
- It supports backup software for restoring (like Bacula, TSM, rsync and others)
- And it can also take care of backups (using rsync, tar) using different solutions (NFS, USB, Samba, ...)
- It's modular, so with little effort you can implement your own workflow or use-case
However I would stress that you test a complete disaster recovery scenario for your systems (different technologies) in order to understand whether everything is supported. You don't want to discover a problem in disaster mode :)
But for the use-cases we have, the current trunk is very usable and flexible to support restoring on different hardware. Even with different controllers/disks etc... During recovery you can still adapt the layout and make changes to your wishes before restoring.
We are preparing a new stable minor release (without the new layout code enabled by default), but after that release there should be a new major release covering everything I mentioned by default.
If you need more help, feel free to join the ReaR mailinglist on sourceforge and ask your questions :)
And if you happen to go to LinuxTag, we're having two discussion sessions for developers and users on Wednesday and Thursday.
On 5/5/2011 3:37 PM, Dag Wieers wrote:
I can recommend ReaR (Relax and Recover) for migrations and cloning systems. I have been working with the Relax and Recover project for the past few months together with a colleague and it now covers a lot of situations:
HWRAID (SmartArray), SWRAID, DRBD, partitions, encrypted partitions, LVM
It supports bootable tapes (OBDR), ISO images and USB media
It supports backup software for restoring (like Bacula, TSM, rsync and others)
And it can also take care of backups (using rsync, tar) using different solutions (NFS, USB, Samba, ...)
It's modular, so with little effort you can implement your own workflow or use-case
What I've really always wanted in this respect is something that would work with backuppc such that you could run something on the source to generate descriptions of the partitions and filesystems (sort of clonezilla-like) in files that would be included in backups, and have a bootable restore OS that would know how to get this info from the backuppc server (could be an http request), build the matching filesystems, then run the ssh command to generate a tar image and extract into the right place. Backuppc already does a great job of managing file-level backups but it is somewhat cumbersome to re-install by hand on bare metal and it doesn't automatically keep a description of the layout.
However I would stress that you test a complete disaster recovery scenario for your systems (different technologies) in order to understand whether everything is supported. You don't want to discover a problem in disaster mode :)
I already trust backuppc on the 'save a copy' side. I'd rather not replace that part.
If you need more help, feel free to join the ReaR mailinglist on sourceforge and ask your questions :)
Would a backuppc adapter be feasible?
On Thu, 5 May 2011, Les Mikesell wrote:
On 5/5/2011 3:37 PM, Dag Wieers wrote:
I can recommend ReaR (Relax and Recover) for migrations and cloning systems. I have been working with the Relax and Recover project for the past few months together with a colleague and it now covers a lot of situations:
HWRAID (SmartArray), SWRAID, DRBD, partitions, encrypted partitions, LVM
It supports bootable tapes (OBDR), ISO images and USB media
It supports backup software for restoring (like Bacula, TSM, rsync and others)
And it can also take care of backups (using rsync, tar) using different solutions (NFS, USB, Samba, ...)
It's modular, so with little effort you can implement your own workflow or use-case
What I've really always wanted in this respect is something that would work with backuppc such that you could run something on the source to generate descriptions of the partitions and filesystems (sort of clonezilla-like) in files that would be included in backups, and have a bootable restore OS that would know how to get this info from the backuppc server (could be an http request), build the matching filesystems, then run the ssh command to generate a tar image and extract into the right place. Backuppc already does a great job of managing file-level backups but it is somewhat cumbersome to re-install by hand on bare metal and it doesn't automatically keep a description of the layout.
Well, I've become very fond of rbme lately, but since ReaR supports rsync out of the box, you don't need a separate backup method for it.
But if backuppc has a client, or a configuration, it's very easy to make ReaR aware of it. And then the only configuration you would need is:
BACKUP=BACKUPPC
and it would automatically create a bootable image with your system's layout and the backuppc software/configuration, and even the necessary commands to automatically recover your system when doing:
rear recover
on the rescue prompt. That's how it is done with Bacula, TSM, and others.
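For the backends that already exist, the configuration is similarly small. A sketch of /etc/rear/local.conf using ReaR's built-in NFS backend (server name and path are examples):

    # /etc/rear/local.conf
    OUTPUT=ISO                                  # build a bootable rescue ISO
    BACKUP=NETFS                                # ReaR's built-in file-level backup
    BACKUP_URL=nfs://backupserver/export/rear   # example NFS destination

Then 'rear mkbackup' builds the rescue image plus the backup, and 'rear recover' on the rescue prompt recreates the layout and restores into it.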
However I would stress that you test a complete disaster recovery scenario for your systems (different technologies) in order to understand whether everything is supported. You don't want to discover a problem in disaster mode :)
I already trust backuppc on the 'save a copy' side. I'd rather not replace that part.
Does backuppc take care of restoring HWRAID, SWRAID, DRBD, LVM, partitions, filesystems? If so, then ReaR may not be for you, because ReaR takes care of those items.
If you need more help, feel free to join the ReaR mailinglist on sourceforge and ask your questions :)
Would a backuppc adapter be feasible?
Definitely, join the list and we can help you implement it.
On 5/5/2011 4:22 PM, Dag Wieers wrote:
What I've really always wanted in this respect is something that would work with backuppc [...]
Well, I've become very fond of rbme lately, but since ReaR supports rsync out of the box, you don't need a separate backup method for it.
But if backuppc has a client, or a configuration, it's very easy to make ReaR aware of it. And then the only configuration you would need is:
BACKUP=BACKUPPC
Backuppc usually doesn't need anything on the client side. The server can run rsync or tar over ssh or use smb or talk to rsync in daemon mode. It's basically a couple of perl programs to do the scheduling and provide a web interface wrapped around standard tools. But, if you haven't used it, the thing it does better than any of the similar programs is that it compresses the files and pools all duplicate content with hardlinks so you can keep a much longer history of more hosts online than you would expect. It has an rsync-in-perl implementation to deal with local compressed files while chatting with a stock remote version. And it has a nice web interface to browse/restore files. Or you can use a command line tool to generate a tar image.
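For reference, that command line tool is BackupPC_tarCreate; run on the backuppc server (as the backuppc user), it looks something like this, with host and share names illustrative:

    # stream the most recent backup (-n -1) of host 'clienthost',
    # share '/', as a tar archive on stdout:
    BackupPC_tarCreate -h clienthost -n -1 -s / . > clienthost-root.tar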
and it would automatically create a bootable image with your system's layout and the backuppc software/configuration, and even the necessary commands to automatically recover your system when doing:
I don't really want a separate copy of an 'image' built. I want something to do the grunge work of partitioning and creating the necessary filesystems, then pull the tar image from the backuppc server with an appropriate ssh command.
rear recover
on the rescue prompt. That's how it is done with Bacula, TSM, and others.
You could probably do something very similar by generating the tar image(s) ahead of time from the backuppc server and storing them in your recovery setup. And that would be useful for archiving, offsite, or cloning purposes, but the main thing I want is the ability to boot something that can mindlessly reconstruct a machine from last night's backuppc run straight out of that compressed/pooled storage.
I already trust backuppc on the 'save a copy' side. I'd rather not replace that part.
Does backuppc take care of restoring HWRAID, SWRAID, DRBD, LVM, partitions, filesystems? If so, then ReaR may not be for you, because ReaR takes care of those items.
No, backuppc just saves files and can give you what looks like a tar image (or put them back if the target is working well enough to accept them). That's why I'm interested in something else to do the work up to where you would restore a tar backup. It's not extremely difficult to do by hand from a livecd boot, but automation is always better. Backuppc does handle the more common case of someone wanting a few files back that they accidentally erased very nicely and I don't want to do a whole different backup to cover rebuilding the machine.
If you need more help, feel free to join the ReaR mailinglist on sourceforge and ask your questions :)
Would a backuppc adapter be feasible?
Definitely, join the list and we can help you implement it.
OK, I'm interested... It's probably just a matter of generating whatever description of the underlying storage it needs and plugging in an ssh command to get the data at the right point.
On Thu, 5 May 2011, Les Mikesell wrote:
On 5/5/2011 4:22 PM, Dag Wieers wrote:
and it would automatically create a bootable image with your system's layout and the backuppc software/configuration, and even the necessary commands to automatically recover your system when doing:
I don't really want a separate copy of an 'image' built. I want something to do the grunge work of partitioning and creating the necessary filesystems, then pull the tar image from the backuppc server with an appropriate ssh command.
The rescue image is a boot image doing the grunge work of partitioning and creating the necessary filesystems and pulling the tar image from your server using the appropriate ssh command. (Or whatever you tell it to do in your specific case)
You need to boot something if you are in a disaster. And this 'something' needs to know about your network configuration, your system's layout and needs the necessary tools to restore the backup. That's the bootable rescue image I was referring to. It usually is between 25MB and 50MB depending on the size of the backup client.
The rescue image can be a kernel/ramdisk, or an ISO image, or a bootable USB media, or a bootable OBDR tape, or a PXE instance (if you set everything up to update your PXE server).
rear recover
on the rescue prompt. That's how it is done with Bacula, TSM, and others.
You could probably do something very similar by generating the tar image(s) ahead of time from the backuppc server and storing them in your recovery setup. And that would be useful for archiving, offsite, or cloning purposes, but the main thing I want is the ability to boot something that can mindlessly reconstruct a machine from last night's backuppc run straight out of that compressed/pooled storage.
That's already possible. ReaR can also handle the backup, on the same boot media if size is sufficient (so either OBDR tape, USB media or PXE/network), for cloning or one-shot migrations this use-case is indeed important too.
If you need more help, feel free to join the ReaR mailinglist on sourceforge and ask your questions :)
Would a backuppc adapter be feasible?
Definitely, join the list and we can help you implement it.
OK, I'm interested... It's probably just a matter of generating whatever description of the underlying storage it needs and plugging in an ssh command to get the data at the right point.
Something like that, yes.
On 5/5/11 6:13 AM, Timothy Murphy wrote:
Is there a standard way of copying a working system from one machine to another with different partitions?
I have two CentOS-5.6 machines, say A and B, and I thought I would copy / on sdb10 on machine A to an unused partition sda7 on machine B with rsync. I made the appropriate changes to /etc/fstab and grub.conf, as well as /etc/sysconfig/network-scripts, but found that there were innumerable errors when I booted machine B into the new system, mostly to do with creating device nodes.
That's normal. Anaconda does a bit of magic during the install in detecting the hardware and setting things up for it. Within limits, running kudzu will adjust some of them and sometimes it will kick off automatically when hardware differences are detected.
Also the ethernet connection, which had been eth1 on A, was now eth0 on B, and this did not work.
That will always happen, even with what you think is identical hardware, but if you are at the machine you can fix it manually. If kudzu runs it will set up the interfaces for dhcp and discard your old settings.
This was only a kind of experiment. There is a problem with the partition table on machine A, and I thought it would be useful to have a backup machine with exactly the same setup.
Is this a hopeless enterprise, or can it be done easily?
Neither. It isn't easy, and I think that is a real deficiency in Linux distributions, because most people probably assume they could have their backups working quickly if their machine died. But it also isn't hopeless -- you just have to know as much about hardware and drivers as anaconda does. Or you can cheat and do an install on the new machine first and keep at least the initrd, or all of /boot, and perhaps modprobe.conf and a few other things. In general it is better to plan for a new install and have backups that don't overwrite the system part, but there is not a clean separation between system files and your own, so plan to spend a lot of time sorting things out from your backups of /etc and /var and merging them.
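That 'cheat' amounts to something like the following sketch; the staging directory is an assumption, and the file list follows this thread:

    # on the freshly installed machine B, set the hardware-specific
    # pieces aside before restoring machine A's files over them:
    mkdir -p /root/hw-save
    cp -a /boot /etc/modprobe.conf /etc/fstab /root/hw-save/

    # ... restore the backup over the system ...

    # then put machine B's kernel, initrd and module aliases back:
    cp -a /root/hw-save/boot/* /boot/
    cp -a /root/hw-save/modprobe.conf /etc/modprobe.conf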