Hi All;
I'm awaiting a new Linux laptop that will be my primary work machine. I want to implement a strategy that lets me revert to a former state as easily as possible. My primary concern is a scenario where I apply system updates and they break something that is critical for me.
I wonder if a simple rsync script would work. If so, here's what I'm thinking:
1) updates are available so I execute the rsync script which pulls any updated files from my laptop to a backup server/drive
2) apply updates
3) if something breaks (even if I can no longer log in) I boot the laptop, run the rsync script in the opposite direction (push files from the backup drive to the laptop)
I assume that if I were to execute step 3 above, my system would be in the exact state it was in before I ran the updates. Is this a correct assumption? Are there better approaches?
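Roughly, I picture the backup step (1) as something like this; the host name and paths here are just placeholders, nothing is decided yet:

# pull the root filesystem to the backup server over ssh (run as root)
rsync -aH --delete --numeric-ids --one-file-system / root@backupserver:/backup/laptop/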
Thanks in advance..
Kevin Kempter wrote:
Hi All;
I'm awaiting a new Linux laptop that will be my primary work machine. I want to implement a strategy that lets me revert to a former state as easily as possible. My primary concern is a scenario where I apply system updates and they break something that is critical for me.
I wonder if a simple rsync script would work. If so, here's what I'm thinking:
1) updates are available so I execute the rsync script which pulls any updated files from my laptop to a backup server/drive
2) apply updates
3) if something breaks (even if I can no longer log in) I boot the laptop, run the rsync script in the opposite direction (push files from the backup drive to the laptop)
I assume that if I were to execute step 3 above, my system would be in the exact state it was in before I ran the updates. Is this a correct assumption?
Depends in part on the rsync commands, the file structure, and the order of operations. Restoring over a running system would overwrite files that are in use, particularly in /etc and /var - not a good idea. Restoring from a backup of a live system would restore copies of files that might have been in the process of being changed. It would be safer to do both the backup and the restore from a live CD. You would also want to do the backup/restore on a per-filesystem basis. Assuming you have /, /boot, and /home:
rsync --archive --delete --hard-links --one-file-system / /backup/laptop/
rsync --archive --delete --hard-links --one-file-system /boot/ /backup/laptop/boot/
rsync --archive --delete --hard-links --one-file-system /home/ /backup/laptop/home/
On restore you would need to mount and restore / first, then mount the other partitions and restore them.
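For example, booted from a live CD the restore might look roughly like this (the device names, mount points and backup path are only assumptions, and this presumes the partition layout is unchanged):

# mount the laptop's partitions and the backup drive (sda2=/, sda1=/boot, sda3=/home are just examples)
mount /dev/sda2 /mnt/sysimage
mount /dev/sda1 /mnt/sysimage/boot
mount /dev/sda3 /mnt/sysimage/home
mount /dev/sdb1 /mnt/backup

# push the backed-up trees back, / first, then the other filesystems
rsync --archive --delete --hard-links --exclude=/boot --exclude=/home /mnt/backup/laptop/ /mnt/sysimage/
rsync --archive --delete --hard-links /mnt/backup/laptop/boot/ /mnt/sysimage/boot/
rsync --archive --delete --hard-links /mnt/backup/laptop/home/ /mnt/sysimage/home/
# if the bootloader itself was damaged, grub may also need to be reinstalled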
Are there better approaches?
Perhaps, using other backup tools (BackupPC has been mentioned favorably recently), but your approach should be workable. However, this sounds like a time/labor-intensive exercise every time there are updates, for a low probability of fatal problems with the OS. Just backing up user files would be a lot faster and easier.
Phil
Kevin Kempter wrote:
Hi All;
I'm awaiting a new Linux laptop that will be my primary work machine. I want to implement a strategy that lets me revert to a former state as easily as possible. My primary concern is a scenario where I apply system updates and they break something that is critical for me.
I wonder if a simple rsync script would work. If so, here's what I'm thinking:
1) updates are available so I execute the rsync script which pulls any updated files from my laptop to a backup server/drive
2) apply updates
3) if something breaks (even if I can no longer log in) I boot the laptop, run the rsync script in the opposite direction (push files from the backup drive to the laptop)
I assume that if I were to execute step 3 above, my system would be in the exact state it was in before I ran the updates. Is this a correct assumption? Are there better approaches?
Thanks in advance..
Taking a disk image snapshot with something like Clonezilla might be an alternative for you to consider.
Kevin Kempter kevin@kevinkempterllc.com writes:
Hi All;
I'm awaiting a new Linux laptop that will be my primary work machine. I want to implement a strategy that lets me revert to a former state as easily as possible. My primary concern is a scenario where I apply system updates and they break something that is critical for me.
I wonder if a simple rsync script would work. If so, here's what I'm thinking:
1) updates are available so I execute the rsync script which pulls any updated files from my laptop to a backup server/drive
2) apply updates
3) if something breaks (even if I can no longer log in) I boot the laptop, run the rsync script in the opposite direction (push files from the backup drive to the laptop)
I assume that if I were to execute step 3 above, my system would be in the exact state it was in before I ran the updates. Is this a correct assumption? Are there better approaches?
Thanks in advance..
Look at rsnapshot, which is rsync-based and enables hourly, daily, weekly and monthly rotating backups.
This is what I used on my laptop, backing up to an external USB HD. It provides an OS X Time Machine-like scheme, albeit without the fancy GUI.
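A minimal rsnapshot.conf for this kind of setup might look roughly like the excerpt below; the mount point and retention counts are only examples, and note that rsnapshot insists on tabs, not spaces, between fields:

# /etc/rsnapshot.conf (excerpt)
snapshot_root	/mnt/usbdisk/snapshots/
interval	hourly	6
interval	daily	7
interval	weekly	4
interval	monthly	3
backup	/etc/	localhost/
backup	/home/	localhost/
backup	/var/lib/	localhost/

The intervals are then driven from cron, e.g. "rsnapshot hourly" every few hours and "rsnapshot daily" once a night.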
HTH,
Marc Schwartz
Is there a way to "freeze" a list of installed packages and exact versions, then tell yum (or any other tool/script) to install exactly these versions, either on the same or another system?
I'm asking from the perspective of being able to update and test in my test or staging environment; then, when the tests pass, I want to replicate the exact list of package versions in production.
Thanks,
--Amos
On 11/12/08, Marc Schwartz marc_schwartz@comcast.net wrote:
Kevin Kempter kevin@kevinkempterllc.com writes:
Hi All;
I'm awaiting a new Linux laptop that will be my primary work machine. I want to implement a strategy that lets me revert to a former state as easily as possible. My primary concern is a scenario where I apply system updates and they break something that is critical for me.
I wonder if a simple rsync script would work. If so, here's what I'm thinking:
1) updates are available so I execute the rsync script which pulls any updated files from my laptop to a backup server/drive
2) apply updates
3) if something breaks (even if I can no longer log in) I boot the laptop, run the rsync script in the opposite direction (push files from the backup drive to the laptop)
I assume that if I were to execute step 3 above, my system would be in the exact state it was in before I ran the updates. Is this a correct assumption? Are there better approaches?
Thanks in advance..
Look at rsnapshot, which is rsync-based and enables hourly, daily, weekly and monthly rotating backups.
This is what I used on my laptop, backing up to an external USB HD. It provides an OS X Time Machine-like scheme, albeit without the fancy GUI.
HTH,
Marc Schwartz
Amos Shapira wrote:
Is there a way to "freeze" a list of installed packages and exact versions, then tell yum (or any other tool/script) to install exactly these versions, either on the same or another system?
There isn't a need for an explicit feature. Just update one server, test it, then copy all of /var/cache/yum/updates/packages to the other machines. You can then say "rpm -Fvh *.rpm" in that directory to bring that machine up to the same level as the other one.
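In rough script form (the host name is a placeholder, and this assumes root ssh access between the boxes):

# on the tested machine: copy the cached update RPMs to another machine
rsync -av /var/cache/yum/updates/packages/ otherbox:/var/cache/yum/updates/packages/

# on the other machine: freshen the installed packages to those versions
cd /var/cache/yum/updates/packages && rpm -Fvh *.rpm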
We don't do it exactly that way. We copy the current package cache to new machines after installation to speed a regular "yum update," as it needs only enough bandwidth to download what's changed since updating the package cache clone. Because of CentOS/RHEL's policy of not upgrading versions, only patching the released version, we haven't had any serious problems by allowing production systems to track the current yum repositories.
What about disaster recovery? Assuming I take the approach you suggest and have to restore the cache (with the tested versions) after it's lost in a disaster, is there a way to do that (short of backing it up)? I'd rather be able to keep a list of package versions instead of having to move around entire cache backups across continents.
Thanks,
--Amos
On 11/15/08, Warren Young warren@etr-usa.com wrote:
Amos Shapira wrote:
Is there a way to "freeze" a list of installed packages and exact versions, then tell yum (or any other tool/script) to install exactly these versions, either on the same or another system?
There isn't a need for an explicit feature. Just update one server, test it, then copy all of /var/cache/yum/updates/packages to the other machines. You can then say "rpm -Fvh *.rpm" in that directory to bring that machine up to the same level as the other one.
We don't do it exactly that way. We copy the current package cache to new machines after installation to speed a regular "yum update," as it needs only enough bandwidth to download what's changed since updating the package cache clone. Because of CentOS/RHEL's policy of not upgrading versions, only patching the released version, we haven't had any serious problems by allowing production systems to track the current yum repositories.
Amos Shapira wrote:
What about disaster recovery? Assuming I take the approach you suggest and have to restore the cache (with the tested versions) after it's lost in a disaster, is there a way to do that (short of backing it up)? I'd rather be able to keep a list of package versions instead of having to move around entire cache backups across continents.
something like this? rpm -qa > installed_packages
Looks good.
And is there a tool which can read this output and fetch the right packages from the right repositories, or do I have to write my own?
Would a script which massages this into an input for "| xargs yum install" be the way to go?
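Something as crude as this, perhaps (just a sketch, assuming the list was made with a plain "rpm -qa" and that those exact versions are still in the repos):

# installed_packages was created earlier with: rpm -qa > installed_packages
xargs yum -y install < installed_packages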
Thanks!
--Amos
On 11/15/08, Nicolas Thierry-Mieg Nicolas.Thierry-Mieg@imag.fr wrote:
Amos Shapira wrote:
What about disaster recovery? Assuming I take the approach you suggest and have to restore the cache (with the tested versions) after it's lost in a disaster, is there a way to do that (short of backing it up)? I'd rather be able to keep a list of package versions instead of having to move around entire cache backups across continents.
something like this? rpm -qa > installed_packages
Amos Shapira wrote:
Assuming I take the approach you suggest and have to restore the cache (with the tested versions) after it's lost in a disaster, is there a way to do that (short of backing it up)?
I don't see why this is a big deal.
First off, even way out at the end of a RHEL/CentOS release's lifetime, the full set of patched packages will be just a few gigs of data. (It almost *has* to be smaller than a single-layer DVD...the patch set can't practically be bigger than the original OS it's patching.) Today, a couple of gigs costs nearly nothing to store, and 3 years from now when the last CentOS 5 patches come out, storage and bandwidth costs will be 1/4 what they are now.
Second, this scheme replicates the cache to multiple machines. Most of the time, only a single machine will get killed at a time, so you can get the cache copy from one of its neighbors. If you're worried about a disaster that can take out a whole building's worth of machines at once, surely the cost of a few gigs of professional off-site storage isn't out of the question?
I keep the most precious 4 GB of my home data backed up on Amazon S3. This costs me about 75 cents a month. The solution I'm proposing would cost maybe half that by the end of its useful life, and more like 1/10 of that today. The total cost over three years is maybe the cost of lunch for you and a few friends. You'll spend more than that in your company's time writing the script to pull the data from a public repository, and on top of that it's a waste of someone else's bandwidth to treat them as your company's backup system.
move around entire cache backups across continents.
Continents?? What, now we're worried about protecting against total continental destruction? Maybe you're thinking you'll need those backups to help reboot civilization on another continent?
Sheesh, talk about overengineering... I would have thought that sending backups to another time zone would be more than sufficient.
2008/11/15 Warren Young warren@etr-usa.com:
[ long rant in favor of keeping the entire yum cache instead of a list of package versions deleted ]
move around entire cache backups across continents.
Continents?? What, now we're worried about protecting against total continental destruction? Maybe you're thinking you'll need those backups to help reboot civilization on another continent?
Sheesh, talk about overengineering... I would have thought that sending backups to another time zone would be more than sufficient.
My production and test/staging servers are over 12,000 km (or 7700 miles) away from my office. I need to be able to move configurations around between my office and two separate hosted sites. Also I have around 10 different system configuration prototypes ("roles") with more expected to be added - so every such "cache" is multiplied by that number. I pay for the traffic and we easily hit our traffic quota during a busy month of tests and updates, not to mention the huge drag on time to copy things around back and forth.
On top of that, the cache is not reliable: it would contain deleted packages, packages installed manually on one system for testing, packages which were replaced by newer ones, and so on. It can also be cleaned out, accidentally or when the disk runs out of space.
Your solution of "it's cheap so waste it" is not just wasteful but also unsustainable as our operation grows (and possibly even at its current size).
Thanks for the advice, but the more I think about this solution the more I'm convinced it's not going to help me.
I'll try to find or build something based on "rpm -qa" and "yum".
Cheers,
--Amos
2008/11/16 Nicolas Thierry-Mieg Nicolas.Thierry-Mieg@imag.fr:
Amos Shapira wrote:
I'll try to find or build something based on "rpm -qa" and "yum".
No reason to use yum: it's for resolving dependencies, but in your case they would already be resolved. Instead you could more simply and reliably wget the files and rpm -U them.
Right, except that this might require re-implementing yum's ability to find and download the right package from the right repository. I'm not sure what the advantage of wget+rpm over a simple "yum install" would be, then.
In the meantime, on another list I got the following recipe:
A. Install the missing packages.
On the source machine:
$ rpm -qa --queryformat="%{NAME}.%{ARCH}\n" | sort > package_list.txt
On the target machine:
$ yum install -y $(cat package_list.txt)
B. Remove "extra" packages: (On the target machine:) $ rpm -qa --queryformat="%{NAME}-%{ARCH}\n" | sort > package_list_new.txt $ yum remove $(diff package_list_new.txt package_list.txt | grep ">" | cut -d">" -f2)
The provider of the above says it works for him on Fedora, CentOS and RHEL, so it sounds like it's been tested for a while.
I'll test it further before actually using it.
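For the "exact versions" part I may extend the query format to include version and release, since yum also accepts full name-version-release.arch specs; an untested variation:

$ rpm -qa --queryformat="%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" | sort > package_list.txt
$ yum install -y $(cat package_list.txt)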
Cheers,
--Amos
On Fri, 2008-11-14 at 15:08 -0700, Warren Young wrote:
Amos Shapira wrote:
Is there a way to "freeze" a list of installed packages and exact versions, then tell yum (or any other tool/script) to install exactly these verions either on the same or another systme?
There isn't a need for an explicit feature. Just update one server, test it, then copy all of /var/cache/yum/updates/packages to the other machines. You can then say "rpm -Fvh *.rpm" in that directory to bring that machine up to the same level as the other one.
Actually, that's the problem that Red Hat Satellite Server can solve. You can approve packages for deployment. Thus, when provisioning new servers, they get updates from the approved list. And servers are grouped by class. For the free version, one should investigate Project SpaceWalk. http://www.redhat.com/spacewalk/
-I
2008/11/16 Ian Forde ian@duckland.org:
Actually, that's the problem that Red Hat Satellite Server can solve. You can approve packages for deployment. Thus, when provisioning new servers, they get updates from the approved list. And servers are grouped by class. For the free version, one should investigate Project SpaceWalk. http://www.redhat.com/spacewalk/
Thanks for the pointer. I looked at it a few weeks ago, back when there was some news about it; it looked promising but I didn't have time to learn it in depth. I'll keep it in my stack of things to look at.
Cheers,
--Amos
Amos Shapira wrote:
2008/11/16 Ian Forde ian@duckland.org:
Actually, that's the problem that Red Hat Satellite Server can solve. You can approve packages for deployment. Thus, when provisioning new servers, they get updates from the approved list. And servers are grouped by class. For the free version, one should investigate Project SpaceWalk. http://www.redhat.com/spacewalk/
Thanks for the pointer. I looked at it a few weeks ago, back when there was some news about it; it looked promising but I didn't have time to learn it in depth. I'll keep it in my stack of things to look at.
I just wrote a HowTo on this topic. Spacewalk can help you manage software versions across different environments using software channels. The document is available here: http://wiki.centos.org/HowTos/PackageManagement/Spacewalk
Regards,
--
Patrice Guay
patrice.guay@nanotechnologies.qc.ca
Thanks! (and sorry for the late response).
On 12/19/08, Patrice Guay patrice.guay@nanotechnologies.qc.ca wrote:
Amos Shapira wrote :
2008/11/16 Ian Forde ian@duckland.org:
Actually, that's the problem that Red Hat Satellite Server can solve. You can approve packages for deployment. Thus, when provisioning new servers, they get updates from the approved list. And servers are grouped by class. For the free version, one should investigate Project SpaceWalk. http://www.redhat.com/spacewalk/
Thanks for the pointer. I looked at it a few weeks ago, back when there was some news about it; it looked promising but I didn't have time to learn it in depth. I'll keep it in my stack of things to look at.
I just wrote a HowTo on this topic. Spacewalk can help you manage software versions across different environments using software channels. The document is available here: http://wiki.centos.org/HowTos/PackageManagement/Spacewalk
Regards,
Patrice Guay patrice.guay@nanotechnologies.qc.ca
on 11-14-2008 1:09 PM Amos Shapira spake the following:
Is there a way to "freeze" a list of installed packages and exact versions, then tell yum (or any other tool/script) to install exactly these versions, either on the same or another system?
I'm asking from the perspective of being able to update and test in my test or staging environment; then, when the tests pass, I want to replicate the exact list of package versions in production.
Thanks,
--Amos
Why not just clone the system? Put it on DVD, and make images of those disks available. If you are installing on a system in a country with limited bandwidth, mail or otherwise ship a DVD there.
That will be the easiest way. Or just make your own repo with only the packages you want, and set new systems to only use that repo.
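Building such a repo is only a couple of commands; roughly (the paths, host name and repo id here are made up, and this assumes the distro GPG key is already imported on the clients):

# put the approved RPMs in one directory and generate the repo metadata
mkdir -p /srv/approved/5/x86_64
cp /path/to/tested-rpms/*.rpm /srv/approved/5/x86_64/
createrepo /srv/approved/5/x86_64

# on each client, point yum at it (and disable the stock repos if it should be the only source)
cat > /etc/yum.repos.d/approved.repo <<'EOF'
[approved]
name=Approved packages
baseurl=http://repohost/approved/5/x86_64/
enabled=1
gpgcheck=1
EOF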
But if the systems touch the internet in any way, you will just be creating a security nightmare for yourself. If your software is so finicky that an update breaks it, you need to redesign the app. The enterprise distros don't change that much, and if you have a test system, you would always test the updates there first, then script the updates and any tweaks that need to be done from a central server. You could push updates this way to any system with a connection, and send a CD or DVD to those without one.