On Tue, 15 Jun 2010, Frank Cox wrote:
> By way of experimentation, I manually changed one of the files in the new version to match what the patch says it should be, then created a new patch file from that and it applies and appears to work fine. (I patched the previous version's file, compared the result to the original and made the same change in the new version's file.)
ugghhh --- doable, but laborious ... ;)
> I have two questions:
> First, am I going about this the right way?
No -- usually one unrolls the old tree and applies the patches to it; then unrolls the new tree in a directory 'next to' the first, and diffs from a point above the top of each.
This produces a new patch set, which may already contain some of what the older patches formerly needed to do (or a wholly different approach, when two forks diverge).
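A minimal sketch, assuming a hypothetical package 'foo' moving from 1.0 to 1.1 with a single local patch (all names here are placeholders):

    # unroll the old tree and apply the existing patch to it
    tar xzf foo-1.0.tar.gz
    mv foo-1.0 foo-1.0.patched
    ( cd foo-1.0.patched && patch -p1 < ../fix-widget.patch )
    # unroll the new tree 'next to' the first
    tar xzf foo-1.1.tar.gz
    # diff from a point above the top of each tree
    diff -Naur foo-1.1 foo-1.0.patched > candidate.patch

The candidate.patch then wants a manual read-through: hunks that merely revert upstream's own changes get dropped, and what remains is the local change set still worth carrying forward.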
> And if so, is there a way to automate the process as described in the previous paragraph?
Early automation of a partially understood technology seems like a premature optimization ;)
> Second, what is the proper convention for handling this in a rpm? The obvious solution seems to be to create new patch files and throw the old ones away, then build the rpm from that. Some of these patches appear to go back several versions, though, so is there a better or more proper way to handle this than just throwing them out and making a whole new set of patches?
A serious developer will usually have available a complete copy of the master upstream, and local branches which are used and discarded without a second thought once the 'fruit' from an approach is 'cherry-picked' [disk space has become inexpensive]. Mere re-packagers can usually get by with less, and simply pluck prior packages containing (in part) tarballs and patches, and diff between two points in time.
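To pluck the pieces out of a prior source package (the package name is again a placeholder), something like:

    # spill the tarball, patches, and spec into the current directory
    rpm2cpio foo-1.0-1.src.rpm | cpio -idmv
    # or install them under the usual rpmbuild tree instead
    rpm -ivh foo-1.0-1.src.rpm

Doing that with two SRPMs from different points in time yields the two sets of sources to diff between.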
This is to some degree a matter of taste and administrative approach. A big fat batch was used in the old and early kernel and libc days to distribute 'nightly deltas', which one would download and apply one after another against a periodic master tarball. As bandwidth availability has grown, this fell by the wayside, and later distributed version control systems ('VCS') emerged as the favored approach there.
The world is moving to building from VCS as well as snap-shotting; for safety's sake, periodically rolling and signing an SRPM, or saving a file containing a signed set of checksums for a backup tarball, comes to mind as 'good practice'. See: http://www.unrealircd.com/ and the prior experience of the Linux kernel folks, as well as at Fedora and Red Hat, with the issue of detecting possibly hostile substituted checkins.
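For the rolling and signing, a rough sketch (assuming the spec lives in the usual ~/rpmbuild layout and a %_gpg_name is configured in ~/.rpmmacros; names are placeholders):

    # roll a fresh SRPM from the spec file
    rpmbuild -bs ~/rpmbuild/SPECS/foo.spec
    # sign it (rpm reads %_gpg_name from ~/.rpmmacros)
    rpm --addsign ~/rpmbuild/SRPMS/foo-1.1-1.src.rpm
    # or keep a signed checksum file beside a backup tarball
    sha256sum foo-1.1.tar.gz > foo-1.1.tar.gz.sha256
    gpg --clearsign foo-1.1.tar.gz.sha256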
> I have learned a lot more about patch and diff tonight than I ever needed to know before. Very cool stuff, and very useful.
I wrote this introduction to let people get an early success doing patching and SRPM building:
http://www.owlriver.com/tips/patching_srpms/
and it is designed to be approachable.
-- Russ herrold