On Nov 26, 2008, at 12:45 PM, Karanbir Singh wrote:
Hi,
One of the things that I'd like to get working sometime is a proper post-build test harness for RPMs. At the moment what we do is quite basic: a string of bash scripts, along with rpmdiff and some comparison tools. There must be a better way of doing this, a sort of rpm::unit for lack of a better name, modelled on Rspec. To illustrate:
What is "Rspec"? I'm unfamiliar with the reference.
For Pkg in PackageList: { ensure Pkg.SIGGPG => present; };
This makes sure that the packages we are testing are signed. I guess it could also be ensure => 'blahblah'; to make sure it's signed with a specific key.
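Something along these lines is roughly what I have in mind for that first check; a Python sketch that just shells out to the rpm CLI (the query-format tags and the "(none)" output for unsigned packages are from memory, and newer signing schemes may put the signature in other header tags, so treat the details as approximate):

# sig_check.py -- rough sketch: fail if any package on the command line is unsigned.
import subprocess
import sys

def signature_of(pkg):
    """Return the package's PGP/GPG signature string, or None if it is unsigned."""
    out = subprocess.check_output(
        ["rpm", "-qp", "--nosignature", "--qf",
         "%{SIGPGP:pgpsig}|%{SIGGPG:pgpsig}", pkg])
    fields = out.decode().strip().split("|")
    return None if all(f == "(none)" for f in fields) else "|".join(fields)

if __name__ == "__main__":
    unsigned = [p for p in sys.argv[1:] if signature_of(p) is None]
    for p in unsigned:
        print("UNSIGNED: %s" % p)
    sys.exit(1 if unsigned else 0)

Run it as python sig_check.py *.rpm; a non-zero exit means at least one unsigned package.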
A more complex test might be to make sure that the content of multilib packages matches for overlapping files, and that the overlap is limited to multilib-acceptable content.
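A sketch of that one too, comparing file digests for any path shipped by both arch builds of a package (the column layout of 'rpm -qp --dump', path size mtime digest ..., is assumed, and paths containing spaces are not handled):

# multilib_check.py -- rough sketch: for the two arch builds of one package,
# any file shipped at the same path must have identical content (same digest).
import subprocess
import sys

def file_digests(pkg):
    """Map of file path -> content digest, as reported by 'rpm -qp --dump'."""
    out = subprocess.check_output(["rpm", "-qp", "--nosignature", "--dump", pkg])
    digests = {}
    for line in out.decode().splitlines():
        fields = line.split()       # path size mtime digest mode owner group ...
        if len(fields) >= 4:
            digests[fields[0]] = fields[3]
    return digests

if __name__ == "__main__":
    a, b = file_digests(sys.argv[1]), file_digests(sys.argv[2])
    conflicts = sorted(p for p in set(a) & set(b) if a[p] != b[p])
    for p in conflicts:
        print("MULTILIB CONFLICT: %s" % p)
    sys.exit(1 if conflicts else 0)

Usage would be something like python multilib_check.py foo.i386.rpm foo.x86_64.rpm (names made up).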
An even more complex test might be to make sure that any rpm that drops an init script also has a chkconfig stanza in its %post / %pre (as an example).
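Another sketch, again only illustrative; the rpm flags (-qpl for the file list, --scripts for the scriptlet bodies) are real, the script name and the policy it encodes are mine:

# initscript_check.py -- rough sketch: a package that ships an init script
# should mention chkconfig somewhere in its scriptlets.
import subprocess
import sys

def rpm_qp(pkg, *args):
    out = subprocess.check_output(["rpm", "-qp", "--nosignature"] + list(args) + [pkg])
    return out.decode()

def has_chkconfig_stanza(pkg):
    files = rpm_qp(pkg, "-l").splitlines()
    ships_init = any(f.startswith(("/etc/rc.d/init.d/", "/etc/init.d/")) for f in files)
    if not ships_init:
        return True                                    # nothing to check
    return "chkconfig" in rpm_qp(pkg, "--scripts")     # %pre/%post/%preun/%postun bodies

if __name__ == "__main__":
    bad = [p for p in sys.argv[1:] if not has_chkconfig_stanza(p)]
    for p in bad:
        print("INIT SCRIPT WITHOUT chkconfig: %s" % p)
    sys.exit(1 if bad else 0)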
The first step would, of course, be to look at doing this per package, and then to grow it to work with multiple packages (e.g. doing things like:
Per-pkg unit tests are likely best done by running rpmlint with a custom configuration.
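For what it's worth, rpmlint's configuration file is itself Python, so a custom configuration can live in the harness tree like any other test. A rough sketch (option and check names below are examples only and differ between rpmlint versions):

# centos-rpmlint.config -- rough sketch of a custom rpmlint configuration.
# rpmlint config files are plain Python; setOption() and addFilter() are the usual hooks.
setOption("Vendor", "CentOS")               # complain about packages carrying a foreign Vendor: tag
addFilter(r"unstripped-binary-or-object")   # example filters: waive checks local policy ignores
addFilter(r"no-version-in-last-changelog")

Running rpmlint -f centos-rpmlint.config over the freshly built packages would then supply the per-package half of the harness; its output and exit status feed the pass/fail decision.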
Multi-pkg tests require collecting an approximation to "everything" that is currently in the distro. How the collection is done, and whether the picture of "everything" is accurate, is (or has been) what has prevented better distro QA imho.
TestPackage: { ensure latest in -> http://mirror.centos.org/centos/updates/i386/ }; to make sure that this new package does indeed provide a higher EVR than whatever is in the repo at the end of that URL.
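The EVR half of that could lean on the rpm Python bindings; a rough sketch (how the repo's current EVR is obtained, via repoquery, parsing repodata or whatever, is deliberately left open here and just passed in):

# evr_check.py -- rough sketch: is the candidate package newer than the repo's copy?
import rpm   # the rpm Python bindings

def package_evr(path):
    """(epoch, version, release) strings from a package file's header."""
    ts = rpm.TransactionSet()
    ts.setVSFlags(rpm._RPMVSF_NOSIGNATURES)   # don't choke on as-yet-unknown keys
    with open(path, 'rb') as f:
        hdr = ts.hdrFromFdno(f.fileno())
    epoch = hdr[rpm.RPMTAG_EPOCH]
    return (str(epoch) if epoch is not None else '0',
            hdr[rpm.RPMTAG_VERSION], hdr[rpm.RPMTAG_RELEASE])

def is_newer(candidate_path, repo_evr):
    """repo_evr is an (epoch, version, release) tuple of strings."""
    return rpm.labelCompare(package_evr(candidate_path), repo_evr) > 0

# e.g. is_newer('TestPackage-1.2-3.el5.i386.rpm', ('0', '1.2', '2.el5'))  (names made up)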
That introduces a 3rd layer, in addition to unit & integration testing: checking that, indeed, a package appears in distribution URIs as intended.
One way to avoid the test for a higher EVR is to automate R++ when building. The problem these days is how to recognize templates for Release: so that an autoincrement might be performed reliably. I suspect that EVR autoincrement is an easier problem to solve than detecting failure after the build.
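To make the template problem concrete, a deliberately conservative sketch: it bumps only the plain 'Release: N' or 'Release: N%{?dist}' form and refuses anything else, and that refusal is exactly where the reliability question lives:

# bump_release.py -- rough sketch of "R++": autoincrement Release: in a spec file.
# Only the simple templates are recognized; snapshot tags, nested macros and the
# like are refused rather than guessed at.
import re
import sys

SIMPLE = re.compile(r'^(Release:\s*)(\d+)((?:%\{\?dist\})?\s*)$')

def bump(spec_text):
    lines = spec_text.splitlines(True)
    for i, line in enumerate(lines):
        m = SIMPLE.match(line)
        if m:
            lines[i] = '%s%d%s\n' % (m.group(1), int(m.group(2)) + 1, m.group(3).rstrip())
            return ''.join(lines)
    raise ValueError('Release: template not recognized; bump it by hand')

if __name__ == '__main__':
    sys.stdout.write(bump(open(sys.argv[1]).read()))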
And having this harness pass or fail depending on the test outcome.
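The glue for that can be as dumb as running each check over the candidate packages and collapsing the results into a single exit status; the script names here are the made-up ones from the sketches above:

# harness.py -- rough sketch of the pass/fail glue.
import subprocess
import sys

CHECKS = ['sig_check.py', 'initscript_check.py']   # per-package checks to run

def run_harness(packages):
    failures = 0
    for check in CHECKS:
        rc = subprocess.call([sys.executable, check] + packages)
        print('%-22s %s' % (check, 'PASS' if rc == 0 else 'FAIL'))
        failures += (rc != 0)
    return failures

if __name__ == '__main__':
    sys.exit(1 if run_harness(sys.argv[1:]) else 0)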
The "fail" case is often painful, requiring human intervention. A more automated "fail" proof process flow is perhaps a better approach than a "fail" test.
OTOH, detection is better than not testing at all, even if automated correction or prevention (through processes) is better than "fail" detection.
Perhaps this is not the right list, but most of the people who would end up using this or having to live with me forcing them to use it are on this list... so I thought this would be a good place to start.
Comments? Am I wasting my time?
Doesn't sound like a waste of time to me.
73 de Jeff
- KB