On Nov 28, 2008, at 10:56 AM, Karanbir Singh wrote:
Jeff Johnson wrote:
What is "Rspec"? I'm unfamiliar with the reference.
Essentially a unit-testing framework that works on specifications rather than tests. Let me see if I can come up with a semi-functional example to demonstrate what I have in mind.
Yes, I found Rspec soon after I asked.
The Rspec agile approach, populating the desired test framework at a high level *even if the tests do nothing*, is dead-on imho. What tends to happen with testing is that interest wanes, and round 'tuits go missing, so tests that are incrementally added to a framework often don't accomplish what was originally intended (in my experience).
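Concretely, something like this (pytest standing in for Rspec here, just to sketch the idea in Python; the spec names are invented, the point is a skeleton of do-nothing specs):

    # Sketch only: a spec-first skeleton where every test exists but
    # is pending. Fill in bodies incrementally; the goal stays visible.
    import pytest

    class TestPackageSpec:
        """High-level specs for a built package, written up front."""

        @pytest.mark.skip(reason="pending")
        def test_evr_compares_higher_than_previous_build(self):
            """EVR must sort above the last released build."""

        @pytest.mark.skip(reason="pending")
        def test_multilib_files_land_in_correct_libdirs(self):
            """i386/x86_64 payloads must end up in the right paths."""

Run under pytest, the skipped specs show up in every report, so the testing goal is stated even before any test does anything.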
Per-pkg unit tests are likely best done by running rpmlint with a custom configuration.
I've looked at rpmlint a few times; let me look again. Perhaps most of this might be plumbable through it.
The trick with rpmlint, like all lints, is to actively disable the spewage until the fire hose is merely a trickle. Then proceed by turning on whatever looks useful. If you expect rpmlint *by default* to find all the errors, well, your definition of "error" is likely going to be different than rpmlint's.
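E.g. (if I recall correctly, an old-style rpmlint config is just Python with addFilter() in scope; the patterns below are illustrative, not a recommended set):

    # Illustrative rpmlint config sketch: kill the fire hose first,
    # then re-enable checks one at a time as they prove useful.
    # Check names are examples; pick your own from rpmlint -i output.
    addFilter("W: .* summary-ended-with-dot")
    addFilter("W: .* no-version-in-last-changelog")
    addFilter("E: .* no-signature")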
I watched RedHat QA struggle with Mandrake packaging policies applied to a RedHat distro for a year when rpmlint was introduced. Somehow the QA trolls never could grasp the insanity of Mandrake != RedHat packaging policies, and so using rpmlint was eventually abandoned.
An agile Rspec approach enables the QA trolls to actively control the testing goal rather than reactively trying to assign a priority to various flaws.
That introduces a 3rd layer, in addition to unit & integration testing: checking that a package indeed appears in distribution URIs as intended.
I think some / most of this info might be extractable from createrepo'd metadata. Will need to dig more into that; at the moment I'm just looking at individual package testing and some collection stuff, e.g. making sure we get the right multilib stuff into the right place.
Some of the tests you wish for cannot be performed by looking at single packages, only by looking at the collection of package metadata.
The metadata aggregation can be done by createrepo, but the XML generated by createrepo discards information from *.rpm packages, and adds additional information from comps.
Whether that is a good thing (or not) depends on one's testing goals. E.g. I'm not at all sure why half of the items on the proposed verifytree feature bullet list are there. Obsoletes: loops? Self-referential dependencies? Whatever ...
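For the multilib placement case above, a collection-level pass over primary.xml is about the minimum; a rough sketch (the paths and namespace follow the usual repodata conventions, and the "same EVR on both arches" rule is an assumption about what the check should be):

    # Sketch of a collection-level check over createrepo metadata:
    # flag packages shipped for both i386 and x86_64 whose EVRs differ.
    import gzip
    import xml.etree.ElementTree as ET

    NS = "{http://linux.duke.edu/metadata/common}"

    def evr_by_arch(primary_xml_gz):
        """Map (name, arch) -> (epoch, ver, rel) from primary.xml.gz."""
        seen = {}
        with gzip.open(primary_xml_gz) as f:
            for pkg in ET.parse(f).getroot().findall(NS + "package"):
                name = pkg.findtext(NS + "name")
                arch = pkg.findtext(NS + "arch")
                v = pkg.find(NS + "version")
                seen[(name, arch)] = (v.get("epoch"), v.get("ver"),
                                      v.get("rel"))
        return seen

    def multilib_mismatches(seen):
        """Names present for both i386 and x86_64 with differing EVRs."""
        for (name, arch), evr in seen.items():
            if arch == "i386" and seen.get((name, "x86_64"), evr) != evr:
                yield name

    for name in multilib_mismatches(evr_by_arch("repodata/primary.xml.gz")):
        print("multilib EVR mismatch:", name)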
An alternative to repo metadata might be just to import *complete* metadata into your own database. A database, not XML, is more useful for QA imho. See sophie.zarb.org for a very intelligent postgres database that has *complete* *.rpm metadata, where queries to index additional tag data can be retrofitted, and even remote queries or web display can all be done rather easily.
All you can do with XML is download the next generation and scratch your cornea all over again with the pointy angle brackets. ;-)
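A rough sketch of the import side, using sqlite rather than postgres for brevity (the tag selection and schema are illustrative; Sophie does far more):

    # Pull complete headers out of *.rpm files with the rpm Python
    # bindings and index whichever tags you care about.
    import os
    import sqlite3
    import rpm

    ts = rpm.TransactionSet()

    db = sqlite3.connect("pkgmeta.db")
    db.execute("""CREATE TABLE IF NOT EXISTS pkg
                  (name TEXT, epoch TEXT, version TEXT,
                   release TEXT, arch TEXT, sourcerpm TEXT)""")

    def index_rpm(path):
        # Read the full header from the package file ...
        fd = os.open(path, os.O_RDONLY)
        try:
            h = ts.hdrFromFdno(fd)
        finally:
            os.close(fd)
        # ... then store the tags this sketch happens to index.
        db.execute("INSERT INTO pkg VALUES (?, ?, ?, ?, ?, ?)",
                   (h[rpm.RPMTAG_NAME], h[rpm.RPMTAG_EPOCH],
                    h[rpm.RPMTAG_VERSION], h[rpm.RPMTAG_RELEASE],
                    h[rpm.RPMTAG_ARCH], h[rpm.RPMTAG_SOURCERPM]))
        db.commit()

Once the headers are in a real database, retrofitting a new indexed tag is an ALTER TABLE and a re-import, not a metadata format change.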
One way to avoid the test for a higher EVR is to automate R++ when building. The problem these days is how to recognize templates for Release: so that an autoincrement might be performed reliably. I suspect that EVR autoincrement is an easier problem to solve than detecting failure after the build.
I agree with the auto bump, or a check at the buildsystem level to make sure that the package-to-build is indeed higher. It's what's in place at the moment here. However, it's not easy to enforce on the main distro level stuff, since we need to 'inherit' Rel: Ver: from upstream and match it exactly.
Sure it's easy to enforce; what sort of clue bat do you want added to rpmlib? ;-)
Why do you want *exactly* the same Release:? That's much like RedHat QA trolls being cornfused by Mandrake packaging policies imho. If all the patches are included, and the buildsys is configured identically for a CentOS rebuild, does it really matter what value is in the Release: tag? And if it *does* matter to CentOS, then adding a substitute for buildrelease tracking is likely necessary.
If the package is rebuilt, then the Release: should always be bumped, even if nothing in the package changed; the build system and build tools are different, and it's really hard to find issues with two apparently identical packages that behave differently.
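For the common templates an R++ pass before building is nearly trivial; a sketch (it assumes the plain "Release: N..." form, which is exactly the template-recognition problem mentioned above):

    # Bump the leading integer of the first Release: line in a spec.
    # Only handles "Release: N..." templates; anything fancier needs
    # real template recognition.
    import re

    def bump_release(spec_text):
        def bump(m):
            return m.group(1) + str(int(m.group(2)) + 1) + m.group(3)
        return re.sub(r"^(Release:\s*)(\d+)(.*)$", bump,
                      spec_text, count=1, flags=re.M)

    # e.g. "Release: 3%{?dist}" -> "Release: 4%{?dist}"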
And having this harness pass or fail depending on the test outcome.
The "fail" case is often painful, requiring human intervention. A more automated "fail" proof process flow is perhaps a better approach than a "fail" test. OTOH, detection is better than not testing at all, even if automated correction or prevention (through processes) is better than "fail" detection.
I think you put it better than I was able to; the whole aim is to catch these things before they break or cause issues on production machines out there. We don't really churn out so many packages per day that such reporting would get noisy and irritating, as yet.
Could you post a pointer to your current CentOS QA scripts please? I've likely asked before, but have lost the memories.
73 de Jeff