On Fri, 2006-10-13 at 14:21 -0500, Alex Palenschat wrote:
> > Instead of blocking on (lack-of) feedback, I'd suggest considering
> > something like:
> > 1. Put pkgs in "testing"
> > 2. If no bugs reported after X days/weeks, move out of testing
> >
> > At least this way nothing gets perpetually stalled in testing.
> >
> > -- Rex
>
> Just my 2 cents, but I have used some packages in the testing repo
> successfully. I certainly am not in a position to test them in a large
> production/enterprise environment, and outside of the testing repo I
> try really hard to stay as vanilla as I can with my boxes. But I'm not
> certain what I can do except troll this list and say WorksForMe when
> somebody asks. I wouldn't guess the signal-to-noise ratio would be
> acceptable if, every time I successfully installed and used package x
> for a week, I posted a new thread saying it worked.
>
> Perhaps you could find a method of requiring prospective users of the
> testing repo to subscribe to this mailing list. That way more users
> would see when a question/poll was asked regarding a given package.
>
> Just an idea,
>
> Alex

<snip sig stuff>

Testers need to be recruited and, possibly, take "ownership", as
suggested in another post. All must have time available, naturally. How
much time may depend heavily on the schedule to be met, on the
complexity of the testing attempted, on the coverage desired, and on
the size of a particular package's test team.

Two "types" of testers are available: 1) those with a need for the
package under test, and 2) those without such a need.

Type 1 may not be willing to test because "need" implies a *possible*
requirement for some RAS (Reliability, Availability, Serviceability)
commitments. Testing, for them, may pose unacceptable risks. In larger
facilities with resources, the opposite may hold: equipment, scripted
scenarios, and personnel may already be in place to provide this
service for packages of interest to that organization.
Type 2 is most likely to be able to afford the small increase in risk,
but may not have the interest. This can only be offset by inspiring an
altruistic mindset and by offering the rewards that come from a solid
recognition process and tolerance for the learning that may be
involved. The same resource considerations mentioned above still apply,
of course.

Regardless, each package should have a "group" of cooperating testers
who can share load and experience, allowing real life its proper place,
while ensuring the broadest possible coverage of the desired test
scenarios within the desired time frames. Details would need
development, but in a nutshell, a package's "Test Lead" recruits team
members to increase coverage, redundancy, and the reliability and
applicability of results. This implies that skills other than technical
ones may be of paramount importance: "a great engineer is not
necessarily a great manager", for a lot of different reasons.

Initial bottlenecks:
- fear of commitment - lots of "single" folks in the pool;
- a view of test members as ... "intolerant" of those less
  knowledgeable (whether this is real or deserved is moot);
- lack of well-defined test objectives (e.g. coverage, regression, ...)
  and fear of actually having to develop them;
- having well-defined test objectives and actually having to commit to
  achieving them;
- fear of missing a "biggie" that brings shame and disrepute to the
  individual and the team;
- fear of looking like a fool;
- fear of depending on certain support, finding it lacking, and ending
  up disillusioned...

Hmmm... a typical corporate management problem, I think.

My take: if your testing is only WFM, it's somewhat of a waste to have
anyone outside the current development staff involved. Just have
multiple staff members do it on dissimilar setups. After all, you are
only testing that you can duplicate something similar to RH quality;
the bug fixes they release tell you about that.
If you want the CentOS name to be more than just "repackaged RH", then
you might want to see whether it is *feasible* to do more in this area.

MHO

-- Bill