On 2/20/11 8:51 PM, Johnny Hughes wrote:
> But instead, what happens is someone builds the RPMs ... posts them on their blog. All their buddies download them. The build requirements are broken and that causes bugs. A month later, we get them logging in to the forums, the mailing lists, or IRC with broken packages that have the same EVR numbers as ours, so they never get replaced by the real packages, and it is a bad experience for everyone.
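For what it's worth, here is a minimal sketch of that EVR collision, assuming the rpm Python bindings (rpm-python) are installed; the package EVRs below are made up for illustration:

# Why a rebuilt package that reuses our EVR is never treated as an update.
import rpm

ours    = ("0", "1.0.1", "9.el6")   # (epoch, version, release) of the real package
rebuilt = ("0", "1.0.1", "9.el6")   # third-party rebuild that reused the same EVR

result = rpm.labelCompare(ours, rebuilt)
if result > 0:
    print("ours is newer -> yum would replace the rebuild on update")
elif result == 0:
    print("identical EVR -> nothing to update, the broken rebuild stays installed")
else:
    print("the rebuild compares newer -> it would even win over ours")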
That seems very, very unlikely from people who won't use the available SL betas.
> Or, we have thousands of users with varying levels of capability, some of whom are very knowledgeable and would produce data that helps us a lot. Pick a percentage of people whose data is good ... 1%, 10%, 20%, 30% ... the rest of the data is incorrect. How long does it take us to verify which data is correct and which is not, and how does that compare to the time spent if we just build it and test it ourselves?
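Whether that triage pays off comes down to one ratio: the time to check a report divided by the fraction of reports that are good, versus the time to just build and test in-house. A quick back-of-envelope sketch, with purely hypothetical timings:

# Back-of-envelope: triaging community reports vs. building and testing in-house.
# Both timing constants are hypothetical.
verify_minutes_per_report = 20   # time to check whether one submitted result is correct
build_and_test_minutes    = 60   # time to build and test one package ourselves

for good_fraction in (0.01, 0.10, 0.20, 0.30):
    # every report costs verification time, but only the good ones are usable
    cost_per_usable_result = verify_minutes_per_report / good_fraction
    winner = "triage wins" if cost_per_usable_result < build_and_test_minutes else "in-house wins"
    print(f"{good_fraction:.0%} good: ~{cost_per_usable_result:.0f} min per usable result ({winner})")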
If you have a tool that can verify correctness, how can a larger farm of brute-force builds and tests, plus submissions of reproducible recipes for the ones that verify correctly, not speed things up?
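To make "a tool that can verify correctness" a bit more concrete, here is a minimal sketch of the kind of automated comparison such a farm could run against the reference packages; the fields checked and the file paths are assumptions, not an existing CentOS tool:

# Does this rebuilt RPM match the reference package? Compares name/EVR/arch
# and the requires/provides metadata of two package files.
import subprocess

def rpm_query(path, queryformat):
    out = subprocess.check_output(["rpm", "-qp", "--queryformat", queryformat, path])
    return out.decode().strip()

def rpm_caps(path, flag):
    # flag is "--requires" or "--provides"
    out = subprocess.check_output(["rpm", "-qp", flag, path])
    return sorted(set(out.decode().splitlines()))

def looks_equivalent(candidate, reference):
    evr_fmt = "%{NAME} %{EPOCH} %{VERSION} %{RELEASE} %{ARCH}"
    if rpm_query(candidate, evr_fmt) != rpm_query(reference, evr_fmt):
        return False
    for flag in ("--requires", "--provides"):
        if rpm_caps(candidate, flag) != rpm_caps(reference, flag):
            return False
    return True

# Hypothetical usage:
# looks_equivalent("rebuilt/foo-1.0-1.el6.x86_64.rpm", "reference/foo-1.0-1.el6.x86_64.rpm")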