On 04/14/2011 05:27 AM, Mister IT Guru wrote:
On Thu, 2011-04-14 at 10:03 +0100, Michael Simpson wrote:
On 14 April 2011 09:34, Mister IT Guru <misteritguru@gmx.com> wrote:
Hmmm -- I don't mind waiting, I'm trying to hold onto my enthusiasm here. I can guess, or try and piece together an answer for myself, but just some inkling that I was noticed, or something. -- Mister IT Guru
I wondered the same thing, so I channelled my inner "misc" and opened up the tests in my favourite text editor.
I understand the phrasing of my question assumed that I hadn't done that - I did the same, and when I noticed the purpose of the scripts and what they do, I wondered why this is being "repeated".
They test for basic functionality, i.e. create a basic configuration, switch the service on, check that it is working, and give an exit status of pass or fail.
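A minimal sketch of a smoke test in that shape might look like the following. The two `check` lines are stand-ins only; a real script would run something like `service httpd start` and then probe the service, and the actual CentOS QA scripts may be structured quite differently.

```shell
#!/bin/sh
# Sketch of a pass/fail service smoke test: run each step, report
# PASS or FAIL per step, and return non-zero if anything failed.

run_checks() {
    status=0
    check() {
        if "$@" >/dev/null 2>&1; then
            echo "PASS: $*"
        else
            echo "FAIL: $*"
            status=1
        fi
    }
    # Stand-ins for real steps, e.g. writing a config file,
    # "service httpd start", and a curl against http://localhost/
    check true
    check test -r /etc/hosts
    return $status
}

run_checks
```

A real test script would finish with `exit $status` so callers (or a harness) can read the overall result directly.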
This I understand - what I would like to know is: is this a new thing, or is this part of the normal procedure? Are these scripts uploaded to an SVN repo where we can get to them and test against our own builds if we are so inclined? etc etc
The conversation above suggests that the gold standard would be for each service's test to also be able to successfully install the service onto a minimal set-up, rather than having separate install and test scripts.
If this is standard, then I'm surprised that the distribution mechanism is the list. (I'm sure I'm wrong, but understand, this is all I can see as a newbie, or even as someone who was just browsing to see how projects are run.) I just assumed that the devs used automated script-deployment mechanisms to do this kind of thing.
I'd use Nagios as a front end, with NRPE on my target machine with a minimal install, push the scripts from an SVN repo to the test box, and then look in Nagios to see what the results of the scripts are. That way, I know that when I get green across the board, everything is all good!
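For the NRPE idea above, a wrapper could translate a test script's exit status into the Nagios plugin convention (exit 0 = OK, 2 = CRITICAL). This is only a sketch of the approach being described; the paths and command name are made up for illustration.

```shell
#!/bin/sh
# Hypothetical NRPE-style wrapper around a QA test script.
# Nagios plugin convention: exit 0 = OK, exit 2 = CRITICAL.

nagios_wrap() {
    if "$@" >/dev/null 2>&1; then
        echo "OK - '$*' passed"
        return 0
    else
        echo "CRITICAL - '$*' failed"
        return 2
    fi
}

# In nrpe.cfg you would point a command at this wrapper, e.g.
# (invented names):
#   command[check_qa_httpd]=/usr/local/bin/qa_wrap /usr/local/qa/t_httpd.sh
nagios_wrap "${1:-true}"   # 'true' is a placeholder for a real test script
```

With something like this in place, each green/red light on the Nagios dashboard maps directly to one test script's pass/fail status.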
open them up and have a look :)
Way ahead of you! hehe!
I can see what the tests do, I am asking why are we doing these tests?
I'm assuming that this is done as part of the mechanism to test that certain packages are working as expected after install, once they have passed binary compatibility. (I would assume that if they are binary compatible, they will behave the same way as the RHEL packages.) If that is the case, then why are we testing them again? Which leads me to think my assumption is wrong - SO .... why are we doing these tests? :)
I use Nagios to run scripts and get results from environments, along with other deployment methods to create clean environments to do this automagically. Do the CentOS devs do something like this? (I know I inferred this earlier, bear with me.)
Something like this - the writing of the scripts, can that be opened up a bit? Maybe thrown open to the community? Get them cracking on the current release and see what comes back - it's just their time and effort; you just have to ask for it.
Should I be starting a new thread for this, by the way? I know how fickle people can be!
The CentOS Project is building a suite of automated tests that we are using to help us with QA after we get our tree ready.
This test suite will run on our tree after we get it moved to our QA servers.
This is being run by people other than Karanbir, Tru, and me ... so it is already "opened up".
As to how it is being run, that is up to the people who we have "opened it up to".