hi,
at the moment all tests are run by hand, and people look at the output on the CLI. Let's change that. Fabian and I did some work during FOSDEM this year to get an automation script in place.
This script does :
- set up some disk space
- run virt-install with a specific kickstart
- wait for the machine to come back up after the post-install boot
- get the test scripts onto the machine
- run each one and track progress
- tear down the machine instance
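To make the flow concrete, here is a minimal sketch of what such a driver could look like. The VM name, mirror URL, kickstart location and guest IP are illustrative assumptions, not the actual QA script:

#!/usr/bin/env python
# Hypothetical harness sketch: install a guest, wait for it, run tests, tear down.
import socket
import subprocess
import time

GUEST = "c5-qa-guest"         # illustrative VM name
GUEST_IP = "192.168.122.50"   # assumed static IP set by the kickstart

def wait_for_ssh(host, timeout=3600):
    # poll port 22 until the freshly installed guest is reachable
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            socket.create_connection((host, 22), 5).close()
            return True
        except socket.error:
            time.sleep(30)
    return False

# 1/2) set up disk space and drive the install with virt-install
subprocess.check_call([
    "virt-install", "--name", GUEST, "--ram", "512",
    "--disk", "size=8", "--noautoconsole", "--wait", "-1",
    "--location", "http://mirror.example.org/centos/5/os/i386",
    "--extra-args", "ks=http://mirror.example.org/ks/minimal.cfg"])

# 3) wait for the machine to come back after the post-install boot
if not wait_for_ssh(GUEST_IP):
    raise SystemExit("guest never came up")

# 4/5) push the test scripts over, run them and keep the output
subprocess.check_call(["scp", "-r", "tests/", "root@%s:" % GUEST_IP])
subprocess.check_call(["ssh", "root@%s" % GUEST_IP, "cd tests && sh runtests.sh"])

# 6) tear down the machine instance
subprocess.call(["virsh", "destroy", GUEST])
subprocess.call(["virsh", "undefine", GUEST])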
Now the question is : apart from test output, what else would be relevant to track? Here is my list:
- install time logs
- syscleanup[1] output from before tests are run and after
- 'rpm -qa' output
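As a side note, a small sketch of how those artifacts could be pulled off the guest after a run; the guest address is illustrative, and /root/install.log is where anaconda normally leaves the install log on CentOS 5:

import subprocess

GUEST_IP = "192.168.122.50"  # illustrative
artifacts = [("rpm-qa.txt", "rpm -qa | sort"),
             ("install.log", "cat /root/install.log"),
             ("syscleanup-post.txt", "sh /root/syscleanup.sh")]
for fname, cmd in artifacts:
    # run the command on the guest and keep the output locally
    out = open(fname, "w")
    subprocess.call(["ssh", "root@" + GUEST_IP, cmd], stdout=out)
    out.close()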
What else would be relevant here ?
Second part of the issue : What format would everyone like to see these things in ?
a) a simple webapp in sinatra.rb or bottle.py ?
b) as long as all output is in text, we could shovel test runs into a git repo ( keep in mind that we can end up generating gigs of data per day ).
c) something else ?
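For option (a), the webapp could stay very small. A hedged sketch with bottle.py, assuming each test run is dumped as plain text into a directory under runs/ (the layout is made up for illustration):

import os
from bottle import route, run

RUNS_DIR = "runs"  # one subdirectory per test run, illustrative layout

@route('/')
def index():
    # list the recorded runs, newest first
    runs = sorted(os.listdir(RUNS_DIR), reverse=True)
    return "<br>".join('<a href="/run/%s">%s</a>' % (r, r) for r in runs)

@route('/run/<name>')
def show(name):
    # dump the plain-text summary for one run
    path = os.path.join(RUNS_DIR, name, "summary.txt")
    return "<pre>%s</pre>" % open(path).read()

run(host='0.0.0.0', port=8080)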
- KB
[1]: https://gitorious.org/syscleanup/syscleanup/blobs/master/syscleanup.sh ( I'll pull this into the git repo for QA Testing, so stuff is all in the same place ).
Hi all,
Add a script to look at "storage management" as well.
For the automation scripts, Python with libvirt and the pexpect tools would be easy to work with :)
V!jay
On Wed, Apr 6, 2011 at 3:36 PM, Karanbir Singh mail-lists@karan.org wrote:
[... automated VM install and test tracking proposal ...]
On 04/06/2011 11:24 AM, Vijay N. Majagaonkar wrote:
Add a script to look at "storage management" as well.
For the automation scripts, Python with libvirt and the pexpect tools would be easy to work with :)
could you please :
1) not top post
2) make your comments inline with relevant quotes to retain sanity
3) comments like 'add script to get rich quick' are not helpful. What is more productive is : here is a script, it does 'blah' and it's good to have because 'foo'.
- KB
On Wed, Apr 6, 2011 at 4:00 PM, Karanbir Singh mail-lists@karan.org wrote:
On 04/06/2011 11:24 AM, Vijay N. Majagaonkar wrote:
Add a script to look at "storage management" as well.
For the automation scripts, Python with libvirt and the pexpect tools would be easy to work with :)
could you please :
1) not top post
OK
- make your comments inline with relevant quotes to retain sanity
My bad
- comments like 'add script to get rich quick' are not helpful. What is more productive is : here is a script, it does 'blah' and it's good to have because 'foo'.
I would love to contribute a script; the only problem is that I am completely new to this world and am not sure what the correct way to go is. ( I can take on the part about installing on a VM (KVM), running test cases and parsing the output. )
V!jay
Hi Karanbir,
2011/4/6 Karanbir Singh mail-lists@karan.org
[... automated VM install ...]
In principle a good idea; from my point of view it should be extended to something like BFO (http://boot.fedoraproject.org/) to make it possible to install on physical machines as well. So something like a USB boot stick that gets standard configurations based on kickstart files, does the installation, reports back to the console and a central QA server, and then does the next install on the list (the only question is how to make previous runs persistent, so installations won't be done twice unless the QA system requests a second install because of a failure).
[...]
What else would be relevant here ?
- lshw
- dmesg
- maybe bootchart
[...] b) as long as all output is in text, we could shovel test runs into a git repo ( keep in mind that we can end up generating gigs of data per day ).
Git is a good idea, but you need a wrapper around it that checks for successful runs and condenses them down to something like "profile xyz on hardware abc has no problems".
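Something along these lines could do the condensing before each commit. The summary-line format is taken from the real test output quoted later in this thread; the profile and hardware labels are illustrative:

import re
import sys

# matches e.g. "78 test cases complete. 73 passes, 5 fails and 0 exceptions."
SUMMARY = re.compile(r"(\d+) test cases complete\. (\d+) passes, (\d+) fails and (\d+) exceptions\.")

def condense(profile, hardware, logfile):
    for line in open(logfile):
        m = SUMMARY.search(line)
        if m:
            total, passed, fails, exc = map(int, m.groups())
            if fails == 0 and exc == 0:
                return "profile %s on hardware %s has no problems" % (profile, hardware)
            return "profile %s on hardware %s: %d of %d failed" % (profile, hardware, fails + exc, total)
    return "profile %s on hardware %s: no summary found" % (profile, hardware)

if __name__ == "__main__":
    print(condense(sys.argv[1], sys.argv[2], sys.argv[3]))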
c) something else ?
See above, a USB stick for real hardware and an ISO for virtual hardware will probably work best.
Regards, Thomas
Hi,
On 04/06/2011 01:01 PM, Thomas Bendler wrote:
[... automated VM install ...]
In principle a good idea; from my point of view it should be extended to
It works in practice as well :) ( over 700 automated installs during the centos-5.6 test phase would say so .. )
something like BFO (http://boot.fedoraproject.org/) to make it possible to install on physical machines as well. So something like a USB boot stick that gets standard configurations based on kickstart files, does the installation, reports back to the console and a central QA server, and then does the next install on the list (the only question is how to make previous runs persistent, so installations won't be done twice unless the QA system requests a second install because of a failure).
It would be good to get boot.kernel.org functionality included, but afaik that needs httpfs etc. built into the kernel; so we might only be able to achieve that with the centosplus kernel. Would you be happy to take on the task of looking into what's involved and how feasible that might be ?
The idea of getting real iron into the QA testing loop is very interesting and perhaps even essential. Could a cobbler-type setup, able to make IPMI calls out to real iron, be a good substitute ?
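For what it's worth, the bare-metal side of such a loop can be very small: point each box at PXE and power-cycle it with ipmitool. A sketch under the assumption that the BMC addresses and credentials come from a cobbler-style inventory (the values below are placeholders):

import subprocess

# placeholder BMC details; a cobbler-type setup would supply these from its inventory
BMCS = [("qa-node1-ipmi.example.org", "admin", "secret")]

for host, user, password in BMCS:
    base = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password]
    # boot from the network on the next boot, then power-cycle into the PXE installer
    subprocess.check_call(base + ["chassis", "bootdev", "pxe"])
    subprocess.check_call(base + ["chassis", "power", "cycle"])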
- lshw
there is no such thing on CentOS
- dmesg
added.
- maybe bootchart
not sure about this. Maybe bootchart makes for a good test case in itself ?
Git is a good idea, but you need a wrapper around it that checks for successful runs and condenses them down to something like "profile xyz on hardware abc has no problems".
the test-run script does that already, so it will output something like this ( taken from a real test ):
78 test cases complete. 73 passes, 5 fails and 0 exceptions.
See above, a USB stick for real hardware and an ISO for virtual hardware will probably work best.
Ideally, we should be testing every install type ( so from boot.iso, over pxe + ftp, pxe + http, pxe + images, cd1, dvd, diskimage etc ). At the moment though it's only using virt-install ( so pxe images ) into Xen hosts with installmethod = http.
I guess it would be good if someone wants to propose automation for the various things. Is it even possible to emulate a usb disk somewhere ( kind of like being able to use an iso image as a cd-rom in xen/kvm etc ) ?
- KB
On 04/06/2011 11:06 AM, Karanbir Singh wrote:
at the moment all tests are run by hand, and people look at the output on the CLI. Let's change that. Fabian and I did some work during FOSDEM this year to get an automation script in place.
Must say I'm disappointed by the level of feedback here. Given all the calls from all the people saying they wanted more info, when there is an option to get involved and help create that info pool, it's total silence.
If it's because I wasn't clear enough, or what I said didn't make any sense, please do ask and I'll try again.
Alternatively, would it help if I just put something up ? Would that make it easier for everyone to get their heads around what the intent is ?
- KB
On 04/07/2011 11:32 AM, Karanbir Singh wrote:
[...]
I'll try to implement a temporary solution (PS: I'm not a 'real' developer). After writing some kickstarts and tests, I figured out that such a solution is necessary (testing would otherwise be painful work for the CentOS QA team).
I've just begun to understand how the CentOS QA process works, after reading [1] and [2].
[1] http://lists.centos.org/pipermail/centos-devel/2011-April/007321.html
[2] http://lists.centos.org/pipermail/centos-devel/2011-April/007335.html
2011/4/7 Karanbir Singh mail-lists@karan.org
[...] Must say I'm disappointed by the level of feedback here. Given all the calls from all the people saying they wanted more info, when there is an option to get involved and help create that info pool, it's total silence.
What do you expect, BFO adapted for CentOS in one day, completely tested and ready to roll out? Sorry, but this needs a bit more work and coordination (and I guess most people here have regular jobs which, in the end, have more priority).
[...] Alternatively, would it help if I just put something up ? Would that make it easier for everyone to get their heads around what the intent is ?
I think (my personal opinion) that the first thing that should be done is a kind of architecture overview, to see what the goal of this project is. Based on this, tasks need to be defined and names need to be found to execute the tasks. In other words, this needs to be coordinated in a proper project way to get people involved and doing stuff. It's not helpful to say we need something but leave the community alone with the realization, and then be disappointed if the thing won't be solved in a short time frame. But again, this is my personal opinion; I'm just convinced that a project like this needs proper project management when more than two people contribute developments, if the result is to be valuable.
Regards, Thomas
On 04/07/2011 01:21 PM, Thomas Bendler wrote:
What do you expect, BFO adapted for CentOS in one day, completely tested and ready to roll out? Sorry, but this needs a bit more work and coordination (and I guess most people here have regular jobs which, in the end, have more priority).
I too have a day job. And I'm also not looking for anything to be developed or implemented yet. At the moment, it's just a case of asking people for enough feedback so we can start with the design stages. Implementing stuff comes a fair bit down the road.
I think (my personal opinion) that the first thing that should be done is a kind of architecture overview, to see what the goal of this project is. Based on this, tasks need to be defined and names need to be [...]
Right, and therefore I am looking for people's feedback on what they would expect to see in such a tool. Or are you suggesting that you are not clear about why we are doing tests ?
- KB
On 04/07/2011 01:39 PM, Karanbir Singh wrote:
[...]
Hi Karanbir,
I'm thinking of implementing a small script (python/virt-install etc ...) without any external tool (BFO, cobbler).
I have some questions:
1. Do we need to get the kickstarts and tests from the git repo ?
2. Are there any restrictions in the QA infrastructure, like setting up a central server/VM for hosting the CentOS repo, the kickstarts and the test scripts ?
I need this to prepare a similar environment for testing.
Regards.
On 04/07/2011 02:06 PM, Athmane Madjoudj wrote:
I need this to prepare a similar environment for testing.
Ok, let's get a git repo together with whatever is needed to put together a harness that people can deploy locally to help them with this process. A fair bit of stuff is there already; let's see how much of that is usable outside the centos-qa environment.
- KB
On 04/07/2011 05:28 PM, Karanbir Singh wrote:
[...]
Hi Karanbir
Here are my thoughts on what I'm trying to implement; let me know if it's OK so I can start now (in my spare time):
- Set up a CentOS-based web server $SERVER for hosting the centos yum repo, kernel+initrd, kickstarts and test scripts.
- Write a python script that:
1. runs virt-install with -l http://$SERVER/centos -x http://$SERVER/kickstarts/ks --nographics (serial console)
2. connects to the system (serial console), downloads the tests and runs them (using the 'pexpect' module); the output should be text only.
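A rough sketch of step 2 with pexpect, driving the guest over the serial console that --nographics gives you. The guest name, credentials and paths are assumptions, and the kickstart would need console=ttyS0 on the kernel line for anything to show up on the serial console:

import pexpect

SERVER = "192.168.122.1"  # the web server from step 1, illustrative

# attach to the guest's serial console via libvirt
child = pexpect.spawn("virsh console centos-qa-guest")
child.expect("login:", timeout=600)   # wait for the freshly installed system
child.sendline("root")
child.expect("Password:")
child.sendline("qa-test-password")    # assumed kickstart-set root password
child.expect("#")
# fetch the tests and run them, capturing plain-text output
child.sendline("wget -q http://%s/tests.tar.gz && tar xzf tests.tar.gz" % SERVER)
child.expect("#", timeout=300)
child.sendline("cd tests && sh runtests.sh")
child.expect("#", timeout=3600)
print(child.before)                   # the collected test output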
Regards.
On 4/7/11 7:55 PM, Athmane Madjoudj wrote:
[... Athmane's proposed virt-install + pexpect harness ...]
Is this only going to handle one test instance on one machine at a time, or should it be able to deal with a bunch of concurrent changes across a group of test machines? For the latter, you need a way to collate the responses and tie them back to the source of the change each runs against.

Something like jenkins (http://jenkins-ci.org/) might be a good place to start, and it already has plugins for most build/test related things you might want to do (https://wiki.jenkins-ci.org/display/JENKINS/Plugins). It does expect a JVM to be working on the slaves (but is very platform-agnostic otherwise), so the bare-metal installs would need to run the script driving the job on a controlled slave, with the real action happening in a VM or another target.

You still need to write the specific tests, but the jenkins framework can automate the work of distributing them to the targets, triggered by changes, and collecting the results for central viewing.
On 4/7/2011 7:39 AM, Karanbir Singh wrote:
[...]
It's not clear what the test should show. Also, I haven't seen anything that would help to prioritize such work. That is, where is time currently being consumed, and how can the tests be designed to predict how to solve the problems they detect in a way that the results help move the project forward?
On Thu, 7 Apr 2011, Les Mikesell wrote:
[...]
It's not clear what the test should show. Also, I haven't seen anything that would help to prioritize such work. That is, where is time currently being consumed, and how can the tests be designed to predict how to solve the problems they detect in a way that the results help move the project forward?
This also gives the impression that the reason releases are delayed for so long is a lack of QA, while it often takes weeks for anything to be submitted to QA. So even if the QA could be improved (which no doubt it can), it is reasonable to assume we could also make 'going to QA' faster by opening up the process, by feeding QA faster with new updates, and by opening up the QA process so more people can actually help test the intermediate builds and we find problems faster.
It seems to me a set of kickstart files is only going to achieve a small improvement to the whole process (mostly testing anaconda), and does little about the causes of the delays during the 85 days CentOS 5.6 was in development.
(The above is based on my experience with previous QA processes and what I have heard from people involved in the CentOS 5.6 QA process.)
It would be nice to have metrics that indicate when the first build was ready for QA, where things got blocked and how to unblock those in the future. But that requires a transparent/open process to even discuss this here.
On 04/07/2011 04:14 PM, Les Mikesell wrote:
It's not clear what the test should show.
That is quite strange. When one tests for functionality, the assumption is that the test would go through the mechanics needed to prove that the functionality exists. Either as a conditional 'OK', or with a 'Must Fail' counting as OK.
So I am not sure what you are looking at.
- KB
On 4/7/2011 11:27 AM, Karanbir Singh wrote:
On 04/07/2011 04:14 PM, Les Mikesell wrote:
It's not clear what the test should show.
That is quite strange. When one tests for functionality, the assumption is that the test would go through the mechanics needed to prove that the functionality exists. Either as a conditional 'OK', or with a 'Must Fail' counting as OK.
So I am not sure what you are looking at.
What sort of test would SL binaries fail that would keep them from being accepted in CentOS? Isn't that the kind of problem that has to be solved to speed up a release?
On Thu, Apr 7, 2011 at 11:35 AM, Les Mikesell lesmikesell@gmail.com wrote:
[...]
What sort of test would SL binaries fail that would keep them from being accepted in CentOS? Isn't that the kind of problem that has to be solved to speed up a release?
Umm, not applicable. And, no.
jerry
On 04/07/2011 05:35 PM, Les Mikesell wrote:
What sort of test would SL binaries fail that would keep them from being accepted in CentOS? Isn't that the kind of problem that has to be solved to speed up a release?
No, the two things are orthogonal. Assurance that a package does what it says it's going to do is something that needs to be addressed - not directly connected to what the input to the testing harness is or what comes out from the other end.
- KB
On Thu, 7 Apr 2011, Karanbir Singh wrote:
On 04/07/2011 05:35 PM, Les Mikesell wrote:
What sort of test would SL binaries fail that would keep them from being accepted in CentOS? Isn't that the kind of problem that has to be solved to speed up a release?
No, the two things are orthogonal. Assurance that a package does what it says it's going to do is something that needs to be addressed
I don't think so. I think that is upstream's problem. We need assurance that what is produced is functionally the same as what upstream produces (except for the very few areas where CentOS functionality differs).
- not directly connected to what the input to the testing harness is or what comes out from the other end.
On 4/8/2011 4:25 PM, Charlie Brady wrote:
[...]
I don't think so. I think that is upstream's problem. We need assurance that what is produced is functionally the same as what upstream produces (except for the very few areas where CentOS functionality differs).
And more relevant to my question - I thought that the time-consuming part of the process had to do with the binary compatibility checking and the iterations needed to make that succeed.
Not that there is anything wrong with a fully automated and publicly visible build/regression test facility, but if that's not the slow part, how much can it help?
On 7.4.2011 14:21, Thomas Bendler wrote:
What do you expect, BFO adapted for CentOS in one day, completely tested and ready to roll out? Sorry, but this needs a bit more work and coordination (and I guess most people here have regular jobs which, in the end, have more priority).
Well, I will add this. Karanbir, you can't expect to find a lot of people to contribute testing scripts. Firstly, this is something that not many companies/people are doing; they are just deploying. And secondly, you can't expect positive feedback and a mailbox full of scripts after many weeks of "if you don't like it, go away".
[... this needs to be coordinated in a proper project way ...]
Thomas is very right: if you want the results, you must take it seriously and run it as a project. It really seems to me like the "go and search for RH trademarks" issue again.
DH
Hi David,
On 04/07/2011 02:02 PM, David Hrbáč wrote:
I will add this. Karanbir, you can't expect to find a lot of people to contribute testing scripts. Firstly, this is something that not many companies/people are doing; they are just deploying. And secondly, you can't expect positive feedback and a mailbox full of scripts after many weeks of "if you don't like it, go away".
ok, I take your point on board. However, at this stage it's not a case of test scripts; this is mostly orthogonal.
Thomas is very right: if you want the results, you must take it seriously and run it as a project. It really seems to me like the "go and search for RH trademarks" issue again.
That is mostly what I am trying to do here. But based on these comments, I feel it would be easier to buy into once there is something people can see. Let me go do that and come back.
In the meantime : the test scripts are very welcome, keep them coming.
- KB
On Thursday 07 April 2011 16:07:33 Karanbir Singh wrote:
[...]
I want to propose something here. While I'm setting up my servers with kickstarts, after the kickstart finishes I have a very simple collection of scripts that test if everything is fine.
So, armed with that experience, I propose that we build a system that:
1. has a big collection of kickstart files
2. can provide a different kickstart file for every request
3. will know which ks is sent to which client
4. receives status reports from each server
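A hedged sketch of how small points 1-4 could start out, using bottle.py (which was already floated earlier in the thread); the file layout and report handling are made up for illustration:

import glob
import itertools
from bottle import request, route, run

# 1. the big collection of kickstart files
KICKSTARTS = itertools.cycle(sorted(glob.glob("kickstarts/*.ks")))
HANDED_OUT = {}  # 3. remember which ks went to which client

@route('/ks')
def next_kickstart():
    # 2. a different kickstart file per request, round-robin
    ks = next(KICKSTARTS)
    HANDED_OUT[request.remote_addr] = ks
    return open(ks).read()

@route('/report', method='POST')
def report():
    # 4. status reports from each installed server
    client = request.remote_addr
    open("reports/%s.txt" % client, "ab").write(request.body.read())
    return "recorded report for %s\n" % HANDED_OUT.get(client, "unknown ks")

run(host='0.0.0.0', port=8080)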
On the servers, after the install, in the %post part of the ks we can have a simple download of a test "framework" that will handle the testing.
For example, one directory with scripts:
tests/0000_apache.sh
tests/0001_mysql.sh
tests/0002_ftp.py
......
And then you only need to run the tests:

for i in $(find tests -type f -perm 700 | sort); do
    if "$i"; then
        # test passed: drop the execute bit so it is skipped on the next run
        chmod 600 "$i"
    fi
done
This works quite well for me, and it is very easy to add more tests. We can have different collections of scripts for different ks installs.
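As an illustration of that convention (exit status decides pass or fail), a tests/0002_ftp.py along these lines would fit right in; checking localhost's port 21 is an assumption about what the ftp test should do:

#!/usr/bin/env python
# minimal test in the convention above: exit 0 on pass, non-zero on fail
import sys
from ftplib import FTP

try:
    ftp = FTP()
    ftp.connect("localhost", 21, timeout=10)
    ftp.quit()
except Exception as e:
    print("FTP check failed: %s" % e)
    sys.exit(1)

print("FTP service answers on port 21")
sys.exit(0)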
I would love to help with something like this.
Best regards, Marian Marinov