On Thu, Oct 21, 2010 at 3:56 PM, Grant McWilliams <grantmasterflash@gmail.com> wrote:
On Thu, Oct 21, 2010 at 12:50 PM, Grant McWilliams <grantmasterflash@gmail.com> wrote:
On Thu, Oct 21, 2010 at 3:29 AM, Karanbir Singh <mail-lists@karan.org> wrote:
On 10/21/2010 12:01 AM, Grant McWilliams wrote:
So what we're on the verge of doing here is creating a test set... I'd love to see a shell script that runs a bunch of tests, gathers data about the system, and then creates an archive that gets uploaded to a website which generates graphs. Dreaming, maybe, but it would be consistent. So what goes in our test set?
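Something along these lines, as a rough sketch of the "run tests, gather system data, archive it" idea. The file names and the benchmark hook are placeholders, not a settled layout:

```shell
#!/bin/sh
# Sketch: collect basic system info plus benchmark output into one
# directory, then tar it up for upload to the graphing site.
OUT=$(mktemp -d)

# Gather data about the system (extend as needed).
uname -a          > "$OUT/uname.txt"
cat /proc/cpuinfo > "$OUT/cpuinfo.txt" 2>/dev/null || true
free -m           > "$OUT/memory.txt"  2>/dev/null || true

# ... actual benchmark runs would drop their results into $OUT here ...

# Create the archive that would be uploaded.
tar -czf "$OUT.tar.gz" -C "$(dirname "$OUT")" "$(basename "$OUT")"
echo "archive ready: $OUT.tar.gz"
```

The upload step (curl/scp to the website) is left out since we haven't settled on where results should land.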
I am trying to create just that - a kickstart that will build a machine as a Xen dom0, build 4 domUs, fire up Puppet inside the domUs to do the testing, and scp the results into a central git repo. Then something similar for KVM.
will get the basic framework online today.
- KB
Do you suppose you could get it to use Phoronix Test Suite so we can start to have measurable stats? We could do the same thing for any VM software - even the ones that don't allow publishing stats in the EULA...
I'm also wondering if we should do the whole test suite or just a subset. Here is the list of tests...
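For a subset, driving Phoronix non-interactively might look like the sketch below. The two test profile names are just examples, not a vetted subset, and DRYRUN=echo makes it print the commands instead of running them (clear DRYRUN to actually execute):

```shell
#!/bin/sh
# Dry-run sketch of driving a Phoronix Test Suite subset in batch mode.
# Profile names are examples only; the real subset is still to be decided.
DRYRUN=echo                      # set to empty string to really run
TESTS="pts/compress-gzip pts/ramspeed"

$DRYRUN phoronix-test-suite batch-setup   # one-time batch-mode configuration
for t in $TESTS; do
    $DRYRUN phoronix-test-suite batch-benchmark "$t"
done
```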
One thing that I think probably needs to be modified for our needs is a Dom0 controller to run various tests in each DomU simultaneously and then collate the data. Virtual worlds are more complex than non-virtual ones: sometimes something runs great in one VM but drags when multiple VMs are being used.
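The controller could be as simple as backgrounded jobs plus wait. This is a toy sketch - the DomU names are made up, and the `sh -c "echo ..."` placeholder stands in for whatever actually kicks off the benchmark in each guest (ssh, xm console scripting, Puppet, etc.):

```shell
#!/bin/sh
# Toy Dom0 controller sketch: launch the same workload in every DomU at
# the same time, then collate the per-guest results afterwards.
DOMUS="domU1 domU2 domU3 domU4"   # placeholder guest names
RESULTS=$(mktemp -d)

for d in $DOMUS; do
    # Placeholder for the real per-guest run, e.g.
    # ssh root@$d 'phoronix-test-suite batch-benchmark ...'
    sh -c "echo result-from-$d" > "$RESULTS/$d.log" &
done
wait    # block until every guest's run has finished

# Collate: one combined report, tagged per guest.
for d in $DOMUS; do
    printf '%s: %s\n' "$d" "$(cat "$RESULTS/$d.log")"
done
```

Running the guests concurrently like this is exactly what exposes the "great in one VM, drags with many" behavior - sequential runs would hide it.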
I was also going to mention that we should look at scalability and performance isolation. Some references and previous studies here:
http://todddeshane.net/research/Xen_versus_KVM_20080623.pdf
http://clarkson.edu/~jnm/publications/isolation_ExpCS_FINALSUBMISSION.pdf
http://clarkson.edu/~jnm/publications/freenix04-clark.pdf
Also, is there anybody that has access to or would be able to get access to run SPECvirt? http://www.spec.org/virt_sc2010/
Thanks, Todd