[CentOS-virt] performance differences between kvm/xen

Wed Oct 20 23:45:38 UTC 2010
Kelvin Edmison <kelvin at kindsight.net>

On 20/10/10 7:01 PM, "Grant McWilliams" <grantmasterflash at gmail.com> wrote:

> On Wed, Oct 20, 2010 at 6:24 AM, Tom Bishop <bishoptf at gmail.com> wrote:
>> OK, so I'd like to help. Since most folks have Intel chipsets, I have an AMD
>> 4p (16-core)/32 GB Opteron server that we can use to get some numbers.
>> It would be nice if we could compare apples to apples, though. I have
>> iozone loaded and can run it, but we should all use the same parameters.
>> Could we list the tests we'd like to run, with the actual commands and
>> options, so we have something to compare and at least a level playing
>> field? KB, any thoughts? Is this a good idea?
>> On Wed, Oct 20, 2010 at 6:52 AM, Karanbir Singh <mail-lists at karan.org> wrote:
>>> On 10/20/2010 12:35 PM, Dennis Jacobfeuerborn wrote:
>>>> Being skeptical is the best approach in the absence of
>>>> verifiable/falsifiable data. Today or tomorrow I'll get my hands on a new
>>>> host system and although it is supposed to go into production immediately I
>>>> will probably find some time to do some rudimentary benchmarking in that
>>>> regard to see if this is worth investigating further. Right now I'm
>>> That sounds great. I've got a machine coming online in the next few days
>>> as well and will do some testing there. It's got two of these:
>>> Intel(R) Xeon(R) CPU E5310
>>> So not the newest/greatest, but should be fairly representative.
>>>> planning to use fio for block device measurements but don't know any decent
>>>> (and uncomplicated) network i/o benchmarking tools. Any ideas what tools I
>>>> could use to quickly get some useful data on this from the machine?
>>> iozone and openssl speed tests are always good to run as a 'warm-up'
>>> before your app-level testing. Since pgtest has already been posted
>>> here, I'd say that is definitely one thing to include, so it creates a
>>> level of common-code testing and comparison. mysql-bench is worth
>>> hitting as well. I have a personal interest in web app delivery, so an
>>> apache-bench run hosted from an external machine hitting domU's / VMs
>>> (more than one instance, and hitting more than one VM / domU at the
>>> same time) would be good to have too.
>>> And yes, publish lots of machine details as well as the code /
>>> platform / versions used. I will try to do the same (but will limit my
>>> testing to what's already available in the distro).
>>> thanks
>>> - KB
> So what we're on the verge of doing here is creating a test set... I'd love to
> see a shell script that runs a bunch of tests, gathers data about the system,
> and then creates an archive to upload to a website that generates graphs.
> Maybe I'm dreaming, but it would be consistent. So what goes in our test set?
> Just a generic list; add to it or take away from it:
> * phoronix test suite ?
> * 
> * iozone
> * kernbench
> * dbench
> * bonnie++
> * iperf
> * nbench
> The Phoronix Test Suite includes most of these tests, plus many, many
> others. Maybe a subset of its tests, chosen with virtualization in mind,
> would be good?
> Grant McWilliams
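Grant's wrapper could start as small as this sketch (the tool list, command options, and archive name are all illustrative, nothing agreed yet):

```shell
#!/bin/sh
# Sketch of the wrapper Grant describes: record machine details, run
# whatever benchmarks are installed, and bundle it all into one archive.
set -u
OUT=$(mktemp -d)

# capture system details so results can be compared across machines
uname -a            > "$OUT/uname.txt"
cat /proc/cpuinfo   > "$OUT/cpuinfo.txt" 2>/dev/null || true
cat /proc/meminfo   > "$OUT/meminfo.txt" 2>/dev/null || true

# one candidate command line per tool; skip tools that are not installed
while read -r cmd; do
    tool=${cmd%% *}
    if command -v "$tool" >/dev/null 2>&1; then
        sh -c "$cmd" > "$OUT/$tool.log" 2>&1 || true
    fi
done <<'EOF'
iozone -a -i 0 -i 1 -g 1g
dbench -t 60 4
nbench
EOF

# single archive, ready to upload to whatever site draws the graphs
tar -czf benchmark-results.tar.gz -C "$OUT" .
echo "results bundled in benchmark-results.tar.gz"
```

Uploading the archive and graphing it server-side is the part that would still need building; the collection side is mostly just plumbing like the above.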

+1 for the Phoronix test suite.  I was going to suggest it too.

It can publish stats to a central server that the Phoronix folks maintain,
and it records the details of the server on which the test was performed.
I'm not sure it's smart enough to detect a VM, though.  My experience with it
has been limited so far, but generally positive.

This isn't my data, but I think it's a good example of how pts can be used
to compare results from different tests and scenarios.
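For anyone who hasn't tried it, a typical session is short. The subcommands below are real, though the pts/iozone profile name is just an example pick; `benchmark` offers to upload results to their central server when it finishes:

```shell
#!/bin/sh
# Minimal Phoronix Test Suite session; prints the commands instead of
# running them when the suite is not installed, since a real run is long.
PTS="phoronix-test-suite"
if command -v "$PTS" >/dev/null 2>&1; then
    "$PTS" system-info            # dump the hardware/software inventory
    "$PTS" benchmark pts/iozone   # install + run one profile, offer upload
else
    echo "$PTS not installed; would run: $PTS system-info, then $PTS benchmark pts/iozone"
fi
```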