[CentOS-virt] awful i/o performance on xen paravirtualized guest

Fri Aug 20 22:20:56 UTC 2010
Fernando Gleiser <fergleiser at yahoo.com>

Hi. I'm testing a CentOS 5.4 Xen PV guest on top of a CentOS 5.4 host.

For some reason, the disk performance from the guest is awful. When I do an 
import, the I/O is fine for a while, then disk utilization climbs to 100% and stays 
there most of the time.
At first I thought it was because I was using file-backed disks, so I deleted 
those and changed to LVM, but the situation didn't improve.
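
To see where the time is going I've been watching the disks with iostat from both 
the domU and the dom0. The exact flags below are from memory, but it was roughly 
this (extended per-device stats in kB, one-second samples):

    iostat -xk 1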

Here's an iostat output from within the DomU:
Device:         rrqm/s   wrqm/s   r/s     w/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvda              0.00     0.00  0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
xvda1             0.00     0.00  0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
xvda2             0.00     0.00  0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
xvdb              0.00     0.00  0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
xvdc              0.00     0.00  0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
xvdd              0.00   271.00  0.00  179.00     0.00  1800.00     20.11      1.99   11.11   5.59 100.00
dm-0              0.00     0.00  0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
dm-1              0.00     0.00  0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
dm-2              0.00     0.00  0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
dm-3              0.00     0.00  0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
dm-4              0.00     0.00  0.00  450.00     0.00  1800.00      8.00      4.93   10.93   2.22 100.00


the service time is a bit high but the 

And here's the same from within the dom0:

Device:         rrqm/s   wrqm/s   r/s     w/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
cciss/c0d0        0.00     0.00  0.00  169.00     0.00  1640.00     19.41      2.00   11.74   5.94 100.40
cciss/c0d0p1      0.00     0.00  0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
cciss/c0d0p2      0.00     0.00  0.00  169.00     0.00  1640.00     19.41      2.00   11.74   5.94 100.40
dm-0              0.00     0.00  0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
dm-1              0.00     0.00  0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
dm-2              0.00     0.00  0.00   87.00     0.00   768.00     17.66      1.00   11.45  11.49 100.00
dm-3              0.00     0.00  0.00    0.00     0.00     0.00      0.00      0.00    0.00   0.00   0.00
dm-4              0.00     0.00  0.00   82.00     0.00   856.00     20.88      1.00   12.05  12.24 100.40

A DB import takes more than ten times longer than on the bare-metal server, even though 
the server I'm planning to virtualize is 4 years old and the guest runs on a brand-new 
HP DL380. On the old server the import takes 4 hours; on the new one it takes 2 days. 
That's not surprising given that the disk shows a peak throughput below 2 MB/s.
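
For what it's worth, a quick check I can run to compare raw sequential read throughput 
through the whole stack, bypassing the page cache, is a direct-I/O dd in both the dom0 
and the domU. The device paths are just the ones from my setup, the block size and count 
are arbitrary, and the import itself is mostly small writes, so this is only a baseline:

# in the dom0, straight off the logical volume
dd if=/dev/VolGroup00/dbweb_oradata2 of=/dev/null bs=1M count=1024 iflag=direct

# in the domU, through the xvdd frontend
dd if=/dev/xvdd of=/dev/null bs=1M count=1024 iflag=direct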


Here's the VM config file:


[root@xen2 xen]# cat vm-dbweb
name = "vm-dbweb"
uuid = "8560e33a-865e-cca5-725d-817de4972422"
maxmem = 7168
memory = 7168
bootloader="/usr/bin/pygrub"
vcpus = 2
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
disk = [ "tap:aio:/var/lib/xen/images/vm-artweb.img,xvda,w", \
"tap:aio:/var/lib/xen/images/dbweb_home.img,xvdb,w", \
"phy:/dev/VolGroup00/dbweb_oradata,xvdc,w", \
"phy:/dev/VolGroup00/dbweb_oradata2,xvdd,w" ]
vif = [ "mac=00:16:36:5a:4d:a1,bridge=xenbr0,script=vif-bridge" ]
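
The root and home disks (xvda/xvdb) are still image-backed behind tap:aio. In case it 
matters, the rough plan for moving them onto LVM and the phy: backend like the oradata 
volumes would be something like this (volume names and sizes below are just placeholders, 
with the guest shut down first):

lvcreate -L 20G -n dbweb_root VolGroup00
lvcreate -L 50G -n dbweb_home VolGroup00
dd if=/var/lib/xen/images/vm-artweb.img of=/dev/VolGroup00/dbweb_root bs=1M
dd if=/var/lib/xen/images/dbweb_home.img of=/dev/VolGroup00/dbweb_home bs=1M

and then the disk lines would become:

disk = [ "phy:/dev/VolGroup00/dbweb_root,xvda,w", \
"phy:/dev/VolGroup00/dbweb_home,xvdb,w", \
"phy:/dev/VolGroup00/dbweb_oradata,xvdc,w", \
"phy:/dev/VolGroup00/dbweb_oradata2,xvdd,w" ]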


I'm pretty sure there is a way to get decent disk performance from a domU, and I 
must be screwing something up somewhere; I just can't find where :/

Any help/pointers/tips for disk tuning would be greatly appreciated.


Fer