Just wondering if there was a howto or other URL that explains what is needed to achieve "near native" performance on a xen domU -- for this purpose, I am thinking about a single domU running on a physical server, in comparison to that same physical server running the same kernel but non-xenified.
For instance, using a physical partition for VBD v. using a file-backed one is one of the more obvious ones. Assigning all the VCPUs. And as much RAM as you can get away with (maybe leaving the dom0 with 512MB).
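(For reference, the distinction I mean is roughly this in the domU config file; the device names and paths are just placeholders:

# VBD backed by a physical partition or LVM volume
disk = [ 'phy:/dev/vg0/domU1,xvda,w' ]

# versus a file-backed (loopback) VBD
disk = [ 'file:/var/lib/xen/images/domU1.img,xvda,w' ]
)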
But are there others? Since I'm doing paravirtualization, I assume I don't need to turn on VT in the BIOS? What about 32-bit v. 64-bit OS, for the dom0 and for the domU? (I'll be using CentOS-5.) Anything else?
johnn
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Johnn Tan Sent: Monday, August 06, 2007 2:56 PM To: CentOS mailing list Subject: [CentOS] "near native" performance with xen?
Just wondering if there was a howto or other URL that explains what is needed to achieve "near native" performance on a xen domU -- for this purpose, I am thinking about a single domU running on a physical server, in comparison to that same physical server running the same kernel but non-xenified.
For instance, using a physical partition for VBD v. using a file-backed one is one of the more obvious ones. Assigning all the VCPUs. And as much RAM as you can get away with (maybe leaving the dom0 with 512MB).
But are there others? Since I'm doing paravirtualization, I assume I don't need to turn on VT in the BIOS? What about 32-bit v. 64-bit OS, for the dom0 and for the domU? (I'll be using CentOS-5.) Anything else?
You don't need to allocate all the CPUs if the app running in the domU doesn't need that horsepower; the same goes for the memory.
What you want to concentrate on is storage and networking.
Of course you need physical volumes, but look for a storage solution that also runs as close to direct-to-disk as possible, maybe something based on the newer SATA/SAS technology where the RAID logic is built into the enclosure and a plain SATA/SAS card is in the server, which can then be actively shared between multiple domUs.
If that is out of your budget then use some type of hardware RAID for your volumes to get rid of any use of software RAID in dom0 for guest storage.
Same goes for the network side of things. Install the latest PV drivers in the domUs to get the latest advances.
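For a PV guest that usually just means the standard netfront interface, configured in the domU config with something like the following (xenbr0 is only the CentOS default bridge name, yours may differ):

# paravirtualized network interface attached to the default bridge
vif = [ 'bridge=xenbr0' ]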
-Ross
Ross S. W. Walker wrote:
Of course you need physical volumes, but look for a storage solution that also runs as close to direct-to-disk as possible, maybe something based on the newer SATA/SAS technology where the RAID logic is built into the enclosure and a plain SATA/SAS card is in the server, which can then be actively shared between multiple domUs.
If that is out of your budget then use some type of hardware RAID for your volumes to get rid of any use of software RAID in dom0 for guest storage.
I'm not sure what the distinction is between your first paragraph and the second. When I read the first paragraph, I thought: "hardware RAID" but it seems like you're referring to something else.
Same goes for the network side of things. Install the latest PV drivers in the domUs to get the latest advances.
I didn't realize these existed. I just used whatever was pulled down when I installed xen/kernel-xen via yum. Where can I get these PV drivers?
Big thanks, Ross. This was extremely helpful!
johnn
Johnn Tan spake the following on 8/6/2007 12:51 PM:
Ross S. W. Walker wrote:
Of course you need physical volumes, but look for a storage solution that also runs as close to direct-to-disk as possible, maybe something based on the newer SATA/SAS technology where the RAID logic is built into the enclosure and a plain SATA/SAS card is in the server, which can then be actively shared between multiple domUs.
If that is out of your budget then use some type of hardware RAID for your volumes to get rid of any use of software RAID in dom0 for guest storage.
I'm not sure what the distinction is between your first paragraph and the second. When I read the first paragraph, I thought: "hardware RAID" but it seems like you're referring to something else.
In the first paragraph I think he was referring to an external attached storage device with the RAID logic built into it, presenting a SCSI or SATA connection to the outside. In the second paragraph I believe he was referring to a RAID card and some drives internal to the server.
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Johnn Tan Sent: Monday, August 06, 2007 3:52 PM To: CentOS mailing list Subject: Re: [CentOS] "near native" performance with xen?
Ross S. W. Walker wrote:
Of course you need physical volumes, but look for a storage solution that also runs as close to direct-to-disk as possible, maybe something based on the newer SATA/SAS technology where the RAID logic is built into the enclosure and a plain SATA/SAS card is in the server, which can then be actively shared between multiple domUs.
If that is out of your budget then use some type of hardware RAID for your volumes to get rid of any use of software RAID in dom0 for guest storage.
I'm not sure what the distinction is between your first paragraph and the second. When I read the first paragraph, I thought: "hardware RAID" but it seems like you're referring to something else.
The distinction is small, but the costs are great.
In the 1st one, the RAID technology is in the enclosure, where you can have multiple initiators (even from the same host) directly access the logical disks in the arrays; you can also have multiple paths to those logical disks. PV'ing these drivers would be a lot simpler than if the RAID technology were in the controller card itself. This isn't iSCSI, but the newer SAS/SATA technology. The controller card is simpler and can be abstracted in much the same way a network adapter can be.
The 2nd is what you are familiar with: a RAID controller hooked into a JBOD enclosure.
Same goes for the network side of things. Install the latest PV drivers in the domUs to get the latest advances.
I didn't realize these existed. I just used whatever was pulled down when I installed xen/kernel-xen via yum. Where can I get these PV drivers?
My mistake, I was thinking of HVM guests.
If you want the latest for HVM I think Novell sells them, or you can try the 5.1 versions in testing.
-Ross
On Mon, 2007-08-06 at 14:55 -0400, Johnn Tan wrote:
Assigning all the VCPUs.
Having more than one vcpu currently emits non-fatal error messages, at least with the C5 domU kernel. I haven't had time to look into that yet, though it's probably useful for threaded applications if it works. Note that you can always specify which CPUs can be used by a domU (as seen from the hypervisor), regardless of the number of vcpus. E.g.:
cpus="0-1" vcpus=1
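A slightly fuller sketch (the domain name and CPU numbers here are just examples): in the domU config you could use

vcpus = 2
cpus = "2-3"

and, if I recall the syntax correctly, you can also re-pin a running guest's vcpus from dom0 with something like:

xm vcpu-pin mydomU 0 2
xm vcpu-pin mydomU 1 3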
And as much RAM as you can get away with (maybe leaving the dom0 with 512MB).
The Xen hypervisor requires 64MB RAM. If you don't do much work in dom0 you can set the dom0 memory fairly low; I have seen people set it to 64MB. But remember that this is only the minimum: suppose a system has 1024MB RAM and only one 512MB domU guest. dom0 can then use approximately 1024 - 512 - 64 = 448MB RAM, even if dom0-min-mem is set to 64.
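If you want to control this yourself, the usual knobs are the hypervisor boot line and xend's config; the values and the exact xen.gz filename below are only examples for a CentOS 5 setup:

# /boot/grub/grub.conf: fix dom0's allocation at boot
kernel /xen.gz-2.6.18-8.el5 dom0_mem=512M

# /etc/xen/xend-config.sxp: don't let dom0 be ballooned below this
(dom0-min-mem 256)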
But are there others? Since I'm doing paravirtualization, I assume I don't need to turn on VT in the BIOS?
No.
What about 32-bit v. 64-bit OS, for the dom0 and for the domU? (I'll be using CentOS-5.) Anything else?
A 64-bit dom0 with 32-bit domUs isn't currently supported in CentOS 5, but it will be in 5.1.
-- Daniel
Daniel de Kok wrote:
On Mon, 2007-08-06 at 14:55 -0400, Johnn Tan wrote:
Assigning all the VCPUs.
Having more than one vcpu currently emits non-fatal error messages, at least with the C5 domU kernel. I didn't have time to look into that yet.
What error messages are you seeing? On one of my machines, I have four domUs (kernel 2.6.18-8.1.8.el5xen), each with 4 VCPUs. I don't see any errors in any of the domUs' dmesg. But /proc/cpuinfo shows they are each using all 4 CPUs.
When I try to over-assign memory, though, I definitely get errors and the domU is not created.
The Xen hypervisor requires 64MB RAM. If you don't do much work in dom0 you can set the dom0 memory fairly low. I have seen people setting it to 64 MB RAM.
Good to know. I'm a little hesitant to set it at 64MB, but I feel a little more comfortable now bringing it further down from the 512MB that I've been using as a dom0 minimum.
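(I assume I can also just balloon a running dom0 down to test it, with something like:

xm mem-set Domain-0 256

before committing to a lower dom0 allocation at boot.)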
What about 32-bit v. 64-bit OS, for the dom0 and for the domU? (I'll be using CentOS-5.) Anything else?
A 64-bit dom0 with 32-bit domUs isn't currently supported in CentOS 5, but it will be in 5.1.
Yes, I'm looking forward to that, plus some of the other updates in 5.1.
But I'm curious whether 32-bit domU on 32-bit dom0 is more performant than 64-bit domU on 64-bit dom0. I'm about to do my own test of this sometime this week, but was wondering if others had already tried it.
johnn
On Mon, 2007-08-06 at 16:54 -0400, Johnn Tan wrote:
What error messages are you seeing? On one of my machines, I have four domUs (kernel 2.6.18-8.1.8.el5xen), each with 4 VCPUs. I don't see any errors in any of the domUs' dmesg. But /proc/cpuinfo shows they are each using all 4 CPUs.
Soft lockups on CPUs: http://bugs.centos.org/view.php?id=2161
-- Daniel
Daniel de Kok wrote:
On Mon, 2007-08-06 at 16:54 -0400, Johnn Tan wrote:
What error messages are you seeing? On one of my machines, I have four domUs (kernel 2.6.18-8.1.8.el5xen), each with 4 VCPUs. I don't see any errors in any of the domUs' dmesg. But /proc/cpuinfo shows they are each using all 4 CPUs.
Soft lockups on CPUs: http://bugs.centos.org/view.php?id=2161
Thanks for the link, Daniel. I'm not experiencing this at all. I now have 11 machines where I've assigned all available VCPUs to every domU running on each machine. I just rebooted one of them and checked the domU dmesg, and I don't see this error.
I'm thinking it's either the kernel version or, like the user states, maybe it's something with HyperThreading.
johnn
On Tue, 2007-08-07 at 17:54 -0400, Johnn Tan wrote:
I'm thinking it's either the kernel version or, like the user states, maybe it's something with HyperThreading.
Hyperthreading is off on the machines where this occurs. Since the problem seems to occur while handling timer interrupts, this could be a hardware-specific problem.
-- Daniel