Rudi Ahlers wrote:
Ross S. W. Walker wrote:
Rudi Ahlers wrote:
Ross S. W. Walker wrote:
<snip discussion on centos-virt subscribing>
Did you verify that selinux is indeed disabled by checking that /etc/selinux/config contains the line SELINUX=disabled?
Yep:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted
Good. Some users just run setenforce 0, think that's all there is to it, then reboot and wonder why things are still not working properly.
What were the actual contents of your domU's config file?
This is what I changed:
# Dom0 will balloon out when needed to free memory for domU.
# dom0-min-mem is the lowest memory level (in MB) dom0 will get down to.
# If dom0-min-mem=0, dom0 will never balloon out.
(dom0-min-mem 512)
It was: (dom0-min-mem 256)
Misunderstanding, I was hoping to see the config file of the domU you were trying to create.
You meant this?
[root@gimbli ~]# more /etc/xen/vm03
name = "vm03"
uuid = "cc3b0b01-7894-4ac2-06e6-a1a1307939fc"
maxmem = 512
memory = 512
vcpus = 1
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ ]
disk = [ "tap:aio:/home/vm/vm03.img,xvda,w" ]
vif = [ "mac=00:16:3e:0a:13:9d,bridge=xenbr0" ]
Yes, it's a little lighter than I am used to, but it appears functional.
I typically use an example config from /etc/xen and then change the appropriate parts for my VM and if necessary make sure the defines are up to date.
Here's a Xen 3.2 PV config:
-----
# Kernel image file.
kernel = "/boot/vmlinuz-2.6.10-xenU"

# Optional ramdisk.
#ramdisk = "/boot/initrd.gz"

# The domain build function. Default is 'linux'.
#builder='linux'

# Initial memory allocation (in megabytes) for the new domain.
memory = 64

# A name for your domain. All domains must have different names.
name = "ExampleDomain"

# 128-bit UUID for the domain. The default behavior is to generate a new UUID
# on each call to 'xm create'.
#uuid = "06ed00fe-1162-4fc4-b5d8-11993ee4a8b9"

# List of which CPUS this domain is allowed to use, default Xen picks
#cpus = ""         # leave to Xen to pick
#cpus = "0"        # all vcpus run on CPU0
#cpus = "0-3,5,^1" # run on cpus 0,2,3,5

# Number of Virtual CPUS to use, default is 1
#vcpus = 1

# By default, no network interfaces are configured. You may have one created
# with sensible defaults using an empty vif clause:
#
# vif = [ '' ]
#
# or optionally override backend, bridge, ip, mac, script, type, or vifname:
#
# vif = [ 'mac=00:16:3e:00:00:11, bridge=xenbr0' ]
#
# or more than one interface may be configured:
#
# vif = [ '', 'bridge=xenbr1' ]

vif = [ '' ]

# Define the disk devices you want the domain to have access to, and
# what you want them accessible as.
# Each disk entry is of the form phy:UNAME,DEV,MODE
# where UNAME is the device, DEV is the device name the domain will see,
# and MODE is r for read-only, w for read-write.

disk = [ 'phy:hda1,hda1,w' ]

# Define frame buffer device.
#
# By default, no frame buffer device is configured.
#
# To create one using the SDL backend and sensible defaults:
#
# vfb = [ 'type=sdl' ]
#
# This uses environment variables XAUTHORITY and DISPLAY. You
# can override that:
#
# vfb = [ 'type=sdl,xauthority=/home/bozo/.Xauthority,display=:1' ]
#
# To create one using the VNC backend and sensible defaults:
#
# vfb = [ 'type=vnc' ]
#
# The backend listens on 127.0.0.1 port 5900+N by default, where N is
# the domain ID. You can override both address and N:
#
# vfb = [ 'type=vnc,vnclisten=127.0.0.1,vncdisplay=1' ]
#
# Or you can bind the first unused port above 5900:
#
# vfb = [ 'type=vnc,vnclisten=0.0.0.0,vncunused=1' ]
#
# You can override the password:
#
# vfb = [ 'type=vnc,vncpasswd=MYPASSWD' ]
#
# Empty password disables authentication. Defaults to the vncpasswd
# configured in xend-config.sxp.

# Define to which TPM instance the user domain should communicate.
# The vtpm entry is of the form 'instance=INSTANCE,backend=DOM'
# where INSTANCE indicates the instance number of the TPM the VM
# should be talking to and DOM provides the domain where the backend
# is located.
# Note that no two virtual machines should try to connect to the same
# TPM instance. The handling of all TPM instances does require
# some management effort in so far that VM configuration files (and thus
# a VM) should be associated with a TPM instance throughout the lifetime
# of the VM / VM configuration file. The instance number must be
# greater or equal to 1.
#vtpm = [ 'instance=1,backend=0' ]

# Set the kernel command line for the new domain.
# You only need to define the IP parameters and hostname if the domain's
# IP config doesn't, e.g. in ifcfg-eth0 or via DHCP.
# You can use 'extra' to set the runlevel and custom environment
# variables used by custom rc scripts (e.g. VMID=, usr= ).

# Set if you want dhcp to allocate the IP address.
#dhcp="dhcp"
# Set netmask.
#netmask=
# Set default gateway.
#gateway=
# Set the hostname.
#hostname= "vm%d" % vmid

# Set root device.
root = "/dev/hda1 ro"

# Root device for nfs.
#root = "/dev/nfs"
# The nfs server.
#nfs_server = '169.254.1.0'
# Root directory on the nfs server.
#nfs_root = '/full/path/to/root/directory'

# Sets runlevel 4.
extra = "4"

#on_poweroff = 'destroy'
#on_reboot = 'restart'
#on_crash = 'restart'
-----
So there are lots of options you can configure that aren't available through virt-install. You don't need to use virt-install either; it's a separate management framework (actually a general VM management framework for both Xen and KVM).
You could just do:
# xm create <config file>
On Xen 3.2, I add the VM directly into the xenstore with:
# xm new <config file>
Then the VM appears in xm list, and it can be started, stopped or paused, and can always be queried through the Xen API by third-party management tools.
# xm start <vm name>
# xm pause <vm name>
# xm resume <vm name>
# xm shutdown <vm name>
Also is this a workstation with Xen domU's for testing/development or a full blown Xen server for running production VMs?
This will be a full blown Xen server for production purposes. It will run max 8 Xen guests with cPanel on each one.
In that case, if you don't want to shell out the $ for Xen Enterprise, I would do these steps for setting up a Xen server:
- For each server, do a minimal install with no X, both to reduce any possible dom0 issues and to let you minimize dom0 memory usage; with no X windows you can then run dom0 in 256MB (see the grub.conf example after this list).
- Use the Xen 3.2 packages off of xen.org, compiled 64-bit, and compile them on a separate 64-bit platform, as the compilation will pull in a lot of other development packages and X. These packages use the Xen kernel from CentOS for the kernel image, and that package comes with the 3.1 Xen image, so you'll need to edit grub.conf to make sure the Xen 3.2 image is used instead of the 3.1 image every time you upgrade the kernel (again, see the grub.conf example below). These packages provide the latest features and fixes, as well as the more capable management tools and API, which become a necessity when you manage from the command line and/or have more than 1 server, which eventually you will for scalability, redundancy, etc.
- Start seriously thinking about implementing an iSCSI SAN. Your storage requirements will balloon like crazy until your virtualization environment stabilizes, and SANs allow for better storage utilization and scalability, allow VM migration from one host to another, and are a bitch to migrate to after the fact.
- Build your Xen config files by hand; it's the only way to assure they are set up properly and the way you want.
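For reference, here's roughly what the grub.conf entry ends up looking like; the kernel version, hypervisor filename and root device below are just examples, so adjust them to whatever you actually have installed. The dom0_mem option on the hypervisor line is what limits dom0 to 256MB at boot:
-----
title CentOS (2.6.18-53.1.14.el5xen)
        root (hd0,0)
        kernel /xen-3.2.0.gz dom0_mem=256M
        module /vmlinuz-2.6.18-53.1.14.el5xen ro root=/dev/VolGroup00/LogVol00
        module /initrd-2.6.18-53.1.14.el5xen.img
-----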
Since a Xen environment will be sensitive to change (maybe not as much as a LAMP environment, but probably a close second), you may want to manage your Xen build yourself, at least for the servers, as Red Hat's Xen implementation is still evolving.
I would use Red Hat's Xen environment once they have a pure Xen 3.2 build, as their current Frankenstein environment is really aimed at workstation deployments, especially their hokey X tool.
Ross, you're talking about "scary" new stuff which I haven't even thought about.
True, though once you've done it once it isn't as scary as it first seems.
- What I'd like to accomplish is to have a few dedicated servers (our company is the hosting company, and this is the first time we're going into virtualization), each running up to say 10 VPS / VMs, which is much cheaper for the client than a full-blown dedi. None of the servers have X (no point to it), and we use a very, very minimal install (I don't even have FTP running, since cPanel will provide this). The VPSs will have either 256MB (the cPanel minimum), 512MB, 768MB or 1GB RAM; obviously, if more RAM is desired per VPS, fewer will run on one server, or the server will have more RAM & CPUs. HDD space will also be either 10 / 20 / 40 / 60 GB per VPS. The VPSs themselves will only run cPanel, no X; a server doesn't need X for anything. So, 10 VPS with 512MB each = 5GB needed on the host server (more if they get the bigger RAM sizes). Many Xeon mobos can take up to 32GB RAM.
Yes, there is no problem there, and if you have multiple Xen servers using shared backend storage you can migrate VMs between the Xen servers on an as-needed basis.
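The migration itself is then a one-liner, assuming xend relocation is enabled in xend-config.sxp on the destination host (the VM and host names here are made up):
# xm migrate --live vm03 xenhost2.example.local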
- I'm a bit sceptical about using Xen 3.2 off the Xen site, as I don't know how well it'll perform on CentOS, and I believe that if CentOS hasn't included it in their repositories yet, then there must be a good reason. I'll test it on a test server though to see what happens. I think the other problem I have is that these servers are deployed from the standard CentOS 5.1 CD & a kickstart file with only the necessary software & nothing more. Having to compile software on another machine isn't fun for me.
Well, here's a clue: the Xen Enterprise server uses CentOS as the OS for dom0. I believe they still use CentOS 4.X (maybe the latest uses 5.X), and the Xen 3.2 package was built on CentOS. Xen 3.2 performs very, very well on CentOS 5.X (some say even better than the Xen shipping with CentOS).
At my site I have set up a site-specific repository for all in-house compiled RPMs, and I have my kickstarts add the repo with a higher priority than the base/updates/extras repos for CentOS. Then I run a 'yum update', my Xen 3.2 packages replace the CentOS ones on install, and they are thereafter updated from my internal repo.
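The repo definition itself is nothing special; something along these lines (the repo name and URL are made up), with the yum-priorities plugin installed so the priority= line is honored:
-----
[local-xen]
name=In-house Xen 3.2 packages
baseurl=http://repo.example.local/xen32/el5/x86_64/
enabled=1
gpgcheck=0
priority=1
-----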
- I just want to understand this better. If I run a 64bit host, and want
to install any other guest (preferably 32bit), then I need to use the "fully virtualized guest" and not the para-virtualized option, right?
No, with Xen 3.1 and 3.2 a 64-bit Xen host can run:
32-bit PVM
32-bit PAE PVM
32-bit HVM
64-bit PVM
64-bit HVM
There have been reports that the CentOS version has problems with 32-bit PVM guests on 64-bit hosts. I don't know if they have all been resolved, but this should fully work on Xen 3.2.
- I like where you're heading with the suggestion of an iSCSI SAN, but that's totally new to me as well, and not in my budget right now. Maybe later on, when this whole project takes off as I hope it will. But, since you mention it, do you then set up the server with a base OS and mount the iSCSI SAN as added storage? And then all the VMs get stored on the SAN instead of on the server's HDDs? And how well will such a setup perform if I have, say, 5 / 10 servers connected to it? I guess the SAN will then need at least 4 Gigabit NICs, as the hubs in the DC are only 100Mbit hubs. For a startup SAN, what would you suggest? I guess a PC / server-based SAN (in other words, I take a Xeon mobo with plenty of HDD ports and put plenty of HDDs on it) isn't really an option?
Yes, the VMs' disks are located on the SAN and are either mounted on the Xen host, with the VMs booted off them, or the Xen VMs do an iSCSI boot directly off the SAN. There are a myriad of ways of doing this, and it will depend on the desired VM and Xen configuration.
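For example, if the iSCSI LUN is attached to the Xen host and carved up with LVM (the volume group and LV names below are made up), the domU disk line simply points at the logical volume:
disk = [ 'phy:/dev/vg_san/vm03-root,xvda,w' ]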
I run approximately 16 virtual guests off a single iSCSI server and the performance is very good. Of course, I have the disk array set up as RAID-10 for the VMs, as 90% of all disk access will be random across the OS disks. I have a separate iSCSI server that provides the application data storage, with different array types depending on the storage needs.
Currently we have used the iSCSI Enterprise Target software from SourceForge to provide cheap iSCSI, and we are graduating to an appliance-based solution soon. The appliance costs from Dell have dropped to a thin margin over the cost of the raw disks, so there isn't much of a cost factor any more; we're talking around $20K for an iSCSI array with 15 146GB 15K SAS disks, where it was once around $80-90K.
Of course with the iSCSI Enterprise Target you can start simple with SATA and a cheap array. Then gradually evolve it to more and more complex/expensive setups as the need grows.
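To give you an idea, a minimal target definition in the IET config (/etc/ietd.conf) can be as simple as the following; the IQN and backing device are made up, and the backing store can just as well be a SATA RAID volume, an LVM volume or a plain file:
-----
Target iqn.2008-04.local.example:san.vmstore
        Lun 0 Path=/dev/md0,Type=blockio
-----
The Xen hosts then log in with the standard iscsi-initiator-utils (iscsiadm) and see it as an ordinary block device.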
For now I'm going to manage the Xen stuff myself; I don't have anyone capable of doing this kind of work yet.
Plan it out carefully, ask advice on this list and the Xen list and you should be able to put together an effective setup very economically.
-Ross
< -- snip -- >
<snip domU config and Xen 3.2 example config>
So there are lots of options you can configure that aren't available through virt-install. You don't need to use virt-install either; it's a separate management framework (actually a general VM management framework for both Xen and KVM).
You could just do:
# xm create <config file>
On Xen 3.2, I add the VM directly into the xenstore with:
# xm new <config file>
Then the VM appears in xm list, and it can be started, stopped or paused, and can always be queried through the Xen API by third-party management tools.
# xm start <vm name>
# xm pause <vm name>
# xm resume <vm name>
# xm shutdown <vm name>
I'd prefer to use virt-install, as I can automate the VM creation from a script, if I can get it to work.
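Something along these lines is what I was hoping to wrap in a script (the names, sizes and install URL are just placeholders, and I'm not 100% sure I have all the flags right):
# virt-install --paravirt --name vm04 --ram 512 --vcpus 1 \
      --file /home/vm/vm04.img --file-size 10 --nographics \
      --bridge xenbr0 --location http://mirror.centos.org/centos/5/os/x86_64/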
Also is this a workstation with Xen domU's for testing/development or a full blown Xen server for running production VMs?
This will be a full blown Xen server for production purposes. It will run max 8 Xen guests with cPanel on each one.
In that case, if you don't want to shell out the $ for Xen Enterprise, I would do these steps for setting up a Xen server:
<snip lengthy talk on Xen setup>
Xen Enterprise is a bit out of my budget; maybe later on, when our VPS project takes off well, we will look into it.
- I just want to understand this better. If I run a 64bit host, and want
to install any other guest (preferably 32bit), then I need to use the "fully virtualized guest" and not the para-virtualized option, right?
No, with Xen 3.1 and 3.2 a 64-bit Xen host can run:
32-bit PVM
32-bit PAE PVM
32-bit HVM
64-bit PVM
64-bit HVM
There have been reports that the CentOS version has problems with 32-bit PVM guests on 64-bit hosts. I don't know if they have all been resolved, but this should fully work on Xen 3.2.
Am I understanding correctly that I'm trying to install a 32-bit PVM? Cause then it should work?
- I like where you're heading with the suggestion of an iSCSI SAN, but that's totally new to me as well, and not in my budget right now. Maybe later on, when this whole project takes off as I hope it will. But, since you mention it, do you then set up the server with a base OS and mount the iSCSI SAN as added storage? And then all the VMs get stored on the SAN instead of on the server's HDDs? And how well will such a setup perform if I have, say, 5 / 10 servers connected to it? I guess the SAN will then need at least 4 Gigabit NICs, as the hubs in the DC are only 100Mbit hubs. For a startup SAN, what would you suggest? I guess a PC / server-based SAN (in other words, I take a Xeon mobo with plenty of HDD ports and put plenty of HDDs on it) isn't really an option?
Yes, the VMs' disks are located on the SAN and are either mounted on the Xen host, with the VMs booted off them, or the Xen VMs do an iSCSI boot directly off the SAN. There are a myriad of ways of doing this, and it will depend on the desired VM and Xen configuration.
I run approximately 16 virtual guests off a single iSCSI server and the performance is very good. Of course, I have the disk array set up as RAID-10 for the VMs, as 90% of all disk access will be random across the OS disks. I have a separate iSCSI server that provides the application data storage, with different array types depending on the storage needs.
With 16 VPSs (depending on the RAM & HDD space), surely I can just run them directly on the dedi instead? I'm not planning on using more than 1GB RAM & 60GB HDD space per VM. The servers will probably have up to 1.5TB of RAID-10 HDD space, which will work fine for 16 x 60GB = 960GB. Although some people recommend no more than 6 VPSs per CPU core, so on the current Core 2 Duo I can only do 12 VPSs max, or 16 easily on a Core 2 Quad with 16 - 32GB RAM.
Currently we have used the iSCSI Enterprise Target software from SourceForge to provide cheap iSCSI, and we are graduating to an appliance-based solution soon. The appliance costs from Dell have dropped to a thin margin over the cost of the raw disks, so there isn't much of a cost factor any more; we're talking around $20K for an iSCSI array with 15 146GB 15K SAS disks, where it was once around $80-90K.
SCSI drives are still rather expensive in our country, but SATA II drives are rather cheap. Still, I'll look into SCSI to see how cost-effective it could be.
Of course with the iSCSI Enterprise Target you can start simple with SATA and a cheap array. Then gradually evolve it to more and more complex/expensive setups as the need grows.
Thanx, I'll take a look at iSCSI Enterprise Target, although I've been thinking about using FreeNAS (http://www.freenas.org/) for this purpose.
For now I'm going to manage the Xen stuff myself; I don't have anyone capable of doing this kind of work yet.
Plan it out carefully, ask advice on this list and the Xen list and you should be able to put together an effective setup very economically.
-Ross
Rudi Ahlers wrote:
Ross S. W. Walker wrote:
Rudi Ahlers wrote:
< -- snip -- >
You meant this?
[root@gimbli ~]# more /etc/xen/vm03
name = "vm03"
uuid = "cc3b0b01-7894-4ac2-06e6-a1a1307939fc"
maxmem = 512
memory = 512
vcpus = 1
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ ]
disk = [ "tap:aio:/home/vm/vm03.img,xvda,w" ]
vif = [ "mac=00:16:3e:0a:13:9d,bridge=xenbr0" ]
Yes, it's a little lighter than I am used to, but it appears functional.
I typically use an example config from /etc/xen and then change the appropriate parts for my VM and if necessary make sure the defines are up to date.
Here's a Xen 3.2 PV config:
<snip config found in /etc/xen>
So there are lots of options you can configure that aren't available through virt-install. You don't need to use virt-install either; it's a separate management framework (actually a general VM management framework for both Xen and KVM).
You could just do:
# xm create <config file>
On Xen 3.2, I add the VM directly into the xenstore with:
# xm new <config file>
Then the VM appears in xm list, and it can be started, stopped or paused, and can always be queried through the Xen API by third-party management tools.
# xm start <vm name>
# xm pause <vm name>
# xm resume <vm name>
# xm shutdown <vm name>
I'd prefer to use virt-install, as I can automate the VM creation from a script, if I can get it to work.
Well, I think you can script Xen through Python and the Xen API, much the way virt-install does, but with infinitely more flexibility.
Most of the time, though, you probably won't be scripting but templating VMs. You will have a few general default configs for different OS types, combine them maybe with base image snapshots, and use those to deploy new VMs (see the sketch below).
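As a rough sketch of what I mean by templating (the template file, placeholders, volume names and sizes are all made up for the example):
# cp /etc/xen/template-centos5 /etc/xen/vm04
# sed -i -e 's/@NAME@/vm04/g' -e 's/@MAC@/00:16:3e:00:00:44/g' /etc/xen/vm04
# lvcreate -s -L 10G -n vm04-root /dev/vg_san/centos5-base
# xm new /etc/xen/vm04
# xm start vm04
The same idea works with file-backed images, just with cp or dd of a base image instead of an LVM snapshot.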
<snip lengthy talk on Xen setup>
Ross, you're talking about "scary" new stuff which I haven't even thought about.
True, though once you've done it once it isn't as scary as it first seems.
- What I'd like to accomplish is to have a few dedicated servers (our company is the hosting company, and this is the first time we're going into virtualization), each running up to say 10 VPS / VMs, which is much cheaper for the client than a full-blown dedi. None of the servers have X (no point to it), and we use a very, very minimal install (I don't even have FTP running, since cPanel will provide this). The VPSs will have either 256MB (the cPanel minimum), 512MB, 768MB or 1GB RAM; obviously, if more RAM is desired per VPS, fewer will run on one server, or the server will have more RAM & CPUs. HDD space will also be either 10 / 20 / 40 / 60 GB per VPS. The VPSs themselves will only run cPanel, no X; a server doesn't need X for anything. So, 10 VPS with 512MB each = 5GB needed on the host server (more if they get the bigger RAM sizes). Many Xeon mobos can take up to 32GB RAM.
Yes, there is no problem there, and if you have multiple Xen servers using shared backend storage you can migrate VMs between the Xen servers on an as-needed basis.
- I'm a bit sceptical about using Xen 3.2 off the Xen site, as I don't know how well it'll perform on CentOS, and I believe that if CentOS hasn't included it in their repositories yet, then there must be a good reason. I'll test it on a test server though to see what happens. I think the other problem I have is that these servers are deployed from the standard CentOS 5.1 CD & a kickstart file with only the necessary software & nothing more. Having to compile software on another machine isn't fun for me.
Well, here's a clue: the Xen Enterprise server uses CentOS as the OS for dom0. I believe they still use CentOS 4.X (maybe the latest uses 5.X), and the Xen 3.2 package was built on CentOS. Xen 3.2 performs very, very well on CentOS 5.X (some say even better than the Xen shipping with CentOS).
At my site I have set up a site-specific repository for all in-house compiled RPMs, and I have my kickstarts add the repo with a higher priority than the base/updates/extras repos for CentOS. Then I run a 'yum update', my Xen 3.2 packages replace the CentOS ones on install, and they are thereafter updated from my internal repo.
Xen Enterprise is a bit out of my budget; maybe later on, when our VPS project takes off well, we will look into it.
Well, $1600/socket can be a lot or a little depending on what you're used to, but you can start with CentOS 5.1 + Xen 3.2 and always migrate over to Xen Enterprise later.
- I just want to understand this better. If I run a 64bit host, and want
to install any other guest (preferably 32bit), then I need to use the "fully virtualized guest" and not the para-virtualized option, right?
No, with Xen 3.1 and 3.2 a 64-bit Xen host can run:
32-bit PVM
32-bit PAE PVM
32-bit HVM
64-bit PVM
64-bit HVM
There have been reports that the CentOS version has problems with 32-bit PVM guests on 64-bit hosts. I don't know if they have all been resolved, but this should fully work on Xen 3.2.
Am I understanding correctly that I'm trying to install a 32-bit PVM? Cause then it should work?
Yes, a 32-bit PVM should work fine on a 64-bit CentOS 5.1 host. It has to be 5.1 or later though, because 5.0 didn't support this, and support for this setup is even more solid on Xen 3.2.
- I like where you're heading with the suggestion of an iSCSI SAN, but that's totally new to me as well, and not in my budget right now. Maybe later on, when this whole project takes off as I hope it will. But, since you mention it, do you then set up the server with a base OS and mount the iSCSI SAN as added storage? And then all the VMs get stored on the SAN instead of on the server's HDDs? And how well will such a setup perform if I have, say, 5 / 10 servers connected to it? I guess the SAN will then need at least 4 Gigabit NICs, as the hubs in the DC are only 100Mbit hubs. For a startup SAN, what would you suggest? I guess a PC / server-based SAN (in other words, I take a Xeon mobo with plenty of HDD ports and put plenty of HDDs on it) isn't really an option?
Yes, the VMs' disks are located on the SAN and are either mounted on the Xen host, with the VMs booted off them, or the Xen VMs do an iSCSI boot directly off the SAN. There are a myriad of ways of doing this, and it will depend on the desired VM and Xen configuration.
I run approximately 16 virtual guests off a single iSCSI server and the performance is very good. Of course, I have the disk array set up as RAID-10 for the VMs, as 90% of all disk access will be random across the OS disks. I have a separate iSCSI server that provides the application data storage, with different array types depending on the storage needs.
With 16 VPSs (depending on the RAM & HDD space), surely I can just run them directly on the dedi instead? I'm not planning on using more than 1GB RAM & 60GB HDD space per VM. The servers will probably have up to 1.5TB of RAID-10 HDD space, which will work fine for 16 x 60GB = 960GB. Although some people recommend no more than 6 VPSs per CPU core, so on the current Core 2 Duo I can only do 12 VPSs max, or 16 easily on a Core 2 Quad with 16 - 32GB RAM.
16 on a Quad should work if their load is low; if you have 4 or more with a heavy load, then you may want to get more sockets or spread them over multiple servers.
Currently we have used the iSCSI Enterprise Target software from SourceForge to provide cheap iSCSI, and we are graduating to an appliance-based solution soon. The appliance costs from Dell have dropped to a thin margin over the cost of the raw disks, so there isn't much of a cost factor any more; we're talking around $20K for an iSCSI array with 15 146GB 15K SAS disks, where it was once around $80-90K.
SCSI drives are still rather expensive in our country, but SATA II drives are rather cheap. Still, I'll look into SCSI to see how cost-effective it could be.
SCSI drives are still more expensive than their SATA counterparts in any country.
iSCSI doesn't have to use SCSI drives though. Any backend storage can be served up over iSCSI, even RAM disks if so desired.
Of course with the iSCSI Enterprise Target you can start simple with SATA and a cheap array. Then gradually evolve it to more and more complex/expensive setups as the need grows.
Thanx, I'll take a look at iSCSI Enterprise Target, although I've been thinking about using FreeNAS (http://www.freenas.org/) for this purpose.
I try to stay away from NAS for block-level storage as it can cause all sorts of hard to diagnose performance problems. It just was never designed to handle virtual machines, databases and such.
For now I'm going to manage the Xen stuff myself; I don't have anyone capable of doing this kind of work yet.
Plan it out carefully, ask advice on this list and the Xen list and you should be able to put together an effective setup very economically.