Ross S. W. Walker wrote:
Rudi Ahlers wrote:
Ross S. W. Walker wrote:
That shouldn't be; the list acts like any other CentOS list. Maybe you entered your email address incorrectly, the confirmation got spam-filtered, or the list is just temporarily broken, but it should send you an email upon subscribing to confirm your subscription.
Really? Maybe I subscribed to the wrong list then, but this is the reply I got:
Your mail to 'CentOS-virt' with the subject
subscribe centos-virt
Is being held until the list moderator can review it for approval.
The reason it is being held:
Message may contain administrivia
Try subscribing through the mailman web site:
Odd, my subscription did go through on the first email, as I got a message saying I'm already subscribed. I'll post there as well; let's see what happens :)
Did you verify that SELinux is indeed disabled by checking that /etc/selinux/config contains the line SELINUX=disabled?
Yep:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted
Good. Some users just run setenforce 0, think that's it, then reboot and wonder why things still aren't working properly.
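For anyone following along: a quick way to confirm the running state, not just the config file, is to ask the kernel directly, e.g.:

# reports Enforcing / Permissive / Disabled for the currently running kernel
getenforce
# more detailed status, including the loaded policy type
sestatus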
What was the actual contents of your domU's config file?
This is what I changed:
# Dom0 will balloon out when needed to free memory for domU.
# dom0-min-mem is the lowest memory level (in MB) dom0 will get down to.
# If dom0-min-mem=0, dom0 will never balloon out.
(dom0-min-mem 512)
It was: (dom0-min-mem 256)
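If I remember correctly, xend only reads /etc/xen/xend-config.sxp at startup, so the new dom0-min-mem value needs a xend restart (or a reboot) to take effect:

# restart the Xen management daemon so the new dom0-min-mem value is picked up
service xend restart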
Misunderstanding, I was hoping to see the config file of the domU you were trying to create.
You meant this?
[root@gimbli ~]# more /etc/xen/vm03
name = "vm03"
uuid = "cc3b0b01-7894-4ac2-06e6-a1a1307939fc"
maxmem = 512
memory = 512
vcpus = 1
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ ]
disk = [ "tap:aio:/home/vm/vm03.img,xvda,w" ]
vif = [ "mac=00:16:3e:0a:13:9d,bridge=xenbr0" ]
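For reference, a config file like that would normally be started and checked with the standard xm tools, roughly:

# start the guest defined in /etc/xen/vm03 and attach to its console
xm create /etc/xen/vm03 -c
# confirm it shows up and is using the expected amount of memory
xm list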
Also is this a workstation with Xen domU's for testing/development or a full blown Xen server for running production VMs?
-Ross
This will be a full blown Xen server for production purposes. It will run max 8 Xen guests with cPanel on each one.
In that case, if you don't want to shell out the $ for Xen Enterprise, I would take these steps for setting up a Xen server:
- For each server, do a minimal install with no X, to reduce possible dom0 issues and to keep dom0 memory usage down; with no X windows you can then run dom0 in 256MB (see the grub.conf sketch after this list).
- Use the Xen 3.2 packages off of xen.org, compiled 64-bit. Compile them on a separate 64-bit platform, as the compilation will pull in a lot of other development packages plus X. These packages use the Xen kernel from CentOS for the kernel image, but that package ships with the 3.1 Xen hypervisor image, so you'll need to edit grub.conf to make sure the 3.2 image is used instead of the 3.1 image every time you upgrade the kernel (again, see the sketch after this list). The 3.2 packages provide the latest features and fixes as well as the more capable management tools and API, which become a necessity once you manage from the command line and/or have more than 1 server, which eventually you will for scalability, redundancy, etc.
- Start seriously thinking about implementing an iSCSI SAN. Your storage requirements will balloon like crazy until your virtualization environment stabilizes; a SAN allows for better storage utilization and scalability, allows VM migration from one host to another, and is a bitch to migrate to after the fact.
- Build your Xen config files by hand; it's the only way to be sure they are set up properly and the way you want.
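To give a rough idea of the grub.conf edit mentioned above (the exact hypervisor and kernel file names depend on your install; these are illustrative only):

# illustrative grub.conf entry -- verify the file names in /boot on your own box
title CentOS (2.6.18-53.el5xen, Xen 3.2 hypervisor)
        root (hd0,0)
        # boot the xen.org 3.2 hypervisor instead of the stock 3.1 image,
        # and cap dom0 at 256MB since no X is running
        kernel /xen-3.2.gz dom0_mem=256M
        module /vmlinuz-2.6.18-53.el5xen ro root=/dev/VolGroup00/LogVol00
        module /initrd-2.6.18-53.el5xen.img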
Since a Xen environment will be sensitive to change, maybe not as much as a LAMP environment but probably a close second, you may want to manage your Xen build yourself, at least for the servers, as Red Hat's Xen implementation is still evolving.
I would use Red Hat's Xen environment once they have a pure Xen 3.2 build, as their current Frankenstein environment is really aimed at workstation deployments, especially their hokey X tool.
-Ross
Ross, you're talking about "scary" new stuff which I haven't even thought about.
- What I'd like to accomplish is to have a few dedicated servers (our company is the hosting company, and this is the first time we go into virtualization), each running up to say 10 VPSs / VMs, which is much cheaper for the client than a full-blown dedicated server. None of the servers have X (no point to it), and we use a very, very minimal install (I don't even have FTP running, since cPanel will provide this). The VPSs will have either 256MB (the cPanel minimum), 512MB, 768MB or 1GB of RAM; obviously if more RAM is desired per VPS, fewer will run on one server, or the server will have more RAM & CPUs. HDD space will also be either 10 / 20 / 40 / 60 GB per VPS. The VPSs themselves will only run cPanel, no X; a server doesn't need X for anything. So, 10 VPSs with 512MB each = about 5GB needed on the host server (plus dom0), and many Xeon mobos can take up to 32GB RAM.
- I'm a bit skeptical about using Xen 3.2 off the xen.org site, as I don't know how well it will perform on CentOS, and I believe that if CentOS hasn't included it in their repositories yet, there must be a good reason. I'll test it on a test server though, to see what happens. The other problem I have is that these servers are deployed from the standard CentOS 5.1 CD and a kickstart file with only the necessary software and nothing more; having to compile software on another machine isn't fun for me.
- I just want to understand this better: if I run a 64-bit host and want to install any other guest (preferably 32-bit), then I need to use the "fully virtualized guest" option and not the para-virtualized option, right?
- I like where you're heading with the suggestion of an iSCSI SAN, but that's totally new to me as well, and not in my budget right now. Maybe later on, when this whole project takes off as I hope it will. But since you mention it: do you set up the server with a base OS and mount the iSCSI SAN as added storage, so that all the VMs get stored on the SAN instead of on the server's HDDs? And how well will such a setup perform if I have, say, 5 / 10 servers connected to it? I guess the SAN would then need at least 4 Gigabit NICs, as the hubs in the DC are only 100Mbit hubs. For a startup SAN, what would you suggest? I guess a PC / server based SAN (in other words, a Xeon mobo with plenty of HDD ports and plenty of HDDs on it) isn't really an option?
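In very rough terms, and assuming the CentOS open-iscsi tools, attaching a dom0 to an iSCSI target and handing the LUN to a guest looks something like this (the portal IP and IQN below are made-up placeholders):

# install and start the iSCSI initiator (CentOS 5 package name)
yum install iscsi-initiator-utils
service iscsi start
# discover the targets the SAN exports
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
# log in to the target holding the VM storage
iscsiadm -m node -T iqn.2008-04.com.example:vmstore -p 192.168.1.50 --login
# the LUN then appears as a local block device (e.g. /dev/sdb); a domU config
# can reference it directly, for example:
#   disk = [ "phy:/dev/sdb,xvda,w" ]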
For now I'm going to manage the Xen stuff myself; I don't have anyone else capable of doing this kind of work yet.