[CentOS-virt] xen setup documentation for centos?
lee at yun.yagibdah.de
Mon Jun 2 17:01:45 UTC 2014
Nico Kadel-Garcia <nkadel at gmail.com> writes:
> On Sun, Jun 1, 2014 at 8:45 PM, lee <lee at yun.yagibdah.de> wrote:
>> what is the proposed way to create domU guests on centos 6.5? At
>> first I followed the redhat documentation, which suggests using
>> virt-manager --- which doesn't work because servers don't have GUIs.
>> So I finally managed to create a guest with virt-install. I can
>> start and stop the guest (which is also running centos), though I
>> don't think this is the right way to create one.
>> So how exactly are you supposed to create guests?
> Servers *can* have GUIs. Even if you don't want to install the full
> Gnome/KDE/display manager toolkits, it's possible to set up enough to
> run X based applications from another host.
Yes, they /can/, and IMO a server shouldn't have one and shouldn't need one.
> And virt-manager can be run from a client, with authenticated access
> to the libvirt server, though I've generally not done that.
Yes, I tried that and it didn't work.
> If you don't want to bother with that, you'll need to learn 'virsh',
> which is the actual tool that libvirt uses to do almost everything.
That's what I'm using now. Is virsh what centos users are supposed to
use? The documentation on the xen project wiki seems to indicate that
xen users are supposed to use xl.
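For what it's worth, the two toolstacks overlap for day-to-day work. A rough side-by-side sketch (the guest name "domU1" and the config path are placeholders; this assumes the Xen4CentOS stack, where libvirt drives Xen through libxl):

```shell
virsh list --all               # libvirt: show all defined guests
xl list                        # xl: show running domains

virsh start domU1              # libvirt: boot a guest it has defined
xl create /etc/xen/domU1.cfg   # xl: boot a guest from a config file

virsh console domU1            # attach to the guest's console
xl console domU1

virsh shutdown domU1           # ask the guest to shut down cleanly
xl shutdown domU1
```

Mixing the two on the same guest is generally discouraged, because libvirt keeps its own record of domain state that xl knows nothing about.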
>> Do I have to set up shorewall (or the like) on dom0 to be able to handle
>> network access for guests? Would I need to create a bridge for every
>> guest to be able to handle them separately for firewalling purposes
>> because otherwise packets circumvent firewall rules by directly going
>> over the bridge? If so, why are bridges needed?
> You need to pick. One approach is to set up a bridged connection with
> one VM, with a second localized VLAN connection,
How do you make such a localised connection?
In this case, the VM with the firewall needs to have access to an
ethernet interface to do pppoe, a second interface to the LAN and to a
third one for a DMZ. The VM with the firewall will be on the same
physical host as the VMs in the DMZ. There will also be VMs in the LAN
on the same physical host.
This seems to require three bridges, and the firewall VM would need
access to all of them, unless the dom0 is doing some of the routing.
How do you enable access to several bridges for a VM?
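As far as I understand it, giving a domU several bridges is just a matter of listing several vifs in its config. A sketch of what the firewall guest might look like (the file name and the bridge names xenbr0/xenbr1/xenbr2 are assumptions; they have to match bridges that already exist in dom0):

```
# /etc/xen/firewall.cfg -- illustrative only
name   = "firewall"
memory = 512
vcpus  = 1
# one vif per network: WAN (for pppoe), LAN, DMZ
vif = [ 'bridge=xenbr0', 'bridge=xenbr1', 'bridge=xenbr2' ]
```

Inside the guest these show up as eth0, eth1 and eth2 in the order listed, and shorewall can then treat them as its net, loc and dmz zones.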
ATM, I'm trying to understand how this networking stuff with bridges works.
> and run shorewall or other firewalls on that VM to manage connections
> to the rest of the VMs. This leaves your bandwidth trapped at the
> capacity of that firewall VM, but it's not an uncommon solution,
> especially when running complex firewalls and/or proxies in small
Hm, I never thought of that. What kind of limit should I expect?
For external connections, the highest bandwidth is 1Gbit/s Ethernet.
So a limiting factor for internal connections is probably much more
important, like a VM accessing a database running in another VM. The
database will probably be so small that it can be kept mostly in memory,
so disk access won't slow things down much.
How much information does the host system have available about the VMs
for scheduling purposes? Like when there is a VM for
firewalling/routing, a VM with a database and another VM with an
application accessing the database, all three VMs need resources. Delay
any of them or give them resources at the wrong moment, and in the end
performance will be diminished.
Suppose each of the three VMs has one CPU assigned, and more CPUs are
physically available while other VMs happen to be idle. So three CPUs
are busy and another five CPUs remain idle so that they are available
just in case one of the idle VMs needs to do something? Or will they be
used to speed up the running VMs?
Are there situations in which overcommitment of CPUs is advisable? Like
when I have 8 CPUs and 4 VMs, I could assign 2 CPUs to each VM. But
when I can expect that 95% or so of the time 2 VMs will be idle while
the other 2 are running, wouldn't I be better off assigning 4 CPUs to
each VM? Or is this done automatically, and the number of CPUs
specified for a VM is only a minimum?
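As a data point, the guest config distinguishes between the vCPUs a domain starts with and the ceiling it may grow to, so you don't have to decide once and for all (the values here are made up):

```
# in the domU config file
vcpus    = 2    # vCPUs online at boot
maxvcpus = 4    # upper limit the domain may be grown to
```

```shell
# from dom0, bring more vCPUs online in a running guest
xl vcpu-set domU1 4
```

That only changes how many runnable vCPUs the guest has; which physical CPUs actually execute them is still up to the scheduler (or explicit pinning).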
> Whether you need bridges then depends on where your firewall is. If it
> lives on another host on your network, yes, your guests need bridges.
> If it's on a VM with two connections, as I described above, it's
> potentially much easier to set up on a single firewall VM. But
> migrating the firewall among multiple VM servers means establishing,
> and maintaining, a multiple VM server internal network, and if doing
> that, *THOSE* might mandate bridges.
Moving is another thing I haven't thought about yet. It's not a
requirement in this case, though it would be a good option to have.
How do you deal with memory overcommitment? Suppose I set up a VM that
does the firewalling and routing. Memory requirements for this are low.
I also want to use squid (2.7), with a fairly large cache and url
rewriting (which hasn't been ported to 3.x yet). That requires memory
(and file I/O).
I also need a file server.
So what do I do? Use one VM for the firewall, one for squid, one for a
file server that also provides squid with its cache? Or run the file
server directly on the host? Or run the firewall and squid in the same
VM and give this VM some more memory? Or not use the file server to
supply squid and keep the cache in the same VM squid runs in, perhaps
giving it even more memory?
I could also have the firewall in its own VM (which might be a good
idea due to security) and use another VM for the routing and to run
squid, an MTA and other basic services which are accessible from the
LAN only. Hm, this actually makes the most sense to me. But then, what
do I do with the web server?
What is good practice?
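On the memory side, Xen's ballooning gives a middle ground between those layouts: a guest can boot small and be grown later without a reboot. A sketch (the name "squid-vm" and the numbers are made up):

```
# in the domU config file
memory = 512     # MB at boot
maxmem = 2048    # ceiling the balloon driver may grow to
```

```shell
# from dom0, resize a running guest within that ceiling
xl mem-set squid-vm 1024
```

So one option is to give the squid VM a modest boot allocation and balloon it up only while the cache is hot.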
>> I would understand doing things like adding those guests that are
>> visible to the LAN only to the same bridge to have them all reachable
>> likewise. When doing that, it would seem to make sense to use a
>> different subnet for guests in the DMZ.
> It Depends(tm).
Depends on? ;)
>> All the documentation tells you many different things, none of them
>> work and it's totally confusing. Is there any /good/ documentation?
> I suggest deciding what you need to accomplish first. Do you have, or
> want to build, firewalls? Are you isolating DMZ hosts or public facing
> webservers that need heightened isolation and security?
Well, I don't really want to build firewalls; having one is merely a
requirement, and being able to do some traffic shaping can be nice. I'm
running a web server which is reachable from the outside (on a
non-default port, so it's not exactly public) and needs to be isolated,
and I'm running an MTA. I'm thinking about adding IMAPs so clients can
get their mail through that.
How isolated does a web server need to be? I'm thinking of adding some
game servers in their own VM later on, too. But CPU and memory are
limited resources ...
Since I need to start somewhere, what needs to be accomplished is
probably something like this:
- use dom0 as file server directly, or use a VM as file server?
- dns, router, squid, MTA, IMAPs, perhaps DHCP
- internal use (like distcc, a multi-user X server for clients to
  connect to, experimentation ...)
The server has 2x4 CPUs and 8GB of RAM. Unless something speaks
against it, I'd prefer to use dom0 as file server because it seems
easier to set up, with direct access to the storage volume.
As to resources:
| dom | RAM | CPUs | scheduling |
| dom0 | how do I assign memory here? | | ++ |
| dom1 | 512--768 MB | 1 CPU | - |
| dom2 | 2GB? | 1 CPU | - |
| dom3 | 2GB? | 1--3 CPUs | = |
| dom4 | 4GB? | 3--5 CPUs | = |
| 5 | 9GB | 8 CPUs | == |
The file server would run on dom0. What should I set for dom0_mem?
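From what I've read, dom0_mem is a Xen hypervisor boot parameter, set on the kernel line in grub; pinning both a minimum and maximum stops dom0 from being ballooned down under guest pressure. Something like this (1024M is just an example figure, to be sized for the file server):

```
# /boot/grub/grub.conf -- kernel line for the Xen hypervisor
kernel /xen.gz dom0_mem=1024M,max:1024M
```

With dom0's memory fixed, it also seems advisable to turn off autoballooning in /etc/xen/xl.conf (autoballoon=0), so xl stops trying to take memory from dom0 when creating guests.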
Dom4 doesn't need to be up all the time. From what I've been reading,
you can overcommit memory, and it's a bad idea to overcommit CPUs.
Is it better to give VMs less memory (to some point) and let them use
their swap files, or is it better to give them a bit more and
overcommit in total (to some point) so that dom0 may swap?
It also seems that scheduling means that VMs /can/ get more time when
they need it and don't get it when not. How does that interfere with
The VMs only need to sustain a very low minimum throughput because
usage will be more like short-term spikes on one VM or another. Would
it make sense to overcommit CPUs in this scenario?
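If it helps, the credit scheduler lets you bias exactly that kind of spiky sharing: each domain has a weight (default 256, a relative share that only matters under contention) and an optional cap (hard limit in percent of one CPU). Illustrative values:

```shell
xl sched-credit -d firewall -w 512   # twice the default share when CPUs are contended
xl sched-credit -d dom4 -c 200       # never more than 2 CPUs' worth of time
```

Idle domains cost almost nothing under this scheme, which is why overcommitting vCPUs for rarely-concurrent workloads is usually tolerable as long as the weights reflect the priorities.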
Does this make sense or should I do things differently?
Knowledge is volatile and fluid. Software is power.