I'm wondering what others who have already used the Xen kernel for DomUs and the (free) VMware Server can say about the comparison in actual day-to-day operation. I've only started playing with Xen on CentOS 5, and I've been running VMware Server only on Win2k3 servers so far, so I'm missing direct comparability. I find that VMware is highly reliable and flexible, but needs a good amount of RAM and CPU to work nicely. Xen seems to be less reliable, but more responsive on low resources. It's also more flexible when you want to move a VM around, as it is all stored in a single file and the config file takes only a few options.
Xen is supposed to give "almost the same performance" as if not virtualized, but I get confusing figures from the Virtual Machine Manager. On my test machine the Dom0 takes about 9% of the CPU with one VM when both are mostly idle. Top says that 3% of that is taken by the X server; most of the rest is shared by two Python processes (which belong to Xen, I suppose) and xenstored. (The test machine is a single Athlon 2500+ or thereabouts; I'm not sure about the exact speed.) But regularly one of the Python processes grabs another 10-15%, so the whole CPU utilization goes up to around 20%, even though the single VM is idle at that time and shows 0.0x%. So, even when idle, the whole CPU utilization zigzags between 10 and 20%. I found that when I close the VM console it drops to 2-3%, so that Python process is obviously related to the VM console. The interesting thing is that when I then reopen the console from the VM manager it keeps going at about 3% and doesn't go up to the earlier 9%. But it still zigzags between 3 and 11% then. All the figures were taken from the VM manager; the %us count in top seems to stay at 3% all the time.
By "less reliable" I refer to a filesystem problem that sometimes occurs after rebooting/shutdown. Sometimes after rebooting there either is no filesystem file anymore (which is not recoverable, of course), or the kernel panics with a filesystem problem on first boot or reboot, but may boot just fine after a second or third try. The problem of the filesystem file being simply gone seems to happen only when I reboot/shutdown from within the console with the shutdown/reboot command. For instance, it can happen if I let it do the first reboot after I installed the OS on the VM. When I use "xm shutdown" or CTRL+ALT+DEL or hit the shutdown button in the Virtual Machine Manager it does not *seem* to happen (maybe I just didn't shut down often enough to see it happen there as well). I've already lost several testing VMs because of this. I wonder if this problem might happen because I use the option of not allocating all space in the filesystem file right away. I also wonder if performance might be better if it didn't need to grow.
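(To illustrate the option I mean: as far as I understand it, a fully allocated image versus a sparse one that grows on demand would be created roughly like this; path and size are just examples:
$ dd if=/dev/zero of=/var/lib/xen/images/vm1.img bs=1M count=4096
$ dd if=/dev/zero of=/var/lib/xen/images/vm1.img bs=1M seek=4096 count=0
The first writes out all 4 GB at once; the second only sets the file size, so blocks get allocated as the guest writes to them.)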
So, are there recommended best practices for Xen VMs like "always use partitions", "always allocate the whole space at once", "never shut down from the console window", "never use virtual machine manager", or some such?
Kai
On 10/15/07, Kai Schaetzl maillists@conactive.com wrote:
Hi Kai,
I have read that Xen is supposed to take only a few percent of overhead without virtualization support in the CPU; in my experience this is not entirely true. I have experienced good CPU performance but degraded I/O performance on Xen compared to the physical environment, although sufficient hardware resources were provided. You can read more about other tests done in this regard at this URL:
http://lists.xensource.com/archives/html/xen-users/2007-04/msg00512.html
I've shut down Xen 2 and 3 from within the console (shutdown -h now), with "xm shutdown ..." or "xm destroy ...", and I never got a crashed filesystem; all environments are on LVM partitions on Intel and AMD hardware. I don't use the option of not allocating all space in the filesystem file right away, so it could be this, or the image file, or why not the combination? ;) Perhaps you can try out some combinations and let us know how this turns out.
Regards, Nicolas
Nicolas Sahlqvist wrote on Mon, 15 Oct 2007 14:26:28 +0200:
I've shut down Xen 2 and 3 from within the console (shutdown -h now), with "xm shutdown ..." or "xm destroy ...", and I never got a crashed filesystem; all environments are on LVM partitions on Intel and AMD hardware.
This is somewhat ambiguous. Do you use an LVM partition as the filesystem "container" for a VM or do you place filesystem files for DomU on a Dom0 host LVM partition?
Kai
On Mon, 2007-10-15 at 13:32 +0200, Kai Schaetzl wrote:
I'm wondering what others who have already used the Xen kernel for DomUs and the (free) VMware Server can say about the comparison in actual day-to-day operation. I've only started playing with Xen on CentOS 5, and I've been running VMware Server only on Win2k3 servers so far, so I'm missing direct comparability.
It depends on what you want to run as a domU. Paravirtualization is very fast (e.g. for running domU CentOS 4/5 kernels). On the other hand, some devices are very slow if you use Xen HVM for running systems that do not have a kernel that functions as a Xen domU. As long as we don't have paravirtualized network/display drivers for those systems, network/graphic performance will not be very good.
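For reference, a paravirtualized CentOS domU needs only a small config file; a minimal sketch (names and paths are just examples) looks roughly like this:
name = "centos5-test"
memory = 256
bootloader = "/usr/bin/pygrub"
disk = [ 'file:/var/lib/xen/images/centos5-test.img,xvda,w' ]
vif = [ 'bridge=xenbr0' ]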
So, what do you plan to run as a virtual machine?
I found that when I close the VM console it drops to 2-3%, so that Python process is obviously related to the VM console. The interesting thing is that when I then reopen the console from the VM manager it keeps going at about 3% and doesn't go up to the earlier 9%. But it still zigzags between 3 and 11% then. All the figures were taken from the VM manager; the %us count in top seems to stay at 3% all the time.
Did you try to connect to the VM virtual framebuffer with vncviewer, rather than virt-manager? What loads do you get then?
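(Something like "$ vncviewer localhost:0" should do, assuming the domU's virtual framebuffer is the first VNC display on the dom0; each guest's framebuffer gets its own display/port.)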
I wonder if this problem might happen because I use the option of not allocating all space in the filesystem file right away. I also wonder if performance might be better if it didn't need to grow.
I have never seen this problem on production machines. But I don't use virt-manager, so I am not sure at what point it rewrites the domain configuration files after the installation.
-- Daniel
Daniel de Kok wrote on Mon, 15 Oct 2007 14:45:22 +0200:
It depends on what you want to run as a domU. Paravirtualization is very fast (e.g. for running domU CentOS 4/5 kernels). On the other hand, some devices are very slow if you use Xen HVM for running systems that do not have a kernel that functions as a Xen domU. As long as we don't have paravirtualized network/display drivers for those systems, network/graphic performance will not be very good.
So, what do you plan to run as a virtual machine?
Strictly CentOS 5: CentOS 5 host and CentOS 5 guests. The testing machine doesn't have a CPU with virtualization support, so they will run paravirtualized. The most likely production machine I want to put Xen on has an X2 from last year (an X2 3800+ EE, because of temperature in a 1U box); I'm not sure whether it supports virtualization, but I plan to do paravirtualization anyway. Usage will be for web/mail server related tasks (especially heavy MySQL usage with several GB databases), no desktop and no fileserver tasks. There will be only two or three DomUs.
Did you try to connect to the VM virtual framebuffer with vncviewer, rather than virt-manager? What loads do you get then?
I didn't know I could do that. I haven't enabled vncserver on the host machine as I couldn't really get it going; if I want to VNC in, I connect to the vino-server that automatically comes with Gnome. Would this allow me to connect to the VM framebuffer as well? On the production machine I wouldn't be using/installing Gnome at all. But for getting acquainted with the stuff it's much easier to install and manage a new VM with the virtual machine manager.
Kai
Kai,
----- "Kai Schaetzl" maillists@conactive.com wrote:
Strictly CentOS 5: CentOS 5 host and CentOS 5 guests
[...]
Usage will be for web/mail server related tasks (especially heavy MySQL usage with several GB databases), no desktop and no fileserver tasks.
Sorry for butting in with some ideas that might be considered off-topic for your particular discussion, but you sound like a perfect customer for OS Virtualization. Although running OpenVZ (or Linux-VServer) would mean going outside of the official CentOS repositories (and upstream)... since you want to run Linux on Linux with no need to run different kernels... and are seeking performance... I'd recommend you give the OS Virtualization guys (aka containers, aka security contexts) a try.
If you'd like to see a comparison between Xen and OpenVZ, here's a completely independent white paper for you in PDF format:
http://www.hpl.hp.com/techreports/2007/HPL-2007-59.pdf
There will be only two or three DomUs.
That's a shame. With OS Virtualization, you can easily fit more virtual machines than you can with machine virtualization. Again, OS Virtualization isn't the better solution for every case, but where appropriate, it kicks ass.
On the production machine I wouldn't be using/installing Gnome at all. But for getting acquainted with the stuff it's much easier to install and manage a new VM with the virtual machine manager.
While there has been some work on putting OpenVZ support into virt-manager, those patches haven't quite made it upstream... but for comparison, here's how you create a virtual machine (aka Virtual Private Server or Virtual Environment) in OpenVZ:
$ vzctl create {VEID} --hostname {FQDN} --ipadd {ip address} --ostemplate centos-4 --config vps.basic
$ vzctl set {VEID} --name {name} --nameserver {nameserver ip address} --searchdomain {desired search domainname} --diskspace {min:max} --userpasswd root:{password} --onboot yes --save
$ vzctl start {name}
Now you have a fully functioning server you can reach just like a Xen virtual machine or a real machine. For --ostemplate (which is basically a .tar.gz form of the install media) you can use a pre-created OS Template file (see: http://openvz.org/download/template/cache/) or build your own with vzpkgcache and an OS Template Metadata package.
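For example, fetching a pre-created template into the template cache directory is just (the exact file name will differ; check the download page for current ones):
$ wget -P /vz/template/cache http://openvz.org/download/template/cache/centos-4-i386-default.tar.gz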
There is an additional access method available from the host machine:
$ vzctl enter {name}
Get a VPS installed with all of the software you want on it and configured as desired, and you can easily clone it or make it into an OS Template from which other VPSes can be installed:
1) Stop the VPS: $ vzctl stop {name}
2) cd into the VPS' private directory: $ cd /vz/private/{VEID}
3) tar/gzip up the private dir and place it in the template cache directory: $ tar -cvzf /vz/template/cache/my-new-os-template.tar.gz .
4) Now use vzctl create to create as many machines as you like, giving my-new-os-template as the parameter for --ostemplate.
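For example (the VEID 102 is arbitrary):
$ vzctl create 102 --ostemplate my-new-os-template --config vps.basic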
VPS host name and network configuration are stored in a separate config file on the host machine rather than being embedded inside the VPS' directory space.
Here's a screencast I did on installing OpenVZ, creating a virtual machine, and migration: http://www.montanalinux.org/openvz-intro-video.html
Here's a short screencast that shows live migration of a desktop VPS: http://www.montanalinux.org/openvz-live-migration-demo.html
Feel free to email me with any questions or comments... and flames too if desired. :)
TYL,
On Mon, 2007-10-15 at 14:50 -0400, Scott Dowdle wrote:
Sorry for butting in with some ideas that might be considered off-topic for your particular discussion, but you sound like a perfect customer for OS Virtualization. Although running OpenVZ (or Linux-VServer) would mean going outside of the official CentOS repositories (and upstream)... since you want to run Linux on Linux with no need to run different kernels... and are seeking performance... I'd recommend you give the OS Virtualization guys (aka containers, aka security contexts) a try.
According to their (OpenVZ) installation guide, you still need to turn off SELinux. If you will be using virtualization to run net-facing daemons, I'd think twice before deploying OpenVZ. Besides that, it provides less isolation: every virtual machine runs the same kernel, so a kernel vulnerability may be enough to break out of a virtual machine.
Besides that, as you already mentioned: with OpenVZ you are on your own, it's not CentOS anymore.
-- Daniel
Daniel,
----- "Daniel de Kok" danieldk@pobox.com wrote:
According to their (OpenVZ) installation guide, you still need to turn off SELinux. If you will be using virtualization to run net-facing daemons, I'd think twice before deploying OpenVZ.
I'm not trying to bash SELinux, but I have to wonder what percentage of CentOS/RHEL/Fedora users have SELinux on to begin with? I'd like to use it myself, but I just haven't gotten around to it... nor have I made it a priority. :(
The vast majority of Linux distributions do not include SELinux and ARE being deployed with net-facing daemons. One has to decide if it is an acceptable risk. For me it is... knock on wood.
I do appreciate you bringing up the point though. It would be one of the advantages of Xen over OpenVZ. With both, there are a number of advantages and disadvantages that must be considered.
Besides that, it provides less isolation: every virtual machine runs the same kernel, so a kernel vulnerability may be enough to break out of a virtual machine.
While we must always be vigilant, I'm not aware of any cases where anyone has broken out of an OpenVZ or Linux-VServer VPS into the host node or into other VPSes... and OpenVZ comes from Virtuozzo, which has been around for about 7 years now... and Linux-VServer has been around for a long time too. In fact, Xen is much newer. :)
I'm not saying it could never happen because anything is possible. As you may know there was a Xen vulnerability reported recently where a grub configuration line could allow access to the hypervisor from the guest (or something like that).
The person I was addressing this to was talking about running CentOS on CentOS so I'm guessing that they'd run the same exact kernel inside every Xen Virtual Machine anyway.
Besides that, as you already mentioned: with OpenVZ you are on your own, it's not CentOS anymore.
While I'd like to see CentOS-sanctioned OpenVZ packages, I have to ask just how many people have third-party packages on their system? I use DAG quite a bit. I'd really like to see the CentOS team adopt OpenVZ and add it to the Addons or Extras repo (the latter would be more appropriate), but I'm not sure if they would be interested. The OpenVZ folks make it darn easy to install, with signed packages in their own repo.
I made the mistake of asking a kernel question in the #centos IRC channel... and when I revealed that I was running an OpenVZ kernel... I wasn't kicked but I was sternly told that it would be off topic and not tolerated. :(
TYL,
On Mon, 15 Oct 2007, Scott Dowdle wrote:
I'm not trying to bash SELinux, but I have to wonder what percentage of CentOS/RHEL/Fedora users have SELinux on to begin with? I'd like to use it myself, but I just haven't gotten around to it... nor have I made it a priority. :(
I rarely find a need to turn it off, and when I do, it's because I've either installed things that didn't come from the distro, or because I've configured something to act far, far differently than the default. Like you, I'm too lazy to figure out how to tweak SELinux, but I find that as long as I keep to "normal" installs, I have no need to tweak... at least on CentOS. Fedora is another matter. :)
On Mon, 2007-10-15 at 16:18 -0400, Scott Dowdle wrote:
I'm not trying to bash SELinux, but I have to wonder what percentage of CentOS/RHEL/Fedora users have SELinux on to begin with? I'd like to use it myself, but I just haven't gotten around to it... nor have I made it a priority. :(
The vast majority of Linux distributions do not include SELinux and ARE being deployed with net-facing daemons. One has to decide if it is an acceptable risk. For me it is... knock on wood.
It's a matter of diving into it, just like we all had to dive into UNIX/Linux once. I can really recommend "SELinux by Example" for getting into SELinux.
I do appreciate you bringing up the point though. It would be one of the advantages of Xen over OpenVZ. With both, there are a number of advantages and disadvantages that must be considered.
Agreed.
I'm not saying it could never happen because anything is possible. As you may know there was a Xen vulnerability reported recently where a grub configuration line could allow access to the hypervisor from the guest (or something like that).
That was a vulnerability in pygrub. Just for clarity's sake: pygrub runs in dom0, and is used to retrieve the kernel and initrd images from the domU machine being booted based on its GRUB configuration (this is needed to bootstrap the VM). It was not a vulnerability where some program can break out of a domU and do stuff in dom0.
Doing such a thing is far easier when the virtual machine is running under the same kernel as the host.
While I'd like to see CentOS-sanctioned OpenVZ packages, I have to ask just how many people have third-party packages on their system? I use DAG quite a bit.
For that exact reason we advise people to use the yum-priorities plugin. It prevents the package manager from replacing CentOS packages with packages from a third-party repo.
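Setting it up is short; on CentOS 5 it is roughly:
$ yum install yum-priorities
Then add priority=1 to the [base], [updates], etc. sections in /etc/yum.repos.d/CentOS-Base.repo, and a higher number (= lower priority) to the third-party repo files.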
I'd really like to see the CentOS team adopt OpenVZ and add it to the Addons or Extras repo (the latter would be more appropriate), but I'm not sure if they would be interested.
I think we'd be interested in including OS-level virtualization as an option when:
- There are patches for the kernel versions that CentOS uses, and it doesn't change the kernel too much besides implementing that technology (so that it is easy to maintain it for future kernel updates).
- It is feasible to support it for a few years on the kernels that CentOS uses, and someone is willing to maintain it for such periods.
- The solution should be stable, secure, and performant.
- The solution allows system administrators to keep SELinux enabled on the host system, and does not restrict SELinux usage on guest systems.
Remember that we potentially have to support new additions for years, ideally until 2014 for CentOS 5. If someone thinks one solution can fulfill these requirements, please feel free to discuss it on this list.
I made the mistake of asking a kernel question in the #centos IRC channel... and when I revealed that I was running an OpenVZ kernel... I wasn't kicked but I was sternly told that it would be off topic and not tolerated. :(
We can't support what we don't provide, and you would be amazed how often people ask questions on #centos about stuff that we don't provide ;).
-- Daniel
Daniel,
----- "Daniel de Kok" daniel@centos.org wrote:
It's a matter of diving into it, just like we all had to dive into UNIX/Linux once. I can really recommend "SELinux by Example" for getting into SELinux.
Ok, I'll look into it when I get a chance.
Doing such a thing is far easier when the virtual machine is running under the same kernel as the host.
Understood... that is a logical assumption... but also take into account that OpenVZ (including its commercial sibling, SWsoft's Virtuozzo) has been deployed by tens of thousands of users and is the #2 virtualization technology in use today... according to the OpenVZ project manager. I don't have any hard data I can point you to as proof, but that is my understanding. #1 would be VMware, of course. My point is that it has been tested, audited, and revised over its history with regard to security... but it is obviously an ongoing process.
Linux-VServer was adopted by the OLPC developers and is the key component of the Bitfrost Security Framework... and as a result will be deployed in millions upon millions of laptops.
I think we'd be interested in including OS-level virtualization as an option when:
- There are patches for the kernel versions that CentOS uses, and it doesn't change the kernel too much besides implementing that technology (so that it is easy to maintain it for future kernel updates).
The OpenVZ project provides kernels patched against the RHEL4 and RHEL5 kernel source... so I think what they are doing is pretty darn close to what you are asking... and I believe they plan on maintaining those kernels for the life of RHEL4 and RHEL5?!? (see more below)
- The solution allows system administrators to keep SELinux enabled on the host system, and does not restrict SELinux usage on guest systems.
I'm not sure if there is a technical reason that OpenVZ won't work with SELinux. I'm guessing that it is like so many other third-party packages that say to turn off SELinux... simply because they want to avoid the support complexity of figuring out how to make it work and writing policies.
Remember that we potentially have to support new additions for years, ideally until 2014 for CentOS 5. If someone thinks one solution can fulfill these requirements, please feel free to discuss it on this list.
As long as SWsoft has Virtuozzo customers using RHEL4 and RHEL5, I'm assuming it will be supported by them and also available in OpenVZ, but I don't think I can find anything in writing that promises that. SWsoft also holds a controlling interest in the Parallels company.
I think OS Virtualization / Containers will be less of an issue with upcoming major releases, as I'm very sure that container features will be a stock part of the mainline kernel by that time. In fact, Andrew Morton says in his kernel speeches that the only thing he can predict coming over the next year or two is container features... but who knows how that will pan out? I'd like to see Red Hat officially add OpenVZ support to a RHEL 5 Update X, but the only statements I've seen from Red Hat executives are that they do plan to have some form of container-based virtualization as an option for RHEL6.
We can't support what we don't provide, and you would be amazed how often people ask questions on #centos about stuff that we don't provide ;).
Yeah, understood. I was just asking about kernel modules and I doubt the question was specific to my OpenVZ kernel... but since I had mentioned that I was using an OpenVZ kernel, the topic police kicked in. :) I find #centos-social much more friendly... but I don't hang out there much because I don't run into CentOS problems very often.
TYL,
On Mon, 2007-10-15 at 18:36 -0400, Scott Dowdle wrote:
Understood... that is a logical assumption... but also take into account that OpenVZ (including its commercial sibling, SWsoft's Virtuozzo) has been deployed by tens of thousands of users and is the #2 virtualization technology in use today... according to the OpenVZ project manager. I don't have any hard data I can point you to as proof, but that is my understanding. #1 would be VMware, of course. My point is that it has been tested, audited, and revised over its history with regard to security... but it is obviously an ongoing process.
That doesn't really matter. Even if OpenVZ were proven to be completely correct, it is still part of the kernel, which every now and then has vulnerabilities.
- The solution allows system administrators to keep SELinux enabled on the host system, and does not restrict SELinux usage on guest systems.
I'm not sure if there is a technical reason that OpenVZ won't work with SELinux. I'm guessing that it is like so many other third-party packages that say to turn off SELinux... simply because they want to avoid the support complexity of figuring out how to make it work and writing policies.
I see more obstacles: how would you modify/add policy from a virtual machine, without affecting that of other VMs or the host machine? What about security context collisions between virtual machines?
As long as SWsoft has Virtuozzo customers using RHEL4 and RHEL5, I'm assuming it will be supported by them and also available in OpenVZ, but I don't think I can find anything in writing that promises that.
We need to be sure that patches can be maintained for a longer period. So, ideally a maintainer of such packages has an understanding of the code/patches. In the worst case, the maintainer could update the patches to ensure that they continue to work with our kernels.
I think OS Virtualization / Containers will be less of an issue with upcoming major releases, as I'm very sure that container features will be a stock part of the mainline kernel by that time. In fact, Andrew Morton says in his kernel speeches that the only thing he can predict coming over the next year or two is container features... but who knows how that will pan out?
I guess that we have to wait and see :).
-- Daniel
Thanks for the info, but it's really that I want to use unchanged CentOS 5. One of the reasons for using CentOS 5 is the easy maintainability, and I would lose this when going to a system where I need to replace or patch the kernel.
Kai
Kai,
----- "Kai Schaetzl" maillists@conactive.com wrote:
Thanks for the info, but it's really that I want to use unchanged CentOS 5. One of the reasons for using CentOS 5 is the easy maintainability, and I would lose this when going to a system where I need to replace or patch the kernel.
If you were addressing this to me and my mentioning of using OpenVZ's third-party repo... I agree with you completely regarding the desire to retain the ease of maintenance.
Just to clarify, the OpenVZ folks do maintain several kernel trees, including those based on the RHEL4 and RHEL5 kernels... and (to the best of my knowledge) plan to do so for some time to come, although I haven't seen any promises in writing. See my previous post to this mailing list for more info.
While the OpenVZ project doesn't release a new build every time there is a CentOS kernel update, they do release quite frequently and incorporate all available security and bug fixes within a reasonable timeframe of their releases... although there is plenty of room for disagreement there on the word "reasonable". It would be another reason to prefer a CentOS-sanctioned OpenVZ kernel. :)
TYL,
Daniel,
what's your recommendation for the filesystem "underlayer" for a Xen VM? A partition, a file, a sparse file?
Kai
Hi Kai,
On Tue, 2007-10-16 at 21:09 +0200, Kai Schaetzl wrote:
what's your recommendation for the filesystem "underlayer" for a Xen VM? A partition, a file, a sparse file?
I use files; they are easy to manage. They ease maintenance, since you can move them around, easily create new ones, etc. But I guess that raw partitions or LVM volumes are faster.
Files are good enough for our purposes, but the VMs don't do heavy disk I/O.
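(In the domU config a file-backed disk is just a single line, roughly like this, with an example path:
disk = [ 'tap:aio:/var/lib/xen/images/vm1.img,xvda,w' ]
and moving the guest to another host is a matter of copying that one file plus the config file.)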
-- Daniel
Daniel de Kok wrote on Tue, 16 Oct 2007 21:25:07 +0200:
I use files; they are easy to manage. They ease maintenance, since you can move them around, easily create new ones, etc. But I guess that raw partitions or LVM volumes are faster.
Files are good enough for our purposes, but the VMs don't do heavy disk I/O.
Thanks for the info. I have now added the Xen stuff to the possible target machine, and there the performance situation is much nicer. I'm now testing installation on partitions. I agree that handling of the file-based VMs is very nice.
Kai
On Tue, 2007-10-16 at 21:09 +0200, Kai Schaetzl wrote:
Daniel,
what's your recommendation for the filesystem "underlayer" for a Xen VM? A partition, a file, a sparse file?
I've been using LVM volumes for ages.
You only have a limited number of partitions, so partitions don't scale.
I only use files if I'm building a virtual machine that I want to give to someone who isn't capable of kickstarting a system.
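For the record, setting one up is just something like this (the volume group name is an example):
$ lvcreate -L 8G -n vm1 vg0
and in the domU config:
disk = [ 'phy:/dev/vg0/vm1,xvda,w' ]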
Kris Buytaert wrote on Wed, 17 Oct 2007 17:40:49 +0200:
I've been using LVM volumes for ages.
You only have a limited number of partitions, so partitions don't scale.
I only use files if I'm building a virtual machine that I want to give to someone who isn't capable of kickstarting a system.
Yeah, I meant to include LVM volumes in the general meaning of "partitions". I wanted to know if you use/prefer/recommend partitions/LVM over files. Obviously, files are so much easier to maintain. I now have a test VM running on an LVM volume and am now wondering what methods exist to move it around, for instance what Xen itself provides (I read there's a migration feature, but I didn't read up on that yet).
Kai
On Wed, 2007-10-17 at 20:31 +0200, Kai Schaetzl wrote:
Yeah, I meant to include LVM volumes in the general meaning of "partitions". I wanted to know if you use/prefer/recommend partitions/LVM over files. Obviously, files are so much easier to maintain. I now have a test VM running on an LVM volume and am now wondering what methods exist to move it around, for instance what Xen itself provides (I read there's a migration feature, but I didn't read up on that yet).
I find files much harder to maintain... I don't have a fixed set of tools that can grow or shrink them, that can span them over multiple physical disks if my initial disk becomes too small, or that can easily snapshot them.
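With LVM those are one-liners, roughly like this (volume names are examples; the filesystem inside still has to be resized separately):
$ lvextend -L +2G /dev/vg0/vm1
$ lvcreate -s -L 1G -n vm1-snap /dev/vg0/vm1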
Also... an inexperienced sysadmin will easily start removing big files that clutter his filesystem. He'll think twice when it's a logical volume that is in use :)