hi,
Just wondering if people had started using i386 Xen DomUs on an x86_64 dom0 machine with 5.1 as yet? And just wondering what their experiences have been.
On Mon, 2007-12-10 at 22:34 +0000, Karanbir Singh wrote:
Just wondering if people had started using i386 Xen DomUs on an x86_64 dom0 machine with 5.1 as yet? And just wondering what their experiences have been.
As far as I understand that feature was removed from the 5.1 feature list at the last moment by upstream?
-- Daniel
Daniel de Kok wrote:
On Mon, 2007-12-10 at 22:34 +0000, Karanbir Singh wrote:
Just wondering if people had started using i386 Xen DomUs on an x86_64 dom0 machine with 5.1 as yet? And just wondering what their experiences have been.
As far as I understand that feature was removed from the 5.1 feature list at the last moment by upstream?
Humm, it does still work sometimes. I seem to have a 50/50 hit rate on that. If the install works fine, then the VM works fine as well; however, in a lot of cases the kernel will crash and backtrace in the installer.
Just wondering if anyone else had seen something similar?
Me neither, I have never seen an i386 VM work on an x86_64 dom0, even on CentOS 5.1. I tried several times to install, but every time the kernel crashed.
I don't think this feature is working well in 5.1.
On 12/11/07, Karanbir Singh mail-lists@karan.org wrote:
Daniel de Kok wrote:
On Mon, 2007-12-10 at 22:34 +0000, Karanbir Singh wrote:
Just wondering if people had started using i386 Xen DomUs on an x86_64 dom0 machine with 5.1 as yet? And just wondering what their experiences have been.
As far as I understand that feature was removed from the 5.1 feature list at the last moment by upstream?
Humm, it does still work sometimes. I seem to have a 50/50 hit rate on that. If the install works fine, then the VM works fine as well; however, in a lot of cases the kernel will crash and backtrace in the installer.
Just wondering if anyone else had seen something similar?
-- Karanbir Singh : http://www.karan.org/ : 2522219@icq
_______________________________________________
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt
Greetings,
I see these problems with Xen... and many people are stating that they are running CentOS on CentOS... ie Linux on Linux virtualization... so I thought I'd pipe up and mention OpenVZ again. It does Linux on Linux virtualization well and allows for i386 guests on x86_64 hosts just fine.
On a dual quad-core Xeon with 16GB of RAM (4GB of swap) I asked the vzsplit command how many machines it thinks my hardware is capable of. Here's the output:
[root@comp2 ~]# vzsplit -n 9999999
On node with 20114 Mb of memory (RAM + swap)
9999999 VEs can not be allocated
The maximum allowed value is 3795
On a machine with 2GB of RAM and 4GB of swap:
[root@backup1 root]# vzsplit -n 9999999
On node with 6119 Mb of memory (RAM + swap)
9999999 VEs can not be allocated
The maximum allowed value is 639
Of course, there are some situations where OpenVZ (ie OS Virtualization) isn't suitable... but for the vast majority of common server tasks, it is. I don't claim you should try that many virtual machines on a single host node but it just goes to show you the density differences possible between Xen and OpenVZ, eh? :)
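For what it's worth, the density figure above can be sanity-checked with a little arithmetic, and vzsplit can also be pointed at a realistic count to generate a sample container config. A minimal sketch, assuming an OpenVZ host node; the container count, config name, and container ID below are hypothetical, not from Scott's setup:

```shell
# Illustrative only; these would run on an OpenVZ host node:
#   vzsplit -n 10 -f split10      # writes /etc/vz/conf/ve-split10.conf-sample
#   vzctl create 101 --config split10
# Sanity check on the quoted numbers: 20114 MB of RAM+swap spread over
# a maximum of 3795 containers is roughly a 5 MB floor per container.
echo $(( 20114 / 3795 ))
```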
TYL,
Scott Dowdle wrote:
Greetings,
I see these problems with Xen... and many people are stating that they are running CentOS on CentOS... ie Linux on Linux virtualization... so I thought I'd pipe up and mention OpenVZ again. It does Linux on Linux virtualization well and allows for i386 guests on x86_64 hosts just fine.
On a dual quad-core Xeon with 16GB of RAM (4GB of swap) I asked the vzsplit command how many machines it thinks my hardware is capable of. Here's the output:
[root@comp2 ~]# vzsplit -n 9999999
On node with 20114 Mb of memory (RAM + swap)
9999999 VEs can not be allocated
The maximum allowed value is 3795
On a machine with 2GB of RAM and 4GB of swap:
[root@backup1 root]# vzsplit -n 9999999
On node with 6119 Mb of memory (RAM + swap)
9999999 VEs can not be allocated
The maximum allowed value is 639
Of course, there are some situations where OpenVZ (ie OS Virtualization) isn't suitable... but for the vast majority of common server tasks, it is. I don't claim you should try that many virtual machines on a single host node but it just goes to show you the density differences possible between Xen and OpenVZ, eh? :)
TYL,
It would really suck to have 3795 "virtual machines" die all at the same time from a single kernel panic.
Christopher,
----- "Christopher G. Stach II" cgs@ldsys.net wrote:
It would really suck to have 3795 "virtual machines" die all at the same time from a single kernel panic.
Yes, absolutely it would. I use the OpenVZ kernels that are based on the RHEL4 and RHEL5 kernels and I haven't had any problems with them... just like I haven't had any problems with the stock RHEL4 and RHEL5 kernels... nor CentOS kernels.
I usually end up rebooting host node machines because of kernel upgrades... so my machines don't get a chance to have longish uptimes... but one remote colocation machine I have for hobby stuff currently has an uptime of 106 days. It has 7 VPSes on it and they are fairly fat, as they all run a full set of services. I know I've been running that machine for close to 2 years now... and if I remember correctly it started out with CentOS 4.0. I've upgraded to each release (on the host node and the VPSes) and am currently at CentOS 4.5. I look forward to 4.6.
Here's what they look like (IP addresses and hostnames obscured):
[root@hn ~]# vzlist
      VEID      NPROC STATUS  IP_ADDR         HOSTNAME
       101         53 running xx.xx.xx.xx     vps101.localdomain
       102         44 running xx.xx.xx.xx     vps102.localdomain
       103         44 running xx.xx.xx.xx     vps103.localdomain
       104         32 running xx.xx.xx.xx     vps104.localdomain
       105        322 running xx.xx.xx.xx     vps105.localdomain
       106         32 running xx.xx.xx.xx     vps106.localdomain
       107         29 running xx.xx.xx.xx     vps107.localdomain
Looking at the number of processes, can you tell which VPS is running Zimbra? :)
6 of the 7 VPSes are CentOS and the remaining 1 is Debian.
Speaking of uptimes, I have a "legacy" machine at work running Linux-VServer on a 2.4.x series kernel. It had the longest uptime of any machine I've had... and was well over 400 days... when a power outage that outlasted its UPS took it down. That particular machine runs three VPSes that are mail relay/frontends and they get pounded... so that uptime is notable.
So, my experience has been that physical failures and power failures (although pretty rare) are more common than kernel panics that take down all of my virtual machines.
TYL,
Scott Dowdle wrote:
Of course, there are some situations where OpenVZ (ie OS Virtualization) isn't suitable... but for the vast majority of common server tasks, it is. I don't claim you should try that many virtual machines on a single host node but it just goes to show you the density differences possible between Xen and OpenVZ, eh? :)
Apart from mass-scale hosting solutions, I have yet to see a role where OpenVZ actually provided a better all-around VM solution than Xen. Even the management tools and the developer support behind Xen far outweigh those behind OpenVZ.
And for those mass hosting solutions, a bit of security-minded setup would remove the need for this sort of userspace virtualization anyway (in a lot of the cases).
Karanbir,
----- "Karanbir Singh" mail-lists@karan.org wrote:
Apart from mass-scale hosting solutions, I have yet to see a role where OpenVZ actually provided a better all-around VM solution than Xen.
Depends on your definition of better. One that works (for example the discussion you are having now about problems running i386 guests on x86_64 hosts) might fall into that. :)
But seriously, it's all about meeting the needs of the users... and there is a large variety of virtualization needs out there... and not a single, one size fits all best solution. I'm not trying to badmouth Xen and I'd appreciate it if people didn't badmouth other solutions either.
There are uses where Xen is much better suited and OpenVZ isn't even a viable option. But there are other cases where OpenVZ is a better fit, especially with regard to density and scalability. OpenVZ is also very attractive in those situations where you want to isolate a single or a small number of services... although the vast majority of my deployments have a full set of services.
Even the management tools and the developer support behind Xen far outweigh those behind OpenVZ.
I'm not sure what you mean by that. OpenVZ comes from Virtuozzo, which has been out for over 6 years now and has thousands (if not tens of thousands) of deployments.
The OpenVZ developers (along with a few from IBM and Google, mostly) are currently working on getting "control group" features into the mainline kernel... and that is expected to happen within the next 12-18 months. Who knows how the mainline implementation will differ from stock OpenVZ?
So far as management tools go, I'm wondering what management tools you use for Xen. The only one I've really tried was Virtual Machine Manager, and prior to the most recent release in 5.1 it couldn't even START a virtual machine. I've tested out XenSource's management solution, and while it has a few more features than Virtual Machine Manager, there still isn't much there.
Given the 20-ish resource parameters provided by OpenVZ and the vzctl command, with which all of those resources can be dynamically changed... and with /proc/user_beancounters on the HN or guests as the most direct way to monitor them... those rudimentary CLI tools seem more up to the task than the current crop of GUI tools I see for Xen. Although perhaps I'm just ignorant of additional management programs that are out there... and I look forward to you informing me.
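To make the point concrete, here is a minimal sketch of the kind of dynamic change vzctl allows. The container ID and page counts are made-up examples, not taken from this thread:

```shell
# Illustrative only; this would run on an OpenVZ host node:
#   vzctl set 101 --privvmpages 262144:294912 --save
# raises container 101's private-memory barrier:limit at runtime and,
# with --save, persists the change to the container's config file.
# The counters in /proc/user_beancounters are in 4 KiB pages,
# so a 262144-page barrier works out to:
pages=262144
echo "$(( pages * 4 / 1024 )) MiB"
```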
The good thing is Red Hat has taken a virtualization agnostic approach with their tools and with some additional development work, they could support OpenVZ too. I believe someone added OpenVZ support to libvirt this past summer but I don't know how complete it was nor if it got integrated into upstream or not.
And for those mass hosting solutions, a bit of security minded setups would remove the need to have this sort of a virtual userspace virtualising anyway.
I'm not really sure what you mean, please clarify.
TYL,
On Tue, 2007-12-11 at 11:27 -0500, Scott Dowdle wrote:
There are uses where Xen is much better suited and OpenVZ isn't even a viable option. But there are other cases where OpenVZ is a better fit, especially with regard to density and scalability. OpenVZ is also very attractive in those situations where you want to isolate a single or a small number of services... although the vast majority of my deployments have a full set of services.
Yes. It's good not to underestimate OS-level virtualization. Many people used chroot to isolate certain processes; OS-level virtualization provides better isolation and control, at only a little extra cost.
Operating systems that provide binary compatibility for other systems (like the BSDs or Solaris) can also use OS-level virtualization to emulate a complete environment that resembles the emulated system.
The downside of most (if not virtually all) current OS-level virtualization on Linux is that they do not have proper support for SELinux. I suppose that things get more interesting in that respect when container features are integrated in the mainline kernel.
-- Daniel
Scott Dowdle wrote:
Karanbir,
----- "Karanbir Singh" mail-lists@karan.org wrote:
apart from mass scale hosting solutions, I am yet to see a role where openvz actually provided a better all around VM solution than Xen.
Depends on your definition of better. One that works (for example the discussion you are having now about problems running i386 guests on x86_64 hosts) might fall into that. :)
The fact that the OpenVZ kernel does not boot my machine takes it completely out of the working category. Neither is it capable of running SELinux, which is another show stopper.
Also, none of the HA tools work for me under openvz.
They have a long way to go.
There are uses where Xen is much better suited and OpenVZ isn't even a viable option.
Sure, that's what my point was. But my point also went on to say that apart from high-density hosting, I have yet to find a role where OpenVZ was a better fit. I am open to hearing about your use cases :D
Even the management tools and the developer support behind Xen far out weights that on openvz.
I'm not sure what you mean by that. OpenVZ comes from Virtuozzo which has been out over 6 years now and has been deployed by thousands (if not tens of thousands) of deployments.
Sure, but again, only in high-density hosting solutions. I have yet to see an OpenVZ deployment outside that.
So far as management tools go, I'm wondering what management tools you use for Xen. The only one I've really tried was Virtual Machine Manager, and prior to the most recent release in 5.1 it couldn't even START a virtual machine. I've tested out XenSource's management solution, and while it has a few more features than Virtual Machine Manager, there still isn't much there.
Depends on the client setup, lots of people seem to rely on the amazon xen tools these days, along with Enomalism stuff.
By the way, what's wrong with virsh? You seem to be happy using CLI tools for OpenVZ, why not try the same sort of stuff on Xen as well? Besides, I have never really used pointy/clicky GUI tools on such machines (I think you get the idea that I am not in the hosting business :D).
Given the 20-ish resource parameters provided by OpenVZ and the vzctl command, with which all of those resources can be dynamically changed... and with /proc/user_beancounters on the HN or guests as the most direct way to monitor them... those rudimentary CLI tools seem more up to the task than the current crop of GUI tools I see for Xen. Although perhaps I'm just ignorant of additional management programs that are out there... and I look forward to you informing me.
Right, so your ideas on Xen are mostly based on a lack of awareness. You can quite easily control and tweak runtime resources with Xen; that was one of the main selling points for it in the first place.
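For reference, a sketch of the runtime controls Xen's xm tool provides; the domain name "vm01" and the numbers are hypothetical:

```shell
# Illustrative only; these need a running Xen dom0 and a domain "vm01":
#   xm mem-set vm01 512              # balloon the domain to 512 MiB
#   xm vcpu-set vm01 2               # change the number of active VCPUs
#   xm sched-credit -d vm01 -w 512   # set the credit-scheduler weight
# With the default weight being 256, a weight of 512 gives the domain
# this many times the CPU share of a default-weight domain under contention:
echo $(( 512 / 256 ))
```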
And for those mass hosting solutions, a bit of security-minded setup would remove the need for this sort of userspace virtualization anyway.
I'm not really sure what you mean, please clarify.
Most people who do high-density hosting can achieve similar results/density without really needing a userspace VM model. E.g., I know that $LargestHostingISP in .de is presently looking at how to educate its users that they might actually get a better deal, with almost all the same resource access, using shared hosting rather than VPSes running UML/Virtuozzo/Xen etc. Let's see how that pans out. At the moment, the idea and selling point of VPSes is that it's a buzzword.
Karanbir,
----- "Karanbir Singh" mail-lists@karan.org wrote:
The fact that the OpenVZ kernel does not boot my machine takes it completely out of the working category.
Not booting would be a problem. :) The only OpenVZ kernel I've had trouble booting was one where they added Xen support (so you can use both Xen and OpenVZ on the same host). I don't mean to associate Xen with the boot problem... the machine got stuck on USB detection and wouldn't get past that.
If you remember any of the details, it would be interesting to report the boot failure to the bug tracking system... but unless you were willing to do followup, it would just be another ticket that doesn't have the possibility of being resolved.
Also, none of the HA tools work for me under openvz.
Here's a wiki page on using DRBD and Heartbeat... although I've not done it myself. http://wiki.openvz.org/HA_cluster_with_DRBD_and_Heartbeat
For high availability, I just migrate my VPSes from one physical host to another periodically... and have good backups. I know that isn't real HA but it is suitable for my needs.
I'm guessing that HA isn't a feature used by more than a single digit percentage of Xen users... but I really have no data one way or another.
Again, it's an area where if you need HA, then OpenVZ might not be for you.
Neither is it capable of running SELinux, which is another show stopper.
I'm one of those people who have yet to take the time to learn SELinux... and unfortunately, I think there are a lot of us out there.
I'm not sure if there is a technical reason why SELinux and OpenVZ are incompatible. I've asked people who should know... and the answer I get is that the main reason SELinux support is not available with OpenVZ is the same reason some other projects say it has to be turned off: they simply don't want to deal with all of the troubleshooting involved with its use. That's a bad excuse, but it seems to be pretty common.
The vast majority of Linux distros don't support SELinux anyway... and for users of those, it isn't an issue.
There are uses where Xen is much better suited and OpenVZ isn't even a viable option.
[...]
Sure, that's what my point was. But my point also went on to say that apart from high-density hosting, I have yet to find a role where OpenVZ was a better fit. I am open to hearing about your use cases :D
Your argument is like saying everyone should drive a large SUV because it has 4-wheel drive and it holds a lot of passengers... and it has a large gas tank. I didn't flesh that argument out very well... but my point is that not everyone wants or needs 4-wheel drive or a lot of passengers... or a large gas tank. And OS Virtualization gets more miles per gallon. :)
I could just as easily say that a physical machine is a much better solution than Xen because you can do more with a physical machine than you can a virtual machine... but I'm guessing you see the fault with that argument.
Sure, but again, only in high-density hosting solutions. I have yet to see an OpenVZ deployment outside that.
The vast majority of people I talk to on a daily basis are using OpenVZ in situations other than high-density hosting solutions. This would include myself. :) The common misperception is that OpenVZ, or OS virtualization in general, is only good for high-density web hosting. While that is definitely an area where it excels, it isn't the only thing it is good for.
Depends on the client setup, lots of people seem to rely on the amazon
I think that Amazon makes up the largest percentage of Xen deployments. I hope they are giving back to the community.
btw, whats wrong with virsh ? you seem to be happy using cli tools for
Nothing. I'd like to learn more about it... especially if and when it supports OpenVZ. :)
Right, so your ideas on Xen are mostly based on a lack of awareness.
[...]
You can quite easily control and tweak runtime resources with Xen, that was one of the main selling points for it in the first place.
I'll grant that. Please point me to a web page that talks about the tweakable resource parameters of Xen. All I am aware of are memory and the number of CPUs. Hey, I'm here to learn... that's why I signed up for this mailing list.
Most people who do high-density hosting can achieve similar results/density without really needing a userspace VM model. E.g., I know that $LargestHostingISP in .de is presently looking at how to educate its users that they might actually get a better deal, with almost all the same resource access, using shared hosting rather than VPSes running UML/Virtuozzo/Xen etc. Let's see how that pans out. At the moment, the idea and selling point of VPSes is that it's a buzzword.
You mean service-level virtual hosting (i.e. Apache VirtualHosting) rather than using virtualization? That might be true... but there are drawbacks to that. I mean, you can't give someone root access and allow them to install software, create accounts, etc. in non-virtualized environments. Perhaps I'm not understanding the alternative you are referring to.
I do understand that there is a buzzword aspect to "virtualization" but I'm sure I don't have to explain to you situations where it offers benefits.
TYL,