I'm wondering if virtualization could be used as a cheap redundancy solution for situations that can tolerate a certain amount of downtime.
The current recommendation is to run some kind of replication setup such as DRBD. The problem there is cost when there is more than one server (or servers running different OSes) to be backed up: I'd basically need to tell my client to buy, say, 2X machines instead of just X. Not really attractive :D
So I'm wondering whether it would be a good idea, or more likely a stupid one, to run X CentOS machines with VMware, each hosting a single guest instance of CentOS (and in at least one case Windows, for MSSQL).
So if any of the machines physically fails for whatever reason not related to the disks, I'll just transfer the disks to one of the surviving servers or a cold standby and have things running again within the 30~60 minutes needed to check the filesystem, then mount and copy the image.
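Concretely, I imagine the recovery looking something like this (device names and paths are placeholders, not a tested procedure):

    # disk moved from the dead host to a survivor or cold standby
    fsck -y /dev/sdb1                      # check the transplanted filesystem
    mount /dev/sdb1 /mnt/olddisk
    cp -a /mnt/olddisk/vms/guest1 /vm/     # copy the image set to the local datastore
    # then register and power on the guest with the local VMware tooling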
I thought I could also rsync the images so that Server 1 backs up Server 2's image file, Server 2 backs up Server 3's, and so on in round-robin fashion, to make this even faster. But what I've read indicates that rsync would attempt to mirror the whole 60 GB or 80 GB image on any change. Bad idea.
So while this is not real-time HA, in most situations they can tolerate an hour's downtime, and the cost of the "redundancy" stays constant no matter how many servers are added to the operation.
Any comments on this, or is it just plain stupid because there are better options that are equally cost-effective?
On Fri, Jun 25, 2010 at 9:04 AM, Emmanuel Noobadmin centos.admin@gmail.com wrote:
> I'm wondering if virtualization could be used as a cheap redundancy solution for situations that can tolerate a certain amount of downtime.
> The current recommendation is to run some kind of replication setup such as DRBD. The problem there is cost when there is more than one server (or servers running different OSes) to be backed up: I'd basically need to tell my client to buy, say, 2X machines instead of just X. Not really attractive :D
> So I'm wondering whether it would be a good idea, or more likely a stupid one, to run X CentOS machines with VMware, each hosting a single guest instance of CentOS (and in at least one case Windows, for MSSQL).
> So if any of the machines physically fails for whatever reason not related to the disks, I'll just transfer the disks to one of the surviving servers or a cold standby and have things running again within the 30~60 minutes needed to check the filesystem, then mount and copy the image.
> I thought I could also rsync the images so that Server 1 backs up Server 2's image file, Server 2 backs up Server 3's, and so on in round-robin fashion, to make this even faster. But what I've read indicates that rsync would attempt to mirror the whole 60 GB or 80 GB image on any change. Bad idea.
> So while this is not real-time HA, in most situations they can tolerate an hour's downtime, and the cost of the "redundancy" stays constant no matter how many servers are added to the operation.
> Any comments on this, or is it just plain stupid because there are better options that are equally cost-effective?
This is one of the advantages of using VMs, and I'm sure most people are using them for this reason in one way or another. However, there are a few things you need to worry about:
- When the host crashes, the guests will too, so you'll be in a recovery situation just like after a physical crash. This is manageable and something you'd have to deal with either way.
- Rsyncing the VMs while they are running leaves them in an inconsistent state. That state may or may not be worse than a simple crash. One way I have been getting around this is to create a snapshot of the VM before performing the rsync; when bringing up the copy after a crash, revert to the snapshot. That at least gives you consistent filesystem and memory state, but could cause issues with network connections. I usually reboot the VM cleanly after reverting to the snapshot.
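For illustration, the snapshot-then-copy cycle might look like this - assuming a vmrun build that supports snapshot operations, and with all paths and hostnames made up:

    VMX=/vm/guest1/guest1.vmx
    vmrun snapshot "$VMX" pre-rsync              # consistent point to fall back to
    rsync -a --inplace /vm/guest1/ standby:/vm/guest1/
    # after a crash, on the standby:
    #   vmrun revertToSnapshot "$VMX" pre-rsync
    #   ...then reboot the guest cleanly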
Rsync will not transfer the entire file over the network. It scans the whole thing and only sends the changes. If you have --progress enabled it will appear to go through the whole file, but you will see the "speedup" figure go much higher than for a regular transfer. However, this can sometimes take more time than doing a full copy on a local network; rsync is meant to conserve bandwidth, not necessarily time. Also, I suggest that you use a gigabit network if you have the option. If not, you could directly link the network ports on two servers and copy straight from one to the other.
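For example, something along these lines (host and path names are placeholders) only ships the changed blocks; --inplace also keeps rsync from rebuilding a whole temporary copy of the image on the receiving side:

    rsync -av --inplace --progress \
        /vm/guest1/guest1.vmdk \
        backuphost:/vm/guest1/guest1.vmdk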
If you are looking at VMware Server for this, here are some tips:
- For best performance, search around for "vmware tmpfs". It will dramatically increase the performance of the VMs at the expense of some memory.
- VMware Server seems like it's EOL, even though VMware hasn't specifically said so yet.
- There is a bug in VMware with CentOS that causes guests to slowly use more CPU until the whole machine is bogged down. This can be fixed by restarting or suspending/resuming each VM.
- At this point I'd look at ESXi for the free VMware option.
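For reference, the "vmware tmpfs" trick generally amounts to putting VMware's memory-backing/working files on RAM-backed tmpfs; a rough sketch follows, with the mount point, size, and config directive given as I remember the write-ups - verify against what the search turns up:

    # /etc/fstab - RAM-backed scratch area for VMware's memory-backing files
    tmpfs   /tmp/vmware   tmpfs   defaults,size=4g   0 0

    # then point VMware at it, e.g. in /etc/vmware/config (per the write-ups):
    #   tmpDirectory = "/tmp/vmware"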
> - Rsyncing the VMs while they are running leaves them in an inconsistent state. That state may or may not be worse than a simple crash. One way I have been getting around this is to create a snapshot of the VM before performing the rsync; when bringing up the copy after a crash, revert to the snapshot. That at least gives you consistent filesystem and memory state, but could cause issues with network connections. I usually reboot the VM cleanly after reverting to the snapshot.
The problem with doing snapshots is that the data reverts to whatever it was at the point of the snapshot. The client can accept waiting 3~4 hrs for their servers to be fixed every now and then. It's a mess of several servers, ranging up to almost 10 yrs old, that we inherited from their past vendors.
Which is why they would readily accept even 1 hr of downtime for a VM image to be transferred, but they will not accept having to redo work. Even if they were willing, it isn't possible because of the server-generated number sequences that would already have been used by their clients but would not match the new numbers issued after a restore to an older snapshot.
> Rsync will not transfer the entire file over the network. It scans the whole thing and only sends the changes. If you have --progress enabled it will appear to go through the whole file, but you will see the "speedup" figure go much higher than for a regular transfer. However, this can sometimes take more time than doing a full copy on a local network; rsync is meant to conserve bandwidth, not necessarily time. Also, I suggest that you use a gigabit network if you have the option. If not, you could directly link the network ports on two servers and copy straight from one to the other.
They already have gigabit switches, so that's not a problem if rsync works incrementally on images as well.
At the same time, I do have reservations about such a hack, so I'm also exploring the other possibility of implementing a two-machine Lustre cluster and running all the images from that storage cluster instead. That would take an extra machine, but it's still more viable than the 2x option and much faster to get back up.
> If you are looking at VMware Server for this, here are some tips:
> - For best performance, search around for "vmware tmpfs". It will dramatically increase the performance of the VMs at the expense of some memory.
Thanks for the tip.
> - VMware Server seems like it's EOL, even though VMware hasn't specifically said so yet.
> - There is a bug in VMware with CentOS that causes guests to slowly use more CPU until the whole machine is bogged down. This can be fixed by restarting or suspending/resuming each VM.
That explains a puzzling, seemingly random freeze-up we get with a particular test system. I guess the random part was because we do suspend/restart the machine every now and then, so it didn't always bog down to the point where we'd notice.
Thanks again for the responses :)
On Fri, Jun 25, 2010 at 12:28 PM, Emmanuel Noobadmin centos.admin@gmail.com wrote:
>> - Rsyncing the VMs while they are running leaves them in an inconsistent state. That state may or may not be worse than a simple crash. One way I have been getting around this is to create a snapshot of the VM before performing the rsync; when bringing up the copy after a crash, revert to the snapshot. That at least gives you consistent filesystem and memory state, but could cause issues with network connections. I usually reboot the VM cleanly after reverting to the snapshot.
> The problem with doing snapshots is that the data reverts to whatever it was at the point of the snapshot. The client can accept waiting 3~4 hrs for their servers to be fixed every now and then. It's a mess of several servers, ranging up to almost 10 yrs old, that we inherited from their past vendors.
> Which is why they would readily accept even 1 hr of downtime for a VM image to be transferred, but they will not accept having to redo work. Even if they were willing, it isn't possible because of the server-generated number sequences that would already have been used by their clients but would not match the new numbers issued after a restore to an older snapshot.
You cannot do rsync on a continuous basis, so I think you have your answer there. Even running it once an hour isn't going to work, as the machine will be inconsistent (very bad disk corruption). It sounds like you need to get some new servers anyway, so DRBD is probably the way you need to go. Either that or a dedicated SAN or SAN-like device.
>> Rsync will not transfer the entire file over the network. It scans the whole thing and only sends the changes. If you have --progress enabled it will appear to go through the whole file, but you will see the "speedup" figure go much higher than for a regular transfer. However, this can sometimes take more time than doing a full copy on a local network; rsync is meant to conserve bandwidth, not necessarily time. Also, I suggest that you use a gigabit network if you have the option. If not, you could directly link the network ports on two servers and copy straight from one to the other.
> They already have gigabit switches, so that's not a problem if rsync works incrementally on images as well.
> At the same time, I do have reservations about such a hack, so I'm also exploring the other possibility of implementing a two-machine Lustre cluster and running all the images from that storage cluster instead. That would take an extra machine, but it's still more viable than the 2x option and much faster to get back up.
>> If you are looking at VMware Server for this, here are some tips:
>> - For best performance, search around for "vmware tmpfs". It will dramatically increase the performance of the VMs at the expense of some memory.
> Thanks for the tip.
>> - VMware Server seems like it's EOL, even though VMware hasn't specifically said so yet.
>> - There is a bug in VMware with CentOS that causes guests to slowly use more CPU until the whole machine is bogged down. This can be fixed by restarting or suspending/resuming each VM.
> That explains a puzzling, seemingly random freeze-up we get with a particular test system. I guess the random part was because we do suspend/restart the machine every now and then, so it didn't always bog down to the point where we'd notice.
> Thanks again for the responses :)
The creeping CPU problem happens slowly over the course of a week or so, so if you're seeing acute freeze-ups, that's probably not it. However, if all the machines have been running for a while, try suspending/resuming all of them, then see if the problem goes away.
> You cannot do rsync on a continuous basis, so I think you have your answer there. Even running it once an hour isn't going to work, as the machine will be inconsistent (very bad disk corruption).
That's what I figured too; otherwise it would have been an easy solution.
> It sounds like you need to get some new servers anyway, so DRBD is probably the way you need to go. Either that or a dedicated SAN or SAN-like device.
DRBD, as I understand it, is effectively RAID 1 over the network, which is the 2x cost and budget problem I had. Would the two-machine Lustre cluster I'm considering work as adequately as a SAN device?
> The creeping CPU problem happens slowly over the course of a week or so, so if you're seeing acute freeze-ups, that's probably not it. However, if all the machines have been running for a while, try suspending/resuming all of them, then see if the problem goes away.
That is pretty much what we see. Sometimes we leave the machine on for several days if there were no changes to what we were doing on it, and sometimes the VM freezes after a few days. It had appeared random because any one of us could have restarted the VM during the week, so those lock-ups probably happened when none of us did.
However, the CentOS host itself still responds when we vnc/ssh in, so we have to kill and restart the VM service before we can use the VM again. Sometimes that doesn't work and we have to reboot the machine.
Emmanuel Noobadmin wrote:
>> The creeping CPU problem happens slowly over the course of a week or so, so if you're seeing acute freeze-ups, that's probably not it. However, if all the machines have been running for a while, try suspending/resuming all of them, then see if the problem goes away.
> That is pretty much what we see. Sometimes we leave the machine on for several days if there were no changes to what we were doing on it, and sometimes the VM freezes after a few days. It had appeared random because any one of us could have restarted the VM during the week, so those lock-ups probably happened when none of us did.
Is this with VMware Server 2.x? I have machines with the 1.x version (some on CentOS 3, some on CentOS 5) that run more or less forever without issues. Also, the ESXi version is even better if you are only using it to host VMs.
On Sat, Jun 26, 2010 at 9:26 AM, Les Mikesell lesmikesell@gmail.com wrote:
> Emmanuel Noobadmin wrote:
>>> The creeping CPU problem happens slowly over the course of a week or so, so if you're seeing acute freeze-ups, that's probably not it. However, if all the machines have been running for a while, try suspending/resuming all of them, then see if the problem goes away.
>> That is pretty much what we see. Sometimes we leave the machine on for several days if there were no changes to what we were doing on it, and sometimes the VM freezes after a few days. It had appeared random because any one of us could have restarted the VM during the week, so those lock-ups probably happened when none of us did.
> Is this with VMware Server 2.x? I have machines with the 1.x version (some on CentOS 3, some on CentOS 5) that run more or less forever without issues. Also, the ESXi version is even better if you are only using it to host VMs.
Yes, this problem is with Server 2.x only.
Brian Mathis wrote:
> On Sat, Jun 26, 2010 at 9:26 AM, Les Mikesell lesmikesell@gmail.com wrote:
>> Emmanuel Noobadmin wrote:
>>>> The creeping CPU problem happens slowly over the course of a week or so, so if you're seeing acute freeze-ups, that's probably not it. However, if all the machines have been running for a while, try suspending/resuming all of them, then see if the problem goes away.
>>> That is pretty much what we see. Sometimes we leave the machine on for several days if there were no changes to what we were doing on it, and sometimes the VM freezes after a few days. It had appeared random because any one of us could have restarted the VM during the week, so those lock-ups probably happened when none of us did.
>> Is this with VMware Server 2.x? I have machines with the 1.x version (some on CentOS 3, some on CentOS 5) that run more or less forever without issues. Also, the ESXi version is even better if you are only using it to host VMs.
> Yes, this problem is with Server 2.x only.
It is still probably reasonable to install the latest 1.x you can find - you just have to have the matching client to access the console remotely. But I normally only use the client to get to the point where the network is up and I can go to the guest directly with ssh/freenx/vnc. The only problem I ever see on those VMs is instability in the clock - but the real fix is to use ESXi instead, with everything running as guests.
On 6/26/10, Les Mikesell lesmikesell@gmail.com wrote:
> Is this with VMware Server 2.x? I have machines with the 1.x version (some on CentOS 3, some on CentOS 5) that run more or less forever without issues. Also, the ESXi version is even better if you are only using it to host VMs.
Yes, version 2.x.
Could you please elaborate on the "only using it to host VMs" part? Does it mean ESXi doesn't play well if, say, some other service like vnc/freenx/ftpd is running off the same machine? Thanks! :)
Emmanuel Noobadmin wrote:
> On 6/26/10, Les Mikesell lesmikesell@gmail.com wrote:
>> Is this with VMware Server 2.x? I have machines with the 1.x version (some on CentOS 3, some on CentOS 5) that run more or less forever without issues. Also, the ESXi version is even better if you are only using it to host VMs.
> Yes, version 2.x.
> Could you please elaborate on the "only using it to host VMs" part? Does it mean ESXi doesn't play well if, say, some other service like vnc/freenx/ftpd is running off the same machine? Thanks! :)
ESXi installs on the bare metal instead of on top of some other OS - and thus is able to do a better job of managing the VMs than something working through another OS's drivers. You don't even use the console except for the initial install and some very minor commands - you have to have a Windows machine to run the remote console client for the real setup and control work and to get to the consoles of the guest machines. However, it isn't necessary to keep the client connected all the time, and you can arrange direct vnc/ssh/freenx access to the guests once the network is up.
On 6/25/2010 7:33 AM, Brian Mathis wrote:
> On Fri, Jun 25, 2010 at 9:04 AM, Emmanuel Noobadmin centos.admin@gmail.com wrote:
>> I'm wondering if virtualization could be used as a cheap redundancy solution for situations that can tolerate a certain amount of downtime.
>> The current recommendation is to run some kind of replication setup such as DRBD. The problem there is cost when there is more than one server (or servers running different OSes) to be backed up: I'd basically need to tell my client to buy, say, 2X machines instead of just X. Not really attractive :D
>> So I'm wondering whether it would be a good idea, or more likely a stupid one, to run X CentOS machines with VMware, each hosting a single guest instance of CentOS (and in at least one case Windows, for MSSQL).
Sure. I run 4 machines with VMware Server 2 in production: three with the live VMs and a fourth with live 'near-mirror' VMs of all the others.
>> So if any of the machines physically fails for whatever reason not related to the disks, I'll just transfer the disks to one of the surviving servers or a cold standby and have things running again within the 30~60 minutes needed to check the filesystem, then mount and copy the image.
I don't like this so much. It means you physically have to move something, possibly have to fsck the drives, and deal with potential corruption of the VM images.
>> I thought I could also rsync the images so that Server 1 backs up Server 2's image file, Server 2 backs up Server 3's, and so on in round-robin fashion, to make this even faster. But what I've read indicates that rsync would attempt to mirror the whole 60 GB or 80 GB image on any change. Bad idea.
You have multiple choices here. I do three things:
1) I have 'near-image' machines running live all the time that rsync all the production-relevant portions of the live machines once a day, with scripts that can put them live in a few seconds or minutes when needed.
2) I have snapshots of the VM images themselves that I take once a week by shutting down the VMs, taking a static LVM snapshot, restarting the VMs, rsyncing the snapshot to another machine, and then removing the snapshot. Since rsync only transfers the *changed* parts of the image files, this takes only a few hours for some hundreds of gigabytes of VM images, with only a few minutes of actual downtime. (A rough sketch of this cycle follows after the list.)
Since VMware Server 2 has an unfixed 'CPU load' leak requiring you to stop/restart the machines about once every week or two anyway, it kills two birds with one stone.
3) I also take daily inside-the-VM full rsync-over-ssh backups of all the live virtual machines, onsite and offsite, with hardlinking and a 7 x daily, 4 x weekly, 3 x monthly, 2 x quarterly, 2 x semi-annual retention. (See the second sketch below.)
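For item 2, the weekly cycle would be roughly the following - all volume, path, and host names are invented, and vmware-cmd stands in for whatever control tool your VMware version provides:

    vmware-cmd /vm/guest1/guest1.vmx stop soft      # clean guest shutdown
    lvcreate -s -L 10G -n vmsnap /dev/vg0/vmstore   # static snapshot of the image volume
    vmware-cmd /vm/guest1/guest1.vmx start          # guest is back after a few minutes
    mount -o ro /dev/vg0/vmsnap /mnt/vmsnap
    rsync -a --inplace /mnt/vmsnap/ backuphost:/backup/vmstore/
    umount /mnt/vmsnap
    lvremove -f /dev/vg0/vmsnap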
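And for the hardlinking backups in item 3, a minimal daily sketch using rsync's --link-dest against the previous day's copy - paths and hostnames are hypothetical, and the rotation/retention logic is left out:

    TODAY=$(date +%F)
    rsync -a --delete -e ssh \
        --link-dest=/backup/guest1/last \
        root@guest1:/ /backup/guest1/$TODAY/
    ln -snf /backup/guest1/$TODAY /backup/guest1/last   # unchanged files cost only a hardlink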
>> So while this is not real-time HA, in most situations they can tolerate an hour's downtime, and the cost of the "redundancy" stays constant no matter how many servers are added to the operation.
>> Any comments on this, or is it just plain stupid because there are better options that are equally cost-effective?
> This is one of the advantages of using VMs, and I'm sure most people are using them for this reason in one way or another. However, there are a few things you need to worry about:
> - When the host crashes, the guests will too, so you'll be in a recovery situation just like after a physical crash. This is manageable and something you'd have to deal with either way.
I'm not so hot on the 'move the physical disk' idea. 'Move the data' seems better to me.
> - Rsyncing the VMs while they are running leaves them in an inconsistent state. That state may or may not be worse than a simple crash. One way I have been getting around this is to create a snapshot of the VM before performing the rsync; when bringing up the copy after a crash, revert to the snapshot. That at least gives you consistent filesystem and memory state, but could cause issues with network connections. I usually reboot the VM cleanly after reverting to the snapshot.
Note - *take the snapshot while the VMs are 'stopped' or 'paused'* :)
> Rsync will not transfer the entire file over the network. It scans the whole thing and only sends the changes. If you have --progress enabled it will appear to go through the whole file, but you will see the "speedup" figure go much higher than for a regular transfer. However, this can sometimes take more time than doing a full copy on a local network; rsync is meant to conserve bandwidth, not necessarily time. Also, I suggest that you use a gigabit network if you have the option. If not, you could directly link the network ports on two servers and copy straight from one to the other.
Yep.
> If you are looking at VMware Server for this, here are some tips:
> - For best performance, search around for "vmware tmpfs". It will dramatically increase the performance of the VMs at the expense of some memory.
+1
We are talking about an order-of-magnitude difference in performance. This is probably the single most important performance-tuning tip for VMware Server.
> - VMware Server seems like it's EOL, even though VMware hasn't specifically said so yet.
Yah. They have been 'not calling it dead' for a while now. It is clear, though, from the lack of even important security patches that they intend to put as little into it as possible before it officially reaches EOL in June 2011.
> - There is a bug in VMware with CentOS that causes guests to slowly use more CPU until the whole machine is bogged down. This can be fixed by restarting or suspending/resuming each VM.
Note that suspend can cause havoc with IP interfaces if you bring up addresses that are not part of the automatic list during normal operation.
There is *also* a serious bug in its glibc handling on CentOS 5.4 or later. You will need to install an older copy of glibc directly into the VMware libraries on the host machine and tweak the launch scripts for stable operation. Google for: centos vmware glibc
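The workaround that circulated looked roughly like this - the exact paths are from memory, so check them against the posts that the search turns up before trying it:

    # put a pre-5.4 libc where VMware's wrapper will pick it up (illustrative paths)
    mkdir -p /usr/lib/vmware/lib/libc.so.6
    cp /tmp/glibc-2.5-extract/libc-2.5.so /usr/lib/vmware/lib/libc.so.6/libc.so.6
    # then edit the vmware-hostd launch script so LD_LIBRARY_PATH includes
    # /usr/lib/vmware/lib/libc.so.6 before the daemon starts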
> - At this point I'd look at ESXi for the free VMware option.
Or KVM, if you are willing to leave VMware, since that is where RH is going.
On 6/25/2010 8:33 AM, Brian Mathis wrote:
> - VMware Server seems like it's EOL, even though VMware hasn't specifically said so yet.
Given that there are known serious bugs in 2.0.2 [*] and that release is now 8 months old, that seems plausible to me. But another plausible explanation is that they've decided to throw all their effort at a 3.0 release.
Do you have any hard evidence that would help me decide between these two possibilities?
[*] The glibc change in EL 5.4+ crashes the server; the creeping CPU time bug mentioned elsewhere in this thread; a web UI buggier than Brazil in the rainy season...
On Mon, Jun 28, 2010 at 07:25:59AM -0600, Warren Young wrote:
> On 6/25/2010 8:33 AM, Brian Mathis wrote:
>> - VMware Server seems like it's EOL, even though VMware hasn't specifically said so yet.
> Given that there are known serious bugs in 2.0.2 [*] and that release is now 8 months old, that seems plausible to me. But another plausible explanation is that they've decided to throw all their effort at a 3.0 release.
If you look on their site, they clearly state that they do not offer a paid support option for VMware Server - it's community supported only. Does that seem like the attitude toward a product they plan to update?
Whit
On 6/28/2010 7:34 AM, Whit Blauvelt wrote:
> If you look on their site, they clearly state that they do not offer a paid support option for VMware Server - it's community supported only. Does that seem like the attitude toward a product they plan to update?
It fits completely with a low-end product they're giving away as a come-on for their more expensive supported products. If the come-on doesn't function, it's not going to win them *any* converts from the free-as-in-free VM systems. That seems counterproductive to me, so no, I don't believe that explains why we haven't seen a bug-fix release in 8 months.
If they wished to withdraw all support for it, I'd expect it to just disappear from their site.
on 6-28-2010 6:34 AM Whit Blauvelt spake the following:
> On Mon, Jun 28, 2010 at 07:25:59AM -0600, Warren Young wrote:
>> On 6/25/2010 8:33 AM, Brian Mathis wrote:
>>> - VMware Server seems like it's EOL, even though VMware hasn't specifically said so yet.
>> Given that there are known serious bugs in 2.0.2 [*] and that release is now 8 months old, that seems plausible to me. But another plausible explanation is that they've decided to throw all their effort at a 3.0 release.
> If you look on their site, they clearly state that they do not offer a paid support option for VMware Server - it's community supported only. Does that seem like the attitude toward a product they plan to update?
> Whit
That just looks like they don't want to support something they give away...
On 06/28/2010 10:15 AM, Scott Silva wrote:
> on 6-28-2010 6:34 AM Whit Blauvelt spake the following:
>> If you look on their site, they clearly state that they do not offer a paid support option for VMware Server - it's community supported only. Does that seem like the attitude toward a product they plan to update?
>> Whit
> That just looks like they don't want to support something they give away...
They give away ESXi too, so that argument is pretty weak. The difference is that ESXi is directly tied to their other tracks and support; VM Server has always been pretty standalone. Not so good if your business model is convincing people to buy all the pretty add-ons.
They more or less abrogated their own lifecycle guidelines with VM Server by declaring that 'General' support for it only includes 'Technical Guidance' until EOL (thereby skipping directly to their lowest level of support - which is pretty much 'Google it and look in the forums').
At this point VM Server is in 'if it breaks, you get to keep all the pieces' mode.
On 6/28/2010 12:39 PM, Benjamin Franz wrote:
> At this point VM Server is in 'if it breaks, you get to keep all the pieces' mode.
Like just about all software - although you might get the chance of a refund on what you paid if you can prove there is a problem with advertised capabilities.
Anyway, I'll repeat that my experience with VMware Server 1.x has been years of running under both CentOS 3.x and 5.x with no breakage at all so far.
On Mon, Jun 28, 2010 at 9:25 AM, Warren Young warren@etr-usa.com wrote:
> On 6/25/2010 8:33 AM, Brian Mathis wrote:
>> - VMware Server seems like it's EOL, even though VMware hasn't specifically said so yet.
> Given that there are known serious bugs in 2.0.2 [*] and that release is now 8 months old, that seems plausible to me. But another plausible explanation is that they've decided to throw all their effort at a 3.0 release.
> Do you have any hard evidence that would help me decide between these two possibilities?
> [*] The glibc change in EL 5.4+ crashes the server; the creeping CPU time bug mentioned elsewhere in this thread; a web UI buggier than Brazil in the rainy season...
Here is the support lifecycle page: http://www.vmware.com/support/policies/lifecycle/general/index.html#policy_s... See the footnote under the "VMware Server" section.
Maybe there's a 3.0 in the works, but the general feeling is that they have abandoned the product. There have been no updates allowing the console to work in Firefox 3.6, no fix for the hostd crash (the glibc problem), nor any fix for the creeping CPU problem. These are all major issues that would normally be addressed in any product a company expected to keep around.
All of these things together do not leave one with a good feeling about the product. Additionally, the way they are handling this has made me less confident in VMware as a company, and instead of looking at their paid products I have started looking at the alternatives. If they just came right out and said they were not supporting it any longer, that would be preferable to what they are doing now.
Why would one use VMware Server 2.x when ESXi is available free of charge, stable, small footprint, ...? We have about 60 VMware machines here, about 20 of them already converted to ESXi and running fine. I would never think about going back to Server 2.x or even GSX, especially when using Veeam as a central management console.
http://www.vmware.com/products/esxi/ http://www.veeam.com/esxi-monitoring-free.html
On 28.06.2010 at 15:45, Brian Mathis wrote:
> On Mon, Jun 28, 2010 at 9:25 AM, Warren Young warren@etr-usa.com wrote:
>> On 6/25/2010 8:33 AM, Brian Mathis wrote:
>>> - VMware Server seems like it's EOL, even though VMware hasn't specifically said so yet.
>> Given that there are known serious bugs in 2.0.2 [*] and that release is now 8 months old, that seems plausible to me. But another plausible explanation is that they've decided to throw all their effort at a 3.0 release.
>> Do you have any hard evidence that would help me decide between these two possibilities?
>> [*] The glibc change in EL 5.4+ crashes the server; the creeping CPU time bug mentioned elsewhere in this thread; a web UI buggier than Brazil in the rainy season...
> Here is the support lifecycle page: http://www.vmware.com/support/policies/lifecycle/general/index.html#policy_s... See the footnote under the "VMware Server" section.
> Maybe there's a 3.0 in the works, but the general feeling is that they have abandoned the product. There have been no updates allowing the console to work in Firefox 3.6, no fix for the hostd crash (the glibc problem), nor any fix for the creeping CPU problem. These are all major issues that would normally be addressed in any product a company expected to keep around.
> All of these things together do not leave one with a good feeling about the product. Additionally, the way they are handling this has made me less confident in VMware as a company, and instead of looking at their paid products I have started looking at the alternatives. If they just came right out and said they were not supporting it any longer, that would be preferable to what they are doing now.
On 6/28/2010 7:59 AM, guillaume wrote:
> Why would one use VMware Server 2.x when ESXi is available free of charge, stable, small footprint, ...?
I've thought about it, but it's not really the right thing for us.
Our VM host has some special hardware in it, driven by custom software that runs just fine in the host OS but doesn't work through virtualization, because VMware doesn't know about this class of hardware.
This server is idle much of the time, so it made sense to give it secondary duty as a VM host. To switch to ESXi, we'd have to bring up a separate server (wasteful) and let the current one go back to being idle much of the time (doubly wasteful).
On Jun 28, 2010, at 11:15 AM, Warren Young warren@etr-usa.com wrote:
> On 6/28/2010 7:59 AM, guillaume wrote:
>> Why would one use VMware Server 2.x when ESXi is available free of charge, stable, small footprint, ...?
> I've thought about it, but it's not really the right thing for us.
> Our VM host has some special hardware in it, driven by custom software that runs just fine in the host OS but doesn't work through virtualization, because VMware doesn't know about this class of hardware.
> This server is idle much of the time, so it made sense to give it secondary duty as a VM host. To switch to ESXi, we'd have to bring up a separate server (wasteful) and let the current one go back to being idle much of the time (doubly wasteful).
Then give VirtualBox a whirl.
Fully supported, works, and some say it performs better than VMware Server.
-Ross
On 06/28/10 8:57 AM, Ross Walker wrote:
> Then give VirtualBox a whirl. Fully supported, works, and some say it performs better than VMware Server.
I second the emotion on VBox; it's a nice piece of work.
Read the license carefully, however. It's no longer free for use as a server in a business environment, only free for 'personal' use. Larry needs a new boat.
On Mon, Jun 28, 2010 at 09:06:43AM -0700, John R Pierce wrote:
> I second the emotion on VBox; it's a nice piece of work.
> Read the license carefully, however. It's no longer free for use as a server in a business environment, only free for 'personal' use. Larry needs a new boat.
They also clarify that 'personal' use can include a business environment, if you're just installing on a system here and there and not doing a mass rollout on multiple machines. But that's from their website; I haven't read the full license to see how closely their informal synopsis matches the lawyerese.
Whit
On 06/28/10 9:19 AM, Whit Blauvelt wrote:
> On Mon, Jun 28, 2010 at 09:06:43AM -0700, John R Pierce wrote:
>> I second the emotion on VBox; it's a nice piece of work.
>> Read the license carefully, however. It's no longer free for use as a server in a business environment, only free for 'personal' use. Larry needs a new boat.
> They also clarify that 'personal' use can include a business environment, if you're just installing on a system here and there and not doing a mass rollout on multiple machines. But that's from their website; I haven't read the full license to see how closely their informal synopsis matches the lawyerese.
What I read said 'personal use' in a business environment means the VM isn't a server accessed by anyone else. I can run VBox on my office desktop and host a VM for my own use. I can't, however, host a webserver VM that my department uses.
But IANAL.
On Jun 28, 2010, at 12:40 PM, John R Pierce pierce@hogranch.com wrote:
> On 06/28/10 9:19 AM, Whit Blauvelt wrote:
>> On Mon, Jun 28, 2010 at 09:06:43AM -0700, John R Pierce wrote:
>>> I second the emotion on VBox; it's a nice piece of work.
>>> Read the license carefully, however. It's no longer free for use as a server in a business environment, only free for 'personal' use. Larry needs a new boat.
>> They also clarify that 'personal' use can include a business environment, if you're just installing on a system here and there and not doing a mass rollout on multiple machines. But that's from their website; I haven't read the full license to see how closely their informal synopsis matches the lawyerese.
> What I read said 'personal use' in a business environment means the VM isn't a server accessed by anyone else. I can run VBox on my office desktop and host a VM for my own use. I can't, however, host a webserver VM that my department uses.
> But IANAL.
If you use it in a production environment you will probably want to buy support, where you can request bug fixes and such.
But nobody is going to audit you if you're performing an "extended" evaluation of the product.
Most people who end up installing it for large deployments end up buying the support, if not to be 'legit', then for regulatory compliance reasons.
-Ross
On 6/28/2010 10:15 AM, Warren Young wrote:
> On 6/28/2010 7:59 AM, guillaume wrote:
>> Why would one use VMware Server 2.x when ESXi is available free of charge, stable, small footprint, ...?
> I've thought about it, but it's not really the right thing for us.
> Our VM host has some special hardware in it, driven by custom software that runs just fine in the host OS but doesn't work through virtualization, because VMware doesn't know about this class of hardware.
What kind of hardware? Is it something that could be replaced by a supported card, or a USB device that a guest could access?
> This server is idle much of the time, so it made sense to give it secondary duty as a VM host. To switch to ESXi, we'd have to bring up a separate server (wasteful) and let the current one go back to being idle much of the time (doubly wasteful).
That still leaves the Server 1.x version as an option. It's been rock solid for me for years, and the only thing that RHEL/CentOS 5 being an 'unsupported' host means is that after each kernel update you have to run the script that recompiles the kernel modules - which is not a problem as long as you have the compiler and kernel header packages installed.
On 6/28/2010 8:25 AM, Warren Young wrote:
> On 6/25/2010 8:33 AM, Brian Mathis wrote:
>> - VMware Server seems like it's EOL, even though VMware hasn't specifically said so yet.
> Given that there are known serious bugs in 2.0.2 [*] and that release is now 8 months old, that seems plausible to me. But another plausible explanation is that they've decided to throw all their effort at a 3.0 release.
> Do you have any hard evidence that would help me decide between these two possibilities?
> [*] The glibc change in EL 5.4+ crashes the server; the creeping CPU time bug mentioned elsewhere in this thread; a web UI buggier than Brazil in the rainy season...
I've never liked the web UI, so sticking with a 1.x server version seems like the obvious choice if it is impossible to switch to ESXi and make your current OS one of the guests. Personally, the thing that seems odd to me is that RHEL broke things in an update, which is very strange considering the nature of the product. I'm not so surprised that VMware hasn't gone out of its way to do a workaround just for RHEL/CentOS. The clock issue probably can't be fixed completely when running under some other OS.
Anyway, the 1.x versions still work and don't have the glibc problem - or you might even use VMware Player if you don't mind tying a console to the VM instance. And ESXi is much better for anything resembling production, or even as a backup for production use.
On Fri, Jun 25, 2010 at 6:34 PM, Emmanuel Noobadmin centos.admin@gmail.com wrote:
> I'm wondering if virtualization could be used as a cheap redundancy solution for situations that can tolerate a certain amount of downtime.
Absolutely. I use Linux KVM as my virtualization platform. It gives you not just redundancy but also better utilization of your hardware, and it saves installation/configuration time by letting you create templates.
For example, at a couple of client locations I have 4 DNS/OpenLDAP servers in production per location (2 VMs each on 2 different hosts). I did the initial install for one VM, configured the DNS + LDAP services, tested them, and put them in production. To replicate, I simply copied the VM image onto the other host and reconfigured the services in the guest VMs to be secondary servers. The secondary DNS and LDAP servers are synced with the primary using their own internal replication mechanisms.
Now I can use the same image file as a template for any other client that wants these services in their IT infra.
Do a minimal install of the host OS (it takes less time) and make sure its clock is in sync with a reliable NTP server. The keyword is "minimal" - only those packages needed to support virtualization. Copy the necessary VM files, change the client-specific info like domain names and user entries, and the "DNS/Directory server" is operational within an hour.
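On CentOS, that cloning step can be scripted with virt-clone from python-virtinst; a sketch with made-up names:

    virt-clone --original dns-ldap-template \
               --name client2-dns1 \
               --file /var/lib/libvirt/images/client2-dns1.img
    # then, inside the clone: set the hostname/IP, domain names, and user
    # entries, and point the secondary DNS/LDAP services at the primary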
-- Arun Khan