I have been working off and on with Xen and KVM on a couple of test hosts for the past year or so, and while everything now seems to function as expected, more or less, I find myself asking the question: Why?
We run our own servers at our own sites for our own purposes. We do not, with a few small exceptions, host alien domains. So, while rapidly provisioning or dynamically expanding a client's VM might be very attractive to a public hosting provider, that is not our business model at all.
Why would a small company not in the public hosting business choose to employ VM technology? What are the benefits over operating several individual small-form-factor servers or blades instead? I am curious, because what I can find on the net about VM use cases, outside of public hosting or application testing, seems to me mostly puff and smoke.
This might be considered OT, but since CentOS is what we use, it seems best that I ask here to start.
On 05/27/2011 02:33 PM, James B. Byrne wrote:
Why would a small company not in the public hosting business choose to employ VM technology? What are the benefits over operating several individual small-form-factor servers or blades instead?
Live migration between physical hosts. Also, ease of recovery in the event of a failure. You can move the VM to entirely new hardware when the old hardware is no longer powerful enough, etc.
Server utilization and separation. I need 10 web servers, none of which are going to be busy, but each organization in my business wants its "own". Ten VMs on a 2-CPU box make more sense than one web server on each of ten boxes. Add a second VM host for some redundancy, etc., etc.
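To make that concrete: with KVM/libvirt on CentOS, the live migration itself is a one-liner. A rough sketch (the host and guest names are made up, and it assumes both hosts see the guest's disk on shared storage):

  # Live-migrate guest "web1" to the second host over SSH; with shared
  # storage the guest keeps running while it moves.
  virsh migrate --live web1 qemu+ssh://kvm2.example.com/system

  # Confirm it is now running on the other host.
  virsh -c qemu+ssh://kvm2.example.com/system list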
On Fri, 27 May 2011, Digimer wrote:
Live migration between physical hosts. Also, ease of recovery in the event of a failure. You can move the VM to entirely new hardware when the old hardware is no longer powerful enough, etc.
And if you have licensed software that ties its network license keys to a specific MAC address, you no longer have to tie the license server to a specific physical box.
On May 27, 2011, at 4:08 PM, Steve Thompson wrote:
And if you have licensed software that ties its network license keys to a specific MAC address, you no longer have to tie the license server to a specific physical box.
And if you plan properly, you can keep the same MAC addresses on all your guests when you upgrade the host.....
Steve Thompson wrote:
And if you have licensed software that ties its network license keys to a specific MAC address, you no longer have to tie the license server to a specific physical box.
Just to clarify: you can set whatever MAC address you like on your guest's virtual NIC.
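For example (the guest name and MAC below are made up), the MAC is just an attribute in the guest's libvirt definition, so it follows the guest to whatever physical host it runs on:

  # The MAC lives in the guest's XML definition, not in any hardware:
  virsh dumpxml license-vm | grep 'mac address'
  #     <mac address='52:54:00:aa:bb:cc'/>
  # Change it with virsh edit, or set it when the guest is created.
  virsh edit license-vm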
Ljubomir
On Fri, May 27, 2011 at 2:33 PM, James B. Byrne byrnejb@harte-lyne.ca wrote:
Why would a small company not in the public hosting business choose to employ VM technology? What are the benefits over operating several individual small-form-factor servers or blades instead?
I would have to say that the use case really depends on what you are doing with your servers.
For us, when our facility was built ~6 years ago, they bought racks and racks of servers from Dell: racks of 1U dual-CPU boxes, racks of 2U dual-CPU boxes, and racks of 4U quad-CPU boxes, with varying amounts of RAM and drive space. That was the way to have resources available to spin up a new box for someone in a timely manner, rather than waiting a few weeks for one to be delivered. But by year two it became obvious that many of the boxes were not being used, or were really underutilized: the software didn't need that much power, but it was a Windows app that wanted to have the box to itself. So virtualization came into play. Now, instead of 10 racks of random servers of varying specs, we have one rack with some really beefy servers, plus a couple of racks with individual boxes for the specific applications that were too resource-intensive to make sense to virtualize.
The advantage we have been able to realize is the ability to spin up a 2-processor, 1 GB VM for someone to start their project, and then, if it gets off the ground and they need more, bump it up to what they need by editing a configuration rather than by adding memory to a box, or worse yet migrating it from one box to another. This has resulted in much better utilization of the hardware that we have. Many servers don't need lots of resources; they just need to have the resources they do have all to themselves, or at least to think that they do. We have many servers that only see a spike in workload periodically during the week or month, when a researcher has collected the data they need to work on. The rest of the time they sit there waiting.
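For the KVM/libvirt case, that configuration edit looks something like this (the guest name is made up, the sizes are in KiB, and the exact flags vary a bit with the libvirt version):

  # Grow the guest to 4 vCPUs and 8 GB; --config applies at the next boot.
  virsh setvcpus devbox 4 --config
  virsh setmaxmem devbox 8388608 --config
  virsh setmem devbox 8388608 --config
  virsh shutdown devbox     # graceful shutdown; wait for it to power off
  virsh start devbox        # comes back up with the new resources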
The other real advantage for us is the DR solution that virtualizing gives us. Previously, DR meant having a stack of machines at the DR site waiting to be recovered to, and deciding which servers were important enough to need a box with their name on it and which could wait until we could call Dell and get a new one shipped out. Now we just have a smaller VMware cluster at the other end, running some hot DR servers and ready to bring up the replicated LUNs from our primary site in case of a failure. We can bring those VMs back up in a short time instead of having to do bare-metal installs and then restore from backups. Going forward, we will be scaling so that we can share our DR cluster at the remote site with the group based there, who use it as their primary, and they will be able to use our cluster here as their DR, allowing us to consolidate the number of hosts needed. That is something that just isn't possible with physical boxes.
I can't say that you don't take a monetary hit to buy the shared storage, networking, and software required to support a virtual world, but I personally see it as worth it.
On the other hand, there are those systems that just make no sense to virtualize. We run an HPC cluster with hundreds of nodes. There is no reason to virtualize it: we are already trying to squeeze every bit of performance out of the boxes, adding a hypervisor would defeat that, and we don't need or want more than one OS install running on a physical box.
So really look at your workload and your business case. Virtualization isn't the be-all and end-all silver bullet that will make everything in IT perfect. It is just another tool in your toolbox that can help in some cases; but, much as when you show up with a hammer and find a screw, you might be better off going back for a screwdriver than trying to pound it in.
On May 27, 2011, at 2:33 PM, James B. Byrne wrote:
Why would a small company not in the public hosting business choose to employ VM technology? What are the benefits over operating several individual small-form-factor servers or blades instead?
Well, we do virtualization for an unusual reason here: we do it to reduce RFI, since we're a radio astronomy observatory. 15 VMs on a pair of largish hosts generate much less interference than 15 physical boxes with less effective shielding.
Virtualization also allows rapid changes in the provisioning of storage and other resources, rapid 're-imaging' for emergency rollback of the whole guest, and virtualization-based HA.
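With qcow2-backed KVM guests, for example, that rollback can be a snapshot taken before a risky change. A sketch (guest and snapshot names made up; the exact snapshot subcommands depend on your libvirt version):

  # Before the risky change:
  virsh snapshot-create-as guest1 pre-upgrade
  # If it goes badly, put the whole guest back the way it was:
  virsh snapshot-revert guest1 pre-upgrade
  # See what snapshots exist:
  virsh snapshot-list guest1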
And, most importantly, it allows you to spread the cost of multiple guests across one or two more reliable and more capable boxes; two Dell PE6950s (or equivalent four-socket, multicore-capable servers) are going to cost less than 15 one- or two-RU boxes with equivalent redundancy and reliability.
Plus, I don't have to buy more hardware to provision another server, especially if it is a low-performance-requirement guest.
On 5/27/2011 1:33 PM, James B. Byrne wrote:
Why would a small company not in the public hosting business choose to employ VM technology? What are the benefits over operating several individual small-form-factor servers or blades instead?
It is fairly difficult to avoid applications that require specific OS versions or conflict with certain other applications, especially as things evolve over time. This means you are likely to end up with machines (and their backups) dedicated to specific legacy apps even though you could otherwise consolidate them on newer/faster hardware and reduce the maintenance/power/space requirements. Putting them on VMs lets you separate the physical resource concerns from the applications you support. And in some cases you might set up backup/failover instances as VMs even where the normally-live host is a physical machine.
Aside from consolidating things that don't want to be consolidated (a somewhat odd concept for an OS that should theoretically be able to run multiple versions of most libraries and applications at once, though yum and rpm won't like it), there is also the issue of moving to different hardware if an older machine crashes. While CentOS is pretty good about detecting hardware during an install, if you want to restore your backup onto something different you'll need to know as much as anaconda does to make it work, whereas a VM image will work the same way regardless of the physical hardware.
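In practice a guest is just a disk image plus an XML definition, so moving it to replacement hardware amounts to copying two files. A sketch (hostnames and paths made up; it assumes the new host's CPU is compatible with whatever CPU the guest is configured with):

  # On the old or failing host: capture the definition and the disk.
  virsh dumpxml legacyapp > legacyapp.xml
  scp /var/lib/libvirt/images/legacyapp.img newhost:/var/lib/libvirt/images/
  scp legacyapp.xml newhost:/tmp/

  # On the new host: define and boot it; no anaconda, no driver hunting.
  virsh define /tmp/legacyapp.xml
  virsh start legacyapp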
At Fri, 27 May 2011 14:33:23 -0400 (EDT), James B. Byrne wrote:
Why would a small company not in the public hosting business choose to employ VM technology? What are the benefits over operating several individual small-form-factor servers or blades instead?
One of the benefits of VMs over small-form-factor servers or blades is economies of scale: a 'larger' server box (larger == additional memory and disk space, maybe more cores) might be cheaper than several smaller, lower-end machines. And given the way things are going in terms of many-core processors, memory, and disk prices, these sorts of economies of scale are only going to increase: it may stop being cost-effective (or even possible) to get a 2-core box with 2-4 gig of RAM and a 160 gig disk, and using a 6-core processor with 64 gig of RAM and a 4 TB disk is insane for a *simple* web server, even if you are hosting a couple dozen virtual hosts.
But taking the other side of the argument, here are two scenarios where I *wouldn't* use virtualization (one could certainly enumerate more):
1. A production DB server or server cluster.
2. I've had services where I needed to maximize uptime. One option I tried was VMs, with the ability to move the VM back and forth between hosts. That might cover hardware failure, but I'd still take outages when I needed to upgrade software in the VM. Moving to a traditional HA solution on physical hardware means the outages are now measured in seconds instead of minutes, and most of the time are undetectable by the users.
I also tried services where the HA nodes are themselves VMs, but was less than impressed with operational stability.
When CentOS 6 comes out, though, I'll be interested to see how (2) behaves when it comes time to do a rolling upgrade from CentOS 5 (bring a node down, install and reconfigure C6 from scratch, rejoin the cluster, have C6 take over the services, then upgrade the other node).
Thank god for test environments. And backups.
Devin
On Friday, May 27, 2011 05:35:32 PM -0400, Steve Thompson smt@vgersoft.com wrote:
On Fri, 27 May 2011, Devin Reade wrote:
Thank god for test environments. And backups.
Backups? Que?
That was just an OT aside: while the optimist in me hopes for an easy rolling upgrade from CentOS 5 to 6 in that cluster configuration, the pessimist/realist in me anticipates doing it on a test cluster first and having a backout plan for production.
You know: The usual concerns in a production environment.
Devin
On May 27, 2011, at 5:29 PM, Devin Reade gdr@gno.org wrote:
But taking the other side of the argument, here are two scenarios where I *wouldn't* use virtualization (one could certainly enumerate more):
1. A production DB server or server cluster.
I actually have really good experience with MSSQL and ESXi. I have six big SQL 2005 servers running virtualized, and the RDMs actually performed better under ESXi than they did on the bare metal. It really depends on the backend storage; virtualized CPU and memory perform very well.
2. I've had services where I needed to maximize uptime. One option I tried was VMs, with the ability to move the VM back and forth between hosts. That might cover hardware failure, but I'd still take outages when I needed to upgrade software in the VM. Moving to a traditional HA solution on physical hardware means the outages are now measured in seconds instead of minutes, and most of the time are undetectable by the users.
I also tried services where the HA nodes are themselves VMs, but was less than impressed with operational stability.
Again, I have had good experiences with VMware HA and FT. HA will restart a VM if it fails or if the virtualization host goes down; FT will keep a mirrored copy running, and if the primary fails, the secondary takes over. (I always felt those terms were reversed.)
When CentOS 6 comes out, though, I'll be interested to see how (2) behaves when it comes time to do a rolling upgrade from CentOS 5 (bring a node down, install and reconfigure C6 from scratch, rejoin the cluster, have C6 take over the services, then upgrade the other node).
I love CentOS but clustering it is a PITA. Virtualization clusters should just work.
Thank god for test environments. And backups.
Amen.
-Ross
From: James B. Byrne byrnejb@harte-lyne.ca
I have been working off and on with Xen and KVM on a couple of test hosts for the past year or so ... I find myself asking the question: Why?
I saw a company architecture presentation where they had paired servers with two VMs each, plus DRBD and Heartbeat. On each server there is an active VM and a 'backup' VM, crossed: if one server goes down, the 'backup' VM on the second server is activated.
JD