I work for a school in a New Zealand university and we want to implement server virtualization for both CentOS and Windows systems.
Keep in mind that virtualization software is moving pretty quickly. Eight months ago Xen didn't migrate fully virtualized guests; now it does. In 5 years the ridiculous pricing structure for virtualization technology will be gone and virtualization will be a commodity where all you pay for are accelerating drivers and management tools.

If you check the virtualization page on Wikipedia (http://en.wikipedia.org/wiki/Virtualization#Virtualization_examples) you'll see a bunch of the questions you should ask to figure out your reasons for going virtual. Try to rank the features you know will help you frequently against the stuff that's just "WOW! Moving a running server is so cool!", and try to avoid cool stuff for cool stuff's sake. Live host migration is great if you have dynamic workloads, or for the occasional time you need to take a physical machine down for firmware/hardware updates during business hours, but think about how often you'll actually use it and what impact the downtime would otherwise have.
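To put some flesh on that: live migration with Xen's xm toolstack really is a one-liner once both hosts see the same shared storage and have relocation enabled in xend. The guest and destination names below are just placeholders:

    xm migrate --live guest01 xenhost2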
From my own research it seems that VMware and Xen are really the two major products to consider. Are there any others I should be looking at?
Take a peek at KVM (http://kvm.qumranet.com/kvmwiki/Guest_Support_Status). It might not be ready for prime time yet, but it is favored by the kernel maintainers for its simplicity and cleanliness, so it's likely to end up going further than Xen. Do you really think hypervisors and management software aren't going to end up in hardware?
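To give a sense of how lightweight KVM is to try out, on a box with the kvm modules loaded you can boot a guest straight from the command line; the image name and sizes here are just placeholders (on some distros the binary is simply called kvm):

    qemu-kvm -m 512 -smp 1 -hda centos5.img -net nic -net user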
If it's "Enterprise Level Support" and performance you pretty much have to go with VMware. Realistically, for most companies and workloads way to many things are tagged as "Requiring Enterprise Class", and you can get away with Xen and KVM. The free VMWare Server (aka GSX) is a completely different beast from VMWare ESX, performs pretty terribly, and is almost worthless for production servers. ESX is amazing, and I'd recommend it if you have the money, but I it's like 3K every 2 sockets and needs a san to be very useful. You can quickly rack up 50 grand in hardware and licensing just to get off the ground.
If I had the time, I'd like to try using Xen with an OpenSolaris ZFS iSCSI target as shared storage, but alas I do not have that time.
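For anyone who does have the time, the OpenSolaris end of that setup is roughly a couple of commands; the pool and volume names are made up, and the Xen hosts would then point their iSCSI initiators at the Solaris box:

    zfs create -V 100G tank/xenguest01
    zfs set shareiscsi=on tank/xenguest01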
Is anyone running Linux guest OSes inside a Windows host? If so, can you share your reasons for doing this?
I've done this for people I work with, either because Cygwin is too much of a moving target or to test that their code compiles and works on both platforms. I also sniffed a lot of glue when I was younger.
Patrick