Just wanted to get the list's opinion on clustering and what project to use. Any info would be greatly appreciated. Thanks
There are all types of clustering. What are you looking to do?
I guess the main objective would be availability.
We need more information than just an "Availability Cluster."
What application(s) do you want to cluster? What sort of environment/budget are you working with? What objective(s) are you trying to achieve? What are your expectations of the cluster itself, beyond just high availability?
Right now we have about 30 or so Linux servers scattered throughout our district. We were looking at ways of consolidating, and some sort of redundancy would be nice. Will clustering not work with certain apps? We have a couple of MySQL databases, an Oracle database, SMB shares, NFS, email, and web servers.
Maybe you're looking at putting them in a virtual environment; clustering applications like that is fairly complex. Oracle has its own clustering (RAC), MySQL has clustering (with some potentially serious limitations depending on your DB size), and NFS clustering is yet another animal. As for Samba clustering, CIFS is a stateful protocol, so there really isn't a good way to do clustering there, at least with generic Samba that I'm aware of; if a server fails, the clients connected to it will lose their connection, and potentially data if they happened to be writing at the time.
In any case it sounds like clustering isn't what you're looking for. I would look towards putting the systems in VMs with HA shared storage if you want to consolidate and provide high availability.
nate
I'm in the process of going through something like that right now. The solution we're pursuing is to virtualize our existing physical servers as virtual machines and consolidate those VMs on a smaller number of larger servers.
The tools we're using allow us to keep a warm copy of a VM on a redundant server, and if we lose an entire server we're up within 3-5 min with minimal data loss. As the servers we're installing have VMware ESXi embedded and storage is pulled from redundant iSCSI backends, data loss due to server failure is minimal. And as the backup process includes regular off-site backups of the data and VMs to another office, we can, in theory, lose an entire building and still continue to function.
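As a rough illustration of the off-site piece (the hostnames and paths here are made up, and passwordless ssh keys to the remote box are assumed), a nightly rsync job is often all it takes:

  # /etc/cron.d/offsite-backup -- push last night's VM/data backups off site at 02:30
  30 2 * * * root rsync -az --delete /vmstore/backups/ backup1.example.com:/srv/offsite/vmstore/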
Thanks for the info. Looks like VMs would be the way to go. I have been looking at VMware and VirtualBox. Would you recommend VMware over VirtualBox?
AFAIK, VirtualBox is desktop-only virtualization, while VMware has more offerings (desktop, server, cloud, etc.).
What are your thoughts on VMware Server over ESXi? I really do not want to have to budget for virtualization if I do not have to. Thanks for any info.
Bo
Depends on the hardware; ideally ESXi, though it is very picky about hardware.
And you should budget for it; storage will be a big concern if you want to provide high availability. A good small storage array (a few TB) starts at around $30-40k.
nate
Have you investigated any of the mostly-software alternatives for this, like Openfiler, NexentaStor, etc., or rolling your own iSCSI server out of OpenSolaris or CentOS?
No I have not, but now that you mention it I will definitely look into these. Thanks again for all your help and info. This has been a great discussion. Bo Lynch
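For anyone who wants to try the roll-your-own route on CentOS, a rough sketch of an iSCSI target using scsi-target-utils might look like the following (the IQN and LVM volume names are made up, and you would want to restrict initiator access rather than leave it open):

  yum install scsi-target-utils
  service tgtd start && chkconfig tgtd on
  # define a target and export an LVM volume as LUN 1
  tgtadm --lld iscsi --op new --mode target --tid 1 \
         -T iqn.2010-02.org.example:vmstore.lun1
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
         -b /dev/vg_storage/lv_vmstore
  tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL   # open to all initiators; tighten in production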
I have, and it depends on your needs. I ran Openfiler a couple of years ago with ESX and it worked OK. The main issue there was stability. I landed on a decent configuration that worked fine as long as you didn't touch it (kernel updates often caused kernel panics on the hardware, which was an older HP DL580). And when Openfiler finally came out with their newer "major" version, the only upgrade path was to completely re-install the OS (maybe that's changed now, I don't know).
A second issue was availability. Openfiler (and others) have replication and clustering in some cases, but I've yet to see anything come close to what the formal commercial storage solutions can provide (seamless failover, online software upgrades, etc.). Mirrored cache is also a big one.
Storage can be the biggest pain point to address when dealing with a consolidated environment, since in many cases it remains a single point of failure. Network fault tolerance is fairly simple to address, and throwing in more servers to account for server failure is easy, but the data can often only live in one place at a time. Some higher-end arrays offer synchronous replication to another system, though that replication is not application aware (aka crash consistent), so you are at some risk of data loss when using it with applications that are not aggressive about data integrity (like Oracle, for example).
A local VMware consulting shop here that I have a lot of respect for says that in their experience, doing crash-consistent replication of VMFS volumes between storage arrays, there is about a 10% chance that one of the VMs on the volume being replicated will not be recoverable; as a result they heavily promoted NetApp's VMware-aware replication, which is much safer. My own vendor, 3PAR, released similar software a couple of weeks ago for their systems.
Shared storage can also be a significant pain point for performance with a poor setup.
Another advantage of a proper enterprise-type solution is support, mainly for firmware updates. My main array at work, for example, is using Seagate enterprise SATA drives. The vendor has updated the firmware on them twice in the past six months. So not only was the process easy, since it was automatic, but because it's their product they work closely with the manufacturer, are kept in the loop when important updates/fixes come out, and have access to them. Last I checked, it was a very rare case to be able to get HDD firmware updates from the manufacturers' web sites.
The system "worked" perfectly fine before the updates, I don't know what the most recent update was for but the one performed in August was around an edge case where silent data corruption could occur on the disk if a certain type of error condition was encountered, so the vendor sent out an urgent alert to all customers using the same type of drive to get them updated asap.
A co-worker of mine had to update the firmware on some other Seagate disks (SCSI) in 2008 on about 50 servers due to a performance issue with our application. In that case he had to go to each system individually with a DOS boot disk and update the disks, a very time-consuming process involving a lot of downtime. My company had spent almost a year trying to track down the problem before I joined; I ran some diagnostics and fairly quickly narrowed the problem down to systems running Seagate disks (some other systems running the same app had other brands of disks, stupid Dell, and were not impacted).
I suspect a lot of firmware update tools don't work well with RAID controllers either, since the disks are abstracted, further complicating the issue of upgrading them.
So it all depends on what the needs are; you can go with the cheaper software options, just try to set expectations accordingly when using them. Which for me is basically: "don't freak out when it blows up".
nate
On 2/5/2010 10:04 AM, nate wrote:
[...] And when Openfiler finally came out with their newer "major" version, the only upgrade path was to completely re-install the OS (maybe that's changed now, I don't know).
Somewhere along the line they switched from a CentOS base to rPath for better package management, but I haven't followed them since.
[...]
Another advantage of a proper enterprise-type solution is support, mainly for firmware updates. [...]
I had an equally frustrating experience with a Dell-rebranded NetApp several years back. The unit shipped with a bad motherboard FC controller, which was a known problem, and they also included an add-on card. But the guy who set it up called support, where he was told that the problem had been fixed by this serial number and that he should connect to the motherboard port. The symptom was that once or twice a year it would see something wrong with a drive, kick it out, and rebuild on a hot spare. Eventually it lost several disks at once and lost the data. After I dug up the history I switched controllers and reinstalled everything from scratch, and it worked after that, but by then nobody trusted it and it was only used for backups. So I no longer believe that paying a lot for a device that is supposed to have a good reputation is a sure thing - or that having a support phone number is going to make things better. Everyone has different war stories, I guess...
A co-worker of mine had to update the firmware on some other Seagate disks (SCSI) in 2008 on about 50 servers due to a performance issue with our application
Oh yeah - the drives in this device needed that too - but it wasn't that bad to do on one device with the NetApp software.
Les Mikesell wrote:
Somewhere along the line they switched from a CentOS base to rPath for better package management, but I haven't followed them since.
Yeah, the version I had at the time was based on rPath; I think they changed to something else yet again in the past year or so.
[...] by then nobody trusted it and it was only used for backups. So I no longer believe that paying a lot for a device that is supposed to have a good reputation is a sure thing - or that having a support phone number is going to make things better. Everyone has different war stories, I guess...
Oh absolutely, nothing is a sure thing. On two separate occasions last year we had a disk failure take out an entire storage array (I speculate that fiber errors flooded the bus and that took the controllers offline); this was on low-end crap storage. One of our vendors OEMs low-end IBM storage for some of their customers, and they reported similar events on that stuff.
In 2004 the company I was at had a *massive* outage on our EMC array (CX600), with some pretty significant data loss (~60 hours of downtime in the first week alone); in the end it was traced to administrator error (it wasn't me at the time). A misconfiguration of the system allowed both controllers to go down simultaneously. Such an error is not possible to make on more modern systems (phew). I don't know what the specific configuration was, but the admin fessed up to it a couple of years later.
Which is why most vendors will try to push for a 2nd array and some sort of replication. There's only one system in the world that I know of that puts its money behind 100% uptime, and that is the multi-million-dollar systems from Hitachi. They claim they've never had to pay out on any claims.
Most other array makers don't design their systems to handle more than 99.999% uptime on the high end, and probably 99.99% on the mid-range.
BUT under most circumstances a good storage array provides far better availability than anything someone can build on their own for most applications, where "good" typically means the system would be sold starting at north of $50k.
I like my own storage array because it can have up to 4 controllers running in active-active mode (right now it has 2; we're getting another 2 installed in a few weeks). Recently a software update was installed that allows the system to re-mirror itself to another controller (or controllers) in the system in the event of a controller failure.
Normally, in a dual-controller system, if a controller goes down the system goes into write-through mode to ensure data integrity, which can destroy performance. With this feature that doesn't happen, and the system still ensures data integrity by making sure all data is written to two locations before the write is acknowledged to the host.
It goes well beyond that, though: it automatically lays data out so that it can survive a full shelf (up to 40 drives) failing without skipping a beat. RAID rebuilds are very fast (up to 10x faster than other systems), the drives are connected to a switched backplane, there are no fiber loops on the system, and every shelf of disks is directly connected to the controllers via two fiber ports. In the event of a power failure there is an internal disk in each controller that the system writes its cache out to, so no worries about a power outage lasting longer than the batteries (typically 48-72 hours). And of course, since everything is written twice, when the power goes out you store two copies of that cache on the internal disks, in case one disk happens to fail (hopefully both don't) at precisely the wrong moment.
The drives themselves are in vibration-absorbing sleds; vibration is the #1 cause of failure on disks, according to a report I read from Seagate.
http://portal.aphroland.org/~aphro/chassis-architecture.png
http://www.techopsguys.com/2009/11/20/enterprise-sata-disk-reliability/
I have had two soft failures on the system since we got it: one time a Fibre Channel port had a sort of core dump, and another time a system process crashed. Both were recovered automatically without user intervention and with no noticeable impact other than the email alerts to me.
No guarantees it won't burst into flames one day, but I do sleep a lot better at night with this system vs the last one.
My vendor also recently introduced an interesting solution for replication which involves 3 arrays providing synchronous long-distance replication. It works like this:
(while all arrays must be from the same vendor, they do not need to be identical in any way)
Array 1 sits in facility A
Array 2 sits in facility B (up to ~130 miles away, or 1.3ms RTT)
Array 3 sits in facility C (up to 3000 miles away, or 150ms RTT)
Array 1 is synchronously replicating to facility B (hence the distance limitations), and asynchronously replicating to facility C at defined intervals. In the event facility A or Array 1 blows up, Array 3 in facility C automatically connects to Array 2 and has it send all of the data up to the point Array 1 went down; I think you can get to within something like a few milliseconds of the disaster that took out Array 1 and still get all of the data to Array 3.
Setting it up takes about 30 minutes, and it's all automatic.
Prior to this, setting up such a solution would cost waaaaaaaaaay more, as you'd only find it in the most high-end systems.
It's going to be many times cheaper to get a 2nd array and replicate than it is to try to design/build a single system that offers 100% uptime.
Entry-level pricing of this particular array starts at maybe $130k and can probably go as high as $2-3M if you load it up with software (more than half the cost can be software add-ons). So it's not in the same league as most NetApp, or EqualLogic, or even EMC/HDS gear. Their low-end stuff starts at probably $70k.
nate
What are your thoughts on VMware Server over ESXi? I really do not want to have to budget for virtualization if I do not have to. Thanks for any info.
Here is a comparison of VMware ESXi and Server; notice that Server doesn't cost money.
http://www.vmware.com/products/server/faqs.html
Both are proprietary; there are a lot of good FOSS alternatives, such as:
KVM (requires modern hardware)
Xen (needs a patched kernel, available in the CentOS repos)
OpenVZ (needs a patched kernel, available in the OpenVZ repos; mainly for VPS hosting, but personally I use it)
HTH
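As a rough sketch of the Xen option above on stock CentOS 5 (the guest name, sizes and mirror URL are only examples):

  yum install xen kernel-xen libvirt python-virtinst
  # reboot into the xen kernel, then create a paravirtualized guest:
  virt-install --paravirt --name pvguest1 --ram 512 \
      --file /var/lib/xen/images/pvguest1.img --file-size 8 \
      --location http://mirror.centos.org/centos/5/os/x86_64/ \
      --nographics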
Does anyone have any experience with KVM or OpenVZ? If I can stick to something that is not proprietary that would be great. I didn't realize there were so many options. Any info would be greatly appreciated. Bo
Does anyone have any experience with KVM or OpenVZ? If I can stick to something that is not proprietary that would be great. I didn't realize there were so many options. Any info would be greatly appreciated. Bo
KVM is easier than OpenVZ (more like VMware) when using virt-manager to manage virtual machines, and the new CentOS 5.4 supports KVM (KVM is the default in the Fedora distro).
Personally I use OpenVZ because my hardware doesn't support hardware virtualization.
HTH
Athmane Madjoudj
Does anyone have any experience with KVM or OpenVZ? If I can stick to something that is not proprietary that would be great. I didn't realize there were so many options. Any info would be greatly appreciated. Bo
If you can, avoid OpenVZ; it's not a full virtualization platform, but rather container-based virtualization on a shared kernel. The moment one of the VPSes has a memory hog, the whole server will suffer.
Rather use Xen / KVM / VMware, as they give total isolation for each VPS.
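That said, for anyone who does stay on OpenVZ, the memory-hog problem can at least be bounded with per-container limits (user beancounters). A rough sketch, with made-up container ID, template and limits:

  vzctl create 101 --ostemplate centos-5-x86_64 --config vps.basic
  vzctl set 101 --hostname web01.example.com --ipadd 10.0.0.101 --save
  # privvmpages is barrier:limit in 4 KB pages (roughly 1 GB / 1.1 GB here)
  vzctl set 101 --privvmpages 262144:294912 --save
  vzctl start 101
  vzctl exec 101 cat /proc/user_beancounters   # check the limits and failcnt from inside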
OpenVZ is containerisation and not virtualisation, and therefore limits the OS you can run to a minor version of the base OS. If you need to have, say, CentOS 4, CentOS 5, Solaris 10, and Windows on the same box, then this is not for you.
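For reference, a minimal KVM guest on CentOS 5.4 via libvirt/virt-install might be built roughly like this (the package set, guest name and mirror URL are examples, and the CPU needs VT/AMD-V):

  yum install kvm kmod-kvm libvirt python-virtinst
  service libvirtd start && chkconfig libvirtd on
  virt-install --name c5guest --ram 1024 --vcpus 1 --accelerate \
      --file /var/lib/libvirt/images/c5guest.img --file-size 20 \
      --location http://mirror.centos.org/centos/5.4/os/x86_64/ \
      --nographics --extra-args "console=ttyS0"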
Bo Lynch wrote:
Does anyone have any experience with KVM or OpenVZ? If I can stick to something that is not proprietary that would be great. I didn't realize there were so many options. Any info would be greatly appreciated. Bo
Philosophically, I don't see how running on ESXi virtualization is any more or less proprietary than running on IBM (Dell, etc.) hardware directly. Unless you are just being pedantic about it, the main thing to consider is whether or not you could move your application elsewhere easily if you had to live without the unique proprietary features of any platform. And you can, if you pay attention to how things work. In fact there is some standardization being done in the virtual containers, and I'd assume VMware is a leader in that.
You make a valid point. Thanks
Here is a comparison of VMware ESXi and Server; notice that Server doesn't cost money.
http://www.vmware.com/products/server/faqs.html
both are proprietary
<snip> ESXi is free, but usable on one system. ESX is the full-blown version, costs money, and I *think* comes with the console... which, for some unknown reason, is WinDoze *only*.
I believe both can be administered via browser.
mark
Maybe because there are more Windows users than Linux and/or Mac OS X and FreeBSD users.
I have read in [1] and [2] that even Red Hat may do the same thing (a Wind0w$-only console).
[1] http://www.internetnews.com/software/article.php/3847391/Red+Hat+Virtualizat...
[2] http://www.linuxtoday.com/it_management/2009110700635NWRH
I'm not sure, but it would be helpful if someone could confirm (or not).
Best regards.
<snip> Except that VMware is *based* on RHEL. Why would you *not* have a Linux-based console?
mark
ESX(i) is pretty lightweight on the host side. There's no GUI at all and not much you can actually do there. The vCenter client is a fairly complex application - probably non-trivial to port and maintain across lots of different versions. If you're going to lose a percentage of customers based on not having an appropriate platform to run the client - well, you can do the math - they aren't dumb.
Anyway, the client doesn't need to be connected for normal operation and you can connect from different clients, so they don't have to be on a particularly reliable machine.
It's Windows-only for the management piece because it is written in .NET, and yes, it is the same for RHEV (Red Hat's virtualization server). I don't know why it has to be in .NET, but it is (probably a C# thing).
For my money, and as this is a CentOS mailing list please forgive the following recommendations, I would go with Oracle VM... because I don't have much money. OVM is free to download but has paid support options. It's a really small implementation of RHEL using the Xen kernel and has a non-Windows management UI. It supports clustering and high availability with OCFS2 and does both paravirtualization and full virtualization.
If I had more of a budget, I would go with RHEV. It costs a lot less to run compared to ESX and Hyper-V, and is higher performing too. This, of course, uses KVM and not Xen, but the performance is there. You need RHEL 5.4 and hardware compatibility. I'm not sure if you would be able to manage CentOS 5.4 hosts with RHEV, but it'd be worth a try. I don't see why it wouldn't work.
Bo Lynch wrote:
What are your thoughts on VMware Server over ESXi? I really do not want to have to budget for virtualization if I do not have to. Thanks for any info.
There is a free version of ESXi - it is really the same as the paid version with the cluster management and VMotion functions disabled. The only reason to use Server is if you need to drop it on a host that is already running things natively - or you need to display on the local console. If you are starting from scratch, install ESXi on the hardware first and put everything on guests. You do need a Windows box to run the control software when setting it up or making changes. It can use the local server's disk for storage, but eventually you'll probably want to spend money on a reliable disk subsystem.
I know some will disagree with me but for production I recommend sticking with VMware's ESXi product, which is free, unless you have need of some of the more advanced features which are available through paid options.
The downside of offerings like VirtualBox or VMware Server, where the guest OS is hosted inside an app running on a full-blown OS, is the host itself. In my experience, the smaller footprint of VMware ESX(i) reduces the amount of maintenance required and has minimal performance impact on the guest OSes.
That said, apps like VirtualBox / VMware Server do have their place. At work I routinely create virtual machines under VMware Server to experiment with new software before releasing it into the wild. The cost overhead of running Server on my own workstation is acceptable for testing, but I wouldn't consider it for production.
Citrix XenServer Pro is also free and it comes with live migration; you don't get VMotion with ESXi unless you dish out big $$ for Enterprise.
-Ross
On 2/4/2010 3:17 PM, Bo Lynch wrote:
Right now we have about 30 or so Linux servers scattered throughout our district. We were looking at ways of consolidating, and some sort of redundancy would be nice. Will clustering not work with certain apps? We have a couple of MySQL databases, an Oracle database, SMB shares, NFS, email, and web servers.
Each app has its own best way to provide redundancy and auto-failover, and its own set of trade-offs between the added complexity and the possible reduced downtime if the primary fails.
I'd balance the options against the low-tech method of having RAID mirrors in swappable bays, with a spare similar server chassis or two around plus regular backups kept at a different location. The RAID lets you continue through the likely event of a disk failure so you can repair it at a convenient time. Other failures (motherboard, power supply) are less likely but can be handled by swapping the drives into an alternate chassis (and with CentOS you'll need to re-assign the IP addresses that are tied to the old NIC MAC addresses) with a small amount of downtime. And the backups cover things like operator or software errors (which would wipe a cluster too) or a building-level disaster that destroys the disks or the primary and spare chassis at the same time. Some apps may be worth the effort to do better.
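On the MAC address point, a rough sketch of re-pointing the config at the new chassis on CentOS (assuming the usual HWADDR binding in the ifcfg files; eth0 is just an example):

  # after moving the disks into the spare chassis:
  NEWMAC=$(ip link show eth0 | awk '/link\/ether/ {print $2}')
  sed -i "s/^HWADDR=.*/HWADDR=$NEWMAC/" /etc/sysconfig/network-scripts/ifcfg-eth0
  service network restart
  # (kudzu may also prompt about removed/new hardware on the first boot)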
Hi,
In our configurations we utilise different strategies depending on what we want to achieve, as there isn't really a panacea for this... We use virtual servers, hot-standby firewalls/routers, load-balanced servers, warm-standby servers (using such things as MySQL replication, rsync and DRBD to keep the boxes in sync), and shared storage from disk arrays, plus servers with local disk arrays for local performance and resilience. We have also utilised Hadoop (a distributed filesystem) on some, again to provide resilience within the limitations of Hadoop.
S.
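As one concrete example of the warm-standby idea, classic MySQL master-to-slave replication can be set up roughly like this (the server IDs, replication user/password and binlog coordinates below are placeholders):

  # master /etc/my.cnf, [mysqld] section:   server-id=1  log-bin=mysql-bin
  mysql -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.%' IDENTIFIED BY 'secret';"
  mysql -e "SHOW MASTER STATUS;"        # note the binlog File and Position

  # standby /etc/my.cnf, [mysqld] section: server-id=2
  mysql -e "CHANGE MASTER TO MASTER_HOST='10.0.0.10', MASTER_USER='repl',
            MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;
            START SLAVE;"
  mysql -e "SHOW SLAVE STATUS\G"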
On Thu, February 4, 2010 6:34 pm, Les Mikesell wrote:
I'd balance the options against the low-tech method of having RAID mirrors in swappable bays, with a spare similar server chassis or two around plus regular backups kept at a different location. [...]
Currently we are doing the low-tech method: daily and weekly backups, both onsite and off, along with RAID and all that other good stuff. I was just wondering if clustering was a better way of handling things. Thanks for the info. Bo
If you are looking at VMware, ESX(i) is the nicest of the bunch but moderately expensive for the full version that does clustering and live moves - and you also need a highly reliable iSCSI disk server. But even the free version is very nice in terms of the management tools, low overhead, and the ability to overcommit the host's RAM. You could start by building shadow copies of most of your servers that could be activated as needed, with perhaps a few being live with application-level failover (heartbeat, DRBD, database replication, etc.). ESXi is also a nice lab framework for testing new things.
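For the application-level failover piece, an old-style two-node Heartbeat setup is about the simplest sketch (the node names, service IP and shared secret are placeholders; package availability depends on which repos you have enabled):

  yum install heartbeat
  # /etc/ha.d/ha.cf      : node web1.example.com web2.example.com / bcast eth1 / auto_failback off
  # /etc/ha.d/haresources: web1.example.com 192.168.1.50 httpd
  # /etc/ha.d/authkeys   : auth 1
  #                        1 sha1 SomeSharedSecret      (chmod 600 authkeys)
  service heartbeat start   # on both nodes; the standby takes over the IP and httpd if web1 dies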
When you talk about the free version, are you referring to VMware Server or is there a free version of ESXi? The website is a little misleading with "free trial" and such.
ESXi is free to use. ESX / vSphere is the paid version.
A common point of confusion: while there is a free license available for ESXi and not for ESX, you can pay for ESXi to unlock additional functionality (such as live migration, HA, DRS, etc.) and still keep the "thin" hypervisor footprint that ESXi offers.
nate
You have to register, but the way it works is that you download a full-featured ESXi demo with a 30-day trial license, and you get free license keys that you can install any time within the 30 days to downgrade it to run for an unlimited time with the clustering and cluster management features disabled. You also need to download the vCenter control program and the image conversion tool.
And they'll send some email occasionally, but not a huge amount.
There are also a lot of community scripts for management as well.
http://communities.vmware.com/docs/DOC-9852
-- Les Ault VCP, RHCE Linux Systems Administrator, Office of Information Technology Computing Systems Services: Technical Services and Research
The University of Tennessee 135C5 Kingston Pike Building 2309 Kingston Pike Knoxville, TN 37996 Phone: 865-974-1640