Hi all,
I'm happy to announce a new tutorial!
https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial
This tutorial walks you through the entire process of building a 2-node cluster for making KVM virtual machines highly available. It uses Red Hat Cluster Services v3 and DRBD 8.3.12, and it is written so that you can build it with entirely free software or in a fully Red Hat-supported environment.
Highlights:

* Full network and power redundancy; no single point of failure.
* All off-the-shelf hardware; storage via DRBD.
* Starts with the base OS install; no clustering experience required.
* All software components explained.
* All testing steps included.
* The configuration is used in production environments!
This tutorial is totally free (no ads, no registration) and released under the Creative Commons 3.0 Share-Alike Non-Commercial license. Feedback is always appreciated!
On 01/03/2012 10:29 AM, Digimer wrote:
Hi all,
I'm happy to announce a new tutorial!
Hello Digimer,
Thanks for sharing this. I might try it in a couple of months as I'm not ready yet (need to grasp some concepts/technologies first). I also haven't used KVM but I have some experience with VMware (vSphere Clusters).
For vSphere clusters you need a shared storage system: ideally (in order of preference) you'll be using an FC SAN, an iSCSI SAN or a NAS (serving NFS). I'm interested in the DRBD part here. Did you use it because you didn't have access to a shared storage system, or is it a requirement for a particular functionality you wanted? Have you done it before with a shared storage system? Is there any considerable performance difference (DRBD vs. shared storage)?
Thanks!
Best regards, Jorge
On 01/04/2012 11:52 AM, Jorge Fábregas wrote:
When you get a chance to try it out, please feel free to ask for help if you run into any issues.
I chose DRBD because it is easy to implement and does not require external storage. I've had very good success with DRBD's performance, getting near-capacity speeds out of it (that is, near the speed of the underlying storage). The main limitation is that DRBD is best suited to exactly two nodes. You can do three nodes with a stacked configuration, but I've not played with that, so I can't comment on its effectiveness.
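To make the two-node point concrete, here is a minimal sketch of a DRBD 8.3 resource. The host names, backing partitions and addresses are placeholders, and the dual-primary options shown (become-primary-on both, allow-two-primaries) are only safe with working fencing and cluster-aware storage (clustered LVM / GFS2) on top, which is exactly what the tutorial builds:

  # /etc/drbd.d/r0.res -- minimal two-node example, placeholder names and paths
  resource r0 {
      protocol C;                   # synchronous; a write completes only once both nodes have it

      startup {
          become-primary-on both;   # dual-primary, needed for live migration of VMs
      }

      net {
          allow-two-primaries;      # only sane with fencing plus clustered LVM/GFS2 above DRBD
      }

      on node1.example.com {
          device    /dev/drbd0;
          disk      /dev/sda5;       # backing partition on this node
          address   10.10.10.1:7788; # dedicated replication link
          meta-disk internal;
      }

      on node2.example.com {
          device    /dev/drbd0;
          disk      /dev/sda5;
          address   10.10.10.2:7788;
          meta-disk internal;
      }
  }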
As for a comparison with external storage, I can't say; I don't have corporate backing or a hardware budget. :) I suspect, though, that the real question will not be so much FC SAN vs. DRBD as the speed of the underlying storage and the number and type of VMs hitting that storage. The consistent issue I have to deal with in production is storage seek latency. Thankfully, 15k drives and sufficient caching seem to resolve this in most cases. Also, the distributed locking, by its nature, can be a source of slowdown. So when performance matters, allocate time to tune both the storage and the locking; that will matter more than the particular flavour of storage.
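On the tuning side, the first knob I usually point at is the resync rate: by default a resync can eat the bandwidth your VMs need, so capping it keeps latency sane during a rebuild. A rough sketch in DRBD 8.3 syntax, with a placeholder number (a common rule of thumb is roughly a third of the slower of your replication link or disk bandwidth):

  # in the resource, or in the common section of /etc/drbd.d/global_common.conf
  syncer {
      rate 33M;    # cap background resync at ~33 MB/s so live VM I/O keeps its latency
  }

Watching /proc/drbd while running the testing steps also gives a quick read on connection state and resync progress as you tune.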
Cheers!