Take a look at iSCSI for the storage servers. iSCSI Enterprise Target is what I use here and it works well for us.
You don't really need a shared filesystem if you are doing direct block I/O to LVs or raw partitions, because Xen migration will handle the hand-off. You will need one if you are using flat files, though. For that reason I recommend LVs or raw partitions: clustered filesystems put serious overhead on Xen guest I/O.
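As a rough sketch of what exporting an LV as a raw block device looks like with iSCSI Enterprise Target, an /etc/ietd.conf entry might be as follows (the IQN and VG/LV names here are made up for illustration):

```
# /etc/ietd.conf -- example only; adjust IQN and LV path to your setup
Target iqn.2008-01.com.example:storage.xen-vol1
    # Type=blockio does direct block I/O, avoiding double caching
    Lun 0 Path=/dev/vg0/xen-vol1,Type=blockio
```

The dom0 then logs into the target with the iSCSI initiator and hands the resulting device to the guest as a phy: disk, so both hosts see the same backing store during a migration.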
-Ross
-----Original Message-----
From: centos-bounces@centos.org
To: CentOS mailing list <centos@centos.org>
Sent: Wed Jan 02 17:44:19 2008
Subject: [CentOS] Xen, GFS, GNBD and DRBD?
Hi all,
We're looking at deploying a small Xen cluster to run some of our smaller applications. I'm curious to get the list's opinions and advice on what's needed.
The plan at the moment is to have two or three servers running as the Xen dom0 hosts and two servers running as storage servers. As we're trying to do this on a small scale, there is no means to hook the system into our SAN, so the storage servers do not have a shared storage subsystem.
Is it possible to run DRBD on the two storage servers and then export the block devices over the network to the xen hosts? Ideally the goal is to have the effect of shared storage on the xen hosts so that domains can be migrated between them in case one server needs to go offline. Do I run GFS on top of the DRBD mirrored device, exported via GNBD to the xen hosts; or the other way around, using GNBD to export the DRBD mirrored device and then GFS running on the xen hosts?
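For reference, mirroring a block device between the two storage servers with DRBD comes down to a resource definition like the following (hostnames, disks, and addresses are placeholders, not from the original setup):

```
# /etc/drbd.conf -- minimal sketch of one mirrored resource
resource r0 {
  protocol C;              # synchronous replication: writes hit both nodes
  on store1 {
    device    /dev/drbd0;  # the mirrored device you export/stack on
    disk      /dev/sda3;   # local backing partition
    address   192.168.1.1:7788;
    meta-disk internal;
  }
  on store2 {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}
```

Whatever sits on top (GNBD, iSCSI, or GFS) would then export or mount /dev/drbd0 on the current primary node, not the raw backing partition.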
Is this possible; is there an easier/simpler/better way to do it?
Thanks,
Tom

_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
On 03/01/2008, at 9:55 AM, Ross S. W. Walker wrote:
Take a look at iSCSI for the storage servers. iSCSI Enterprise Target is what I use here and it works well for us.
You don't really need a shared filesystem if you are doing direct block I/O to LVs or raw partitions, because Xen migration will handle the hand-off. You will need one if you are using flat files, though. For that reason I recommend LVs or raw partitions: clustered filesystems put serious overhead on Xen guest I/O.
-Ross
Ross,
I can use DRBD to mirror data between the two storage servers and iSCSI to export the block devices, but how will iSCSI cope with failure of one storage server?
Can I use Heartbeat and CRM to fail over the host IP and iSCSI target to the other storage server?
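With a Heartbeat v1-style configuration, that kind of failover group can be expressed in one haresources line; a sketch under assumed names (node name, virtual IP, and the name of the iSCSI target init script all depend on your distribution and setup) would be:

```
# /etc/ha.d/haresources -- resources started together on the active node,
# in order: promote DRBD to primary, bring up the service IP, start the target
store1 drbddisk::r0 IPaddr::192.168.1.10/24/eth0 iscsi-target
```

Initiators connect only to the floating IP, so after a failover they see the same target reappear on the surviving node and can reconnect, provided the initiator's retry timeout is long enough to ride out the switch.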
Regards, Tom
Tom,
Check out the following URL. It should answer most [all?] of your questions about creating an iSCSI target using two CentOS boxes, Heartbeat, and DRBD.
http://www.pcpro.co.uk/realworld/82284/san-on-the-cheap.html
Hope this helps. -Ken