Hi Guys,
I have two mail servers and I want them to use GFS to share some storage device. I don't want to buy a storage device, but I have a spare server that maybe could be used. Does anybody here use GFS without a proper SAN? Say, just somehow using a server running Linux? Or does anybody have any thoughts?
Thanks, Wayne
On Thu, Aug 25, 2005 at 08:19:30PM +0100, Wayne wrote:
If your storage isn't HA anyway, just use NFS. It should be much simpler to set up.
Alan Hodgson wrote:
If your storage isn't HA anyway, just use NFS. It should be much simpler to set up.
I was looking into this for a while and couldn't find any decent documentation on how to do GFS "on the cheap". I ended up just using NFS for my simple two-node web farm. The ultramonkey.org packages make it super easy.
--Ajay
Thanks guys, I guess I'll just use NFS until I get some proper storage.
On 25/08/2005 21:12, "Ajay Sharma" ssharma@revsharecorp.com wrote:
I was looking into this for a while and couldn't find any decent documentation on how to do GFS "on the cheap". I ended up just using NFS for my simple two-node web farm. The ultramonkey.org packages make it super easy.
--Ajay
Hi,
On Thu, Aug 25, 2005 at 01:12:40PM -0700, Ajay Sharma wrote:
I was looking into this for a while and couldn't find any decent documentation on how to do GFS "on the cheap". I ended up just using NFS for my simple two-node web farm. The ultramonkey.org packages make it super easy.
It's the same method that's described for Oracle RAC: FireWire and a shared disk on that medium.
The Linux sbp2 driver for FireWire devices supports 'non-exclusive login', where one can mount the same disk across N nodes. A few years back I personally verified a three-node RAC installation with this method (with shared SCSI one can only get up to two :)
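On the 2.6-era kernels used for those setups, the shared login is enabled with an sbp2 module option; the option name below is the one from the Oracle-on-FireWire howtos and may differ on other kernel versions, so treat it as a sketch:

```
# /etc/modprobe.conf fragment -- allow more than one node to log in
# to the same FireWire (sbp2) disk at once; exclusive login is the default
options sbp2 sbp2_exclusive_login=0
```

After reloading the module, each node can log in to the same disk, and the cluster filesystem is what keeps the concurrent access safe.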
Sharing a SCSI bus is the next method, and it works pretty decently for evaluation purposes too. Just put in a few SCSI adapters, change the controller IDs so they don't conflict, and hook up some external box with two connectors between them. Obviously this isn't an easy medium to boot from, but one doesn't want to boot from GFS media anyway.
The third method, which is simply the cheapest, would be iSCSI. There is
http://iscsitarget.sourceforge.net/
which is extremely stable, but not so user-friendly, as there aren't many tools available. One would make one box act as the iSCSI server and point clients at it. This is what I use on my LAN as an iSCSI server ATM. Actually my NFS server now acts as the iSCSI server too, as I've tested this iSCSI part and found it stable and probably not going to crash my NFS server :)
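For anyone who wants to try that target, it's configured through /etc/ietd.conf; a minimal one-LUN export looks roughly like the following (the IQN and the backing device are made-up examples):

```
# /etc/ietd.conf -- iSCSI Enterprise Target, minimal single-LUN export
# (restart ietd after editing)
Target iqn.2005-08.lan.example:storage.gfs1
        # export a whole block device; Type=fileio goes through the page cache
        Lun 0 Path=/dev/sdb,Type=fileio
        # optional CHAP authentication:
        # IncomingUser someuser somesecret
```

Then point the initiators on the cluster nodes at the target box's IP, and the LUN shows up on each of them as an ordinary SCSI disk.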
I don't know about cheap, but those are three methods to tackle GFS testing without FibreChannel etc., which isn't normally available for home labs :)
If someone should have a 'few extra QLA2X00 adapters' available, any Linux box can be made into 'my own little FC box'. There are evaluation licenses available, so you can try before you buy.
I've personally spent a lot of time debugging problems with this piece of software. It hasn't always worked OK, but recently it has been extremely stable - even with the older QLA2200 boards (of which I have a pile).
As an aside, has anyone looked at using GFS with the ATA-over-Ethernet stuff from Coraid? It appears to have the potential for better performance than iSCSI (no walking up and down the IP stack). The 'supported' part works with their hardware, but they have an open-source server-side package (http://aoetools.sourceforge.net) called 'vblade' that appears to allow you to set up a box as an AoE server. This is something I've wanted to try for a bit, but I haven't had the chance to dig into it yet.
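In case it helps anyone, exporting a disk with vblade looks like a one-liner; the shelf/slot numbers and the device below are made up for illustration (e.g. as an rc.local fragment):

```
# on the server: export /dev/sdb as AoE shelf 0, slot 1 on eth0
# (vbladed is the daemonizing wrapper that ships with vblade)
vbladed 0 1 eth0 /dev/sdb

# on each client: load the AoE driver; the export then appears as
# /dev/etherd/e0.1 and can be used like any other block device
modprobe aoe
```

Note that AoE isn't routable, so the server and the clients have to share an Ethernet segment.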
Sorry for the 'noise'!
Jay Leafey jay.leafey@mindless.com wrote:
As an aside, has anyone looked at using GFS with the ATA-Over-Ethernet stuff from Coraid? It appears to have the potential for better performance than iSCSI (no walking up and down the IP stack).
I looked at their design, and it doesn't seem to address the multi-targeting aspects very well. In other words, I see a lot of complications for clustering because they have not addressed them, which leaves GFS to address them instead. I think I'd rather use another storage solution that either handles multi-targeting better or is just much faster with close to the same distance limitations.
Serial Attached SCSI (SAS) is the next generation of multi-targetable SCSI. It performs far better and is very competitive with FC-AL on speed, but at a massively reduced price point and with even less overhead. The only disadvantage is cable length, limited to 8m (~26'), although it can be repeated just as far as AoE. So it's ideal for clustering in the same closet/room, or repeated to nearby areas.
I personally see AoE getting crushed between multi-vendor FC-AL/iSCSI lengths and SAS performance. Especially given that SAS already uses the proven SCSI-2 protocol and stack, whereas AoE is one vendor right now. I know a lot of people are talking about AoE because they've done the "conference circuit," but once people hear about SAS, they quickly reconsider.
For more on SAS, see my blog: http://thebs413.blogspot.com/2005/08/serial-storage-is-future.html