Hello:
I am planning to implement GFS at my university as a summer project. I have 10 servers, each with SAN disks attached. I will be reading and writing many files for professors' research projects. Each file can be anywhere from 1 KB to 120 GB (fluid dynamics research images). The 10 servers will be using NIC bonding (1 Gb/s network). So, would GFS be ideal for this? I have been reading a lot about it and it seems like a perfect solution.
Any thoughts?
TIA
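For reference, the bonding setup I'm planning looks roughly like this on CentOS; the interface names, addresses, and mode below are placeholders, adjust them for your NICs and switches:

```shell
# /etc/modprobe.conf -- load the bonding driver for bond0
# (802.3ad needs switch support; miimon=100 is a typical link-check interval)
alias bond0 bonding
options bond0 mode=802.3ad miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
DEVICE=bond0
IPADDR=10.0.0.11
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- one slave (repeat for eth1, etc.)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```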
Mag Gam wrote:
I am planning to implement GFS for my university as a summer project. I have 10 servers each with SAN disks attached.
GFS works well; GFS2 is in technology-preview mode only at the moment, but it's still worth looking at.
So, how do you have your setup?
How many nodes? I need something stable, so I will look into GFSv1, but may consider GFSv2 later on.
On Thu, May 29, 2008 at 5:16 AM, Karanbir Singh mail-lists@karan.org wrote:
Mag Gam wrote:
I am planning to implement GFS for my university as a summer project. I have 10 servers each with SAN disks attached.
GFS works well; GFS2 is in technology-preview mode only at the moment, but it's still worth looking at.
-- Karanbir Singh : http://www.karan.org/ : 2522219@icq
Mag Gam wrote:
Hello:
I am planning to implement GFS at my university as a summer project. I have 10 servers, each with SAN disks attached. I will be reading and writing many files for professors' research projects. Each file can be anywhere from 1 KB to 120 GB (fluid dynamics research images). The 10 servers will be using NIC bonding (1 Gb/s network). So, would GFS be ideal for this? I have been reading a lot about it and it seems like a perfect solution.
Any thoughts?
TIA
"Perfect"? No, but usable. We've got a cluster of 4 systems attached to a fibre-channel-based SAN running CentOS 4 and the Cluster Suite components with multiple instances of the Oracle database. It actually works pretty well and fails over nicely in the case of exceptions. It is moderately complex to set up, but the information needed REALLY IS in the docs... you just have to REALLY read them!
We haven't tried CentOS 5 and the new cluster components, as Oracle only supports the version of the database we're running on Red Hat EL4. That said, the CentOS 5 combination looks a bit more "finished" than the versions in EL4.
Another alternative that we are examining is using OCFS2 (Oracle Cluster File System 2) and iSCSI for the shared storage with Heartbeat for service management. This combination looks to be a bit "lighter" than the Cluster Suite and GFS, but I'm hoping to confirm or disprove that impression this summer in my "copious free time".
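The basic bring-up we're testing looks something like the following; the iSCSI portal IP, target IQN, device name, and slot count are placeholders for whatever your storage presents:

```shell
# Discover and log in to the iSCSI target (open-iscsi syntax on CentOS 5):
iscsiadm -m discovery -t sendtargets -p 192.168.10.50
iscsiadm -m node -T iqn.2008-05.edu.example:shared -p 192.168.10.50 --login

# Bring the O2CB cluster stack online
# (the cluster is defined in /etc/ocfs2/cluster.conf on every node):
service o2cb online

# Format with one node slot per cluster member (-N 4), then mount on each node:
mkfs.ocfs2 -N 4 -L shared /dev/sdb1
mount -t ocfs2 /dev/sdb1 /mnt/shared
```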
As usual, your mileage may vary.
Jay Leafey wrote:
Another alternative that we are examining is using OCFS2 (Oracle Cluster File System 2) and iSCSI for the shared storage with Heartbeat for service management. This combination looks to be a bit "lighter" than the Cluster Suite and GFS, but I'm hoping to confirm or disprove that impression this summer in my "copious free time".
ocfs isn't really worth spending time on anymore. IIRC, even Oracle no longer supports an ocfs/ocfs2-based backend store.
Might as well consider GPFS (the cost per machine isn't that high, and there are reasonable assurances that it would work).
/me is still thrashing out gfs2 though, and conga! and clusterlvm!