Mag Gam wrote:
Hello:
I am planning to implement GFS for my university as a summer project. I have 10 servers, each with SAN disks attached. I will be reading and writing many files for professors' research projects. Each file can be anywhere from 1 KB to 120 GB (fluid dynamics research images). The 10 servers will use NIC bonding over 1 Gb links. So, would GFS be ideal for this? I have been reading a lot about it and it seems like a perfect solution.
Any thoughts?
TIA
"Perfect"? No, but usable. We've got a cluster of 4 systems attached to a fibre-channel-based SAN running CentOS 4 and the Cluster Suite components with multiple instances of the Oracle database. It actually works pretty well and fails over nicely in the case of exceptions. It is moderately complex to set up, but the information needed REALLY IS in the docs... you just have to REALLY read them!
We haven't tried CentOS 5 and the newer cluster components, as Oracle only supports the version of the database we're running on Red Hat EL4. That said, the CentOS 5 combination does look a bit more "finished" than the versions in EL4.
Another alternative that we are examining is using OCFS2 (Oracle Cluster File System 2) and iSCSI for the shared storage with Heartbeat for service management. This combination looks to be a bit "lighter" than the Cluster Suite and GFS, but I'm hoping to confirm or disprove that impression this summer in my "copious free time".
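For anyone curious, the OCFS2 side really is pleasantly small. As a rough sketch only (addresses, target names, and node names below are made up, and this assumes the open-iscsi tools that ship with CentOS 5), the moving parts look something like:

    # log in to the iSCSI LUN on each node
    iscsiadm -m discovery -t sendtargets -p 10.0.0.5
    iscsiadm -m node -T iqn.2007-06.edu.example:storage.lun1 -p 10.0.0.5 --login

    # /etc/ocfs2/cluster.conf, identical on every node
    cluster:
            node_count = 2
            name = ocfs2

    node:
            ip_port = 7777
            ip_address = 10.0.0.11
            number = 0
            name = node1
            cluster = ocfs2

    node:
            ip_port = 7777
            ip_address = 10.0.0.12
            number = 1
            name = node2
            cluster = ocfs2

    # bring up the o2cb cluster stack, then make and mount the filesystem
    /etc/init.d/o2cb enable
    mkfs.ocfs2 -N 2 -L shared /dev/sdb1
    mount -t ocfs2 /dev/sdb1 /shared

Heartbeat then just manages the services on top; it doesn't need to know anything about the filesystem itself, which is part of why the stack feels "lighter" than Cluster Suite plus GFS.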
As usual, your mileage may vary.