To start, I wish to thank you for the swift response on this issue. I do not think I would get such a quick response from a proprietary (closed-source) company. Open Source :-).
To respond to one of the comments about large file systems ("recommend you split it into several smaller (2-4TB) filesystems"): this is not feasible in many situations. In some situations 2-4TB is not even a reasonable starting point.
A little background: I have been using RH from v2 to v9, and for v9 I got an install ISO that included XFS support. Way back then I used it on a 1.4TB PATA hardware RAID 5 (a lot of disk for its time). The system is still operational without any FS issues short of failed drives, which were fixed with the hot spares on the system. In five years of operation the system has had one outage, a maintenance reboot (less than 2 min down). After RH9 I switched to CentOS.
The system I am currently configuring with 7+ TB of storage is one of the smaller storage servers for our systems. Using the same configuration with more drives, we are planning several 20TB+ systems.
For the work we do, a single file system over 100TB is not unreasonable. We will be replacing an 80TB SAN system based on StorNext with an Isilon system with 10G network connections.
If there were a way to create a 100TB, 500TB, or larger clustered file system on Linux (CentOS), with the nodes connected via InfiniBand, that was easily manageable and had enough throughput to support multiple 10Gbps Ethernet connections, I would be very interested.
And once more, thanks for the fast response.
Mike