To start, I wish to thank you for the swift response on this issue. I do not think I would get such a quick response from a proprietary (closed-source) company. Open source :-).
To respond to one of the comments about large file systems ("recommend you split it in several smaller (2-4TB) filesystems"): this is not feasible in many situations. In some situations 2-4TB is not even a reasonable starting point.
A little background: I have been using RH from v2 to v9, and for v9 I got an install ISO of RH9 that included XFS support. Way back then I used it on a 1.4TB PATA hardware RAID 5 (a lot of disk for its time). The system is still operational without any FS issues, short of failed drives, which were handled by the hot spares on the system. In five years of operation the system has had one outage, a maintenance reboot (less than 2 min down). After RH9 I switched to CentOS.
The system that I am currently configuring with 7+ TB of storage is one of the smaller storage servers for our systems. Using the same configuration with more drives we are planning several 20TB+ systems.
For the work we do, a single file system over 100TB is not unreasonable. We will be replacing an 80TB SAN system based on StorNext with an Isilon system with 10G network connections.
If there were a way to create a Linux (CentOS) 100TB-500TB or larger clustered file system, with the nodes connected via InfiniBand, that was easily manageable and had throughput that could support multiple 10Gbps Ethernet connections, I would be very interested.
And once more thanks for the fast response.
Mike
mslist@opcenter.net wrote:
If there were a way to create a Linux (CentOS) 100TB-500TB or larger clustered file system, with the nodes connected via InfiniBand, that was easily manageable and had throughput that could support multiple 10Gbps Ethernet connections, I would be very interested.
Check out Chelsio's line of adapters and drivers for 10G iSCSI.
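For reference, a minimal open-iscsi initiator setup on CentOS 5 would look something like this (the portal IP and IQN below are placeholders, not real targets):

    # Install and start the software initiator (CentOS 5)
    yum install iscsi-initiator-utils
    service iscsi start

    # Discover targets on the array (placeholder portal address)
    iscsiadm -m discovery -t sendtargets -p 192.168.10.1

    # Log in to a discovered target (placeholder IQN)
    iscsiadm -m node -T iqn.2008-06.com.example:storage.lun0 -p 192.168.10.1 --login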
As for file systems, there is really only one for that scenario, GFS, as OCFSv1 only goes up to 8TB and OCFSv2 is still a technology preview. Besides, GFS is included in the distro!
You will need to run the nodes 64-bit, though, to see the 8EB file system limit with GFS, as 32-bit GFS has a file system limit of 16TB.
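For anyone who hasn't set it up before, creating a GFS file system on CentOS 5 goes roughly like this, assuming a working cman cluster and a clustered LVM volume (the cluster, volume, and mount names here are placeholders):

    # GFS userland and kernel module (CentOS 5)
    yum install gfs-utils kmod-gfs

    # One journal per node that will mount the file system (4 here),
    # DLM locking, lock table = <clustername>:<fsname>
    gfs_mkfs -p lock_dlm -t mycluster:bigfs -j 4 /dev/vg_san/lv_bigfs

    mount -t gfs /dev/vg_san/lv_bigfs /mnt/bigfs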
-Ross
On Mon, Jun 2, 2008 at 7:48 PM, Ross S. W. Walker rwalker@medallion.com wrote:
As for file systems, there is really only one for that scenario, GFS, as OCFSv1 only goes up to 8TB and OCFSv2 is still a technology preview. Besides, GFS is included in the distro!
Lustre?
Bent Terp wrote:
On Mon, Jun 2, 2008 at 7:48 PM, Ross S. W. Walker rwalker@medallion.com wrote:
As for file systems, there is really only one for that scenario, GFS, as OCFSv1 only goes up to 8TB and OCFSv2 is still a technology preview. Besides, GFS is included in the distro!
Lustre?
Can you still get this on a non-Sun system?
I believe it's called CFS now and is being rolled out by Sun.
-Ross
Lustre only runs on Linux. RH5 is fully supported. (And it works fine with CentOS 5.)
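A bare-bones Lustre 1.6 setup on CentOS 5 goes roughly like this (the hostnames, devices, and fsname are placeholders; over InfiniBand you would use @o2ib in place of @tcp0):

    # Metadata server (combined MGS+MDT for simplicity)
    mkfs.lustre --fsname=bigfs --mgs --mdt /dev/sdb
    mount -t lustre /dev/sdb /mnt/mdt

    # Each object storage server points back at the MGS node
    mkfs.lustre --fsname=bigfs --ost --mgsnode=mds1@tcp0 /dev/sdc
    mount -t lustre /dev/sdc /mnt/ost0

    # Clients just mount the whole namespace
    mount -t lustre mds1@tcp0:/bigfs /mnt/bigfs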
Lundgren, Andrew wrote:
Lustre only runs on Linux. RH5 is fully supported. (And it works fine with CentOS 5.)
My personal guess (and probably more than a few others') is that Sun is interested in taking the clustering functionality that underlies Lustre and integrating it with ZFS for a future version of Solaris. Even if Sun stops supporting Lustre development on Linux, it's GPL open source; undoubtedly someone else will pick up the slack...
That could be. I have spoken with some of the lead cluster FS people (now at Sun) about it. For the next few revs at least it will continue as a Linux product. ZFS will be put underneath Lustre rather than ext3.
-- Andrew
Bent Terp wrote:
Lustre?
There's also the commercial IBrix Fusion distributed file system. I've only seen demos and done a paper eval for a project that never materialized; it looked very interesting, but our requirements shifted, so we never got past the initial research stage. It's designed for very high performance file systems: you dedicate some number of servers to managing the file system, then your applications talk to that cluster. It can work with both shared SAN-attached storage and per-node direct-attached storage, distributing the metadata management workload as well as the IO processing workload across the cluster, while maintaining full redundancy.
Just wondering if anyone has ever considered/used Coraid for massive storage under CentOS? http://www.coraid.com It seems like a very reasonable option... comments?
Isilon could also be an option for petabyte-scale storage: http://www.isilon.com/products/index.php
On Mon, Jun 2, 2008 at 9:38 PM, Alain Terriault alain.terriault@mcgill.ca wrote:
Just wondering if anyone has ever considered/used Coraid for massive storage under CentOS? http://www.coraid.com It seems like a very reasonable option... comments?
It doesn't go all the way, but it sure looks interesting as storage blocks for a Lustre deployment.
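For what it's worth, the Coraid shelves speak ATA-over-Ethernet, so on the CentOS side it is just the aoe kernel module plus the aoetools package (which may need to come from a third-party repo; the shelf/slot numbers below are placeholders):

    # Load the AoE driver and scan the local Ethernet segment
    modprobe aoe
    aoe-discover
    aoe-stat

    # Shelves appear as ordinary block devices, e.g. shelf 0, slot 0,
    # so LVM/md (or Lustre OSTs) can sit on top as usual
    pvcreate /dev/etherd/e0.0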