[CentOS] mkfs.ext3 on a 9TB volume
jlb17 at duke.edu
Mon Sep 12 09:31:44 UTC 2005
On Sun, 11 Sep 2005 at 10:34pm, Bryan J. Smith wrote
> On Sun, 2005-09-11 at 21:01 -0400, Joshua Baker-LePain wrote:
> > Having hit a similar issue (big FS, I wanted XFS, but needed to run centos
> > 4), I just went ahead and stuck with ext3. My FS is 5.5TiB -- a software
> > RAID0 across 2 3w-9xxx arrays. I had no issues formatting it and have had
> > no issues in testing or production with it. So, it can be done.
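A minimal sketch of the configuration described above -- a software RAID0 striped across two hardware RAID arrays, formatted ext3. Device names (/dev/sda, /dev/sdb, /dev/md0) are assumptions, not from the original post:

```shell
# Stripe the two 3ware array devices into one md device
# (device names are hypothetical; adjust for your hardware):
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb

# Format with ext3. 4 KiB blocks are required to reach multi-TiB
# sizes; -T largefile4 reduces the inode count for big-file workloads.
mkfs.ext3 -b 4096 -T largefile4 /dev/md0
```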
> I don't think I _ever_ said it couldn't be done.
> In fact, Ext3 support now goes up to 17.6TB (16TiB).

And I never said that you said that, nor did I mean to imply it.
> But is there any guarantee that volume will work if moved to another set
> of hardware, kernels, etc...??? As I said, I _never_ create Ext3
As I just mentioned in another post, this configuration is explicitly
supported by Red Hat. Therefore, if it doesn't work in some other
configuration, it's a bug that Red Hat will want to fix.
> P.S. Red Hat's going to wake up sooner or later and realize it's just
> as Sun said, they have not addressed the enterprise filesystem issue.
> I'm sure SGI and the XFS team would be more than happy to see some
> engagement from Red Hat on this matter -- and have wished for years now
> -- and the sad thing is that it would _help_ Red Hat's future. XFS is
> the only option -- ReiserFS and JFS have interface/compatibility issues
> that are "show stoppers" for Red Hat. XFS does not, and the only issues
> are newer kernel/distribution developments that just need to be
> addressed at a distro-level.
I too have been waiting for a long while for Red Hat to wake up to XFS.
My *other* 5.5TB of RAID space (spread over 4 servers) is all XFS on
RH7.3. But this volume needed large block device support (obviously), and
I couldn't get consistent results wedging XFS into centos-4, so I went
with the supported configuration. I'm not willing to go to SuSE just to
get XFS.
Department of Biomedical Engineering