[CentOS] mkfs.ext3 on a 9TB volume

Nick Bryant list at everywhereinternet.com
Wed Sep 14 14:28:01 UTC 2005


> On Mon, 12 Sep 2005 at 8:42am, Francois Caen wrote:
> 
> > On 9/12/05, Joshua Baker-LePain <jlb17 at duke.edu> wrote:
> > > As I mentioned, I'm running centos-4, which, as we all know, is based
> > > off RHEL 4.  If you go to <http://www.redhat.com/software/rhel/features/>,
> > > they explicitly state that they support ext3 FSs up to 8TB.
> >
> > Wow! Odd! RH says 8TB but ext3 FAQ says 4TB.
> 
> I wouldn't call it that odd.  RH patches their kernels to a fair extent,
> both for stability and features.
> 
> > From my personal testing on CentOS 4.1, you can't go over 4TB
> > without kludging.
> >
> > > I then did a software RAID0 across them, and finally:
> > >
> > > mke2fs -b 4096 -j -m 0 -R stride=1024 -T largefile4 /dev/md0
> >
> > Joshua, thanks for the reply on this.
> > There's something kludgy about having to do softraid across 2
> > partitions before formatting. It adds a layer of complexity and
> > reduces reliability. Is that the trick RH recommended to go up to 8TB?
> 
> Err, it's not a kludge and it's not a trick.  Those 2 "disks" are hardware
> RAID5 arrays from two 12-port 3ware 9500 cards.  I like 3ware's hardware
> RAID, and those are the biggest (in terms of ports) cards 3ware makes.
> So, I hook 12 disks up to each card, and the OS sees those as 2 SCSI
> disks.  I then do the software RAID to get 1) speed and 2) one partition
> to present to the users.  Folks (myself included) have been doing this for
> years.
> 
> The one gotcha in this setup (other than not being able to boot from the
> big RAID5 arrays, since each is >2TiB) is that the version of mdadm
> shipped with RHEL4 does not support array members bigger than 2TiB.  I had
> to upgrade to an upstream release to get that support.
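
For anyone wanting to reproduce the striping step Joshua describes, here's
a minimal sketch. The device names are illustrative (I'm assuming the two
3ware arrays show up as /dev/sda and /dev/sdb), and the 4MiB chunk is my
inference from the quoted stride, since stride is the md chunk size divided
by the filesystem block size (1024 * 4KiB = 4MiB):

  # Stripe the two hardware RAID5 arrays into one md device
  mdadm --create /dev/md0 --level=0 --chunk=4096 --raid-devices=2 \
      /dev/sda /dev/sdb

  # Then build the filesystem on it, per the command quoted above
  mke2fs -b 4096 -j -m 0 -R stride=1024 -T largefile4 /dev/md0

Per Joshua's gotcha, check "mdadm --version" first if your array members
are over 2TiB; the mdadm shipped with RHEL4 won't take them.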

Just out of interest, and to complicate the matter even more, does anyone
know what the upper limit of GFS is?
