On Sun, 11 Sep 2005 at 6:41pm, Francois Caen wrote
On 9/11/05, Joshua Baker-LePain jlb17@duke.edu wrote:
Having hit a similar issue (big FS, I wanted XFS, but needed to run centos 4), I just went ahead and stuck with ext3. My FS is 5.5TiB -- a software RAID0 across 2 3w-9xxx arrays. I had no issues formatting it and have had no issues in testing or production with it. So, it can be done. Perhaps the bugs you're hitting are in the FC driver layer?
ext3 had a 4TB limit (http://batleth.sapienti-sat.org/projects/FAQs/ext3-faq.html), which I didn't know when I started this thread.
As I mentioned, I'm running centos-4, which, as we all know, is based off RHEL 4. If you go to http://www.redhat.com/software/rhel/features/, they explicitly state that they support ext3 FSs up to 8TB.
I found out about it the hard way, through testing. There are ways to force past that limit (mkpartfs ext2 in parted, then tune2fs -j), but the resulting filesystem is totally unstable.
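For the record, the workaround referred to above looks roughly like this (NOT recommended; the exact sizes and device names here are assumptions, and the result was unstable in my testing):

```shell
# Inside parted: create AND format an ext2 partition past the 4TB mark.
# mkpartfs bypasses mke2fs's own size checks.
parted /dev/sdb mkpartfs primary ext2 0 5600000

# Then bolt a journal onto the ext2 filesystem, turning it into ext3.
tune2fs -j /dev/sdb1
```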
Joshua, how the heck did you format your 5.5TB in ext3? Are you 100% sure it's not mounted as ext2?
To answer the 2nd question:

[jlb@$HOST ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
.
.
/dev/md0              5.5T  634G  4.9T  12% /nefs
[jlb@$HOST ~]$ mount
.
.
/dev/md0 on /nefs type ext3 (rw)
As to the first, I created the FS as simply as possible. /dev/sdb and /dev/sdc both look like this:
(parted) print
Disk geometry for /dev/sdb: 0.000-2860920.000 megabytes
Disk label type: gpt
Minor    Start        End     Filesystem  Name  Flags
1          0.017  2860919.983
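The thread doesn't show how those partitions were created, but a GPT layout like that could be produced with something along these lines (commands are an assumed reconstruction; the end value comes from the geometry printed above):

```shell
# GPT disk label -- required, since an MS-DOS label tops out around 2TiB.
parted /dev/sdb mklabel gpt

# One partition spanning the whole disk (start/end in megabytes).
parted /dev/sdb mkpart primary 0 2860920
```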
I then did a software RAID0 across them, and finally:
mke2fs -b 4096 -j -m 0 -R stride=1024 -T largefile4 /dev/md0
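The RAID0 step itself wasn't shown; a plausible sketch would be something like the following (device names and chunk size are assumptions -- though a 4MiB chunk is what would match stride=1024 with 4KiB blocks, since 1024 * 4KiB = 4MiB):

```shell
# Stripe the two 3ware arrays together; --chunk is in KiB (4096KiB = 4MiB).
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=4096 \
    /dev/sdb1 /dev/sdc1
```

In the mke2fs line, -b 4096 sets 4KiB blocks, -j adds the ext3 journal, -m 0 reserves no blocks for root, -R stride=1024 tells ext2/3 how the blocks map onto the RAID stripes, and -T largefile4 lowers the inode count for filesystems holding mostly large files.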