Hi all,
I've a 30TB hardware-based RAID array.
Wondering what you all thought of using ext4 over XFS.
I've been a big XFS fan for years, as I'm an IRIX transplant, but would like your opinions.
This 30TB drive will be an NFS-exported asset for my users, housing home dirs and other frequently accessed files.
- aurf
On 01/11/2011 01:47 PM, aurfalien@gmail.com wrote:
I've a 30TB hardware-based RAID array.
Wondering what you all thought of using ext4 over XFS.
You will need XFS for a single partition that large. You won't be able to make such a large ext4 partition, I don't think.
On Jan 11, 2011, at 10:49 AM, Digimer wrote:
You will need XFS for a single partition that large. You won't be able to make such a large ext4 partition, I don't think.
I read where ext4 supports 1EB partition size and 16TB files.
However, I'm unsure of its implementation in 5.5.
- aurf
On 01/11/2011 11:07 AM, aurfalien@gmail.com wrote:
On Jan 11, 2011, at 11:01 AM, Benjamin Franz wrote:
On 01/11/2011 10:56 AM, aurfalien@gmail.com wrote:
I read where ext4 supports 1EB partition size
The format supports it - the e2fsprogs tools do not. 16TB is the practical limit.
Have you installed e4fsprogs?
The tools do not support over 16TB.
https://ext4.wiki.kernel.org/index.php/Ext4_Howto#Bigger_File_System_and_Fil...
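For reference, the arithmetic behind that ceiling: the e2fsprogs of that era address blocks with 32-bit numbers, and 2^32 blocks at the default 4KiB block size is 16TiB. A minimal sketch (device name hypothetical):

    # 2^32 blocks x 4KiB/block = 16TiB, the practical ext4 limit with
    # CentOS 5-era e2fsprogs; mkfs.ext4 errors out on anything larger.
    mkfs.ext4 -b 4096 /dev/sdb1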
On Tue, 11 Jan 2011 at 1:49pm, Digimer wrote
You will need XFS for a single partition that large. You won't be able to make such a large ext4 partition, I don't think.
This is correct. While ext4 theoretically supports volumes (much) larger than 16TB, the developers don't consider it production-ready yet, and the userspace tools don't support it.
So, short answer -- XFS is the only way to go.
I use ext4 on my tiny 8TB arrays. CentOS 5.5 does support it, although the GUI tools have small issues with it.
CentOS 6 should support it better...
On Jan 11, 2011, at 10:59 AM, Joshua Baker-LePain wrote:
So, short answer -- XFS is the only way to go.
My RAID has a stripe size of 32KB and a block size of 512 bytes.
I've usually just done blind XFS formats but would like to tune it for smaller files. Of course big/small is relative, but in my env, small means sub-300MB or so.
What would your XFS tuning params be for such an env?
- aurf
On Tue, 11 Jan 2011 at 11:12am, aurfalien@gmail.com wrote
What would your XFS tuning params be for such an env?
It's been a long while since I've done a tuned XFS format. But you also need to consider how many disks are in the array and what RAID level you're using.
Hey, I've been watching the thread on and off. How large is the file system you are trying to share? What will it/they be used for?
-- Thanks,
Gene Brandt SCSA 8625 Carriage Road River Ridge, LA 70123
home 504-737-4295
cell 504-452-3250
On Jan 12, 2011, at 8:19 AM, Gene Brandt wrote:
Hey, I've been watching the thread on and off. How large is the file system you are trying to share? What will it/they be used for?
Home dirs, which are low/medium bandwidth, and other low-bandwidth data.
Basically 3 individual NFS exports.
Currently served off Xserves/Xraids, so this new server will be a boost in reliability, performance, and simplicity of management.
And the parity calcs for RAID 6 are taking quite some time, almost twice as long as RAID 5, but then again it's more robust.
I'm not overly concerned with performance as it will be leaps above what we have now.
What I require is reliable bulk storage.
- aurf
----- Original Message -----
| My RAID has a stripe size of 32KB and a block size of 512 bytes.
| I've usually just done blind XFS formats but would like to tune it for smaller files.
| What would your XFS tuning params be for such an env?
Your sw and su mkfs.xfs options should match the number of *usable* data disks and your stripe size to best optimize performance.
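For instance, a minimal sketch (the disk count and device name are assumptions, since the array layout hasn't been posted): on a 14-disk RAID 6 with a 32KB stripe unit, 12 disks carry data, so:

    # Hypothetical 14-disk RAID 6 = 12 data disks, 32KB per-disk stripe unit.
    # su = stripe unit, sw = stripe width in data disks.
    mkfs.xfs -d su=32k,sw=12 /dev/sdb1
    # After mounting, 'xfs_info <mountpoint>' reports sunit/swidth so you
    # can verify the geometry took.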
Do *not* create a volume this large unless you have a lot of memory. If you ever need to run an xfs_check on the volume, you could be looking at tens of GB of memory. I just ran an xfs_check on an 11TB volume used by our medical imaging lab, and the xfs_check/xfs_db process grew to around 32GB on CentOS 5.5 64-bit.
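If you build it that big anyway, a read-only check that is generally lighter on memory than xfs_check (a sketch; device and mount point names hypothetical):

    # The filesystem must be unmounted for either tool.
    umount /mnt/array
    # Dry-run repair: checks everything, writes nothing, and generally
    # needs less RAM than xfs_check on multi-TB volumes.
    xfs_repair -n /dev/sdb1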
XFS is safe - lots of protection for your data, but it cuts write speeds in half.
Ext4 does not slow things down...
On Wednesday, January 12, 2011 02:55 AM, compdoc wrote:
XFS is safe - lots of protection for your data, but it cuts write speeds in half.
When did XFS start looking like reiserfs?
Lots of protection for your data? Let's see: super-aggressive caching and no data journaling, only metadata journaling. What on earth are you blabbering about?
Use XFS with anything that has no BBU cache support or barrier support, and recent files are toast when there is a crash or sudden power failure.
Nah, XFS has historically been the fastest at writing and also the most dangerous filesystem available on Linux.
You're right, I was thinking of zfs. Which does cut write speeds in half...
On Wednesday, January 12, 2011 08:51 AM, compdoc wrote:
You're right, I was thinking of zfs. Which does cut write speeds in half...
Huh? There you go blabbering again. There is no native ZFS for Linux nor will there ever be unless there is a license change.
On 01/11/2011 08:00 PM, Christopher Chan wrote:
Huh? There you go blabbering again. There is no native ZFS for Linux nor will there ever be unless there is a license change.
Your points may be valid, but their delivery could be more polite. :)
On Jan 11, 2011, at 5:17 PM, Digimer wrote:
Your points may be valid, but their delivery could be more polite. :)
I'm actually enjoying the spirited discussion.
- aurf
On Wednesday, January 12, 2011 10:07 AM, compdoc wrote:
I never said it was native. zfs-fuse.x86_64
Not a CentOS or RHEL package. Please don't bring up experimental software in threads that are comparing filesystems for production use.
If you want to suggest ZFS, you should suggest that the OP go with Solaris or FreeBSD, although I won't touch ZFS on the latter with a ten-foot pole. Never mind that I run OpenIndiana and OpenSolaris for ZFS instead of Solaris.
I didn't bring up experimental software - I thought that's what he was using. I misread.
And it worked quite well, except for write speeds. There are some cool features with zfs.
Trying to decide just what file system to use for these larger and larger arrays is something I've been facing very recently.
I even tried Ubuntu LTS and server in the last week because of their native support for ext4.
And even though Ubuntu has some things right, I don't think it has everything right.
So I'm waiting, like everyone else, for CentOS 6. For me, the wait is mainly for better ext4 support, along with any improvements to KVM. (Not that I have anything to complain about with KVM on 5.5.)
I doubt I'll be moving beyond 16TB, or whatever the limit of ext4 on CentOS is.
Sorry to have offended you.
On 1/12/11, compdoc compdoc@hotrodpc.com wrote:
I didn't bring up experimental software - I thought that's what he was using. I misread.
http://www.redhat.com/rhel/compare/
As it stands, the respective supported maximums are 16TB for ext4, 25TB for GFS2 (HA), and 100TB for XFS.
Of course, I dunno about OCFS/GPFS or AFS or Gluster and such...
Season's greetings to all,
Regards,
Rajagopal
On Jan 11, 2011, at 6:28 PM, Christopher Chan wrote:
If you want to suggest ZFS, you should suggest that the OP go with Solaris or FreeBSD, although I won't touch ZFS on the latter with a ten-foot pole.
Nope, this OP will be sticking with CentOS.
On Tue, Jan 11, 2011 at 08:42:55PM -0700, compdoc wrote:
zfs-fuse.x86_64 is from epel - at least some users trust that repo.
EPEL is very trustworthy, but I for one wouldn't use ZFS fuse for anything "Enterprise" (though I would use it for testing, or personal use).
As an aside, a company called KQ Infotech is working on a native port of ZFS to Linux. It's still in closed beta testing.
Ray
On Jan 11, 2011, at 7:51 PM, "compdoc" compdoc@hotrodpc.com wrote:
You're right, I was thinking of zfs. Which does cut write speeds in half...
ZFS doesn't cut write performance in half!
ZFS provides the performance of your storage; over NFS, async IO should be normal, but sync IO will be the synchronous performance of your storage. If your storage is crap, your performance will be crap. You can fix this sync penalty by putting the journal/log (ZIL) on an SSD drive or, for non-critical data, disabling the ZIL altogether (bad, but if you're using XFS you're used to this type of bad).
Linux tends to cache anything and everything, so it often masks how crappy one's storage really is.
-Ross
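For anyone curious, the ZIL-on-SSD move Ross describes is a one-liner on the ZFS side. A sketch (pool, dataset, and device names are hypothetical, and sync=disabled only exists on newer ZFS releases):

    # Attach an SSD as a dedicated log (ZIL) device to pool "tank":
    zpool add tank log /dev/ssd0
    # Or, for non-critical datasets only, drop sync semantics entirely
    # (fast, but recent writes can vanish on a crash):
    zfs set sync=disabled tank/scratch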
On Tue, Jan 11, 2011 at 1:47 PM, aurfalien@gmail.com wrote:
Wondering what you all thought of using ext4 over XFS.
We have a 26 TB XFS partition. Works like a charm.
What I really like: initialization takes only a couple of minutes (as opposed to the many hours spent every time the OS does an fsck on a 4TB ext3 partition).
Boris.
On Tuesday, January 11, 2011 01:47:33 pm aurfalien@gmail.com wrote:
I've a 30TB hardware based RAID array.
Wondering what you all thought of using ext4 over XFS.
XFS. But make sure you're using a 64-bit CentOS. 32-bit CentOS (at least C5 as of six months or so ago) will in fact run mkfs.xfs on that large a device; however, the filesystem will silently fail and will no longer mount once you get over 16TB of data. Been there, done that, with the C5 32-bit kernel. Reassigned the RDMs containing the PVs for the volume group containing the logical volume to a VM running 64-bit C5, and things are fine.
There are other issues with XFS on 32-bit kernel as well, but I've not run into those. Best to use 64-bit.
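A quick sanity check before formatting, then (a sketch; device name hypothetical):

    # Confirm a 64-bit kernel before creating a >16TB XFS filesystem:
    uname -m                        # want x86_64 here, not i686
    # And confirm the device really is the size you expect, in bytes:
    blockdev --getsize64 /dev/sdb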
I'm using ext4 on less than 16TB filesystems, though, with good success.