On Tue, 6 Aug 2013, SilverTip257 wrote:
On Tue, Aug 6, 2013 at 8:58 PM, Eliezer Croitoru eliezer@ngtech.co.il wrote:
OK, so back to the issue at hand. The issue is that I have mail storage for more than 65k users per domain, and ext4 doesn't support a directory listing of that size. ReiserFS indeed fits the purpose, but ext4 doesn't even start to scratch it. Now the real question is: what FS would you use as a dovecot backend to store a domain with more than 65k users?
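As an aside on the 65k figure: if I remember correctly, ext4's limit applies to the number of subdirectories (the link count) rather than plain files, and the dir_nlink feature is supposed to lift it. A rough check on an existing filesystem might look like this (the device name is only a placeholder):

# list the features enabled on the filesystem
tune2fs -l /dev/sdX1 | grep 'Filesystem features'
# enable dir_nlink if it is missing; it removes the ~65000 subdirectory cap
tune2fs -O dir_nlink /dev/sdX1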
XFS? It's used for situations where one has "lots or large", as Dave Chinner says [0], meaning lots of files or large files.
[0] http://www.youtube.com/watch?v=i3IreQHLELU
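If going the XFS route, the setup on el6 is short. A minimal sketch, assuming a placeholder device and mount point (mkfs.xfs comes from the xfsprogs package):

# create the filesystem with a label
mkfs.xfs -L mailstore /dev/sdX1
# on older kernels, inode64 lets inodes be allocated beyond the first 1TB
mount -o inode64 /dev/sdX1 /srv/mail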
Eliezer
On 07/05/2013 04:45 PM, Eliezer Croitoru wrote:
I was learning about the different FSes that exist. I used to work on systems where ReiserFS was the star, but since there is no longer support from its creator, there are other considerations to be made. I want to ask about a couple of FS options. EXT4 is amazing for one node, but for more it's another story. I have heard about GFS2 and GlusterFS and read the docs and official materials from RH on them. The RH docs state that the EXT4 limit is 65k files per directory, and I had a directory which was pretty loaded with files; I am unsure exactly what the size was, but I am almost sure it was larger than 65k files per directory.
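For what it's worth, a quick way to see how many entries such a directory actually holds (the path here is just an example):

# count the entries directly under the directory, excluding the directory itself
find /var/mail/domain -mindepth 1 -maxdepth 1 | wc -l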
I was considering using GlusterFS for a very large storage system with an NFS front. I am still unsure whether EXT4 should or shouldn't be able to handle more than 16TB, since the Linux kernel ext4 docs at https://www.kernel.org/doc/Documentation/filesystems/ext4.txt state in section 2.1:
* ability to use filesystems > 16TB (e2fsprogs support not available yet)
So can I use it or not? If there are no tools to handle this size, then I cannot trust it.
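One way to tell whether the installed tools can handle >16TB is simply to check the e2fsprogs version; if I recall correctly, 64-bit support landed in the 1.42 series, which matches the rebuild mentioned later in the thread. Something like this, with a placeholder device:

# print the e2fsprogs / mke2fs version
mke2fs -V
# with a recent enough e2fsprogs, the 64bit feature allows filesystems > 16TB
mkfs.ext4 -O 64bit /dev/sdX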
I want to create storage with more than 16TB based on GlusterFS, since it allows me to use a 2-3 ring FS, which will let me put the storage in the form of: 1 client -> HA NFS servers -> GlusterFS cluster.
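For the GlusterFS side, the basic volume setup plus an NFSv3 mount from a client could look roughly like this (hostnames, volume name, and brick paths are made up; Gluster's built-in NFS server speaks NFSv3 only):

# create and start a replicated volume across two bricks
gluster volume create mailvol replica 2 gfs1:/export/brick1 gfs2:/export/brick1
gluster volume start mailvol
# mount it over NFSv3 from a client or from the HA NFS layer
mount -t nfs -o vers=3,mountproto=tcp gfs1:/mailvol /mnt/mail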
It seems to me that GlusterFS is a better choice than Swift, since RH does provide support for it.
Every response will be appreciated.
Thanks, Eliezer
Just for interest, I have had two 44TB "raid 6" arrays using EXT4, running with heavy usage 24/7 on el6 since January 2013 without any problems so far. I rebuilt "e2fsprogs" from source, something along the lines below, looking at my notes.
wget http://atoomnet.net/files/rpm/e2fsprogs/e2fsprogs-1.42.6-1.el6.src.rpm
yum-builddep e2fsprogs-1.42.6-1.el6.src.rpm
rpmbuild --rebuild --recompile e2fsprogs-1.42.6-1.el6.src.rpm
cd /root/rpmbuild/RPMS/x86_64
rpm -Uvh *.rpm
###### build array with a partition #######
parted /dev/sda mkpart primary ext4 1 -1
mkfs.ext4 -L sraid1v -E stride=64,stripe-width=384 /dev/sda1
###### build array without a partition #######
mkfs.ext4 -L sraid1v -E stride=64,stripe-width=384 /dev/sda
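In case the -E numbers look like magic: stride is the RAID chunk size divided by the 4KiB filesystem block size, and stripe-width is stride times the number of data disks. Reading the values above back, that works out to something like a 256KiB chunk and 6 data disks (e.g. an 8-disk RAID 6), but that is a guess about the array:

# stride       = 256KiB chunk / 4KiB block  = 64
# stripe-width = stride * 6 data disks      = 384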
Maybe this will help someone.
Cheers Steve