[CentOS] Question about optimal filesystem with many small files.

Kwan Lowe

kwan.lowe at gmail.com
Wed Jul 8 16:23:22 UTC 2009


On Wed, Jul 8, 2009 at 2:27 AM, oooooooooooo ooooooooooooo <
hhh735 at hotmail.com> wrote:

>
> Hi,
>
> I have a program that writes lots of files to a directory tree (around 15
> million files in total), and a single directory can hold up to 400,000
> files (and I don't have any way to split that amount into smaller sets).
> As the number of files grows, my application gets slower and slower (the
> app works something like a cache for another app, and I can't redesign the
> way it distributes files on disk due to the other app's requirements).
>
> The filesystem I use is ext3 with the following options enabled:
>
> Filesystem features:      has_journal resize_inode dir_index filetype
> needs_recovery sparse_super large_file
>
> Is there any way to improve performance in ext3? Would you suggest another
> FS for this situation (this is a production server, so I need a stable one)?
>
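
One quick thing worth ruling out first: dir_index only takes effect for
directories created after the feature was enabled, so if it was turned on
after the tree was populated, the big directories may still be unindexed
until an fsck rebuilds them. Disabling atime also saves a metadata write on
every cache hit. A rough sketch (the device /dev/sdb1 and mount point
/cachefs below are placeholders for your actual setup):

  # Confirm the feature list (your output already shows dir_index):
  tune2fs -l /dev/sdb1 | grep -i features

  # Rebuild/optimize the directory indexes; the fs must be unmounted:
  umount /cachefs
  e2fsck -fD /dev/sdb1
  mount /cachefs

  # Skip atime updates to avoid a metadata write per file access:
  mount -o remount,noatime /cachefs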

I saw this article some time back.

http://www.linux.com/archive/feature/127055

I've not implemented it myself, but from past experience you may lose some
performance up front, while the database-backed filesystem's performance
should stay more consistent as the number of files grows.
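
Just to illustrate the general idea (my own rough sketch, not necessarily
how the article does it): packing the small files into a single SQLite
database replaces 400,000 directory entries with one file, so lookups go
through a B-tree index instead of the directory code. The readfile() and
writefile() helpers used here are built into newer sqlite3 command shells,
and the key and file names are made up:

  # One database file instead of a huge directory of small files:
  sqlite3 cache.db 'CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, data BLOB);'

  # Store a file under a key, then read it back out:
  sqlite3 cache.db "INSERT OR REPLACE INTO cache VALUES ('some/key', readfile('input.bin'));"
  sqlite3 cache.db "SELECT writefile('output.bin', data) FROM cache WHERE key = 'some/key';"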