[CentOS] Question about optimal filesystem with many small files.


hhh735 at hotmail.com
Wed Jul 8 06:27:40 UTC 2009


Hi,

I have a program that writes lots of files to a directory tree (around 15 million files in total), and a single node can hold up to 400,000 files (and I don't have any way to split this amount into smaller ones). As the number of files grows, my application gets slower and slower (the app works something like a cache for another app, and I can't redesign the way it distributes files on disk due to the other app's requirements).

The filesystem I use is ext3 with the following options enabled:

Filesystem features:      has_journal resize_inode dir_index filetype needs_recovery sparse_super large_file

Is there any way to improve performance on ext3? Would you suggest another FS for this situation (this is a production server, so I need a stable one)?
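For reference, the tweaks I have been considering so far look like this (just a sketch; /dev/sdXN and /cache are placeholders for my real device and mount point):

```shell
# Confirm dir_index is in the feature list (it is, per the output above):
tune2fs -l /dev/sdXN | grep 'Filesystem features'

# Mount with noatime/nodiratime so reads don't trigger inode writes;
# example /etc/fstab line:
#   /dev/sdXN  /cache  ext3  defaults,noatime,nodiratime  0 2

# If dir_index had to be enabled after the fact, rehash existing
# directories (filesystem must be unmounted for the fsck):
tune2fs -O dir_index /dev/sdXN
e2fsck -D /dev/sdXN
```

I'm not sure whether noatime alone will make a real difference with directories this large, though, which is why I'm asking.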

Thanks in advance (and please excuse my bad English).



