Thanks to everybody for answering. I think >250E6 files is a lot, and keeping decent read and write speed with multi-purpose filesystems like ext? and the other ?FS is unrealistic. I would need a dedicated filesystem for that. This problem was only a possible solution to another problem; I will solve the original problem another way.

Regards

On Sat, Mar 12, 2011 at 3:12 PM, Simon Matter <simon.matter at invoca.ch> wrote:
>> Hi
>>
>> I need to store about 250.000.000 files. Files are less than 4k.
>>
>> On ext4 (Fedora 14) the system crawls at 10.000.000 files in the same
>> directory.
>>
>> I tried to create hash directories, two levels of 4096 dirs = 16.000.000,
>> but I had to stop the script creating these dirs after hours,
>> and "rm -rf" would have taken days! mkfs was my friend.
>>
>> I tried two levels, first of 4096 dirs, second of 64 dirs. Creating
>> the hash dirs took "only" a few minutes,
>> but copying 10000 files made my HD scream for 120s! It takes only 10s
>> when working in a single directory.
>>
>> The filenames are all 27 chars and the first chars can be used to hash
>> the files.
>>
>> My question is: which filesystem, and how should I store these files?
>
> Did you try XFS? Deletes may be slow, but apart from that it did a nice
> job when I last used it. But we had only around 50.000.000 files at the
> time.
> However, ext3 also worked quite well after *removing* dir_index.
>
> Also, did you run an x86_64 kernel? We were having all kinds of trouble
> with big boxes and the i686-PAE kernel, because the dentry and inode
> caches were very small.
>
> Simon
>
> _______________________________________________
> CentOS mailing list
> CentOS at centos.org
> http://lists.centos.org/mailman/listinfo/centos
>

--
Alain Spineux | aspineux gmail com
Monitor your iT & Backups | http://www.magikmon.com
Free Backup front-end | http://www.magikmon.com/mksbackup
Your email 100% available | http://www.emailgency.com
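
[Archive note: a minimal sketch, not from the thread, of the two-level hash-directory scheme the original poster describes (a first level of 4096 dirs and a second level of 64 dirs, bucketed from the leading characters of the 27-char filenames). The function name, bucket encoding, and use of MD5 are all hypothetical choices for illustration.]

```python
import hashlib
import os

def hashed_path(root, name):
    """Map a filename to root/<level1>/<level2>/<name>.

    Hypothetical scheme: hash the leading characters of the name,
    take 3 hex digits for level 1 (16^3 = 4096 buckets) and a value
    modulo 64 for level 2, matching the 4096 x 64 layout from the
    thread. MD5 here is only a cheap, well-distributed hash, not a
    security measure.
    """
    h = hashlib.md5(name[:8].encode()).hexdigest()
    level1 = h[:3]                             # 4096 first-level dirs
    level2 = "%02d" % (int(h[3:7], 16) % 64)   # 64 second-level dirs
    return os.path.join(root, level1, level2, name)
```

The point of deriving the path from the filename alone is that lookups never need a directory scan: any reader can recompute the bucket directly, and each leaf directory holds roughly 250e6 / (4096 * 64) ≈ 950 files, well below the sizes at which ext3/ext4 directory operations were reported to degrade.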