I haven't tried it myself, but could you use a database to hold all those files instead? At less than 4K per "row", performance from an indexed database might be faster.

On 3/12/11, Alain Spineux <aspineux at gmail.com> wrote:
> Hi
>
> I need to store about 250,000,000 files. Each file is less than 4K.
>
> On ext4 (Fedora 14), the system crawls at 10,000,000 files in the same directory.
>
> I tried to create hash directories, two levels of 4096 dirs = 16,000,000,
> but I had to stop the script creating these dirs after hours,
> and "rm -rf" would have taken days! mkfs was my friend.
>
> I then tried two levels, the first of 4096 dirs, the second of 64 dirs. Creating
> the hash dirs took "only" a few minutes,
> but copying 10,000 files made my HD scream for 120s! It takes only 10s
> when working in a single directory.
>
> The filenames are all 27 chars, and the first chars can be used to hash
> the files.
>
> My question is: which filesystem should I use, and how should I store these files?
>
> Regards
>
> --
> Alain Spineux | aspineux gmail com
> Monitor your IT & Backups | http://www.magikmon.com
> Free Backup front-end | http://www.magikmon.com/mksbackup
> Your email 100% available | http://www.emailgency.com
> _______________________________________________
> CentOS mailing list
> CentOS at centos.org
> http://lists.centos.org/mailman/listinfo/centos