[CentOS] files in a directory limitation
Feizhou
feizhou at graffiti.net
Mon Nov 6 05:05:36 UTC 2006
- Previous message: [CentOS] files in a directory limitation
- Next message: [CentOS] page allocation failure. order:0, mode:0x50
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Benjamin Smith wrote:
> Don't know what you're after - but I've found that having over about 1024
> files in a directory gets sluggish on a number of filesystems.

If you are going to walk through the directory, ext3 is reasonably okay, I think (what's sluggish for you/me? :D). If you know which file you are after, XFS/reiserfs are fast even when there are hundreds of thousands of files in a directory. I cannot say the same for ext3 + htree.

> So when I write code (EG: databases) with file attachments, I use an algorithm
> that results in < 1024 files per directory. :)

This would also work on FreeBSD boxes. FreeBSD hashes the first thousand entries. Not sure about the other BSDs or Solaris.
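For anyone curious what such an algorithm looks like: one common approach (a sketch, not Benjamin's actual code) is to hash the filename and use the first hex bytes of the digest as nested subdirectory names, so each directory level holds at most 256 entries. The `sharded_path` helper and the `/var/attachments` base directory below are hypothetical names for illustration.

```python
import hashlib
import os

def sharded_path(base_dir: str, filename: str, levels: int = 2) -> str:
    """Map a filename to a nested subdirectory path so that no single
    directory accumulates more than 256 subdirectories per level."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    # One hex byte (256 buckets) per level, e.g. base/ab/cd/filename
    parts = [digest[i * 2:i * 2 + 2] for i in range(levels)]
    return os.path.join(base_dir, *parts, filename)

# The same filename always maps to the same shard directory:
path = sharded_path("/var/attachments", "invoice-2006.pdf")
```

With two levels of 256 buckets each, a million attachments average out to roughly 15 files per leaf directory, comfortably under the ~1024-entry threshold mentioned above.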