[CentOS] files in a directory limitation

Benjamin Smith lists at benjamindsmith.com
Tue Nov 7 16:56:19 UTC 2006


On Sunday 05 November 2006 01:17, Marcin Godlewski wrote:
> If you know the corresponding rm and ls commands, you can do exactly the
> same thing you did with the mkdir command:
> for i in $(ls); do rm -rf "$i"; done

I sincerely doubt he was deleting the files *manually*... 

There are performance issues on many different filesystems once a directory 
holds more than about 1024 files. (FAT32 is just awful here, ext3 holds up 
somewhat better; I don't know about other filesystems.) 

Typically, in a web-based application I'll track the files in a database 
table like so: 

CREATE TABLE myfiles (
    id          SERIAL PRIMARY KEY,
    name        VARCHAR,
    mimeheaders VARCHAR
);

Then I use the id field to reference the file, and run that through a simple 
algorithm that creates directories in groups of 1000 so that there are never 
more than 1002 files in a directory. 
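
A minimal sketch of that bucketing idea, assuming Python and a hypothetical 
storage root of /var/www/files (the original post doesn't name a language or 
a directory layout):

import os

FILE_ROOT = "/var/www/files"  # hypothetical storage root, not from the original post

def path_for_id(file_id):
    # Bucket ids in groups of 1000 so that no directory ever holds more
    # than about a thousand files: e.g. id 123456 -> /var/www/files/123/123456
    bucket = str(file_id // 1000)
    directory = os.path.join(FILE_ROOT, bucket)
    os.makedirs(directory, exist_ok=True)
    return os.path.join(directory, str(file_id))

For example, path_for_id(123456) yields /var/www/files/123/123456, and ids 
0 through 999 all land in the "0" bucket.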

Then I access the files through a script that does a DB lookup to get the 
meta info, kicks out the appropriate headers, and then spits out the contents 
of the file. 
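
A rough illustration of that lookup-and-serve step (a sketch only, not the 
original script: it assumes Python, the standard sqlite3 module, a WSGI entry 
point, and the path_for_id() helper sketched above):

import sqlite3
from urllib.parse import parse_qs

def serve_file(environ, start_response):
    # Look up the metadata row for the requested id, emit the stored
    # MIME headers, then stream the file contents from disk.
    file_id = int(parse_qs(environ["QUERY_STRING"])["id"][0])
    db = sqlite3.connect("myfiles.db")  # hypothetical database file
    name, mimeheaders = db.execute(
        "SELECT name, mimeheaders FROM myfiles WHERE id = ?", (file_id,)
    ).fetchone()
    start_response("200 OK", [("Content-Type", mimeheaders)])
    with open(path_for_id(file_id), "rb") as f:
        return [f.read()]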

This works very well, scales nicely to (at least) millions of files, and 
performs lickety split. Dunno if this helps any, but it does work well in 
this context. 

-Ben 
-- 
"The best way to predict the future is to invent it."
- XEROX PARC slogan, circa 1978


