> Does this number vary depending on the architecture used?

Not sure. I've only got x86 boxen to work with and test on for now.

There is no limit (beyond inode limitations) on files on a filesystem or in a
directory. There *is* a limit on directories in a directory.

Sluggishness and other reports aside, this is pretty easy to test:

    mkdir ~/tmp    # because doing this in an important dir is probably a bad idea
    cd ~/tmp
    for i in {1..63000} ; do mkdir $i ; done

Watch for where the filesystem tells you to go screw yourself. If it doesn't,
then the limit is probably arch specific, and you can increase the number and
try again.

You'll have to remove the tmp dir you created rather than cleaning it out, as
there'll be too many directories and bash will tell you to go screw yourself
if you give it an rmdir *.

Once you're dialed in on where it fails, repeat for files and see if you can
push it over (see the sketch below the sig if you'd rather script that part).
I've been told by a few people smarter than me that no such limit exists, nor
was I able to bump into one during my few minutes of testing:

    for i in {1..63000} ; do touch $i ; done

--
During times of universal deceit, telling the truth becomes a revolutionary act.
  -- George Orwell
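
If you'd rather not eyeball where it dies, here's a rough sketch of the same
loop that stops and reports the first failure. The 63000 ceiling is just the
same arbitrary number as above, and the exact error you hit will depend on
your filesystem.

    #!/bin/bash
    # Create numbered subdirectories until the filesystem refuses, then report
    # where it gave up.  Run it from inside a scratch dir like ~/tmp.
    for i in {1..63000} ; do
        if ! mkdir "$i" 2>/dev/null ; then
            echo "mkdir failed at directory number $i"
            break
        fi
    done

Swap mkdir for touch to run the same check against plain files.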