Just curious but is there a limitation on the number of files in a directory? I am using ext3.
I'm not concerned about file size just the number of files.
Thanks,
Jerry
On Tue, Oct 17, 2006 at 12:59:56PM -0400, Jerry Geis wrote:
Just curious but is there a limitation on the number of files in a directory? I am using ext3.
I'm not concerned about file size just the number of files.
I don't think there is actually a hardcoded limit, but I'm not sure. I can say for certain that 20000 files will work, but that is as high as I ever got.
I can also say for certain that having that many files in a single directory is slow as hell, and I would never do it again.
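For what it's worth, the sluggishness is easy to measure once a directory is that full; a rough sketch (the ~/tmp path is just a placeholder for a heavily populated directory, such as the ones created in the tests later in this thread):
time ls -f ~/tmp | wc -l   # -f skips sorting, so this mostly measures raw directory reads
time ls ~/tmp | wc -l      # with sorting; noticeably slower on very large directories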
[]s
-- Rodrigo Barbosa "Quid quid Latine dictum sit, altum viditur" "Be excellent to each other ..." - Bill & Ted (Wyld Stallyns)
On 10/17/06, Jerry Geis geisj@pagestation.com wrote:
Just curious but is there a limitation on the number of files in a directory? I am using ext3.
There is an upper bound for directories within a directory of around 63,000. I'm not certain if this applies to files as well.
Jim Perrin wrote:
On 10/17/06, Jerry Geis geisj@pagestation.com wrote:
Just curious but is there a limitation on the number of files in a directory? I am using ext3.
There is an upper bound for directories within a directory of around 63,000. I'm not certain if this applies to files as well.
Does this number vary depending on the architecture used? Regards, Michael
Does this number vary depending on the architecture used?
Not sure. I've only got x86 boxen to work with and test on for now. There is no limit (beyond inode limitations) on files on a filesystem or in a directory. There *is* a limit on directories in a directory.
Sluggishness and other reports aside, this is pretty easy to test.
mkdir ~/tmp   # because doing this in an important dir is probably a bad idea
cd ~/tmp
for i in {1..63000} ; do mkdir $i ; done
Watch for where the filesystem tells you to go screw yourself. If it doesn't, then it's probably arch specific, and you can increase the number and try again. You'll have to remove the tmp dir you created rather than cleaning it out, as there'll be too many directories and bash will tell you to go screw yourself if you give it an rmdir *
Once you're dialed in on where it fails, repeat for files and see if you can push it over. (I've been told by a few people smarter than me that no such limit exists, nor was I able to bump into one during my few minutes of testing.)
for i in {1..63000} ; do touch $i ; done
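If you'd rather have the loop stop at the exact failure point instead of scrolling past thousands of error messages, something like this should do it (a sketch under the same assumptions as above):
mkdir -p ~/tmp && cd ~/tmp
i=1
while mkdir "$i" 2>/dev/null ; do i=$((i + 1)) ; done
echo "mkdir failed at entry $i"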
On 11/4/2006 2:21 PM, Jim Perrin wrote:
Does this number vary depending on the architecture used?
Not sure. I've only got x86 boxen to work with and test on for now. There is no limit (beyond inode limitations) on files on a filesystem or in a directory. There *is* a limit on directories in a directory.
I can only partly agree with that statement, because ...
mkdir ~/tmp   # because doing this in an important dir is probably a bad idea
cd ~/tmp
for i in {1..63000} ; do mkdir $i ; done
... this was limited to 31998 directories on an ext3 fs:
mkdir: cannot create directory `31999': Too many links
mkdir: cannot create directory `32000': Too many links
and so on...
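The "Too many links" errors line up with ext3's hard-link limit of 32000 per inode: every subdirectory adds a '..' link back to its parent, which leaves room for 31998 subdirectories. As a rough check (assuming the test directory is ~/tmp, as in the loop above), you can watch the parent's link count approach that ceiling:
stat -c %h ~/tmp   # hard-link count of the parent directory; it tops out at 32000 on ext3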
I tested the same thing on an XFS fs, and the interesting thing was that I got further... Creating {1..63000} wasn't any problem, and I also added these dirs to the same directory without a problem:
for i in {1..63000} ; do mkdir abc${i} ; done
for i in {1..63000} ; do mkdir def${i} ; done
I didn't want to spend more time on testing. It took me a whole lotta time to delete that stuff. ;-) Maybe under XFS directories are just limited by inodes and nothing else.
Both cases, ext3 and XFS, were tested on x86_64 (dual Xeon). cya Michael
Michael Kress wrote:
I tested the same thing on an XFS fs, and the interesting thing was that I got further... Creating {1..63000} wasn't any problem, and I also added these dirs to the same directory without a problem:
for i in {1..63000} ; do mkdir abc${i} ; done
for i in {1..63000} ; do mkdir def${i} ; done
I didn't want to spend more time on testing. It took me a whole lotta time to delete that stuff. ;-) Maybe under XFS directories are just limited by inodes and nothing else.
Could you tell me why it took you so much time to delete that stuff?
If you know the for, rm, and ls commands, you can do exactly the same thing you did with mkdir:
for i in $(ls) ; do rm -rf "$i" ; done
That's it, no big deal to remove those files. :) Cheers!
I tested the same thing on an XFS fs, and the interesting thing was that
<snip>
It took me a whole lotta time to delete that stuff. ;-) Maybe under XFS directories are just limited by inodes and nothing else.
Could you tell me why it took you so much time to delete that stuff?
This is a known performance problem of XFS.
If you know the for, rm, and ls commands, you can do exactly the same thing you did with mkdir:
for i in $(ls) ; do rm -rf "$i" ; done
That's it, no big deal to remove those files. :)
XFS is slow on large deletes. That's all there is to it.
On 11/5/2006 10:17 AM, Marcin Godlewski wrote:
It took me a whole lotta time to delete that stuff. ;-) Maybe under XFS directories are just limited by inodes and nothing else.
Could you tell me why it took you so much time to delete that stuff?
According to 'time', the rm -rf on these dirs took about 3 min 30 sec, which I think is a lot, but as Feizhou already remarked, it's a performance issue with XFS. The ext3 deletes were somewhat faster, but I couldn't really compare since I couldn't create as many entries; still, running find | wc in parallel with the rm gave me the impression that the ext3 rm went faster.
If you know the for, rm, and ls commands, you can do exactly the same thing you did with mkdir:
for i in $(ls) ; do rm -rf "$i" ; done
That's it, no big deal to remove those files. :)
I did rm -rf /mnt/xfspart/tmp/ with all the files inside. I don't think creating a separate process for each entry would be faster (I didn't try it, but I'm quite sure). One could also try 'find ... | xargs rm' to avoid bash's limitations with the asterisk (*) while still not spawning a process per file (a rough sketch below). cu - Michael
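A minimal sketch of that find | xargs approach, using the path from the post above (the -print0/-0 pair just keeps odd filenames safe):
cd /mnt/xfspart/tmp
find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -rf   # batches many names per rm invocation
# or simply remove the whole tree in one call:
# rm -rf /mnt/xfspart/tmp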
On Sunday 05 November 2006 01:17, Marcin Godlewski wrote:
If you know the for, rm, and ls commands, you can do exactly the same thing you did with mkdir:
for i in $(ls) ; do rm -rf "$i" ; done
I sincerely doubt he was deleting the files *manually*...
There are performance issues on many different filesystems when you have more than about 1024 files in a directory. (FAT32 is just awful here, ext3 holds up somewhat better; I don't know about other filesystems.)
Typically, in a web-based application I'll map files to a database table like so:
create table myfiles (
    id serial primary key,   -- unique integer id
    name varchar,
    mimeheaders varchar
);
Then I use the id field to reference the file, and run it through a simple algorithm that creates directories in groups of 1000 so that there are never more than 1002 files in a directory.
Then I access the files through a script that does a DB lookup to get the meta info, sends the appropriate headers, and then spits out the contents of the file.
This works very well, scales nicely to (at least) millions of files, and performs lickety-split. Don't know if this helps any, but it does work well in this context.
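For illustration, a minimal shell sketch of that bucketing scheme (the example id, the files/ path, and upload.bin are placeholders, not Ben's actual code):
id=1234567                  # numeric primary key from the myfiles table
bucket=$(( id / 1000 ))     # integer division -> bucket 1234
dir="files/$bucket"         # roughly 1000 data files land in each bucket directory
mkdir -p "$dir"
cp upload.bin "$dir/$id"    # store the attachment under its id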
-Ben
On 11/5/2006 10:46 AM, Tim Uckun wrote:
... this was limited to 31998 directories on an ext3 fs:
mkdir: cannot create directory `31999': Too many links
mkdir: cannot create directory `32000': Too many links
and so on...
Did you experience any performance problems at all?
Well, no, not really. Some little lags, but nothing remarkable. BTW, when I use bonnie++, the rest of my machine gets remarkably slow. I think that's a limitation of IBM-based architectures (I've got a dual Xeon HT on a Supermicro board). Any opinions on that? cu - Michael
There is a limit on the number of inodes in a filesystem. This applies to both files and directories.
You should check the ext3 manuals/docs to see how to calculate that limit.
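For an existing ext3 filesystem you can also just read the inode budget directly rather than calculating it; a quick sketch (the mount point and device name are placeholders):
df -i /home                            # inode usage and free inodes per mounted filesystem
tune2fs -l /dev/sda1 | grep -i inode   # total inode count chosen at mkfs time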
Nicholas Anderson, Unix Systems Administrator, LPIC-1 Certified, Rede Fiocruz
Jerry Geis wrote:
Just curious but is there a limitation on the number of files in a directory? I am using ext3.
I'm not concerned about file size just the number of files.
Thanks,
Jerry
Don't know what you're after - but I've found that having over about 1024 files in a directory gets sluggish on a number of filesystems.
So when I write code (e.g., databases) with file attachments, I use an algorithm that results in < 1024 files per directory.
-Ben
On Tuesday 17 October 2006 09:59, Jerry Geis wrote:
Just curious but is there a limitation on the number of files in a directory? I am using ext3.
I'm not concerned about file size just the number of files.
Thanks,
Jerry
Benjamin Smith wrote:
Don't know what you're after - but I've found that having over about 1024 files in a directory gets sluggish on a number of filesystems.
If you are going to walk through the directory, ext3 is reasonably okay, I think (what's sluggish for you/me? :D). If you know which file you are after, XFS/reiserfs are fast even when there are hundreds of thousands of files in a directory. I cannot say the same for ext3 + htree.
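For reference, the htree feature shows up as dir_index in an ext3 filesystem's feature list; a quick way to check (the device name is a placeholder):
tune2fs -l /dev/sda1 | grep -i features   # look for dir_index among the listed features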
So when I write code (e.g., databases) with file attachments, I use an algorithm that results in < 1024 files per directory.
:) This would also work on FreeBSD boxes. FreeBSD hashes the first thousand entries. Not sure about the other BSDs or Solaris.