On Mon, January 25, 2010 10:31, Robert Nichols wrote:
> Now if the "{}" string appears more than once then the command line contains that path more than once, but it is essentially impossible to exceed the kernel's MAX_ARG_PAGES this way.
> The only issue with using "-exec command {} \;" for a huge number of files is one of performance. If there are 100,000 matched files, the command will be invoked 100,000 times.
> -- Bob Nichols RNichols42@comcast.net
Since the OP reported that the command he used:
    find -name "*.access*" -mtime +2 -exec rm {} \;
in fact failed, one may infer that more than performance is at issue.
The OP's problem lies not with the -exec construction but with the unstated, but nonetheless present, './' of his find invocation: find therefore begins a recursive descent into that directory tree. Since we are given neither the depth of that tree nor its contents, we may only infer that some number of files therein causes the MAX_ARG_PAGES limit to be exceeded before the recursion returns.
I deduce that he could provide the -prune option or the -maxdepth 0 option to avoid this recursion instead. I have not tried either, but I understand that one, or both, should work.
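For instance, assuming the *.access* files all sit at the top level of the directory being cleaned (an assumption; the OP never said), the descent could be limited like this (an untested sketch; note that it is -maxdepth 1 that confines find to the starting directory, while -maxdepth 0 would consider only '.' itself):

    # stay in the starting directory instead of recursing into subdirectories
    find . -maxdepth 1 -name "*.access*" -mtime +2 -exec rm {} \;

    # or, where find supports -exec ... +, batch many names per rm invocation
    find . -maxdepth 1 -name "*.access*" -mtime +2 -exec rm {} +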
James B. Byrne wrote:
> [snip]
> The OP's problem lies not with the -exec construction but with the unstated, but nonetheless present, './' of his find invocation: find therefore begins a recursive descent into that directory tree. Since we are given neither the depth of that tree nor its contents, we may only infer that some number of files therein causes the MAX_ARG_PAGES limit to be exceeded before the recursion returns.
Find just emits the filenames as encountered, so _no_ number of files should be able to cause an error. An infinitely deep directory tree might, or recursively linked directories, but only after a considerable amount of time and churning to exhaust the machine's real and virtual memory.
> I deduce that he could provide the -prune option or the -maxdepth 0 option to avoid this recursion instead. I have not tried either, but I understand that one, or both, should work.
I'd say it is more likely that the command that resulted in an error wasn't exactly what was posted or there is a filesystem problem.
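That streaming behavior is simple to confirm: find writes each name as soon as it encounters it rather than collecting the whole list first, so even a huge tree starts producing output immediately. A quick sketch:

    # head exits after ten lines, long before the walk completes;
    # find never holds the full file list in memory
    find / 2>/dev/null | head -10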
On Mon, January 25, 2010 13:40, Les Mikesell wrote:
> I'd say it is more likely that the command that resulted in an error wasn't exactly what was posted or there is a filesystem problem.
I do not consider a file system issue, whether error or corruption, highly probable in this case. It might be, however, that something returned by the find caused rm itself to choke.
On 1/26/2010 11:42 AM, James B. Byrne wrote:
> It might be, however, that something returned by the find caused rm itself to choke.
A file that causes one of the per-file rm invocations to choke shouldn't bother the rest. And while file system corruption isn't likely, it is still a possible cause of generally strange behavior. The most probable explanation still seems to be an unquoted * on the line that was actually typed when the error was reported.
Les Mikesell wrote:
> The most probable explanation still seems to be an unquoted * on the line that was actually typed when the error was reported.
Indeed, upon closer examination, that message:
    -bash: /usr/bin/find: Argument list too long
came from the login shell, not from 'find', and indicates that the shell got a failure return with errno==E2BIG when it tried to exec() /usr/bin/find. The 'find' command was never executed.
What was your original find command?
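That failure mode can be reproduced without find being involved at all, since E2BIG comes from the kernel whenever execve() is handed more argument bytes than it allows. A rough sketch (the limit, and the number of files needed to hit it, vary by system):

    # the kernel's limit on the combined size of argv and the environment
    getconf ARG_MAX

    # in a directory with enough matching files, the shell's expansion of
    # the unquoted wildcard overflows that limit, exec() fails with E2BIG,
    # and the shell itself prints:
    #   -bash: /usr/bin/find: Argument list too long
    find *.access*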
On Tue, Jan 26, 2010 at 1:15 PM, Les Mikesell lesmikesell@gmail.com wrote:
> The most probable explanation still seems to be an unquoted * on the line that was actually typed when the error was reported.
To illustrate what you and others were saying I did the following:
[kwan@linbox find_test]$ cat add_one.sh
#!/bin/sh
COUNTER=`cat counter`
COUNTER=`expr ${COUNTER} + 1`
echo ${COUNTER}
echo "${COUNTER}" > counter

[kwan@linbox find_test]$ mkdir foo; cd foo; for i in $(seq 1 1 20); do touch a${i}; done

[kwan@linbox find_test]$ ls
add_one.sh  counter  foo

[kwan@linbox find_test]$ ls foo
a1  a10  a11  a12  a13  a14  a15  a16  a17  a18  a19  a2  a20  a3  a4  a5  a6  a7  a8  a9

[kwan@linbox find_test]$ echo "0" > counter
[kwan@linbox find_test]$ find foo -name "a*" | xargs ./add_one.sh
1
[kwan@linbox find_test]$ echo "0" > counter
[kwan@linbox find_test]$ find foo -name "a*" -exec ./add_one.sh {} \;
1
2
[snip]
18
19
20

Finally:

[kwan@linbox find_test]$ find foo -name a* -exec ./add_one.sh {} \;
[kwan@linbox find_test]$
(This last one produces no output because the a* is not quoted and is therefore expanded by the shell before find ever sees it: in the current directory a* matches add_one.sh, so the command actually run is find foo -name add_one.sh, which matches nothing in foo.)
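As a refinement of the xargs variant above (a sketch using GNU find and xargs options): quoting the pattern keeps the shell from expanding it, and the null-terminated form still batches arguments while surviving filenames that contain spaces or newlines:

    # the quoted pattern reaches find untouched; -print0 / -0 delimit
    # names with NUL bytes, so whitespace in filenames cannot split them
    find foo -name 'a*' -print0 | xargs -0 ./add_one.sh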
James B. Byrne wrote:
> Since the OP reported that the command he used:
>     find -name "*.access*" -mtime +2 -exec rm {} \;
> in fact failed, one may infer that more than performance is at issue.
> [snip]
I still suspect that the OP had an unquoted wildcard someplace in his original command: either a find * -name ..., or a find . -name *.access* ...
I see people forget to quote the argument to -name all the time. That normally works as long as the wildcard matches no more than one file in the current directory. But if it matches more than one, find returns an error, since the second expanded name is unlikely to be a valid find option.
If there are too many matches in the current directory, the unquoted example would fail even before the find command could execute.
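Both failure modes are easy to demonstrate (a sketch; the exact error text varies with the findutils version):

    $ touch x.access.1 x.access.2
    $ find . -name *.access* -mtime +2
    find: paths must precede expression: x.access.2

    # and with enough matching files, the expansion instead overflows
    # ARG_MAX, so exec() fails before find ever runs:
    #   -bash: /usr/bin/find: Argument list too long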