Since when is there a limit on how long a directory listing CentOS can show (ls), or how large a directory can be removed (rm)? It is really annoying to say, for example
rm -rf /var/amavis/tmp
and get only "argument list too long" as feedback.
Is there a way to go round this problem?
I have CentOS 5.2.
- Jussi
-- Jussi Hirvi * Green Spot Topeliuksenkatu 15 C * 00250 Helsinki * Finland Tel. & fax +358 9 493 981 * Mobile +358 40 771 2098 (only sms) jussi.hirvi@greenspot.fi * http://www.greenspot.fi
2008/10/17 Jussi Hirvi greenspot@greenspot.fi:
Since when is there a limit on how long a directory listing CentOS can show (ls), or how large a directory can be removed (rm)? It is really annoying to say, for example
rm -rf /var/amavis/tmp
and get only "argument list too long" as feedback.
Is there a way to go round this problem?
I have CentOS 5.2.
- Jussi
try something like: for i in /var/amavis/tmp/*; do rm -rf $i; done
Laurent
Satchel Paige - "Don't look back. Something might be gaining on you."
On Fri, Oct 17, 2008 at 4:36 AM, Laurent Wandrebeck l.wandrebeck@gmail.com wrote:
2008/10/17 Jussi Hirvi greenspot@greenspot.fi:
Since when is there a limit on how long a directory listing CentOS can show (ls), or how large a directory can be removed (rm)? It is really annoying to say, for example
rm -rf /var/amavis/tmp
and get only "argument list too long" as feedback.
Is there a way to go round this problem?
I have CentOS 5.2.
- Jussi
try something like: for i in /var/amavis/tmp/*; do rm -rf $i; done
it should be:
for i in `ls /var/amavis/tmp`; do rm $i; done
thad wrote:
Satchel Paige - "Don't look back. Something might be gaining on you."
On Fri, Oct 17, 2008 at 4:36 AM, Laurent Wandrebeck l.wandrebeck@gmail.com wrote:
2008/10/17 Jussi Hirvi greenspot@greenspot.fi:
Since when is there a limit on how long a directory listing CentOS can show (ls), or how large a directory can be removed (rm)? It is really annoying to say, for example
rm -rf /var/amavis/tmp
and get only "argument list too long" as feedback.
Is there a way to go round this problem?
I have CentOS 5.2.
- Jussi
try something like: for i in /var/amavis/tmp/*; do rm -rf $i; done
it should be:
for i in `ls /var/amavis/tmp`; do rm $i; done
These shouldn't make any difference. The limit is on the size of the expanded shell command line. The original example won't cause it. The ones that expand a list with a * or the output of ls may. The right solution is to let rm recurse with -r, or to pipe a potentially long list to xargs.
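To make that concrete, here is a minimal sketch of both approaches, reusing the directory from the original post (and assuming, as later posts point out, that the temp-file names contain no whitespace):

rm -rf /var/amavis/tmp                       # let rm do the recursion itself; no glob is ever expanded
find /var/amavis/tmp -type f | xargs rm -f   # or stream the name list to xargs, which batches the rm calls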
Les Mikesell wrote:
thad wrote:
it should be:
for i in `ls /var/amavis/tmp`; do rm $i; done
These shouldn't make any difference. The limit is on the size of the expanded shell command line.
Really?
$ M=0; N=0; for W in `find /usr -xdev 2>/dev/null`; do M=$(($M+1)); N=$(($N+${#W}+1)); done; echo $M $N
156304 7677373
vs.
$ /bin/echo `find /usr -xdev 2>/dev/null`
bash: /bin/echo: Argument list too long
For the first case, the shell never tries to pass the list as command arguments. It builds the list internally, limited only by memory size, and processes the words one by one. As a final test case, by using the shell's builtin 'echo' the whole 7-plus megabytes gets echoed to the terminal:
$ echo `find /usr -xdev 2>/dev/null`
(no errors -- just lots of output)
Anyway, the "for i in `ls ...`" solution breaks for paths that include embedded white space.
Robert Nichols wrote:
These shouldn't make any difference. The limit is on the size of the expanded shell command line.
Really?
$ M=0; N=0; for W in `find /usr -xdev 2>/dev/null`; do M=$(($M+1)); N=$(($N+${#W}+1)); done; echo $M $N
156304 7677373
vs.
$ /bin/echo `find /usr -xdev 2>/dev/null`
bash: /bin/echo: Argument list too long
For the first case, the shell never tries to pass the list as command arguments. It builds the list internally, limited only by memory size, and processes the words one by one.
Is that peculiar to bash? I thought the `command` construct was expanded by shells into the command line before being evaluated.
On Fri, 2008-10-17 at 23:52 -0500, Les Mikesell wrote:
Robert Nichols wrote:
These shouldn't make any difference. The limit is on the size of the expanded shell command line.
Really?
$ M=0; N=0; for W in `find /usr -xdev 2>/dev/null`; do M=$(($M+1)); N=$(($N+${#W}+1)); done; echo $M $N
156304 7677373
vs.
$ /bin/echo `find /usr -xdev 2>/dev/null`
bash: /bin/echo: Argument list too long
For the first case, the shell never tries to pass the list as command arguments. It builds the list internally, limited only by memory size, and processes the words one by one.
Is that peculiar to bash? I thought the `command` construct was expanded by shells into the command line before being evaluated.
IIRC, none of the above make a "command line". Everything but the
`find /usr -xdev 2>/dev/null`
is a bash "internal command". IIRC, what should happen here is a new instance of bash is spawned as part of a pipeline that sends the output of the find (which is "exec'd" by that new instance of bash, the child) into the parent. The parent reads the input from the pipe and can do whatever it wants, in this case build an array. It then uses the array as data to the loop.
The "command line" is never constructed with the long list. It is only passed to the child (the new instance of bash that is part of the pipeline). That instance receives an argument count and an array of pointers to the arguments. In "C" parlance it looks something like this.
main(argc, *argv[])    /* could be **argv instead */
{
    /* stuff to do */
    . . .
}
The "*argv[]" pointers point to the parts of the "command line",
find /usr -xdev
The child execs find, passing the "/usr" and "-xdev" as arguments to find (which has a similar "main" construct), another "command line". The I/O redirection was already done by the parent, so the child need not even know that "stdout" is a pipe.
The longest command line in this case is "find /usr -xdev", 15 characters. Find "sees" only 10 characters.
I hope I've remembered correctly, that this is not FUD, and that it helps someone.
<snip>
On Sat, 2008-10-18 at 06:00 -0400, William L. Maltby wrote:
<snip>
Ok. 3rd cup of coffee has made its way into various of my systems. A minor correction (but important for us pedantic typers) is below.
main(argc, *argv[]) /* could be **argv instead */
main(int argc, char *argv[]) /* could be **argv instead */
{ /* stuff to do */ . . . }
<snip>
Les Mikesell wrote:
Robert Nichols wrote:
These shouldn't make any difference. The limit is on the size of the expanded shell command line.
Really?
$ M=0; N=0; for W in `find /usr -xdev 2>/dev/null`; do M=$(($M+1)); N=$(($N+${#W}+1)); done; echo $M $N
156304 7677373
vs.
$ /bin/echo `find /usr -xdev 2>/dev/null`
bash: /bin/echo: Argument list too long
For the first case, the shell never tries to pass the list as command arguments. It builds the list internally, limited only by memory size, and processes the words one by one.
Is that peculiar to bash? I thought the `command` construct was expanded by shells into the command line before being evaluated.
I can't answer for how any particular shell allocates its internal memory, but yes, the shell does read the entire output from `command` before evaluating it. If this data is simply being used internally it never gets passed to the kernel as an argument to exec() and thus can never result in errno==E2BIG (7, "Argument list too long").
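A quick way to see that distinction for yourself (a sketch; the output of seq is just filler used to build a large word list):

$ echo `seq 1 400000` > /dev/null        # builtin echo: the list never leaves the shell, so no error
$ /bin/echo `seq 1 400000` > /dev/null   # external echo: the whole list is handed to exec(), which on a
                                         # stock CentOS 5 kernel fails with "Argument list too long"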
On Oct 17, 2008, at 7:58 PM, thad wrote:
Satchel Paige - "Don't look back. Something might be gaining on you."
On Fri, Oct 17, 2008 at 4:36 AM, Laurent Wandrebeck l.wandrebeck@gmail.com wrote:
2008/10/17 Jussi Hirvi greenspot@greenspot.fi:
Since when is there a limit on how long a directory listing CentOS can show (ls), or how large a directory can be removed (rm)? It is really annoying to say, for example
rm -rf /var/amavis/tmp
and get only "argument list too long" as feedback.
Is there a way to go round this problem?
I have CentOS 5.2.
- Jussi
try something like: for i in /var/amavis/tmp/*; do rm -rf $i; done
it should be:
for i in `ls /var/amavis/tmp`; do rm $i; done
Taking into account the valid objections others have mentioned, such as problems of embedded whitespace in names, rm -rf $i and rm $i above are not the same. Even if there are no directories under /var/amavis/tmp/, depending on aliases, etc., rm $i may prompt you for confirmation. The other will go ahead and do the remove if you have permission to do it (or at least the -f).
The -r for files is unnecessary, and offends me when I see people do it, but doesn't really cause any harm :)
I personally either rm -rf the directory, and recreate the directory if necessary, or do a find /var/amavis/tmp -type f ... because of experience through the years with command lines that were too long. Unixes in the past had even smaller limits. I use xargs most frequently, and if things fail, I may just do -exec rm -f {} \; on the find.
piping ls to xargs should do the trick. man xargs for details.
Jussi Hirvi wrote:
Since when is there a limit on how long a directory listing CentOS can show (ls), or how large a directory can be removed (rm)? It is really annoying to say, for example
rm -rf /var/amavis/tmp
and get only "argument list too long" as feedback.
Is there a way to go round this problem?
I have CentOS 5.2.
- Jussi
-- Jussi Hirvi * Green Spot Topeliuksenkatu 15 C * 00250 Helsinki * Finland Tel. & fax +358 9 493 981 * Mobile +358 40 771 2098 (only sms) jussi.hirvi@greenspot.fi * http://www.greenspot.fi
Lawrence Guirre (lawrence.guirre@gmail.com) wrote (17.10.2008 12:55):
piping ls to xargs should do the trick. man xargs for details.
Ok, thanks for ideas, Laurent and Lawrence.
A strange limitation in ls and rm, though. My friend said he hasn't seen that in Fedora.
- Jussi
-- Jussi Hirvi * Green Spot Topeliuksenkatu 15 C * 00250 Helsinki * Finland Tel. & fax +358 9 493 981 * Mobile +358 40 771 2098 (only sms) jussi.hirvi@greenspot.fi * http://www.greenspot.fi
piping ls to xargs should do the trick. man xargs for details.
Ok, thanks for ideas, Laurent and Lawrence.
A strange limitation in ls and rm, though. My friend said he hasn't seen that in Fedora.
Are you sure you are comparing apples to apples? There is nothing particularly Centos specific about this problem. I've seen it on a variety of *NIX systems over the years, though I presume some distributions or UNIX variants may have upped the buffer size.
Here is an interesting blog post which illustrates how you can get into this kind of trouble:
http://stevenroddis.com/2006/10/07/binrm-argument-list-too-long/index.html
-geoff
Jussi Hirvi wrote:
Lawrence Guirre (lawrence.guirre@gmail.com) wrote (17.10.2008 12:55):
piping ls to xargs should do the trick. man xargs for details.
Ok, thanks for ideas, Laurent and Lawrence.
A strange limitation in ls and rm, though. My friend said he hasn't seen that in Fedora.
Then he doesn't have as many files in the directory as you have:
#define ARG_MAX 131072 /* # bytes of args + environ for exec() */
That's from /usr/include/linux/limits.h. Also see http://partmaps.org/era/unix/arg-max.html
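You can query that value directly rather than digging through the headers; on a stock CentOS 5 kernel it should report the same number as the #define above:

$ getconf ARG_MAX
131072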
Ralph
Jussi Hirvi wrote:
Lawrence Guirre (lawrence.guirre@gmail.com) wrote (17.10.2008 12:55):
piping ls to xargs should do the trick. man xargs for details.
Ok, thanks for ideas, Laurent and Lawrence.
A strange limitation in ls and rm, though. My friend said he hasn't seen that in Fedora.
This limitation has been removed from more recent kernels.
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=...
http://www.gnu.org/software/coreutils/faq/#Argument-list-too-long
Jeremy
Jeremy Sanders wrote:
piping ls to xargs should do the trick. man xargs for details.
Ok, thanks for ideas, Laurent and Lawrence.
A strange limitation in ls and rm, though. My friend said he hasn't seen that in Fedora.
This limitation has been removed from more recent kernels.
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=...
http://www.gnu.org/software/coreutils/faq/#Argument-list-too-long
It is probably still best not to expect the ability to build infinitely long command lines. You can hit some other limit eventually.
On Fri, Oct 17, 2008 at 4:09 AM, Jeremy Sanders jeremy@jeremysanders.net wrote:
This limitation has been removed from more recent kernels.
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=...
http://www.gnu.org/software/coreutils/faq/#Argument-list-too-long
This is usually not a kernel issue at all - it is a shell issue. The limitation is the len
Jussi Hirvi wrote:
piping ls to xargs should do the trick. man xargs for details.
Ok, thanks for ideas, Laurent and Lawrence.
A strange limitation in ls and rm, though. My friend said he hasn't seen that in Fedora.
This issue is in Fedora, Ubuntu, CentOS, RHEL, (put any other linux version you want here).
When you get too many files in a directory, you will receive this error. The same SOURCE code is compiled regardless of the "Distro".
As you have seen, there are many solutions to this problem ... HOWEVER, picking a new distro is not one of them.
Most people never hit this limitation, but it is certainly possible and present in all versions of Linux.
Thanks, Johnny Hughes
On Fri, Oct 24, 2008 at 06:49:02AM -0500, Johnny Hughes wrote:
Jussi Hirvi wrote:
piping ls to xargs should do the trick. man xargs for details.
Ok, thanks for ideas, Laurent and Lawrence.
A strange limitation in ls and rm, though. My friend said he hasn't seen that in Fedora.
This issue is in Fedora, Ubuntu, CentOS, RHEL, (put any other linux version you want here).
When you get too many files in a directory, you will receive this error. The same SOURCE code is compiled regardless of the "Distro".
As you have seen, there are many solutions to this problem ... HOWEVER, picking a new distro is not one of them.
Most people never hit this limitation, but it is certainly possible and present in all versions of Linux.
Thanks, Johnny Hughes
I've always understood it to be an issue with command-line length: somewhere (probably in bash) there's a limit on how big a buffer is/can be used for storing the command line.
On Fri, Oct 24, 2008 at 8:48 AM, fred smith fredex@fcshome.stoneham.ma.us wrote:
I've always understood it to be an issue with command-line length: somewhere (probably in bash) there's a limit on how big a buffer is/can be used for storing the command line.
There are two possible buffer limits one could encounter: tty driver input line buffer (which is not an issue for bash because readline avoids it) and kernel exec space for the arguments plus environment passed to a new process. Only the second one causes the error message that started this thread, and previous posts have pointed out that recent Linux kernels have effectively removed that limit (see message from Jeremy Sanders).
On Fri, Oct 24, 2008, Bart Schaefer wrote:
On Fri, Oct 24, 2008 at 8:48 AM, fred smith fredex@fcshome.stoneham.ma.us wrote:
I've always understood it to be an issue with command-line length: somewhere (probably in bash) there's a limit on how big a buffer is/can be used for storing the command line.
There are two possible buffer limits one could encounter: tty driver input line buffer (which is not an issue for bash because readline avoids it) and kernel exec space for the arguments plus environment passed to a new process. Only the second one causes the error message that started this thread, and previous posts have pointed out that recent Linux kernels have effectively removed that limit (see message from Jeremy Sanders).
While current Linux kernels may have removed the limit, this has been a common issue on all *nix systems for decades, which is why xargs was written.
As a general rule, it's best to use find to pipe lists to xargs rather than depend on the characteristics of the underlying system. This might be called defensive programming, as it ensures that scripts will work anywhere, not just on the system you are using today.
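One sketch of that defensive pattern, again with the directory from the original post: capping the batch size keeps every rm invocation comfortably below the limit of any system the script might land on (the 500 is an arbitrary choice):

find /var/amavis/tmp -type f -print | xargs -n 500 rm -f   # at most 500 names per rm invocation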
Programming to the lowest common denominator may not feel sexy, but it can prevent many headaches in the future. I spent quite a bit of time many years ago getting a large FORTRAN system working that had been written on a system that used 7-character variable names where standard FORTRAN only permitted 6 (it was amazing how many of the variable names differed only in the 7th character). While this would be relatively easy to deal with today, it was a bitch when all programs were on 80-column punch cards.
Bill
Bill Campbell wrote:
There are two possible buffer limits one could encounter: tty driver input line buffer (which is not an issue for bash because readline avoids it) and kernel exec space for the arguments plus environment passed to a new process. Only the second one causes the error message that started this thread, and previous posts have pointed out that recent Linux kernels have effectively removed that limit (see message from Jeremy Sanders).
While current Linux kernels may have removed the limit,
It's probably a mistake to say that the limit is removed. I think this change just moves the limiting factor elsewhere - to the RAM or virtual memory that happens to be available.
this has been a common issue on all *nix systems for decades, which is why xargs was written.
Recognizing that you do not have infinite buffer space available is a good thing. Keep using xargs.
On Fri, Oct 24, 2008 at 9:31 AM, Bill Campbell centos@celestial.com wrote:
Programming to the lowest common denominator may not feel sexy, but it can prevent many headaches in the future. I spent quite a bit of time many years ago getting a large FORTRAN system working that had been written on a system that use 7 character variable names where standard FORTRAN only permitted 6 (it was amazing how many of the variable names differed only in the 7th character). While this would be relatively easy to deal with today, it was a bitch when all programs were on 80-column punch cards.
Okay, now you're officially old.
(Like me.)
mhr
MHR wrote:
On Fri, Oct 24, 2008 at 9:31 AM, Bill Campbell centos@celestial.com wrote:
Programming to the lowest common denominator may not feel sexy, but it can prevent many headaches in the future. I spent quite a bit of time many years ago getting a large FORTRAN system working that had been written on a system that use 7 character variable names where standard FORTRAN only permitted 6 (it was amazing how many of the variable names differed only in the 7th character). While this would be relatively easy to deal with today, it was a bitch when all programs were on 80-column punch cards.
Okay, now you're officially old.
(Like me.)
mhr
Forgive my senility, but I'm continually amazed how many of us ole fossils are still around, and running Linux! Not to use up too much bandwidth, but the switch from Fortran 2 to 2D, for disk, was a big event way back when. Then Fortran 4 came around! Be still my old heart!
ENW
On Fri, 2008-10-24 at 14:19 -0400, Ed Westphal wrote:
Forgive my senility, but I'm continually amazed how many of us ole fossils are still around, and running Linux! Not to use up too much bandwidth, but the switch from Fortran 2 to 2D, for disk, was a big event way back when. Then Fortran 4 came around! Be still my old heart!
WAY OT, but since the thread has already been hijacked, can't resist a trip down memory lane...
Ah yes, how fondly I remember running FORTRAN from punched tape on the Data General Nova "minicomputer". At least it was not prone to dropping the 80-column card deck and having to re-sort it. Then we got the 8" hard-sector floppy drive. Luxury! Still had to boot it up with the correct sequence of flips of the front panel switches, but actually had somewhere to save output data as well as load programs - up to 256KB. Did real-time data acquisition using an 8-bit A/D and ran fast Fourier transforms to get frequency domain responses using ASCII graphics on a printer.
http://en.wikipedia.org/wiki/Data_General_Nova
Seem to remember an "old farts" thread on this list a while back, so I guess "ole fossils" sounds a bit better. :-)
Phil
on 10-24-2008 3:21 PM Phil Schaffner spake the following:
On Fri, 2008-10-24 at 14:19 -0400, Ed Westphal wrote:
Forgive my senility, but I'm continually amazed how many of us ole fossils are still around, and running Linux! Not to use up too much bandwidth, but the switch from Fortran 2 to 2D, for disk, was a big event way back when. Then Fortran 4 came around! Be still my old heart!
WAY OT, but since the thread has already been hijacked, can't resist a trip down memory lane...
Ah yes, how fondly I remember running FORTRAN from punched tape on the Data General Nova "minicomputer". At least it was not prone to dropping the 80-column card deck and having to re-sort it. Then we got the 8" hard-sector floppy drive. Luxury! Still had to boot it up with the correct sequence of flips of the front panel switches, but actually had somewhere to save output data as well as load programs - up to 256KB. Did real-time data acquisition using an 8-bit A/D and ran fast Fourier transforms to get frequency domain responses using ASCII graphics on a printer.
http://en.wikipedia.org/wiki/Data_General_Nova
Seem to remember an "old farts" thread on this list a while back, so I guess "ole fossils" sounds a bit better. :-)
Phil
I remember numbering on the back of cards with a pencil as a backup when you dropped the deck. And of course you numbered by tens just in case you had to insert something.
Scott Silva wrote:
on 10-24-2008 3:21 PM Phil Schaffner spake the following:
On Fri, 2008-10-24 at 14:19 -0400, Ed Westphal wrote:
Forgive my senility, but I'm continually amazed how many of us ole fossils are still around, and running Linux! Not to use up too much bandwidth, but the switch from Fortran 2 to 2D, for disk, was a big event way back when. Then Fortran 4 came around! Be still my old heart!
WAY OT, but since the thread has already been hijacked, can't resist a trip down memory lane...
Ah yes, how fondly I remember running FORTRAN from punched tape on the Data General Nova "minicomputer". At least it was not prone to dropping the 80-column card deck and having to re-sort it. Then we got the 8" hard-sector floppy drive. Luxury! Still had to boot it up with the correct sequence of flips of the front panel switches, but actually had somewhere to save output data as well as load programs - up to 256KB. Did real-time data acquisition using an 8-bit A/D and ran fast Fourier transforms to get frequency domain responses using ASCII graphics on a printer.
http://en.wikipedia.org/wiki/Data_General_Nova
Seem to remember an "old farts" thread on this list a while back, so I guess "ole fossils" sounds a bit better. :-)
Phil
I remember numbering on the back of cards with a pencil as a backup when you dropped the deck. And of course you numbered by tens just in case you had to insert something.
That's why you punch sequence numbers in the last 8 columns. :-)
On Fri, 2008-10-24 at 16:16 -0700, Raymond Lillard wrote:
That's why you punch sequence numbers in the last 8 columns. :-)
... and some of the fancier card readers would even sort them for you, but remember to number by some integer >> 1 or you had to redo the whole remainder of the deck to insert a line.
On Fri, Oct 24, 2008, Phil Schaffner wrote:
On Fri, 2008-10-24 at 16:16 -0700, Raymond Lillard wrote:
That's why you punch sequence numbers in the last 8 columns. :-)
... and some of the fancier card readers would even sort them for you, but remember to number by some integer >> 1 or you had to redo the whole remainder of the deck to insert a line.
The Burroughs Medium Systems mainframes I worked on allowed one to store the program on disk, then compile with modifications in a card deck, using the sequence numbers to replace or insert lines from the cards. There were options to create a new disk file with the patches included, and to resequence the source on disk. Typically there were several card decks in a drawer which could be loaded to recreate the patched disk file by loading them in sequence, which was fine until the disk file was resequenced, when it was time to punch new cards from the disk file to replace the original deck and patches. Punch cards were far more reliable backup than mag tape, and in a pinch one could read the printing on the card to fix a badly damaged card (it was amazing how fast a card reader jam could turn the first card into an accordion fold).
COBOL had the sequence numbers in the first six columns while FORTRAN in the last eight.
I always laughed at the early quiz shows where they had a ``computer'' selecting the questions -- where the computer was really a card sorter that would select the picked question into a specific bin.
Bill
On Fri, 2008-10-24 at 18:09 -0700, Bill Campbell wrote:
The Burroughs Medium Systems mainframes I worked on allowed one to store the program on disk, then compile with modifications in a card deck, using the sequence numbers to replace or insert lines from the cards. There were options to create a new disk file with the patches included, and to resquence the source on disk. Typically there were several card desks in a drawer which could be loaded to recreate the patched disk file by loading them in sequence which was fine until the disk file was resequenced when it was time to punch new cards from the disk file to replace the original deck and patches. Punch cards were far more reliable backup than mag tape and in a pinch one could read the printing on the card to fix a badly damaged card (it was amazing how fast a card reader jam could turn the first card into an accordian fold).
Then came CANDE, TD8xx terminals, and editing on your head-per-track disk. Ah for the good old days, when men were men, and memory upgrades involved fork lifts.
Dave
On Sat, 2008-10-25 at 10:17 -0500, David G. Mackay wrote:
<snip>
Then came CANDE, TD8xx terminals, and editing on your head-per-track disk. Ah for the good old days, when men were men, and memory upgrades involved fork lifts.
I tried to stay out of this thread, I really did. But the "forklift" reference hooked me.
Circa 1971/2(?), we had an IBM S360/30 with 64K (that's right, "K", "M") bytes of "core" (back then, no simms, dimms, ...). Running IBM DOS, we had three partitons going, 1 bg, 2 fg. It was decided that an aftermarket upgrade would allow us to consolidate the two foreground functions into one and use two background partitions for batch production processing.
The aftermarket expansion was bought and took us up to a "whopping" 96KB of "core" memory. The expansion unit (best I can recall) was about 5.5' x 8' x 3', or 132 cubic feet. 8-O
Anyway, a forklift took it off the truck. And large hand pallet jack was used to roll it across the raised flooring.
It did the job too. It was several years before we upgraded to a S360/50 with 512K (IIRC).
Dave
<snip sig stuff>
On Sat, 2008-10-25 at 12:16 -0400, William L. Maltby wrote:
On Sat, 2008-10-25 at 12:14 -0400, William L. Maltby wrote:
<snip>
Circa 1971/2(?), we had an IBM S360/30 with 64K (that's right, "K", "M")
s/"M"/not "M"/
Yep. The first computer I programmed on was an IBM 1130 with 16K of core. You could power down for the weekend, and the memory contents would still be there when you powered up on Monday.
Dave
On Sat, Oct 25, 2008 at 12:16:23PM -0400, William L. Maltby wrote:
On Sat, 2008-10-25 at 12:14 -0400, William L. Maltby wrote:
<snip>
Circa 1971/2(?), we had an IBM S360/30 with 64K (that's right, "K", "M")
s/"M"/not "M"/
I wish I still had some of my 789 and 6789 cards. If only to use as bookmarks when I nod off in the middle of the afternoon drooling.
On Sat, Oct 25, 2008, William L. Maltby wrote:
On Sat, 2008-10-25 at 10:17 -0500, David G. Mackay wrote:
<snip>
Then came CANDE, TD8xx terminals, and editing on your head-per-track disk. Ah for the good old days, when men were men, and memory upgrades involved fork lifts.
I tried to stay out of this thread, I really did. But the "forklift" reference hooked me.
Circa 1971/2(?), we had an IBM S360/30 with 64K (that's right, "K", "M") bytes of "core" (back then, no simms, dimms, ...). Running IBM DOS, we had three partitons going, 1 bg, 2 fg. It was decided that an aftermarket upgrade would allow us to consolidate the two foreground functions into one and use two background partitions for batch production processing.
The aftermarket expansion was bought and took us up to a "whopping" 96KB of "core" memory. The expansion unit (best I can recall) was about 5.5' x 8' x 3', or 132 cubic feet. 8-O
Anyway, a forklift took it off the truck. And large hand pallet jack was used to roll it across the raised flooring.
It did the job too. It was several years before we upgraded to a S360/50 with 512K (IIRC).
And our Burroughs B-3500 would run circles around the 360/50. The Burroughs had a whopping 200KB of memory, ran an average of 20 jobs in the mix, and didn't require 40 JCL cards to compile and run a one line Hello World FORTRAN program.
Burroughs invented virtual memory in the early 60s in their large systems, allowing them to run large programs in small memory. When IBM invented thrashing and called it virtual memory, the minimum memory requirement to run it was 1MB, requiring major upgrades to support it. IBM never wrote a line of code that was not designed to sell more hardware.
Bringing this back to Linux, at that time IBM occupied the place of honor that Microsoft has now with an effective monopoly, a cumbersome and inefficient system requiring an army of support people to keep it running, and required constant patching.
Bill
On Sat, 2008-10-25 at 10:30 -0700, Bill Campbell wrote:
And our Burroughs B-3500 would run circles around the 360/50. The Burroughs had a whopping 200KB of memory, ran an average of 20 jobs in the mix, and didn't require 40 JCL cards to compile and run a one line Hello World FORTRAN program.
The good old Master Control Program at work.
Burroughs invented virtual memory in the early 60s in their large systems allowing them to run large programs in small memory. When IBM invented thrashing, called it virtual memory, the minimum memory requirements to run it was 1MB requiring major updgrades to support it. IBM never wrote a line of code that was not designed to sell more hardware.
Of course, there was the time that the large systems group put the segment-not-present handler in an overlayable segment. The good folks at the factory had machines with max memory, so it wasn't a problem for them. It was a nice hard hang for those that didn't have enough memory.
Bringing this back to Linux, at that time IBM occupied the place of honor that Microsoft has now with an effective monopoly, a cumbersome and inefficient system requiring an army of support people to keep it running, and required constant patching.
Yes, but at least IBM tested their equipment, and HAD sufficient support folks. I used to work for Burroughs, and that was a source of frustration for all concerned.
Dave
On Sat, Oct 25, 2008, David G. Mackay wrote:
On Sat, 2008-10-25 at 10:30 -0700, Bill Campbell wrote:
And our Burroughs B-3500 would run circles around the 360/50. The Burroughs had a whopping 200KB of memory, ran an average of 20 jobs in the mix, and didn't require 40 JCL cards to compile and run a one line Hello World FORTRAN program.
The good old Master Control Program at work.
Burroughs invented virtual memory in the early 60s in their large systems allowing them to run large programs in small memory. When IBM invented thrashing, called it virtual memory, the minimum memory requirements to run it was 1MB requiring major updgrades to support it. IBM never wrote a line of code that was not designed to sell more hardware.
Of course, there was the time that the large systems group put the segment-not-present handler in an overlayable segment. The good folks at the factory had machines with max memory, so it wasn't a problem for them. It was a nice hard hang for those that didn't have enough memory.
My first Burroughs experience was on the B-5500, and it had some ``interesting'' quirks. Using Burroughs extended ALGOL, one could do what they called array row writes to very efficiently write large chunks of memory with a single hardware command. The hitch was that if one tried to write more than 1024 48bit words, it would crash the entire system, with a side effect of losing the accounting information for all running programs, which could be useful when paying $750/hour for time sharing :-).
Bringing this back to Linux, at that time IBM occupied the place of honor that Microsoft has now with an effective monopoly, a cumbersome and inefficient system requiring an army of support people to keep it running, and required constant patching.
Yes, but at least IBM tested their equipment, and HAD sufficient support folks. I used to work for Burroughs, and that was a source of frustration for all concerned.
Are you retired Air Farce? A fair number of Burroughs field engineers had learned the Burroughs equipment in the AF (and could afford to work at BGH low pay because of their retirement pay).
One might say that I worked for Burroughs too as I debugged their Remote Job Entry (RJE) software for Medium systems, including patching MCP, because the company I worked for needed it to work. I talked Burroughs out of the source code for RJE and the current version of MCP so that I could fix things. After I sent them the fixes, I never had any problem getting anything I asked for.
FWIW, the entire source code listing for MCP fit in a single file drawer. Reading the comments in the code, it was obvious that a very small group of people worked on it which resulted in quite nice integration and consistency.
Can you imagine Microsoft making the source code for Windows available to a small customer for free, and with no NDA, so the customer could fix a problem that was critical to them? Even if they supplied the source, do you think anybody could figure it out?
One of the most important features of open source software is the availability of the source code so people can quickly fix bugs critical to them or add features they need. As an example, in January 2000, groff had a y2k problem with dates which I found printing a letter that needed to go out. It took me about 15 minutes to find the problem in the code, fix it, and send that patch back to the maintainers. Imagine how long it would take to get a similar problem fixed in M$-Word.
Bill
On Sat, 2008-10-25 at 12:10 -0700, Bill Campbell wrote:
My first Burroughs experience was on the B-5500, and it had some ``interesting'' quirks. Using Burroughs extended ALGOL, one could do what they called array row writes to very efficiently write large chunks of memory with a single hardware command. The hitch was that if one tried to write more than 1024 48bit words, it would crash the entire system, with a side effect of losing the accounting information for all running programs, which could be useful when paying $750/hour for time sharing :-).
I'm surprised that the bug lasted very long, or did it just go unreported? ;)
Are you retired Air Farce? A fair number of Burroughs field engineers had learned the Burroughs equipment in the AF (and could afford to work at BGH low pay because of their retirement pay).
No, I was just young and foolish. Then someone explained that Burroughs wanted to get their techs hired away by the customers. They'd most likely continue to support Burroughs equipment, but on someone else's nickel.
One might say that I worked for Burroughs too as I debugged their Remote Job Entry (RJE) software for Medium systems, including patching MCP, because the company I worked for needed it to work. I talked Burroughs out of the source code for RJE and the current version of MCP so that I could fix things. After I sent them the fixes, I never had any problem getting anything I asked for.
It's impressive that you managed to talk them out of the source, and that you fixed it.
FWIW, the entire source code listing for MCP fit in a single file drawer. Reading the comments in the code, it was obvious that a very small group of people worked on it which resulted in quite nice integration and consistency.
Legend had it that the medium systems MCP was mostly written by one guy who lived in a beach house in California with two women.
Can you imagine`Microsoft making the source code for Windows available to a small customer for free, and with no NDA so the customer could fix a problem that was critical to them? Even if they supplied the source, do you think anybody could figure it out?
Well, I did have a go at their Device Driver kit at one point. Convoluted is the first printable word that comes to mind.
One of the most important features of open source software is the availability of the source code so people can quickly fix bugs critical to them or add features they need. As an example, in January 2000, groff had a y2k problem with dates which I found printing a letter that needed to go out. It took me about 15 minutes to find the problem in the code, fix it, and send that patch back to the maintainers. Imagine how long it would take to get a similar problem fixed in M$-Word.
Yes. Trying to support a black box (It took YEARS before they released the source code to the B1xx systems to their support employees outside of the plant) made me a firm believer in open source.
Dave
On Sat, Oct 25, 2008, David G. Mackay wrote:
On Sat, 2008-10-25 at 12:10 -0700, Bill Campbell wrote:
My first Burroughs experience was on the B-5500, and it had some ``interesting'' quirks. Using Burroughs extended ALGOL, one could do what they called array row writes to very efficiently write large chunks of memory with a single hardware command. The hitch was that if one tried to write more than 1024 48bit words, it would crash the entire system, with a side effect of losing the accounting information for all running programs, which could be useful when paying $750/hour for time sharing :-).
I'm surprised that the bug lasted very long, or did it just go unreported? ;)
I'm not sure about that. It was only available on the extended ALGOL that was the system language (it had no assembly per se). They came out with what they called Compatible ALGOL that was more limited, and was all that was available for the average user, but it broke several of my programs so they allowed me to use the extended version.
The COMNET time sharing service in D.C. used the B-5500. It was formed by several ex G.E. time sharing people, and we were one of their first beta (and largest) customers, so I tended to get what I asked for. On the other hand if something went wrong, and they saw me on the system, I usually got the blame :-).
Are you retired Air Farce? A fair number of Burroughs field engineers had learned the Burroughs equipment in the AF (and could afford to work at BGH low pay because of their retirement pay).
No, I was just young and foolish. Then someone explained that Burroughs wanted to get their techs hired away by the customers. They'd most likely continue to support Burroughs equipment, but on someone else's nickel.
That sounds like Burroughs. Ray MacDonald, Burroughs Chairman, was quoted in an interview in Fortune magazine saying their goal was to keep their customers ``surly but not rebellious''.
One might say that I worked for Burroughs too as I debugged their Remote Job Entry (RJE) software for Medium systems, including patching MCP, because the company I worked for needed it to work. I talked Burroughs out of the source code for RJE and the current version of MCP so that I could fix things. After I sent them the fixes, I never had any problem getting anything I asked for.
It's impressive that you managed to talk them out of the source, and that you fixed it.
I think that was because I always had an excellent relationship with the support people, and made some good contacts at the annual CUBE meetings. It always helps to have low friends in high places.
FWIW, the entire source code listing for MCP fit in a single file drawer. Reading the comments in the code, it was obvious that a very small group of people worked on it which resulted in quite nice integration and consistency.
Legend had it that the medium systems MCP was mostly written by one guy who lived in a beach house in California with two women.
That would not surprise me. Medium Systems MCP was very well written with lots of comments (although some might be considered R-Rated). It definitely did not look like it was the product of a committee.
Can you imagine`Microsoft making the source code for Windows available to a small customer for free, and with no NDA so the customer could fix a problem that was critical to them? Even if they supplied the source, do you think anybody could figure it out?
Well, I did have a go at their Device Driver kit at one point. Convoluted is the first printable word that comes to mind.
I have always thought that a major problem with Microsoft software is that it is largely written by young, inexperienced people who had little or no understanding of networking, security, or multi-user systems. My brother is one of the few people I know who worked for Microsoft who had major experience on Real systems(TM) (DEC, Prime, etc.) before going to MS.
... Bill
On Sat, 2008-10-25 at 15:02 -0700, Bill Campbell wrote:
The COMNET time sharing service in D.C. used the B-5500. It was formed by several ex G.E. time sharing people, and we were one of their first beta (and largest) customers, so I tended to get what I asked for. On the other hand if something went wrong, and they saw me on the system, I usually got the blame :-).
You should have charged extra for helping to harden the system.
No, I was just young and foolish. Then someone explained that Burroughs wanted to get their techs hired away by the customers. They'd most likely continue to support Burroughs equipment, but on someone else's nickel.
That sounds like Burroughs. Ray MacDonald, Burroughs Chairman, was quoted in an interview in Fortune magazine saying their goal was to keep their customers ``surly but not rebellious''.
Too bad he didn't give his employees that much concern. There's another story that when Michael Blumenthal became chairman, he asked for a list of all the salesmen that were making six figures. He was told that there were none.
I think that was because I always had an excellent relationship with the support people, and made some good contacts at the annual CUBE meetings. It always helps to have low friends in high places.
Burroughs technical employees were almost always happy to let someone else buy them drinks.
I have always thought that a major problem with Microsoft software is that it is largely written by young, inexperienced people who had little or no understanding of networking, security, or multi-user systems. My brother is one of the few people I know who worked for Microsoft who had major experience on Real systems(TM) (DEC, Prime, etc.) before going to MS.
It'll be interesting to see what happens now that they got religion concerning the utility computing trend. What they've always had is marketing capability. Otherwise, why would anyone run MS Office as opposed to Open Office?
Dave
On Sat, 2008-10-25 at 10:30 -0700, Bill Campbell wrote:
<snip>
It did the job too. It was several years before we upgraded to a S360/50 with 512K (IIRC).
And our Burroughs B-3500 would run circles around the 360/50. The Burroughs had a whopping 200KB of memory, ran an average of 20 jobs in the mix, and didn't require 40 JCL cards to compile and run a one line Hello World FORTRAN program.
Burroughs invented virtual memory in the early 60s in their large systems allowing them to run large programs in small memory. When IBM invented thrashing, called it virtual memory, the minimum memory requirements to run it was 1MB requiring major updgrades to support it. IBM never wrote a line of code that was not designed to sell more hardware.
Bringing this back to Linux, at that time IBM occupied the place of honor that Microsoft has now with an effective monopoly, a cumbersome and inefficient system requiring an army of support people to keep it running, and required constant patching.
Yep. I was very fortunate to have worked in that environment so long. It gave me a very good living because I seemed to have a better than average ability to handle all that stuff. I was one of those that actually read the docs (IBM seemed to be very thorough about that) and could recall/reference many months later the answers to some problem.
Even back then when folks bad-mouthed them, I didn't care. I made good $$, the only criteria that mattered to me then.
I always laughed at the early quiz shows where they had a ``computer'' selecting the questions -- where the computer was really a card sorter that would select the picked question into a specific bin.
Bill
Knowing Hollywood, it was probably a prop, with a human behind it sorting the cards!
It is not surprising how much I forgot from almost 30 years ago.
I remember numbering on the back of cards with a pencil as a backup when you dropped the deck. And of course you numbered by tens just in case you had to insert something.
I always took a magic-marker and made a diagonal line across the top of the deck. Made the initial rough sort after a "deck reorg" (somebody dropped the deck) easier. (NCR Century 100, circa. 1968)
Jay Leafey wrote:
I remember numbering on the back of cards with a pencil as a backup when you dropped the deck. And of course you numbered by tens just in case you had to insert something.
I always took a magic-marker and made a diagonal line across the top of the deck. Made the initial rough sort after a "deck reorg" (somebody dropped the deck) easier. (NCR Century 100, circa. 1968)
I recall finishing up a service call one evening and overhearing one end of a phone conversation. (This was in '64 or '65). The guy was about to make his evening transmission of the day's activity to the home office. The conversation went something like this: "Yeah, I'm pretty busy tonight. I'm gonna send 'em all together." <pause> "Oh! No problem. The AR's have an upper left corner cut; payroll has a pink stripe" <pause> O.K., here we go.... (And pulls the "data" switch on the phone, followed immediately by insane laughter, confident that the guy on the other end had not a clue about how to break his freshly punched deck apart with a sorter or collator.)
On Fri, Oct 24, 2008 at 5:21 PM, Phil Schaffner P.R.Schaffner@ieee.org wrote:
On Fri, 2008-10-24 at 14:19 -0400, Ed Westphal wrote:
Forgive my senility, but I'm continually amazed how many of us ole fossils are still around, and running Linux! Not to use up too much bandwidth, but the switch from Fortran 2 to 2D, for disk, was a big event way back when. Then Fortran 4 came around! Be still my old heart!
<snip>
Seem to remember an "old farts" thread on this list a while back, so I guess "ole fossils" sounds a bit better. :-)
Phil: Smells better too. :-) I remember the line printers we had, for the IBM 7090's, http://en.wikipedia.org/wiki/IBM_7090 in the NOC of an Airline Reservation Center. I think they were about 30% the size of our home office. :-) Lanny
On Fri, Oct 24, 2008 at 5:07 PM, Lanny Marcus lmmailinglists@gmail.com wrote:
On Fri, Oct 24, 2008 at 5:21 PM, Phil Schaffner P.R.Schaffner@ieee.org wrote:
On Fri, 2008-10-24 at 14:19 -0400, Ed Westphal wrote:
Forgive my senility, but I'm continually amazed how many of us ole fossils are still around, and running Linux! Not to use up too much bandwidth, but the switch from Fortran 2 to 2D, for disk, was a big event way back when. Then Fortran 4 came around! Be still my old heart!
<snip>
Seem to remember an "old farts" thread on this list a while back, so I guess "ole fossils" sounds a bit better. :-)
Phil: Smells better too. :-) I remember the line printers we had, for the IBM 7090's, http://en.wikipedia.org/wiki/IBM_7090 in the NOC of an Airline Reservation Center. I think they were about 30% the size of our home office. :-) Lanny
That's it - I'm not speaking to either of you again. You're too old!
(So am I - how awkward! :-)
mhr
on 10-24-2008 11:19 AM Ed Westphal spake the following:
MHR wrote:
On Fri, Oct 24, 2008 at 9:31 AM, Bill Campbell centos@celestial.com wrote:
Programming to the lowest common denominator may not feel sexy, but it can prevent many headaches in the future. I spent quite a bit of time many years ago getting a large FORTRAN system working that had been written on a system that use 7 character variable names where standard FORTRAN only permitted 6 (it was amazing how many of the variable names differed only in the 7th character). While this would be relatively easy to deal with today, it was a bitch when all programs were on 80-column punch cards.
Okay, now you're officially old.
(Like me.)
mhr
Forgive my senility, but I'm continually amazed how many of us ole fossils are still around, and running Linux! Not to use up too much bandwidth, but the switch from Fortran 2 to 2D, for disk, was a big event way back when. Then Fortran 4 came around! Be still my old heart!
ENW
When I learned Fortran IV in 1980 my teacher said that Fortran and Cobol were the languages of the future!
On Fri, Oct 24, 2008, Scott Silva wrote: ...
When I learned Fortran IV in 1980 my teacher said that Fortran and Cobol were the languages of the future!
In a presentation at the 1985 Usenix conference, Rob Pike made a comment that he didn't know what the language for scientific programming of the future would be, but that it would be called FORTRAN.
COBOL on Burroughs Medium Systems was an extremely powerful language. I wrote some pretty large commercial systems with it. My main problems with COBOL came when I had to run on a system other than Burroughs where COBOL was not fully recursive and was missing features that I took for granted.
My first exposure to computers was in 1966 on a Bendix G-20 and their Mishewaka FORTRAN. This version of FORTRAN was written by engineers, and had features that were well ahead of IBM's FORTRAN:
+ Everything was done in floating point -- engineers don't grok integers.
+ ``DO'' loops would of course have floating point variables, and worked as an engineer or mathematician would expect.
+ ``DO'' loops tested at the top of the loop instead of at the end as they did on IBM FORTRAN. Thus if the starting value was greater than the terminating value nothing in the loop would be executed.
+ Free form input from cards (e.g. one could have ``PI=3.14159'' and it would do the reasonable thing).
+ Free form output.
Bill
Scott Silva wrote:
on 10-24-2008 11:19 AM Ed Westphal spake the following:
MHR wrote:
On Fri, Oct 24, 2008 at 9:31 AM, Bill Campbell centos@celestial.com wrote:
Programming to the lowest common denominator may not feel sexy, but it can prevent many headaches in the future. I spent quite a bit of time many years ago getting a large FORTRAN system working that had been written on a system that use 7 character variable names where standard FORTRAN only permitted 6 (it was amazing how many of the variable names differed only in the 7th character). While this would be relatively easy to deal with today, it was a bitch when all programs were on 80-column punch cards.
Okay, now you're officially old.
(Like me.)
mhr
Forgive my senility, but I'm continually amazed how many of us ole fossils are still around, and running Linux! Not to use up too much bandwidth, but the switch from Fortran 2 to 2D, for disk, was a big event way back when. Then Fortran 4 came around! Be still my old heart!
ENW
When I learned Fortran IV in 1980 my teacher said that Fortran and Cobol were the languages of the future!
I have been learning and using COBOL since the mid 80's. I use COBOL at the present time for Web Programming also. The COBOL we use runs on UNIX and Linux. I use it in addition to PHP/MySQL for Web Programming.
I have looked at Fortran programs but never had to learn the language. It is on a PDP 11 that we shut down in the late 90's.
On Fri, Oct 24, 2008 at 06:15:31PM -0500, Michael Peterson wrote:
I have been learning and using COBOL since the mid 80's. I use COBOL at the present time for Web Programming also. The COBOL we use runs on UNIX and Linux. I use it in addition to PHP/MySQL for Web Programming.
I have looked at Fortran programs but never had to learn the language. It is on a PDP 11 that we shutdown in the late 90's.
PDP-11... now there was a nice machine! That's where I first learned Assembly language--and I definitely was spoiled by that. Now when I look at assembler for, e.g. 80x86 machines I want to throw up. Nothing has been anything as nice to program in since with the possible exception of the 68000 family which had a lot of similarities.
On Fri, Oct 24, 2008 at 3:28 PM, Scott Silva ssilva@sgvwater.com wrote:
When I learned Fortran IV in 1980 my teacher said that Fortran and Cobol were the languages of the future!
Sheesh! When I learned Fortran IV in 1974, we had the WatFour and WatFive compilers, and were getting ready to upgrade to Fortran V. Algol 68 was the language of the future.
All that changed when I learned Pascal at UCSD in 1978, another "language of the future" that still is....
Still, we built a whole OS based on UCSD Pascal 2.0 (and then modified it extensively) in 1980, and that was fine until I moved into DYNIX in 1987 - loved it, and C, and stayed there.
That's why I love Linux. Sort of.
mhr
On 2008-10-17 11:30, Jussi Hirvi wrote:
Since when is there a limit on how long a directory listing CentOS can show (ls), or how large a directory can be removed (rm)? It is really annoying to say, for example
rm -rf /var/amavis/tmp
and get only "argument list too long" as feedback.
Is there a way to go round this problem?
I have CentOS 5.2.
I believe you gave a bad example! In the command
rm -rf /var/amavis/tmp
the argument list is not at all "very long". However, if you did:
rm -rf /var/amavis/tmp/*
then the argument list could be very long depending on the number of entries there are in the subdirectory.
You have to understand that globbing is done by the shell before starting the command. The result of the glob is what makes the "argument list too long"; it needs to fit in a buffer (about 128K bytes on a default CentOS install I believe).
If you want to remove the subfolders and files, and not the parent folder itself, on CentOS 5, try this:
cd /var/amavis/tmp
rm -rf *
Doing a "cd" makes the resulting of the globbing much shorter, maybe fixing your problem already.
If still too long, then try:
find . -mindepth 1 -maxdepth 1 -exec rm -rf {} +
You could even try things like:
cd /var/amavis/tmp
rm a*
rm *0
...etc...
Any glob pattern that results in fewer arguments for the command, so that the 128K buffer is not overflowed, is good.
On CentOS 4, the '+' variant of -exec does not exist, and you need to do:
find . -mindepth 1 -maxdepth 1 -exec rm -rf {} \;   # one rm command for each arg
or
find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -rf   # less resource intensive
Yes, you are right - my example was misleading.
Thanks for the very easy solution (cd into directory). Have to try it the next time.
- Jussi
Paul Bijnens (Paul.Bijnens@xplanation.com) wrote (17.10.2008 13:18):
I believe you gave a bad example! In the command
rm -rf /var/amavis/tmp
the argument list is not at all "very long". However, if you did:
rm -rf /var/amavis/tmp/*
then the argument list could be very long depending on the number of entries there are in the subdirectory.
-- Jussi Hirvi * Green Spot Topeliuksenkatu 15 C * 00250 Helsinki * Finland Tel. & fax +358 9 493 981 * Mobile +358 40 771 2098 (only sms) jussi.hirvi@greenspot.fi * http://www.greenspot.fi
on 10-17-2008 2:30 AM Jussi Hirvi spake the following:
Since when is there a limit on how long a directory listing CentOS can show (ls), or how large a directory can be removed (rm)? It is really annoying to say, for example
rm -rf /var/amavis/tmp
and get only "argument list too long" as feedback.
Is there a way to go round this problem?
I have CentOS 5.2.
It isn't a problem with the commands, it is a problem of how long a command line can be when piped to a command.
rm -rf /var/amavis/tmp is effectively the same as rm -rf /var/amavis/tmp/1 /var/amavis/tmp/2 /var/amavis/tmp/3 /var/amavis/tmp/4 /var/amavis/tmp/5 ... etc. The number of files and directories in that folder is the limiting factor.
And yes, Fedora would have the same limitation.
rm -rf /var/amavis/tmp
and get only "argument list too long" as feedback.
Is there a way to go round this problem?
I have CentOS 5.2.
It isn't a problem with the commands, it is a problem of how long a command line can be when piped to a command.
rm -rf /var/amavis/tmp is effectively the same as rm -rf /var/amavis/tmp/1 /var/amavis/tmp/2 /var/amavis/tmp/3 /var/amavis/tmp/4 /var/amavis/tmp/5 ... etc. The number of files and directories in that folder is the limiting factor.
I don't believe this is correct. The command "rm -rf /path/to/dir" doesn't expand on the shell the same way "rm -rf /path/to/dir/*" would.
Unless I'm misunderstanding your comment, "rm -rf /path/to/dir" will remove everything as intended without blowing out the argument list.
Dealing with file removal and getting 'argument list too long' is a FAQish question, and there is more than one way to get around the issue. Common workarounds include find piped to xargs rm, the above mentioned recursive directory nuke, one line perl scripts, etc.
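For what it's worth, one shape such a perl one-liner can take (illustrative only; it unlinks the plain files in the directory, silently skips subdirectories, and builds the file list in perl's own memory rather than on a command line):

perl -e 'unlink glob "/var/amavis/tmp/*"'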
-John
Jussi Hirvi wrote:
Since when is there a limit on how long a directory listing CentOS can show (ls), or how large a directory can be removed (rm)? It is really annoying to say, for example
rm -rf /var/amavis/tmp
and get only "argument list too long" as feedback.
I doubt this. "argument list too long" is a shell error, and in your command the shell doesn't see many arguments.
I guess you want to remove amavisd-new temp files and you did rm -rf /var/amavis/tmp/*
In this case, the shell would need to replace that with rm -rf /var/amavis/tmp/foo1 /var/amavis/tmp/foo2 ..., in which case it needs to store these arguments in memory. So it would need to allocate enough memory for all of these before passing them to the rm command, and a limitation is necessary to avoid consuming all your memory. This limitation exists on all Unix systems that I have seen.
Is there a way to go round this problem?
Since amavisd-new temp files have no spaces in them, you can do for f in /var/amavis/tmp/*; do rm -rf $f; done (Here, the shell does the loop, so it never has to pass the whole list to a single command).
Alternatively, you could remove the whole directory (rm -rf /var/amavis/tmp) and recreate it (don't forget to reset the owner and permissions).
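Spelled out, that might look like the following (a sketch; the owner, group and mode shown here are assumptions, so check what your amavisd-new installation actually uses before copying them):

rm -rf /var/amavis/tmp
mkdir /var/amavis/tmp
chown amavis:amavis /var/amavis/tmp   # illustrative owner/group
chmod 750 /var/amavis/tmp             # illustrative mode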
I have CentOS 5.2.
On Oct 18, 2008, at 8:13 PM, mouss wrote:
Jussi Hirvi wrote:
Since when is there a limit on how long a directory listing CentOS can show (ls), or how large a directory can be removed (rm)? It is really annoying to say, for example
rm -rf /var/amavis/tmp
and get only "argument list too long" as feedback.
I doubt this. "argument list too long" is a shell error, and in your command the shell doesn't see many arguments.
I guess you want to remove amavisd-new temp files and you did rm -rf /var/amavis/tmp/*
In this case, the shell would need to replace that with rm -rf /var/amavis/tmp/foo1 /var/amavis/tmp/foo2 ..., in which case it needs to store these arguments in memory. So it would need to allocate enough memory for all of these before passing them to the rm command, and a limitation is necessary to avoid consuming all your memory. This limitation exists on all Unix systems that I have seen.
Is there a way to go round this problem?
Since amavisd-new temp files have no spaces in them, you can do for f in /var/amavis/tmp/*; do rm -rf $f; done (Here, the shell does the loop, so it never has to pass the whole list to a single command).
Alternatively, you could remove the whole directory (rm -rf /var/amavis/tmp) and recreate it (don't forget to reset the owner and permissions).
I have CentOS 5.2.
Possible to learn something new every day. I would have expected the for loop to fail too, thinking it would attempt to expand the wildcard before starting its iteration.
and get only "argument list too long" as feedback.
Is there a way to go round this problem?
I have CentOS 5.2.
I'm not going to repeat some of the good advice given to you by others as to how to avoid this error, but will instead tell you this is related to the ARG_MAX limit. The standard limit for Linux kernels up to 2.6.22.xxxx is 131072 chars. This can be confirmed by typing:

getconf ARG_MAX

Until CentOS uses the 2.6.23 kernel (or later), in which the length of arguments is constrained only by system resources, you'll need to use scripting techniques which are more parsimonious.
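If you are unsure which side of that kernel change a given machine is on, a quick check is (a sketch; the exact numbers vary by kernel and, on 2.6.23 and later, by the stack rlimit):

uname -r          # 2.6.18-* on a stock CentOS 5 box
getconf ARG_MAX   # 131072 here; typically reported much larger once the 2.6.23+ behaviour is in place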