On Jan 8, 2008 7:00 PM, David G. Miller dave@davenjudy.org wrote:
Since upgrading my server from CentOS 4.5 to 4.6 I've been getting the following error from amanda backups:
mutilate /home lev 1 FAILED [compress got signal 11, /bin/tar got signal 13]
I was away from the house for most of the end of December, and a couple of other issues came up that could have been related but apparently weren't (why is it that several things all go wrong at once?). After getting those other issues resolved I was still getting the above error. I tried running the following command as root:
/bin/tar -X /etc/amanda/exclude-list/exclude.txt -cvf - /home | gzip -v -c > /share/dave/Home.tar.gz
Initially tar would die while attempting to back up one of the IMAP folders that had quite a few fairly large e-mails (some pictures my brother had sent). I removed the larger e-mails and tar proceeded past the IMAP folder that had been the problem, only to die later:
...
/home/judy/Judy's Stuff/School/
/home/judy/Judy's Stuff/School/2007 Spring/
/home/judy/Judy's Stuff/School/2007 Spring/Mynametemplate.ppt
/home/judy/Judy's Stuff/School/2007 Spring/MyNameSamples.doc
/home/judy/Judy's Stuff/School/2007 Spring/Myname105.ppt
/home/judy/Judy's Stuff/School/2007 Spring/TYP Types.doc
/home/judy/Judy's Stuff/School/2007 Spring/Photoshop_CS2.exe
Segmentation fault
The copy of PhotoShop is the trial version that my wife had downloaded about a year ago for a class she was taking. This directory has been getting backed up at least every thirty days since then given my tape rotation. If I remove the PhotoShop_CS2.exe file, the backup completes normally.
So, is this a tar bug (doesn't like big files now) or is there some other issue like available shared memory that's causing the problem?
Cheers, Dave
-- Politics, n. Strife of interests masquerading as a contest of principles. -- Ambrose Bierce
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Dave,
First you have to figure out whether the problem occurs in tar or in gzip. Do you still get the crash if you tar first and gzip afterwards, i.e. make an uncompressed tar archive? If not, is the pipe somehow the problem: do you get the same crash with tar's -z option rather than piping to gzip? Finally, you need to get a stack trace, for which you must raise the core size limit above its default of 0 with "ulimit -c unlimited" before running the command. You can then point gdb at the core file ("gdb <path to exec file> <path to core file>") and type "where" to get a stack trace that you can post in a relevant forum for further examination by the developers.
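As a sketch, the isolation steps above look like this when run against a small scratch directory (the /tmp/tar-test paths are just examples; substitute /home and the amanda exclude list for the real case):

```shell
# Scratch directory standing in for /home (example path, not Dave's setup).
mkdir -p /tmp/tar-test/dir && echo "sample" > /tmp/tar-test/dir/file.txt

# 1. Uncompressed archive: does tar alone segfault?
tar -cf /tmp/tar-test/plain.tar -C /tmp/tar-test dir

# 2. tar's built-in gzip (-z): does the crash need the external pipe at all?
tar -czf /tmp/tar-test/builtin.tar.gz -C /tmp/tar-test dir

# 3. The original pipe form, with core dumps enabled first
#    (the soft core size limit defaults to 0, so no core file is written otherwise).
ulimit -c unlimited
tar -cf - -C /tmp/tar-test dir | gzip -c > /tmp/tar-test/piped.tar.gz

# 4. If a core file appears after a crash, load it into gdb and type "where"
#    at the (gdb) prompt, e.g.:  gdb /bin/tar <path to core file>
ls -l /tmp/tar-test
```

Whichever of steps 1-3 reproduces the signal 11 tells you which process to feed to gdb in step 4.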
- Nicolas