On Sep 15, 2011, at 9:16 AM, sebastiano at datafaber.net wrote:

> On Thu, 15 Sep 2011 08:42:42 -0700, Craig White wrote:
>> might be hard to run package-cleanup without having base enabled but
>> I would certainly recommend that you run 'rpm -Va [--nofiles
>> --nodigest]' to identify the broken dependencies - apparently
>> something that the base repository really believes should be there no
>> matter what.
>
> I get no output at all from this command, perhaps I have misunderstood
> the flags?
----
No output means that you haven't changed any of the files, I suppose.
Seems odd but possible.
----
>
> [root at picard ~]# rpm -Va --nofiles --nodigest
> [root at picard ~]#
>
> In the meantime I have found an interesting data point:
>
> [root at picard ~]# yum clean all
> Loaded plugins: fastestmirror
> Cleaning up Everything
> Cleaning up list of fastest mirrors
> [root at picard ~]# yum update
> Loaded plugins: fastestmirror
> Determining fastest mirrors
>  * base: mirror.ash.fastserv.com
>  * extras: mirror.net.cen.ct.gov
>  * updates: mirror.7x24web.net
> base                                  | 1.1 kB     00:00
> base/primary                          | 961 kB     00:00
> Segmentation fault
> [root at picard ~]# ll /var/cache/yum/base
> total 1004K
> -rw-r--r-- 1 root root    0 Sep 15 19:12 cachecookie
> -rw-r--r-- 1 root root 1017 Sep 15 19:11 mirrorlist.txt
> drwxr-xr-x 2 root root 4.0K Jul 10 12:19 packages/
> -rw-r--r-- 1 root root 961K Sep  5 13:52 primary.xml.gz
> -rw-r--r-- 1 root root  20K Sep 15 19:12 primary.xml.gz.sqlite
> -rw-r--r-- 1 root root 1.2K Sep  5 13:52 repomd.xml
>
> The file /var/cache/yum/base/primary.xml.gz.sqlite is only 20KB,
> whereas in the "normal" case I'd expect it to be 6.5MB. Somehow, yum is
> failing to regenerate this file for the base repository, and is crashing
> with a segmentation fault when trying to read it. I don't know, however,
> how to make it generate a correct sqlite file.
----
mv /var/cache/yum/base/primary.xml.gz.sqlite /tmp

and try again, I suppose - yes, that file is supposed to be much larger.
I suspect that yum will create a new copy of that file if it fails to
find it.

Craig
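
[For reference, a minimal sketch of that recovery sequence, assuming the
stale sqlite cache is the only problem. Run as root; the path is taken
from the listing above, and 'yum clean metadata' is one way to force the
repository metadata to be re-downloaded and the sqlite file regenerated:

  # move the suspect (truncated) cache aside so yum cannot reuse it
  mv /var/cache/yum/base/primary.xml.gz.sqlite /tmp
  # drop the cached repo metadata and fetch it fresh on the next run
  yum clean metadata
  yum update
]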