Les Mikesell <lesmikesell at gmail.com> wrote:
> Caching network content without having to make a special
> effort for every different source is a problem that was
> solved eons ago by squid and similar caching proxies.
> My other complaint about yum is that it goes out of its
> way to prevent this from being useful in the standard way.
> No machine behind a caching proxy should have to pull a
> new copy of an unchanged file.

YUM _uses_ HTTP.  And yes, any [good] HTTP proxy _will_ check
for changes to the files it has retrieved.

I think you're failing to realize that the problem is _not_ the
tool, but the social/load aspects of doing what you want.  I've
posted my concept for a little "hack," but it's far from "ideal."

"Ideal" is _only_ achieved by maintaining your own, internal
repository.  It is what Microsoft does.  It is what Red Hat does.
It is what Sun does.  Etc...  You mirror their updates and deploy
internally.  How you manage that varies, but it's _not_ a tool
issue.  It's a 100% social/load issue.

-- 
Bryan J. Smith                | Sent from Yahoo Mail
mailto:b.j.smith at ieee.org  | (please excuse any
http://thebs413.blogspot.com/ | missing headers)
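
P.S.  For what it's worth, on the squid side this is just ordinary
refresh tuning.  A minimal sketch (the regexes and lifetime values
here are illustrative assumptions, not recommendations -- tune them
for your own load):

```
# squid.conf fragment -- a published .rpm file never changes content
# under the same filename, so it can be cached aggressively.
# refresh_pattern [-i] regex min-minutes percent max-minutes
refresh_pattern -i \.rpm$       1440 100% 43200

# Repository metadata DOES change, so force revalidation on it.
refresh_pattern -i repomd\.xml$    0   0%     0
```

That still doesn't solve the load problem for the upstream mirrors,
which is why an internal repository remains the "ideal."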