Hi all, I was wondering if it would be possible to configure squid to cooperate with yum so that it recognizes mirrors and can cache rpm packages based on package name (and maybe some other parameter, to be sure it is the right package). Any ideas?
Cheers, Lorenzo
I know up2date can be configured to save all RPMs it installs/upgrades. Can yum be made to do the same? If so, have 1 server connect to the mirrors, have it save all the rpms into a directory, run createrepo every so often to update the directory, and share out that directory with ftp or http so that your other servers can download the packages.
Or do like I do: have 1 server that you use for imaging all your servers and mirror the CentOS repos on it, then run Apache and have your local servers connect to that local repo. You don't even have to mirror the entire repo if you don't want to.
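That flow is small enough to script (a sketch; the directory and URL are placeholders, and it assumes yum's keepcache is on so the downloaded RPMs are actually kept):

    # on the one server that talks to the external mirrors
    mkdir -p /var/www/html/localrepo                  # placeholder path
    cp /var/cache/yum/*/packages/*.rpm /var/www/html/localrepo/
    createrepo /var/www/html/localrepo                # (re)build repo metadata
    # cron the cp/createrepo lines, then point the other boxes'
    # .repo files at http://yourserver/localrepo/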
-matt
On 5/24/07, Lorenzo lorenzo@gmk.it wrote:
Hi all, I was wondering if it would be possible to configure squid to cooperate with yum so that it recognizes mirrors and can cache rpm packages based on package name (and maybe some other parameter, to be sure it is the right package). Any ideas?
Cheers, Lorenzo
Matt Shields ha scritto:
I know up2date can be configured to save all RPMs it installs/upgrades. Can yum be made to do the same? If so, have 1 server connect to the mirrors, have it save all the rpms into a directory, run createrepo every so often to update the directory, and share out that directory with ftp or http so that your other servers can download the packages.
Or do like I do: have 1 server that you use for imaging all your servers and mirror the CentOS repos on it, then run Apache and have your local servers connect to that local repo. You don't even have to mirror the entire repo if you don't want to.
-matt
Since I'm at a testing stage - installing, upgrading, and re-installing several times with slightly different setups and so on - I think the easiest way would be to configure just one proxy line in yum.conf on each new installation, rather than playing each time with the .repo files in /etc/yum.repos.d or moving RPMs around between systems... after all, what I need is a proxy-cache which knows how to handle RPMs: am I wrong?
Lorenzo
If you are imaging servers over and over, wouldn't it be easier to maintain a local repo and set up pxeboot with kickstart?
On 5/24/07, Lorenzo lorenzo@gmk.it wrote:
Since I'm at a testing stage - installing, upgrading, and re-installing several times with slightly different setups and so on - I think the easiest way would be to configure just one proxy line in yum.conf on each new installation, rather than playing each time with the .repo files in /etc/yum.repos.d or moving RPMs around between systems... after all, what I need is a proxy-cache which knows how to handle RPMs: am I wrong?
Lorenzo
If you are imaging servers over and over, wouldn't it be easier to maintain a local repo and set up pxeboot with kickstart?
Have you looked at Cobbler (http://cobbler.et.redhat.com/) and mrepo (http://dag.wieers.com/home-made/mrepo/)?
Barry Brimer ha scritto:
If you are imaging servers over and over, wouldn't it be easier to maintain a local repo and set up pxeboot with kickstart?
Have you looked at Cobbler (http://cobbler.et.redhat.com/) and mrepo (http://dag.wieers.com/home-made/mrepo/)?
I've set up mrepo: as of now I have 1.4G of updates; usually on an average server my whole /var/cache/yum is 33M, and on the workstation (which is the biggest install I have) it is 69M. So mrepo and PXE boot are really nice and fun, but they don't solve my need to save bandwidth at all.
Bye, Lorenzo
Quoting Lorenzo lorenzo@gmk.it:
I've set up mrepo: as of now I have 1.4G of updates; usually on an average server my whole /var/cache/yum is 33M, and on the workstation (which is the biggest install I have) it is 69M. So mrepo and PXE boot are really nice and fun, but they don't solve my need to save bandwidth at all.
Have you set all of your systems to use your mrepo server for their updates?
Barry Brimer ha scritto:
Have you set all of your systems to use your mrepo server for their updates?
Not yet, and this is the second part of the problem: I don't like playing too much with config files, partly because a new version could change them from time to time; it's boring to change all of them manually (and dangerous, because I'm very distracted ;) ), and I don't think it is worth automating (at least with my programming skills, it would take too much time to do).
Lorenzo wrote:
So mrepo and PXE boot are really nice and fun, but they don't solve my need to save bandwidth at all.
Why don't you create your own repo mirror? That's what I used to do, works pretty well.
Just pick a file server with enough disk space, mirror a whole repo once (*), then point all your systems to the internal mirror. The mirror polls the external repos once a day or so and only transfers files when something new comes up. IIRC, lftp can do that out of the box.
(*) - the base repo does not have to be mirrored (downloaded); you can probably reconstruct it from the install CDs/DVD. You only have to download the updates repo, and even that only once.
The internal repo mirror is a classic solution that works fairly well in many cases.
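The lftp approach is roughly this (a sketch; the mirror host, repo path, and local directory are all placeholders - check them against a real mirror first):

    # run daily from cron; --only-newer skips files already fetched
    lftp -e 'mirror --only-newer --delete --verbose \
        /centos/4/updates/i386/ /var/www/html/centos/updates/; quit' \
        http://mirror.centos.org/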
Florin Andrei ha scritto:
Why don't you create your own repo mirror? That's what I used to do, works pretty well.
See my other reply: again, with few installations (around 10) this doesn't lead to any bandwidth saving.
Lorenzo wrote:
Since I'm at a testing stage - installing, upgrading, and re-installing several times with slightly different setups and so on - I think the easiest way would be to configure just one proxy line in yum.conf on each new installation, rather than playing each time with the .repo files in /etc/yum.repos.d or moving RPMs around between systems... after all, what I need is a proxy-cache which knows how to handle RPMs: am I wrong?
A proxy doesn't need to know anything about RPMs - it's like any other http request through a proxy. It just needs to be configured to cache large files. And you don't even need to configure anything on the clients - you can pass the proxy in the environment, even on the command line:

    http_proxy=http://my_proxy.domain.com:port yum update

What you need is a yum that always uses the same URL to request the same file. The default is to pick something randomly from a list of mirrors.
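On the squid side, that can be as little as raising the object-size cap (a sketch; the sizes and cache path are arbitrary examples, not recommendations):

    # /etc/squid/squid.conf - the cache-relevant lines only
    cache_dir ufs /var/spool/squid 10000 16 256   # ~10 GB on disk
    maximum_object_size 512 MB                    # don't refuse big RPMs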
I know up2date can be configured to save all RPMs it installs/upgrades. Can yum be made to do the same? If so, have 1 server connect to the mirrors, have it save all the rpms into a directory, run createrepo every so often to update the directory, and share out that directory with ftp or http so that your other servers can download the packages.
You could do something like this:
http://servers.linux.com/article.pl?sid=04/07/22/1718242&tid=42
I believe you will need to configure keepcache=1 in yum.conf.
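That is a one-line change in the [main] section (the cachedir shown is the default):

    # /etc/yum.conf
    [main]
    cachedir=/var/cache/yum
    keepcache=1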
You could use a post-install script to make the necessary modifications to the files in /etc/yum.repos.d so that all of the servers point to a centralized repo server for yum updates.
You just need to be careful when setting up your local repository following this method, because if you don't do an "everything" install on your repo box you could run into problems installing packages that the repo doesn't have.
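The post-install tweak could be as small as a sed call (a sketch; the internal host name is a placeholder, and it assumes the stock repo file with its commented-out baseurl line):

    # point CentOS-Base.repo at the central server
    sed -i -e 's|^mirrorlist=|#mirrorlist=|' \
           -e 's|^#baseurl=http://mirror.centos.org|baseurl=http://repo.example.com|' \
           /etc/yum.repos.d/CentOS-Base.repo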
Lorenzo wrote:
Is there a way to tell squid that even if it comes from a different mirror, it is always the same file?
No, but squid has a hook for a redirector. If you did some work you might be able to force the request for the mirror lists to redirect to your own server which would return a list with only one repository. Of course you have to repeat that work for every distro/version/repository.
This used to 'just work' in Centos3 since the repos all had the same name and used round robin dns to distribute the load and maybe it does again in Centos5 - I thought I'd seen something about some work being done. But with Centos4 and the fedoras the mirrorlist would usually give you something different every time and fill up the cache with multiple copies.
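A redirector is just a program that reads request URLs on stdin and writes a replacement URL (or a blank line for "no change") on stdout. A rough sketch of the idea - the mirrorlist host, script path, and replacement URL are assumptions you'd adapt:

    #!/bin/sh
    # named in squid.conf with: redirect_program /usr/local/bin/yum-redirect.sh
    while read url rest; do
        case "$url" in
            http://mirrorlist.centos.org/*)
                # answer mirror-list queries with our own one-entry list
                echo "http://myserver.domain.com/mirrorlist.txt"
                ;;
            *)
                echo ""    # blank line = leave the URL alone
                ;;
        esac
    done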
Les Mikesell wrote:
This used to 'just work' in Centos3 since the repos all had the same name and used round robin dns to distribute the load and maybe it does again in Centos5 - I thought I'd seen something about some work being done. But with Centos4 and the fedoras the mirrorlist would usually give you something different every time and fill up the cache with multiple copies.
Yup, that's what happens with Fedora. Too many repos, the cache hit rate is pretty low.
Florin Andrei ha scritto:
Yup, that's what happens with Fedora. Too many repos, the cache hit rate is pretty low.
So again, it would be nice to "convince" someone (maybe someone already working with Fedora) to write such a tool (suggested name: yum-cacher; function: doing for Fedora/CentOS/any other RPM-based distro the same thing that apt-cacher does for Debian).
Cheers
Lorenzo
Lorenzo wrote:
So again, it would be nice to "convince" someone (maybe someone already working with Fedora) to write such a tool (suggested name: yum-cacher; function: doing for Fedora/CentOS/any other RPM-based distro the same thing that apt-cacher does for Debian).
I'd rather not have any tool-specific, distro-specific, or version-specific thing to set up. HTTP caching has been well understood for ages and works with everything that doesn't go out of its way to break it. There has to be some way to make yum work with it again without breaking it.
On Fri, May 25, 2007 at 08:41:26AM -0500, Les Mikesell enlightened us:
I'd rather not have any tool-specific, distro-specific, or version-specific thing to set up. HTTP caching has been well understood for ages and works with everything that doesn't go out of its way to break it. There has to be some way to make yum work with it again without breaking it.
There is, use a baseurl in your config file rather than the mirrorlist.
Matt
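In .repo terms that means commenting out the mirrorlist line and pinning a URL (a sketch; the mirror host is a placeholder for whichever mirror or internal server you pick):

    [base]
    name=CentOS-$releasever - Base
    #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
    baseurl=http://mirror.example.com/centos/$releasever/os/$basearch/
    gpgcheck=1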
Matt Hyclak wrote:
There is, use a baseurl in your config file rather than the mirrorlist.
But that's a per-box per-repo specific change, with per-distro, per-repo variations for the end users to figure out for themselves. And then it won't fail over at all if the url you pick goes away. Aren't computers supposed to work for you instead of the other way around?
On Fri, May 25, 2007 at 09:51:36AM -0500, Les Mikesell enlightened us:
But that's a per-box per-repo specific change, with per-distro, per-repo variations for the end users to figure out for themselves. And then it won't fail over at all if the url you pick goes away. Aren't computers supposed to work for you instead of the other way around?
You can't please everyone. I'd argue that the majority of people are not behind a proxy of any sort. I'm assuming you are using a transparent proxy such that you don't have to configure proxy settings on every box - so I won't argue that point.
Regardless, tools like puppet, or simply providing .repo files for folks, make it fairly painless to make those changes for end-users.
Yes, computers should work for you, however you can't expect the software to read your mind and do what you think it should do. Some amount of configuration is necessary on *any* computer - why is this different?
Matt
Matt Hyclak wrote:
You can't please everyone. I'd argue that the majority of people are not behind a proxy of any sort.
And I'd argue that the majority of Centos installations have more than one copy in locations that could easily use a common proxy. I'm not sure how to prove that, but it makes sense for an 'enterprise' OS.
I'm assuming you are using a transparent proxy such that you don't have to configure proxy settings on every box - so I won't argue that point.
No, I just let the shell export the setting when it is useful:

    http_proxy=http://proxy.domain.com:3128 yum update

I can update one box, test things, then ssh that command in a loop to any number of boxes in locations where the specified proxy will help. With Centos3 it works great and the subsequent runs are almost instant.
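That loop is nothing fancy (host names and the proxy URL are placeholders):

    for h in web1 web2 db1; do
        ssh root@$h 'http_proxy=http://proxy.domain.com:3128 yum -y update'
    done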
Regardless, tools like puppet, or simply providing .repo files for folks makes it fairly painless to make those changes for end-users.
Those look like extra work to me. And you have no fail-over if you force a single repository url. A better scheme would prefer one based on location but still retry elsewhere if that fails.
Yes, computers should work for you, however you can't expect the software to read your mind and do what you think it should do.
Reading 'my' mind wouldn't be all that useful because I don't know the right answer either - and if I knew something today it would probably be wrong tomorrow. What we need is a way to compute the best URL that is repeatable when requested by the same proxy unless that one is no longer available.
Some amount of configuration is necessary on *any* computer - why is this different?
This is different because we usually see progress going in the direction of more and more automatic configuration with better designs. In this case Centos3 had it right and has since regressed in design. I realize that there were reasons for the change, but there still has to be a better way.
On Fri, May 25, 2007 at 11:01:28AM -0500, Les Mikesell enlightened us:
And I'd argue that the majority of Centos installations have more than one copy in locations that could easily use a common proxy. I'm not sure how to prove that, but it makes sense for an 'enterprise' OS.
I don't disagree, and if those folks have enough machines, I bet they're setting up local mirrors.
No, I just let the shell export the setting when it is useful:

    http_proxy=http://proxy.domain.com:3128 yum update

I can update one box, test things, then ssh that command in a loop to any number of boxes in locations where the specified proxy will help. With Centos3 it works great and the subsequent runs are almost instant.
And looping an scp command is harder how?
Those look like extra work to me. And you have no fail-over if you force a single repository url. A better scheme would prefer one based on location but still retry elsewhere if that fails.
baseurl allows for more than one URL to be defined, so add in two or three mirrors. Or read the yum mailing list, where the priority setting for combining baseurl and mirrorlist was discussed just today.
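A sketch of the multi-URL form (both mirrors are placeholders; yum tries them in order):

    [updates]
    name=CentOS-$releasever - Updates
    baseurl=http://mirror1.example.com/centos/$releasever/updates/$basearch/
            http://mirror2.example.com/centos/$releasever/updates/$basearch/
    gpgcheck=1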
Reading 'my' mind wouldn't be all that useful because I don't know the right answer either - and if I knew something today it would probably be wrong tomorrow. What we need is a way to compute the best URL that is repeatable when requested by the same proxy unless that one is no longer available.
This is different because we usually see progress going in the direction of more and more automatic configuration with better designs. In this case Centos3 had it right and has since regressed in design. I realize that there were reasons for the change, but there still has to be a better way.
It seems your definition of better and the developer's definition differ. They are making the configuration more automatic for most of the people. For those that need to install/update multiple machines, you have several options. If you have that many machines, you're doing some sort of management anyway so it really isn't a stretch to ask you to change your .repo file along with your mail configuration, web server configuration, etc.
Matt
Matt Hyclak wrote:
It seems your definition of better and the developer's definition differ. They are making the configuration more automatic for most of the people. For those that need to install/update multiple machines, you have several options. If you have that many machines, you're doing some sort of management anyway so it really isn't a stretch to ask you to change your .repo file along with your mail configuration, web server configuration, etc.
It's just one of those things where the best thing you can say about it is that it gives you job security.
On 5/25/07, Les Mikesell lesmikesell@gmail.com wrote:
But that's a per-box per-repo specific change, with per-distro, per-repo variations for the end users to figure out for themselves. And then it won't fail over at all if the url you pick goes away. Aren't computers supposed to work for you instead of the other way around?
Sigh!!!!! Why do people insist on doing things the hard way, over and over?

As explained in my previous post, I have a local repo and I use pxeboot with a kickstart file. There is one kickstart per distro, per version. The kickstart files are identical except for the URL to the distro/arch-specific repo and minor diffs for arch-specific stuff. In the %post section of kickstart I have a ton of stuff automated, and one of the things is modifying the location of my repo in the /etc/yum.repos.d/CentOS-Base.repo file.

If people have a ton of RPM-based servers (RHEL, CentOS, WBEL, Fedora, etc.) that they manage and regularly image/reimage, it makes sense to check out how much kickstart and pxebooting can benefit you. We have over 200+ servers, 50+ VMs and growing, and it makes life so much simpler. I can image up a server in about 10 minutes and it will be totally customized the way I want it, with all the latest updates. We've also added to our repo the ability to pxeboot-install VMware ESX and Windows 2k3.

I haven't checked it out yet, but I've heard Cobbler takes this one step further and makes it very, very simple. Not sure if Cobbler will handle the VMware and Windows though. A little bit of time up front will save you a ton of time later. Here's an example of a totally custom kickstart file below; search the net for pxeboot (or pxeinstall) and kickstart, it will save you so much time.
-matt
[root@install kickstart]# cat centos-4.4-i386.ks.cfg
install
text
reboot
url --url http://install.mydomain.com/install/centos/4.4/os/i386/
lang en_US.UTF-8
langsupport --default=en_US.UTF-8 en_US.UTF-8
keyboard us
#xconfig --card "ATI Radeon 7000" --videoram 8192 --hsync 31.5-37.9 --vsync 50-70 --resolution 800x600 --depth 16 --defaultdesktop gnome
network --device eth0 --bootproto dhcp
rootpw --iscrypted mypw
firewall --disabled
selinux --disabled
authconfig --enableshadow --enablemd5
timezone America/New_York
bootloader --location=mbr
autopart
zerombr yes                      #windows mbr removal
clearpart --all --drives=sda     #s/linux/all (windows)
part /boot --fstype ext3 --size=200 --asprimary
part swap --size=1000 --grow --maxsize=2000 --ondisk=sda
part pv.01 --size=1024 --grow --ondisk=sda
volgroup vg.01 pv.01
logvol / --fstype ext3 --size=1024 --vgname=vg.01 --name=rootvol --grow
logvol /opt --fstype ext3 --size=2048 --vgname=vg.01 --name=junkvol
#part / --fstype ext3 --size=1024 --grow --ondisk=sda

%packages
@ compat-arch-development
@ editors
@ emacs
@ mysql
@ admin-tools
@ development-tools
@ text-internet
@ compat-arch-support
lvm2
kernel-smp-devel
kernel-smp
e2fsprogs
screen
sysstat
net-snmp
%post
CMDLINE="`cat /proc/cmdline`"
HOSTNAME=`expr "$CMDLINE" : '.*hostname=\([^ ]*\).*'`    #grab hostname from pxeboot
export CMDLINE HOSTNAME
set > /root/ks.env
if [ "$HOSTNAME" ]
then
    hostname $HOSTNAME
fi
# use these next few lines if you have a RHN account
#rpm --import /usr/share/rhn/RPM-GPG-KEY
#rhnreg_ks --force --profilename ${HOSTNAME:-"tmp-host-name.mydomain.com"} --username myrhnuser --password myrhnpasswd
#up2date --nox --force --update -v
# use these lines if you're using CentOS
wget -O /etc/yum.repos.d/CentOS-Base.repo http://install.mydomain.com/install/kickstart/CentOS-Base.repo    #get my custom CentOS-Base.repo file
rpm --import http://install.mydomain.com/install/CentOS/RPM-GPG-KEY-CentOS-4
yum -y update    #why not update to the latest packages right after install
# look I'm setting custom resolv.conf info
cat > /etc/resolv.conf <<EOF
domain mydomain.com
nameserver x.x.x.x
nameserver x.x.x.x
EOF
chkconfig ntpd on
chkconfig cups off
chkconfig cups-config-daemon off
chkconfig sendmail off
# set machine to boot at init 3
perl -i -pe 's/id:5:initdefault:/id:3:initdefault:/g' /etc/inittab
ed /etc/sysconfig/i18n <<EOF
%s/en_US.UTF-8/en_US/g
w
EOF
ed /etc/mail/sendmail.mc <<DONE
%s/.*MASQUERADE_AS.*/MASQUERADE_AS(`mydomain.com')dnl/
w
q
DONE
make -C /etc/mail
umount /opt
lvremove -f /dev/vg.01/junkvol
ed /etc/fstab <<EOF
%s/\/dev\/vg.01\/junkvol/#\/dev\/vg.01\/junkvol/g
w
EOF
makewhatis
cat >> /etc/bashrc <<EOF
alias dir='ls -lasF'
EOF

cat >> /etc/profile <<EOF
EDITOR=vi
SVN_EDITOR=vi
export EDITOR SVN_EDITOR
EOF
# install a custom snmpd.conf
wget -O /etc/snmp/snmpd.conf http://install.mydomain.com/install/snmp/snmpd.conf
chkconfig snmpd on
# create a directory for software installs
mkdir /opt/install
Matt Shields wrote:
Sigh!!!!! Why do people insist on doing things the hard way, over and over? As explained in my previous post, I have a local repo and I use pxeboot with a kickstart file. There is one kickstart per distro, per version.
So you had to copy everything and set this up for every distro and version everyone at a location wants to use? I guess we have different ideas of what the hard way to do things is. Things that pull through a caching proxy don't take any pre-arrangement about the content. But thanks for posting your files - I'll probably set up something like this for Centos 5 in several locations, even though I still consider it a workaround for a design flaw.
Les Mikesell wrote: [snip]
But that's a per-box per-repo specific change, with per-distro, per-repo variations for the end users to figure out for themselves. And then it won't fail over at all if the url you pick goes away. Aren't computers supposed to work for you instead of the other way around?
Have you seen this setup from Guru Labs? http://www.gurulabs.com/goodies/YUM_automatic_local_mirror.php
It sounds like it does what you want it to. It uses DNS to redirect requests for a mirror list to a page you control without changing the clients.
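Stripped down, the trick is roughly this (a sketch of the idea, not their exact files; host names and addresses are placeholders): your internal DNS answers for the mirror-list host, and your web server returns a list naming only the repo you want used:

    # dnsmasq.conf (or an equivalent zone) on the internal DNS server:
    address=/mirrorlist.centos.org/192.168.1.10
    # on 192.168.1.10, serve a one-line mirror list (a static page can
    # ignore the query string yum appends):
    echo 'http://192.168.1.10/centos/4/os/i386/' > /var/www/html/index.html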
William Hooper wrote:
Have you seen this setup from Guru Labs? http://www.gurulabs.com/goodies/YUM_automatic_local_mirror.php
It sounds like it does what you want it to. It uses DNS to redirect requests for a mirror list to a page you control without changing the clients.
Oh, this is brilliant! It's like the classic local mirror scheme, with none of the disadvantages.
William Hooper wrote:
Have you seen this setup from Guru Labs? http://www.gurulabs.com/goodies/YUM_automatic_local_mirror.php
It sounds like it does what you want it to. It uses DNS to redirect requests for a mirror list to a page you control without changing the clients.
They are actually directing to a local repo which I'd like to avoid since I don't want to copy anything unless at least one update requests it, but I suppose similar contortions would work to force the same external repo to be used. At least someone else recognizes the problem.
Les Mikesell ha scritto:
They are actually directing to a local repo which I'd like to avoid since I don't want to copy anything unless at least one update requests it, but I suppose similar contortions would work to force the same external repo to be used. At least someone else recognizes the problem.
Just guessing... would it be possible to use squidGuard (or another kind of interceptor) to always force the same mirror?