I would like to ask that more flexibility be built into automatically choosing sites for downloading.
I have just upgraded to CentOS 5.2. When I look for a mirror, I get many choices from Taiwan. Since I am in Hong Kong this makes sense geographically but .... not in terms of bandwidth.
Although Taiwan is very close, it is almost the slowest connection I can make. Some .jp sites are good but I find .sg or the US is usually best.
This morning I tried to update to the new kernel (92.1.6). My system seemed to die during the download. After some investigation, I found that yum and friends are being defaulted to .tw sites (base, updates, and add-ons all go to .tw). I have now been sitting at the base primary.xml.gz for over 15 minutes and am only about 50% done.
Is it possible when deciding these default sites to look at actual bandwidth and not just geographical closeness?
Mel
Joseph L. Casale wrote:
Me thinks they call that yum-fastestmirror :)
Me thinks that doesn't work.
Fastestmirror gives me the lowest values for the .tw sites, which I think means they are the fastest. They are, in fact, the slowest. Here are some real numbers:
base: 856 kB in 27:38
updates: 91 kB in 1:20
kernel-devel: died after 52:01
At that point several mirrors were tried; finally a fast one was found and things went quickly and smoothly.
Almost one hour, and only 1.7 MB of 4.8 MB was downloaded.
This is typical for me getting data from .tw sites.
Mel
tech wrote:
Joseph L. Casale wrote:
Me thinks they call that yum-fastestmirror :)
Me thinks that doesn't work.
Fastestmirror gives me the lowest values for the .tw sites, which I think means they are the fastest. They are, in fact, the slowest. Here are some real numbers:
base: 856 kB in 27:38
updates: 91 kB in 1:20
kernel-devel: died after 52:01
At that point several mirrors were tried; finally a fast one was found and things went quickly and smoothly.
Almost one hour, and only 1.7 MB of 4.8 MB was downloaded.
This is typical for me getting data from .tw sites.
Then ... find the fastest mirrors and don't use the default.
We can't possibly try to program in things like this on our end.
The fastestmirror script actually makes a connection to the mirrors in question; the lower numbers should indicate the faster mirrors. It is possible something else is the cause, but we can't test for that.
You can, however, comment out the "mirrorlist=" line and instead add several baseurl=<server_path> lines yourself, pointing at the mirrors you want to use.
Then you get updates from wherever you want.
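As a sketch, a repo stanza edited this way might look like the following. The mirror hostnames are placeholders, not recommendations, and yum accepts multiple URLs under a single baseurl=, one per line:

```ini
# /etc/yum.repos.d/CentOS-Base.repo (excerpt) -- mirror URLs are examples only
[base]
name=CentOS-$releasever - Base
# original mirrorlist line commented out:
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
baseurl=http://mirror.example.sg/centos/$releasever/os/$basearch/
        http://mirror.example.com/centos/$releasever/os/$basearch/
gpgcheck=1
```

With the mirrorlist line commented out, yum only ever contacts the mirrors you listed.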
Johnny Hughes wrote:
Almost one hour and only 1.7 of 4.8 m was downloaded
This is typical for me getting data from .tw sites.
Then ... find the fastest mirrors and don't use the default.
We can't possibly try to program in things like this on our end.
The fastestmirror script actually makes a connection to the mirrors in question; the lower numbers should indicate the faster mirrors. It is possible something else is the cause, but we can't test for that.
You can, however, comment out the "mirrorlist=" line and instead add several baseurl=<server_path> lines yourself, pointing at the mirrors you want to use.
Then you get updates from wherever you want.
Is there a way to coax several hosts behind the same caching proxy to use the same URL as the 1st choice but still fail over and try others if there is a problem? And preferably without having to manually edit files on each machine or coordinate choices.
Les Mikesell wrote:
Johnny Hughes wrote:
Is there a way to coax several hosts behind the same caching proxy to use the same URL as the 1st choice but still fail over and try others if there is a problem? And preferably without having to manually edit files on each machine or coordinate choices.
Fastestmirror does not work with a proxy server ... however you can adjust your yum.conf to use the priority failover method; from 'man yum.conf':
===================================================================
failovermethod
    Either ‘roundrobin’ or ‘priority’.
    ‘roundrobin’ randomly selects a URL out of the list of URLs to start with and proceeds through each of them as it encounters a failure contacting the host.
    ‘priority’ starts from the first baseurl listed and reads through them sequentially.
    failovermethod defaults to ‘roundrobin’ if not specified.
===================================================================
You can use baseurl=<firstchoice> at the top, then other ones after that. They will be picked in order.
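Putting the two pieces together, a stanza combining failovermethod=priority with an ordered baseurl list might look like this sketch (the mirror hostnames are invented placeholders):

```ini
# /etc/yum.repos.d/CentOS-Base.repo (excerpt) -- hostnames are placeholders
[base]
name=CentOS-$releasever - Base
failovermethod=priority
# first choice at the top; fallbacks tried in order only on failure:
baseurl=http://mirror.first-choice.example/centos/$releasever/os/$basearch/
        http://mirror.backup1.example/centos/$releasever/os/$basearch/
        http://mirror.backup2.example/centos/$releasever/os/$basearch/
gpgcheck=1
```

Every host carrying this identical file will hit the same first-choice mirror, which is what lets a shared caching proxy do its job.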
Johnny Hughes wrote:
Is there a way to coax several hosts behind the same caching proxy to use the same URL as the 1st choice but still fail over and try others if there is a problem? And preferably without having to manually edit files on each machine or coordinate choices.
Fastestmirror does not work with a proxy server ... however you can adjust your yum.conf to use the priority failover method; from 'man yum.conf':
===================================================================
failovermethod
    Either ‘roundrobin’ or ‘priority’.
    ‘roundrobin’ randomly selects a URL out of the list of URLs to start with and proceeds through each of them as it encounters a failure contacting the host.
    ‘priority’ starts from the first baseurl listed and reads through them sequentially.
    failovermethod defaults to ‘roundrobin’ if not specified.
===================================================================
You can use baseurl=<firstchoice> at the top, then other ones after that. They will be picked in order.
But this doesn't work if two different people in the same building do updates, since they won't know each other's choice of order. Plus it is painful to have to edit files on every machine to make something happen that should work by default. I liked the CentOS 3.x approach with rrdns much better, since all requests had the same URL even when served by different sites.
Les Mikesell wrote:
Johnny Hughes wrote:
Is there a way to coax several hosts behind the same caching proxy to use the same URL as the 1st choice but still fail over and try others if there is a problem? And preferably without having to manually edit files on each machine or coordinate choices.
Fastestmirror does not work with a proxy server ... however you can adjust your yum.conf to use the priority failover method; from 'man yum.conf':
===================================================================
failovermethod
    Either ‘roundrobin’ or ‘priority’.
    ‘roundrobin’ randomly selects a URL out of the list of URLs to start with and proceeds through each of them as it encounters a failure contacting the host.
    ‘priority’ starts from the first baseurl listed and reads through them sequentially.
    failovermethod defaults to ‘roundrobin’ if not specified.
===================================================================
You can use baseurl=<firstchoice> at the top, then other ones after that. They will be picked in order.
But this doesn't work if two different people in the same building do updates, since they won't know each other's choice of order. Plus it is painful to have to edit files on every machine to make something happen that should work by default. I liked the CentOS 3.x approach with rrdns much better, since all requests had the same URL even when served by different sites.
We have had this discussion many times. rrdns does NOT work correctly with python; it causes one server to carry 67% of the load, no matter how many servers you pass back in the rotation. It also does not allow us to leverage 200 public servers. We do not have enough servers to serve 2 million updates if we don't leverage external public mirrors ... and neither does Fedora.
... and you can sync the same config file to everyone's machine if you want. You can also just have a local mirror on any webserver.
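One hedged sketch of the "sync the same config file" approach: write one repo stanza locally, then push the identical copy to each box. The host names and mirror URL below are placeholders, not anything from this thread:

```shell
# Build a shared repo stanza once; every machine gets the same copy,
# so they all hit the same mirror and share the caching proxy's cache.
cat > /tmp/CentOS-Base.repo <<'EOF'
[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
baseurl=http://mirror.example.com/centos/$releasever/os/$basearch/
gpgcheck=1
EOF

# Push to each host (commented out; host names are placeholders):
# for h in host1 host2 host3; do
#     scp /tmp/CentOS-Base.repo root@"$h":/etc/yum.repos.d/CentOS-Base.repo
# done

# Sanity check: exactly one baseurl line in the generated file.
grep -c '^baseurl=' /tmp/CentOS-Base.repo   # prints 1
```

The quoted 'EOF' heredoc keeps $releasever and $basearch literal so yum, not the shell, expands them.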
Les Mikesell wrote:
Is there a way to coax several hosts behind the same caching proxy to use the same URL as the 1st choice but still fail over and try others if there is a problem? And preferably without having to manually edit files on each machine or coordinate choices.
I've been using ncache (nginx + caching) at one setup to achieve something like this, but with a script that hijacks the mirrorlist= fetches and returns my own fixed list from a local cache.
I know it's very, very hacky and completely bogus, but it works for me in a very limited sense. Someone might want to pick up on that and develop it further into a more functional system, perhaps.
But if you don't control the proxy and can't admin/change configs in there, then you will need to manually set up each machine to use the same set of mirrors and drop the mirrorlist fetches from .centos.org.
- KB
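A very rough sketch of the shape such a mirrorlist hijack could take in plain nginx config. Everything here (port, paths, file naming) is invented for illustration and is not the actual setup described above:

```nginx
# Answer yum's mirrorlist requests with a fixed local list instead of
# letting them reach mirrorlist.centos.org.
server {
    listen 3128;
    server_name mirrorlist.centos.org;

    location / {
        # /var/www/mirrorlists holds plain-text files, one mirror URL
        # per line, e.g. os-i386.txt keyed off the ?arch= query arg.
        root /var/www/mirrorlists;
        try_files /os-$arg_arch.txt =404;
    }
}
```

Clients would still need their DNS or proxy pointed at this server for the interception to happen, which is the genuinely hacky part.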
Joseph L. Casale <> scribbled on Friday, June 27, 2008 9:29 AM:
Is it possible when deciding these default sites to look at actual bandwidth and not just geographical closeness?
Mel
Me thinks they call that yum-fastestmirror :)
That addon rocks! Add the delta-addon as well, and you get really fast updates! 8-D
Sorin@Gmail wrote:
That addon rocks! Add the delta-addon as well, and you get really fast updates! 8-D
There is no functional delta-addon for yum in CentOS-5, are you getting confused with Fedora here ?
Karanbir Singh <> scribbled on Friday, June 27, 2008 10:28 AM:
Sorin@Gmail wrote:
That addon rocks! Add the delta-addon as well, and you get really fast updates! 8-D
There is no functional delta-addon for yum in CentOS-5, are you getting confused with Fedora here ?
Umm... I might. I think it was called something like that, at least. Lemme check. BRB.
...
Here: "rpm -qa '*delta*'" reports I have "deltarpm-3.3-2.el5.rf.i386" installed on my CentOS 5.2 machine.
I got the impression this worked with CentOS 5 anyway. At least, with my pre-5.2 CentOS, I saw some stats flashing by when running yum. My understanding was that the repo had to be compatible with deltarpm for this to work at all.
So this doesn't work with CentOS then, correct?
Ok, any particular reason why not?
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Karanbir Singh Sent: Friday, June 27, 2008 4:54 PM To: CentOS mailing list Subject: Re: [CentOS] Automatic site selection for dowload
Sorin@Gmail wrote:
There is no functional delta-addon for yum in CentOS-5, are you getting confused with Fedora here ?
So this doesn't work with CentOS then, correct?
That's what I said earlier; we don't support deltarpm in CentOS.
- KB
_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
Sorin Srbu wrote:
Ok, any particular reason why not?
Sorin@Gmail wrote:
There is no functional delta-addon for yum in CentOS-5, are you getting confused with Fedora here ?
So this doesn't work with CentOS then, correct?
That's what I said earlier; we don't support deltarpm in CentOS.
- KB
Far be it from me to put words in the mouths of our distro's developers and maintainers, but I'd bet it's because upstream doesn't.
My .02 -R
PS: upgrade to 5.2 successful here...zero problems and MANY thanks to all of the CentOS team!
Sorin Srbu wrote:
Ok, any particular reason why not?
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Karanbir Singh Sent: Friday, June 27, 2008 4:54 PM To: CentOS mailing list Subject: Re: [CentOS] Automatic site selection for dowload
Sorin@Gmail wrote:
There is no functional delta-addon for yum in CentOS-5, are you getting confused with Fedora here ?
So this doesn't work with CentOS then, correct?
That's what I said earlier; we don't support deltarpm in CentOS.
can you consider not top posting ?
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Karanbir Singh Sent: Saturday, June 28, 2008 12:34 AM To: CentOS mailing list Subject: Re: [CentOS] Automatic site selection for dowload
Sorin Srbu wrote:
Ok, any particular reason why not?
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Karanbir Singh Sent: Friday, June 27, 2008 4:54 PM To: CentOS mailing list Subject: Re: [CentOS] Automatic site selection for dowload
Sorin@Gmail wrote:
There is no functional delta-addon for yum in CentOS-5, are you getting confused with Fedora here ?
So this doesn't work with CentOS then, correct?
That's what I said earlier; we don't support deltarpm in CentOS.
can you consider not top posting ?
Sorry.
I think I fixed that last evening; all posts from me from then on should be properly quoted and not top-posted. Outlook 2007 doesn't allow much leeway on this, unfortunately... 8-/
tech wrote:
I would like to ask that more flexibility be built into automatically choosing sites for downloading.
You can always just disable the yum-fastestmirror plugin if it's not working well for you. In /etc/yum/pluginconf.d there should be a file called fastestmirror.conf -- look in there, and change enabled=1 to enabled=0. It might be worth running it with verbose=1 for a little while to work out what's going on. Perhaps also reduce the maxhostfileage value to have fastestmirror speed-check each mirror more often.
There is also always the option of only using mirrors you know work well. To do that, comment out the mirrorlist= lines in /etc/yum.repos.d/CentOS-Base.repo and replace them with baseurl=<url to mirror>. You can have multiple mirrors listed there; see man yum.conf for more info.
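For reference, the plugin settings mentioned above live in a small ini-style file. A sketch with the values under discussion follows; the specific numbers are only examples, not recommended defaults:

```ini
# /etc/yum/pluginconf.d/fastestmirror.conf -- values shown are examples
[main]
enabled=0          # 0 disables the plugin entirely; 1 enables it
verbose=1          # print which mirrors are timed and how they rank
maxhostfileage=1   # days before cached mirror timings are re-checked
socket_timeout=3   # seconds to wait when timing each mirror
```

Dropping maxhostfileage forces fresh timings more often, at the cost of a slower start to each yum run.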
Karanbir Singh <> scribbled on Friday, June 27, 2008 10:27 AM:
tech wrote:
I would like to ask that more flexibility be built into automatically choosing sites for downloading.
You can always just disable the yum-fastestmirror plugin if it's not working well for you. In /etc/yum/pluginconf.d there should be a file called fastestmirror.conf -- look in there, and change enabled=1 to enabled=0. It might be worth running it with verbose=1 for a little while to work out what's going on. Perhaps also reduce the maxhostfileage value to have fastestmirror speed-check each mirror more often.
I wonder if running "yum update yum*" first would help, before doing the full "yum update"?
In my case with CentOS v5.2, I saw yum got updated too, and since then yum works blazingly fast for me.
You might want to try it anyway. If it works better after the update, then fine, you're all set for better speeds.
FWIW, I had slow speeds pre-5.2 as well.
Karanbir Singh wrote:
You can always just disable the yum-fastestmirror plugin if it's not working well for you. In /etc/yum/pluginconf.d there should be a file called fastestmirror.conf -- look in there, and change enabled=1 to enabled=0. It might be worth running it with verbose=1 for a little while to work out what's going on. Perhaps also reduce the maxhostfileage value to have fastestmirror speed-check each mirror more often.
There is also always the option of only using mirrors you know work well. To do that, comment out the mirrorlist= lines in /etc/yum.repos.d/CentOS-Base.repo and replace them with baseurl=<url to mirror>. You can have multiple mirrors listed there; see man yum.conf for more info.
Thanks.
I have already set enabled=0. That helps.
I will try your other suggestions.
I suspect this is something unique to my connection: initial transfers are fast, but then things get very slow.
On Thu, Jun 26, 2008 at 11:15 PM, tech tech@laamail.com wrote:
I would like to ask that more flexibility be built into automatically choosing sites for downloading.
I have just upgraded to CentOS 5.2. When I look for a mirror, I get many choices from Taiwan. Since I am in Hong Kong this makes sense geographically but .... not in terms of bandwidth.
Although Taiwan is very close, it is almost the slowest connection I can make. Some .jp sites are good but I find .sg or the US is usually best.
This morning I tried to update to the new kernel (92.1.6). My system seemed to die during the download. After some investigation, I found that Yum et al is being defaulted to .tw sites. (base, updates, add ons all go to .tw.) I have now been sitting at base primary.xml.gz for over 15 minutes and am only about 50% done.
Is it possible when deciding these default sites to look at actual bandwidth and not just geographical closeness?
Mel: I had this problem before fastestmirror. The nearest server to me, geographically, probably would not be the fastest for me. Another example is Sourceforge: they set up downloads from Brazil, but I'm in Colombia, so I change that to a server in the USA. Thank you for fastestmirror! Get fastestmirror fixed and it will work for you!
BTW, when I upgraded two desktops from 5.1 to 5.2, on our 550k connection, it took just over an hour to get approximately 489 MB of packages downloaded. (DL speed from the USA was well above 550k for both downloads.) HK is probably connected directly to the USA via satellite and/or undersea cable, and downloading from the USA should be fast for you.
Lanny