There have been some complaints recently concerning this, so I thought an explanation of the setup and some recommendations were in order.
First, I want to say that we need more donated servers with fast (at least 100 Mbit/s) internet connections. If you are a hosting provider making good money with CentOS, please consider donating a server so we can better distribute CentOS.
Now on to the message :)
----------------------------------------- 1. mirror.centos.org is a round-robin DNS (rrdns) name pointing at several machines, so that we can distribute the load for CentOS downloads across several servers. Distributed load is the only way we can serve more than 1 TiB per day for updates. (That's right, 1 TiB per day, ~30 TiB per month, double the traffic of 3 months ago and 10x the traffic of a year ago).
We had several servers totally maxed out serving updates during the 4.2 release cycle. Did I mention server donations :)
There are other ways to distribute the load other than rrdns ... like a load balancer ... but these usually only work well with geographically co-located machines.
Since all the machines in the mirror.centos.org rrdns are donated and dispersed throughout the US and EU, they don't work well with typical load balancing.
2. Since our machines are not geographically co-located, we need to come up with another method to use multiple servers to distribute load.
We are currently developing an application that will use geoip (and public mirrors that we verify are updated:
http://www.centos.org/modules/tinycontent/index.php?id=13
along with the donated mirror.centos.org servers) to generate a mirrorlist for each country identified by geoip.
This mirrorlist should then have good, fast and geographically accurate servers for each person who does updates.
We hope to have this system functioning properly before the release of CentOS-4.3 and CentOS-3.7.
If you use the optional fastestmirror plugin, you will even get the fastest mirror in each repo from the mirrorlist. The fastestmirror plugin is currently under heavy development by the yum developers (with CentOS helping) ... so it is going to change in the next couple of months, but in its current form it works well for picking the fastest mirrors from a list, provided you are not behind a proxy server.
http://lists.centos.org/pipermail/centos-devel/2005-November/000947.html
3. Another common problem is transparent proxy servers and yum. HTTP mirrors and yum don't work really well via transparent proxy servers, so if you are using one, your best bet is to pick an FTP mirror from our public external server list (see mirror link above) instead of mirror.centos.org.
-------------------------------
So, what can you do now if you have problems ... or just want to help?
The first thing is to pick several close mirrors and stop using mirror.centos.org in your /etc/yum.repos.d/CentOS-Base.repo file. There is an example of a repo with more than one mirror in the fastestmirror link above.
If you have more than 1 mirror for each repo (I recommend that you do) ... and if you don't have to use yum via a proxy server that blocks outbound client connections ... then you should try the fastestmirror plugin above to pick your best mirror from the list at yum run time.
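For illustration, a repo stanza with more than one baseurl might look something like this (the mirror hostnames here are placeholders, not real mirrors -- substitute servers from the public mirror list above):

    [base]
    name=CentOS-$releasever - Base
    baseurl=http://mirror1.example.org/centos/$releasever/os/$basearch/
            http://mirror2.example.net/centos/$releasever/os/$basearch/
            ftp://mirror3.example.com/pub/centos/$releasever/os/$basearch/
    gpgcheck=1

With several baseurl entries like that, the fastestmirror plugin has something to time and can pick the quickest one at yum run time.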
We will continue to strive to deliver CentOS within our means as we continue to grow by leaps and bounds, but we need the help and support of our user community if we are going to be able to continue to provide the same stellar services that we have all grown accustomed to. That means people who use CentOS will need to support it ... that is what the Community ENTerprise Operating System (CentOS) is all about.
Thanks, Johnny Hughes
May I make a suggestion? How about advertising a torrent for the various ISO's (especially the DVD version) and offload some of that traffic onto people who perhaps cannot donate ~100mbits, but are cool with a few megabits?
Cheers,
On Tue, 2005-11-29 at 07:25 -0500, Chris Mauritz wrote:
May I make a suggestion? How about advertising a torrent for the various ISO's (especially the DVD version) and offload some of that traffic onto people who perhaps cannot donate ~100mbits, but are cool with a few megabits?
Cheers,
All the CentOS ISOs are available via torrents that CentOS.org is the tracker for.
(In fact, you can't download ISOs from the CentOS servers directly)
If you want to help out on the torrent front, you can get the torrents that you are interested in helping provide from here:
http://mirror.centos.org/centos/4/isos/
or
http://mirror.centos.org/centos/3/isos/
and join the torrent :)
Thanks, Johnny Hughes
Has anyone thought of using some interesting/innovative way to enable sharing of updates via a torrent-like swarm as well?
On 11/29/05, Maciej Żenczykowski maze@cela.pl wrote:
Has anyone thought of using some interesting/innovative way to enable sharing of updates via a torrent-like swarm as well?
This has been considered - check out the list of "ToDont" items:
http://wiki.linux.duke.edu/YumTodont
It's a bad idea for a variety of reasons.
One solution to distributing the load is for CentOS to provide CentOS.repo files that include a long list of mirrors and then distribute the fastest-mirror plugin. There are some problems in that solution, but they are getting fixed.
Greg
On Tue, 2005-11-29 at 08:07, Greg Knaddison wrote:
Has anyone thought of using some interesting/innovative way to enable sharing of updates via a torrent-like swarm as well?
This has been considered - check out the list of "ToDont" items:
http://wiki.linux.duke.edu/YumTodont
It's a bad idea for a variety of reasons.
One solution to distributing the load is for CentOS to provide CentOS.repo files that include a long list of mirrors and then distribute the fastest-mirror plugin. There are some problems in that solution, but they are getting fixed.
Better yet would be to teach yum why multiple A records have been used in DNS for redundancy for ages. That way wouldn't break caching proxies.
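(For the curious, you can see the round-robin in action with something like:

    dig +short mirror.centos.org A

which should return several A records, one per donated server; most resolvers rotate the order between queries. The exact addresses will of course vary over time.)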
Maciej Żenczykowski wrote:
Has anyone thought of using some interesting/innovative way to enable sharing of updates via a torrent-like swarm as well?
many times, and each time it was written off as a bad idea - look through the yum mailing lists for these threads.
- K
Johnny Hughes wrote:
On Tue, 2005-11-29 at 07:25 -0500, Chris Mauritz wrote:
May I make a suggestion? How about advertising a torrent for the various ISO's (especially the DVD version) and offload some of that traffic onto people who perhaps cannot donate ~100mbits, but are cool with a few megabits?
Cheers,
All the CentOS ISOs are available via torrents that CentOS.org is the tracker for.
(In fact, you can't download ISOs from the CentOS servers directly)
If you want to help out on the torrent front, you can get the torrents that you are interested in helping provide from here:
http://mirror.centos.org/centos/4/isos/
or
http://mirror.centos.org/centos/3/isos/
and join the torrent :)
I didn't realise there were torrents available. I'll d/l the latest ISOs and make them available on some well connected system at the datacenter.
Cheers,
Is there a timetable to fix the yum/http proxy incompatibility? i find it interesting this has not been addressed yet.
- Another common problem is transparent proxy servers and yum. HTTP
mirrors and yum don't work really well via transparent proxy servers, so if you are using one, your best bet is to pick an FTP mirror from our public external server list (see mirror link above) instead of mirror.centos.org.
On 11/29/05, William Warren hescominsoon@emmanuelcomputerconsulting.com wrote:
Is there a timetable to fix the yum/http proxy incompatibility? i find it interesting this has not been addressed yet.
First, it's not a problem with all http proxies, just the ones that suck. Second, it has been addressed. The method of diagnosing and fixing it is even in the FAQ.
http://wiki.linux.duke.edu/YumFaq#Q5
Basically,
1. Get your proxy software/firmware updated so that it properly implements HTTP 1.1
2. Use an FTP repository, where byte ranges are more commonly supported by the proxy
3. Create a local mirror with rsync and then point your yum.conf to that local mirror (see the sketch below)
4. Don't use yum
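For option 3, a rough sketch (the mirror hostname, rsync module and local paths below are examples only -- pick a public mirror that offers rsync from the mirror list):

    # sync the os and updates trees for one arch into a local web root
    rsync -avz --delete rsync://mirror.example.org/centos/4/os/i386/ /var/www/html/centos/4/os/i386/
    rsync -avz --delete rsync://mirror.example.org/centos/4/updates/i386/ /var/www/html/centos/4/updates/i386/

Then point the baseurl lines in your repo files at your own web server instead of the public mirrors.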
Regards, Greg
interesting. Last time i checked Squid didn't suck..:) I have used Astaro and IPcop and run into this problem.
William Warren wrote:
Is there a timetable to fix the yum/http proxy incompatibility? i find it interesting this has not been addressed yet.
- Another common problem is transparent proxy servers and yum. HTTP
mirrors and yum don't work really well via transparent proxy servers, so if you are using one, your best bet is to pick an FTP mirror from our public external server list (see mirror link above) instead of mirror.centos.org.
Why not ask the guys who built the proxy? That's where the breakage mostly is.
Also, you have the option of going to an FTP mirror and circumventing this issue.
I would rather not simply circumvent this issue ... I would rather it be fixed. Like I posted to another person, squid is a great proxy ... that is, unless it is one that now sucks.
William Warren wrote:
I would rather not simply circumvent this issue ... I would rather it be fixed. Like I posted to another person, squid is a great proxy ... that is, unless it is one that now sucks.
I've not had a problem with squid; it's worked fine for me - I run a few machines that sit behind it. But then again, not in a transparent role.
The fix you are looking for is with the proxy developers, not in yum - yum uses HTTP 1.1, and as long as the proxy supports it and works in a sane manner, you will have no problems (that I know of).
- K
On Tue, 2005-11-29 at 10:47 -0500, William Warren wrote:
I would rather not simply circumvent this issue ... I would rather it be fixed. Like I posted to another person, squid is a great proxy ... that is, unless it is one that now sucks.
Yum is not maintained by CentOS ... it is something that we use from somewhere else. We use what they produce, so there is no timetable.
It isn't proxy servers in general, just some implementations that don't work well.
I am using squid on my network and I get updates through it for 10 machines with no problems.
A transparent proxy is one where ports are rerouted behind the scenes and the client has no proxy configuration at all.
Using an FTP server is best in these situations.
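To illustrate what "behind the scenes" means, a typical transparent setup just redirects outbound port 80 to the proxy with a firewall rule along these lines (interface and proxy port are examples):

    iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3128

The client is never told the proxy exists, which is exactly why yum doesn't know it's there unless you tell it.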
so my question is..why can't yum deal with a squid proxy running in transparent mode?..<G>
William Warren wrote:
so my question is..why can't yum deal with a squid proxy running in transparent mode?..<G>
how do you know it's broken?
if it is, file bugs and propose a fix (if you are able to).
I just looked and I can't find any proxy issues filed against squid (on the yum bugzilla).
On Tue, 2005-11-29 at 11:46 -0500, William Warren wrote:
so my question is..why can't yum deal with a squid proxy running in transparent mode?..<G>
Because a transparent proxy is cheating ... yum has no idea that you are using a proxy. Transparent proxies are not the way proxy servers should be done.
If you know the IP address and port of your transparent proxy ... and you setup yum to use it properly, it will be no problem.
If yum is not configured to use a proxy, it assumes that it is making a direct connection. This is not an unreasonable assumption, and it is quite logical.
Transparent proxies should be against the freaking law :)
interesting. I like my transparent proxies..:) Your post seems to indicate there is a yum configuration that can fix this issue without having to use FTP and without having to reveal the proxy. Am I correct?
Johnny Hughes wrote:
On Tue, 2005-11-29 at 11:46 -0500, William Warren wrote:
so my question is..why can't yum deal with a squid proxy running in transparent mode?..<G>
Because a transparent proxy is cheating ... yum has no idea that you are using a proxy. Transparent proxies are not the way proxy servers should be done.
If you know the IP address and port of your transparent proxy ... and you setup yum to use it properly, it will be no problem.
If yum is not configured to use a proxy, it assumes that it is making a direct connection. This is not an unreasonable assumption, and it is quite logical.
Transparent proxies should be against the freaking law :)
On Tue, 2005-11-29 at 12:08 -0500, William Warren wrote:
interesting. I like my transparent proxies..:) Your post seems to indicate there is a yum configuration that can fix this issue without having to use FTP and without having to reveal the proxy. Am I correct?
Yes, you can use the proxy settings in /etc/yum.conf to tell it to use the proxy ... this should minimize any issues.
man yum.conf
should give you good instructions for proxy setup of yum
basically, just this should work for a transparent proxy:
proxy=http://whatever.is.name:port/
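A slightly fuller sketch of the [main] section (hostname and port are examples; the username/password lines are only needed if your proxy requires authentication):

    [main]
    proxy=http://proxy.example.lan:3128/
    proxy_username=someuser
    proxy_password=somepass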
On Tue, 2005-11-29 at 11:04, Johnny Hughes wrote:
so my question is..why can't yum deal with a squid proxy running in transparent mode?..<G>
Because a transparent proxy is cheating ... yum has no idea that you are using a proxy.
That's the point. You don't need to configure every client. Why would anyone want to?
Transparent proxies are not the way proxy servers should be done.
And the more correct alternative that allows yum to work without configuration would be???
If you know the IP address and port of your transparent proxy ... and you setup yum to use it properly, it will be no problem.
It is no problem for browsers either way. What does yum need that browsers don't?
If yum is not configured to use a proxy, it assumes that it is making a direct connection. This is not an unreasonable assumption, and it is quite logical.
Transparent proxies should be against the freaking law :)
Yes, right *after* there is universal agreement on how to auto-configure everything that uses http and ftp to use a non-transparent proxy - and the matching code gets added everywhere. Meanwhile things that claim to use http should work the same way as browsers.
Agreed. I was unable to figure out how to state it clearly..:)
On Tue, 2005-11-29 at 11:58 -0600, Les Mikesell wrote:
On Tue, 2005-11-29 at 11:04, Johnny Hughes wrote:
so my question is..why can't yum deal with a squid proxy running in transparent mode?..<G>
Because a transparent proxy is cheating ... yum has no idea that you are using a proxy.
That's the point. You don't need to configure every client. Why would anyone want to?
Transparent proxies are not the way proxy servers should be done.
And the more correct alternative that allows yum to work without configuration would be???
If you know the IP address and port of your transparent proxy ... and you setup yum to use it properly, it will be no problem.
It is no problem for browsers either way. What does yum need that browsers don't?
If yum is not configured to use a proxy, it assumes that it is making a direct connection. This is not an unreasonable assumption, and it is quite logical.
Transparent proxies should be against the freaking law :)
Yes, right *after* there is universal agreement on how to auto-configure everything that uses http and ftp to use a non-transparent proxy - and the matching code gets added everywhere. Meanwhile things that claim to use http should work the same way as browsers.
OR ... maybe one should figure out how to design transparent proxy rules that somehow test and make sure the agent is a web browser before one decides to reroute that information.
actually, squid is pretty good ... and it mostly works with yum as a transparent proxy.
BUT, rerouting stuff with IPTABLES and using transparent proxies does sometimes cause problems ... and I can't personally define what gets fixed in either yum or up2date ... as they are both maintained upstream.
I can tell people how they might work around known problems ... which I have tried to do.
Les Mikesell lesmikesell@gmail.com wrote:
That's the point. You don't need to configure every client. Why would anyone want to?
Good configuration management of the network perhaps? ;->
And the more correct alternative that allows yum to work without configuration would be???
FTP -- that's been stated several times now. The problem only affects HTTP streams. HTTP is not a well defined protocol, too generic, too free-form. Things break over it. Heck, there is an ever sprawling set of APIs for HTTP now -- many incomplete or have various compatibility issues.
Relating this to another thread on security, it's getting to the point that layer-3/4 firewalls are useless, because _everything_ is getting exploited over HTTP. So you should have a dedicated layer-7 gateway for HTTP that _all_ systems communicate through _explicitly_ by default.
It is no problem for browsers either way.
Now hold on there! Are you _sure_ about that? It really depends exactly _what_ is being serviced over HTTP. Plenty of HTTP services _break_ when transparently proxied.
In fact, in managing a large network, you quickly realize this when you get support calls from people on subnets that are doing stupid things. And that's when I get my baseball bat out. ;->
What does yum need that browsers don't?
Oh, many, many things. A biggie is that you're transferring files, typically large files. You can have issues doing that with web browsers too. One would argue that we're getting to the point where WebDAV over HTTP would be a far better protocol than just "plain'ole, non-standard HTTP" for file transfers.
Yes, right *after* there is universal agreement on how to auto-configure everything that uses http and ftp to use a non-transparent proxy - and the matching code gets added everywhere. Meanwhile things that claim to use http should work the same way as browsers.
Another alternative would continue to be a local mirror. That addresses all of the suggestions we've seen lately -- from Torrent-based updates to the issue of transparent proxies.
In fact, you just gave "the litmus test." If you have so many systems that adding a proxy line to each of your Linux systems would be a chore, then you have enough systems that you should have a _local_ mirror instead of them all hitting mirror.centos.org.
Let alone that's also "the litmus test" that you should have a formal configuration management system in place to automate configuration changes anyway. But don't get me started on that. ;->
Just another day on the "bitch about what CentOS can't solve" list.
On Tue, Nov 29, 2005 at 10:37:00AM -0800, Bryan J. Smith wrote:
And the more correct alternative that allows yum to work without configuration would be???
FTP -- that's been stated several times now. The problem only affects HTTP streams. HTTP is not a well defined protocol, too generic, too free-form. Things break over it. Heck, there is an ever sprawling set of APIs for HTTP now -- many incomplete or have various compatibility issues.
Not to mention the fact that using FTP (instead of HTTP) will reduce the amount of data that needs to be transferred (on the wire) by 25%.
To transfer a binary file using HTTP, it has to be encoded using base64, which increases the file size by 1/3.
That, at least, is the theory, and it holds true at least for HTTP/1.0. I'm not sure if HTTP/1.1 (or some other extension hacks) makes it possible to transfer binary files without encoding these days.
[]s
-- Rodrigo Barbosa rodrigob@suespammers.org "Quid quid Latine dictum sit, altum viditur" "Be excellent to each other ..." - Bill & Ted (Wyld Stallyns)
I have gone ftp in my yum.confs. I only have two machines so a local repo is not really feasible here. If i had more than a few machines i would go with an rsync local mirror...however that would be more work than it's worth here. I would like it to work over http. I have to disagree with you on this one bryan..:)
Just another day on the "bitch about what CentOS can't solve" list.
William Warren wrote:
I have gone ftp in my yum.confs. I only have two machines so a local repo is not really feasible here.
So you, in fact, did _not_ agree with Les' statement:
"That's the point. You don't need to configure every client. Why would anyone want to?"
You have 2 systems. Add the damn proxy line and get over it. ;->
Everything else is arguing about things that are much larger than YUM. If you want to make an argument, don't pick and choose the context as it is appropriate to make your case.
If i had more than a few machines i would go with an rsync local mirror...however that would be more work than it's worth here.
And I agree. But in the context of Les' statements, which you agreed with, you have but *2* systems. ;->
I would like it to work over http. I have to disagree with you on this one bryan..:)
But you didn't exactly agree with Les either. Again, if you want to make an argument, don't pick and choose the context as it is appropriate to make your case.
Having to set the proxy server is not going to kill you on 2 systems. If you have a lot of systems, you're going to have a local mirror and, hopefully, a proper configuration management system anyway. I don't care what OS/app you're running. ;->
As far as disagreeing with me, get in line. I'm overwhelmingly in the minority.
h'm not. I don't want to HAVE to reconfigure my clients..nor do i want to HAVE to add a proxy line into my yum.conf files. I actually do agree with les it should just work w/o having to screw around with config files.
Please don't accuse me of picking and choosing Bryan. It is not what i did at all. I chose to add the ftp lines to simply get things working again. it is not something that should have to be done at all. If i had more than 3-5 machine here i would fire up another one as a dedicated mirror.
On Tue, 2005-11-29 at 13:03, William Warren wrote:
h'm not. I don't want to HAVE to reconfigure my clients..nor do i want to HAVE to add a proxy line into my yum.conf files. I actually do agree with les it should just work w/o having to screw around with config files.
What I actually do is add the proxy info to the command line so it is exported to yum - and I don't have a transparent proxy, I just do it to make all the machines use the same cache which I've configured to store large files. I update often enough that I can nearly always recall the command line from history with ^r3128 (the squid port). Or, I'll ssh to several machines and cut/paste the command line between windows.
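(For anyone wanting to copy that approach: yum fetches through urlgrabber, which generally honors the standard environment variable, so a one-off along the lines of

    http_proxy=http://squidbox.example.lan:3128/ yum update

should do it. The hostname and port above are just examples.)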
Please don't accuse me of picking and choosing Bryan. It is not what i did at all. I chose to add the ftp lines to simply get things working again. it is not something that should have to be done at all. If i had more than 3-5 machine here i would fire up another one as a dedicated mirror.
But, if you had more machines, you'd probably be trying other distributions or versions on them.
On Tue, 2005-11-29 at 14:05 -0600, Les Mikesell wrote:
On Tue, 2005-11-29 at 13:03, William Warren wrote:
h'm not. I don't want to HAVE to reconfigure my clients..nor do i want to HAVE to add a proxy line into my yum.conf files. I actually do agree with les it should just work w/o having to screw around with config files.
What I actually do is add the proxy info to the command line so it is exported to yum - and I don't have a transparent proxy, I just do it to make all the machines use the same cache which I've configured to store large files. I update often enough that I can nearly always recall the command line from history with ^r3128 (the squid port). Or, I'll ssh to several machines and cut/paste the command line between windows.
Please don't accuse me of picking and choosing Bryan. It is not what i did at all. I chose to add the ftp lines to simply get things working again. it is not something that should have to be done at all. If i had more than 3-5 machine here i would fire up another one as a dedicated mirror.
But, if you had more machines, you'd probably be trying other distributions or versions on them.
Heresy I tell you ... no other distributions need to be tried ... :)
Les Mikesell lesmikesell@gmail.com wrote:
What I actually do is add the proxy info to the command line so it is exported to yum - and I don't have a transparent proxy,
That's good. But instead of adding it to the command-line, you can put it in the configuration file.
I just do it to make all the machines use the same cache which I've configured to store large files.
Mirroring would be quite a bit more effective, more exacting and far, far less error-prone to transmission issues.
I update often enough that I can nearly always recall the command line from history with ^r3128 (the squid port).
God forbid you might have to write a script that ... gasp ... retrieves the latest proxy info -- let alone from an automatic proxy URL. ;->
Or, I'll ssh to several machines and cut/paste the command line between windows.
God forbid you might have to write a script to automate that across your enterprise. I can hear it now ... "No, no, there will be _no_ automated configuration management on my network! I have to justify my job with manually-intensive busy work!" ;->
But, if you had more machines, you'd probably be trying other distributions or versions on them.
Wow! What a great topic for the CentOS list! Let's all join the bandwagon now ... which distros suck less than CentOS? ;->
Sorry, not my bag of tea. This isn't a YUM issue. It's a greater issue of installation/configuration management when you have a large number of systems, and a relatively simplistic task when you only have a few.
On Tue, 2005-11-29 at 15:20, Bryan J. Smith wrote:
What I actually do is add the proxy info to the command line so it is exported to yum - and I don't have a transparent proxy,
That's good. But instead of adding it to the command-line, you can put it in the configuration file.
But then it would have to change when I build/update a machine here and ship it or the disk to another location.
Or, I'll ssh to several machines and cut/paste the command line between windows.
God forbid you might have to write a script to automate that across your enterprise. I can hear it now ... "No, no, there will be _no_ automated configuration management on my network! I have to justify my job with manually-intensive busy work!" ;->
Actually I just prefer not to break all my machines at once... It's really not that hard to paste an update command into several windows or ssh it to the next machine after the previous one is rebooted and back in the load balancer pools.
But, if you had more machines, you'd probably be trying other distributions or versions on them.
Wow! What a great topic for the CentOS list! Let's all join the bandwagon now ... which distros suck less than CentOS? ;->
I'm sure you know as well as anyone that you need to be working with fedora to know what to expect from the next version of Centos. And working with fedora is frustrating enough to make you try some other distros whenever you have time.
Then there's the k12ltsp and SMEserver variations on the Centos base.
Sorry, not my bag of tea. This isn't a YUM issue. It's a greater issue of installation/configuration management when you have a large number of systems, and a relatively simplistic task when you only have a few.
And it gets ugly when you have lots of machines running an assortment of different distributions. Just not quite ugly enough to make it worth reinstalling just to make it all the same (if that's even possible due to hardware and application differences) or holding back on new things.
Les Mikesell lesmikesell@gmail.com wrote:
But then it would have to change when I build/update a machine here and ship it or the disk to another location.
So what do you do about other site-specific system settings?
Actually I just prefer not to break all my machines at once...
That's why you test on one system _first_. I don't care if you're doing it automated or manually, there is absolutely _no_ difference!
It's really not that hard to paste an update command into several windows or ssh it to the next machine after the previous one is rebooted and back in the load balancer pools.
But it takes a lot less time (and is much safer) to:
1) Download updates to 1 system
2) Test that system
3) Pull the files from its APT/YUM cache (sketch below)
4) Redistribute them to all other systems
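A rough sketch of steps 3 and 4 (the cache path is typical rather than guaranteed, so adjust for your yum version, and set keepcache=1 in yum.conf so the downloaded RPMs are retained):

    # on the test box, after the updates have proven themselves
    scp /var/cache/yum/*/packages/*.rpm otherbox:/tmp/updates/
    ssh otherbox 'rpm -Fvh /tmp/updates/*.rpm'

rpm -Fvh only freshens packages that are already installed, so a box with a smaller package set simply skips the RPMs it doesn't have.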
I actually didn't know about automated YUM tools to do #3/#4 until that previous thread a month or two back.
I either just leveraged my existing configuration management structure to distribute files from APT/YUM caches -- or more formally -- just maintained my own local APT/YUM mirrors, my own "enterprise release" tags on the package sets, etc...
So, are we going to _continue_ going round and round on this? I'm not saying what you're doing is "wrong." I'm just saying there _are_ other ways to deal with your problem, and they _are_ very efficient for most of us other administrators. ;->
I'm sure you know as well as anyone that you need to be working with fedora to know what to expect from the next version of Centos. And working with fedora is frustrating enough to make you try some other distros whenever you have time.
I'm just telling you what's in store in the future. I'm doing it so you can stop asking for things from CentOS on this list that _are_ being addressed by the upstream provider who _can_ do such!
Do you have to take everything and make it either an insult about a distro or a threat to use another? Honestly, I don't think we need more Linux users -- excuse me, administrators -- like yourself.
The only thing I'm guilty of is taking the time to explain extensive technical options people have -- to make things easier. You may like one way, but it's _not_ the only way -- and if there is another option, one that might save you time, I'm going to offer it. Especially when you're "bitching" for solutions that CentOS can_not_ give you. ;->
Then there's the k12ltsp and SMEserver variations on the Centos base.
Then setup 1 system, download the updates, and rip/distribute from their APT/YUM caches. I've been doing that for _years_ (with my own scripts) when the organization is small enough I don't have a local mirror setup.
And it gets ugly when you have lots of machines running an assortment of different distributions.
Again, pick 1 system per distro. Pick a user who doesn't mind "experimenting" if you don't have a spare system. They download the updates first. If they work and pass muster, then use his/her APT or YUM cache to feed _all_ others. Done.
Otherwise, how are you maintaining any management over these systems?
Just not quite ugly enough to make it worth reinstalling
Huh? Who said _anything_ about re-installing?!?!?! @-o
just to make it all the same (if that's even possible due to hardware and application differences) or holding back on new things.
Huh? You just lost me. Why would you re-install?
William Warren wrote:
h'm not. I don't want to HAVE to reconfigure my clients..
How do you install your systems? How do you make system changes? How do you do any changes?
This is what I don't understand at all. This has nothing to do with YUM.
nor do i want to HAVE to add a proxy line into my yum.conf files.
But what do you do for system installation/customization? Do you use a Kickstart disk? Or do you manually install the system and manually configure? How do you make other changes to a system?
Again, this is what I don't understand at all. And it has nothing to do with YUM.
I actually do agree with les it should just work w/o having to screw around with config files.
Why not just do it when you install the system? I mean, I assume you're doing it then, correct? Or do your computers read your brain when you install?
Please don't accuse me of picking and choosing Bryan.
My point was you were clearly making an _inapplicable_ argument just because it fits.
There are countless configuration details that you do -- even if just once (like adding a proxy line to yum.conf). What makes this so different from any other install-time configuration detail?
It is not what i did at all. I chose to add the ftp lines to simply get things working again.
Then why couldn't you put in the proxy line instead? You only have to do it once.
it is not something that should have to be done at all.
I'm still failing to see the point here. Transparent proxies do not work for a lot of things.
If i had more than 3-5 machine here i would fire up another one as a dedicated mirror.
And that's great. But even then, you'd have to change the repository configuration. So what's the issue?
On Tue, 2005-11-29 at 14:58, Bryan J. Smith wrote:
There are countless configuration details that you do -- even if just once (like adding a proxy line to yum.conf). What makes this so different from any other install-time configuration detail?
Among other things, it changes when you move the machine. How do you deal with laptops? The proxy at work is not available on my home network or a remote wireless connection.
Les Mikesell lesmikesell@gmail.com wrote:
Among other things, it changes when you move the machine. How do you deal with laptops? The proxy at work is not available on my home network or a remote wireless
connection.
Okay, now you're talking about far _bigger_ issues than just YUM. Especially since I expect mobile users to talk to the network far differently at work than when at home or when mobile. E.g., I don't believe in setting "default gateways" on systems on a corporate network. I also use multiple Firefox and Evolution profiles, just like I do Outlook ones, for when users are at home, at work or roaming.
But I'll meet you half-way.
First off, many Linux tools (including YUM) that access information via HTTP look for a global environment variable called "http_proxy", in the format:
http_proxy=http://USER:PASS@some.domain.com:PORT/
So your network scripts will need to export this, based on what network you have connected to. I do this all the time for my mobile users, with our scripts. Debian has actually had a nice little framework for years to do this.
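A minimal sketch of that (the path and proxy host are examples; a laptop would source a different file, or none, depending on where it is):

    # /etc/profile.d/proxy.sh
    export http_proxy=http://proxy.example.com:3128/
    export ftp_proxy=$http_proxy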
Secondly, Fedora Core 4 (and so RHEL 5) adds GNOME's NetworkManager. GNOME's NetworkManager basically addresses the issues you talk about -- providing a _standard_ way for _applications_ to go to a _single_ place for dynamic networking information. It is essentially, for GNOME applications, the equivalent of the MS IE proxy settings used by all GDI/Explorer applications.
That includes getting the Proxy server option from a DHCP server, which brings me to my final item.
Proxy server information is commonly passed on enterprise networks as a DHCP option. IETF RFC2132 nailed down the common DHCP options 8 years ago, but proxy server wasn't one of them. So it wasn't long before companies added their own option for proxy server URL:port (e.g., Microsoft put the option into its ADS/DHCP service starting with Windows 2000 Server), which eventually led to IETF BCP0043 (aka RFC2939) -- the process on adding more. Today there are a few DHCP options that are "best current practices" (BCPs) that are accepted by servers and clients.
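For what it's worth, one common de facto carrier for this is DHCP option 252 pointing at a proxy auto-config URL. In ISC dhcpd it is declared as a site-local option, roughly like this (the URL is an example):

    option wpad-url code 252 = text;
    subnet 192.168.1.0 netmask 255.255.255.0 {
        option wpad-url "http://proxy.example.com/wpad.dat";
    }

That is exactly the kind of vendor-grown option the BCP process was meant to formalize.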
But to go beyond that, there has been a continuing draft standard** that not only addresses the current practices beyond what BCP0043 specified, but defines an extensible set so you can change the proxy server based on the service you are trying to reach. So it goes well above and beyond the standard industry DHCP option, that basically gives only a single URL:port for all. E.g., I would personally like a way I can direct SSH to a dedicated SOCKS proxy well away from any other proxy.
[ **The latest revision (4) of this draft standard can be found here: http://www.ietf.org/internet-drafts/draft-ietf-dhc-proxyserver-opt-04.txt
If the term "draft standard" throws you off, understand almost _all_ standards created and used in the last 4-5 years are _still_ "draft standards." E.g., IPv4 LINKLOCAL (169.254/16) is still considered a "draft standard" even though it's been in widespread use for 6+ years. The legacy "throw everything out as a Request for Comments (RFC)" was deprecated long ago -- let alone the whole "RFC" tag never meant (by its very acronym) a "standard" either. ;-]
As someone mentioned, from a security standpoint, using transparent proxies should be "against the law." ;->
Now, how much more do you want to throw at this?
On Tue, 2005-11-29 at 16:11, Bryan J. Smith wrote:
Okay, now you're talking about far _bigger_ issues than just YUM. Especially since I expect mobile users to talk to the network far differently at work than when at home or when mobile. E.g., I don't believe in setting "default gateways" on systems on a corporate network.
Heh... Our business is selling streaming data services, mostly over the internet and from an assortment of distributed servers, so you can guess how I feel about that.
[...]
It is essentially, for GNOME applications, the equivalent of the MS IE proxy settings used by all GDI/Explorer applications.
So, if yum ever becomes a GNOME app, all our problems are solved.
Now, how much more do you want to throw at this?
Bottom line is it has to be easier than typing the command line once with the proxy info on it and subsequently recalling it from command history or I probably won't change. Actually I seldom even type it the first time - I usually ssh in after installing a new box and paste the command from another window on a different machine.
Les Mikesell lesmikesell@gmail.com wrote:
Heh... Our business is selling streaming data services, mostly over the internet and from an assortment of distributed servers, so you can guess how I feel about that.
Huh? Just because I do _not_ set a "default gateway" on a system does _not_ mean they can't reach the Internet. Quite the opposite! If anything, I'm ensuring _how_ they reach and _what_ they reach on the Internet. ;->
No offense, but at this point, I'm starting to question your technical reasoning (let alone Internet security fundamentals ;-). You see only 1 way to do something, and then make assumptions on what is and isn't possible based on them.
I have _nothing_ against people who find something that works for them. But I have something against people who think it's the only way, or that no other way could possibly be better.
So, if yum ever becomes a GNOME app, all our problems are solved.
Sigh ... NetworkManager is _part_ of GNOME, but it sets things up _outside_ of GNOME too. It just offers them to GNOME applications as well -- things like Evolution, Firefox, etc...
That's the "greater issue" I'm talking about, so you don't have to go around to every application to set things.
Bottom line is it has to be easier than typing the command line once with the proxy info on it and subsequently recalling it from command history or I probably won't
change.
How about exporting an environmental variable in your /etc/rc.d/rc.local?
And atop of that, why not just download updates on 1 system, then redistribute them from its cache instead -- _after_ you've tested that they work?
At this point, I don't know if you're really interested in anything that would make your life easier, you just want the way you know -- you have continually argued to be "best" and nothing else could be -- to work. And until you get that from the CentOS developers on this, or any other issue that seems to be an issue at the upstream provider, we will hear about it on this list.
Actually I seldom even type it the first time - I usually
ssh
in after installing a new box and paste the command from another window on a different machine.
Again, do you get paid by the hour? You certainly must. ;-> If I do anything more than a few times, it's scripted.
On Tue, 2005-11-29 at 17:20, Bryan J. Smith wrote:
Heh... Our business is selling streaming data services, mostly over the internet and from an assortment of distributed servers, so you can guess how I feel about that.
Huh? Just because I do _not_ set a "default gateway" on a system does _not_ mean they can't reach the Internet. Quite the opposite! If anything, I'm ensuring _how_ they reach and _what_ they reach on the Internet. ;->
The 'what' is the problem. If our sales person want to demo a product that connects on 6 different ports to places that aren't known until the first connection is established, will it work?
No offense, but at this point, I'm starting to question your technical reasoning (let alone Internet security fundamentals ;-). You see only 1 way to do something, and then make assumptions on what is and isn't possible based on them.
I didn't design the product, but I've had to help make it work in places that don't use a default gateway. It's not pretty.
I have _nothing_ against people who find something that works for them. But I have something against people who think it's the only way, or that no other way could possibly be better.
The reason there are other ways is that none of them are perfect. There's nothing wrong with understanding the flaws and tradeoffs of each.
Bottom line is it has to be easier than typing the command line once with the proxy info on it and subsequently recalling it from command history or I probably won't
change.
How about exporting an environmental variable in your /etc/rc.d/rc.local?
Generally I don't want applications to use a proxy unless I know they are going to download the same big files as other systems. Otherwise it slows things down slightly and has no benefit.
And atop of that, why not just download updates on 1 system, then redistribute them from its cache instead -- _after_ you've tested that they work?
That's a reasonable approach, but takes an extra step and unless the same programs are installed everywhere the 1st system may not have all the others need.
At this point, I don't know if you're really interested in anything that would make your life easier, you just want the way you know -- you have continually argued to be "best" and nothing else could be -- to work. And until you get that from the CentOS developers on this, or any other issue that seems to be an issue at the upstream provider, we will hear about it on this list.
I'm not demanding solutions, but if people don't consider the problems there won't ever be any solutions.
Actually I seldom even type it the first time - I usually
ssh
in after installing a new box and paste the command from another window on a different machine.
Again, do you get paid by the hour? You certainly must. ;-> If I do anything more than a few times, it's scripted.
It's a one-line command. How does making it a script help? You have to spend the time to create the script and then it takes just as long to type its name as the command itself - or recall it from history.
Les Mikesell lesmikesell@gmail.com wrote:
The 'what' is the problem. If our sales person want to demo a product that connects on 6 different ports to places that aren't known until the first connection is established, will it work?
If I setup the proxy to allow all access by default, such as I would in such a situation, then yes. But I do _not_ let just any port out the firewall.
I didn't design the product, but I've had to help make it work in places that don't use a default gateway. It's not pretty.
If people would make such security considerations in the first place, the Internet would be a lot safer. The problem is not the networks, but the apps.
The reason there are other ways is that none of them are perfect. There's nothing wrong with understanding the flaws and tradeoffs of each.
That was my point!
So why were you so hell-bent on talking about how something must work only one way? And you continually discarded countless suggestions from others, and even more from myself, as if they were not options?
Generally I don't want applications to use a proxy unless I know they are going to download the same big files as other systems. Otherwise it slows things down slightly and has no benefit.
No benefit?!?!?! Security???
That's a reasonable approach, but takes an extra step and unless the same programs are installed everywhere the 1st system may not have all the others need.
But it would _still_ cache the programs that are similar, as well as they test at least on the "common" system. Even you mentioned "testing," so I'm now even more curious how you're managing these systems?!
I'm not demanding solutions, but if people don't consider the problems there won't ever be any solutions.
Not the solution you explicitly want, as you seem to want to consider no others, or their merits for that matter.
It's a one-line command. How does making it a script help?
First off, you're arguing that 1 command is easy to do on a lot of systems. So how difficult is it to make a 1 line change to yum.conf? Could you please _stick_ with something, instead of just arguing however it may favor your viewpoint at any given moment?
Secondly, you're forgetting that you're SSH'ing into systems, etc... All those manual steps -- launch the terminal, etc... -- for _each_ system.
Having all systems automagically pull from the same configuration server would mean you make a change in 1 place, and then it is pulled by all other systems.
If you only have a half dozen or so systems, then just select one user's system at the client as the configuration management server. You then run 1 command to say the change has been implemented, and it gets copied into the configuration management repository for all other systems to grab.
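As a concrete (if simplistic) sketch of what "pulled by all other systems" can mean -- the hostnames and paths below are made up:

    # nightly cron job on each client
    rsync -a configbox.example.lan:/srv/configs/yum.conf /etc/yum.conf

Change the file once on the configuration box and every client picks it up on its next pull.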
You have to spend the time to create the script and then it takes just as long to type its name as the command itself - or recall it from history.
The time spent to setup a basic configuration management setup is tiny -- especially for multiple systems. It is certainly less time than to launch a terminal, SSH into each one and hit the up arrow on a regular basis.
As I said, I am really starting to question many things at this point. But you keep on at it.
On Wed, 2005-11-30 at 14:11, Bryan J. Smith wrote:
The 'what' is the problem. If our sales person want to demo a product that connects on 6 different ports to places that aren't known until the first connection is established, will it work?
If I setup the proxy to allow all access by default, such as I would in such a situation, then yes. But I do _not_ let just any port out the firewall.
Proxies won't handle our streaming data - and the ports are not the commonly used ones. The next version will work over socks5 but for now it has to go direct.
I didn't design the product, but I've had to help make it work in places that don't use a default gateway. It's not pretty.
If people would make such security considerations in the first place, the Internet would be a lot safer. The problem is not the networks, but the apps.
An earlier product where I was involved in the design used only a couple of addresses and ports, with the server farms behind load balancers using a single address. Now we are integrated into a much larger company with a more distributed server base, where the initial authentication server also performs load balancing by telling the client which individual server to use for each data type. There are disadvantages, but that approach scales better.
The reason there are other ways is that none of them are perfect. There's nothing wrong with understanding the flaws and tradeoffs of each.
That was my point!
So why were you so hell-bent on talking about how something must work only one way? And you continually discarded countless suggestions from others, and even more from myself, as if they were not options?
I don't think they solve the general problem unless they work by default. That is, trying to reduce mirror bandwidth usage by relying on every user with more than one machine to set up their own mirror just doesn't seem right. Yes, it works; yes, it can solve an individual problem - but there has to be a better way, and it doesn't need new technology. Spreading load was solved ages ago with RRDNS - saving bandwidth on repeated downloads was solved with caching proxies. Neither needs special per-distribution setup or ongoing work.
Generally I don't want applications to use a proxy unless I know they are going to download the same big files as other systems. Otherwise it slows things down slightly and has no benefit.
No benefit?!?!?! Security???
Maybe, if you hook a virus scanner to the proxy or limit destinations.
That's a reasonable approach, but it takes an extra step, and unless the same programs are installed everywhere, the 1st system may not have everything the others need.
But it would _still_ cache the programs the systems have in common, and those at least get tested on the "common" system. Even you mentioned "testing," so I'm now even more curious how you're managing these systems?!
Generally I update all within a short time after knowing the first box didn't break. Realistically, I've had exactly one surprise in the lifetime of CentOS, when an update fixed the ifup/down scripts to ignore interfaces if the HWADDR entry didn't match the hardware MAC address, and I wouldn't have caught that one with additional testing because it only affected machines where the disk had been built in one machine and the IP settings added, then moved to the destination box. So, any additional effort would have had no benefit. This just doesn't seem like a big risk considering the QA that has gone into the code before it reaches the CentOS repositories - although I still wish yum could be told to ignore updates that might have been added to the repositories since the time of the initial/tested run.

On the other hand, we have more Windows boxes to maintain than Linux, and we have had instances where updates affected our apps. If you have come up with a way to automatically restrain Windows to a known-to-work update level without waiting for service pack releases, I'd like to know how to do it.
I'm not demanding solutions, but if people don't consider the problems there won't ever be any solutions.
Not the solution you explicitly want, as you seem to want to consider no others, or their merits for that matter.
I've considered them - and pointed out the flaws.
It's a one-line command. How does making it a script help?
First off, you're arguing that 1 command is easy to do on a lot of systems. So how difficult is it to make a 1 line change to yum.conf? Could you please _stick_ with something, instead of just arguing however it may favor your viewpoint at any given moment?
It's more difficult than not making it - especially over time when the disk or whole machine may be shipped to different locations where the setting will be incorrect. Most are configured at the main office, then shipped to a colo site where they run until needed elsewhere or there are hardware problems.
Secondly, you're forgetting that you're SSH'ing into systems, etc... All those manual steps -- launch the terminal, etc... -- for _each_ system.
In some cases that's necessary to control the apps and load balancer around the update and possible reboot window. In others it's a one line ssh command from my workstation.
replies inline
Bryan J. Smith wrote:
William Warren wrote:
I'm not. I don't want to HAVE to reconfigure my clients...
How do you install your systems? How do you make system changes? How do you do any changes?
Straw man. This topic is about yum, not the rest of the system. I said I don't want to have to change yum at all or mess with configuring its config files.
This is what I don't understand at all. This has nothing to do with YUM.
This thread and my responses are totally about yum. You are the one going off-topic.
nor do i want to HAVE to add a proxy line into my yum.conf files.
But what do you do for system installation/customization? Do you use a Kickstart disk? Or do you manually install the system and manually configure? How do you make other changes to a system?
Again, you are going outside the scope of this thread and of my comments.
Again, this is what I don't understand at all. And it has nothing to do with YUM.
it most certainly does
I actually do agree with Les: it should just work w/o having to screw around with config files.
Why not just do it when you install the system? I mean, I assume you're doing it then, correct? Or do your computers read your brain when you install?
And you wonder why folks killfile you.
Please don't accuse me of picking and choosing Bryan.
My point was you were clearly making an _inapplicable_ argument just because it fits.
Wrong... my argument, as you call it, was perfectly on target with my previous comments.
William Warren wrote:
replies inline
No need to declare it, being an "old guy" in Internet terms, I actually prefer bottom posting (but don't mind either way).
Straw man. This topic is about yum, not the rest of the system. I said I don't want to have to change yum at all or mess with configuring its config files.
It's exactly about post-installation configuration. Adding *1* configuration line or, better yet, an environment variable to your system for the proxy is not exactly "a heavy burden."
You do it for all sorts of other things. And the environment variable is considered "proper" when you _do_ have a proxy server.
This thread and my responses are totally about yum. You are the one going off-topic.
This took on its _off-topic_ nature when someone quickly pointed out that CentOS *CAN*DO*NOTHING* about your issue. That has been a _repeat_theme_ here, and it _never_ seems to end either. I can be silent and just let it go (which I did when this thread started), and it will go on. And then the next topic will come up, and yet again, something CentOS can do _nothing_ about will get bitched about too (regardless of whether I respond or not).
It's not only an issue for the upstream provider to address, but a greater issue of handling site-specific settings _period_. Transparent proxies are _not_ the workaround. ;->
So the "new point" which _you'all_ introduced when you were not fit to accept CentOS' situation, is why transparent proxies are an issue, and how much of a so-called "big deal" it is to work around them. Com'mon, it's 1 environmental variable! And it's considered "good practice," whereas transparent proxies are _not_ (hence the "against the law comment" by one other ;-).
Someone else even suggested using another distro. I guess I can only assume he meant to type "tongue-in-cheek" next to it, or he's just making threats that really have no meaning, since CentOS cannot and will not resolve the issue. It's upstream.
Again, you are going outside the scope of this thread and of my comments.
What I'm trying to figure out is how many things that require configuration changes are going to be continually talked about on this list when CentOS can do _nothing_ about them. And it is _continually_ from the _same_ people too.
Now I stood back and watched the first few posts on this. And yet, the nonsense starts again! So I responded, trying to point out the technical specifics in all my verbosity (instead of the "transparent proxies should be against the law" or some other one-liner). And yet again, they are _ignored_ like I'm pulling them out of my rectum.
Stop!
And you wonder why folks killfile you.
Honestly, you're basically wanting the world to work your way, and you won't accept anything else. So then you bitch about it on this list, a project that can do _nothing_ to address it. So what do you hope to accomplish?
I'm sorry I take the time to point out the technical specifics of your dilemma in the hope that you will understand and compensate. But apparently you want to rant about how it doesn't do what you want it to do, in full absence of the reality that it will _not_ change on this list.
Wrong... my argument, as you call it, was perfectly on target with my previous comments.
You have 2 systems. When you setup those 2 systems, you make all sorts of configuration changes to each. This is just 1 additional change.
Export the variable in /etc/rc.d/rc.local and be done with it. Or, better yet, write a proper init script -- especially for portables where network configurations may be dynamic.
In the case of the latter, again, Fedora is working on addressing that at the GUI, and you'll see it in RHEL 5. NetworkManager offers a lot of capabilities that will remove much of the dynamic configuration issues that plague many things beyond just YUM.
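For example, the whole "proper" environment variable approach can be as small as this -- the proxy host and port are placeholders; a drop-in under /etc/profile.d/ gets picked up by login shells, and yum honors http_proxy/ftp_proxy when they are set:

  # /etc/profile.d/proxy.sh -- a minimal sketch
  export http_proxy=http://proxy.example.com:3128/
  export ftp_proxy=http://proxy.example.com:3128/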
On Tue, 2005-11-29 at 12:37, Bryan J. Smith wrote:
That's the point. You don't need to configure every client. Why would anyone want to?
Good configuration management of the network perhaps? ;->
There are places where you might want to hand-configure IP addresses too, but DHCP is a lot handier.
And the more correct alternative that allows yum to work without configuration would be???
FTP -- that's been stated several times now.
How is that a solution? Proxies are used where you don't allow direct outbound access. How do you do ftp without configuring a proxy on every client?
Relating this to another thread on security, it's getting to the point that layer-3/4 firewalls are useless, because _everything_ is getting exploited over HTTP. So you should have a dedicated layer-7 gateway for HTTP that _all_ systems communicate through _explicitly_ by default.
How do you propose this should work without per-box configuration?
Now hold on there! Are you _sure_ about that? It really depends exactly _what_ is being serviced over HTTP. Plenty of HTTP services _break_ when transparently proxied.
OK - ftp breaks when you NAT it too - sometimes.
Yes, right *after* there is universal agreement on how to auto-configure everything that uses http and ftp to use a non-transparent proxy - and the matching code gets added everywhere. Meanwhile things that claim to use http should work the same way as browsers.
Another alternative would continue to be a local mirror.
Of what?
That addresses all of the suggestions we've seen lately -- from Torrent-based updates to the issue of transparent proxies.
Yes, just mirror the whole internet locally - or at least all yummable repositories...
In fact, you just gave "the litmus test." If you have so many systems that adding a proxy line to each of your Linux systems would be a chore, then you have enough systems that you should have a _local_ mirror instead of them all hitting mirror.centos.org.
And all of the fedora repositories, and all the 3rd party add on repositories, and the k12ltsp variations, and the ubuntu/debian apt repositories.
Let alone that's also "the litmus test" that you should have a formal configuration management system in place to automate configuration changes anyway. But don't get me started on that. ;->
It doesn't make sense to cache things unless at least one person uses it. The point of the internet is that you can get the latest when you need it, and the point of a cache is that only one person has to wait.
Just another day on the "bitch about what CentOS can't solve" list.
Yes, CentOS is as much a victim as the other distros on this point.
Les Mikesell lesmikesell@gmail.com wrote:
There are places where you might want to hand-configure IP addresses too, but DHCP is a lot handier.
So what's the difference between configuring your system to use DHCP and configuring your system to use a proxy? I honestly don't get it. @-o
How is that a solution? Proxies are used where you don't allow direct outbound access. How do you do ftp without configuring a proxy on every client?
The question is, why aren't you configuring software for a proxy in the first place? You do it once ... done.
How do you propose this should work without per-box configuration?
Why don't you just configure it at install-time, like everything else? Again, I don't understand how this is different than anything else you configure at install-time.
Furthermore, we're back to the "how do you change anything on all systems when you need to?" question. Don't you have some sort of configuration management of all your Linux systems? Something that can redistribute system changes to all systems?
This has nothing to do with YUM.
OK - ftp breaks when you NAT it too - sometimes.
I'm not talking about just FTP, I'm talking about HTTP too. HTTP can and _does_ break because it's a stream protocol that carries a lot of rich service data over it. Some of those rich service data streams don't take kindly to transparent proxies.
[ As a side note, I mentioned that HTTP-based repositories should use WebDAV services instead. Because WebDAV adds file management to the protocol. ]
Of what?
Of the CentOS repository.
Yes, just mirror the whole internet locally - or at least all yummable repositories...
Of the packages you use, yes. Take some load off the CentOS mirrors if you have enough systems.
And all of the fedora repositories, and all the 3rd party add on repositories, and the k12ltsp variations, and the ubuntu/debian apt repositories.
Yes! Once you have the first sync, it is not much to download a day. In fact, if you're about conserving the bandwidth you use for updates, hell yes! If your point is that you have all those repositories to sync from and that is a "burden," then my counter-point is "Exactly! You're yanking from all those different repositories from _multiple_ systems already -- so why not just do it from _one_?" ;->
When you have a number of systems, there is _no_negative_ to this, other than having the disk space required! APT and YUM repositories are "dumb" FTP/HTTP stores. rsync down and serve. Save your bandwidth and save your headaches.
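As a rough sketch of what "rsync down and serve" means here -- the mirror hostname and module path are placeholders, so check the current list of public mirrors that offer rsync before relying on this:

  # pull the tree once a day, e.g. from cron (placeholders throughout)
  rsync -avz --delete rsync://mirror.example.org/centos/4/ /var/www/html/centos/4/

  # then point each client's CentOS-Base.repo at the local copy, e.g.
  #   baseurl=http://yourserver.example.com/centos/$releasever/os/$basearch/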
It doesn't make sense to cache things unless at least one person uses it.
Now I'm really confused. If you're not using a repository, then do _not_ mirror it. I don't understand that point you just made. Or are you adding yet more unrelated items just to make a point?
The point of the internet is that you can get the latest when you need it, and the point of a cache is that only one person has to wait.
We're talking about software repositories. If you are pulling multiple files from multiple systems, mirror it. These aren't some arbitrary web sites, they are known repositories.
If you have enough systems, you should be doing this anyway -- out of sheer configuration management principles. You don't want people grabbing arbitrary software on a mass number of systems, but only what you allow from your own repositories.
If you don't have a lot of systems, then take the few seconds to add the proxy line during install -- or make it part of your Kickstart post-install script, etc... (whatever you normally do at install-time).
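Something like this in the Kickstart file would do it -- the proxy URL is a placeholder, and it assumes a stock /etc/yum.conf with a [main] section:

  %post
  # add the proxy line right after [main] in yum.conf
  sed -i '/^\[main\]/a proxy=http://proxy.example.com:3128' /etc/yum.conf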
Yes, CentOS is as much a victim as the other distros on this point.
I just don't know what you expect CentOS to solve.
You can always use a transparent proxy if you want internet access, but don't want all ports with direct access outbound...
P.
Peter Farrow peter@farrows.org wrote:
You can always use a transparent proxy if you want internet access, but don't want all ports with direct access outbound...
Okay, step back a bit. I am _not_ asking why people use proxy services. Going back to the SELinux thread, I figured you'all would pick up on the fact that I _deny_all_ outgoing by _default_.
That means there _is_ a proxy server, if not an advanced filtering layer-7 gateway, that users _must_ go through.
I'm just saying that I don't use transparent proxy redirection. In fact, most of the nodes on my network are set up with_out_ a default gateway. That removes a lot of issues. ;->
On Tue, 2005-11-29 at 15:11, Bryan J. Smith wrote:
There are places where you might want to hand-configure IP addresses too, but DHCP is a lot handier.
So what's the difference between configuring your system to use DHCP and configuring your system to use a proxy? I honestly don't get it. @-o
DHCP is the default. There is a difference between configuring and not having to configure. Talk a large number of people through setting up a system over the phone and you'll get it.
How is that a solution? Proxies are used where you don't allow direct outbound access. How do you do ftp without configuring a proxy on every client?
The question is, why aren't you configuring software for a proxy in the first place? You do it once ... done.
Once per install. And then you can't move the box.
Why don't you just configure it at install-time, like everything else? Again, I don't understand how this is different than anything else you configure at install-time.
The difference is that dhcp configures everything else you need.
Furthermore, we're back to the "how to you change anything on all systems when you need to?" Don't you have some sort of configuration management of all your Linux systems? Something that can redistribute system changes to all systems?
This has nothing to do with YUM.
I want my system changes to be what the experts updating whatever distribution happens to be on a box have put in their yum repository, so yes it does have to do with yum.
OK - ftp breaks when you NAT it too - sometimes.
I'm not talking about just FTP, I'm talking about HTTP too. HTTP can and _does_ break because it's a stream protocol that carries a lot of rich service data over it. Some of those rich service data streams don't take kindly to transparent proxies.
Yum doesn't need 'rich' services or it couldn't work over ftp.
Yes, just mirror the whole internet locally - or at least all yummable repositories...
Of the packages you use, yes. Take some load off the CentOS mirrors if you have enough systems.
The point of using yum is that I don't need to know about the packages ahead of time. Running through a proxy cache automatically takes the load off the repository instead of adding to it by sucking copies of apps I don't ever install.
And all of the fedora repositories, and all the 3rd party add on repositories, and the k12ltsp variations, and the ubuntu/debian apt repositories.
Yes! Once you have the first sync, it is not much to download a day. In fact, if you're about conserving the bandwidth you use for updates, hell yes!
How can it conserve bandwidth making copies of updates to programs that aren't installed anywhere?
If your point is that you have all those repositories to sync from and that is a "burden," then my counter-point is "Exactly! You're yanking from all those different repositories from _multiple_ systems already -- so why not just do it from _one_?" ;->
The machines I update are at several different locations so it doesn't make a lot of sense to do them all from the same local mirror.
When you have a number of systems, there is _no_negative_ to this, other than having the disk space required! APT and YUM repositories are "dumb" FTP/HTTP stores. rsync down and serve. Save your bandwidth and save your headaches.
It doesn't make sense to cache things unless at least one person uses it.
Now I'm really confused. If you're not using a repository, then do _not_ mirror it.
There's no difference between repositories and any other ftp/web site in this respect. I don't know ahead of time who is going to want to update what distribution any more than I'd know what other downloads could be mirrored ahead of time.
I don't understand that point you just made. Or are you adding yet more unrelated items just to make a point?
I want to only download things that are actually needed, and only once per location. Caching proxies have gotten that right for ages.
The point of the internet is that you can get the latest when you need it, and the point of a cache is that only one person has to wait.
We're talking about software repositories. If you are pulling multiple files from multiple systems, mirror it. These aren't some arbitrary web sites, they are known repositories.
But they change all the time - and some are huge with only one app being pulled from it.
If you have enough systems, you should be doing this anyway -- out of sheer configuration management principles. You don't want people grabbing arbitrary software on a mass number of systems, but only what you allow from your own repositories.
That would imply that I'd have to test everything first or no one could take advantage of something I didn't permit. I don't scale that well...
Yes, CentOS is as much a victim as the other distros on this point.
I just don't know what you expect CentOS to solve.
Their unique problem is that they are cache-friendly by using RRDNS to distribute load instead of a mirrorlist, and yum is not too bright about retrying alternate addresses on errors. I'm hoping they come up with a way around this that won't screw up caching like fedora does.
Les Mikesell lesmikesell@gmail.com wrote:
DHCP is the default.
And so can this proxy server setting! ;->
In fact, with the newer NetworkManager, you can retrieve the proxy server as a DHCP option, which is set in your DHCP service. But even without it, you can retrieve that option in network startup scripts.
There is a difference between configuring and not having to configure.
You tell your system to use DHCP. Now add a post-install script to add the environment variable, along with any others you normally put in.
Talk a large number of people through setting up a system over the phone and you'll get it.
Why not just send them a Kickstart file instead? Seriously, I do this with people on the phone all the time. Takes up 1/10th of my time and results in 1/10th their frustration.
Once per install. And then you can't move the box.
Who says? Use a network script to get the proxy server as an option from the DHCP server.
The next release of RHEL (and, therefore, CentOS) will offer a GUI for this -- and will address a _lot_ of apps simultaneously.
The difference is that dhcp configures everything else you need.
So does an exported environment variable, "http_proxy".
Furthermore, using the DHCP option for HTTP proxy server, or any arbitrary DHCP option that you have written a script to extract, allows you to take care of it at the DHCP server.
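As a rough sketch of that -- this assumes ISC dhcpd/dhclient; option code 224 is just a site-local code picked for the example, and the hook relies on the Red Hat style dhclient-script sourcing /etc/dhclient-exit-hooks:

  # dhcpd.conf on the DHCP server:
  #   option local-proxy code 224 = text;
  #   option local-proxy "http://proxy.example.com:3128/";
  #
  # /etc/dhclient.conf on the clients:
  #   option local-proxy code 224 = text;
  #   request subnet-mask, routers, domain-name-servers, local-proxy;
  #
  # /etc/dhclient-exit-hooks (sourced by dhclient-script after a lease
  # is obtained); write the value where login shells will pick it up:
  if [ -n "$new_local_proxy" ]; then
      echo "export http_proxy=$new_local_proxy" >  /etc/profile.d/proxy.sh
      echo "export ftp_proxy=$new_local_proxy"  >> /etc/profile.d/proxy.sh
  fi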
I want my system changes to be what the experts updating whatever distribution happens to be on a box have put in their yum repository, so yes it does have to do with yum.
Then how can CentOS help you?
Do you want these solutions? Or do you want to just bitch?
That is the _continuing_ question. ;->
I'm trying to point out how many of us enterprise admins are doing it. I waited a few posts to see how Johnny and others responded, and apparently, it fell on _deaf_ears_!
Since then I've tried to lay out the technical realities and options, as well as "what's in the pipe" for RHEL 5. I don't know what else I can give you beyond that.
In the meanwhile, you've basically told me ...
- You don't understand the value of configuration management
- You don't understand the value of installation management
- You spend a _lot_ of your time doing _manual_ tasks -- from installing, configuring and updating yourself, to telling others how to do it on the phone, wasting your time
Now, will you please _drop_ what CentOS can_not_ help you with? Or at least take some of my recommended solutions to heart in the meantime?!
Yum doesn't need 'rich' services or it couldn't work over ftp.
FTP is more 'rich' when it comes to file management than HTTP.
The HTTP client either needs 'rich' services or makes various 'assumptions' or it couldn't handle file management. FTP has file management, which is why I said it would be _better_ if YUM didn't even use standard HTTP, but used WebDAV HTTP.
The point of using yum is that I don't need to know about the packages ahead of time.
But on a large network, even one that uses a web cache to reduce network traffic, I don't like to arbitrarily push out updates to systems. I like to maintain tighter configuration management.
Most enterprises do.
Running through a proxy cache automatically takes the load off the repository instead of adding to it by sucking copies of apps I don't ever install.
Agreed, there is _some_ savings there. But a proxy cache is not ideal for mirroring. Never has been, never will be.
Furthermore, you can use your YUM cache to feed a local repository _better_ than a proxy cache. This was discussed at length not too long ago with regards to maintaining _coherency_ among systems. I.e., so they all get the _same_ updates.
So not only would you still only download what you need, but you would have the exact same set of updates using this approach. Always tapping a remote Internet repository means things could change between updates of 2 different systems -- even with a proxy server. ;->
The alternative is, again, to maintain a local mirror and institute proper configuration management procedures. I advocate this, but it does not have to be absolute.
How can it conserve bandwidth making copies of updates to programs that aren't installed anywhere?
Then, again, update *1* system, then use its /var/cache/yum to update _all_other_ systems. This was discussed at length not too long ago as an "alternative" to maintaining a mirror.
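Roughly, the cache-feeding approach looks like this -- box names are placeholders, and it assumes the first box keeps what it downloads in /var/cache/yum (older yum keeps packages there by default; newer versions have a keepcache option in [main]):

  # on the first box: update and test
  yum update

  # then seed the other machines' caches before they run the same update
  rsync -a /var/cache/yum/ box2:/var/cache/yum/
  ssh box2 yum update   # packages already in the cache aren't re-downloaded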
If you aren't seeing a pattern, for every new argument you throw my way, I'm not only presenting a solution, but a solution that has _already_ been discussed on this list.
[ Now more than ever, I need to get that ELManagers FAQ out. ;-]
Please stop just "bitching" and actually look at what solutions _are_ available to you! I call it "bitching" because CentOS can_not_ provide you the solution you want. ;->
The machines I update are at several different locations so it doesn't make a lot of sense to do them all from the same local mirror.
Then use the tools to redistribute one system's /var/cache/yum. If you don't want to search the archives, I'll do it for you.
There's no difference between repositories and any other ftp/web site in this respect. I don't know ahead of time who is going to want to update what distribution any more than I'd know what other downloads could be mirrored ahead of time.
Then have one system do the update, and then feed the APT or YUM cache from that system to others. It's the best way to maintain a consistent network without setting up a local mirror.
Now if you're allowing users to arbitrarily do what they want to their systems, then I guess that feeds back into "manual tasks." I guess you consult and are paid by the hour? ;->
I want to only download things that are actually needed,
Then use the cache feeder option.
and only once per location. Caching proxies have gotten that right for ages.
But not very well for large files, and especially _not_ over HTTP. Otherwise, we wouldn't have needed WebDAV extensions to HTTP, let alone DeltaV extensions to WebDAV ... and so forth. ;->
One of my main areas of expertise, regularly tapped in the financial industry, was secure file transfers. I _avoided_ HTTP-based transfers for a reason: they were _unreliable_.
That would imply that I'd have to test everything first or no one could take advantage of something I didn't permit. I don't scale that well...
Well, how many systems are we talking about?
And, again, how are you sending configuration updates to your systems?
Their unique problem is that they are cache-friendly by using RRDNS to distribute load instead of a mirrorlist, and yum is not too bright about retrying alternate
addresses
on errors.
And CentOS can help you ... how?
I'm hoping they come up with a way around this that won't screw up caching like fedora does.
And CentOS can help you ... how?
I've presented all the technical solutions I can. It's what enterprise admins do all-the-time. If you want to continue to "bitch" about things that can't be addressed, well, I'm having no more part of it.
On Tue, 2005-11-29 at 10:14, Johnny Hughes wrote:
It isn't proxy servers in general, just some implementations that don't work well.
I am using squid on my network and I get updates through it for 10 machines with no problems.
A transparent proxy is one where ports are routed behind the scenes and there is no setup file.
Squid is frequently used as a transparent proxy by adding an iptables rule on the machine acting as the default gateway to redirect all outbound port 80 packets to port 3128 on the squid host - or localhost if squid runs on the same box. I don't have one set up now to see if there is a problem with squid in that configuration. If you know the proxy address/port, you can work around easily with the shell's command line export of variables to a program:

  http_proxy=proxy.domain:port ftp_proxy=proxy.domain:port yum update
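For reference, the redirect rule described above looks roughly like this when squid runs on the gateway box itself (the interface name is a placeholder; a squid on a separate host would need a DNAT rule instead, and squid itself also needs its transparent/httpd_accel options turned on):

  iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
      -j REDIRECT --to-port 3128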
Les Mikesell wrote:
A proxy can also be specified in the yum.conf file's [main] section. The relevant keywords:

  proxy           - URL of the proxy server that yum should use
  proxy_username  - username to use for the proxy
  proxy_password  - password for this proxy
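For example (host, username and password are placeholders):

  [main]
  proxy=http://proxy.example.com:3128
  proxy_username=someuser
  proxy_password=somepass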