[CentOS] Yum / Up2date issues and mirror.centos.org

Bryan J. Smith thebs413 at earthlink.net
Tue Nov 29 22:51:20 UTC 2005


Les Mikesell <lesmikesell at gmail.com> wrote:
> DHCP is the default.

And this proxy server setting can be just as automatic!  ;->

In fact, with the newer NetworkManager, you can retrieve the
proxy server as a DHCP option, which is set in your DHCP
service.  But even without it, you can retrieve that option
in network startup scripts.
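
Something like this on the client side -- a minimal sketch,
assuming ISC dhclient (the stock client on RHEL/CentOS).
Option code 252 is the de-facto site-local "proxy/WPAD" slot,
but the code and the name "proxy-url" are my own choices here,
not a standard:

    # /etc/dhclient.conf -- teach dhclient about the custom
    # option and ask for it with every lease
    option proxy-url code 252 = text;
    request subnet-mask, broadcast-address, routers,
            domain-name, domain-name-servers, proxy-url;

    # /etc/dhclient-exit-hooks -- sourced by dhclient-script
    # after each lease event; hyphens in option names become
    # underscores in the shell variables
    if [ -n "$new_proxy_url" ]; then
        echo "export http_proxy=$new_proxy_url" \
            > /etc/profile.d/proxy.sh
    fi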

> There is a difference between configuring and not having to
> configure.

You tell your system to use DHCP.  Now add a post-install
script that exports the environment variable, along with any
others you normally put in.
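
As a Kickstart %post section, that step is a few lines (the
proxy host and port here are placeholders):

    %post
    # %post runs chrooted into the freshly installed system;
    # drop the proxy into a profile script so every login
    # shell inherits it
    cat > /etc/profile.d/proxy.sh <<'EOF'
    export http_proxy=http://proxy.example.com:3128/
    export ftp_proxy=http://proxy.example.com:3128/
    EOF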

> Talk a large number of people through setting up a system
> over the phone and you'll get it.

Why not just send them a Kickstart file instead?  Seriously,
I do this with people on the phone all the time.  It takes
1/10th of my time and results in 1/10th of their frustration.
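
For reference, a bare-bones Kickstart file looks something
like this (the mirror URL, password, and timezone are
placeholders, so adjust to taste):

    # ks.cfg
    install
    url --url http://mirror.example.com/centos/4/os/i386/
    lang en_US.UTF-8
    langsupport --default=en_US.UTF-8 en_US.UTF-8
    keyboard us
    network --bootproto dhcp
    rootpw changeme
    timezone --utc America/New_York
    bootloader --location=mbr
    clearpart --all --initlabel
    part /boot --fstype ext3 --size 100
    part swap --size 512
    part / --fstype ext3 --size 1 --grow

    %packages
    @ Base

Boot the installer with "linux ks=http://server/ks.cfg" and
walk away.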

> Once per install. And then you can't move the box.

Who says?  Use a network script to get the proxy server as an
option from the DHCP server.

The next release of RHEL (and, therefore, CentOS) will offer
a GUI for this -- and will address a _lot_ of apps
simultaneously.

> The difference is that dhcp configures everything else 
> you need.

So does an exported "http_proxy" environment variable.

Furthermore, using the DHCP option for HTTP proxy server, or
any arbitrary DHCP option that you have written a script to
extract, allows you to take care of it at the DHCP server.
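
The server side is a minimal sketch too, assuming ISC dhcpd;
the option declaration has to match the one on the clients
(the subnet numbers and URL are, again, placeholders):

    # /etc/dhcpd.conf -- define the custom option once, then
    # hand the same proxy to every client on the subnet
    option proxy-url code 252 = text;

    subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.200;
        option routers 192.168.1.1;
        option proxy-url "http://proxy.example.com:3128/";
    }

Change the URL in one place and every client picks it up at
its next lease renewal.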

> I want my system changes to be what the experts updating
> whatever distribution happens to be on a box have put
> in their yum repository, so yes it does have to do with
> yum.

Then how can CentOS help you?

Do you want these solutions?
Or do you want to just bitch?

That is the _continuing_ question.  ;->

I'm trying to point out how many of us enterprise admins are
doing it.  I waited a few posts to see how Johnny and others
responded, and apparently, it fell on _deaf_ears_!

Since then I've tried to lay out the technical realities and
options, as well as "what's in the pipe" for RHEL 5.  I don't
know what else I can give you.

In the meanwhile, you've basically told me ...
- You don't understand the value of configuration management
- You don't understand the value of installation management
- You spend a _lot_ of your time doing _manual_ tasks -- from
installing, configuring, and updating systems yourself to
talking others through it on the phone, wasting your time

Now, will you please _drop_ what CentOS can_not_ help you
with?  Or at least take some of my recommended solutions to
heart in the meantime?!

> Yum doesn't need 'rich' services or it couldn't work over
> ftp.

FTP is more 'rich' when it comes to file management than
HTTP.

An HTTP client either needs 'rich' server-side services or
has to make various 'assumptions', or it couldn't handle file
management at all.  FTP has file management built in, which
is why I said it would be _better_ if YUM didn't use standard
HTTP at all, but WebDAV HTTP instead.

> The point of using yum is that I don't need to know about
> the packages ahead of time.

But on a large network, even one that uses a web cache to
reduce network traffic, I don't like to arbitrarily push out
updates to systems.  I like to maintain tighter configuration
management.

Most enterprises do.

> Running through a proxy cache automatically takes the load
> off the repository instead of adding to it by sucking
> copies of apps I don't ever install.

Agreed, there is _some_ savings there.  But a proxy cache is
not ideal for mirroring.  Never has been, never will be.

Furthermore, you can use your YUM cache to feed a local
repository _better_ than a proxy cache can.  This was
discussed at length not too long ago with regard to
maintaining _coherency_ among systems -- i.e., so they all
get the _same_ updates.
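
A minimal sketch of that approach, assuming yum's keepcache
option and the stock Apache document root (the paths and
repository name are my own choices):

    # /etc/yum.conf on the one "feeder" box -- hang on to
    # every RPM that yum downloads
    [main]
    keepcache=1

    # after "yum update" on the feeder, publish its cache as
    # a repository for the rest of the network
    mkdir -p /var/www/html/updates
    cp /var/cache/yum/*/packages/*.rpm /var/www/html/updates/
    createrepo /var/www/html/updates/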

So not only would you still only download what you need, but
every system would get the exact same set of updates with
this approach.  Always tapping a remote Internet repository
means the content can change between the updates of two
different systems -- even with a proxy server.  ;->

The alternative is, again, to maintain a local mirror and
institute proper configuration management procedures.  I
advocate this, but it does not have to be absolute.

> How can it conserve bandwidth making copies of updates to
> programs that aren't installed anywhere?

Then, again, update *1* system, then use its /var/cache/yum
to update _all_other_ systems.  This was discussed at length
not too long ago as an "alternative" to maintaining a mirror.
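
On every other box, point yum back at that feeder -- a sketch
where the repo id, name, and hostname are placeholders:

    # /etc/yum.repos.d/local-updates.repo
    [local-updates]
    name=Updates staged from the feeder box
    baseurl=http://feeder.example.com/updates/
    enabled=1
    gpgcheck=1

Every box then installs the exact bits the feeder was updated
with -- no more, no less.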

In case you aren't seeing the pattern: for every new argument
you throw my way, I'm not only presenting a solution, but a
solution that has _already_ been discussed on this list.

[ Now more than ever, I need to get that ELManagers FAQ out. 
;-]

Please stop just "bitching" and actually look at what
solutions _are_ available to you!  I call it "bitching"
because CentOS can_not_ provide you the solution you want. 
;->

> The machines I update are at several different locations so
> it doesn't make a lot of sense to do them all from the same
> local mirror.

Then use the tools to redistribute one system's
/var/cache/yum.  If you don't want to search the archives,
I'll do it for you.
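
Since I brought it up, a sketch of the redistribution itself
-- rsync over ssh from the feeder to one staging box per
location (the hostnames are placeholders):

    # push the staged repository to each remote site; each
    # site's boxes then point their yum at the local staging
    # box instead of the Internet
    for site in site1.example.com site2.example.com; do
        rsync -av --delete /var/www/html/updates/ \
            root@$site:/var/www/html/updates/
    done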

> There's no difference between repositories and any other
> ftp/web site in this respect.  I don't know ahead of time
> who is going to want to update what distribution any more
> than I'd know what other downloads could be mirrored ahead
> of time.

Then have one system do the update, and then feed the APT or
YUM cache from that system to others.  It's the best way to
maintain a consistent network without setting up a local
mirror.

Now if you're allowing users to arbitrarily do what they want
to their systems, then I guess that feeds back into "manual
tasks."  I guess you consult and are paid by the hour?  ;->

> I want to only download things that are actually needed,

Then use the cache-feeder approach described above.

> and only once per location.  Caching proxies have gotten
> that right for ages.

But not very well for large files, and especially _not_ over
HTTP.  Otherwise, we wouldn't have needed WebDAV extensions
to HTTP, let alone DeltaV extensions to WebDAV ... and so
forth.  ;->

One of my main areas of expertise, regularly tapped in the
financial industry, was secure file transfer.  I _avoided_
HTTP-based transfers for a reason: they were _unreliable_.

> That would imply that I'd have to test everything first or
> no one could take advantage of something I didn't permit. I
> don't scale that well...

Well, how many systems are we talking about?

And, again, how are you sending configuration updates to your
systems?

> Their unique problem is that they are cache-friendly by
> using RRDNS to distribute load instead of a mirrorlist,
> and yum is not too bright about retrying alternate
> addresses on errors.

And CentOS can help you ... how?

> I'm hoping they come up with a way around this that won't
> screw up caching like fedora does.

And CentOS can help you ... how?

I've presented all the technical solutions I can.  It's what
enterprise admins do all-the-time.  If you want to continue
to "bitch" about things that can't be addressed, well, I'm
having no more part of it.



-- 
Bryan J. Smith                | Sent from Yahoo Mail
mailto:b.j.smith at ieee.org     |  (please excuse any
http://thebs413.blogspot.com/ |   missing headers)


