[CentOS] Why is yum not liked by some?
Bryan J. Smith
b.j.smith at ieee.org
Fri Sep 9 19:41:44 UTC 2005
Les Mikesell <lesmikesell at gmail.com> wrote:
> it would be trivial for a client to decide whether it is
> more efficient to apply a delta to an existing cached or
> locally available version or pull the latest.
But how does the "appropriate delta" get built?
At the server! That's more server overhead, let alone the
"service" that the client uses to query.
Now let's say you just have the client download the _entire_
delta for the RPM to avoid all that extra server overhead.
Now you're actually _increasing_ the amount you download.
There's no way to do it _remotely_, over the Internet, without
adding a lot of load -- either in the total amount of transfer,
or in the amount of overhead in the service. You're no
longer offering a simple HTTP-serviced tree, but an
intelligent service on the server that requires a lot more
processing.
> It would take more storage, but wouldn't break anything
> already working
You really don't know what a YUM repository is, do you?
It's a web site.
Not only that, the YUM client only accesses the meta-data on
the web site for resolution, _not_ the actual RPMs.
That's the problem.
To do otherwise requires far more bandwidth.
Or requires the server to have an "intelligent" service, and
not just a "dumb" web site.
> Likewise, how does the style used in rdiff-backup compare?
rdiff-backup does 1 delta, against 2 files.
A delta versioning system requires you to _ripple_ through
every delta, _rebuilding_ every change each time.
That either requires a beefy server to do the rebuilding,
or a massive amount of bandwidth so the client can download
all the deltas and do it itself.
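To make the "ripple" concrete, here's a rough Python sketch (format and names entirely hypothetical -- real delta formats like rdiff's differ) of what either the server or the client has to do to rebuild the current file from a base plus a chain of stored deltas:

```python
# Hypothetical sketch of "rippling" through a delta chain.
# Each delta is a list of (offset, old_len, new_bytes) edits,
# sorted by offset, expressed against the *previous* version.

def apply_delta(data: bytes, delta) -> bytes:
    """Apply one delta to produce the next version."""
    out = bytearray()
    pos = 0
    for offset, old_len, new_bytes in delta:
        out += data[pos:offset]   # copy the unchanged region
        out += new_bytes          # substitute the changed bytes
        pos = offset + old_len    # skip past the replaced region
    out += data[pos:]             # copy the tail
    return bytes(out)

def ripple(base: bytes, deltas) -> bytes:
    # Every intermediate version must be materialized in turn --
    # 5 updates since the base means 5 full rebuild passes.
    version = base
    for delta in deltas:
        version = apply_delta(version, delta)
    return version

# Example: "v1" -> "v2" -> "v3" via two single-edit deltas.
v1 = b"version-1 of the package payload"
d1 = [(8, 1, b"2")]   # change the "1" to "2"
d2 = [(8, 1, b"3")]   # change the "2" to "3" (against v2!)
assert ripple(v1, [d1, d2]) == b"version-3 of the package payload"
```

Note that d2 only makes sense against v2 -- which is exactly why you can't skip steps in the chain, and why every client (or the server, on every request) pays for the whole rebuild.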
> It claims to be similar to rsync which has proven very
> efficient in being able to transmit the differences between
> two files.
rsync does *1* delta.
Furthermore, it's not as efficient as a straight HTTP stream
when it comes to the server.
> With rdiff the server side work only has to be done once.
Not for rippling through multiple deltas, which is what
versioned files are. Several of you seem to forget that.
Let alone the repository "service" will need to temporarily
house what you need, or handle multiple accesses for each
"rippled" delta the client then re-assembles.
You're talking a lot more overhead than just an HTTP access.
> You really only need to create a delta once per RPM version
> update creation.
Yes, per RPM version update creation. You're talking about
the "disassembly" of the "check-in." That's cake!
But what do clients need? Re-assembly on the check-out! So
if there have been 5 version updates since, then _all_5_
deltas will need to be "rippled through."
Again, at what point do people realize that YUM is just HTTP
access, and that this adds a lot more? As someone who has
maintained engineering part files, semiconductor layouts,
etc... in various revision control systems -- I can tell you
this is a _lot_ of overhead versus just file access like via
HTTP, NFS, etc...
You can't service more than a few clients locally, much less
the nightmare of an Internet server where operations are much
slower, temporary files must be created, etc...
> Then you need to store the delta in addition to the full
> versions, so there is more disk storage needed for
> this approach.
So why not just store the _whole_ versions?
That's my point.
Maybe I can make a better analogy ... backup servers.
It's one thing to have a full backup and a few incrementals
and rebuild them for a restore. It's a completely different
thing to have tens of clients wanting the same!
> They could still do that. The only overhead added would be
> for the storage of the deltas and the traffic of the client
> checking the sizes.
I give up.
> You would trade that off against the
> network traffic saved when the client chooses the smaller
> delta. But, for this to work you need an on-line local
> cache of the base rpms.
Hence why you should have a local repository!
Isn't that what you were arguing against?!?!?!
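To be fair, the client-side _decision_ itself is trivial -- something like this rough sketch (all names hypothetical; it assumes you already know both sizes and whether the cached base RPM exists locally, which is the part that actually costs something):

```python
# Hypothetical sketch of the client-side choice being described:
# take the delta only when a local base exists AND the delta is
# actually smaller than the full RPM.

def choose_download(full_size: int, delta_size: int,
                    have_local_base: bool) -> str:
    """Return 'full' or 'delta' -- whichever the client should fetch."""
    if not have_local_base:
        return "full"    # no cached base RPM: a delta is useless
    return "delta" if delta_size < full_size else "full"

# With a cached base and a small delta, the delta wins.
assert choose_download(5_000_000, 200_000, True) == "delta"
# Without the cached base, it *must* be the full RPM.
assert choose_download(5_000_000, 200_000, False) == "full"
```

The logic is four lines; the server-side delta generation, storage, and the on-line cache of base RPMs it depends on are where all the overhead lives.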
> Yum saves one for a while for the
> updates but I doubt if enough people would set up the local
> cache of the base files to make this approach work unless
> that step is automated during the OS install.
Wait, you're actually making sense now!
Bryan J. Smith | Sent from Yahoo Mail
mailto:b.j.smith at ieee.org | (please excuse any
http://thebs413.blogspot.com/ | missing headers)