Les Mikesell <lesmikesell at gmail.com> wrote:
> Please quantify easy.  Will you do it for me every time it
> needs to be done?  Today I'd have a use for at least 6
> variations, although I guess you'd double that with the
> suggested overlap of testing/staging instances.

So, in other words, you want a service that provides custom
tagging, revisioning and/or date-based retrieval.  That includes
your wish for a dynamic delta'ing repository and real-time RPM
generation.  Again, I don't think you understand what delta'ing
systems like CVS do compared to just "HTTP-accessible" repository
trees.  A world of difference!

> With a little thought about the process,

Again, I don't think you understand how delta'ing systems like CVS
work, and the massive mindshift in the "back-end" that is required.
It's no "little thought about the process."

> yum updates could be made to be repeatable without extra
> work, network traffic or any other overhead.

That's utter BS!  Delta'ing is the _worst_ overhead!  It works fine
for textual data of a few MBs, but when you start rebuilding tens
of MBs, you're going to kill your server after just a few clients!

> I just don't see why this is not considered desirable.

I _never_ said it wasn't desirable.  I just said it is _not_
feasible.

I have maintained version control systems for _large_ engineering
components in my time -- everything from models to IC schematics.
No matter how much you "break down" the files into smaller files,
you still push a _lot_ of data around.  And that means I either
have a massive Sun (and now Opteron) box that does major I/O with a
massive amount of memory for "server-side" delta'ing
assembly/disassembly, or I have the same for NFS performance for
"client-side" delta'ing assembly/disassembly.

And that's _before_ we even get to the "added delay" that users
will see using YUM or whatever client-side tool.  It will take
significantly longer to resolve things -- regardless of who does
it.  Client-side resolution will be a crapload slower over the
Internet than server-side, which means these "servers" will need
to be "intelligent" and take a crapload more load just for the
resolution (even before we get to the actual package
delta'ing/services) than just "dumbly" serving files up via HTTP.

Sorry, I've just built too many TB-sized engineering revision
control repositories to listen to this thread any longer.
Revision control drastically increases the load over just serving
files whole.  That's tolerable on small text files, but intolerable
on larger binaries (no matter how finely you break things down) --
especially when tens of clients are hitting the server.

-- 
Bryan J. Smith                | Sent from Yahoo Mail
mailto:b.j.smith at ieee.org  | (please excuse any
http://thebs413.blogspot.com/ |  missing headers)
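
For the curious, here is a rough, illustrative sketch of the
asymmetry described above (Python; the file sizes, chain depth, and
function names are all made up for illustration, and the deltas are
modeled as trivial (offset, replacement) patches rather than any
real delta format like CVS's RCS deltas or xdelta).  The point it
demonstrates: a delta'ing server must rebuild the requested revision
for every client request, while a "dumb" HTTP server makes one pass
over a stored file.

    import time

    def apply_delta(base, delta):
        # Rebuild a revision by applying (offset, replacement)
        # patches to a base.  Each application copies the whole file.
        buf = bytearray(base)
        for offset, data in delta:
            buf[offset:offset + len(data)] = data
        return bytes(buf)

    def serve_from_deltas(base, chain):
        # A delta'ing server rebuilds the requested revision for
        # EVERY client request: one full-file copy per delta in the
        # chain, all in server CPU and RAM.
        rev = base
        for delta in chain:
            rev = apply_delta(rev, delta)
        return rev

    def serve_whole(revision):
        # A "dumb" HTTP server just streams the stored file: a
        # single pass over the bytes, no reconstruction work.
        sent = 0
        for i in range(0, len(revision), 64 * 1024):
            sent += len(revision[i:i + 64 * 1024])  # pretend-send
        return sent

    if __name__ == "__main__":
        base = bytes(20 * 1024 * 1024)            # a 20 MB payload
        chain = [[(i * 4096, b"x" * 4096)]        # 30 revisions of
                 for i in range(30)]              # small edits

        t0 = time.perf_counter()
        for _ in range(5):                        # 5 client requests
            serve_from_deltas(base, chain)
        print("delta rebuild x5: %.2fs" % (time.perf_counter() - t0))

        full = serve_from_deltas(base, chain)     # assembled once
        t0 = time.perf_counter()
        for _ in range(5):
            serve_whole(full)
        print("whole-file    x5: %.2fs" % (time.perf_counter() - t0))

Even this toy version spends its time in pure memory copying on the
delta path (chain length x file size, per request), while the
whole-file path does a single pass that a real server would satisfy
straight from the page cache -- which is the load difference the
post is arguing about.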