Lamar Owen wrote:
> On Friday 09 September 2005 14:18, Bryan J. Smith wrote:
>
>> I don't think you're realizing what you're suggesting.
>
> Yes, I do. I've suggested something like this before, and there has
> been some work on it (see Fedora lists archives from nearly a year or
> more ago).
>
>> Who is going to handle the load of the delta assembly?
>
> The update generation process. Instead of building just an RPM, the
> buildsystem builds the delta package to push to the package server.

I recall you mentioning something which caused me to think you meant this, and I pointed out to you that this requires a degree of co-operation between developers which is difficult to maintain within a single organization, such as a corporation, let alone among developers who have nothing to do with each other beyond the fact that they happen to produce software which runs on the same platform.

>> But the on-line, real-time, end-user assembly during "check-out" is
>> going to turn even a high-end server into a big-@$$ door-stop
>> (because it's not able to do much else) with just a few users
>> checking things out!
>
> Do benchmarks on a working system of this type, then come back to me
> about the unbearable server load.

Yes, if everything were managed as a single total package, it would be fast and easy to check out from it. But it isn't, and doing so under the present circumstances is infeasible, for the reasons I pointed out before.

>> Do you understand this?
>
> Do you understand how annoyingly arrogant you sound? I am not a
> child, Bryan.

You aren't a child, but you are naive. You seem intelligent, but (I hope you won't get offended, as I mean no offence) you are very ignorant. Remember, ignorance, like epoxy, can be cured. There is no cure for stupidity. And I don't think that you are stupid.

When I first got involved with the kinds of things you want to try, I also thought that the problems would be trivial. I found out, and very quickly, that they were not.

>> Not true!
>> Not true at all! You're talking GBs of transactions _per_user_.
>
> I fail to see how a small update of a few files (none of which
> approach 1GB in size!) can produce multiple GBs of transactions per
> user. You seem not to understand how simple this system could be, nor
> do you seem willing even to try to understand it past your own
> preconceived notions.

They wouldn't, at checkout, if everything were managed as a single, released package. But as I pointed out, that is difficult with independently developed, web-distributed open-source programs.

>> You're going to introduce:
>> - Massive overhead
>
> In your opinion.
>
>> - Greatly increased "resolution time" (even before considering the
>>   server responsiveness)
>> - Many other issues that will make it "unusable" from the standpoint
>>   of end-users
>
> All in your opinion.

Not just in his opinion. That work must be done in order to accomplish what you want. It can be done at check-out time (as Bryan is describing) or it can be done at build time (as you are suggesting). You fail to see that it is infeasible at build time, because there is no such thing as a single object with an associated state; there is just a web site and a glorified wget to pull it. And, as Bryan points out, it is infeasible at check-out time, because enormous file diffs would have to be done OVER THE WEB.

In order to do it at build time (as you suggest), one would need a QA team managing release points of a whole package which contained everything being released to Linux. And that isn't going to happen when everyone in the world is a potential developer of variants of open-source programs.

[snip]

> Have you even bothered to analyze this in an orderly fashion, instead
> of flying off the handle like Chicken Little? Calm down, Bryan.

I agree that Bryan is a little agitated. Frankly, I'm finding it a little difficult to retain complete equanimity myself.
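To make the build-time-versus-check-out-time distinction concrete, here is my own minimal sketch (not anything a real buildsystem uses; `make_delta` and `apply_delta` are names I invented, and stdlib difflib stands in for a real binary-diff tool such as bsdiff). The point is that the expensive diff runs once, when the update is built; each client only replays the opcodes against the bytes it already has:

```python
# Sketch of build-time delta generation: diff old and new payloads ONCE
# at build time, publish only the opcodes plus the inserted bytes.
import difflib
import pickle

def make_delta(old: bytes, new: bytes) -> bytes:
    """Compute a delta that turns `old` into `new` (build-time, run once)."""
    sm = difflib.SequenceMatcher(None, old, new)
    ops = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == 'equal':
            ops.append(('copy', i1, i2))      # reuse bytes the client has
        else:
            ops.append(('data', new[j1:j2]))  # ship only the changed bytes
    return pickle.dumps(ops)

def apply_delta(old: bytes, delta: bytes) -> bytes:
    """Reconstruct the new payload from the old one plus the delta (client side)."""
    out = []
    for op in pickle.loads(delta):
        if op[0] == 'copy':
            out.append(old[op[1]:op[2]])
        else:
            out.append(op[1])
    return b''.join(out)

old = b'example payload, version 1.0'
new = b'example payload, version 1.1 (patched)'
delta = make_delta(old, new)
assert apply_delta(old, delta) == new
```

Doing this per-user at check-out time, as the quoted exchange notes, multiplies that one diff by the number of clients; doing it at build time requires a coherent, versioned release to diff against, which is exactly what the thread argues does not exist.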
It seems that you haven't maintained a repository, yet you insist that you know more than those who have, and then you complain that others are being arrogant. Isn't that getting a little close to the pot calling the kettle black?

[snip]

> Oh, just get off the arrogance here, please. You are not the only
> genius out here, and you don't have anything to prove with me. I am
> not impressed with resumes, or even with an IEEE e-mail address. Good
> attitude beats brilliance any day of the week.

I'd rather have an arrogant, competent bastard running my repositories than a nice, well-mannered incompetent, any day. I speak from experience, having been in both circumstances.

[snip]

>>> Store prebuilt headers if needed.
>>
>> As far as I'm concerned, that's the _only_ thing you should _ever_
>> delta. I don't relish the idea of a repository of delta'd cpio
>> archives. It's just ludicrous to me -- and even more so over the
>> Internet.
>
> So you think I'm stupid for suggesting it. (That's how it comes
> across.) Ok, I can deal with that.
>
> *PLONK*

Ah, so you kill-filed one of your best hopes of actually coming to grips with the subject you're discussing.

I really, really suggest that you build your own repository and use it to pull from. That way, you'll have repeatable (perhaps not consistent) updates using yum. And you'll also learn a little of why what you want to do is not as easy as it seems to you today.

Using yum to manage staged releases makes as much sense as using Internet Explorer to do so. Unfortunately, it's difficult to explain why to someone who has no experience with it. It just isn't the tool for the job. It's just a transport and install mechanism, with some control files to tell it what to pull. It's nice and all, and that's no criticism of the developers of yum. It just isn't designed, intended, or able to do what you want. It's a wonderful tool for what it *is* designed to do, which is manage installs.
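To make the "transport plus control files" point concrete, this is a sketch of the kind of control file yum reads from /etc/yum.repos.d/ (the repository id, name, paths, and URL here are hypothetical examples of mine, not from any real repository). Note that it only says where to pull from and whether to verify signatures; there is nothing in it about staging, release points, or QA:

```ini
# Hypothetical yum repository control file, e.g. local-staging.repo
[local-staging]
name=Local staging repository
baseurl=file:///srv/repo/staging
enabled=1
gpgcheck=1
gpgkey=file:///srv/repo/RPM-GPG-KEY-local
```

Everything about *what* lands in that baseurl directory, and *when*, has to be managed by something outside yum, which is precisely the release-management work being argued about above.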
Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!