From: Les Mikesell <lesmikesell at gmail.com>

> No, I am looking for a solution that provides what a typical user needs,
> not what a particular vendor feels like supporting this week. I didn't
> really want this to be about motives for vendors' business decisions, but
> I think Johnny Hughes nailed it in saying the push for 2.6 was because
> SLES 9 had it. Their decision wasn't stupid, but my point is that it
> wasn't the best thing for some number of their old users.

At what point does Red Hat hold off on adopting Linux 2.6? Or GCC 3? Or
glibc 2.0? As I said, there is a chicken-and-egg issue between "waiting
for something to be well-tested with many packages" and "adopting
something so it becomes well-tested with many packages." This existed
back in the Red Hat Linux timeframe, and it still exists today. That's
why RHEL is essentially based on the ".1" or ".2" release. It is then
kept fairly _static_ over its lifetime for SLA reasons.

> However, I also support a number of general-purpose servers and some
> desktops where a larger mix of applications needs to be integrated
> along with a variety of oddball hardware that has, over the years,
> been connected and supported one way or another. In this scenario
> you are very likely to want to upgrade a single application and
> find that you can't do it RedHat's-way(tm) because the bundled
> upgrade will break something else that you need to keep running.

And other "packages" distros don't have this problem? If you find Fedora
Core plus repositories doesn't solve what you need either, someone else
already pointed out that a "ports" distro, like Gentoo, might be more of
what you are looking for -- especially if you're rolling out lots of
"bleeding edge" systems.

BTW, just so everyone knows: I support Debian, Fedora, Gentoo, NLD, RHEL,
SuSE and SLES as a paid consultant. And I've been a maintainer in several
projects, so I'm not just a "Red Hat-only lackey" as many people believe.
In fact, I am a software engineer at many clients (I just get stuck with
the IT / configuration management because I seem to know how to do this
with Linux better than the others around me). So I'm very partial to
using "ports" distributions like Gentoo and BSD. ;->

> Explaining the reason the vendor made the decision that doesn't
> work in my case isn't particularly helpful.

Understanding why allows you to either:

A) See what you can do to get the vendor to change, _or_
B) Better recognize what you can do to accommodate the vendor

I said it before and I'll say it again: one of the reasons I am very much
sought after -- almost always by personal reference -- is that I can come
in, look at someone's business/engineering model, and then architect a
solution that maximizes the application of a product while mitigating the
risk of any product changes. Unless you understand how and why the vendor
is doing something, you can't do either "A" or "B" and customize the
solution one way or another. The last thing my clients want to hear from
me is "damn, Microsoft/Red Hat/etc. broke your network."

Furthermore, when architecting a solution, I can also give the client
options and say, "if you go with RHEL, you're going to run into the
recurring problem of X, Y and Z." In most cases, that helps me eliminate
Microsoft from client consideration early on, but in some cases I
actually end up rolling out a completely _non-Red Hat_ solution.

> You might as well try to explain to me why upgrading or installing
> certain MS products requires moving the whole enterprise to Active
> Directory first.

That's different, although the concepts are similar. There is no vendor
that can provide a solution that never needs to be accommodated at some
point. Fortunately, we don't have the "on-a-whim" changes in the RHEL
space like we do in Windows -- _primarily_ _because_ Red Hat _never_
mixes in "new features" mid-release. But configuration management is
_always_ an issue.
You are _never_ going to eliminate much of it, even in the Linux space.
It is a necessary evil that you must do -- and how much will vary with
the focus of the distro. In fact, not doing configuration management in
Linux roll-outs -- and thereby, essentially, _falsifying_ the savings of
Linux -- is the absolute #1 reason why Linux projects fail.

Having a background in defense, I liken this to NASA's COTS (commercial
off-the-shelf) adoption in the '90s. After Sojourner (Mars Pathfinder)
proved that GNU/VxWorks was a viable platform that cost 1/10th of custom
software/hardware development, NASA's next-gen probes started adopting
the same platforms/software as Pathfinder. The problem on these later
projects is that the 90% savings was no longer applied to just
software/hardware, but to _all_ aspects of the project's budget. So
unlike Sojourner, the QA (quality assurance) budget was cut. So when Mars
Climate Orbiter burned up because one team's software reported thruster
impulse in pound-force seconds while the other team's software assumed
newton-seconds -- because there was no QA between two engineering teams
at two different companies several states away from each other -- I was
not surprised one bit.

Configuration management is quality assurance. You must do it, and there
will _never_ be a vendor, distro or other "off-the-shelf" solution that
will allow you to avoid doing it -- because a solution is _never_
available that is customized for you and regression tested to work
without issue. That's just reality.

> I'm not really interested in playing a 'vendor lock-in' game.

Okay, this is the typical spiral I see people go through. They start
talking about "vendor lock-in" with regard to Red Hat, applying their
experiences with Microsoft. Sorry, this is _exactly_ the "demonization"
I'm talking about. You are comparing the world's greatest "we lack even
proprietary standards" company to Red Hat, the absolute #1 pro-GPL
commercial company. So there's _no_ sense in my responding further.

-- 
Bryan J. Smith   mailto:b.j.smith at ieee.org