Even I've left this thread. I guess we're all waiting for Lee to turn Blue. ;-> Or is it Red (Hat)? ;->
Okay Lee, we all agree, Red Hat makes stupid decisions, adopts buggy software - especially the kernel and Red Hat is to blame for the decisions in the kernel, and also stupidly backports fixes instead of adopting newer versions with the fixes. And there is absolutely no need for Red Hat to do so.
Happy?
-----Original Message----- From: Les Mikesell Date: 05-5-28 0:55 To: CentOS mailing list Subj: Re: [CentOS] Re: Demonizing generic Linux issues as Fedora Core-only issues -- WAS: Hi, Bryan
On Fri, 2005-05-27 at 19:13, Lamar Owen wrote:
On Thursday 26 May 2005 14:17, Les Mikesell wrote:
If you believe that, you have to believe that Red Hat's programmers are always better than the original upstream program author.
For the most part, the Red Hat crew is the best in the business. Or have you never heard of Jakub Jelinek, Alan Cox, Rik van Riel, and the many other top upstream developers employed in some capacity by Red Hat?
I think we've beaten these topics to death, but since it is kind of fun if you don't take it too seriously: which of these guys knows better than the perl team how to make perl do character set conversions correctly?
I'll agree that they are good and on the average do a good job, but that stops far short of saying that they know better than the perl (etc.) teams what version you should be running.
The perl team has no business telling me what version I should be running, either. What version I run is dependent upon many things; one of which is 'what version does my vendor support?'
Sigh... at this point it is "how many versions does the vendor support"? And the issue is that the perl version (among many other things) that does a certain operation correctly is only included with a kernel version that has features missing and broken.
So, you want a working application, take an incomplete kernel. I understand that's the way things are. I don't understand why you like it.
Long-term version stability. There has to be a freeze point; Red Hat has chosen the well-documented 2-2-2 6-6-6 scheme, and sticks to its schedule, for the most part. Or, to put it very bluntly: just exactly which of the over a thousand packages are worth waiting on? And who decides which package holds up progress? CIPE, the example used here, is relatively insecure to begin with and interoperates with nobody.
I don't see how you can call setting up a WAN with many CIPE nodes, then finding it unavailable in the next release 'long term stability'.
Better to use IPsec (which virtually everybody supports to a degree) than a relatively nonstandard VPN like CIPE (I'd go as far as to say that most of the other VPN solutions are in the same boat; what's needed on the server side is typically Microsoft-compatible PPP over L2TP over IPsec, which is so easy to set up on the Windows client side it isn't even funny). That's why purpose-built firewall/VPN appliance Linux dists (SmoothWall and Astaro, for instance) use nothing but IPsec. I have a SmoothWall box myself, and it Just Works.
Can you run it through NAT routers? I have locations where the end point is already NATed by equipment I don't control. CIPE doesn't mind and the blowfish encryption is pretty CPU-friendly. And again, it might be "long-term stability" if this had already been a choice in several prior versions so you didn't have to upgrade OS revs on machines in several countries on the same day to keep your machines connected.
Is there a reason that a Centos or third-party repository could not be arranged such that an explicit upgrade could be requested to a current version which would then be tracked like your kernel-xxx-version is when you select smp/hugemem/unsupported?
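Mechanically, yum can already express most of that opt-in arrangement. Here's a sketch of what such a repo definition might look like; the repository name, URL, and package globs are all invented for illustration, and the file is written to /tmp here rather than its real home under /etc/yum.repos.d:

```shell
# Sketch of an opt-in repository definition. includepkgs restricts the
# repo to explicitly requested packages, so nothing else is upgraded
# behind your back. Name, baseurl, and package globs are hypothetical.
cat > /tmp/current-apps.repo <<'EOF'
[current-apps]
name=Explicitly tracked current-version applications
baseurl=http://example.org/current-apps/el3/
enabled=1
includepkgs=perl* firefox*
EOF
grep '^includepkgs=' /tmp/current-apps.repo
```

With a file like that in place, an explicit `yum upgrade perl` would pull only the packages you opted into, and later bugfix updates to those packages would be tracked automatically, much like kernel-smp/hugemem variants are today.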
On Sat, 2005-05-28 at 05:30, Bryan J. Smith wrote:
Okay Lee, we all agree, Red Hat makes stupid decisions, adopts buggy software - especially the kernel and Red Hat is to blame for the decisions in the kernel, and also stupidly backports fixes instead of adopting newer versions with the fixes.
And there is absolutely no need for Red Hat to do so.
Happy?
No, I am looking for a solution that provides what a typical user needs, not what a particular vendor feels like supporting this week. I didn't really want this to be about motives for vendor's business decisions but I think Johnny Hughes nailed it in saying the push for 2.6 was because SLES 9 had it. Their decision wasn't stupid, but my point is that it wasn't the best thing for some number of their old users.
I do understand that the particular bundles that RH arranges fit some needs, and that in the big picture, most of this stuff lands on server farms where one machine runs one program so if the next-version distro runs on that machine, you take whatever else comes and it doesn't matter. I have a fair number of machines in that world, including some that go all the way back to when VALinux was in the hardware business and shipped with a paid copy of RH installed.
However, I also support a number of general-purpose servers and some desktops where a larger mix of applications needs to be integrated along with a variety of oddball hardware that has, over the years, been connected and supported one way or another. In this scenario you are very likely to want to upgrade a single application and find that you can't do it RedHat's-way(tm) because the bundled upgrade will break something else that you need to keep running.
Explaining the reason the vendor made the decision that doesn't work in my case isn't particularly helpful. You might as well try to explain to me why upgrading or installing certain MS products requires moving the whole enterprise to Active Directory first. I'm not really interested in playing a 'vendor lock-in' game. I started using Linux in the first place to avoid that and to have a system where different components from different sources could interoperate independently. A system as bundled by Red Hat is going back to the DLL-hell that windows users used to experience. I can support this argument with details from my own experience, but I'd only expect anyone to care in the context of what a large number of people need. For that, I'd suggest browsing the k12ltsp project's archives at http://www.redhat.com/mailman/listinfo/k12osn.
This project combines a fedora install with the ability to network-boot thin clients so they run their desktop and apps from the server. As such they present the worst of all possible worlds in terms of needing server-stability and up-to-date apps on the same distribution and it has to run on an odd assortment of hardware. Judging from the list postings and my own testing, I'd guess that on the switch from the FC1 to FC2 or FC3 based distros, people had about a 1 in 4 chance that something would go drastically wrong in terms of hardware support, ranging from random crashes to lack of disk controller support, and perhaps a 50/50 chance that the ethernet interfaces would be detected with different names. Maybe 10% would have serious enough problems that they had to re-install the FC1 based version.
Now, is there a better solution? I don't really want to hear another rendition of why RH chooses not to provide current apps in a distribution with a proven kernel, or why what's good for RH is good for the whole world. What I want is for this discussion to be about how to get the product that I think a lot of people need, not about brand loyalty to something that doesn't provide it.

In the early days of freshrpms.net, I thought that 3rd party update repositories had a lot of promise, but they ran into their own conflicts and in the consolidation effort seem to have focused on adding apps not included in the core product rather than updates to versions beyond stock. Centosplus seems headed the same direction - and perhaps that's the best anyone can do if they intend to offer any kind of testing and support.

What's missing is something that fills the gap between a user having to rebuild from source himself (and subsequently support the updates the same way forever) and the downrev repository versions. Finding firefox backported into the FC1 legacy update repository is the sort of exception to the rule that I'd like to see expanded, although it isn't precisely the same change as going to (say) evolution 2.x.
For a simple example, let's say someone can run Centos 3.x but not 4.x for any number of reasons, and they want current application features. Can we, without degenerating into vendor bashing again, discuss how such a thing would be possible in a way that every person who wants this does not have to repeat every possible thing that could go wrong in a local custom install, and repeat it again for every bugfix update?
Maybe Xen is the right direction to head to avoid this in the future. If the apps really will only work right when bundled into a massive distribution, then we should start planning for an emulated hardware environment so the other parts of the next bundle can't break our existing infrastructure. I'd guess that currently the best real-world solution is to keep running certain apps on the old box and install new hardware for the ones you want to upgrade. In practice the price might even be right going that way, but it just doesn't appeal to my engineering sense.
On Saturday 28 May 2005 14:30, Les Mikesell wrote:
No, I am looking for a solution that provides what a typical user needs,
Who is the typical user? CIPE users aren't, for one. Neither are radio astronomers, for that matter. :-)
I think Johnny Hughes nailed it in saying the push for 2.6 was because SLES 9 had it.
That's part of it, I'm sure, but there's more than that, or SLES9 wouldn't have had it either. So SuSE and Red Hat BOTH have it, and there has to be a reason. The biggest reason is that 2.6 supports enterprise-class hardware that 2.4 does not. 2.6's enterprise features are better in many ways than 2.4's features. There are some non-marketing reasons for going 2.6, but marketing reasons are just as valid as technical ones when the goal is to stay in business and make money.
could interoperate independently. A system as bundled by Red Hat is going back to the DLL-hell that windows users used to experience.
Not exactly. RPM dependency issues can be solved with intelligent library versioning; for the most part, binaries that run under RHEL3 can run under RHEL4 using the compat-* packages built for that purpose. There are exceptions, of course, CIPE being one. The DLL problems under Windows are mostly due to the lack of DLL versioning like we have available, in that multiple versions of a library can coexist on a system. Take, for example, all the various OpenSSL releases: you might have three different versions of the openssl libraries installed at any given time. You cannot do this on Windows.
For the most part, the same problem does not exist on Linux. Take, for instance, the way theKompany distributes RPMs. They have a single binary RPM for each package, with a 'thekompany-support' base RPM for the basic differences between systems. For the most part, this single binary will install on virtually all modern RPM-based Linux dists.
Or Codeweaver's CrossOver Office product; it is available in a Loki-installer setup or as an RPM. The one RPM works on all their supported platforms, which includes 2.4 and 2.6 kernel-based distributions.
So the lib versioning problem can be addressed, but it has to be very carefully addressed. The most extreme example is Cinelerra, which, last I looked, was built fully statically linked and would basically run anywhere.
I can support this argument with details from my own experience but I'd only expect anyone to care in the context of what a large number of people need. For that, I'd suggest browsing the k12ltsp project's archives at http://www.redhat.com/mailman/listinfo/k12osn.
This project combines a fedora install with the ability to network-boot thin clients so they run their desktop and apps from the server. As such they present the worst of all possible worlds in terms of needing server-stability and up-to-date apps on the same distribution and it has to run on an odd assortment of hardware.
Ah, but there is more than one 'it' in the LTSP case. The server doesn't necessarily have to run on odd hardware; it's the thin client that must run on oddball hardware. (I know how it works; I'm beginning to deploy an LTSP setup at a private school on CentOS4, with KDE-Redhat to get the modern desktop apps.)
Now, is there a better solution? I don't really want to hear another rendition of why RH chooses not to provide current apps in a distribution with a proven kernel,
2.6 IS a proven kernel at this point in time, at least for the vast majority of users. There are exceptions; but, then again, Debian Stable still ships a 2.2 kernel.
What I want is for this discussion to be about how to get the product that I think a lot of people need, not about brand loyalty to something that doesn't provide it.
CentOS4, as it stands, is the product that meets the majority of my needs. For the rest I use the solution that fits best; for instance, CentOS4 is not my firewall/VPN router OS, SmoothWall Corporate Server 3.0 + SmoothTunnel 3.1 is. Adding KDE-Redhat to CentOS 4, and then adding DAG to that, covers 99% of my use cases.
But remember the CentOS core mission: Community Enterprise Linux. Designed to be a rebuilt RHEL and nothing more, because a very large number of users need that exact functionality. Why stay as close as possible? To leverage the development and the security updates, as well as the third-party repos that will support 'el4' packages (like DAG, ATrpms, KDE-Redhat, etc.).
In the early days of freshrpms.net, I thought that 3rd party update repositories had a lot of promise but they ran into their own conflicts and in the consolidation effort seem to have focused on adding apps not included in the core product rather than updates to versions beyond stock.
With the notable exception of KDE-Redhat. And why are there third-party conflicts? The RPMforge project aims to reduce those conflicts, but even then there still are some. The biggest reason these repos focus on 'extra' packages is that the more you change the base OS, the more difficult it becomes to coordinate updates' dependencies.
Centosplus seems headed the same direction - and perhaps that's the best anyone can do if they intend to offer any kind of testing and support. What's missing is something that fills the gap between a user having to rebuild from source himself (and subsequently support the updates the same way forever) and the downrev repository versions.
It's called Gentoo; while you initially need to rebuild from source, there is a binary distribution mechanism available. But even then, the way Gentoo is built, you could end up with some really odd dependency issues.
Finding firefox backported into the FC1 legacy update repository is the sort of exception to the rule that I'd like to see expanded, although it isn't precisely the same change as going to (say) evolution 2.x.
So I ask: what is required to backport Ev 2.x to FC1? (The answer is 'a lot more work than it's worth for most people', because Ev 2.x requires massive updates to core libs.)
For a simple example, let's say someone can run Centos 3.x but not 4.x for any number of reasons, and they want current application features.
Use KDE-Redhat for current KDE. For GNOME I wouldn't know, as I don't use GNOME. KDE-Redhat lives at kde-redhat.sourceforge.net and is available for Red Hat 7.3 and 9, Fedora Cores 1, 2, and 3, and Red Hat Enterprise Linux 3 and 4. The goal of the project is the latest KDE release on all those distributions, and it works. It requires nearly a complete CD's worth of packages, but it does work. That first download is a real doozy.
Can we, without degenerating into vendor bashing again, discuss how such a thing would be possible in a way that every person who wants this does not have to repeat every possible thing that could go wrong in a local custom install, and repeat it again for every bugfix update?
Ask the KDE-Redhat maintainer how much work it is. His name is Rex Dieter.
You do it the same way you release a distribution in the first place: you apply a lot of time and a lot of hard work to get the details in order. And dependency agnosticism, while somewhat possible, is a matter of juggling an immense number of details, which requires a large number of man-hours that someone must put in.
Rebuilding from source RPM with an intelligent source depsolver (Gentoo does this in their ebuild system) would work; it would basically mean everyone would need a buildsystem set up somewhere. Maybe virtual hardware a la Xen is part of the answer, but the bigger problem is API/ABI stability. And the problem with Open Source is enforcing interface stability: it won't happen in this world, due to the independent nature of each of the 1000 parts.
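The ordering half of such a source depsolver is at least mechanical: rebuilds must happen in topological order of build dependencies, and coreutils' tsort already does that ordering. A toy sketch, with all package names invented:

```shell
# Each line is a 'dependency dependent' pair: the first package must be
# built before the second. All package names are invented examples.
cat > /tmp/builddeps.txt <<'EOF'
glib gtk
gtk evolution
glib evolution
openssl evolution
EOF
# tsort emits one valid rebuild order; a real depsolver would feed each
# name, in this sequence, to something like 'rpmbuild --rebuild'.
tsort /tmp/builddeps.txt
```

The ordering is the easy part, of course; the hard part is the one named above: nobody can promise that what links and builds in this order still presents a stable API/ABI at the end.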
But then there's Fedora Alternatives, which hasn't even started yet. What you're after is a 'RHEL-Alternatives' that simply doesn't exist.
But in the end you'll have to motivate people to do the work, and those people will need to have roughly the same goals you do.
In the case of KDE-Redhat, there are a lot of people rather unhappy with stock Red Hat KDE who decided to do something about it. Rex is the most vocal of the group, but I don't think he does it alone (although I could be wrong). So, rather than just gripe about the state of Red Hat's KDE, the KDE-Redhat group did something about it.
So, I can have a CentOS 3 system and a CentOS 4 system running essentially the same desktop applications from KDE using kde-redhat. But, again, it's a lot of work that the KDE-Redhat developers are doing.
On Sat, 2005-05-28 at 15:37, Lamar Owen wrote:
No, I am looking for a solution that provides what a typical user needs,
Who is the typical user? CIPE users aren't, for one. Neither are radio astronomers, for that matter. :-)
OK, I meant a 'normal' user, in the sense that the normal range extends well beyond the average and covers a lot of people. Obviously, if you are looking for backwards compatibility and the ability to upgrade individual apps without breaking others, you are someone who has been using Linux a while. And if you are raising the issue on a forum for an RPM-based distro, even if you are capable of compiling bits and pieces from source, you probably prefer to drop in rpms where possible.
could interoperate independently. A system as bundled by Red Hat is going back to the DLL-hell that windows users used to experience.
Not exactly. RPM dependency issues can be solved with intelligent library versioning; for the most part binaries that run under RHEL3 can run under RHEL4 using the compat-* packages built for that purpose.
So what was the problem with taking perl to 5.8.3 or later on RHEL3.x again? Or with having multiple versions of the same app on the same machine, so an unsupported new release of something didn't force any conflicts with supported software?
For the most part, the same problem does not exist on Linux. Take, for instance, the way theKompany distributes RPMs. They have a single binary RPM for each package, with a 'thekompany-support' base RPM for the basic differences between systems. For the most part, this single binary will install on virtually all modern RPM-based Linux dists.
So why does anyone consider it a good idea to have a gazillion different copies of things like this in separate core and update repositories for RHELX/FCX/CentosX versions?
So the lib versioning problem can be addressed, but it has to be very carefully addressed. The most extreme example is Cinelerra, which, last I looked, was built fully statically linked and would basically run anywhere.
Static linking should be a last resort, when you think about what it does to disk space, memory usage, and startup time, not to mention the need to update every package that has its own copy of a fixed bug.
Ah, but there is more than one 'it' in the LTSP case. The server doesn't necessarily have to run on odd hardware;
Yes, they are schools, often running on donated equipment even on the server side; even if they had something from the supported list for the RH7.3/RH8/FC1-based versions, they probably can't afford to replace it to match what the FC2/FC3 versions need.
it's the thin client that must run on oddball hardware. (I know how it works; I'm beginning to deploy an LTSP setup at a private school on CentOS4, with KDE-Redhat to get the modern desktop apps.)
Those sound like good choices, assuming you don't already have backwards compatibility issues with hardware, filesystems, or applications that you want to keep. But note that what you are doing isn't at all unusual, yet you've already seen that it doesn't match what a mainstream distribution provides. And you'll probably still eventually want updated OOo, firefox, and evolution versions if they do version-level upgrades before RHEL/Centos - you are just coming into the cycle at a better point than you would have with Centos3.
What's missing is something that fills the gap between a user having to rebuild from source himself (and subsequently support the updates the same way forever) and the downrev repository versions.
It's called Gentoo; while you initially need to rebuild from source, there is a binary distribution mechanism available. But even then, the way Gentoo is built, you could end up with some really odd dependency issues.
I suppose I should try that. I've grown accustomed to the RedHat way of doing things, but maybe I can go the 'one app per box' route with Centos3 and 4 for some things and consolidate all the weird stuff on a few boxes with Gentoo's source rebuild capability. I just don't like the feeling of not having an easy upgrade option on separate items.
On Sat, 28 May 2005, Bryan J. Smith wrote:
Even I've left this thread. I guess we're all waiting for Lee to turn Blue. ;-> Or is it Red (Hat)? ;->
Okay Lee, we all agree, Red Hat makes stupid decisions, adopts buggy software - especially the kernel and Red Hat is to blame for the decisions in the kernel, and also stupidly backports fixes instead of adopting newer versions with the fixes. And there is absolutely no need for Red Hat to do so.
Happy?
I have a real problem with this thread. It seems as if, according to some, someone can only be with or against Red Hat.
I'm sure Red Hat has made stupid decisions, has adopted buggy software, and is responsible for some of the headaches people have had. And I'm sure even Red Hat employees (like those of any other distributor) would recognize this.
But just pointing things out does not put you on one side or the other, I should hope.
I always have a sour taste in my mouth if you have to pick sides, because that kills healthy rationalizing/criticizing and often this has a hidden agenda attached to it.
-- dag wieers, dag@wieers.com, http://dag.wieers.com/ -- [all I want is a warm bed and a kind word and unlimited power]
On Sat, 2005-05-28 at 14:38, Dag Wieers wrote:
I have a real problem with this thread. It seems as if, according to some, someone can only be with or against Red Hat.
I probably set the wrong tone in pointing out my specific problems as examples, but those are the only ones I can go into detail about (and I thought someone might jump in with a solution for my firewire connection). But, the real issue is planning to avoid the same kind of problems for the next upgrade when the application dependencies are likely to be even more entangled.
I always have a sour taste in my mouth if you have to pick sides, because that kills healthy rationalizing/criticizing and often this has a hidden agenda attached to it.
You probably have as much experience as anyone in trying to improve on the choices RH has offered. Is it ever going to be possible/practical to have a single RPM repository that stays close to the developers' versions of applications, usable from a variety of RH/Centos/Fedora base installations? This, of course, would also coincide with what commercial application vendors have always wanted in terms of standard binary APIs, but my agenda is only on the usability side - I'm not interested in selling anything.