From: Les Mikesell lesmikesell@gmail.com
Agreed. But note that the standards are set long before that...
??? By "standard" what do you mean. ???
Linux has been notorious for variance from ANSI, NIST, POSIX and GNU standards. Yes, far less so than Microsoft and even some UNIX vendors, but there are still major issues with this today.
When developers try to change things for standards compliance, they get reamed by all the people who want backward compatibility instead. That's why companies like Red Hat and SuSE try to put it off for several versions, until it comes to a head (typically when future adoption requires it).
If you are doing anything complicated, I'm not sure you have any other choice. But then you run into ugliness where you want to run perl programs that need perl >= 5.8.3 to get character set handling right, and you can't, or you have to do a local install while trying not to break the system version.
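For what it's worth, the least-bad form of that local install seems to be a completely private prefix -- a minimal sketch, assuming the stock 5.8.3 source tarball and an unused /opt/perl-5.8.3 (both just example choices):

  # build a private perl without touching the system /usr/bin/perl
  tar xzf perl-5.8.3.tar.gz && cd perl-5.8.3
  sh Configure -des -Dprefix=/opt/perl-5.8.3
  make && make test && make install
  # then give the picky programs an explicit #!/opt/perl-5.8.3/bin/perl

That way the system packages keep depending on the vendor perl, and only the scripts that genuinely need 5.8.3 see it.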
Internationalization is a reality, and we English-speaking Americans, as well as Westerners in general, can't go around complaining when a program really _does_ need to get off its American/Western-only duff. That means getting away from ASCII as well as ISO8859.
In reality, there is a massive set of issues in the UNIX space, where there are no less than 15 common different interpretations of byte order, endianness and organization just for 4-byte ISO character sets. Pretty much all standards-based platforms are moving to UTF-8 because it solves the multi-byte organizational issues. ASCII is still 7-bit (1 byte), while any 4-byte ISO character set can be accommodated in 2-6 bytes total. And because it is streamed as 1-6 single bytes in network order (big endian), there are no multi-byte endianness/order issues.
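You can see the point in about ten seconds with iconv and od (a rough illustration; the '\x' escapes assume bash's printf builtin):

  printf 'A' | od -An -tx1
  # -> 41                    (plain ASCII stays 1 byte in UTF-8)
  printf '\xe2\x82\xac' | iconv -f UTF-8 -t UCS-4BE | od -An -tx1
  # -> 00 00 20 ac           (the euro sign: 3 bytes in UTF-8, 4 as UCS-4)

Same character, no byte-order mark, and no endianness negotiation anywhere in the UTF-8 stream.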
Ironically, being an American company, this is where Red Hat has done a phenomenal job -- the best of any "western" distro company, IMHO -- of pushing _hard_ for UTF-8. Red Hat made this major shift with CL3.0 (Red Hat Linux 8.0), which then went into EL3, which was based on CL3.1 (Red Hat Linux 9). Typical bitching and moaning was present, condemnations of both the Perl 5.8 maintainers and Red Hat Linux 8.0, etc...
More on the GUI front, I think Java and .NET/Mono have the right idea: ISO 10646 Unicode everything.
Or you need mysql 4.x for transactions.
Again, there is a licensing issue with MySQL 4 that MySQL AB introduced, which most people, short of Red Hat, are ignoring. MySQL 4 is GPL, _not_ LGPL or BSD, so you have to be very careful about what you link against it.
I see that Fedora Core 4 Test does adopt MySQL 4 though. I need to read up on what Red Hat did, or what they are possibly excluding to keep MySQL AB happy (unless MySQL AB changed their stance on some static linking), to find out more.
I guess my real complaint is that the bugfixes and improvements to the old releases that you are forced by the situation to run are minimal as soon as it becomes obvious that the future belongs to a different version - that may be a long time in stabilizing to a usable point.
So what's your solution?
I'm still waiting for someone to show me one better than the 2-2-2 -> 6-6-6 model.
In fact, the full model is really 2-2-2 -> 6-6-6 -> 18-18-18.
Whereby Red Hat and SuSE maintain up to 3 simultaneous versions of their 18-month distros. [ Actually, SuSE is dropping the first "enterprise" release, SLES 7, just shy of 5 years. ]
So if you are talking about stability/maturity, then run RHEL2.1 for now. Heck, Fedora Legacy is _still_ supporting CL2.3 (RHL7.3) too! ;->
It will be interesting to see how this works out for Ubuntu. I think it would be possible to be more aggressive with the application versions but hold back more than RH on the kernel changes.
And that's the real question. What is the "best balance"?
Of course they'll have it easy for the first few since they don't have to be backwards compatible to a huge installed base.
Exactomundo! ;->
In reality, Red Hat has probably the longest "backward compatible" run of any vendor. Why? Because they adopt things early -- like GLibC 2, GCC 3, NPTL, etc...
Heck, pretty much everything released for CL/EL2 (GLibC 2.2, GCC 2.96/3.0 -- RHL7.x/RHEL2.1) still runs on my latest Fedora Core 4 Test systems.
How was the CIPE author supposed to know that what would be released as FC2 would have a changed kernel interface?
My God, I actually can't believe you could even make such a statement. At this point, it's _futile_ to even really debate you anymore; you keep talking from a total standpoint of "assumption" and "unfamiliarity." Something a maintainer of a package would not be, unless he honestly didn't care.
Fedora Core 2 (2004May) was released 6 months _after_ Linux 2.6 (2003Dec).
Fedora Core 1 -- fka Red Hat Linux 10 -- had just been released (2003Nov) before kernel 2.6.
Within 2 months, Rawhide/Development had Fedora Core 2's intended kernel -- 2.6. There was absolutely _no_question_ what it was going to be. I know, I was checking out the packages on Fedora Development -- saw 2.6.1/2.6.2. I believe 2.6.3 was the first one used in Fedora Core 2 Test 1 in 2004Mar.
Red Hat's Rawhide, now Fedora Development (although many Red Hat employees still call it "Rawhide" ;-), has been around since the Red Hat Linux 5.x timeframe. It's the packages that are being built for the next revision -- be it the same version or a major version change. Rawhide/Dev is the "package testing," so you can see what packages they are looking at. Beta/Test is the "regression testing" of the entire release as a whole (packages tested against packages, all in all).
Again, the 2-2-2 model. There are at least 4 months before release when you can find out pretty much _anything_ that's going to be in the distro with good certainty.
Pretty much every major distro -- Red Hat, SuSE, etc. -- adopted Linux 2.6 within 6 months. SuSE even released SuSE Linux 9.0 before 2.6 was released, but with a lot of the backports (like Red Hat) in preparation for 2.6 in SL9.1.
He did respond with 1.6 as quickly as could be expected after a released distribution didn't work with it and a user reported problems on the mailing list.
How can you blame this on distributions? Honestly, I don't see it at all! There were months and months of pre-release 2.6 releases in 2003. I saw the comments that many things weren't working (including CIPE). There were 6 months between Linux 2.6's release and FC2. Heck, I believe SL9.1 came out with it before that.
About the kernel, or all of the drivers' authors that assumed that kernel interfaces shouldn't change?
Many drivers were actually _deprecated_ in kernel 2.6, and not built by default because people didn't come forward and take ownership. I know, the lack of the "advansys.c" SCSI driver got me good. ;->
But who did I blame? Myself, for not volunteering! At some point, when there's a lack of interest, it typically means there are not enough people interested to hold everything up.
And if you want the things that work to not change???
Then you do _not_ adopt the latest distro that just came out -- especially not an "early adopter" release. It was well known that Fedora Core 2 was changing a lot of things, just like SuSE Linux 9.0/9.1.
[ SIDE NOTE: Again, I have stated that I am _disappointed_ that Red Hat does not use revisioning to designate something like Fedora Core 2 as a ".0" revision. But it was well known that it was going to be kernel 2.6 a good 4 months before its release. ]
Where do you find the real info?
It takes me 5 minutes to figure out what Red Hat's up to. Red Hat is pretty explicit on what their plans are, and the packages are out there for all to see during Development, even _before_ the Test hits (Development is about 4 months before release).
Heck, when their Test 1 hits (a good 2+ months before release), they go over _all_ the major changes in the "Release Notes." SuSE and even Mandrake are similar too.
Specific example: I'm trying to make a software RAID1 partition with one internal IDE and a matching external drive in a firewire case work. FC1 worked flawlessly once the raid was established but would not autodetect the drive, either when hotplugged or at bootup. Everything later that I've tried is worse. Some get the autodetect right on a hotplug but crash after some amount of runtime. Is any distro likely to autodetect at bootup in time to cleanly connect the RAID and then keep working?
When you disconnect a drive from the RAID array, there is no way for the software to assume whether or not the drive is any good anymore when it reconnects unless you _manually_ tell it (or configure it to always assume it is good).
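Concretely, the "manual tell" is a one-liner once you have verified the disk (device names below are examples only):

  # bless the returned member; md will kick off a full resync
  mdadm /dev/md0 --add /dev/sdb1
  cat /proc/mdstat    # watch the rebuild progress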
This is not a Red Hat issue either.
And LVM2 is still going through some maturity with RAID-1 and snapshots.
Maybe it matters that the filesystem is reiserfs - I see some bug reports about that, but rarely have problems when the internal IDE is running as a broken mirror.
Filesystem shouldn't matter, it's a Disk Label (aka Partition Table) consideration.
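That's also where autodetection gets decided: the kernel's md autorun only considers partitions whose type byte says so (a sketch; the device and layout are made up):

  fdisk -l /dev/hda
  #   Device    Boot  ...  Id  System
  #   /dev/hda2       ...  fd  Linux raid autodetect
  # anything not marked 'fd' is invisible to RAID autostart at boot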
Like letting it be a surprise to the CIPE author that it didn't work at release time. I don't remember seeing a hint about this on the CIPE mail list when someone must have known well ahead that it was coming.
More assumptions. CIPE has been breaking in 2.6 since the very early 2.5.2+ developments. I'm sure the author was waiting until late in 2.5 (say near 2.5.70+) as it came closer to 2.6. I saw reports on all sorts of network code breakage the second it hit in 2003Dec -- 6 months before.
In looking through Fedora Development, I note lots of references to all sorts of issues with the 2.6-test releases as of summer of 2003 (over 9 months before FC2's release, months before kernel 2.6's official release). There were efforts to get cipe to work -- both in FC2 and then in Fedora Extras for FC2 -- but they were eventually dropped because of lack of interest by the CIPE developers themselves in getting "up-to-speed" on 2.5/2.6.
The first hints of recommendations to go IPSec instead came that fall. By March of 2004, I see the last comments from several people to not even bother with CIPE for Fedora Core 2 (as well as SuSE Linux 9.1).
Buggy. I don't mean RH bugs, I mean bugs from the upstream packages that got their first huge real world exposure because RH bundled them on a CD that you could drop into just about any PC and have a working system. I mean things like the bind, sendmail, and ssh versions that went out with RH 4.0 and were installed on more machines than ever before - all with remote exploits that weren't known yet.
Yes, Linux is not free from vulnerabilities. This is not new.
Either you pay for a subscription (and optional SLA) to an "enterprise" distro that maintains 5+ years of support, or you rely on community projects (like Fedora Legacy) that attempt to do the same. Fedora Legacy is still supporting CL2.3 (Red Hat Linux 7.3), even though they dropped CL3.0 (Red Hat Linux 8.0) a long time ago because CL3.1/3.2 (Red Hat Linux 9 / Fedora Core 1) exist.
The real world is a complicated place. If you want to substitute a different program for an old but non-standard one you need to make sure it handles all the same things.
But what if those either conflict with standards or are broken?
Until just recently, star didn't even come close to getting incrementals right, and still, unlike gnutar, requires the runs to be on filesystem boundaries for incrementals to work. And it probably doesn't handle the options that amanda needs. Theory, meet practice.
You assume "GNU Tar" is the "standard." ;-> It's not, never has been, and is years late to the POSIX-2001/SUSv3 party.
As far as not crossing filesystem boundaries, that is easily accommodated.
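E.g., with GNU Tar it is one switch (a sketch; the snapshot file and tape device are arbitrary choices):

  # incremental run that refuses to wander across mount points
  tar --one-file-system --listed-incremental=/var/backup/home.snar \
      -cf /dev/nst0 /home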
By 'first', I meant before RH 4.x, which in my opinion really drove the popularity of Linux simply because it had a decent installer that came up working on most hardware at the time. FreeBSD was around at the time, but in a form that was much more difficult for a new user to get working.
That depends on when you started. Before SP6, NT was also unreliable.
Hell of a lot better than "Chicago," but yes, I agree.
Remember that in the timeframe of RH4, you could kill an NT box by sending it an oversized ping packet - and have a fair chance of corrupting the filesystem beyond repair from the ungraceful shutdown.
Oh, you could hang Linux in various ways too, just not as many.
It wasn't so much the cloning as it was setting the IP address with their GUI tool (and I did that because when I just made the change to the /etc/sysconfig/network-scripts/ifcfg-xxx file the way that worked in earlier RH versions it sometimes mysteriously didn't work). Then I shipped the disks in their hot-swap carriers to the remote sites to be installed.
Again, I will repeat that "configuration management" is part of avoiding "professional negligence." You should have tested those carriers before shipping them out by putting them in another box.
That procedure can't be uncommon.
No, it's not uncommon, I didn't say that. I'm just saying that vendors can't test for a lot of different issues.
Don't get me started on the design of NTFS and the Registry SAM-SID issues. The current "hack" is to either put everything in a domain, or use Dynamic Disks to store stuff outside of an NTFS filesystem, because you can _never_ safely access an NTFS filesystem _except_ from the NT installation that created it (not even with another install of the same NT version).
And the same thing would happen if you swapped a NIC card - hmmm, I wonder about PCMCIA cards now.
Yes, maybe that has something to do with it, eh? ;->
For example, maybe when you want to switch between an unclassified and classified network, you use a different card (and hard drive), which will connect to a different subnet. ;->
I'm sure some changes have been for "Common Criteria" compliance. I've seen similar on SuSE as well. They might have been made mid-update.
I'm not sure exactly when it happened because it was an update that did not include a new kernel so the machines weren't rebooted immediately. I think the ones where I noticed it first were somewhere in the Centos 3.4 range with some subsequent updates (maybe a month or so ago).
I'll check it out then.
I think this was just an update without a version number change. As it turned out, I saw the screen of one that was rebooted at a nearby site and knew more or less what happened, but before I got around to fixing them all, a machine crashed at a distant site and I had to talk someone who didn't know vi through finding the file and removing the hardware address entry.
No offense, but when I ship out components to remote sites, I do my due diligence and test for every condition I can think of. But maybe I'm anal because I ship things out to places like White Sands, Edwards, etc., and techs can't be expected to debug such concepts, so I always test with at least a partially replicated environment (which would be at least 1 change of physical system ;-).
If I hadn't already known about it by then or if a lot of the remote servers had been rebooted and came up with no network access it might have ruined my whole day.
I think you're assigning blame in the wrong direction. You might think that's rude and arrogant, but in reality, if you keep blaming vendors for those types of mistakes, you're going to have a lot more of them coming your way until you change that attitude. No offense. ;->
-- Bryan J. Smith mailto:b.j.smith@ieee.org
On Wed, 2005-05-25 at 19:18, Bryan J. Smith wrote:
From: Les Mikesell lesmikesell@gmail.com
Agreed. But note that the standards are set long before that...
??? By "standard" what do you mean. ???
Most of the committee decisions.
Ironically, being an American company, this is where Red Hat has done a phenomenal job -- the best of any "western" distro company, IMHO -- of pushing _hard_ for UTF-8. Red Hat made this major shift with CL3.0 (Red Hat Linux 8.0), which then went into EL3, which was based on CL3.1 (Red Hat Linux 9). Typical bitching and moaning was present, condemnations of both the Perl 5.8 maintainers and Red Hat Linux 8.0, etc...
I guess I could narrow down my complaint here to the specific RedHat policy of not shipping version number upgrades of applications within their own distribution versions. In this instance, it is about building OS versions with options that required UTF-8 (etc.) character set support along with a perl version that didn't handle it correctly (which I can understand, because that's the best they could do at the time), then *not* providing updates to those broken distributions to perl 5.8.3+, which would have fixed them in RH 8.0 -> RHEL3.x, but instead expecting users to move to the next RH/Fedora release, which introduces new broken things. Maybe the problems have been fixed in recent backported patches to RHEL3/centos3, but I don't think so.
Or you need mysql 4.x for transactions.
Again, there is a licensing issue with MySQL 4 that MySQL AB introduced that most people, short of Red Hat, are ignoring.
But Centos 4 includes it, and I assume RHEL4. We've already covered why it isn't reasonable to run those. But why can't there be an application upgrade to 4.x on a distribution that is usable today, and one that will continue to keep itself updated with a stock 'yum update' command? I think this is just a policy issue, not based on any practical problems.
I guess my real complaint is that the bugfixes and improvements to the old releases that you are forced by the situation to run are minimal as soon as it becomes obvious that the future belongs to a different version - that may be a long time in stabilizing to a usable point.
So what's your solution?
Allow multiple versions of apps in the update repositories, I think. Why can't we explicitly update to an app version beyond the stock release if we want it, and then have yum (etc.) track that instead of the old one? If I had the perl, mysql, and dovecot versions from centos 4 backported into centos 3, I'd be happy for a while. I know it wouldn't be horribly hard to do this myself but I really hate to break automatic updates and introduce problems that may be unique to each system.
It will be interesting to see how this works out for Ubuntu. I think it would be possible to be more aggressive with the application versions but hold back more than RH on the kernel changes.
And that's the real question. What is the "best balance"?
I'd be extremely conservative about changes that increase the chances of crashing the whole system (i.e. kernel, device drivers, etc.) and stay fairly close to the developer's version of applications that just run in user mode. Even better, make it easy to pick which version of each you want, but make the update-tracking system automatically follow what you picked. Then if you need a 2.4 kernel, perl 5.8.5 and mysql 4.1 in the same bundle you can have it.
How was the CIPE author supposed to know that what would be released as FC2 would have a changed kernel interface?
My God, I actually can't believe you could even make such a statement. At this point, it's _futile_ to even really debate you anymore; you keep talking from a total standpoint of "assumption" and "unfamiliarity." Something a maintainer of a package would not be, unless he honestly didn't care.
I'm talking about the CIPE author, who had to be involved to write the 1.6 version, not an RPM maintainer, who probably couldn't have.
Fedora Core 2 (2004May) was released 6 months _after_ Linux 2.6 (2003Dec).
So how does any of this relate to the CIPE author, who didn't write CIPE for fedora and almost certainly didn't have an experimental 2.6 kernel on some unreleased distribution, knowing that CIPE wasn't going to work? On the other hand, someone involved in building FC2 must have known and I don't remember seeing any messages going to the CIPE list asking if anyone was working on it.
He did respond with 1.6 as quickly as could be expected after a released distribution didn't work with it and a user reported problems on the mailing list.
How can you blame this on distributions? Honestly, I don't see it at all!
Who else knew about the change? Do you expect every author of something that has been rpm-packaged to keep checking with Linus to see if he feels like changing kernel interfaces this month so as not to disrupt the FC release schedule?
Many drivers were actually _deprecated_ in kernel 2.6, and not built by default because people didn't come forward and take ownership. I know, the lack of the "advansys.c" SCSI driver got me good. ;->
I can understand people backing away from a changing interface.
And if you want the things that work to not change???
Then you do _not_ adopt the latest distro that just came out -- especially not an "early adopter" release. It was well known that Fedora Core 2 was changing a lot of things, just like SuSE Linux 9.0/9.1.
And, as much as you want this to not be about RH/Fedora policies, you are then stuck with something unnecessarily inconvenient because of their policy of not upgrading apps within a release.
Where do you find the real info?
Is any distro likely to autodetect at bootup in time to cleanly connect the RAID and then keep working?
When you disconnect a drive from the RAID array, there is no way for the software to assume whether or not the drive is any good anymore when it reconnects unless you _manually_ tell it (or configure it to always assume it is good).
That's not the issue - I don't expect a hot-plug to go into the raid automatically. I do want it to pair them up on a clean reboot as it would if they were both directly IDE connected. So far nothing has.
This is not a Red Hat issue either.
Isn't it? I see different behavior with knoppix and ubuntu. I think their startup order and device probing is somewhat different.
Maybe it matters that the filesystem is reiserfs - I see some bug reports about that, but rarely have problems when the internal IDE is running as a broken mirror.
Filesystem shouldn't matter, it's a Disk Label (aka Partition Table) consideration.
Separate issues - I'm able to use mdadm to add the firewire drive to the raid and it will re-sync, but if I leave the drive mounted and busy, every 2.6 kernel based distro I've tried so far will crash after several hours. I can get a copy by unmounting the partition, letting the raid resync then removing the external drive (being able to take a snapshot offsite is the main point anyway). I've seen some bug reports about reiserfs on raid that may relate to the crash problem when running with the raid active. This didn't happen under FC1 which never crashed between weekly disk swaps. There could also be some problems with my drive carriers. A firmware update on one type seems to have changed things but none of the problems are strictly reproducible so it is taking a long time to pin anything down.
There were efforts to get cipe to work -- both in FC2 and then in Fedora Extras for FC2 -- but they were eventually dropped because of lack of interest by the CIPE developers themselves in getting "up-to-speed" on 2.5/2.6.
There's really only one CIPE 'developer' and I don't think he has any particular interest in any specific distributions. If anyone else was talking about it, and in any other place than the CIPE mailing list, I'm not surprised that it did not have useful results.
The real world is a complicated place. If you want to substitute a different program for an old but non-standard one you need to make sure it handles all the same things.
But what if those either conflict with standards or are broken?
You use the one that works and has a long history of working until the replacement handles all the needed operations. A committee decision isn't always the most reliable way to do something even if you follow the latest of their dozens of revisions.
Until just recently, star didn't even come close to getting incrementals right, and still, unlike gnutar, requires the runs to be on filesystem boundaries for incrementals to work. And it probably doesn't handle the options that amanda needs. Theory, meet practice.
You assume "GNU Tar" is the "standard." ;->
No, but I assume that Gnu tar will be available anywhere I need it. Given that I've compiled it under DOS, linked to both an aspi scsi driver and a tcp stack that could read/feed rsh on another machine that seems like a reasonable assumption. I can't think of anything less likely to work...
It's not, never has been, and is years late to the POSIX-2001/SUSv3 party.
So which is more important when I want to read something from my 1990's vintage tapes?
As far as not crossing filesystem boundaries, that is easily accommodated.
Maybe, maybe not. I always set up backups on filesystem boundaries anyway so I can prevent them from wandering into CD's or NFS mounts by accident, but I can imagine times when you'd want to include them and still do correct incrementals.
It wasn't so much the cloning as it was setting the IP address with their GUI tool (and I did that because when I just made the change to the /etc/sysconfig/network-scripts/ifcfg-xxx file the way that worked in earlier RH versions it sometimes mysteriously didn't work). Then I shipped the disks in their hot-swap carriers to the remote sites to be installed.
Again, I will repeat that "configuration management" is part of avoiding "professional negligence." You should have tested those carriers before shipping them out by putting them in another box.
You aren't following the scenario. The drives worked as shipped. They were running Centos 3.x which isn't supposed to have behavior-changing updates. I did a 'yum update' from the nicely-running remote boxes that didn't include a kernel and thus didn't do a reboot immediately afterwards. I normally test on a local system, then one or a few of the remotes, make sure nothing breaks, then proceed with the rest of the remotes. So, after all that, I ended up with a flock of running remote boxes that were poised to become unreachable on the next reboot. And even if I had rebooted a local box after the corresponding update, it wouldn't have had the problem because I would have either installed that one in place or assigned the IP from its own console after swapping the disk in.
That procedure can't be uncommon.
No, it's not uncommon, I didn't say that. I'm just saying that vendors can't test for a lot of different issues.
But they could at least think about what a behavior change is likely to do in different situations, and this one is pretty obvious. If eth0 is your only network interface and you refuse to start it at bootup, remote servers that used to work become unreachable. I do understand the opposite problem that they were trying to fix where a change in kernel detection order changes the interface names and has the potential to make a DHCP server start on the wrong interface, handing out addresses that don't work. But, it's the kind of change that should have come at a version revision or along with the kernel with the detection change.
No offense, but when I ship out components to remote sites, I do my due diligence and test for every condition I can think of. But maybe I'm anal because I ship things out to places like White Sands, Edwards, etc., and techs can't be expected to debug such concepts, so I always test with at least a partially replicated environment (which would be at least 1 change of physical system ;-).
Note that I did test everything I could, and everything I could have tested worked because the pre-shipping behavior was to include the hardware address in the /etc/sysconfig/networking/profiles/defaults/xxxx file, but to ignore it at startup. So even when I tested the cloned disks after moving to a 2nd box they worked. The 'partially replicated environment' to catch this would have had to be a local machine with its IP set while the drive was in a different box and then rebooted after installing an update that didn't require it. I suppose if lives were at stake I might have gone that far.
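For anyone following along, the entry in question looks like this (a sample, not my exact config -- the MAC is made up, and most writeups describe the check against HWADDR= in the per-interface ifcfg file):

  # /etc/sysconfig/network-scripts/ifcfg-eth0
  DEVICE=eth0
  ONBOOT=yes
  BOOTPROTO=static
  IPADDR=192.168.1.10
  NETMASK=255.255.255.0
  # post-update, a mismatch here keeps eth0 from starting at boot;
  # delete or correct it after moving the disk to a new box
  HWADDR=00:0C:29:AA:BB:CC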
If I hadn't already known about it by then or if a lot of the remote servers had been rebooted and came up with no network access it might have ruined my whole day.
I think you're assigning blame in the wrong direction. You might think that's rude and arrogant, but in reality, if you keep blaming vendors for those types of mistakes, you're going to have a lot more of them coming your way until you change that attitude. No offense. ;->
You are right, of course. I take responsibility for what happened along with credit for catching it before it caused any real downtime (which was mostly dumb luck from seeing the message on the screen because I happened to be at one of the remote locations when the first one was rebooted for another reason). Still, it gives me a queasy feeling about what to expect from vendors - and I've been burned the other direction too by not staying up-to-the-minute with updates so you can't just skip them. Hmmm, now I wonder if the code was intended to use the hardware address all along but was broken as originally shipped. It would be a bit more comforting if it was included in an update because someone thought it was a bugfix instead of someone thinking it was a good idea to change currently working behavior.
On Wed, 2005-05-25 at 22:43 -0500, Les Mikesell wrote:
I guess I could narrow down my complaint here to the specific RedHat policy of not shipping version number upgrades of applications within their own distribution versions. In this instance, it is about building OS versions with options that required UTF-8 (etc.) character set support along with a perl version that didn't handle it correctly (which I can understand, because that's the best they could do at the time), then *not* providing updates to those broken distributions to perl 5.8.3+, which would have fixed them in RH 8.0 -> RHEL3.x, but instead expecting users to move to the next RH/Fedora release, which introduces new broken things. Maybe the problems have been fixed in recent backported patches to RHEL3/centos3, but I don't think so.
The problem was that a lot of CPAN programs were written for ASCII/ISO8859. The ones included with RHL/RHEL all worked fine and were tested, but I know people ran into issues with added programs.
The problem then becomes a Catch-22. Perl 5.8.3 fixed some issues in 2004, but also introduced other compatibility issues.
Fedora Core 1 did update to Perl 5.8.3 when it became available in 2004, but Red Hat decided to stick with 5.8.0 for RHEL3. They must have had some reasons.
In a nutshell, disabling the UTF-8 default locale fixes the problem for ASCII/ISO8859 Perl programs.
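On an RHL8/9-era box, that's a one-line change (values are illustrative; the file is Red Hat's standard locale config):

  # /etc/sysconfig/i18n -- the system-wide default locale
  LANG="en_US"    # i.e. ISO8859-1, instead of the default en_US.UTF-8
  SUPPORTED="en_US.UTF-8:en_US:en"

Legacy ASCII/ISO8859 Perl programs then never see a UTF-8 locale in their environment.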
But Centos 4 includes it, and I assume RHEL4.
Nope, only MySQL 3.23.
We've already covered why it isn't reasonable to run those. But why can't there be an application upgrade to 4.x on a distribution that is usable today,
Definitely not! The whole reason why RHEL is very well trusted is because Red Hat sticks with a version and then backports any necessary fixes. Trust me, it actually takes Red Hat _more_work_ to do this, but they do it to ensure _exact_ functionality over the life of the product.
and one that will continue to keep itself updated with a stock 'yum update' command? I think this is just a policy issue, not based on any practical problems.
It's a policy issue, yes. And upgrading from MySQL 3.23 to MySQL 4.x would throw a massive wrench into a lot of Red Hat's SLAs.
Once Red Hat ships a package version in RHEL, unless they are unable to backport a fix, they do _not_ typically move forward. Again, SLAs, exact operation to an anal power, and _never_ "feature upgrades."
If you want that, that's what Fedora Core is for.
Allow multiple versions of apps in the update repositories, I think.
Again, massive wrench into Red Hat SLAs.
Why can't we explicitly update to an app version beyond the stock release if we want it, and then have yum (etc.) track that instead of the old one?
SLAs.
If I had the perl, mysql, and dovecot versions from centos 4 backported into centos 3, I'd be happy for a while.
Not people who pay for RHEL with SLAs, no sir. Trust me on this, Red Hat is listening to the people who pay, and the people pay for the attention to bug fixes and that's about it. SuSE was the first to really prove this was the market, Red Hat just followed them.
I know it wouldn't be horribly hard to do this myself
Hard is not the problem. It's actually much harder to backport fixes to old versions. But Red Hat does it for a reason.
Remember, updating a system is more than just taking the latest package and building it. It's building it, running it in regression tests across a suite of systems, and _then_ shipping it. At least when you're talking about an environment where you're guaranteeing SLAs.
Unless you're like Microsoft, and you ship things that re-introduce old bugs, have unforeseen consequences, etc... Microsoft is notorious for "feature creep" and "historical loss" in their updates.
but I really hate to break automatic updates and introduce problems that may be unique to each system.
Exactomundo. ;->
I'd be extremely conservative about changes that increase the chances of crashing the whole system (i.e. kernel, device drivers, etc.) and stay fairly close to the developer's version of applications that just run in user mode. Even better, make it easy to pick which version of each you want, but make the update-tracking system automatically follow what you picked. Then if you need a 2.4 kernel, perl 5.8.5 and mysql 4.1 in the same bundle you can have it.
And you are now going to run a suite of regression tests with these various combinations -- remember, with each added combination, you increase the number of tests _exponentially_ -- and guarantee an X-hour Service Level Agreement (SLA) on it?
In reality, what you're looking for is Fedora Core, not RHEL.
I'm talking about the CIPE author, who had to be involved to write the 1.6 version, not an RPM maintainer, who probably couldn't have.
Not to burst your bubble, but most Red Hat developers go beyond just being "maintainers." Many actively participate in many project developments. Red Hat used to actively include CIPE in the kernel, and test it as their standard VPN solution.
That changed in 2.6, for a number of reasons, a big one being that the other developers weren't even looking at the kernel 2.6-tests in 2003, let alone 2.6.0 onward once it came out in December. In reading the fall 2003 and other comments, it became pretty clear that Red Hat was extremely skeptical about even getting it to work, and about whether it was really worth it.
So how does any of this relate to the CIPE author, who didn't write CIPE for fedora and almost certainly didn't have an experimental 2.6 kernel on some unreleased distribution, knowing that CIPE wasn't going to work?
Excuse me? The developer didn't have to wait for a "release" distro to look at what issues were happening with kernel 2.6 -- let alone the late kernel 2.5 developments or the months upon months of 2.6-test releases. For some reason you seem to believe this "scenario" is something only CIPE runs into?
There are countless kernel features and packages _external_ to the core kernel developers, and those projects _do_ "keep up" with kernel developments as they happen. But let's even assume for a moment they do not.
Debian 3.0 "Woody" and Debian 3.1 "Sarge" had kernel 2.6.0 available for download almost immediately. Various kernel 2.6-test testing in early Fedora Development showed that CIPE was totally broken for 2.6. And there are similar threads in 2003, while 2.6 was in 2.6-test, where people were talking about the lack of any CIPE compatibility.
This was _known_. Your continued insistence on saying Red Hat released 2.6 "early" is just nonsense. It was known for 9 months before FC2 was released, months before development even started on Fedora Core 2, SuSE Linux 9.1, Mandrake Linux 10.0, etc... This is nowhere near a Red Hat policy, decision or otherwise "unstable" issue.
On the other hand, someone involved in building FC2 must have known and I don't remember seeing any messages going to the CIPE list asking if anyone was working on it.
Okay, I'm going to hit the CIPE archives just to see what I don't know ...
Hans Steegers seemed to be very aware and knowledgeable about the fact that CIPE 1.5 did not run on kernel 2.6 back in September 2003, 3 months before the final kernel 2.6.0 release. Unless I'm mistaken, he is very involved with CIPE's development.
Who else knew about the change? Do you expect every author of something that has been rpm-packaged to keep checking with Linus to see if he feels like changing kernel interfaces this month so as not to disrupt the FC release schedule?
I don't think you even understand the issue here. CIPE wasn't just made incompatible because of some "minor interface change" made in an odd-ball, interim 2.6 developer release. Kernel 2.6 was changed _massively_ from 2.4, and things like CIPE required _extensive_ re-writes! Hans knew this, as did most other people, about the same time -- Fall 2003 when the kernel 2.6-test releases were coming out!
This has absolutely *0* to do with Red Hat or any distributor, _period_!
I can understand people backing away from a changing interface.
??? I don't understand what you meant by that at all ???
And, as much as you want this to not be about RH/Fedora policies, you are then stuck with something unnecessarily inconvenient because of their policy of not upgrading apps within a release.
Fedora Core does, probably a little more so than Red Hat Linux prior.
But RHEL -- when you ship SLAs, you ship SLAs -- and you aren't upgrading features mid-release that can impact compatibility and reliability.
Period.
That's not the issue - I don't expect a hot-plug to go into the raid automatically. I do want it to pair them up on a clean reboot as it would if they were both directly IDE connected. So far nothing has.
That is _exactly_ the issue! Once you remove a disk from the volume, you have to _manually_ re-add it, even if you powered off and re-connected the drive. Once the system has booted without the drive just once, it doesn't connect it automagically.
Isn't it? I see different behavior with knoppix and ubuntu. I think their startup order and device probing is somewhat different.
Then report it to Bugzilla and use Knoppix and Ubuntu as examples. Red Hat _likes_ people to find issues and report them, and they will get fixed.
_Unless_ they don't do what Knoppix and Ubuntu do for a reason. Many times I've seen reasons not to autodetect things, and software RAID is one, depending on the conditions.
Separate issues - I'm able to use mdadm to add the firewire drive to the raid and it will re-sync, but if I leave the drive mounted and busy, every 2.6 kernel based distro I've tried so far will crash after several hours.
I've seen this issue with many other OSes as well.
I can get a copy by unmounting the partition, letting the raid resync then removing the external drive (being able to take a snapshot offsite is the main point anyway).
Once you do this, you must manually tell the system to trust it again. Otherwise, it will assume the drive was taken off-line for other reasons.
If some distros are trumping that logic and just blindly trusted it by default, then they deserve what they get from that logic -- even if it will only bite them in the ass 1 out of 20 times. I'll take the manual approach the other 19 times to avoid that 1. ;->
I've seen some bug reports about reiserfs on raid that may relate to the crash problem when running with the raid active.
Well, I'm ignorant on ReiserFS in general (I have limited experience dealing with it -- typically clean-ups, and the off-line tools never seem to be in-sync with the kernel -- though the filesystem seems good on its own, I'll admit), but maybe there is a race condition between ReiserFS and LVM2/MD.
This didn't happen under FC1 which never crashed between weekly disk swaps. There could also be some problems with my drive carriers.
It definitely could be a drive carrier issue. In reality, _only_ SATA (with its edge-connector design) and SCA SCSI can be trusted to stage transient power properly.
I typically like to use more reliable drive swapping. Again, either SCA SCSI or the newer SATA.
A firmware update on one type seems to have changed things but none of the problems are strictly reproducible so it is taking a long time to pin anything down.
Well, I wish you the best of luck.
There's really only one CIPE 'developer' and I don't think he has any particular interest in any specific distributions.
Could you _please_ explain the lack of 2.6 support until later in 2004 being a "distro-specific" issue? Red Hat, SuSE and many others just "moved on" and didn't bother to return, despite repeat attempts to get CIPE working in late 2003 through early 2004.
If anyone else was talking about it, and in any other place than the CIPE mailing list, I'm not surprised that it did not have useful results.
From what I've now read, people _were_ aware of it from fall of 2003 onward, kernel 2.6-test was out, and basically no one worked on it.
You use the one that works and has a long history of working until the replacement handles all the needed operations.
I don't think you understand what I just said. The standards-compliant version can _not_ always handle the exact functionality of the variant from the standard.
Many times, what people think is so-called "proven" is actually quite broken. Anyone who exchanged tarballs between Linux and Solaris, Irix and other systems using GNU Tar typically ran into such issues.
POSIX compliance exists for a reason. GNU Tar, among many other Linux utilities, has deviated over the years. Things must break to bring that deviation back to standard.
I think the LibC4/5 forks and the return to GLibC 2 were a perfect example. And it doesn't take a rocket scientist to realize why GNU gave the reins to Cygnus (now Red Hat) on GCC 3, because GCC 2's C++ was quite the wasteland.
A committee decision isn't always the most reliable way to do something even if you follow the latest of their dozens of revisions.
I don't think you realize that many times it's not a 'committee decision' that causes the problem in the first place. Sometimes Linux utilities are just a bit too "eccentric" or introduce their own "extensions."
No, but I assume that Gnu tar will be available anywhere I need it.
On Linux, yes. The problem is that it doesn't interact well with other systems in many cases.
Given that I've compiled it under DOS, linked to both an aspi scsi driver and a tcp stack that could read/feed rsh on another machine that seems like a reasonable assumption. I can't think of anything less likely to work...
Unfortunately GNU Tar doesn't exactly handle its own extensions well on different platforms. ;->
So which is more important when I want to read something from my 1990's vintage tapes?
If GNU Tar even reads some of them! You should read up on GNU Tar. ;->
Maybe, maybe not. I always set up backups on filesystem boundaries anyway so I can prevent them from wandering into CD's or NFS mounts by accident, but I can imagine times when you'd want to include them and still do correct incrementals.
There are some defaults that are just dangerous. That's one of them.
You aren't following the scenario. The drives worked as shipped. They were running Centos 3.x which isn't supposed to have behavior-changing updates. I did a 'yum update' from the nicely-running remote boxes that didn't include a kernel and thus didn't do a reboot immediately afterwards.
You should have tested this in-house _first_.
I normally test on a local system, then one or a few of the remotes, make sure nothing breaks, then proceed with the rest of the remotes. So, after all that, I ended up with a flock of running remote boxes that were poised to become unreachable on the next reboot.
Again, you should have tested all this in-house _first_.
And even if I had rebooted a local box after the corresponding update, it wouldn't have had the problem because I would have either installed that one in place or assigned the IP from its own console after swapping the disk in.
But had you followed such a procedure, you would have discovered it.
But they could at least think about what a behavior change is likely to do in different situations, and this one is pretty obvious. If eth0 is your only network interface and you refuse to start it at bootup, remote servers that used to work become unreachable.
You might want that in the case where you want only a specific hardware address to access the network.
I will re-iterate: there are things in "Common Criteria" standardization that are affecting both RHEL and SLES.
I do understand the opposite problem that they were trying to fix where a change in kernel detection order changes the interface names and has the potential to make a DHCP server start on the wrong interface, handing out addresses that don't work. But, it's the kind of change that should have come at a version revision or along with the kernel with the detection change.
Maybe so. But I'm still waiting on you to detail when this change was, in fact, made. So far, I'm just going on your comments that you merely yum'd the updates from the proper repository.
Note that I did test everything I could, and everything I could have tested worked because the pre-shipping behavior was to include the hardware address in the /etc/sysconfig/networking/profiles/defaults/xxxx file, but to ignore it at startup.
Ahhh, now we're getting to it! After you did a "yum update", did you check for any ".rpmsave" files?
So even when I tested the cloned disks after moving to a 2nd box they worked. The 'partially replicated environment' to catch this would have had to be a local machine with its IP set while the drive was in a different box and then rebooted after installing an update that didn't require it. I suppose if lives were at stake I might have gone that far.
Maybe I've just been in too many environments where that's the deal, yes. And even when it's not lives, it's a "one shot deal" and I don't get a 2nd chance.
E.g., people complain about bugs in semiconductor designs, yet semiconductors aren't something like software where you build, run it, and know you've got bugs in 6-8 minutes. You have to go to layout, then fab it and then you get it back -- some 6-8 _weeks_ later if you're a major company (possibly 6-8 months if you're not).
So I tend to err on the side of making sure my formal testing is actually well thought out.
You are right, of course. I take responsibility for what happened along with credit for catching it before it caused any real downtime (which was mostly dumb luck from seeing the message on the screen because I happened to be at one of the remote locations when the first one was rebooted for another reason).
And that's good. If you're going to make a mistake, at least do it on a minimal number of systems. I've seen far too many people assume something will work and push it out to all.
Still, it gives me a queasy feeling about what to expect from vendors - and I've been burned the other direction too by not staying up-to-the-minute with updates so you can't just skip them.
If you want to make a comparison of the Linux world to any other, at least the "worst" Linux vendors are still better at patching than any other OS.
Hmmm, now I wonder if the code was intended to use the hardware address all along but was broken as originally shipped. It would be a bit more comforting if it was included in an update because someone thought it was a bugfix instead of someone thinking it was a good idea to change currently working behavior.
It might be that it was disabled -- possibly by yourself during config -- but then an update changed that. Again, doing a "find / -name '*.rpmsave'" (quoted so the shell doesn't expand the glob) is almost a mandatory step for myself anytime I upgrade. RPM is very good at dumping out those files when it can't use an existing config file or script that has been modified.
On Thu, 2005-05-26 at 01:20 -0500, Bryan J. Smith wrote: <snip>
there was some question about mysql support in CentOS-4 and RHEL-4 ... I would like to correct an error:
But Centos 4 includes it, and I assume RHEL4.
Nope, only MySQL 3.23.
RHEL-4 shipped with MySQL 4.1.x (and has a 3.23.58 client to work with older databases) ... the latest SRPMS are mysql-4.1.10a-1.RHEL4.1.src.rpm and mysqlclient10-3.23.58-4.RHEL4.1.src.rpm.
CentOS-4 has the identical programs {as with all other items, except LOGOS, trademarked text, yum, and where updates are done} in its base :)
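You can verify on any CentOS-4 box with rpm (package names follow the SRPMS above):

  rpm -q mysql-server mysqlclient10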
On Thu, 2005-05-26 at 01:20, Bryan J. Smith wrote:
In this instance, it is about building OS versions with options that required UTF-8 (etc.) character set support along with a perl version that didn't handle it correctly,
In a nutshell, disabling the UTF-8 default locale fixes the problem for ASCII/ISO8859 Perl programs.
No, that only fixes the problem of not being able to handle the default character set. It does not make explicit conversions work correctly, as needed by Mime-tools, etc. Maybe this doesn't fit RedHat's definition of a bug, but it is still broken behavior, fixed in the upstream version that they only provide if you do a full distro change.
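A quick litmus test for the explicit-conversion case (a one-liner; Encode is the standard module bundled with perl 5.8, and this is only illustrative of the class of round-trips that 5.8.0 mishandled in places):

  # decode UTF-8 bytes to characters, re-encode as Latin-1
  perl -MEncode -e 'print encode("iso-8859-1", decode("utf8", "caf\xc3\xa9")), "\n"'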
We've already covered why it isn't reasonable to run those. But why can't there be an application upgrade to 4.x on a distribution that is usable today,
Definitely not! The whole reason why RHEL is very well trusted is because Red Hat sticks with a version and then backports any necessary fixes. Trust me, it actually takes Red Hat _more_work_ to do this, but they do it to ensure _exact_ functionality over the life of the product.
If you believe that, you have to believe that Red Hat's programmers are always better than the original upstream program author. I'll agree that they are good and on the average do a good job, but that stops far short of saying that they know better than the perl (etc.) teams what version you should be running.
Once Red Hat ships a package version in RHEL, unless they are unable to backport a fix, they do _not_ typically move forward. Again, SLAs, exact operation to an anal power, and _never_ "feature upgrades."
If you want that, that's what Fedora Core is for.
So, you want a working application, take an incomplete kernel. I understand that's the way things are. I don't understand why you like it.
Allow multiple versions of apps in the update repositories, I think.
Again, massive wrench into Red Hat SLAs.
Why can't we explicitly update to an app version beyond the stock release if we want it, and then have yum (etc.) track that instead of the old one?
SLAs.
OK, that limits what RedHat might offer. We are sort-of talking about Centos here, as well as how other distributions might be better. Is there a reason that a Centos or third-party repository could not be arranged such that an explicit upgrade could be requested to a current version, which would then be tracked like your kernel-xxx-version is when you select smp/hugemem/unsupported?
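Even with the stock tools, a third-party repository is just another stanza yum tracks like any other -- a sketch, assuming a yum.conf-style config as on centos 3, and a made-up URL:

  # appended to /etc/yum.conf (centos 3 era yum)
  [backports]
  name=Locally-blessed newer app versions
  baseurl=http://mirror.example.com/centos3-backports/

Once a newer perl/mysql/dovecot lives there, 'yum update' follows it automatically; the open question is who does the regression testing.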
Unless you're like Microsoft, and you ship things that re-introduce old bugs, have unforeseen consequences, etc... Microsoft is notorious for "feature creep" and "historical loss" in their updates.
Realistically you are just substituting a different set of people making different compromises.
I'd be extremely conservative about changes that increase the chances of crashing the whole system (i.e. kernel, device drivers, etc.) and stay fairly close to the developer's version of applications that just run in user mode. Even better, make it easy to pick which version of each you want, but make the update-tracking system automatically follow what you picked. Then if you need a 2.4 kernel, perl 5.8.5 and mysql 4.1 in the same bundle you can have it.
And you are now going to run a suite of regression tests with these various combinations -- remember, with each added combination, you increase the number of tests _exponentially_ -- and guarantee an X-hour Service Level Agreement (SLA) on it?
There are times when you want predictable behavior, and times when you want correct behavior. When an upstream app makes changes that provide correct behavior but you are ensured of the old buggy behavior as a matter of policy, something is wrong.
In reality, what you're looking for is Fedora Core, not RHEL.
Well, FC1 seems like the only way to get the specific mix of working kernel and apps for certain things right now, but it is by definition a dead end - and not really up to date on the app side either.
I'm talking about the CIPE author, who had to be involved to write the 1.6 version, not an RPM maintainer, who probably couldn't have.
Not to burst your bubble, but most Red Hat developers go beyond just being "maintainers." Many actively participate in many project developments. Red Hat used to actively include CIPE in the kernel, and test it as their standard VPN solution.
Hence my surprise at their change of direction.
Excuse me? The developer didn't have to wait for a "release" distro to look at what issues were happening with kernel 2.6 -- let alone the late kernel 2.5 developments or the months upon months of 2.6-test releases. For some reason you seem to believe this "scenario" is something only CIPE runs into?
It's something a decision to change kernels runs into. The CIPE author didn't make that decision.
Various kernel 2.6-test testing in early Fedora Development showed that CIPE was totally broken for 2.6. And there are similar threads in 2003, while 2.6 was in 2.6-test, where people were talking about the lack of any CIPE compatibility.
I just don't remember seeing any discussion of this on the CIPE mailing list which is the only place it might have been resolved.
I don't think you even understand the issue here. CIPE wasn't just made incompatible because of some "minor interface change" made in an odd-ball, interim 2.6 developer release. Kernel 2.6 was changed _massively_ from 2.4, and things like CIPE required _extensive_ re-writes! Hans knew this, as did most other people, about the same time -- Fall 2003 when the kernel 2.6-test releases were coming out!
I don't see how anyone but Olaf Titz could have made the necessary changes, and I don't see why he would have done so with appropriate timing for the FC2 release unless someone involved in the release made him aware of the planned changes.
This has absolutely *0* to do with Red Hat or any distributor, _period_!
The distribution decided to change the kernel version and you don't see how this affects the usability of included packages - or the need to coordinate such changes with the authors of said packages?
I can understand people backing away from a changing interface.
??? I don't understand what you meant by that at all ???
An interface is supposed to be a form of contract among programmers that is not changed. Linus has consistently refused to freeze his interfaces, hence the lack of binary driver support from device vendors, and frankly I'm surprised at the number of open source developers that have continued to track the moving target. How interesting can it be to write the same device driver for the third time for the same OS?
And, as much as you want this to not be about RH/Fedora policies, you are then stuck with something unnecessarily inconvenient because of their policy of not upgrading apps within a release.
Fedora Core does, probably a little more so than Red Hat Linux prior.
How much change is going to happen in the lifetime of an FC release?
[back to firewire/raid]
That's not the issue - I don't expect a hot-plug to go into the raid automatically. I do want it to pair them up on a clean reboot as it would if they were both directly IDE connected. So far nothing has.
That is _exactly_ the issue! Once you remove a disk from the volume, you have to _manually_ re-add it, even if you powered off and re-connected the drive. Once the system has booted without the drive just once, it doesn't connect it automagically.
No, that isn't the issue on a simple reboot. A drive that is connected when you go down cleanly and is still connected when you restart shouldn't be handled differently just because there is a different type of wire connecting it.
Separate issues - I'm able to use mdadm to add the firewire drive to the raid and it will re-sync, but if I leave the drive mounted and busy, every 2.6 kernel based distro I've tried so far will crash after several hours.
I've seen this issue with many other OSes as well.
It didn't happen with FC1 on the same box/same drives.
I can get a copy by unmounting the partition, letting the raid resync then removing the external drive (being able to take a snapshot offsite is the main point anyway).
Once you do this, you must manually tell the system to trust it again. Otherwise, it will assume the drive was taken off-line for other reasons.
Agreed - I expect to have to mdadm --add and have a resync if I've done a --fail or --remove, or the hardware is disconnected.
I've seen some bug reports about reiserfs on raid that may relate to the crash problem when running with the raid active.
Well, I'm ignorant on ReiserFS in general (I have limited experience dealing with it -- typically clean-ups, and the off-line tools never seem to be in sync with the kernel, though the filesystem itself seems good on its own, I'll admit), but maybe there is a race condition between ReiserFS and LVM2/MD.
Actually, I think there may be a really horrible race condition built into any journaled file system that counts on ordered writes and the software raid level that doesn't guarantee that across the mirrors which may be working at different speeds, handling error retries independently, etc. But nobody seems to be talking much about it...
This didn't happen under FC1 which never crashed between weekly disk swaps. There could also be some problems with my drive carriers.
It definitely could be a drive carrier issue. In reality, _only_ SATA (with its edge-connector design) and SCA SCSI can be trusted to stage transient power properly during a swap.
That's why I typically stick to the more reliable drive-swapping options -- again, either SCA SCSI or the newer SATA.
Ummm, great. When I started doing this with FC1, SATA mostly didn't work and firewire did, except you had to modprobe it manually and tell it about new devices. These are 250 gig drives and I have 3 externals for offsite rotation, so I can't afford scsi.
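(For anyone hitting this in the archives later: under FC1 the manual incantation was roughly the following -- module names from memory, and the add-single-device arguments are host/channel/id/lun, which vary per box.)

  modprobe ohci1394      # firewire host controller driver
  modprobe sbp2          # storage protocol; the disk then shows up as SCSI
  echo "scsi add-single-device 0 0 0 0" > /proc/scsi/scsi   # force a probe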
A firmware update on one type seems to have changed things but none of the problems are strictly reproducible so it is taking a long time to pin anything down.
Well, I wish you the best of luck.
Today it is running with the mirroring on under FC3, but I don't know if anything is really different yet. There has been a recent kernel update, I've updated firmware on this carrier, and run some diagnostics to fix drive errors that might have been caused by the earlier firmware or kernels. The funny thing is that I started doing this because I thought working with disks would be easier than tapes... But it is nice to be able to plug the drive carrier into my laptop's usb and be able to restore anything instantly (the drive case does both usb and firewire).
[...]
You use the one that works and has a long history of working until the replacement handles all the needed operations.
I don't think you understand what I just said. The standards compliant version can_not_ always handle the exact functionality of the variant from the standard.
Yes, when a standard is changed late in the game, that is to be expected. People will already have existing solutions and can only move away so fast - especially with formats of archived data.
Many times, what people think of as "proven" is actually quite broken. Anyone who exchanged tarballs between Linux and Solaris, IRIX and other systems using GNU Tar typically ran into such issues.
An issue easily resolved by compiling GNU tar for the target system.
POSIX compliance exists for a reason. GNU Tar, like many other utilities shipped with Linux, has deviated over the years. Things must break to bring that deviation back to the standard.
POSIX is the thing that changed here. And GNU tar has nothing to do with Linux other than being included in some distributions that also include a Linux kernel. I'm too lazy to look up the availability dates but I used GNUtar myself long before Linux. I agree that forward-looking, the current POSIX spec is useful, but the 'a' in tar is about archives that exist from a time when it wasn't.
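If memory serves, newer GNU tar versions even let you pick the dialect per archive, so you can write strictly POSIX output for exchange and still read the old tapes (flag spelling from the GNU tar 1.14-era manual, so double-check it):

  tar --format=posix -cf exchange.tar somedir/   # strict POSIX (pax) output
  tar --format=gnu -cf oldstyle.tar somedir/     # the classic GNU dialect
  tar -tvf vintage.tar                           # reading autodetects either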
So which is more important when I want to read something from my 1990's vintage tapes?
If GNU Tar even reads some of them! You should read up on GNU Tar. ;->
If you are reading the star author's comments, try to duplicate the situation yourself. The worst-case issue with GNU tar is that you have to repeat a restore of an incremental to get back a directory that was created between a full and an incremental with the same name that an ordinary file had at the time of the full (or maybe that's backwards -- at least your data is all there and you can restore it). For several years while the star author was posting this, star would have completely missed copying many changed files in an incremental. He's done some work in the last few months that probably fixes it, but I doubt that is in current distributions yet.
Here's the real test that you should try if you are even thinking about trusting incrementals:
1.) Make a full run of a machine with nearly full filesystems.
2.) Delete a bunch of files, and add enough new ones that the old/new total would not fit.
3.) Rename some directories that contain old files.
4.) Make an incremental. Repeat if you plan multi-level incrementals.
5.) Restore the full and subsequent incremental(s) to bare metal.
If you get a working machine with exactly the same files in the same places, including your old files under the directories with new names, your plan will work. GNU tar gets all of this right with the --listed-incremental form, at least from the mid-90's through recent distros, as long as you don't need magic file attributes (i.e. it might not do everything SELinux expects). And amanda depends on this behavior.
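Spelled out, the sequence I mean is roughly this (paths are examples; the .snar snapshot file is where GNU tar keeps its state between levels):

  # level 0 (full); tar creates home.snar as it goes
  tar --listed-incremental=/var/backups/home.snar -cf full.tar /home
  # level 1: work on a copy of the snapshot so the level-0 state is preserved
  cp /var/backups/home.snar /var/backups/home.snar.1
  tar --listed-incremental=/var/backups/home.snar.1 -cf incr1.tar /home
  # bare-metal restore: extract in order, with the snapshot pointed at /dev/null
  tar --listed-incremental=/dev/null -xf full.tar -C /
  tar --listed-incremental=/dev/null -xf incr1.tar -C /

It's the extract-time incremental handling that replays the renames and deletes.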
You aren't following the scenario. The drives worked as shipped. They were running Centos 3.x which isn't supposed to have behavior-changing updates. I did a 'yum update' from the nicely-running remote boxes that didn't include a kernel and thus didn't do a reboot immediately afterwards.
You should have tested this in-house _first_.
I did. It worked.
I normally test on a local system, then one or a few of the remotes, make sure nothing breaks, then proceed with the rest of the remotes. So, after all that, I ended up with a flock of running remote boxes that were poised to become unreachable on the next reboot.
Again, you should have tested all this in-house _first_.
I did. It worked.
And even if I had rebooted a local box after the corresponding update, it wouldn't have had the problem, because I would have either installed that one in place or assigned the IP from its own console after swapping the disk in.
But had you followed such a procedure, you would have discovered it.
Actually, in retrospect, the funny part is that one of the main reasons for cloning the disks in the first place was so that I'd be testing a bit-for-bit duplicate of what was in production.
But they could at least think about what a behavior change is likely to do in different situations, and this one is pretty obvious. If eth0 is your only network interface and you refuse to start it at bootup, remote servers that used to work become unreachable.
You might want that in the case where you want only a specific hardware address to access the network.
Perhaps, but do you really think I'd change my mind about that well after the machines were deployed?
Maybe so. But I'm still waiting on you to detail when this change was, in fact, made. So far, I'm just going on your comments that you merely yum'd the updates from the proper repository.
That's because I didn't do a reboot along with the update, so it could have been any of several runs. It pretty much had to be from initscripts-7.31.18.EL-1.centos.1.i386.rpm which I see is dated April 18 in my download cache. How should I associate this with RHEL3/Centos3 revisions to describe it?
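If it helps pin it down, rpm can show the package's own changelog, which usually names the upstream erratum (plain rpm options, nothing exotic):

  rpm -q initscripts                         # what's installed now
  rpm -q --changelog initscripts | head -30  # recent changelog entries
  rpm -qp --changelog initscripts-7.31.18.EL-1.centos.1.i386.rpm | head -30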
Note that I did test everything I could, and everything I could have tested worked because the pre-shipping behavior was to include the hardware address in the /etc/sysconfig/networking/profiles/defaults/xxxx file, but to ignore it at startup.
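For anyone following along, the operative bit is the HWADDR line that redhat-config-network writes into the interface config (contents paraphrased from memory; the MAC here is made up, and the same lines end up in the profiles file I mentioned):

  # /etc/sysconfig/network-scripts/ifcfg-eth0
  DEVICE=eth0
  ONBOOT=yes
  BOOTPROTO=static
  IPADDR=192.168.1.10
  NETMASK=255.255.255.0
  HWADDR=00:0C:29:AB:CD:EF

The new initscripts compare HWADDR against the actual NIC at ifup time and refuse to bring the interface up on a mismatch; the old ones recorded it but ignored it. Clone the disk into a box with a different NIC and eth0 silently stays down.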
Ahhh, now we're getting to it! After you did a "yum update", did you check for any ".rpmsave" files?
No, none of my configs changed, just the way they were handled after the initscript revision.
Maybe I've just been in too many environments where that's the deal, yes. And even when it's not lives, it's a "one shot deal" and I don't get a 2nd chance.
I'll admit to being a little sloppy because the boxes are behind a load balancer and I know I can lose one in production without serious problems. But, an *exact* copy here didn't show any problem, and the updated remote machine didn't show any problem while still running. Everything looked like a go... I suppose I should have known that a new initscripts package could break booting, but RHEL3/Centos had a decent track record about that sort of thing so far.
It might be that it was disabled -- possibly by yourself during config. But then an update changed that. Again, doing a: find / -name '*.rpmsave'
is almost a mandatory step for me anytime I upgrade. RPM is very good at dumping out those files when it can't use an existing config file or script that has been modified.
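Or, to catch both directions in one pass (an .rpmnew is what RPM leaves behind when it does _not_ dare replace a config you've modified):

  find /etc -name '*.rpmsave' -o -name '*.rpmnew'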
None of the above. When I get a chance I'll compare the old/new version of the ifup steps to see if ignoring on a MAC mismatch was a new addition or if they fixed a broken comparison in the original. I'll feel much better about the whole thing if the check was there all along and they thought this was just a bugfix. Still, I have to wonder how many RH/Centos machines are out there in the same situation (IP set with redhat-config-network, then the disk or NIC moved, then a post April 18 update) just waiting to disappear from the network on the next reboot. It would also be interesting to see how RH support would respond when called about an unreachable box, but being a cheapskate running Centos, I wouldn't know.
On Thursday 26 May 2005 14:17, Les Mikesell wrote:
If you believe that, you have to believe that Red Hat's programmers are always better than the original upstream program author.
For the most part, the Red Hat crew is the best in the business. Or have you never heard of Jakub Jelinek, Alan Cox, Rik van Riel, and the many, many other top upstream developers that are employed in some capacity by Red Hat?
I'll agree that they are good and on the average do a good job, but that stops far short of saying that they know better than the perl (etc.) teams what version you should be running.
The perl team has no business telling me what version I should be running, either. What version I run is dependent upon many things; one of which is 'what version does my vendor support?'
So, you want a working application, take an incomplete kernel. I understand that's the way things are. I don't understand why you like it.
Long term version stability. There has to be a freeze point; Red Hat has chosen the well-documented 2-2-2 6-6-6 scheme, and sticks to its schedule, for the most part. Or, to put it very bluntly, just exactly which of the over a thousand packages are worth waiting on? And who decides which package holds up progress? CIPE, the example used here, is relatively insecure to begin with and interoperates with nobody. Better to use IPsec (which virtually everybody supports to a degree) than a relatively nonstandard VPN like CIPE (I'd go as far as to say that most of the other VPN solutions are in the same boat; what's needed on the server side is typically Microsoft-compatible PPP over L2TP over IPsec, which is so easy to set up on the Windows client side it isn't even funny). That's why for-purpose firewall/VPN appliance Linux dists (SmoothWall and Astaro, for instance) are not using anything but IPsec. I have a SmoothWall box myself, and it Just Works.
Is there a reason that a Centos or third-party repository could not be arranged such that an explicit upgrade could be requested to a current version which would then be tracked like your kernel-xxx-version is when you select smp/hugemem/unsupported?
Yes, a couple: 1.) Lack of manpower to do the tracking; 2.) Doesn't fit the purpose for which CentOS was targeted. CentOS is supposed to be 'I Can't Believe it's not Red Hat' enterprise Linux. That 'explicit upgrade' might break many many things.
For instance, suppose you want PostgreSQL 8 instead of the supplied 7.4. Ok, now you have a maintenance nightmare, as all kinds of packages link to libpq, and the current 8.0.x release has a bumped libpq soversion. So how do you deal with this without making it virtually impossible for the maintainers to maintain? (I know the answer to that, but it creates its own problems, and that is a compat-postgresql-libs RPM to supply the older libpq, but that doesn't tackle all the problems).
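(To see the scale of that on an installed box, rpm can list everything that depends on the old soname -- libpq.so.3 being the 7.4-era one, if I recall correctly:)

  rpm -q --whatrequires libpq.so.3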
Perl is another good example; upgrade perl and you could break all kinds of things. Or Python; upgrade python and break all kinds of things, including most of the system-config-* utilities. Or gcc. Or the kernel. Or glibc, for a real humdinger. No package in CentOS is in a vacuum; so how many combinations need to be supported? Suppose you have two choices of PHP version and three choices of PostgreSQL version (PHP 4 and 5; PostgreSQL 7.4, 8.0, and the upcoming 8.1). How many combinations of those will you support? Are you going to build six different sets of RPMs? (Since PostgreSQL can be built with a 'pl/PHP' stored procedure language, PostgreSQL and PHP depend upon each other.) And the other PLs, PL/Python and PL/Perl (yes, you can use Perl and Python to create stored procedures in PostgreSQL; addons are available to use Java, R, and even Bash in the backend), have even more dependencies.
Which combinations should be supported?
Now exponentiate by a factor of 1000 packages.
Not reasonable to do given the resources. Why not just use Fedora, since it tracks more or less what you want? Or, use gentoo. Or get really ambitious and do Linux From Scratch and make it just exactly how you want it.
There are times when you want predictable behavior, and times when you want correct behavior. When an upstream app makes changes that provide correct behavior but you are ensured of the old buggy behavior as a matter of policy, something is wrong.
What is 'correct' behavior? Doesn't correctness depend upon the customer? Is it possible that CentOS (and RHEL by extension) is Not For Everybody, but targeted to a particular set of customers?
In reality, what you're looking for is Fedora Core, not RHEL.
Well, FC1 seems like the only way to get the specific mix of working kernel and apps for certain things right now, but it is by definition a dead end - and not really up to date on the app side either.
If you want your custom versions of apps, then you can either do the legwork yourself (that is, backport your own security fixes) or pay someone to do it. If no one else wants it, then you will pay lots of money. But a volunteer project is under no obligation to fill the needs of every person who wants it 'their way' unless said person can afford to either do some legwork or pay someone to do some legwork or convince the Powers That Be that It Is Such A Good Idea that the project cannot do without it.
[...] Red Hat used to actively include CIPE in the kernel, and test it as their standard VPN solution.
Hence my surprise at their change of direction.
CIPE is not an industrial strength VPN and gives a false sense of security. Thus it needed to be removed since a false sense of security is worse than no security at all. Further, with the 2.6 kernel you get in-kernel IPsec, which is quite a bit more secure and is an actual standard.
It's something a decision to change kernels runs into. The CIPE author didn't make that decision.
No, like all others the CIPE author chose to pull an ostrich and hope to not be run over by the 2.6 Linux train. That, IMHO, was not wise on the part of the CIPE author. So Red Hat could either 1.) hold off deploying 2.6, with all its much-more-critical-than-CIPE features, while the CIPE author got his act together; 2.) drop CIPE and ship 2.6; or 3.) pay someone to fix CIPE, which, even according to you, would be very difficult. Which would you choose if you were in Red Hat's shoes? How important would CIPE be to you? There are thousands of similar things and similar decisions, and the choice sometimes is not easy.
I just don't remember seeing any discussion of this on the CIPE mailing list which is the only place it might have been resolved.
It is not Red Hat's job, or the kernel developers' jobs, to make it easy on the CIPE author.
I don't see how anyone but Olaf Titz could have made the necessary changes, and I don't see why he would have done so with appropriate timing for the FC2 release unless someone involved in the release made him aware of the planned changes.
If you want your software to coexist with others' software, it is your responsibility to be informed of the issues for your software; the other parties have no obligation to make it easy on you. There is no excuse for the CIPE community to not know what was happening, and it isn't the responsibility of the kernel developers or Red Hat to go hand-hold every author of every impacted module of all of the over 1000 packages shipped in Red Hat/ Fedora Core Linux.
An interface is supposed to be a form of contract among programmers that is not changed. Linus has consistently refused to freeze his interfaces, hence the lack of binary driver support from device vendors, and frankly I'm surprised at the number of open source developers that have continued to track the moving target.
Linus != Red Hat. This is a Linus issue; not a Red Hat issue. Thus the subject line. You can't blame Red Hat for something Linus caused.
Red Hat needed the features from 2.6 for other things; Red Hat needed to get on the 2.6 bandwagon for marketing purposes, too; CIPE's author was reluctant to make his software work with a kernel version that had been out for a long time; Red Hat had no choice but to drop CIPE. If CIPE wants in, CIPE has to play the game, too.
On Fri, 2005-05-27 at 20:13 -0400, Lamar Owen wrote:
What is 'correct' behavior? Doesn't correctness depend upon the customer? Is it possible that CentOS (and RHEL by extension) is Not For Everybody, but targeted to a particular set of customers?
Apparently when I say this, that RHEL/SLES has a different focus than just about any other distro out there, other people seem to hear it as I'm saying it's "better" (which I'm not).
I'm glad you actually heard what I actually said.
Linus != Red Hat. This is a Linus issue; not a Red Hat issue. Thus the subject line. You can't blame Red Hat for something Linus caused.
But he's blaming Red Hat for adopting kernel 2.6 "too early," whatever that means. Apparently he thinks that Red Hat should have held off on kernel 2.6 in Fedora until over a year after release and when CIPE was working, which would have pushed back RHEL4 until later this year as a result.
Red Hat needed the features from 2.6 for other things; Red Hat needed to get on the 2.6 bandwagon for marketing purposes, too; CIPE's author was reluctant to make his software work with a kernel version that had been out for a long time; Red Hat had no choice but to drop CIPE. If CIPE wants in, CIPE has to play the game, too.
It's a chicken-and-egg issue. You want to adopt newer items once they have been broadly tested and accommodated, but you typically don't get broad testing and accommodation until you adopt them.
The 6-6-6 model seems to be the best balance of this that I've ever seen, and Novell-SuSE seems to agree.
On 5/27/05, Bryan J. Smith b.j.smith@ieee.org wrote:
On Fri, 2005-05-27 at 20:13 -0400, Lamar Owen wrote:
What is 'correct' behavior? Doesn't correctness depend upon the customer? Is it possible that CentOS (and RHEL by extension) is Not For Everybody, but targeted to a particular set of customers?
Apparently when I say this, that RHEL/SLES has a different focus than just about any other distro out there, other people seem to hear it as I'm saying it's "better" (which I'm not).
I'm glad you actually heard what I actually said.
I get a chuckle out of this. You may not have actually said that the RedHat enterprise releases are better than other distros, but you have vigorously sought to prove RedHat totally blameless when confronted by the effects of their release choices (inclusions and omissions). When anyone dares to complain, SLA is offered as a panacea for all supposed failings.
I find RHEL/CentOS to be a blessing and a curse. It's certainly reliable (my desktop system has encountered no significant problems) and the CentOS list is a real gem, but what's going to happen when the next round of 2-2-2 6-6-6 hits? Actually, it's already underway. Will there be just as many functions dropped that people know and love and cannot relinquish without a lot of extra rework, and what new (only partially backwards compatible and perhaps still in their infancy) functions will be added that cause real grief and more rework for some portion of the community?
One thing that would help would be a roadmap. It shouldn't be necessary to pore over change logs to get an idea what is coming.
On Saturday 28 May 2005 00:37, Collins Richey wrote:
I get a chuckle out of this. You may not have actually said that the RedHat enterprise releases are better than other distros, but you have vigorously sought to prove RedHat totally blameless when confronted by the effects of their release choices (inclusions and omissions). When anyone dares to complain, SLA is offered as a panacea for all supposed failings.
Bryan has simply tried to balance against Red Hat bashers who seem to think that Red Hat cannot do anything right. Red Hat is not blameless; but neither is Red Hat a Demon Evil. Red Hat has done a lot of good for the open source community.
I find RHEL/CentOS to be a blessing and a curse.
I find computers in general to be a blessing and a curse. A blessing in that it keeps me employed; a curse in that that employment can be frustrating at times dealing with the people effects of computers (that is, computers are made by people; operating systems are written by people; computers are used by people: and people are not perfect, nor is anything made by people ever perfect).
It's certainly reliable (my desktop system has encountered no significant problems) and the CentOS list is a real gem, but what's going to happen when the next round of 2-2-2 6-6-6 hits? Actually, it's already underway. Will there be just as many functions dropped that people know and love and cannot relinquish without a lot of extra rework, and what new (only partially backwards compatible and perhaps still in their infancy) functions will be added that cause real grief and more rework for some portion of the community?
Track Fedora and make up your own mind; this is the way it works and is the original reason for the existence of RawHide (now known by the much more mundane name of Fedora Core Development). Jonathan Kamens, for instance, tracks RawHide on several machines that he uses on a daily basis, and he catches all kinds of issues by so doing. Otherwise, you can at least put off the decision for five years, until RHEL4 (and by extension CentOS4) are EOL'ed.
If you don't want to spend the time, effort, and money to track RawHide you can track the FC releases (don't bother with Fedora Core Test if you're not willing to track RawHide with it).
If you don't want to spend the time, effort, and money to track FC releases, then you really must approach things the classical IT way: read the release notes, and wait on the .1 to deploy at a minimum (in RHEL-speak, wait on the U1 at minimum). And run it on a development server set up identically to your production server before trying to migrate production. If you can't afford a development server you really have a problem (one site I work at is in this shape; it's a school, and in that case I roll out to them what I already know works, and in fact I waited on a rollout for the CentOS4 release so that I wouldn't be saddled with CentOS3 for the next four years). But you will get what you pay for, even running a 'free' release; if you don't want to spend the time beforehand, you WILL spend the same amount of time or more, and a whole lot more grief, afterwards if you ignore basic preparations. If you're still employed, that is. I was serious about the 'in my shop such could be grounds for dismissal' remark I made earlier; admins should not be so reckless, and will not be so reckless on my servers.
And you don't blindly cron a yum -y update, either. You stage yum updates on a dev box, too, or be prepared for breakage if someone upstream made a mistake (mistakes do and will happen). Things like the MegaRAID driver being upgraded; in that particular case you really must have an identical development server to fully mirror your production server, because the MegaRAID driver can work or not work based on chipset lot number, all other things being the same.
The flip side of the freedom of open source is the same as the flip side of any other freedom; with rights come responsibilities. If you want all decisions made for you, run Windows. And even then you run a pair of identical servers to roll out deployments (as people who have deployed Server 2k3 SP1 have found out the hard way), or you run considerable risks. Risk-cost-benefit analysis must be done, or you can get burnt, badly. It boils down to just how critical that server really is.
For example, a JIT-ERP (Just in Time Enterprise Resource Planning) server being actively used for production line scheduling in a busy factory with a couple of thousand line workers would be very critical, and probably would still be running RHAS 2.1 even now, since even RHEL3 is not well-tested enough for that application. Hmm, one might even still be running RHL 6.2EE on that kind of app, given the flak over gcc 2.96 (at the base of RHAS 2.1). A misstep in the deployment of an upgrade there could easily cause the entire IT department to lose their jobs; done very poorly, it could cost the whole business if the production line were taken down for an appreciable amount of time. And Linux would never be found in that factory again.
This is the market towards which Red Hat Enterprise Linux (R) is striving.
One thing that would help would be a roadmap. It shouldn't be necessary to pore over change logs to get an idea what is coming.
The roadmap exists, and it's called Fedora Core. You can watch it develop in real time. With as many projects and packages as there are represented in RHEL it would be impossible to accurately predict any future feature 18 months distant. Instead, you can track what they are trying to do (and you can track their backsteps!) by simply tracking RawHide. On a development machine, of course. If you don't have time to do this, hire someone who does.
On 5/28/05, Lamar Owen lowen@pari.edu wrote:
On Saturday 28 May 2005 00:37, Collins Richey wrote:
I get a chuckle out of this. You may not have actually said that the RedHat enterprise releases are better than other distros, but you have vigorously sought to prove RedHat totally blameless when confronted by the effects of their release choices (inclusions and omissions). When anyone dares to complain, SLA is offered as a panacea for all supposed failings.
Bryan has simply tried to balance against Red Hat bashers who seem to think that Red Hat cannot do anything right. Red Hat is not blameless; but neither is Red Hat a Demon Evil. Red Hat has done a lot of good for the open source community.
It's a little more than that. I find few people who consider RedHat to be a Demon Evil or that they can't do anything right, but I can understand the concern about some of their decisions which have made life difficult for (granted) a few, and it's not really helpful to demonize those who complain, as Bryan has done. In essence, RedHat is a business entity, and they make their decisions for business reasons.
I find RHEL/CentOS to be a blessing and a curse.
I find computers in general to be a blessing and a curse.
True enough. My biggest problem, philosophically at least with RedHat, is the now unified release: what's good for server users is what Desktop users get. I'm still looking for the binary distro that has a really stable base but offers functionality upgrades as they come along. Balance but not bleeding edge is the name of the game.
Yes, I know about Gentoo. I've run it almost since inception, and it's ideal in many respects.
I've enjoyed this discussion and learned a lot about RedHat that I didn't know. I thank you for your rational comments. Unlike Bryan, I think you realize that not everyone who disagrees with RedHat on a given topic is an ignoramus.
On Fri, 2005-05-27 at 22:37 -0600, Collins Richey wrote:
I get a chuckle out of this. You may not have actually said that the RedHat enterprise releases are better than other distros, but you have vigorously sought to prove RedHat totally blameless when confronted by the effects of their release choices (inclusions and omissions).
Poor assumption.
If you read back through the thread, you note people using the terms "broken" or "stupid" or "adopted too early" or other rhetoric. I merely tried to explain how things aren't.
When anyone dares to complain, SLA is offered as a panacea for all supposed failings.
By "failings," what do you mean? That's the problem.
I find RHEL/CentOS to be a blessing and a curse. It's certainly reliable (my desktop system has encountered no significant problems) and the CentOS list is a real gem, but what's going to happen when the next round of 2-2-2 6-6-6 hits? Actually, it's already underway.
Correct.
Will there be just as many functions dropped that people know and love and cannot relinquish without a lot of extra rework, and what new (only partially backwards compatible and perhaps still in their infancy) functions will be added that cause real grief and more rework for some portion of the community?
As always. This isn't new in the Red Hat world. If you don't like the approach, then don't use RHEL/CentOS -- look for something else.
Even *I* don't deploy RHEL/CentOS or even Fedora for everything.
One thing that would help would be a roadmap. It shouldn't be necessary to pore over change logs to get an idea what is coming.
The roadmap is Fedora Core. And things change as Red Hat finds things unmaintainable.
If it doesn't ship in Fedora Core, Red Hat doesn't care about it in RHEL. Red Hat only regression tests the included packages with themselves as well as against popular, binary-only programs. That's part of the problem.
And it's _never_ going to change. It's just impossible to regression test all sorts of 3rd party packages and guarantee SLAs.
On Fri, 2005-05-27 at 19:13, Lamar Owen wrote:
On Thursday 26 May 2005 14:17, Les Mikesell wrote:
If you believe that, you have to believe that Red Hat's programmers are always better than the original upstream program author.
For the most part, the Red Hat crew is the best in the business. Or have you never heard of Jakub Jelinek, Alan Cox, Rik van Riel, and the many, many other top upstream developers that are employed in some capacity by Red Hat?
I think we've beaten these topics to death, but since it is kind of fun if you don't take it too seriously: Which of these guys knows how to make perl do character set conversions correctly better than the perl team?
I'll agree that they are good and on the average do a good job, but that stops far short of saying that they know better than the perl (etc.) teams what version you should be running.
The perl team has no business telling me what version I should be running, either. What version I run is dependent upon many things; one of which is 'what version does my vendor support?'
Sigh... at this point it is "how many versions does the vendor support"? And the issue is that the perl version (among many other things) that does a certain operation correctly is only included with a kernel version that has features missing and broken.
So, you want a working application, take an incomplete kernel. I understand that's the way things are. I don't understand why you like it.
Long term version stability. There has to be a freeze point; Red Hat has chosen the well-documented 2-2-2 6-6-6 scheme, and sticks to its schedule, for the most part. Or, to put it very bluntly, just exactly which of the over a thousand packages are worth waiting on? And who decides which package holds up progress? CIPE, the example used here, is relatively insecure to begin with and interoperates with nobody.
I don't see how you can call setting up a WAN with many CIPE nodes, then finding it unavailable in the next release 'long term stability'.
Better to use IPsec (which virtually everybody supports to a degree) than a relatively nonstandard VPN like CIPE (I'd go as far as to say that most of the other VPN solutions are in the same boat; what's needed on the server side is typically Microsoft-compatible PPP over L2TP over IPsec, which is so easy to set up on the Windows client side it isn't even funny). That's why for-purpose firewall/VPN appliance Linux dists (SmoothWall and Astaro, for instance) are not using anything but IPsec. I have a SmoothWall box myself, and it Just Works.
Can you run it through NAT routers? I have locations where the end point is already NATed by equipment I don't control. CIPE doesn't mind and the blowfish encryption is pretty CPU-friendly. And again, it might be "long-term stability" if this had already been a choice in several prior versions so you didn't have to upgrade OS revs on machines in several countries on the same day to keep your machines connected.
Is there a reason that a Centos or third-party repository could not be arranged such that an explicit upgrade could be requested to a current version which would then be tracked like your kernel-xxx-version is when you select smp/hugemem/unsupported?
Yes, a couple: 1.) Lack of manpower to do the tracking; 2.) Doesn't fit the purpose for which CentOS was targeted. CentOS is supposed to be 'I Can't Believe it's not Red Hat' enterprise Linux. That 'explicit upgrade' might break many many things.
Whatever happened to the Caos distribution? I had some hopes that they would combine a very stable kernel/libc combo with very up-to-date apps.
For instance, suppose you want PostgreSQL 8 instead of the supplied 7.4. Ok, now you have a maintenance nightmare, as all kinds of packages link to libpq, and the current 8.0.x release has a bumped libpq soversion. So how do you deal with this without making it virtually impossible for the maintainers to maintain? (I know the answer to that, but it creates its own problems, and that is a compat-postgresql-libs RPM to supply the older libpq, but that doesn't tackle all the problems).
So how is it better that when you do get Postgresql 8 in a distribution you will likely get a largely untested kernel along with it? Or that you compile your own version and end up with potentially unique problems that every user has to solve individually?
Which combinations should be supported?
Now exponentiate by a factor of 1000 packages.
I used to laugh at the people running windows servers because they would not even try to run more than one application per machine. But that was when I compiled a lot of stuff by hand. Now that I've gotten lazy (or smart...) and try to stick to binary distributions, I don't laugh any more because now my Linux boxes are in exactly the same shape. I can't count on being able to upgrade any single application without breaking another one.
Not reasonable to do given the resources. Why not just use Fedora, since it tracks more or less what you want? Or, use gentoo. Or get really ambitious and do Linux From Scratch and make it just exactly how you want it.
Fedora breaks everything at once, so it isn't what I want. I want "something like" being able to install a Centos 3.x base, then selectively update certain apps to current-fedora level. A lot of this would work with just a source rpm rebuild - but what I'm really after is something that once done would still automatically pull updates when the selected packages were fixed.
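Half the plumbing for that already exists in yum itself, if only someone maintained the rebuilt packages. A repo stanza along these lines (the repo name and URL are hypothetical; includepkgs= is from the yum.conf man page of recent versions, though I'd have to check how far back it goes):

  [fresh-apps]
  name=Selected current apps rebuilt for CentOS 3
  baseurl=http://example.org/fresh-apps/centos3/
  enabled=1
  includepkgs=perl* mysql*

With includepkgs=, only the named packages may ever come from that repo; everything else keeps coming from base, and a plain 'yum update' keeps tracking the selected ones automatically.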
There are times when you want predictable behavior, and times when you want correct behavior. When an upstream app makes changes that provide correct behavior but you are ensured of the old buggy behavior as a matter of policy, something is wrong.
What is 'correct' behavior? Doesn't correctness depend upon the customer? Is it possible that CentOS (and RHEL by extension) is Not For Everybody, but targeted to a particular set of customers?
I think character set conversion is something where the correctness can be determined objectively. And perl on Centos3 and Centos4 will do two different things. I'll take the perl team's word for it being made correct in 5.8.3 and up.
If you want your custom versions of apps, then you can either do the legwork yourself (that is, backport your own security fixes) or pay someone to do it. If no one else wants it, then you will pay lots of money. But a volunteer project is under no obligation to fill the needs of every person who wants it 'their way' unless said person can afford to either do some legwork or pay someone to do some legwork or convince the Powers That Be that It Is Such A Good Idea that the project cannot do without it.
I have a feeling that the 2.6 kernel will become usable before that happens this time around, but maybe by the next time RH rushes out a largely broken release there will be viable competition in Ubuntu and other distributions that will keep a working kernel under a fast release schedule for the apps.
It's something a decision to change kernels runs into. The CIPE author didn't make that decision.
No, like all others the CIPE author chose to pull an ostrich and hope to not be run over by the 2.6 Linux train. That, IMHO, was not wise on the part of the CIPE author.
I don't see how the CIPE author has any reason to care one way or another whether any particular distribution includes his package or not. Why should he?
So Red Hat could either 1.) hold off deploying 2.6, with all its much-more-critical-than-CIPE features, while the CIPE author got his act together; 2.) drop CIPE and ship 2.6; or 3.) pay someone to fix CIPE, which, even according to you, would be very difficult. Which would you choose if you were in Red Hat's shoes?
Well, if I had previously included CIPE, promoted its use, and wanted anyone to think the distribution had long term stability...
How important would CIPE be to you? There are thousands of similar things and similar decisions, and the choice sometimes is not easy.
Well, now we know how much value they place on maintaining interoperability with their own older products. Don't forget that when planning for the future.
It is not Red Hat's job, or the kernel developers' jobs, to make it easy on the CIPE author.
Please explain how the CIPE author has any vested interest in this. I would have expected Red Hat to try to provide their existing users a way to upgrade one site at a time and remain compatible. Now I don't expect much at all.
An interface is supposed to be a form of contract among programmers that is not changed. Linus has consistently refused to freeze his interfaces, hence the lack of binary driver support from device vendors, and frankly I'm surprised at the number of open source developers that have continued to track the moving target.
Linus != Red Hat. This is a Linus issue; not a Red Hat issue. Thus the subject line. You can't blame Red Hat for something Linus caused.
Red Hat made the decisions about if and when to ship it.
Red Hat needed the features from 2.6 for other things; Red Hat needed to get on the 2.6 bandwagon for marketing purposes, too; CIPE's author was reluctant to make his software work with a kernel version that had been out for a long time; Red Hat had no choice but to drop CIPE. If CIPE wants in, CIPE has to play the game, too.
CIPE doesn't have to play any game, but a version for 2.6 is available and has been for some time. And before you say things had to be in Fedora before shipping in RHEL, look at the MySQL version included in RHEL4.
I think I asked before what things are measurably better in 2.6 and didn't get any answers. Are there any?
On Sat, 2005-05-28 at 00:54 -0500, Les Mikesell wrote:
On Fri, 2005-05-27 at 19:13, Lamar Owen wrote:
On Thursday 26 May 2005 14:17, Les Mikesell wrote:
Is there a reason that a Centos or third-party repository could not be arranged such that an explicit upgrade could be requested to a current version which would then be tracked like your kernel-xxx-version is when you select smp/hugemem/unsupported?
Yes, a couple: 1.) Lack of manpower to do the tracking; 2.) Doesn't fit the purpose for which CentOS was targeted. CentOS is supposed to be 'I Can't Believe it's not Red Hat' enterprise Linux. That 'explicit upgrade' might break many many things.
Whatever happened to the Caos distribution? I had some hopes that they would combine a very stable kernel/libc combo with very up-to-date apps.
The cAos foundation is alive and well at http://www.caosity.org/ ... although the CentOS Project is not part of the cAos foundation any longer. They just released cAos-2.
They do have a fairly long release cycle and they do incorporate newer packages into the mix. In fact ... they might be a closer fit to what you are looking for, although they have a newer kernel than CentOS, at 2.6.11.6.
For instance, suppose you want PostgreSQL 8 instead of the supplied 7.4. Ok, now you have a maintenance nightmare, as all kinds of packages link to libpq, and the current 8.0.x release has a bumped libpq soversion. So how do you deal with this without making it virtually impossible for the maintainers to maintain? (I know the answer to that, but it creates its own problems, and that is a compat-postgresql-libs RPM to supply the older libpq, but that doesn't tackle all the problems).
So how is it better that when you do get Postgresql 8 in a distribution you will likely get a largely untested kernel along with it? Or that you compile your own version and end up with potentially unique problems that every user has to solve individually?
For certain items (php5, postfix with mysql support compiled in, a kernel with some new/different features turned on) we have created a CentOS Plus repo for CentOS-4 ... but the number of things we can do there is limited by several things:
(1) Resources - It takes 50gb now, and at the final state probably closer to 100gb, to mirror CentOS. We are averaging 16.50 TB transferred monthly just to do the distro, updates and supply the public mirrors. We have to somewhat limit what we add to the mirrors so that we can mirror it effectively. We only have so many servers and so much bandwidth ... and CentOS Base has priority.
(2) Time - Someone has to regression test and maintain the added programs on CentOS-4 ... and those items DO NOT get nearly the stability / regression tests that RH puts in the RHEL-4 base. We only have so many developers (none of whom get paid a dime for doing anything for CentOS) ... and again, CentOS Base has priority.
(3) Number of Changes - If something requires other major items to have to be upgraded (i.e., a new version of glibc, libstdc++, python, etc.) then we will not add it, even to CentOS Plus. The end result, even with items installed from CentOS Plus, still needs to be CentOS-4.
Which combinations should be supported?
Now exponentiate by a factor of 1000 packages.
exactly ... CentOS certainly can't support that.
I used to laugh at the people running windows servers because they would not even try to run more than one application per machine. But that was when I compiled a lot of stuff by hand. Now that I've gotten lazy (or smart...) and try to stick to binary distributions, I don't laugh any more because now my Linux boxes are in exactly the same shape. I can't count on being able to upgrade any single application without breaking another one.
Not reasonable to do given the resources. Why not just use Fedora, since it tracks more or less what you want? Or, use gentoo. Or get really ambitious and do Linux From Scratch and make it just exactly how you want it.
Fedora breaks everything at once, so it isn't what I want. I want "something like" being able to install a Centos 3.x base, then selectively update certain apps to current-fedora level. A lot of this would work with just a source rpm rebuild - but what I'm really after is something that once done would still automatically pull updates when the selected packages were fixed.
Gentoo might just be the exact choice you are looking for ... you can have whichever kernel you want ... and only upgrade the packages that you want. They have a huge number of supported packages ... and they move forward fairly quickly. You can have some packages at the ~x86 level (newer, testing level) and others at the x86 level (normal stable) ... you can even specify some packages to hold back to specific levels behind the current stable.
If you set up your portage and package configurations correctly ... when you upgrade an individual package, all dependencies are figured out and installed.
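A rough sketch of what I mean (the package atoms here are just examples):

  # /etc/portage/package.keywords -- take selected packages from ~x86 (testing)
  dev-lang/perl ~x86
  # /etc/portage/package.mask -- hold a package back below a given version
  >dev-db/postgresql-7.4.8

Then a plain 'emerge --update dev-lang/perl' pulls that one package forward along with whatever dependencies it needs, and leaves the rest of the system alone.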
There are times when you want predictable behavior, and times when you want correct behavior. When an upstream app makes changes that provide correct behavior but you are ensured of the old buggy behavior as a matter of policy, something is wrong.
What is 'correct' behavior? Doesn't correctness depend upon the customer? Is it possible that CentOS (and RHEL by extension) is Not For Everybody, but targeted to a particular set of customers?
I think character set conversion is something where the correctness can be determined objectively. And perl on Centos3 and Centos4 will do two different things. I'll take the perl team's word for it being made correct in 5.8.3 and up.
But the perl team didn't verify that those changes will work with the rest of the packages that are in RHEL-3 or RHEL-4 ... and don't break anything. I do not want to upgrade and have other things not work. The bottom line is, RHEL does not normally change major packages to new versions in a release cycle ... if it ships with Gnome 2.2 and OpenOffice.org 1.1.2 ... that is going to be the gnome and OOo throughout the lifetime of that product ... they are not going to upgrade that to gnome 2.10 (or even gnome 2.4) or OOo 2 ... ever.
When people get RHEL (and CentOS) in the enterprise, they add things like Oracle and build/install customized CRM / ERP programs costing millions of dollars into it ... they can't have something change that breaks that once they get it working. They want a stable OS that gets security updates for those machines ... not a new program that makes them have to recompile all the code they based their business on.
If you want your custom versions of apps, then you can either do the legwork yourself (that is, backport your own security fixes) or pay someone to do it. If no one else wants it, then you will pay lots of money. But a volunteer project is under no obligation to fill the needs of every person who wants it 'their way' unless said person can afford to either do some legwork or pay someone to do some legwork or convince the Powers That Be that It Is Such A Good Idea that the project cannot do without it.
I have a feeling that the 2.6 kernel will become usable before that happens this time around, but maybe by the next time RH rushes out a largely broken release there will be viable competition in Ubuntu and other distributions that will keep a working kernel under a fast release schedule for the apps.
Let's be clear here ... Red Hat couldn't care less about Ubuntu, Gentoo, Slackware, Debian, Mandriva, etc. They added 2.6 kernel support for one reason ... because it was in SLES 9 and they were taking a beating in the Enterprise Market. (And it was time for a new release ... 18 months from RHEL-3.)
They didn't rush; they were the last to put in a 2.6 kernel :) ... but it does have some issues. It works fine for me, and for the majority of the people who use it. There are known kernel issues that will be fixed in U1.
On Saturday 28 May 2005 07:54, Johnny Hughes wrote:
(1) Resources - It takes 50gb now, and at the final state probably closer to 100gb, to mirror CentOS. We are averaging 16.50 TB transferred monthly just to do the distro, updates and supply the public mirrors. We have to somewhat limit what we add to the mirrors so that we can mirror it effectively. We only have so many servers and so much bandwidth ... and CentOS Base has priority.
On a different note, I had heard rumors that the CentOS and Scientific Linux teams might work together. Any substance there (being that there are things Scientific Linux has that I like and use, but they seem to be pretty much duplicating your effort)? It would seem to be a good fit if the teams can work together and pool resources (particularly, the SL team has lots of bandwidth available).
hey!
Lamar Owen wrote:
teams might work together. Any substance there (being that there are things Scientific Linux has that I like and use, but they seem to be pretty much
What are these 'things' that are not provided via the CentOS Distribution base ?
- K
On Saturday 28 May 2005 16:31, Karanbir Singh wrote:
Lamar Owen wrote:
teams might work together. Any substance there (being that there are things Scientific Linux has that I like and use, but they seem to be pretty much
What are these 'things' that are not provided via the CentOS Distribution base ?
Referencing SL3 and CentOS 3 (as I haven't run SL4 as yet) there were some scientific applications and some Java stuff, eclipse for one, part of cluster suite for another, included. Lessee, https://www.scientificlinux.org/distributions/30x/features/ is the reference. GFS, Eclipse, Cluster Suite, OpenAFS, ksh93, a set of 'tweak' RPMs (my favorite being the serial console tweak RPM).
For SL4, the doc is at https://www.scientificlinux.org/distributions/4x/features/ and includes fewer addons. OpenAFS is the biggest of these, I guess. The same 'tweak' RPMs are there again (including my favorite, again, the serial console one). I duplicate its name and description here: SL_enable_serialconsole: This script makes all the changes necessary to send console output to both the serial port and the screen. It also creates a login prompt on the serial port and allows users to login at this prompt. Once the cluster suite and eclipse are available they probably will be rolled in.
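From what I remember of it, the script amounts to the standard serial console trio; roughly this, with the usual defaults rather than SL's exact values (the kernel path is just an example):

  # /boot/grub/grub.conf: kernel messages to both the screen and ttyS0
  kernel /vmlinuz-2.6.9-5.EL ro root=LABEL=/ console=tty0 console=ttyS0,9600n8
  # /etc/inittab: a login prompt on the serial port
  S0:2345:respawn:/sbin/agetty ttyS0 9600 vt100
  # and permit root logins there
  echo ttyS0 >> /etc/securetty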
The Fermi version of SL 3 had included a packaged JRE and was very attractive for that, but later releases have not had that, and instead have pointers to download it from Sun.
Pine also is in the SL dists.
On Saturday 28 May 2005 01:54, Les Mikesell wrote:
On Fri, 2005-05-27 at 19:13, Lamar Owen wrote:
For the most part, the Red Hat crew is the best in the business. Or have you never heard of Jakub Jelinek, Alan Cox, Rik van Riel, and the many, many other top upstream developers that are employed in some capacity by Red Hat?
I think we've beaten these topics to death, but since it is kind of fun if you don't take it too seriously: Which of these guys knows how to make perl do character set conversions correctly better than the perl team?
Given time I believe Tom Lane (one of the PostgreSQL core developers and one of the key libjpeg developers -- and a Red Hat employee) could do it without question.
Sigh... at this point it is "how many versions does the vendor support"? And the issue is that the perl version (among many other things) that does a certain operation correctly is only included with a kernel version that has features missing and broken.
And what's the solution? Wait until upstream gets it 'correct' for 1000 pieces of upstream? One of the packages is going to always be broken, statistically speaking. You have to draw the line somewhere.
I don't see how you can call setting up a WAN with many CIPE nodes, then finding it unavailable in the next release 'long term stability'.
You then stick with the five year cycle of the release you have installed. You don't upgrade. In fact Red Hat specifically says not to 'upgrade' and to carefully roll out version increases with scratch new installs. Then, if you absolutely must have a feature only available in the latest release, you must do a cost-benefit analysis to determine which features are most important for you.
The five year supported cycle with no in-cycle version upgrades is the long term stability I'm talking about; IT Directors (I am one, so I can speak like one) want a stable five-year plan, and we do NOT want version upgrades that can break things. Things like OpenSSL, for instance, where the ABI breaks even between minor versions, are unacceptable for mission critical servers (which IS the target audience for RHEL!).
The perl code change you mention could have broken many things; Red Hat or any other Enterprise Linux vendor must freeze packages at some point to get any reasonable amount of QA in; and you really don't want to mess up the pot in a security release.
Microsoft-compatible PPP over L2TP over IPsec, which is so easy to set up on the Windows client side it isn't even funny). That's why for-purpose firewall/VPN appliance Linux dists (SmoothWall and Astaro, for instance) are not using anything but IPsec. I have a SmoothWall box myself, and it Just Works.
Can you run it through NAT routers?
Yes. NAT traversal is fully supported.
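With the 2.6 in-kernel IPsec and the ipsec-tools racoon daemon, NAT-T is basically one knob (a fragment, not a whole config; the peer address is made up, and your kernel needs NAT-T support built in):

  # /etc/racoon/racoon.conf (fragment)
  remote 203.0.113.5 {
          exchange_mode main;
          nat_traversal on;   # ESP gets encapsulated in UDP through the NAT
          proposal {
                  encryption_algorithm 3des;
                  hash_algorithm sha1;
                  authentication_method pre_shared_key;
                  dh_group 2;
          }
  }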
Whatever happened to the Caos distribution? I had some hopes that they would combine a very stable kernel/libc combo with very up-to-date apps.
CentOS and cAos are no longer associated. cAos is at caosity.org and is still doing their thing.
For instance, suppose you want PostgreSQL 8 instead of the supplied 7.4.
So how is it better that when you do get Postgresql 8 in a distribution you will likely get a largely untested kernel along with it?
Just because the corner case that impacts you doesn't work does not mean the kernel was largely untested; there were in fact many months of testing behind the kernel that is in RHEL4/CentOS4. Your particular case was not tested; did you bother to test the Fedora Core kernels leading up to it, knowing that Fedora is the 'testbed' for RHEL? There's no such thing as a free lunch; either you buy your lunch, you make your lunch, or you eat what someone gives you. And beggars cannot be choosers: if the person who gave you your lunch uses a seasoning you don't like and you complain, do you think the person is going to make all their lunches your way next time?
Or that you compile your own version and end up with potentially unique problems that every user has to solve individually?
When you compile your own you become responsible for what you have compiled. This means you lose support from Red Hat, if you have an SLA for RHEL. This means the developers of a free alternative don't have to support your version. You are indeed on your own. That is the cost of open source, unfortunately. You want support? Then you work within the framework you have purchased or have decided to use. You don't like the framework? You get a different one, and absorb the costs of the migration. Basic IT business.
I used to laugh at the people running windows servers because they would not even try to run more than one application per machine. But that was when I compiled a lot of stuff by hand. Now that I've gotten lazy (or smart...) and try to stick to binary distributions, I don't laugh any more because now my Linux boxes are in exactly the same shape. I can't count on being able to upgrade any single application without breaking another one.
That's right. It's not a Windows disease; it's a server disease, and the two items on the balance are version stability and features. The two are often mutually exclusive.
Fedora breaks everything at once, so it isn't what I want. I want "something like" being able to install a Centos 3.x base, then selectively update certain apps to current-fedora level. A lot of this would work with just a source rpm rebuild - but what I'm really after is something that once done would still automatically pull updates when the selected packages were fixed.
Like Johnny said, then you probably want something like Gentoo or a BSD.
I think character set conversion is something where the correctness can be determined objectively. And perl on Centos3 and Centos4 will do two different things. I'll take the perl team's word for it being made correct in 5.8.3 and up.
So they do different things. There are many many things in CentOS4 that will do different things from CentOS3; PostgreSQL, for instance, won't upgrade between the two versions, and you have to do a dump-initdb-restore between versions. When the great Apache divide occurred (1.3 to 2.0) that broke things; Sendmail version increases can break things; PHP version upgrades can break things; how is the perl example any different? More to the point: was it documented in the release notes? If not, had it been noted by the time the first update came out? Was it noted during the development cycle (Fedora Core being the development cycle)? You don't mean you tried to roll out a fully unsupported upgrade (Red Hat DOES NOT SUPPORT and WILL NOT support upgrading one RHEL version to another!) remotely, using the first release of a major version increment?
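For the record, the dance is (paths are Red Hat's defaults; run the dump with the OLD binaries, before the packages change out from under you):

  su - postgres -c 'pg_dumpall > /backup/pg74-everything.sql'  # with 7.4 binaries
  # upgrade the packages, move the old data directory aside, then:
  su - postgres -c 'initdb -D /var/lib/pgsql/data'
  service postgresql start
  su - postgres -c 'psql -f /backup/pg74-everything.sql template1'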
Well, if there is a critical feature you absolutely must have, then you really, really have a problem. I rolled out CentOS 4 on a mission-critical server, but only after I had been using Fedora Core 2 and 3 for a long period of time on other non-critical but similar servers (FC2 and FC3 are the development lines leading up to RHEL4, after all, and that means the 2.6.x line had gotten right at a year of testing at my site prior to rolling CentOS 4 out!). It is irresponsible to roll out releases that one has not tested to one's critical servers, and in my shop that could be grounds for dismissal.
I have a feeling that the 2.6 kernel will become usable before that happens this time around, but maybe by the next time RH rushes out a largely broken release there will be viable competition in Ubuntu and other distributions that will keep a working kernel under a fast release schedule for the apps.
Quite frankly, if RHEL4 were 'largely broken' it would not have been released. It is not a perfect release; none are. But it is a pretty good release and meets my needs very well: to the point that CentOS 4 is becoming the standard desktop and server Linux for PARI (as well as for me personally). Just wish a SPARC version existed for my cluster of 20 Ultra 30's.
I don't see how the CIPE author has any reason to care one way or another whether any particular distribution includes his package or not. Why should he?
Because his users might care?
Well, if I had previously included CIPE, promoted its use, and wanted anyone to think the distribution had long term stability...
RHEL3 has long term stability: within RHEL3's line (and RHEL3 is STILL BEING SOLD and SUPPORTED for five years from initial release) there is long term stability. Red Hat doesn't support upgrades from RHEL3 to RHEL4, and fully documents the lack of CIPE in the release notes, which everyone should read before deploying an OS anyway. This is the way Big IT works, likes to work, and needs to work. If you want the stability of a Big IT OS, then you must either take what's distributed or put the responsibility on your own shoulders. Or find a distribution that better meets your needs: like Johnny said, Gentoo might be your best shot, or if you like Ubuntu, go that way. It's Open Source and you're free to choose; that is, after all, what I like most about Open Source: I am free to choose what meets my needs, and can really do a cost-benefit study on the various options.
Well, now we know how much value they place on maintaining interoperability with their own older products. Don't forget that when planning for the future.
This is very true. And is simply one of the costs of doing business; they have to target their customers, and for the most part they are doing a great job of it. Yes, they make missteps (caching-nameserver anyone?) but the positives are far greater than the negatives, IMHO and for my particular purpose. YMMV. CIPE wasn't high enough on their customers' lists to override their other concerns; they made the executive decision to drop CIPE since it was going to be a pain to get it working, and they wanted to go another direction anyway.
Please explain how the CIPE author has any vested interest in this. I would have expected Red Hat to try to provide their existing users a way to upgrade one site at a time and remain compatible. Now I don't expect much at all.
Red Hat makes no secret about their complete non-support of major version RHEL upgrades; it's plastered heavily all over their release notes, and, in fact, you must supply a kernel command line parameter to get any upgrade functionality in RHEL. This should tell you something; I know it tells me 'upgrades aren't supported: if they work, you got lucky.'
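If memory serves, the magic word is 'upgradeany' at the installer boot prompt; something like:

  boot: linux upgradeany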
Further, even with Fedora upgrades aren't really supported (but, then again, the whole OS isn't supported).
This is the route Red Hat has chosen to go; for most of their paying customers it is the correct behavior.
Linus != Red Hat. This is a Linus issue; not a Red Hat issue. Thus the subject line. You can't blame Red Hat for something Linus caused.
Red Hat made the decisions about if and when to ship it.
And they were the last Enterprise Linux vendor to ship it. Their customers wanted the featureset 2.6 provided, and CIPE wasn't high on their list.
CIPE doesn't have to play any game, but a version for 2.6 is available and has been for some time. And before you say things had to be in Fedora before shipping in RHEL, look at the MySQL version included in RHEL4.
MySQL is more important than CIPE to the majority of Red Hat's customers, and MySQL 4 was considered a critical feature by Red Hat. I myself don't use MySQL (I use PostgreSQL, which to me is much better in every way). Red Hat was getting beat up for the whole MySQL problem (which was a MySQL AB caused problem), and they really had to correct it. That might very well bite them in the future.
I think I asked before what things are measurably better in 2.6 and didn't get any answers. Are there any?
Sure. LVM. IPtables. ATM support. SMP. NUMA. HyperThreading (which really isn't enterprise-grade yet, unfortunately). Sizably better performance in the core LAMP role. Seriously better performance (nearly twice as fast) in typical file server roles. Red Hat is much, much closer to upstream kernels in the 2.6 line, rather than the very heavily patched, nearly-2.6 RHEL3 2.4 kernels. Filesystem size no longer limited to 2TB. Support for more CPUs in SMP. Support for much larger physical memory. More devices and processes available for larger servers. Better interactive response for desktops.
See http://www.2cpu.com/articles/98_1.html and http://librenix.com/?inode=3972 for more. Yes, 2.6 is worth the pain for the gain. Both of these articles were written nearly a year before the release of RHEL4; sounds to me like it wasn't an 'untested' kernel. Also see http://www.infoworld.com/infoworld/article/04/01/30/05FElinux_1.html
But I do somewhat sympathize with your position; after all, I still have a mission-critical server running Red Hat Linux 5.2. Yes, 5.2. Long ago in the mists of time support stopped; but I have a mission-critical binary-only application (that will remain nameless) that won't run on anything higher. We're attempting to find a replacement, but none exists that is backwards compatible with some of our clients; we have the later version for newer clients, but keep the old server running for the clients that won't work with the newer version.
The reason? The app was actually written for Red Hat 4 (I've come full circle; I started out with Red Hat Linux 4 and am now, eight years later, running a clone of a Red Hat 4 distribution!) and the compat libs needed won't work on Red Hat 6 or higher. There was some work to get it running on Red Hat 6.2, but it wasn't stable. The 5.2 server runs like clockwork, but I have to be extremely careful to backport highly critical fixes (the 2.0.40 kernel, for instance). And I can't run any Internet-visible apps on it; it's too old; so old it doesn't even have SSH! So I had to pull in a second server to run the Internet-visible stuff. That was the cost. This app's support of Red Hat 4 was, by the way, one of the reasons I bought Red Hat Linux 4.1 in the first place.
Also, I am in a similar position with newer versions of software; in my case, there is a repository catering to my needs: KDE-Redhat. I need the features of KDE 3.4 (KStars in particular) and Rex does a great job keeping up. But I wouldn't presume to ask the core CentOS team to maintain a KDE 3.4 release and the over 500MB of needed files (it touches so many things that you must download a vast array of packages just to get KDE 3.4). If Rex (Dieter) weren't doing it, I would have the difficult choice of whether to do all that work myself or do without the features. In that particular case I would probably switch to using XEphem instead of KStars and maintain it myself, but even then I'd have to look at the decision very carefully.
I also have applications that would like to have Python 2.4 available, but CentOS 4 comes with Python 2.3. When one of the core Python apps I use (either GNUradio or Plone/Zope) begins requiring Python 2.4, I will have another choice to make: are the features of the newer versions of GNUradio and Plone worth the effort of maintaining my own Python 2.4 installation? I'm already maintaining my own fftw3 build (GNUradio requires the single-precision build, which is not the default), so I can do this. I maintained the PostgreSQL RPMs for five years (not anymore; people were demanding too much backwards compatibility for me to realistically continue volunteering my valuable time!), so I know what I'm doing. But the question distills to whether the features are worth it or not. And I do not expect the CentOS team or Red Hat to do any of this for me; if an upgrade breaks something I have done, then I'm on my own.
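The fftw3 case is a one-flag affair, which is what makes it tolerable; roughly (prefix illustrative):

  # GNUradio wants the single-precision fftw3, not the default double
  ./configure --enable-float --prefix=/usr/local
  make && make install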
I have also done this before with Tcl; I run an application server framework called OpenACS at a couple of sites, and it requires Tcl 8.4.6+ built for multithreaded operation; this is not the default version nor is it the default configure switch.
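Same story there; the non-default switch is roughly (version and prefix illustrative):

  cd tcl8.4.6/unix
  ./configure --enable-threads --prefix=/usr/local/tcl-threaded
  make && make install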
And I don't use any of the RPMs for Zope; Zope is way too customized at my site for that.
On 5/28/05, Lamar Owen lowen@pari.edu wrote:
On Saturday 28 May 2005 01:54, Les Mikesell wrote:
I think I asked before what things are measurably better in 2.6 and didn't get any answers. Are there any?
Sure. LVM.
I'm curious about this. At work we haven't finished our evaluation of RHEL3/RHEL4 (CentOS is out of the question, since SLA is king here). Most of our servers and desktops are RH9 legacy, and we use LVM on all of them. It's my understanding that RHEL4 (CentOS 4) only offers LVM2, and it doesn't appear that you can extend an ext3 filesystem using the LVM2 tools. If that is indeed true, why would LVM be "measurably better"?
On Saturday 28 May 2005 14:41, Collins Richey wrote:
While I am no LVM expert, it seems that ext2online does what you want. My CentOS4 box here has ext2online included; I have not had a chance to try it out, so I can't comment on how well it works.
However, what I have read is that the LVM2 metadata format is more robust and removes many of the limitations of LVM1. In trying to check on this, I looked for specific information on how this is the case, but have not been able to find any at this point.
The ext2/3 filesystem wasn't designed for online resizing, so it's quite nice that one can do it at all; but it appears the LVM2 approach is to rely on the e2fsprogs to do the work. So, taking your question literally, no, there is no LVM2 tool to resize the filesystem; you would do a two-step: resize the logical volume, then resize the filesystem with the e2fsprogs. But, having never had opportunity to do this, I have no direct experience with it.
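To make the two-step concrete, a sketch (device names are the RHEL defaults and purely illustrative; I haven't run this myself, so try it on something disposable first):

  lvextend -L +10G /dev/VolGroup00/LogVol00   # step 1: grow the logical volume
  ext2online /dev/VolGroup00/LogVol00         # step 2: grow the mounted ext3 to fill it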
How was the CIPE author supposed to know that what would be released as FC2 would have a changed kernel interface?
My God, I actually can't believe you could even make such a statement. At this point, it's _futile_ to even really debate you anymore; you keep talking from a standpoint of "assumption" and "unfamiliarity," something a maintainer of a package would not be, unless he honestly didn't care.
I'm talking about the CIPE author, who had to be involved to write the 1.6 version, not an RPM maintainer, who probably couldn't have.
Fedora Core 2 (2004May) was released 6 months _after_ Linux 2.6 (2003Dec).
So how does any of this relate to the CIPE author, who didn't write CIPE for Fedora and almost certainly didn't have an experimental 2.6 kernel on some unreleased distribution, knowing that CIPE wasn't going to work? On the other hand, someone involved in building FC2 must have known, and I don't remember seeing any messages going to the CIPE list asking if anyone was working on it.
He did respond with 1.6 as quickly as could be expected after a released distribution didn't work with it and a user reported problems on the mailing list.
How can you blame this on distributions? Honestly, I don't see it at all!
Who else knew about the change? Do you expect every author of something that has been rpm-packaged to keep checking with Linus to see if he feels like changing kernel interfaces this month so as not to disrupt the FC release schedule?
Quit the cipe yapping already. The developer should really get his cipe module into the mainline kernel.
If you are really so miffed about no cipe, try this cipe source rpm for centOS4.
It will not build unless you have texinfo, openssl-devel and kernel-devel (for your current running kernel) packages installed.
If you need to build for a smp kernel, use the modified smp cipe spec file and the patch that I have attached. You will still need to download the cipe tarball from its site.
The patch keeps the rsa-keygen script, which creates the identity file used by pkcipe, from being run.
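Roughly, assuming a source rpm named as below:

  yum install texinfo openssl-devel kernel-devel   # build requirements (kernel-smp-devel for smp)
  rpmbuild --rebuild cipe-1.6.0-1.src.rpm          # filename illustrative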
Summary: CIPE - encrypted IP over UDP tunneling
Name: cipe-smp
Version: 1.6.0
Release: 1.centos4
Copyright: GPL
Group: System Environment/Daemons
Source: http://sites.inka.de/bigred/sw/%{name}-%{version}.tar.gz
BuildRoot: /var/tmp/%{name}-root
Patch0: cipe-no-rsa-keygen.patch
BuildRequires: kernel-smp-devel
BuildRequires: openssl-devel
BuildRequires: texinfo

%description
CIPE (the name is shortened from *Crypto IP Encapsulation*) is a package
for an encrypting IP tunnel device. This can be used to build encrypting
routers for VPN (Virtual Private Networks) and similar applications.

%prep
%setup -q
%patch0 -p1
%define kversion `uname -r`
%define machine `uname -m`
./configure --prefix=/usr --enable-cryptoapi \
    --with-linux=/usr/src/kernels/%{kversion}-%{machine}

%build
make

%install
rm -rf $RPM_BUILD_ROOT
make install BINDIR=$RPM_BUILD_ROOT/usr/sbin INFODIR=$RPM_BUILD_ROOT/usr/info \
    MODDIR=$RPM_BUILD_ROOT/lib/modules/%{kversion}/misc \
    sbindir=$RPM_BUILD_ROOT/usr/sbin bindir=$RPM_BUILD_ROOT/usr/bin
gzip $RPM_BUILD_ROOT/usr/info/cipe.info

%post
depmod -a
install-info --info-file=/usr/info/cipe.info.gz \
    --dir-file=/usr/info/dir \
    --item="* cipe: (cipe)cipe. CIPE - encrypted IP over UDP tunneling"
echo "Please run 'rsa-keygen /etc/cipe/identity' to create your identity file"

%preun
install-info --delete \
    --info-file=/usr/info/cipe.info.gz \
    --dir-file=/usr/info/dir \
    --item="* cipe: (cipe)cipe. CIPE - encrypted IP over UDP tunneling"

%postun
depmod -a

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root)
%doc COPYING CHANGES README README.key-bug tcpdump.patch samples
/usr/info/cipe.info.gz
/usr/sbin/ciped-c
/usr/sbin/pkcipe
/usr/bin/rsa-keygen
/lib/modules/*/misc/cipc.ko

%changelog
* Sat Dec 16 2000 Olaf Titz <olaf@bigred.inka.de>
- integrated pkcipe

* Fri Jul 21 2000 Mirko Zeibig <mz@webideal.de>
- created spec-file
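And per the %post message above, after installing the resulting binary rpm you would run:

  rsa-keygen /etc/cipe/identity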
On Thu, 2005-05-26 at 06:52, Feizhou wrote:
Quit the cipe yapping already. The developer should really get his cipe module into the mainline kernel.
If you are really so miffed about no cipe, try this cipe source rpm for centOS4.
Thanks - as you probably know, the long-winded discussion isn't really about CIPE specifically so much as the philosophy behind bundling a few thousand things together and then trying to please everyone with a blanket policy about maintaining backwards compatibility vs. bug fixes vs. new features. And while I realize that there may never be a good solution, I'll be really happy if some new options fall out of it, like this being included in a CentOS kernel-unsupported-something-or-other package.
Les Mikesell wrote:
If there were not a kernel-devel rpm, it would have been really tough to get you an rpm that actually builds, let alone works. I don't use cipe, so I don't know whether the thing actually works.
The developer really should follow the netfilter model: get the kernel module into the mainline kernel, with the userspace program provided separately, just as iptables is provided separately from the kernel modules that make up netfilter.
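That split is what lets the userland tool rev independently of the kernel; the familiar pattern, for illustration (port number hypothetical):

  modprobe ip_tables                                # kernel side: in mainline, loaded on demand
  iptables -A INPUT -p udp --dport 1234 -j ACCEPT   # userland side: shipped as a separate package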