[CentOS] Design changes are done in Fedora

Warren Young

wyml at etr-usa.com
Tue Jan 6 03:22:00 UTC 2015


On Jan 3, 2015, at 2:17 PM, Les Mikesell <lesmikesell at gmail.com> wrote:

> On Fri, Jan 2, 2015 at 5:52 PM, Warren Young <wyml at etr-usa.com> wrote:
>> 
>> 
>> where is the part of EL7 that doesn’t add columns of numbers correctly?
> 
> If the program won't start or the distribution libraries are
> incompatible (which is very, very likely) then it isn't going to add
> anything.

That’s ABI compatibility again, and it isn’t a CentOS-specific thing.

The primary reason ABI breakage gets tolerated on Linux is that most things on your Linux box were built from an SRPM that contains readable — and therefore recompilable — source code.

I’ve successfully rebuilt an old SRPM to run on a newer OS several times.  It isn’t always easy, but it is always possible.
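
The mechanics are simple, even when the porting work isn’t.  A minimal sketch, assuming rpm-build is installed on the new OS; the package name is a placeholder:

  # Rebuild the old source package against the new OS's toolchain.
  # (With a modern rpmbuild, binary RPMs land under ~/rpmbuild/RPMS.)
  rpmbuild --rebuild oldpackage-1.2.3.el5.src.rpm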

This fact means there is much less incentive to keep ABIs stable in the Linux world than in the Windows and OS X worlds, where raw binaries are often all you get.

If you do have binary-only programs, they’re likely commercial software of some sort, so consider whether you should have a maintenance contract for them; that way, you can get newer binaries as you move to new platforms.

If a support contract is out of the question, you have the option of creating a VM to run the old binaries today.  

Docker will eat away at this problem going forward.  You naturally will not already have Dockerized versions of apps built 10 years ago, and it may not be practical to create them now, but you can start insisting on getting them today so that your future OS changes don’t break things for you again.
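
The container route is about as cheap as legacy support gets.  A sketch, assuming Docker is installed; the image tag, host path, and binary name are all illustrative:

  # Run the old binary inside a CentOS 5 userland, sharing a host directory.
  docker run --rm -v /opt/legacy:/opt/legacy centos:5 /opt/legacy/bin/oldapp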

> I built a
> CentOS7 to match my old CentOS5 pair.   It can do the same thing, but
> there is no way to make them actively cluster together so the new one
> is aware of the outstanding leases at cutover or to have the ability
> to revert if the new one introduces problems.

I believe that was ISC’s fault, not Red Hat’s.

Red Hat did their job: they supported an old version of ISC dhcpd for 7 years.  ISC broke their failover API during that period.  You can’t blame Red Hat for not going in and reverting whatever change caused the breakage when they finally did upgrade dhcpd with EL6.

This is a consequence of pulling together software developed by many different organizations into a software distribution, as opposed to developing everything in-house.

You are free to backport the old EL5 dhcpd SRPM to EL7.
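
The raw material is a download away, since old source packages stay up on vault.centos.org.  A sketch; the exact file name below is a guess from memory, so check the SRPMS directory listing first:

  # Fetch the EL5-era dhcp source package, then rebuild it on the EL7 box.
  wget http://vault.centos.org/5.11/os/SRPMS/dhcp-3.0.5-36.el5.src.rpm
  rpmbuild --rebuild dhcp-3.0.5-36.el5.src.rpm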

Perhaps your point is that Red Hat should have either a) continued to distribute an 8-year-old version of dhcpd with EL7 [1], or b) somehow given you the new features of ISC dhcpd 4.2.5 without breaking anything?  If so, I take this point back up at the end.


[1] https://lists.isc.org/pipermail/dhcp-announce/2006-November/000090.html

> The ability to fail back is important, unless you think new software
> is always perfect.

You fall back in this case by turning off the EL7 dhcpd and going back to a redundant set of EL5 dhcpds.

All dhcpd needs to do here is help you to migrate forward.  As long as that doesn’t break, you have what you need, if not what you *want*.
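
Mechanically, the fallback is a handful of service commands, not a migration project.  A sketch, assuming stock service names on both platforms:

  # On the EL7 box: take the new server out of play.
  systemctl stop dhcpd
  systemctl disable dhcpd

  # On each EL5 box: bring the old pair back.
  service dhcpd start
  chkconfig dhcpd on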

>> The nature of embedded systems is that you design them for a specific task, with a fixed scope.  You deploy them, and that’s what they do from that point forward.
> 
> And those things
> span much longer than 10 years.  If you are very young you might not
> understand that.

My first look at the Internet was on a VT102.  I refer not to a terminal emulator, but to something that crushes metatarsals if you drop it on your foot.

I think I’ve got enough gray in my beard to hold my own in this conversation.

>> And yet, 90% of new software continues to *not* be developed in Java.
> 
> Lots of people do lots  of stupid things that I can't explain.

Just because you can’t explain it doesn’t mean it’s an irrational choice.

> But if
> numbers impress you, if you count android/Dalvik which is close enough
> to be the stuff of lawsuits, there's probably more instances of
> running programs than anything else.

There are more JavaScript interpreters in the world than Dalvik, ART [2], and Java® VMs combined.  Perhaps we should rewrite everything in JavaScript instead?

If we consider only the *ix world, there are more Bourne-compatible shell script interpreters than Perl, Python, or Ruby interpreters.  Why did anyone bother to create these other languages, and why do we spend time maintaining these environments and writing programs for them?

Why even bother with ksh or Bash extensions, for that matter?  The original Bourne shell achieved Turing-completeness in 1977.  There is literally nothing we can ask a computer to do that we cannot cause to happen via a shell script.  (Except run fast.)
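
To bring this thread full circle, here is a column of numbers being added correctly in nothing but Bourne-compatible shell; numbers.txt stands in for whatever input you have, one integer per line:

  # Sum a column of integers using only Bourne-era constructs.
  sum=0
  while read n
  do
    sum=`expr $sum + $n`
  done < numbers.txt
  echo $sum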

If you think I’m wrong about that, you probably didn’t ever use sharchives.


[2] http://en.wikipedia.org/wiki/Android_Runtime

>> Could be there are more downsides to that plan than upsides.
> 
> You didn't come up with portable non-java counterexamples

I already told you that I didn’t want to start a Java argument.  This isn’t the place for it.

Still, you seem to be Jonesing for one, so I will answer a few of your points, with the hope that, sated, you will allow this subtopic to die before it slides into the teeth of Godwin’s Law, as so many other advocacy threads have before it.

> elasticsearch, jenkins, opennms

Yes, Java has had several big wins.  So have C, C++, C#, Objective C, Perl, Python, Ruby, PHP, Visual Basic, R, PL/SQL, COBOL, and Pascal.

About half of those are machine-portable.  UCSD Pascal (1978!) even ran in a JRE-like VM.

Though Java is one of the top languages in the software world, that world is so fragmented that even its enviable spot near the top [3] means it’s got hold of less than a sixth of the market.

If that still impresses you, you aren’t thinking clearly about how badly outnumbered that makes you.  *Everyone* with a single-language advocacy position is badly outnumbered.  This is why I learn roughly one new programming language per year.


[3] http://www.tiobe.com/index.php/content/paperinfo/tpci/

> I'd add eclipse

Yes, much praise for the IDE that starts and runs so slowly that even we poor, limited humans with our 50-100 ms cycle times notice it.

It takes Eclipse 2-3x as long to start as Visual Studio on the same hardware, despite both having equally baroque feature sets and bloated disk footprints.  

(Yes, I tested it, several times.  I even gave VS a handicap by running its host OS under VMware.  It’s actually a double-handicap, since much of VS is written in .NET these days!)

Once the Eclipse beast finally stumbles into wakefulness, it’s as surly and slow to move as a teenager after an all-nighter.  I can actually see the delay in pulling down menus, for Ritchie’s sake.  *Menus*.  In *2015*.  Until Java came along, I thought slow menu bars were an historical artifact even less likely to make a reappearance than Classic Mac OS.

It isn’t just Eclipse that does this.  Another Java-based tool I use here (oXygen) shows similar performance problems.

Oh, and somehow both of these apps have managed to stay noticeably slow despite the many iterations of Moore’s Law since The Bubble, when these two packages were first perpetrated.

>> This method works for me:
>> 
>>  # scp -r stableserver:/usr/local/boost-1.x.y /usr/local
>>  # cd /usr/local
>>  # ln -s boost-1.x.y boost
>> 
>> Then build with CXXFLAGS=-I/usr/local/boost/include.
> 
> I don't think CMake is happy with that since it knows where the stock
> version should be and will find duplicates,

Only if you use CMake’s built-in Boost module to find it.  It’s easy to roll your own find module, one that always does the right thing for your particular environment.

The built-in Boost module is for situations where a piece of software needs *a* version of Boost, and isn’t particular about which one, other than possibly to set a minimum compatible version.
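
Even the stock module can be pointed at a private Boost, though.  A sketch, assuming a CMake-based project in the current directory; both variables are standard FindBoost hints:

  # Tell FindBoost to use the hand-installed tree and ignore the system copy.
  cmake -DBOOST_ROOT=/usr/local/boost -DBoost_NO_SYSTEM_PATHS=ON .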

> And then you have to
> work out how to distribute your binaries.  Are you really advocating
> copying unmanaged, unpackaged libraries around to random places?

I’ve never had to use any of the Boost *libraries* proper.  A huge subset of Boost is implemented as C++ templates, which compile into the executable.

If I *did* have to distribute libboost*.so, I’d just ship them in the same RPM that contains the binaries that reference those libraries.

This is no different than what you often see on Windows or Mac: third-party DLLs in c:\Program Files that were installed alongside the .exe, or third-party .dylib files buried in foo.app/Contents/Frameworks or similar.
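
Making the bundled copy win at run time is one linker flag.  A sketch; the app name, object file, and library choice are stand-ins:

  # Embed an $ORIGIN-relative rpath so the private .so files are found first,
  # then package bin/myapp and lib/libboost_*.so in the same RPM.
  g++ -o myapp main.o -L/usr/local/boost/lib -lboost_regex \
    -Wl,-rpath,'$ORIGIN/../lib'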

>> Seriously?  I mean, you actually believe that if RHEL sat still, right where it is now, never changing any ABIs, that it would finally usher in the Glorious Year of Linux?  That’s all we have to do?
> 
> Yes, they can add without changing/breaking interfaces that people use
> or commands they already know.    The reason people use RHEL at all is
> because they do a pretty good job of that within the life of a major
> version.

Red Hat avoids breaking APIs and ABIs within a major release by purposely not updating anything they don’t absolutely have to; they rarely add new features to old software.  Keeping interfaces stable and adding features are two different activities, and you can’t turn one into the other just by applying the verb “can”.  Red Hat “can” do all things, but Red Hat will not do all things.

(My 20% project if I get hired by Red Hat: design 1F @ 1MV capacitors, so that Red Hat can power North Carolina for a year or so, completely off the grid.  This project will allow Red Hat to move to a foundation model, completely supported by the interest from the invested proceeds of the first year’s energy sales.  (Yes, I did the arithmetic.))

Fantasy aside, Red Hat’s actual current operating model is how we get into a situation where a supported OS (EL5) will still be shipping Perl 5.8 two years from now, into a world where an increasing number of major Perl modules (e.g. Catalyst) refuse to run on anything less than Perl 5.10.  This is not an enviable position.

My company only recently stopped supporting SeaMonkey 1.0.9 in our web app because we finally dropped support for EL3 in our newest branch of the software.  (We can still build binaries for old branches on EL3, though there is little call to do so.)

The only reason we had to support such an ancient browser [4] in the first place is that it was what EL3 shipped with, and we decided it would be an embarrassment if we sent an old customer an RPM containing our web app for a platform whose native browser wouldn’t even load said web app.

This is what heroic levels of backwards compatibility buys, and you’re saying you want more of the same?


[4] Contemporaneous with — but in many ways inferior to — MSIE 7

> How can you possibly think that the people attracted to
> that stability only want it for a short length of time relative to the
> life of their businesses.

Just because X is good doesn’t mean 3X is better.  More Xs cost more $s.  Red Hat’s job gets harder and harder, the longer they push out the EOL date on an OS version.  That money has to come from somewhere.

How high do you think the cost of a RHEL sub can go before its users start abandoning it?  If Red Hat loses its market share, CentOS will slide into obscurity, too.

Even if Red Hat does this — say they push the EOL date of EL5 out to 2027, a full two decades — and even if you continue to use one of the free clones of RHEL, you’re going to pay the price for the upgrade eventually.  (If not you, then your successor.)  If you think EL5 to EL7 was a disaster, how do you plan on choking down two decades’ worth of change in a single lump?

Or were you planning on demanding that EL5 be supported forever, with no changes, except for magical cost-free features?

