On Fri, Jan 2, 2015 at 5:52 PM, Warren Young wyml@etr-usa.com wrote:
OK, but should one developer make an extra effort, or should the bazillion people affected by the change have to?
That developer is either being paid by a company with their own motivations or is scratching his own itch. You have no claim on his time.
Agreed - but I'm not going to say I like his breakage.
Why do you believe this is a stringent requirement? I thought CentOS was the distro targeted at organizations staffed by competent technical professionals. That’s me. Isn’t that you, too?
Yes, but I'd rather be building things on top of a solid foundation than using planned obsolescence as job security, doing the same work over and over. And I'll admit I can't do it the right way with the approach Google uses of just tossing the distribution and its tools.
Mathematics doesn’t change. The business and technology worlds do. Your example is a non sequitur.
If you are embedding business logic in your library interfaces, something is wrong.
Once again you’re making non sequiturs.
Your example was that arithmetic doesn’t change, then you go off and try to use that to explain why EL7 is wrong. So, where is the part of EL7 that doesn’t add columns of numbers correctly?
If the program won't start or the distribution libraries are incompatible (which is very, very likely) then it isn't going to add anything.
Take something simple like the dhcp server in the distro. It allows for redundant servers - but the versions are not compatible. How do you manage that by individual node upgrades when they won't fail over to each other?
Is that hypothetical, or do you actually know of a pair of dhcpd versions where the new one would fail to take over from the older one when run as its backup? Especially a pair actually shipped by Red Hat?
It's my experience or I wouldn't have mentioned it. I built a CentOS 7 server to match my old CentOS 5 pair. It can do the same thing, but there is no way to make them actively cluster together so the new one is aware of the outstanding leases at cutover, or to have the ability to revert if the new one introduces problems.
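For anyone following along, the redundancy in question is ISC dhcpd's failover-peer mechanism; both peers must speak compatible versions of the (draft) failover protocol, which is where a mixed old/new pair can fall down. A minimal sketch of the primary side's config - the peer name, addresses, and ranges here are illustrative, not taken from the thread:

```
# /etc/dhcp/dhcpd.conf (primary server) -- illustrative values
failover peer "dhcp-ha" {
    primary;
    address 192.168.1.10;        # this server
    port 647;
    peer address 192.168.1.11;   # the secondary
    peer port 647;
    max-response-delay 30;
    max-unacked-updates 10;
    mclt 1800;                   # maximum client lead time, seconds
    split 128;                   # share the load 50/50
}

subnet 192.168.1.0 netmask 255.255.255.0 {
    pool {
        failover peer "dhcp-ha";
        range 192.168.1.100 192.168.1.200;
    }
}
```

The secondary carries a matching stanza declaring itself `secondary` with the addresses reversed. The lease-synchronization traffic between the two is exactly the part that has no guarantee of working across widely separated dhcpd releases.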
I’m not interested in the reverse case, where an old server could not take over from a newer one, because there’s no good reason to manage the upgrade that way. You drop the new one in as a backup, take the old one offline, upgrade it, and start it back up, whereupon it can take over as primary again.
The ability to fail back is important, unless you think new software is always perfect. Look through some changelogs if you think that...
Which sort of points out that the wild and crazy changes in the mainstream distributions weren't all that necessary either…
No. The nature of embedded systems is that you design them for a specific task, with a fixed scope. You deploy them, and that’s what they do from that point forward. (Routers, print servers, media streamers…)
Well yes. We have computers doing specific things. And those things span much longer than 10 years. If you are very young you might not understand that.
You'd probably be better off in java if you aren't already.
If you actually had a basis for making such a sweeping prescription like that, 90% of software written would be written in Java.
I do. ...The java stuff has been much less problematic in porting across systems - or running the same code concurrently under different OS's/versions at once.
And yet, 90% of new software continues to *not* be developed in Java.
Lots of people do lots of stupid things that I can't explain. But if numbers impress you, if you count android/Dalvik which is close enough to be the stuff of lawsuits, there's probably more instances of running programs than anything else.
Could be there are more downsides to that plan than upsides.
You didn't come up with portable non-java counterexamples to elasticsearch, jenkins, opennms, etc. I'd add eclipse, jasper reports and the Pentaho tools to the list too - all used here.
I don't think the C++ guys have even figured out a sane way to use a standard boost version on two different Linux distributions, even doing separate builds for them.
This method works for me:
# scp -r stableserver:/usr/local/boost-1.x.y /usr/local
# cd /usr/local
# ln -s boost-1.x.y boost
Then build with CXXFLAGS=-I/usr/local/boost/include.
I don't think CMake is happy with that, since it knows where the stock version should be and will find duplicates. And then you have to work out how to distribute your binaries. Are you really advocating copying unmanaged, unpackaged libraries around to random places?
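For what it's worth, CMake's stock FindBoost module can be steered away from the system packages with a couple of cache variables. A minimal sketch, assuming the symlinked tree from the earlier example and a hypothetical target named `myapp`:

```
# CMakeLists.txt fragment -- point FindBoost at the hand-installed
# tree and tell it to ignore the distro's copies entirely
set(BOOST_ROOT /usr/local/boost)
set(Boost_NO_SYSTEM_PATHS ON)
find_package(Boost REQUIRED COMPONENTS system filesystem)
include_directories(${Boost_INCLUDE_DIRS})
target_link_libraries(myapp ${Boost_LIBRARIES})
```

That handles the build; it doesn't answer the distribution question, since the resulting binaries still depend on finding that unpackaged tree (or rpath tricks) on every deployment host.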
I’ve never done much with Windows Server, but my sense is that they have plenty of churn over in their world, too.
Yes, there are changes - and sometimes mysterious breakage. But an outright abandonment of an existing interface that breaks previously working code is pretty rare.
Yes, well, that’s one of the things you can do when you’ve got a near-monopoly on PC OSes, which allows you to employ 128,000 people. [1]
And you only get that with code that keeps users instead of driving them away.
Seriously? I mean, you actually believe that if RHEL sat still, right where it is now, never changing any ABIs, that it would finally usher in the Glorious Year of Linux? That’s all we have to do?
Yes, they can add without changing or breaking interfaces that people use or commands they already know. The reason people use RHEL at all is because they do a pretty good job of that within the life of a major version. How can you possibly think that the people attracted to that stability only want it for a short length of time relative to the life of their businesses?