On Wed, Dec 31, 2014 at 11:03 AM, Warren Young <wyml@etr-usa.com> wrote:
On Dec 29, 2014, at 10:07 PM, Les Mikesell <lesmikesell@gmail.com> wrote:
it's not necessary for either code interfaces or data structures to change in backward-incompatible ways.
You keep talking about the cost of coping with change, but apparently you believe maintaining legacy interfaces is cost-free.
Take it from a software developer: it isn’t.
OK, but who should make the extra effort: one developer, or the bazillion people affected by the change?
People praise Microsoft for maintaining ancient interfaces, and attribute their success to it, but it’s really the other way around: their success pays for the army of software developers it takes to keep a handle on the complexity that results from piling 20-30 years of change on top of the same base.
That's what it takes to build and keep a user base.
Most organizations cannot afford to create the equivalents of WOW64, which basically emulates Win32 on top of Win64. (Or *its* predecessor, WOW, which emulates Win16 on top of Win32.) That isn’t trivial to do, especially at the level Microsoft does it, where a whole lot of clever low-level code is employed to allow WOW64 code to run nearly as fast as native Win64 code.
It's hard to the extent that you made bad interface choices in the first place. Microsoft's job was hard. But Unix System V, which Linux basically emulates, wasn't so bad. Maybe a few size definitions could have been better.
Result? We cannot afford to maintain every interface created during the quarter century of Linux’s existence. Every now and then, we have to throw some ballast overboard.
And the user base that depended on them.
If your software is DBMS-backed and a new feature changes the schema, you can use one of the many available systems for managing schema versions. Or, roll your own; it isn’t hard.
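To make the "roll your own; it isn't hard" claim concrete, here is a minimal sketch of home-grown schema versioning using SQLite's `user_version` pragma. The table and migration statements are hypothetical; the pattern is the point: record the schema version in the database and apply only the migrations newer than it.

```python
import sqlite3

# Hypothetical migration list; each entry moves the schema up one version.
MIGRATIONS = [
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE customers ADD COLUMN email TEXT",
]

def upgrade(conn):
    """Apply any migrations newer than the version recorded in the DB."""
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for version, stmt in enumerate(MIGRATIONS, start=1):
        if version > current:
            conn.execute(stmt)
            # PRAGMA does not accept bound parameters, so interpolate the int.
            conn.execute(f"PRAGMA user_version = {version}")
    conn.commit()

conn = sqlite3.connect(":memory:")
upgrade(conn)
cols = [row[1] for row in conn.execute("PRAGMA table_info(customers)")]
print(cols)  # ['id', 'name', 'email']
```

Because the applied version is stored in the database itself, running `upgrade()` again is a no-op, so old and new releases of the software can share one upgrade path.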
Are you offering to do it for free?
This is one of the things my employer pays me to do. This is what I’m telling you: the job description is, “Cope with change.”
So either it "isn't hard", or "you need a trained, experienced, professional staff to do it". Big difference. Which is it?
I'm asking whether computer science has advanced to the point where adding up a total needs new functionality, or whether you'd want the same total, for the same numbers, that you would have gotten last year.
Mathematics doesn’t change. The business and technology worlds do. Your example is a non sequitur.
If you are embedding business logic in your library interfaces, something is wrong. I'm talking about things that are shipped in the distribution and the commands to manage them. The underlying jobs they do were pretty well established long ago.
How many single computers have to be up 24/7? I mean really.
All of our customer-facing services - and most internal infrastructure. Admittedly, not individual boxes - but who wants to have systems running concurrently with major differences in code base and operations/maintenance procedures?
If you have any form of cluster — from old-school shared-everything style to new-style shared-nothing style — you can partition it and upgrade individual nodes.
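The node-by-node upgrade described above can be sketched as a simple loop: drain a node, upgrade it, health-check it, and return it to rotation before touching the next one. The four callbacks here are hypothetical hooks standing in for whatever your load balancer and package manager actually provide.

```python
def rolling_upgrade(nodes, drain, upgrade, healthy, restore):
    """Upgrade cluster nodes one at a time, halting on the first failure."""
    done = []
    for node in nodes:
        drain(node)      # stop routing new work to this node
        upgrade(node)    # apply the OS/package update
        if not healthy(node):
            # Halt the rollout; the remaining nodes still run the old version.
            raise RuntimeError(f"{node} failed its post-upgrade check")
        restore(node)    # put the node back into rotation
        done.append(node)
    return done

# Tiny simulation standing in for real infrastructure hooks.
events = []
result = rolling_upgrade(
    ["node1", "node2", "node3"],
    drain=lambda n: events.append(("drain", n)),
    upgrade=lambda n: events.append(("upgrade", n)),
    healthy=lambda n: True,
    restore=lambda n: events.append(("restore", n)),
)
print(result)  # ['node1', 'node2', 'node3']
```

The design choice worth noting is the early halt: if a node fails its health check, the rest of the cluster is still serving on the old version, which is exactly the safety property a partitioned upgrade is supposed to buy you.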
Yes, everything is redundant. But when changes are not backwards compatible, it makes piecemeal updates way harder than they should be. Take something simple like the DHCP server in the distro. It allows for redundant servers - but the versions are not compatible. How do you manage that with individual node upgrades when they won't fail over to each other?
If your system isn’t in use across the world, you must have windows of low or zero usage where upgrades can happen. If your system *is* in use across the world, you likely have it partitioned across continents anyway.
How nice for you...
This means we’re still building binaries for EL3.
I have a few of those, but I don't believe that is a sane thing to recommend.
It depends on the market. A lot of Linux boxes are basically appliances. When was the last time you upgraded the OS on your home router? I don’t mean flashing new firmware — which is rare enough already — I mean upgrading it to a truly different OS.
Okay, so that’s embedded Linux; it doesn’t seem remarkable that such systems never change once deployed.
Which sort of points out that the wild and crazy changes in the mainstream distributions weren't all that necessary either...
This also means our software must *remain* broadly portable.
You'd probably be better off in java if you aren't already.
If you actually had a basis for such a sweeping prescription, 90% of software written would be written in Java.
I do. We have a broad mix of languages - some with requirements that force it, some kept just for historical reasons and the team that maintains them. The Java stuff has been much less problematic to port across systems, or to run as the same code under different OSes/versions at once. I don't think the C++ guys have even figured out a sane way to use a standard Boost version on two different Linuxes, even doing separate builds for them.
There’s a pile of good reasons why software continues to be written in other languages, either on top of other runtimes or on the bare metal.
Maybe. I think there's a bigger pile of not-so-good reasons that things aren't done portably. Java isn't the only way to be portable, but you don't see much on the scale of elasticsearch, jenkins or opennms done cross-platform in other languages.
No, don’t argue. I don’t want to start a Java flame war here. Just take it from a software developer, Java is not a universal, unalloyed good.
The syntax is cumbersome - but there are things like Groovy or JRuby that run on top of it. And there's a lot of start-up overhead, but that doesn't matter much to long-running servers.
Take systemd. You can go two ways here:
1. sysvinit should also be supported as a first-class citizen in EL7. If that’s your point, then just because the sysvinit code was already written doesn’t mean there isn’t a cost to continuing to maintain and package it.
2. sysvinit should never have been replaced. If that’s your position, you’re free to switch to a sysvinit-based OS, or fork EL6. What, sounds like work? Too costly? That must be because it isn’t free to keep maintaining old code.
Yes, I'm forced to deal with #1. That doesn't keep me from wishing that whatever code change had been done had kept backwards compatibility in the user interface commands and init scripts department.
I’ve never done much with Windows Server, but my sense is that they have plenty of churn over in their world, too.
Yes, there are changes - and sometimes mysterious breakage. But an outright abandonment of an existing interface that breaks previously working code is pretty rare.
Yes, well, that’s one of the things you can do when you’ve got a near-monopoly on PC OSes, which allows you to employ 128,000 people. [1]
And you only get that with code that keeps users instead of driving them away.
And conversely, they felt it was worth _not_ doing for a very, very long time. So can the rest of us wait until we have Google's resources?
You’re never going to have Google’s resources. Therefore, you will never have the *option* to roll your own custom OS.
So, cope with change.
What Google does points out how unsuitable the distro really is. I just don't see why it has to stay that way.