On Dec 29, 2014, at 10:07 PM, Les Mikesell <lesmikesell@gmail.com> wrote:
it's not necessary for either code interfaces or data structures to change in backward-incompatible ways.
You keep talking about the cost of coping with change, but apparently you believe maintaining legacy interfaces is cost-free.
Take it from a software developer: it isn’t.
People praise Microsoft for maintaining ancient interfaces, and attribute their success to it, but it’s really the other way around: their success pays for the army of software developers it takes to keep a handle on the complexity that results from piling 20-30 years of change on top of the same base.
Even having mobilized that army, a huge number of the problems with Windows come directly from the choice to maintain such a deep legacy of backwards compatibility.
Just one example: By default, anyone can write to the root of the C: drive on Windows. Why? Because DOS and Win16 allowed it, so a huge amount of software was written to expect that they could do it, too. Hence, the root of your Windows box’s filesystem is purposely left insecure.
Most organizations cannot afford to create the equivalents of WOW64, which basically emulates Win32 on top of Win64. (Or *its* predecessor, WOW, which emulates Win16 on top of Win32.) That isn’t trivial to do, especially at the level Microsoft does it, where a whole lot of clever low-level code is employed to allow WOW64 code to run nearly as fast as native Win64 code.
Meanwhile, over in the Linux world, a whole lot of the code is written by unpaid volunteers, and much of the rest by developers whose employers have no legal means of forcing their customers to pay for each and every seat of the software those developers create.
Result? We cannot afford to maintain every interface created during the quarter century of Linux’s existence. Every now and then, we have to throw some ballast overboard.
I’m not saying that CentOS should be killed off, and all its users be forced to pay for RHEL licenses. I’m saying that one of the trade-offs of using a free OS is that you have to pick up some of the slack on your end.
If your software is DBMS-backed and a new feature changes the schema, you can use one of the many available systems for managing schema versions. Or, roll your own; it isn’t hard.
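To make that concrete, here is a minimal roll-your-own sketch in Python, using SQLite's user_version pragma as the version counter. The table and columns are invented for illustration; the point is just that each migration moves the schema forward exactly one version, and already-applied migrations are skipped:

    import sqlite3

    # Each entry takes the schema from version N-1 to version N.
    MIGRATIONS = [
        "CREATE TABLE invoices (id INTEGER PRIMARY KEY, total REAL)",  # v1
        "ALTER TABLE invoices ADD COLUMN customer TEXT",               # v2
    ]

    def upgrade(db: sqlite3.Connection) -> None:
        current = db.execute("PRAGMA user_version").fetchone()[0]
        for version, ddl in enumerate(MIGRATIONS[current:], start=current + 1):
            db.execute(ddl)
            db.execute(f"PRAGMA user_version = {version}")  # record progress
            db.commit()

    db = sqlite3.connect("app.db")
    upgrade(db)  # safe to run at every startup; it only applies what's missing

The off-the-shelf tools (Liquibase, Flyway, Alembic, and friends) are this same idea with more safety rails.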
Are you offering to do it for free?
This is one of the things my employer pays me to do. This is what I’m telling you: the job description is, “Cope with change.”
I'm asking if computer science has advanced to the point where adding up a total needs new functionality, or if you would like the same total for the same numbers that you would have gotten last year.
Mathematics doesn’t change. The business and technology worlds do. Your example is a non sequitur.
How many customers for your service did you keep running non-stop across those transitions?
Most of our customers are K-12 schools, so we’re not talking about a 24/7 system to begin with.
That's a very different scenario than a farm of data servers that have to be available 24/7.
How many single computers have to be up 24/7? I mean really.
If you have any form of cluster — from old-school shared-everything style to new-style shared-nothing style — you can partition it and upgrade individual nodes.
If your system isn’t in use across the world, you must have windows of low or zero usage where upgrades can happen. If your system *is* in use across the world, you likely have it partitioned across continents anyway.
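A rolling upgrade doesn't have to be fancy, either. Here is the shape of it in Python; "lb-ctl" is a stand-in for whatever pulls a node out of your load balancer, not a real command, and the host names are made up:

    import subprocess, time

    NODES = ["node1.example.com", "node2.example.com", "node3.example.com"]

    def upgrade_node(node: str) -> None:
        # Pull one node out of rotation while its peers carry the load.
        subprocess.run(["lb-ctl", "drain", node], check=True)   # hypothetical LB tool
        subprocess.run(["ssh", node, "yum", "-y", "update"], check=True)
        subprocess.run(["ssh", node, "reboot"])                 # connection will drop
        time.sleep(120)                                         # crude: wait for it to return
        subprocess.run(["lb-ctl", "enable", node], check=True)

    for node in NODES:
        upgrade_node(node)  # one node at a time, so the service never goes dark

In real life you would replace the sleep with a health check, but the principle stands: no single box has to be up 24/7 for the service to be.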
The days of the critical single mainframe computer are fading fast. We’re going to get to a point where it makes as much sense to talk about 100% uptime for single computers as it does to talk about hard drives that never fail.
We rarely change out hardware or the OS at a particular site. We generally run it until it falls over, dead.
This means we’re still building binaries for EL3.
I have a few of those, but I don't believe that is a sane thing to recommend.
It depends on the market. A lot of Linux boxes are basically appliances. When was the last time you upgraded the OS on your home router? I don’t mean flashing new firmware — which is rare enough already — I mean upgrading it to a truly different OS.
Okay, so that’s embedded Linux; it doesn’t seem remarkable that such systems never change once deployed.
The thing is, there really isn’t a narrow, bright line between “embedded” and the rest of the Linux world. It’s a wide, gray line, covering a huge amount of the Linux world.
This also means our software must *remain* broadly portable.
You'd probably be better off in Java if you aren't already.
If you actually had a basis for a sweeping prescription like that, 90% of software would be written in Java.
There’s a pile of good reasons why software continues to be written in other languages, either on top of other runtimes or on the bare metal.
No, don’t argue. I don’t want to start a Java flame war here. Just take it from a software developer: Java is not a universal, unalloyed good.
Everyone’s moaning about systemd... at least it’s looking to be a real de facto standard going forward.
What do you expect to pay to re-train operations staff -just- for this change, -just- to keep things working the same?
You ask that as if you think you have a no-cost option in the question of how to address the churn.
I ask it as if I think that software developers could make changes without breaking existing interfaces. And yes, I do think they could if they cared about anyone who built on those interfaces.
Legacy code isn’t free to keep around.
Take systemd. You can go two ways here:
1. sysvinit should also be supported as a first-class citizen in EL7. If that’s your point, then just because the sysvinit code was already written doesn’t mean there isn’t a cost to continuing to maintain and package it.
2. sysvinit should never have been replaced. If that’s your position, you’re free to switch to a sysvinit-based OS, or to fork EL6. What, sounds like work? Too costly? That must be because it isn’t free to keep maintaining old code.
I’ve never done much with Windows Server, but my sense is that they have plenty of churn over in their world, too.
Yes, there are changes - and sometimes mysterious breakage. But an outright abandonment of an existing interface that breaks previously working code is pretty rare.
Yes, well, that’s one of the things you can do when you’ve got a near-monopoly on PC OSes, which allows you to employ 128,000 people. [1]
When you only employ 6,500 [2] and a huge chunk of your customer base doesn’t pay you for the use of the software you write, you necessarily have to do business differently.
[1] http://en.wikipedia.org/wiki/Microsoft
[2] http://en.wikipedia.org/wiki/Red_Hat
Were you paying attention when Microsoft wanted to make XP obsolete? There is a lot of it still running.
Were you paying attention when Target’s XP-based POS terminals all got pwned?
Stability and compatibility are not universal goods.
Well, some things you have to get right in the first place - and then stability is good.
Security changes, too.
10 years ago, 2FA was something you only saw in high-security environments.
Today, I have two different 2FA apps on the phone in my pocket. That phone is protected by a biometric system, which protects access to a trapdoor secure data store. My *phone* does this.
The phone I had 10 years ago would let you hook a serial cable up and suck its entire contents out without even asking you for a password.
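For the curious: those 2FA apps aren't doing anything exotic. A standard RFC 6238 TOTP code is just an HMAC over the current 30-second time step. A minimal sketch, with a made-up example secret:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        """Compute an RFC 6238 time-based one-time password."""
        key = base64.b32decode(secret_b32.upper())
        counter = int(time.time()) // period              # current time step
        msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real one

The hard part was never the math; it was making the second factor cheap enough to carry in everyone's pocket.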
Google already did that cost/benefit calculation: they tried staying on RH 7.1 indefinitely, and thereby built up 10 years of technical debt. Then when they did jump, it was a major undertaking, though one they apparently felt was worth doing.
And conversely, they felt it was worth _not_ doing for a very, very long time. So can the rest of us wait until we have Google's resources?
You’re never going to have Google’s resources. Therefore, you will never have the *option* to roll your own custom OS.
So, cope with change.