On Mon, December 29, 2014 04:22, Ned Slider wrote:
What business model do you have that you can't build around a product guaranteed to be consistent/supported for the next 10 years?
Well, despite the hype from Wall St., Bay St. and The City, a large number of organisations in the world run on software that is decades old and cannot be economically replaced. In many instances in government and business seven years is a typical time-frame in which to get a major software system built and installed. And I have witnessed longer.
So, seven, even ten, years of stability is really nothing at all. And as Linux seeks to enter into more and more profoundly valuable employment the type of changes that we witnessed from v6 to v7 are simply not going to be tolerated. In fact, it is my considered belief that RH in Version EL7 has done themselves a serious injury with respect to corporate adoption for core systems. Perhaps they seek a different market?
Think about it. What enterprise can afford to rewrite all of its software every ten years? What enterprise can afford to retrain all of its personnel to use different tools to accomplish the exact same tasks every seven years? The desktop software churn to which the PC has inured people simply does not scale to the enterprise.
If you wish to see what change for change's sake produces in terms of market share, consider what Mozilla has done with Firefox. There is absolutely no interface as easy to use as the one you have been working with for the past ten years. And that salient fact seems to be completely ignored by many people in the FOSS community.
On Mon, Dec 29, 2014 at 9:02 AM, James B. Byrne byrnejb@harte-lyne.ca wrote:
So, seven, even ten, years of stability is really nothing at all.
Yes, exactly. Do you want your bank to manage your accounts with new and not-well-tested software every 7 years, or would you prefer the stability of incremental improvements?
Think about it. What enterprise can afford to rewrite all of its software every ten years? What enterprise can afford to retrain all of its personnel to use different tools to accomplish the exact same tasks every seven years?
It's worse than that - since you can't just replace all of your servers and code at once, your staff has to be trained on at least two and probably three major versions at any given time - and aware of which server runs what, and which command set has to be used. And the cost and risk of errors increases with the number of arbitrary changes across versions.
On Mon, December 29, 2014 9:02 am, James B. Byrne wrote:
On Mon, December 29, 2014 04:22, Ned Slider wrote:
What business model do you have that you can't build around a product guaranteed to be consistent/supported for the next 10 years?
Well, despite the hype from Wall St., Bay St. and The City, a large number of organisations in the world run on software that is decades old and cannot be economically replaced. In many instances in government and business seven years is a typical time-frame in which to get a major software system built and installed. And I have witnessed longer.
So, seven, even ten, years of stability is really nothing at all. And as Linux seeks to enter into more and more profoundly valuable employment the type of changes that we witnessed from v6 to v7 are simply not going to be tolerated. In fact, it is my considered belief that RH in Version EL7 has done themselves a serious injury with respect to corporate adoption for core systems. Perhaps they seek a different market?
I said elsewhere that these changes were partly induced by changes started in the kernel some 5 years ago. But now I realize that at least part of them was pushed at the kernel level by folks from the Red Hat team...
Think about it. What enterprise can afford to rewrite all of its software every ten years? What enterprise can afford to retrain all of its personnel to use different tools to accomplish the exact same tasks every seven years? The desktop software churn to which the PC has inured people simply does not scale to the enterprise.
If you wish to see what change for change's sake produces in terms of market share, consider what Mozilla has done with Firefox. There is absolutely no interface as easy to use as the one you have been working with for the past ten years. And that salient fact seems to be completely ignored by many people in the FOSS community.
Well, there are similar changes in other areas of our [human] communication with computer hardware. Take the step "up" from GNOME 2 to GNOME 3, for instance. The way that worked for two decades (with logical, tree-like access to what you need) was all switched around to please people who lack the ability to categorize things and can only search. And you can continue describing the differences, each confirming that same point. Which leads me to say:
Welcome to the iPad generation, folks!
Valeri
++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++
On Mon, Dec 29, 2014 at 10:23 AM, Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
Welcome to the iPad generation, folks!
Yes, but Apple knows enough to stay out of the server business where stability matters - and they are more into selling content than code anyway. Client side things do need to deal with mobility these days - reconnecting automatically after sleep/wakeup and handling network connection changes transparently, but those things don't need to break existing usage.
On Mon, December 29, 2014 10:37 am, Les Mikesell wrote:
On Mon, Dec 29, 2014 at 10:23 AM, Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
Welcome to the iPad generation, folks!
Yes, but Apple knows enough to stay out of the server business where stability matters
Not exactly. They have claimed to be in the server business forever. There is something called MacOS Server, which is an incarnation of their OS with some scripts added. But (apart from the fact that it has no real documentation - "click here, then click there... and you are done" doesn't count as such) they do not maintain its consistency for any decent period of time. That is, as soon as they release the next version of the system, you can say goodbye to some of the components of your MacOS Server.
So, as far as "clever Apple" is concerned, I disagree with you. Unless we both agree they are clever enough to be able to fool their customers ;-)
Valeri
++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++
On Mon, Dec 29, 2014 at 10:57 AM, Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
So, as far as "clever Apple" is concerned, I disagree with you. Unless we both agree they are clever enough to be able to fool their customers ;-)
You can't disagree with the fact that they make a lot of money. They do it by targeting consumers without technical experience or need for backwards compatibility to preserve the value of that experience. That's obviously a big market. But whenever someone else tries to copy that model it is a loss for all of the existing work and experience that built on earlier versions and needs compatibility to continue. For what it's worth, I haven't found it to be that much harder to find Mac-ported versions of complex open source software (e.g. vlc) than for RHEL/CentOS - they all break things pretty badly on major upgrades, and there is usually just one OS X version needed versus a bazillion Linux flavors with arbitrary differences.
On Dec 29, 2014, at 8:02 AM, James B. Byrne byrnejb@harte-lyne.ca wrote:
In many instances in government and business seven years is a typical time-frame in which to get a major software system built and installed. And I have witnessed longer.
As a software developer, I think I can speak to both halves of that point.
First, the world where you design, build, and deploy The System is disappearing fast.
The world is moving toward incrementalism, where the first version of The System is the smallest thing that can possibly do anyone any good. That is deployed ASAP, and is then built up incrementally over years.
Though you spend the same amount of time, you will not end up in the same place because the world has changed over those years. Instead of building on top of an increasingly irrelevant foundation, you track the actual evolving needs of the organization, so that you end up where the organization needs you to be now, instead of where you thought it would need to be 7 years ago.
Instead of trying to go from 0 to 100 over the course of ~7 years, you deliver new functionality to production every 1-4 weeks, achieving 100% of the desired feature set over the course of years.
This isn’t pie-in-the-sky theoretical BS. This is the way I’ve been developing software for decades, as have a great many others. Waterfall is dead, hallelujah!
Second, there is no necessary tie between OS and software systems built on top of it. If your software only runs on one specific OS version, you’re doing it wrong.
I don’t mean that glibly. I mean you have made a fundamental mistake if your system breaks badly enough due to an OS change that you can’t fix it within an iteration or two of your normal development process. The most likely mistake is staffing your team entirely with people who have never been through a platform shift before.
Again, this is not theoretical bloviation. The software system I’ve been working on for the past 2 decades has been through several of these platform changes. It started on x86 SVR4, migrated to Linux, bounced around several distros, and occasionally gets updated for whatever version of OS X or FreeBSD someone is toying with at the moment.
Unix is about 45 years old now. It’s been through shifts that make my personal experience look trivial. (We have yet to get off x86, after all. How hard could it have been, really?) The Unix community knows how to do portability.
If you aren’t planning for platform shift, you aren’t planning.
We have plenty of technology for coping with platform shift. The autotools, platform-independence libraries (Qt, APR, Boost…), portable language platforms (Perl, Java, .NET…), and on and on.
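As a toy illustration of what such a layer buys you (a hedged sketch, not anything from a real product): Python’s standard selectors module hides the epoll/kqueue/select split behind one small interface, so the same event-loop code survives a platform shift untouched.

    import selectors
    import socket

    # DefaultSelector picks the best poller the platform offers
    # (epoll on Linux, kqueue on BSD/OS X, plain select() elsewhere),
    # so the code below never has to mention the OS at all.
    sel = selectors.DefaultSelector()

    server = socket.socket()
    server.bind(("127.0.0.1", 0))       # any free port; illustrative only
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ)

    print("poller in use:", type(sel).__name__)    # e.g. EpollSelector
    print("listening on port", server.getsockname()[1])

    # One pass through the event loop; a real server would loop forever.
    for key, _events in sel.select(timeout=0.1):
        conn, addr = key.fileobj.accept()
        print("connection from", addr)
        conn.close()

    sel.close()
    server.close()

Qt, APR, and Boost do the same kind of thing for C and C++, at a much larger scale.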
Everyone’s moaning about systemd, and how it’s taking over the Linux world, as if it would be better if Red Hat kept on with systemd and all the other Linux distro providers shunned it. Complain about its weaknesses if you like, but at least it’s looking to be a real de facto standard going forward.
So, seven, even ten, years of stability is really nothing at all. And as Linux seeks to enter into more and more profoundly valuable employment the type of changes that we witnessed from v6 to v7 are simply not going to be tolerated.
Every other OS provider does this.
(Those not in the process of dying, at any rate. A corpse is stable, but that’s no basis for recommending the widespread assumption of ambient temperature.)
Windows? Check. (Vista, Windows 8, Windows CE/Pocket PC/Windows Mobile/Windows RT/Windows Phone)
Apple? Check. (OS 9->X, Lion, Mavericks, Yosemite, iOS 6, iOS 7, iOS 8…)
And when all these breakages occurred, what was the cry heard throughout the land of punditry? “This is Linux’s chance! Having forced everyone to rewrite their software [bogus claim], Bad OS will make everyone move to Linux!” Except it doesn’t happen. Interesting, no?
Could it be that software for these other platforms *also* manages to ride through major breaking changes?
What enterprise can afford to rewrite all of its software every ten years?
Straw man.
If you have to rewrite even 1% of your system to accommodate the change from EL6 to EL7, you are doing it wrong.
If you think EL6 to EL7 is an earth-shaking change, you must not have been through something actually serious, like Solaris to Linux, or Linux to BSD, or (heaven forfend) Linux to Windows. Here you *might* crest the 1% rewrite level, but if you do that right, you have just made it much easier to port to a third new platform.
What enterprise can afford to retrain all of its personnel to use different tools to accomplish the exact same tasks every seven years?
Answer: Every enterprise that wants to remain an enterprise.
This is exactly what happens with Windows and Apple, only on a bit swifter pace, typically.
(The long dragging life of XP is an exception. Don’t expect it to occur ever again.)
The desktop software churn to which the PC has inured people simply does not scale to the enterprise.
Tell that to Google.
What, you think they’re still building Linux boxes based on the same kernel 2.2 custom distro they were using when they started in the late 1990s?
We don’t have to guess, they’ve told us how they coped:
http://events.linuxfoundation.org/sites/events/files/lcjp13_merlin.pdf
Check out the slide titled “How did that strategy of patching Red Hat 7.1 work out?”
Read through the rest of it, for that matter.
If you come away from it with “Yeah, that’s what I’m telling you, this is a hard problem!” you’re probably missing the point, which is that while your resources aren’t as extensive as Google’s, your problem isn’t nearly as big as Google’s, either.
Bottom line: This is the job. This is what you get paid to do.
On Mon, Dec 29, 2014 at 3:03 PM, Warren Young wyml@etr-usa.com wrote:
As a software developer, I think I can speak to both halves of that point.
First, the world where you design, build, and deploy The System is disappearing fast.
Sure, if you don't care if you lose data, you can skip those steps. Lots of free services that call everything they release 'beta' can get away with that, and when it breaks it's not the developer answering the phones if anyone answers at all.
The world is moving toward incrementalism, where the first version of The System is the smallest thing that can possibly do anyone any good. That is deployed ASAP, and is then built up incrementally over years.
That works if it was designed for rolling updates. Most stuff isn't, some stuff can't be.
Instead of trying to go from 0 to 100 over the course of ~7 years, you deliver new functionality to production every 1-4 weeks, achieving 100% of the desired feature set over the course of years.
If you are, say, adding up dollars, how many times do you want that functionality to change?
This isn’t pie-in-the-sky theoretical BS. This is the way I’ve been developing software for decades, as have a great many others. Waterfall is dead, hallelujah!
How many people do you have answering the phone about the wild and crazy changes you are introducing weekly? How much does it cost to train them?
I don’t mean that glibly. I mean you have made a fundamental mistake if your system breaks badly enough due to an OS change that you can’t fix it within an iteration or two of your normal development process. The most likely mistake is staffing your team entirely with people who have never been through a platform shift before.
Please quantify that. How much should a business expect to spend per person to re-train their operations staff to keep their systems working across a required OS update? Not to add functionality. To keep something that was working running the way it was? And separately, how much developer time would you expect to spend to follow the changes and perhaps eventually make something work better?
Again, this is not theoretical bloviation. The software system I’ve been working on for the past 2 decades has been through several of these platform changes. It started on x86 SVR4, migrated to Linux, bounced around several distros, and occasionally gets updated for whatever version of OS X or FreeBSD someone is toying with at the moment.
How many customers for your service did you keep running non-stop across those transitions? Or are you actually talking about providing a reliable service?
Everyone’s moaning about systemd, and how it’s taking over the Linux world, as if it would be better if Red Hat kept on with systemd and all the other Linux distro providers shunned it. Complain about its weaknesses if you like, but at least it’s looking to be a real de facto standard going forward.
Again, it's only useful to talk about if you can quantify the cost. What you expect to pay to re-train operations staff -just- for this change, -just- to keep things working the same.. And separately, what will it cost in development time to take advantage of any new functionality?
So, seven, even ten, years of stability is really nothing at all. And as Linux seeks to enter into more and more profoundly valuable employment the type of changes that we witnessed from v6 to v7 are simply not going to be tolerated.
Every other OS provider does this.
(Those not in the process of dying, at any rate. A corpse is stable, but that’s no basis for recommending the widespread assumption of ambient temperature.)
Windows? Check. (Vista, Windows 8, Windows CE/Pocket PC/Windows Mobile/Windows RT/Windows Phone)
We've got lots of stuff that will drop into Windows server versions spanning well over a 10 year range. And operators that don't have a lot of special training on the differences between them.
And when all these breakages occurred, what was the cry heard throughout the land of punditry? “This is Linux’s chance! Having forced everyone to rewrite their software [bogus claim], Bad OS will make everyone move to Linux!” Except it doesn’t happen. Interesting, no?
No, Linux doesn't offer stability either.
Could it be that software for these other platforms *also* manages to ride through major breaking changes?
Were you paying attention when Microsoft wanted to make XP obsolete? There is a lot of it still running.
What enterprise can afford to rewrite all of its software every ten years?
Straw man.
Not really. Ask the IRS what platform they use. And estimate what it is going to cost us when they change.
What enterprise can afford to retrain all of its personnel to use different tools to accomplish the exact same tasks every seven years?
Answer: Every enterprise that wants to remain an enterprise.
This is exactly what happens with Windows and Apple, only on a bit swifter pace, typically.
(The long dragging life of XP is an exception. Don’t expect it to occur ever again.)
No, that is the way things work. And the reason Microsoft is in business.
Tell that to Google.
With their eternally beta software? With the ability to just drop things they don't feel like supporting any more? Not everyone has that luxury.
What, you think they’re still building Linux boxes based on the same kernel 2.2 custom distro they were using when they started in the late 1990s?
We don’t have to guess, they’ve told us how they coped:
http://events.linuxfoundation.org/sites/events/files/lcjp13_merlin.pdf
Check out the slide titled “How did that strategy of patching Red Hat 7.1 work out?”
Read through the rest of it, for that matter.
If you come away from it with “Yeah, that’s what I’m telling you, this is a hard problem!” you’re probably missing the point, which is that while your resources aren’t as extensive as Google’s, your problem isn’t nearly as big as Google’s, either.
So again, quantify that. How much should it cost a business _just_ to keep working the same way? And why do you think it is a good thing for this to be a hard problem or for every individual user to be forced to solve it himself?
Bottom line: This is the job. This is what you get paid to do.
But it could be better, if anyone cared.
On Dec 29, 2014, at 4:03 PM, Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Dec 29, 2014 at 3:03 PM, Warren Young wyml@etr-usa.com wrote:
the world where you design, build, and deploy The System is disappearing fast.
Sure, if you don't care if you lose data, you can skip those steps.
How did you jump from incremental feature roll-outs to data loss? There is no necessary connection there.
In fact, I’d say you have a bigger risk of data loss when moving between two systems released years apart than two systems released a month apart. That’s a huge software market in its own right: legacy data conversion.
If your software is DBMS-backed and a new feature changes the schema, you can use one of the many available systems for managing schema versions. Or, roll your own; it isn’t hard.
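For the roll-your-own case, here is a minimal sketch (Python and SQLite purely for illustration; the table and column names are invented, and real tools such as Flyway or Alembic do the same job with more ceremony): store a schema version in the database and apply any pending migrations, in order, at startup.

    import sqlite3

    # Ordered list of migrations; entry N upgrades the schema to version N.
    # The tables and columns here are hypothetical.
    MIGRATIONS = [
        "CREATE TABLE invoices (id INTEGER PRIMARY KEY, total_cents INTEGER)",
        "ALTER TABLE invoices ADD COLUMN currency TEXT NOT NULL DEFAULT 'USD'",
        "CREATE INDEX invoices_by_currency ON invoices (currency)",
    ]

    def migrate(db_path):
        conn = sqlite3.connect(db_path)
        try:
            current = conn.execute("PRAGMA user_version").fetchone()[0]
            for version, statement in enumerate(MIGRATIONS, start=1):
                if version <= current:
                    continue                       # already applied
                with conn:                         # one transaction per step
                    conn.execute(statement)
                    conn.execute("PRAGMA user_version = %d" % version)
            return conn.execute("PRAGMA user_version").fetchone()[0]
        finally:
            conn.close()

    if __name__ == "__main__":
        print("schema at version", migrate("example.db"))

Each release ships whatever new migrations it needs; the old data comes along for the ride instead of being converted in one big bang.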
You test before rolling something to production, and you run backups so that if all else fails, you can roll back to the prior version.
None of this is revolutionary. It’s just what you do, every day.
when it breaks it's not the developer answering the phones if anyone answers at all.
Tech support calls shouldn’t go straight to the developers under any development model, short of sole proprietorship, and not even then, if you can get away with it. There needs to be at least one layer of buffering in there: train up the secretary to some basic level of cluefulness, do everything via email, or even hire some dedicated support staff.
It simply costs too much to break a developer out of flow to allow a customer to ring a bell on a developer’s desk at will.
The world is moving toward incrementalism, where the first version of The System is the smallest thing that can possibly do anyone any good. That is deployed ASAP, and is then built up incrementally over years.
That works if it was designed for rolling updates. Most stuff isn’t,
Since we’re contrasting with waterfall development processes that may last many years, but not decades, I’d say the error has already been made if you’re still working with a waterfall-based methodology today.
The strong case for agile development processes was first made about 15 years ago, so anything started 7 years ago (to use the OP’s example) was already disregarding a shift a full software generation old.
some stuff can't be.
Very little software must be developed in waterfall fashion.
Avionics systems and nuclear power plant control systems, for example. Such systems make up a tiny fraction of all software produced.
A lot of commercial direct-to-consumer software also cannot be delivered incrementally, but only because the alternative messes with the upgrade treadmill business model.
Last time I checked, this sort of software only accounted for about ~5% of all software produced, and that fraction is likely dropping, with the moves toward cloud services, open source software, subscription software, and subsidized software.
The vast majority of software developed is in-house stuff, where the developers and the users *can* enter into an agile delivery cycle.
Instead of trying to go from 0 to 100 over the course of ~7 years, you deliver new functionality to production every 1-4 weeks, achieving 100% of the desired feature set over the course of years.
If you are, say, adding up dollars, how many times do you want that functionality to change?
I’m not sure what you’re asking.
If you’re talking about a custom accounting system, the GAAP rules change several times a year in the US:
http://www.fasb.org/jsp/FASB/Page/SectionPage&cid=1176156316498
The last formal standard put out by FASB was 2009, and they’re working on another version all the time. Chances are good that if you start a new 7-year project, a new standard will be out before you finish.
If instead you’re talking about the cumulative cost of incremental change, it shouldn’t be much different than the cost of a single big-bang change covering the same period.
In fact, I’d bet the incremental changes are easier to adopt, since each change can be learned piecemeal. A lot of what people are crying about with EL7 comes down to the fact that Red Hat is basically doing waterfall development: many years of cumulative change gets dumped on our HDDs in one big lump.
Compare a rolling release model like that of Cygwin or Ubuntu (not LTS). Something might break every few months, which sounds bad until you consider that the alternative is for *everything* to break at the same time, every 3-7 years.
I’m not arguing for CentOS/RHEL to turn into Ubuntu Desktop. I’m just saying that there is a cost for stability: every 3-7 years, you must hack your way through a big-bang change bolus.
(6-7 years being for those organizations that skip every other major release by taking advantage of the way the EL versions overlap. EL5 was still sunsetting as EL7 was rising.)
This isn’t pie-in-the-sky theoretical BS. This is the way I’ve been developing software for decades, as have a great many others. Waterfall is dead, hallelujah!
How many people do you have answering the phone about the wild and crazy changes you are introducing weekly?
The burden of tech support has more to do with proper QA and roll-out strategies than with the frequency of updates.
For the most part, we roll new code to a site in response to a support call, rather than field calls in response to an update. The new version solves their problem, and we don’t hear back from them for months or years.
We don’t update all sites to every new release. We merely ship *a* new release every 1-4 weeks, which goes out to whoever needs the new features and fixes. It’s also what goes out on each new server we ship.
How much does it cost to train them?
Most of our sites get only one training session, shortly after the new system is first set up.
We rarely get asked to do any follow-up training. The users typically pick up on the incremental feature updates as they happen, without any additional help from us. We attribute that to solid UX design.
That first session is mostly about giving the new users an idea of what the system can do. We teach them enough to teach themselves.
How often do most people get trained to use a word processor? I’ll bet a lot of people got trained just once, in grade school. They just cope with changes as they come.
The worst changes are when you skip many versions. Word 97 to Word 2007, for example. *shudder*
I don’t mean that glibly. I mean you have made a fundamental mistake if your system breaks badly enough due to an OS change that you can’t fix it within an iteration or two of your normal development process. The most likely mistake is staffing your team entirely with people who have never been through a platform shift before.
Please quantify that. How much should a business expect to spend per person to re-train their operations staff to keep their systems working across a required OS update? Not to add functionality. To keep something that was working running the way it was?
If you hire competent people, you pay zero extra to do this, because this is the job they have been hired to do.
That's pretty much what IT/custom development is: coping with churn.
Most everything you do on a daily basis is a reaction to some change external to the IT/development organization:
- Capacity increases
- Obsolete ‘ware upgrades
- New seat/site deployments
- Failed equipment replacements
- Compatibility breakage repair (superseded de facto standard, old de jure standard replaced, old proprietary item no longer available…)
- Tracking business rule change (GAAP, regulations, mergers…)
- Effecting business change (entering new markets, automation, solving new problems developing from new situations…)
- Tracking business strategy change (new CEO, market shift…)
Setting aside retail software development, IT and internal development organizations *should* be chasing this kind of thing, not being “proactive.” We’re not trying to surprise our users with things they didn’t even ask for, we’re trying to solve their problems.
Maybe we solve problems in a *manner* our users did not expect — hopefully a better way — but we’re rarely trying to innovate, as such.
how much developer time would you expect to spend to follow the changes and perhaps eventually make something work better?
Pretty much 100%, after subtracting overhead. (Meetings, email, breaks, reading…)
Again: This is what we do. Some new thing happens in the world, and we go out and solve the resulting problems.
The only question is one of velocity: the more staff you add, the faster you go. So, how fast do you want to go?
(Yes, I’ve read “The Mythical Man Month.” The truths within that fine book don’t change the fact that Microsoft can develop a new OS faster than I can all by my lonesome.)
The software system I’ve been working on for the past 2 decades has been through several of these platform changes.
How many customers for your service did you keep running non-stop across those transitions?
Most of our customers are K-12 schools, so we’re not talking about a 24/7 system to begin with. K-12 runs maybe 9 hours a day (7am - 4pm), 5 days a week, 9 months out of the year. That gives us many upgrade windows.
We rarely change out hardware or the OS at a particular site. We generally run it until it falls over, dead.
This means we’re still building binaries for EL3.
This also means our software must *remain* broadly portable. When we talk about porting to EL7, we don’t mean that it stops working on EL6 and earlier. We might have some graceful feature degradation where the older OS simply can’t do something the newer one can, but we don’t just chop off an old OS because a new one came out.
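A hedged sketch of what that degradation can look like (Python and an invented helper, not our actual code): probe at runtime for a call the newer platform has, and quietly fall back when it is missing, rather than refusing to run on the older release.

    import ctypes
    import ctypes.util
    import os

    def preallocate(fd, size):
        # Newer glibc (roughly 2.10 and later) exposes fallocate(); older
        # releases do not. Probe for it, and fall back to the portable
        # ftruncate() if it is absent, so the feature degrades instead of
        # the program dying on the older OS.
        libc_name = ctypes.util.find_library("c")
        if libc_name:
            libc = ctypes.CDLL(libc_name, use_errno=True)
            fallocate = getattr(libc, "fallocate", None)
            if fallocate is not None:
                ok = fallocate(fd, 0,
                               ctypes.c_longlong(0),
                               ctypes.c_longlong(size))
                if ok == 0:
                    return "fallocate"
        os.ftruncate(fd, size)     # works everywhere we care about
        return "ftruncate"

    if __name__ == "__main__":
        fd = os.open("demo.bin", os.O_RDWR | os.O_CREAT, 0o644)
        try:
            print("used", preallocate(fd, 1024 * 1024))
        finally:
            os.close(fd)

The same pattern works at build time (autoconf-style HAVE_* checks) when a runtime probe is not practical.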
All that having been said, we do occasionally roll a change to a site, live. We can usually do it in such a way that the site users never even notice the change, except for the changed behavior.
This is not remarkable. It’s one of the benefits you get from modern centralized software development and deployment stacks.
Everyone’s moaning about systemd...at least it’s looking to be a real de facto standard going forward.
What you expect to pay to re-train operations staff -just- for this change, -just- to keep things working the same..
You ask that as if you think you have a no-cost option in the question of how to address the churn.
Your only choices are:
1. Don’t upgrade
2. Upgrade and cope
3. Switch to something else
Each path carries a cost.
You think path 1 is free? If you skip EL7, you’re just batching up the changes. You’ll pay eventually, when you finally adopt a new platform. One change set plus one change set equals about 1.9 change sets, plus compound penalties.
Penalties? Yes.
You know the old joke about how you eat an elephant? [*] By the time you eat 1.9 elephants, you’ve probably built up another ~0.3 change sets worth of new problems. Time you spend grinding through nearly two full change sets is time you don’t spend keeping your current backlog short.
We call this technical debt in the software development world. It’s fine to take out a bit of technical debt occasionally, as long as you don’t let it build up too long. The longer you let it build, the more the interest & penalties accrue, so the harder it is to pay down.
We've got lots of stuff that will drop into Windows server versions spanning well over a 10 year range.
Yes, well, Linux has always had a problem with ABI stability. Apparently the industry doesn’t really care about this, evidenced by the fizzling of LSB, and the current attacks on the work at freedesktop.org. Apparently we’d all rather be fractious than learn to get along well enough that we can nail down some real standards.
Once again, though, there’s a fine distinction between stable and moribund.
And operators that don't have a lot of special training on the differences between them.
I’ve never done much with Windows Server, but my sense is that they have plenty of churn over in their world, too. We’ve got SELinux and SystemD, they’ve got UAC, SxS DLLs, API deprecation, and tools that shuffle positions on every release. (Where did they move the IPv4 configuration dialog this time?!)
We get worked up here about things like the loss of 32-bit support, but over in MS land, they get API-of-the-year. JET, ODBC, OLE DB, or ADO? Win32, .NET desktop, Silverlight, or Metro? GDI, WinG, DirectX, Windows Forms or XAML? On and on, and that’s just if you stay within the MSDN walls.
Could it be that software for these other platforms *also* manages to ride through major breaking changes?
Were you paying attention when Microsoft wanted to make XP obsolete? There is a lot of it still running.
Were you paying attention when Target’s XP-based POS terminals all got pwned?
Stability and compatibility are not universal goods.
What enterprise can afford to rewrite all of its software every ten years?
Straw man.
Not really. Ask the IRS what platform they use. And estimate what it is going to cost us when they change.
Monopolies are inherently inefficient and plodding. Government is special only because it is the biggest monopoly.
(That’s why we have antitrust law: not because it’s good for the consumer, but because it fights the trend toward zaibatsu rule.)
Few organizations are working under such stringent constraints, if only because it’s a danger to the health of the organization. Only monopolies can get away with it.
(The long dragging life of XP is an exception. Don’t expect it to occur ever again.)
No, that is the way things work. And the reason Microsoft is in business.
Microsoft stopped retail sale of Windows 7 a few months ago, and Vista back in April.
A few months ago, there was a big stink when MS killed off Windows 8.0 updates, requiring that everyone upgrade to 8.1.
Yes, I know about downgrade rights for pro versions of Windows.
Nevertheless, the writing is on the wall.
while your resources aren’t as extensive as Google’s, your problem isn’t nearly as big as Google’s, either.
So again, quantify that. How much should it cost a business _just_ to keep working the same way?
Google already did that cost/benefit calculation: they tried staying on RH 7.1 indefinitely, and thereby built up 10 years of technical debt. Then when they did jump, it was a major undertaking, though one they apparently felt was worth doing.
There’s a cost to staying put, too.
And why do you think it is a good thing for this to be a hard problem or for every individual user to be forced to solve it himself?
I never said it was a good thing. I’m just reporting some observations from the field.
—————
[*] One bite at a time.
On Mon, Dec 29, 2014 at 8:04 PM, Warren Young wyml@etr-usa.com wrote:
the world where you design, build, and deploy The System is disappearing fast.
Sure, if you don't care if you lose data, you can skip those steps.
How did you jump from incremental feature roll-outs to data loss? There is no necessary connection there.
No, it's not necessary for either code interfaces or data structures to change in backward-incompatible ways. But the people who push one kind of change aren't likely to care about the other either.
In fact, I’d say you have a bigger risk of data loss when moving between two systems released years apart than two systems released a month apart. That’s a huge software market in its own right: legacy data conversion.
I'm not really arguing about the timing of changes, I'm concerned about the cost of unnecessary user interface changes, code interface breakage, and data incompatibility, regardless of when it happens. RHEL's reason for existence is that it mostly shields users from that within a major release. That doesn't make it any better when it does happen, once you are forced to move to the next one.
If your software is DBMS-backed and a new feature changes the schema, you can use one of the many available systems for managing schema versions. Or, roll your own; it isn’t hard.
Are you offering to do it for free?
You test before rolling something to production, and you run backups so that if all else fails, you can roll back to the prior version.
That's fine if you have one machine and can afford to shut down while you make something work. Most businesses aren't like that.
None of this is revolutionary. It’s just what you do, every day.
And it is time consuming and expensive.
when it breaks it's not the developer answering the phones if anyone answers at all.
Tech support calls shouldn’t go straight to the developers under any development model, short of sole proprietorship, and not even then, if you can get away with it. There needs to be at least one layer of buffering in there: train up the secretary to some basic level of cluefulness, do everything via email, or even hire some dedicated support staff.
It simply costs too much to break a developer out of flow to allow a customer to ring a bell on a developer’s desk at will.
Beg your pardon? How about not breaking the things that trigger the calls in the first place - or taking some responsibility for it. Do you think other people have nothing better to do?
Since we’re contrasting with waterfall development processes that may last many years, but not decades, I’d say the error has already been made if you’re still working with a waterfall-based methodology today.
We never change more than half of a load-balanced set of servers at once. So all changes have to be compatible when running concurrently, or worth rolling out a whole replacement farm.
some stuff can't be.
Very little software must be developed in waterfall fashion.
If you run continuous services you either have to be able to run new/old concurrently or completely duplicate your server farm as you roll out incompatible clients.
Last time I checked, this sort of software only accounted for about ~5% of all software produced, and that fraction is likely dropping, with the moves toward cloud services, open source software, subscription software, and subsidized software.
The vast majority of software developed is in-house stuff, where the developers and the users *can* enter into an agile delivery cycle.
OK, but they have to not break existing interfaces when they do that. And that's not the case with OS upgrades.
If you are, say, adding up dollars, how many times do you want that functionality to change?
I’m not sure what you’re asking.
I'm asking if computer science has advanced to the point where adding up a total needs new functionality, or if you would like the same total for the same numbers that you would have gotten last year. Or more to the point, if the same program ran correctly last year, wouldn't it be nice if it still ran the same way this year, in spite of the OS upgrade you need to do because of the security bugs that keep getting shipped while developers spend their time making arbitrary changes to user interfaces.
Compare a rolling release model like that of Cygwin or Ubuntu (not LTS). Something might break every few months, which sounds bad until you consider that the alternative is for *everything* to break at the same time, every 3-7 years.
When your system requires extensive testing, the fewer times it breaks, the better. Never would be nice...
I don’t mean that glibly. I mean you have made a fundamental mistake if your system breaks badly enough due to an OS change that you can’t fix it within an iteration or two of your normal development process. The most likely mistake is staffing your team entirely with people who have never been through a platform shift before.
Please quantify that. How much should a business expect to spend per person to re-train their operations staff to keep their systems working across a required OS update? Not to add functionality. To keep something that was working running the way it was?
If you hire competent people, you pay zero extra to do this, because this is the job they have been hired to do.
That's nonsense for any complex system. There are always _many_ different OS versions in play and many different development groups that only understand a subset, and every new change they need to know about costs time and risks mistakes.
That's pretty much what IT/custom development is: coping with churn.
And it is expensive. Unnecessarily so, in my opinion.
How many customers for your service did you keep running non-stop across those transitions?
Most of our customers are K-12 schools, so we’re not talking about a 24/7 system to begin with. K-12 runs maybe 9 hours a day (7am - 4pm), 5 days a week, 9 months out of the year. That gives us many upgrade windows.
That's a very different scenario than a farm of data servers that have to be available 24/7.
We rarely change out hardware or the OS at a particular site. We generally run it until it falls over, dead.
This means we’re still building binaries for EL3.
I have a few of those, but I don't believe that is a sane thing to recommend.
This also means our software must *remain* broadly portable. When we talk about porting to EL7, we don’t mean that it stops working on EL6 and earlier. We might have some graceful feature degradation where the older OS simply can’t do something the newer one can, but we don’t just chop off an old OS because a new one came out.
You'd probably be better off in Java if you aren't already.
Everyone’s moaning about systemd...at least it’s looking to be a real de facto standard going forward.
What you expect to pay to re-train operations staff -just- for this change, -just- to keep things working the same..
You ask that as if you think you have a no-cost option in the question of how to address the churn.
I ask it as if I think that software developers could make changes without breaking existing interfaces. And yes, I do think they could if they cared about anyone who built on those interfaces.
We've got lots of stuff that will drop into Windows server versions spanning well over a 10 year range.
Yes, well, Linux has always had a problem with ABI stability. Apparently the industry doesn’t really care about this, evidenced by the fizzling of LSB, and the current attacks on the work at freedesktop.org. Apparently we’d all rather be fractious than learn to get along well enough that we can nail down some real standards.
Well, that has done a great job of keeping Microsoft in business.
I’ve never done much with Windows Server, but my sense is that they have plenty of churn over in their world, too. We’ve got SELinux and SystemD, they’ve got UAC, SxS DLLs, API deprecation, and tools that shuffle positions on every release. (Where did they move the IPv4 configuration dialog this time?!)
We get worked up here about things like the loss of 32-bit support, but over in MS land, they get API-of-the-year. JET, ODBC, OLE DB, or ADO? Win32, .NET desktop, Silverlight, or Metro? GDI, WinG, DirectX, Windows Forms or XAML? On and on, and that’s just if you stay within the MSDN walls.
Yes, there are changes - and sometimes mysterious breakage. But an outright abandonment of an existing interface that breaks previously working code is pretty rare (and I don't like it when they do it either...).
Were you paying attention when Microsoft wanted to make XP obsolete? There is a lot of it still running.
Were you paying attention when Target’s XP-based POS terminals all got pwned?
Stability and compatibility are not universal goods.
Well, some things you have to get right in the first place - and then stability is good.
Google already did that cost/benefit calculation: they tried staying on RH 7.1 indefinitely, and thereby built up 10 years of technical debt. Then when they did jump, it was a major undertaking, though one they apparently felt was worth doing.
And conversely, they felt it was worth _not_ doing for a very very long time. So can the rest of us wait until we have Google's resources?
And why do you think it is a good thing for this to be a hard problem or for every individual user to be forced to solve it himself?
I never said it was a good thing. I’m just reporting some observations from the field.
Maybe I misunderstood - I thought you were defending the status quo - and the fedora developers that bring it to us.
On Dec 29, 2014, at 10:07 PM, Les Mikesell lesmikesell@gmail.com wrote:
it's not necessary for either code interfaces or data structures to change in backward-incompatible ways.
You keep talking about the cost of coping with change, but apparently you believe maintaining legacy interfaces is cost-free.
Take it from a software developer: it isn’t.
People praise Microsoft for maintaining ancient interfaces, and attribute their success to it, but it’s really the other way around: their success pays for the army of software developers it takes to keep a handle on the complexity that results from piling 20-30 years of change on top of the same base.
Even having mobilized that army, a huge amount of the problems with Windows come directly as a result of choosing to maintain such a huge legacy of backwards compatibility.
Just one example: By default, anyone can write to the root of the C: drive on Windows. Why? Because DOS and Win16 allowed it, so a huge amount of software was written to expect that they could do it, too. Hence, the root of your Windows box’s filesystem is purposely left insecure.
Most organizations cannot afford to create the equivalents of WOW64, which basically emulates Win32 on top of Win64. (Or *its* predecessor, WOW, which emulates Win16 on top of Win32.) That isn’t trivial to do, especially at the level Microsoft does it, where a whole lot of clever low-level code is employed to allow WOW64 code to run nearly as fast as native Win64 code.
Meanwhile over in the Linux world, we have a whole lot of the code being written by unpaid volunteers, and a lot of the rest is being written by developers employed by organizations that do not enjoy a legal means for forcing their customers to pay for each and every seat of the software their developers created.
Result? We cannot afford to maintain every interface created during the quarter century of Linux’s existence. Every now and then, we have to throw some ballast overboard.
I’m not saying that CentOS should be killed off, and all its users be forced to pay for RHEL licenses. I’m saying that one of the trade-offs of using a free OS is that you have to pick up some of the slack on your end.
If your software is DBMS-backed and a new feature changes the schema, you can use one of the many available systems for managing schema versions. Or, roll your own; it isn’t hard.
Are you offering to do it for free?
This is one of the things my employer pays me to do. This is what I’m telling you: the job description is, “Cope with change.”
I'm asking if computer science has advanced to the point where adding up a total needs new functionality, or if you would like the same total for the same numbers that you would have gotten last year.
Mathematics doesn’t change. The business and technology worlds do. Your example is a non sequitur.
How many customers for your service did you keep running non-stop across those transitions?
Most of our customers are K-12 schools, so we’re not talking about a 24/7 system to begin with.
That's a very different scenario than a farm of data servers that have to be available 24/7.
How many single computers have to be up 24/7? I mean really.
If you have any form of cluster — from old-school shared-everything style to new-style shared-nothing style — you can partition it and upgrade individual nodes.
If your system isn’t in use across the world, you must have windows of low or zero usage where upgrades can happen. If your system *is* in use across the world, you likely have it partitioned across continents anyway.
The days of the critical single mainframe computer are fading fast. We’re going to get to a point where it makes as much sense to talk about 100% uptime for single computers as it does to talk about hard drives that never fail.
We rarely change out hardware or the OS at a particular site. We generally run it until it falls over, dead.
This means we’re still building binaries for EL3.
I have a few of those, but I don't believe that is a sane thing to recommend.
It depends on the market. A lot of Linux boxes are basically appliances. When was the last time you upgraded the OS on your home router? I don’t mean flashing new firmware — which is rare enough already — I mean upgrading it to a truly different OS.
Okay, so that’s embedded Linux, it doesn’t seem remarkable that such systems never change, once deployed.
The thing is, there really isn’t a narrow, bright line between “embedded” and the rest of the Linux world. It’s a wide, gray line, covering a huge amount of the Linux world.
This also means our software must *remain* broadly portable.
You'd probably be better off in Java if you aren't already.
If you actually had a basis for making such a sweeping prescription like that, 90% of software written would be written in Java.
There’s a pile of good reasons why software continues to be written in other languages, either on top of other runtimes or on the bare metal.
No, don’t argue. I don’t want to start a Java flame war here. Just take it from a software developer, Java is not a universal, unalloyed good.
Everyone’s moaning about systemd...at least it’s looking to be a real de facto standard going forward.
What you expect to pay to re-train operations staff -just- for this change, -just- to keep things working the same..
You ask that as if you think you have a no-cost option in the question of how to address the churn.
I ask it as if I think that software developers could make changes without breaking existing interfaces. And yes, I do think they could if they cared about anyone who built on those interfaces.
Legacy code isn’t free to keep around.
Take systemd. You can go two ways here:
1. sysvinit should also be supported as a first-class citizen in EL7. If that’s your point, then just because the sysvinit code was already written doesn’t mean there isn’t a cost to continuing to maintain and package it.
2. sysvinit should never have been replaced. If that’s your position, you’re free to switch to a sysvinit based OS, or fork EL6. What, sounds like work? Too costly? That must be because it isn’t free to keep maintaining old code.
I’ve never done much with Windows Server, but my sense is that they have plenty of churn over in their world, too.
Yes, there are changes - and sometimes mysterious breakage. But an outright abandonment of an existing interface that breaks previously working code is pretty rare
Yes, well, that’s one of the things you can do when you’ve got a near-monopoly on PC OSes, which allows you to employ 128,000 people. [1]
When you only employ 6,500 [2] and a huge chunk of your customer base doesn’t pay you for the use of the software you write, you necessarily have to do business differently.
[1] http://en.wikipedia.org/wiki/Microsoft
[2] http://en.wikipedia.org/wiki/Red_Hat
Were you paying attention when Microsoft wanted to make XP obsolete? There is a lot of it still running.
Were you paying attention when Target’s XP-based POS terminals all got pwned?
Stability and compatibility are not universal goods.
Well, some things you have to get right in the first place - and then stability is good.
Security changes, too.
10 years ago, 2FA was something you only saw in high-security environments.
Today, I have two different 2FA apps on the phone in my pocket. That phone is protected by a biometric system, which protects access to a trapdoor secure data store. My *phone* does this.
The phone I had 10 years ago would let you hook a serial cable up and suck its entire contents out without even asking you for a password.
Google already did that cost/benefit calculation: they tried staying on RH 7.1 indefinitely, and thereby built up 10 years of technical debt. Then when they did jump, it was a major undertaking, though one they apparently felt was worth doing.
And conversely, they felt it was worth _not_ doing for a very very long time. So can the rest of us wait until we have Google's resources?
You’re never going to have Google’s resources. Therefore, you will never have the *option* to roll your own custom OS.
So, cope with change.
On Wed, Dec 31, 2014 at 11:03 AM, Warren Young wyml@etr-usa.com wrote:
On Dec 29, 2014, at 10:07 PM, Les Mikesell lesmikesell@gmail.com wrote:
it's not necessary for either code interfaces or data structures to change in backward-incompatible ways.
You keep talking about the cost of coping with change, but apparently you believe maintaining legacy interfaces is cost-free.
Take it from a software developer: it isn’t.
OK, but should one developer make an extra effort or the bazillion people affected by it?
People praise Microsoft for maintaining ancient interfaces, and attribute their success to it, but it’s really the other way around: their success pays for the army of software developers it takes to keep a handle on the complexity that results from piling 20-30 years of change on top of the same base.
That's what it takes to build and keep a user base.
Most organizations cannot afford to create the equivalents of WOW64, which basically emulates Win32 on top of Win64. (Or *its* predecessor, WOW, which emulates Win16 on top of Win32.) That isn’t trivial to do, especially at the level Microsoft does it, where a whole lot of clever low-level code is employed to allow WOW64 code to run nearly as fast as native Win64 code.
It's hard to the extent that you made bad choices in interfaces in the first place. Microsoft's job was hard. But Unix SysV, which Linux basically emulates, wasn't so bad. Maybe a few size definitions could have been better.
Result? We cannot afford to maintain every interface created during the quarter century of Linux’s existence. Every now and then, we have to throw some ballast overboard.
And the user base that depended on them.
If your software is DBMS-backed and a new feature changes the schema, you can use one of the many available systems for managing schema versions. Or, roll your own; it isn’t hard.
Are you offering to do it for free?
This is one of the things my employer pays me to do. This is what I’m telling you: the job description is, “Cope with change.”
So either it "isn't hard", or "you need a trained, experienced, professional staff to do it". Big difference. Which is it?
I'm asking if computer science has advanced to the point where adding up a total needs new functionality, or if you would like the same total for the same numbers that you would have gotten last year.
Mathematics doesn’t change. The business and technology worlds do. Your example is a non sequitur.
If you are embedding business logic in your library interfaces, something is wrong. I'm talking about things that are shipped in the distribution and the commands to manage them. The underlying jobs they do were pretty well established long ago.
How many single computers have to be up 24/7? I mean really.
All of our customer-facing services - and most internal infrastructure. Admittedly, not individual boxes - but who wants to have systems running concurrently with major differences in code base and operations/maintenance procedures?
If you have any form of cluster — from old-school shared-everything style to new-style shared-nothing style — you can partition it and upgrade individual nodes.
Yes, everything is redundant. But when changes are not backwards compatible it makes piecemeal updates way harder than they should be. Take something simple like the DHCP server in the distro. It allows for redundant servers - but the versions are not compatible. How do you manage that by individual node upgrades when they won't fail over to each other?
If your system isn’t in use across the world, you must have windows of low or zero usage where upgrades can happen. If your system *is* in use across the world, you likely have it partitioned across continents anyway.
How nice for you...
This means we’re still building binaries for EL3.
I have a few of those, but I don't believe that is a sane thing to recommend.
It depends on the market. A lot of Linux boxes are basically appliances. When was the last time you upgraded the OS on your home router? I don’t mean flashing new firmware — which is rare enough already — I mean upgrading it to a truly different OS.
Okay, so that’s embedded Linux, it doesn’t seem remarkable that such systems never change, once deployed.
Which sort of points out that the wild and crazy changes in the mainstream distributions weren't all that necessary either...
This also means our software must *remain* broadly portable.
You'd probably be better off in Java if you aren't already.
If you actually had a basis for making such a sweeping prescription like that, 90% of software written would be written in Java.
I do. We have a broad mix of languages, some with requirements that force it, some just for historical reasons and the team that maintains it. The Java stuff has been much less problematic in porting across systems - or running the same code concurrently under different OSes/versions at once. I don't think the C++ guys have even figured out a sane way to use a standard Boost version on 2 different Linuxes, even doing separate builds for them.
There’s a pile of good reasons why software continues to be written in other languages, either on top of other runtimes or on the bare metal.
Maybe. I think there's a bigger pile of not-so-good reasons that things aren't done portably. Java isn't the only way to be portable, but you don't see much on the scale of elasticsearch, jenkins or opennms done cross-platform in other languages.
No, don’t argue. I don’t want to start a Java flame war here. Just take it from a software developer, Java is not a universal, unalloyed good.
The syntax is cumbersome - but there are things like groovy or jruby that run on top of it. And there's a lot of start-up overhead, but that doesn't matter much to long-running servers.
Take systemd. You can go two ways here:
sysvinit should also be supported as a first-class citizen in EL7. If that’s your point, then just because the sysvinit code was already written doesn’t mean there isn’t a cost to continuing to maintain and package it.
sysvinit should never have been replaced. If that’s your position, you’re free to switch to a sysvinit based OS, or fork EL6. What, sounds like work? Too costly? That must be because it isn’t free to keep maintaining old code.
Yes, I'm forced to deal with #1. That doesn't keep me from wishing that whatever code change had been done had kept backwards compatibility in the user interface commands and init scripts department.
I’ve never done much with Windows Server, but my sense is that they have plenty of churn over in their world, too.
Yes, there are changes - and sometimes mysterious breakage. But an outright abandonment of an existing interface that breaks previously working code is pretty rare.
Yes, well, that’s one of the things you can do when you’ve got a near-monopoly on PC OSes, which allows you to employ 128,000 people. [1]
And you only get that with code that keeps users instead of driving them away.
And conversely, they felt it was worth _not_ doing for a very very long time. So can the rest of us wait until we have google's resources?
You’re never going to have Google’s resources. Therefore, you will never have the *option* to roll your own custom OS.
So, cope with change.
What google does points out how unsuitable the distro really is. I just don't see why it has to stay that way.
On Dec 31, 2014, at 4:41 PM, Les Mikesell lesmikesell@gmail.com wrote:
On Wed, Dec 31, 2014 at 11:03 AM, Warren Young wyml@etr-usa.com wrote:
You keep talking about the cost of coping with change, but apparently you believe maintaining legacy interfaces is cost-free.
Take it from a software developer: it isn’t.
OK, but should one developer make an extra effort or the bazillion people affected by it?
That developer is either being paid by a company with their own motivations or is scratching his own itch. You have no claim on his time.
Open source is a do-ocracy: those who do the work, rule.
People throw all kinds of hate at Poettering, but he is *doing things* and getting those things into consequential Linux distributions. The haters are just crying about it, not out there trying to do things differently, not trying to win an audience.
I don’t just mean an audience standing around your soapbox. Crazies on the street corner can manage that. I mean an audience of people who are willing to roll your new Linux distro out over their infrastructure.
I repeat my call to action: If you think EL6 was better, fork it. If enough people agree with you that sitting still is better than what Red Hat is doing, you *will* get Red Hat’s attention. Even if the end result is an EGCS-like divergence and re-merging, you will cause something to happen.
Result? We cannot afford to maintain every interface created during the quarter century of Linux’s existence. Every now and then, we have to throw some ballast overboard.
And the user base that depended on them.
Nonsense. This is open source. EL6 is still there, if that’s what you want. You’re free to maintain it as long as you like.
Are you offering to do it for free?
This is one of the things my employer pays me to do. This is what I’m telling you: the job description is, “Cope with change.”
So either it "isn't hard", or "you need a trained, experienced, professional staff to do it". Big difference. Which is it?
You’re trying to shove a crowbar in where there is no gap: It isn’t hard for an experienced staff. It is one of the reasons our various organizations employ us.
Why do you believe this is a stringent requirement? I thought CentOS was the distro targeted at organizations staffed by competent technical professionals. That’s me. Isn’t that you, too?
I'm asking if computer science has advanced to the point where adding up a total needs new functionality, or if you would like the same total for the same numbers that you would have gotten last year.
Mathematics doesn’t change. The business and technology worlds do. Your example is a non sequitur.
If you are embedding business logic in your library interfaces, something is wrong.
Once again you’re making non sequiturs.
Your example was that arithmetic doesn’t change, then you go off and try to use that to explain why EL7 is wrong. So, where is the part of EL7 that doesn’t add columns of numbers correctly?
When the rest of the technology world changes, Red Hat has two options: they can react to it, or they can keep churning out the same thing they always have done. The latter is a path toward irrelevance.
OS/2 is a fine OS for solving 1993’s problems.
Given a choice between disruptive change and a future where RHEL/CentOS is the next OS/2, I’ll take the change.
Take something simple like the dhcp server in the distro. It allows for redundant servers - but the versions are not compatible. How do you manage that by individual node upgrades when they won't fail over to each other?
Is that hypothetical, or do you actually know of a pair of dhcpd versions where the new one would fail to take over from the older one when run as its backup? Especially a pair actually shipped by Red Hat?
I’m not interested in the reverse case, where an old server could not take over from a newer one, because there’s no good reason to manage the upgrade that way. You drop the new one in as a backup, take the old one offline, upgrade it, and start it back up, whereupon it can take over as primary again.
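For concreteness, the sequence I mean looks roughly like this; a sketch only, assuming a two-node ISC dhcpd failover pair where "old" is the current primary and "new" is its replacement. The package and service names are the stock RHEL ones; the host prompts are just illustrative:

new# yum -y install dhcp
 (copy over a dhcpd.conf containing the failover peer stanza, then:)
new# systemctl enable dhcpd && systemctl start dhcpd

old# service dhcpd stop        # "new" now serves alone
old# yum -y update dhcp
old# service dhcpd start       # and it comes back as primary

At no point in that sequence are you left without a running dhcpd.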
Okay, so that’s embedded Linux; it doesn’t seem remarkable that such systems never change once deployed.
Which sort of points out that the wild and crazy changes in the mainstream distributions weren't all that necessary either…
No. The nature of embedded systems is that you design them for a specific task, with a fixed scope. You deploy them, and that’s what they do from that point forward. (Routers, print servers, media streamers…)
New general purpose OSes do more things than their predecessor did. Their scope is always changing. (Widening, usually, but occasionally old functionality does get chopped off.)
If all you need is a print server, go buy a Lantronix box and be done with it.
If you need the power of CentOS, you’re probably not actually doing the same thing with it year after year.
Just to keep with the print server example, I know there’s been a lot of change in *my* world with printing in the past 10-20 years.
The change from parallel to USB wasn’t trivial. Plug & play device support is one of those things the hated systemd does for us.
In the past 5 years or so, I’ve pretty much retired all my Samba print servers, because every printer I’ve bought during that time has a competent print server built in.
It’s no surprise then that CentOS 7 isn’t anything like Red Hat Linux 7.
You'd probably be better off in java if you aren't already.
If you actually had a basis for making such a sweeping prescription like that, 90% of software written would be written in Java.
I do. ...The java stuff has been much less problematic in porting across systems - or running the same code concurrently under different OS's/versions at once.
And yet, 90% of new software continues to *not* be developed in Java.
Could be there are more downsides to that plan than upsides.
I don't think the C++ guys have even figured out a sane way to use a standard boost version on 2 different Linux's, even doing separate builds for them.
This method works for me:
# scp -r stableserver:/usr/local/boost-1.x.y /usr/local
# cd /usr/local
# ln -s boost-1.x.y boost
Then build with CXXFLAGS=-I/usr/local/boost/include.
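For a quick one-off compile outside of a larger build system, that works out to something like this (myprog.cpp is just a placeholder name):

# g++ -I/usr/local/boost/include -o myprog myprog.cpp

Because the boost symlink hides the version number, the same command line works unchanged on every box you copied the tree to.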
I’ve never done much with Windows Server, but my sense is that they have plenty of churn over in their world, too.
Yes, there are changes - and sometimes mysterious breakage. But an outright abandonment of an existing interface that breaks previously working code is pretty rare.
Yes, well, that’s one of the things you can do when you’ve got a near-monopoly on PC OSes, which allows you to employ 128,000 people. [1]
And you only get that with code that keeps users instead of driving them away.
Seriously? I mean, you actually believe that if RHEL sat still, right where it is now, never changing any ABIs, that it would finally usher in the Glorious Year of Linux? That’s all we have to do?
On Fri, Jan 2, 2015 at 5:52 PM, Warren Young wyml@etr-usa.com wrote:
OK, but should one developer make an extra effort or the bazillion people affected by it?
That developer is either being paid by a company with their own motivations or is scratching his own itch. You have no claim on his time.
Agreed - but I'm not going to say I like his breakage.
Why do you believe this is a stringent requirement? I thought CentOS was the distro targeted at organizations staffed by competent technical professionals. That’s me. Isn’t that you, too?
Yes, but I'd rather be building things on top of a solid foundation than just using planned obsolescence as job security since the same thing needs to be done over and over. And I'll admit I can't do it the right way with the approach google uses of just tossing the distribution and its tools.
Mathematics doesn’t change. The business and technology worlds do. Your example is a non sequitur.
If you are embedding business logic in your library interfaces, something is wrong.
Once again you’re making non sequiturs.
Your example was that arithmetic doesn’t change, then you go off and try to use that to explain why EL7 is wrong. So, where is the part of EL7 that doesn’t add columns of numbers correctly?
If the program won't start or the distribution libraries are incompatible (which is very, very likely) then it isn't going to add anything.
Take something simple like the dhcp server in the distro. It allows for redundant servers - but the versions are not compatible. How do you manage that by individual node upgrades when they won't fail over to each other?
Is that hypothetical, or do you actually know of a pair of dhcpd versions where the new one would fail to take over from the older one when run as its backup? Especially a pair actually shipped by Red Hat?
It's my experience or I wouldn't have mentioned it. I built a CentOS7 to match my old CentOS5 pair. It can do the same thing, but there is no way to make them actively cluster together so the new one is aware of the outstanding leases at cutover or to have the ability to revert if the new one introduces problems.
I’m not interested in the reverse case, where an old server could not take over from a newer one, because there’s no good reason to manage the upgrade that way. You drop the new one in as a backup, take the old one offline, upgrade it, and start it back up, whereupon it can take over as primary again.
The ability to fail back is important, unless you think new software is always perfect. Look through some changelogs if you think that...
Which sort of points out that the wild and crazy changes in the mainstream distributions weren't all that necessary either…
No. The nature of embedded systems is that you design them for a specific task, with a fixed scope. You deploy them, and that’s what they do from that point forward. (Routers, print servers, media streamers…)
Well yes. We have computers doing specific things. And those things span much longer than 10 years. If you are very young you might not understand that.
You'd probably be better off in java if you aren't already.
If you actually had a basis for making such a sweeping prescription like that, 90% of software written would be written in Java.
I do. ...The java stuff has been much less problematic in porting across systems - or running the same code concurrently under different OS's/versions at once.
And yet, 90% of new software continues to *not* be developed in Java.
Lots of people do lots of stupid things that I can't explain. But if numbers impress you, if you count android/Dalvik which is close enough to be the stuff of lawsuits, there's probably more instances of running programs than anything else.
Could be there are more downsides to that plan than upsides.
You didn't come up with portable non-java counterexamples to elasticsearch, jenkins, opennms, etc. I'd add eclipse, jasper reports and the Pentaho tools to the list too - all used here.
I don't think the C++ guys have even figured out a sane way to use a standard boost version on 2 different Linux's, even doing separate builds for them.
This method works for me:
# scp -r stableserver:/usr/local/boost-1.x.y /usr/local
# cd /usr/local
# ln -s boost-1.x.y boost
Then build with CXXFLAGS=-I/usr/local/boost/include.
I don't think CMake is happy with that since it knows where the stock version should be and will find duplicates. And then you have to work out how to distribute your binaries. Are you really advocating copying unmanaged, unpackaged libraries around to random places?
I’ve never done much with Windows Server, but my sense is that they have plenty of churn over in their world, too.
Yes, there are changes - and sometimes mysterious breakage. But an outright abandonment of an existing interface that breaks previously working code is pretty rare.
Yes, well, that’s one of the things you can do when you’ve got a near-monopoly on PC OSes, which allows you to employ 128,000 people. [1]
And you only get that with code that keeps users instead of driving them away.
Seriously? I mean, you actually believe that if RHEL sat still, right where it is now, never changing any ABIs, that it would finally usher in the Glorious Year of Linux? That’s all we have to do?
Yes, they can add without changing/breaking interfaces that people use or commands they already know. The reason people use RHEL at all is because they do a pretty good job of that within the life of a major version. How can you possibly think that the people attracted to that stability only want it for a short length of time relative to the life of their businesses?
On Jan 3, 2015, at 2:17 PM, Les Mikesell lesmikesell@gmail.com wrote:
On Fri, Jan 2, 2015 at 5:52 PM, Warren Young wyml@etr-usa.com wrote:
where is the part of EL7 that doesn’t add columns of numbers correctly?
If the program won't start or the distribution libraries are incompatible (which is very, very likely) then it isn't going to add anything.
That’s ABI compatibility again, and it isn’t a CentOS specific thing.
The primary reason for it is that most things on your Linux box were built from an SRPM that contains readable — and therefore recompilable — source code.
I’ve successfully rebuilt an old SRPM to run on a newer OS several times. It isn’t always easy, but it is always possible.
This fact means there is much less incentive to keep ABIs stable in the Linux world than in the Windows and OS X worlds, where raw binaries are often all you get.
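The mechanics are only a couple of commands; a minimal sketch, with a made-up package name standing in for whatever you actually need:

# rpmbuild --rebuild oldapp-1.2-3.el5.src.rpm

or, when the spec needs adjusting first:

# rpm -ivh oldapp-1.2-3.el5.src.rpm
# vi ~/rpmbuild/SPECS/oldapp.spec     # fix BuildRequires, paths, patches...
# rpmbuild -ba ~/rpmbuild/SPECS/oldapp.spec

The "not always easy" part is whatever surgery the spec file and patches end up needing, not the commands themselves.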
If your situation is that you do have binary-only programs, they’re likely commercial software of some sort, so you might think about whether you should have a maintenance contract for them, so you can get newer binaries as you move to new platforms.
If a support contract is out of the question, you have the option of creating a VM to run the old binaries today.
Docker will eat away at this problem going forward. You naturally will not already have Dockerized versions of apps built 10 years ago, and it may not be practical to create them now, but you can start insisting on getting them today so that your future OS changes don’t break things for you again.
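To sketch what that looks like today, assuming the stock centos:6 image from the public registry (the application paths are placeholders):

# docker pull centos:6
# docker run --rm -v /opt/legacyapp:/opt/legacyapp centos:6 /opt/legacyapp/bin/report

The old binary sees an EL6 userland no matter what the host is running, which is the whole point.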
I built a CentOS7 to match my old CentOS5 pair. It can do the same thing, but there is no way to make them actively cluster together so the new one is aware of the outstanding leases at cutover or to have the ability to revert if the new one introduces problems.
I believe that was ISC’s fault, not Red Hat’s.
Red Hat did their job: they supported an old version of ISC dhcpd for 7 years. ISC broke their failover API during that period. You can’t blame Red Hat for not going in and reverting whatever change caused the breakage when they finally did upgrade dhcpd with EL6.
This is a consequence of pulling together software developed by many different organizations into a software distribution, as opposed to developing everything in-house.
You are free to backport the old EL5 dhcpd SRPM to EL7.
Perhaps your point is that Red Hat should have either a) continued to distribute an 8-year-old version of dhcpd with EL7 [1], or b) somehow given you the new features of ISC dhcpd 4.2.5 without breaking anything? If so, I take this point back up at the end.
[1] https://lists.isc.org/pipermail/dhcp-announce/2006-November/000090.html
The ability to fail back is important, unless you think new software is always perfect.
You fall back in this case by turning off the EL7 dhcpd and going back to a redundant set of EL5 dhcpds.
All dhcpd needs to do here is help you to migrate forward. As long as that doesn’t break, you have what you need, if not what you *want*.
The nature of embedded systems is that you design them for a specific task, with a fixed scope. You deploy them, and that’s what they do from that point forward.
And those things span much longer than 10 years. If you are very young you might not understand that.
My first look at the Internet was on a VT102. I refer not to a terminal emulator, but to something that crushes metatarsals if you drop it on your foot.
I think I’ve got enough gray in my beard to hold my own in this conversation.
And yet, 90% of new software continues to *not* be developed in Java.
Lots of people do lots of stupid things that I can't explain.
Just because you can’t explain it doesn’t mean it’s an irrational choice.
But if numbers impress you, if you count android/Dalvik which is close enough to be the stuff of lawsuits, there's probably more instances of running programs than anything else.
There are more JavaScript interpreters in the world than Dalvik, ART,[2] and Java ® VMs combined. Perhaps we should rewrite everything in JavaScript instead?
If we consider only the *ix world, there are more Bourne-compatible shell script interpreters than Perl, Python, or Ruby interpreters. Why did anyone bother to create these other languages, and why do we spend time maintaining these environments and writing programs for them?
Why even bother with ksh or Bash extensions, for that matter? The original Bourne shell achieved Turing-completeness in 1977. There is literally nothing we can ask a computer to do that we cannot cause to happen via a shell script. (Except run fast.)
If you think I’m wrong about that, you probably didn’t ever use sharchives.
[2] http://en.wikipedia.org/wiki/Android_Runtime
Could be there are more downsides to that plan than upsides.
You didn't come up with portable non-java counterexamples
I already told you that I didn’t want to start a Java argument. This isn’t the place for it.
Still, you seem to be Jonesing for one, so I will answer a few of your points, with the hope that, sated, you will allow this subtopic to die before it slides into the teeth of Godwin’s Law, as so many other advocacy threads have before it.
elasticsearch, jenkins, opennms
Yes, Java has had several big wins. So have C, C++, C#, Objective C, Perl, Python, Ruby, PHP, Visual Basic, R, PL/SQL, COBOL, and Pascal.
About half of those are machine-portable. UCSD Pascal (1978!) even ran in a JRE-like VM.
Though Java is one of the top languages in the software world, that world is so fragmented that even its enviable spot near the top [3] means it’s got hold of less than a sixth of the market.
If that still impresses you, you aren’t thinking clearly about how badly outnumbered that makes you. *Everyone* with a single-language advocacy position is badly outnumbered. This is why I learn roughly one new programming language per year.
[3] http://www.tiobe.com/index.php/content/paperinfo/tpci/
I'd add eclipse
Yes, much praise for the IDE that starts and runs so slowly that even us poor limited humans with our 50-100 ms cycle times notice it.
It takes Eclipse 2-3x as long to start as Visual Studio on the same hardware, despite both having equally baroque feature sets and bloated disk footprints.
(Yes, I tested it, several times. I even gave VS a handicap by running its host OS under VMware. It’s actually a double-handicap, since much of VS is written in .NET these days!)
Once the Eclipse beast finally stumbles into wakefulness, it’s as surly and slow to move as a teenager after an all-nighter. I can actually see the delay in pulling down menus, for Ritchie’s sake. *Menus*. In *2015*. Until Java came along, I thought slow menu bars were an historical artifact even less likely to make a reappearance than Classic Mac OS.
It isn’t just Eclipse that does this. Another Java-based tool I use here (oXygen) shows similar performance problems.
Oh, and somehow both of these apps have managed to stay noticeably slow despite the many iterations of Moore’s Law since The Bubble, when these two packages were first perpetrated.
This method works for me:
# scp -r stableserver:/usr/local/boost-1.x.y /usr/local
# cd /usr/local
# ln -s boost-1.x.y boost
Then build with CXXFLAGS=-I/usr/local/boost/include.
I don't think CMake is happy with that since it knows where the stock version should be and will find duplicates,
Only if you use CMake’s built-in Boost module to find it. It’s easy to roll your own, which always does the right thing for your particular environment.
The built-in Boost module is for situations where a piece of software needs *a* version of Boost, and isn’t particular about which one, other than possibly to set a minimum compatible version.
And then you have to work out how to distribute your binaries. Are you really advocating copying unmanaged, unpackaged libraries around to random places?
I’ve never had to use any of the Boost *libraries* proper. A huge subset of Boost is implemented as C++ templates, which compile into the executable.
If I *did* have to distribute libboost*.so, I’d just ship them in the same RPM that contains the binaries that reference those libraries.
This is no different than what you often see on Windows or Mac: third-party DLLs in c:\Program Files that were installed alongside the .exe, or third-party .dylib files buried in foo.app/Contents/Frameworks or similar.
Seriously? I mean, you actually believe that if RHEL sat still, right where it is now, never changing any ABIs, that it would finally usher in the Glorious Year of Linux? That’s all we have to do?
Yes, they can add without changing/breaking interfaces that people use or commands they already know. The reason people use RHEL at all is because they do a pretty good job of that within the life of a major version.
Red Hat avoids breaking APIs and ABIs within a major release by purposely not updating anything they don’t absolutely have to; they rarely add new features to old software. You can’t just extend one activity to the other with the verb “can”. Red Hat “can” do all things, but Red Hat will not do all things.
(My 20% project if I get hired by Red Hat: design 1F @ 1MV capacitors, so that Red Hat can power North Carolina for a year or so, completely off the grid. This project will allow Red Hat to move to a foundation model, completely supported by the interest from the invested proceeds of the first year’s energy sales. (Yes, I did the arithmetic.))
Fantasy aside, Red Hat’s actual current operating model is how we get into a situation where a supported OS (EL5) will still be shipping Perl 5.8 two years from now, into a world where there are an increasing number of major Perl modules that refuse to run on anything less than Perl 5.10. (e.g. Catalyst) This is not an enviable position.
My company only recently stopped supporting SeaMonkey 1.09 in our web app because we finally dropped support for EL3 in our newest branch of the software. (We can still build binaries for old branches on EL3, though there is little call to do so.)
The only reason we had to support such an ancient browser [4] in the first place was because it was what EL3 shipped with, and we decided it would be an embarrassment if we sent an old customer an RPM containing our web app for a platform whose native browser wouldn’t even load said web app.
This is what heroic levels of backwards compatibility buys, and you’re saying you want more of the same?
[4] Contemporaneous with — but in many ways inferior to — MSIE 7
How can you possibly think that the people attracted to that stability only want it for a short length of time relative to the life of their businesses?
Just because X is good doesn’t mean 3X is better. More Xs cost more $s. Red Hat’s job gets harder and harder, the longer they push out the EOL date on an OS version. That money has to come from somewhere.
How high do you think the cost of a RHEL sub can go before its users start abandoning it? If Red Hat loses its market share, CentOS will slide into obscurity, too.
Even if Red Hat does this — say they push the EOL date of EL5 out to 2027, a full two decades — and even if you continue to use one of the free clones of RHEL, you’re going to pay the price for the upgrade eventually. (If not you, then your successor.) If you think EL5 to EL7 was a disaster, how do you plan on choking down two decades worth of change in a single lump?
Or were you planning on demanding that EL5 be supported forever, with no changes, except for magical cost-free features?
On Mon, Jan 5, 2015 at 9:22 PM, Warren Young wyml@etr-usa.com wrote:
Docker will eat away at this problem going forward. You naturally will not already have Dockerized versions of apps built 10 years ago, and it may not be practical to create them now, but you can start insisting on getting them today so that your future OS changes don’t break things for you again.
Yes, it is just sad that it is necessary to isolate your work from the disruptive nature of the OS distribution. But it is becoming clearly necessary.
I built a CentOS7 to match my old CentOS5 pair. It can do the same thing, but there is no way to make them actively cluster together so the new one is aware of the outstanding leases at cutover or to have the ability to revert if the new one introduces problems.
I believe that was ISC’s fault, not Red Hat’s.
Agreed - and some of the value of Red Hat shows in the fact that the breakage is not packaged into a mid-rev update.
Perhaps your point is that Red Hat should have either a) continued to distribute an 8-year-old version of dhcpd with EL7 [1], or b) somehow given you the new features of ISC dhcpd 4.2.5 without breaking anything? If so, I take this point back up at the end.
Maybe document how you were supposed to deal with the situation, keeping your lease history intact and the ability to fail over during the transition. My point here is that there are people using RHEL that care about this sort of thing, but the system design is done in Fedora where I'm convinced that no one actually manages any machines doing jobs that matter or cares what the change might break. That is, the RHEL/Fedora split divided the community that originally built RH into people who need to maintain working systems and people who just want change. And they let the ones who want change design the next release.
And those things span much longer than 10 years. If you are very young you might not understand that.
My first look at the Internet was on a VT102. I refer not to a terminal emulator, but to something that crushes metatarsals if you drop it on your foot.
I think I’ve got enough gray in my beard to hold my own in this conversation.
So, after you've spent at least 10 years rolling out machines to do things as fast as you can, and teaching the others in your organization to spell 'chkconfig' and use 'system ...' commands, wouldn't you rather continue to be productive and do more instead of having to start over and re-do the same set of things over again just to keep the old stuff working?
But if numbers impress you, if you count android/Dalvik which is close enough to be the stuff of lawsuits, there's probably more instances of running programs than anything else.
There are more JavaScript interpreters in the world than Dalvik, ART,[2] and Java ® VMs combined. Perhaps we should rewrite everything in JavaScript instead?
I'm counting the running/useful instances of actual program code, not the interpreters that might be able to run something. But JavaScript is on the rise mostly because the interpreters upgrade transparently and the hassle is somewhat hidden.
If we consider only the *ix world, there are more Bourne-compatible shell script interpreters than Perl, Python, or Ruby interpreters. Why did anyone bother to create these other languages, and why do we spend time maintaining these environments and writing programs for them?
We spend time 'maintaining' because the OS underneath churns. Otherwise we would pretty quickly have all of the programs anyone needs completed. I thought CPAN was approaching that long ago, or at least getting to the point where the new code you have to write to do about anything would take about half a page of calls to existing modules.
Why even bother with ksh or Bash extensions, for that matter? The original Bourne shell achieved Turing-completeness in 1977. There is literally nothing we can ask a computer to do that we cannot cause to happen via a shell script. (Except run fast.)
Well, Bourne didn't deal with sockets. My opinion is that you'd be better off switching to perl at the first hint of needing arrays/sockets or any library modules that already exist in CPAN instead of extending a shell beyond the basic shell-ish stuff. But nobody asked me about that.
If you think I’m wrong about that, you probably didn’t ever use sharchives.
Of course I used sharchives - and I value the fact that (unlike most other languages) things written in Bourne-compatible shell syntax at any time in the past will still run, and fairly portably - except perhaps for the plethora of external commands that were normally needed. Perl doesn't have quite as long a history but is just about as good at backwards compatibility. With the exception of interpolating @ in double-quoted strings starting around version 5, pretty much anything you ever wrote in perl will still work. I'm holding off on python and ruby until they have a similar history of not breaking your existing work with incompatible updates.
If I *did* have to distribute libboost*.so, I’d just ship them in the same RPM that contains the binaries that reference those libraries.
This is no different than what you often see on Windows or Mac: third-party DLLs in c:\Program Files that were installed alongside the .exe, or third-party .dylib files buried in foo.app/Contents/Frameworks or similar.
Yes, on windows, you can just pick a boost version out of several available and jenkins and the compiler and tools seem to do the right thing. On linux it may be possible to do that, but you have to fight with everything that knows where the stock shared libraries and include files are supposed to be. While every developer almost certainly has his own way of maintaining multiple versions of things through development and testing, the distribution pretends that only one version of things should ever be installed. Or if multiples are installed there must be a system-wide default for which thing executes set by symlinks to obscure real locations. Never mind that Linux is multi-user and different users might want different versions. Or at least it was that way before 'software collections'.
Or were you planning on demanding that EL5 be supported forever, with no changes, except for magical cost-free features?
No, what I wish is that every change would be vetted by people actively managing large sets of systems, with some documentation about how to handle the necessary conversions to keep things running. I don't believe anyone involved in Fedora and their wild and crazy changes actually has anything running that they care about maintaining or a staff of people to retrain as procedures change. There's no evidence that anyone has weighed the cost of backwards-incompatible changes against the potential benefits - or even knows what those costs are or how best to deal with them.
On Jan 6, 2015, at 11:43 AM, Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Jan 5, 2015 at 9:22 PM, Warren Young wyml@etr-usa.com wrote:
So, after you've spent at least 10 years rolling out machines to do things as fast as you can, and teaching the others in your organization to spell 'chkconfig' and use 'system ...' commands, wouldn't you rather continue to be productive and do more instead of having to start over and re-do the same set of things over again just to keep the old stuff working?
Having been through a bunch of these transitions already (SysV -> Linux bingo -> BSD -> OS X…) it doesn’t greatly bother me.
Given that I’ve spent more time on Red Hattish Linuxes than any other *ix, I suppose I’m more surprised it’s lasted as long as it has than I am upset that it’s changing again.
There are more JavaScript interpreters in the world than Dalvik, ART,[2] and Java ® VMs combined. Perhaps we should rewrite everything in JavaScript instead?
I'm counting the running/useful instances of actual program code,
I rather doubt you’ve done anything like actual research on this.
If you’re just buying Oracle’s bogus “3 billion” number uncritically, there are a bunch of problems with that, but I’d have to drag us off topic to discuss them. In any case, *ix falls into the noise floor if this is the scale you’re using to gauge the success of Java.
If you’re trying to say that there’s more CPU-hours of JVM use on *ix than JS in browsers, that sort of elitist argument has been repeatedly defeated. Big Iron vs minicomputers, Unix workstations vs the PC, “real” Unix vs Linux, the PC vs the smartphone…
Time and time again, the forces driving the end-user market end up taking over the hrumph hrumph serious computing market. This has already happened with JS vs Java in the “3 billion” space Oracle wants to claim, and I see no reason why it can’t eventually steamroll the *ix world, too, via node.js.
Even if you wave a magic wand and get the JVM onto every *ix box — which is currently very much *not* the case — you’re ignoring the Tiobe numbers I posted in the previous reply. There’s plenty to argue about when it comes to Tiobe’s methodology, but not so much that you can scrape together anything like a majority for any single language.
I think the real difference we have on this point is that I’m not actually serious when I propose that the whole world rewrite all software in My Favorite Language just to make my job easier.
JavaScript is on the rise mostly because the interpreters upgrade transparently and the hassle is somewhat hidden.
No, JavaScript is on the rise because the web is on the rise, and it managed to secure the spot as the web’s scripting language through an accident of history. That is all.
Otherwise we would pretty quickly have all of the programs anyone needs completed.
"There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.”
— William Thomson, Lord Kelvin, 1900
Yes, *that* Kelvin.
I thought CPAN was approaching that long ago, or at least getting to the point where the new code you have to write to do about anything would take about half a page of calls to existing modules.
Ah yes, the ever-near future where everyone will plug a few Lego-like software blocks together and thereby construct any useful program they wish, while lacking any clue about what’s going on. That future’s been 5 years away for the past 5 decades.
It’s as likely today as ever — which is to say, “not” — and it isn’t the Fedora development community’s fault that we have not yet achieved this nirvana.
Even in electronics, where you have physics to constrain the set of possible realizations, we haven’t even approached this level of nirvana. For every Arduino success story, you have ten stories of cluebies who burned up their H bridge motor driver because they didn’t understand that P=I²R, and that R is almost never zero.
(If R is zero, you’re dealing with cryogenics, so all you’ve done is shifted into a different class of cluebie errors that’s more likely to leave said cluebie injured.)
Software is “pure thought-stuff,” to quote Fred Brooks. We have very little constraint on the scope and range of things we can create in software. Any attempt to package software up into a usefully-small set of precisely-characterized little black boxes is a Quixotic dream.
Why even bother with ksh or Bash extensions, for that matter? The original Bourne shell achieved Turing-completeness in 1977. There is literally nothing we can ask a computer to do that we cannot cause to happen via a shell script. (Except run fast.)
Well, Bourne didn't deal with sockets.
BSD Sockets didn’t exist in 1977.
In any case, you’re willfully ignoring the nature of *ix type shell script programming. A shell script’s “function library” is the set of all programs in the PATH. Drop nc/ncat/netcat/socat in the PATH, and suddenly your shell script knows how to talk to BSD sockets. It’s really no different than installing libnsl on a Solaris box.
No nc-alike here? Okay, echo the machine code to disk for an ELF binary that makes the necessary syscalls, using octal escapes. No, don’t cheat and echo out a C or assembly language program, go straight to the metal, you softie.
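To put the nc point in concrete terms, a shell script can speak to a TCP socket with nothing fancier than this (assuming an nc that takes its request on stdin, which the common implementations do):

#!/bin/sh
# fetch a page over a raw TCP connection; nc acts as the script's "sockets library"
printf 'GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n' | nc www.example.com 80

No Perl, no Java, and nothing to recompile when the OS underneath changes.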
My opinion is that you'd be better off switching to perl at the first hint of needing arrays/sockets or any library modules that already exist in CPAN instead of extending a shell beyond the basic shell-ish stuff.
And there you are, foot on the slippery slope; down you go.
Perl is even less widely deployed than Java. It isn’t even on all *ixes by default. The BSDs, Cygwin, and Arch (at least) all leave it out of the stock install. POSIX doesn’t require it, so it also isn’t in most small embedded *ix systems, where you at least find a POSIX shell.
If you were to try to get Perl into POSIX, I think you’d fail. Perl 5 is just too big and hairy. No one is going to try to reimplement the whole thing.
Python has been reimplemented several times, but I think it’s still got a smaller installed base than Perl, simply due to getting popular later.
Lua’s small enough to reimplement relatively easily, but not powerful enough on its own to rival the POSIX shell as a general-purpose *ix programming system. It needs to be embedded into something else, which provides a richer function library.
Ruby and Scheme are powerful and small enough to make it as a stepping stone in POSIX between shell and C, but not popular enough to win the war a champion for either would have to fight to accomplish it.
There you have the mess. There is no universally-installed language today that fills the gap between shell and C, and there’s no likely prospect for such a thing.
You can’t just wish it away by telling everyone to switch to Java, or JavaScript, or whatever else. All you’re going to do is create another “standard” in the XKCD sense:
pretty much anything you ever wrote in perl will still work.
That’s becoming less true as Perl becomes stricter. A lot of old code won’t run without warnings any more, and if you turn on things like “use strict,” as Perl books have been recommending for over a decade, a huge amount of old code won’t even run.
Then you have the Modern Perl movement, which is responsible for the increase of CPAN modules that only work on Perl 5.10+.
http://www.josetteorama.com/what-is-modern-perl/
Since the capacity of the Perl developers is not infinite, this movement will cause more and more old Perl mechanisms to be deprecated, so that they can eventually be removed.
Perl 6 was an attempt to achieve this in one big jump. Perl 5 is in the process of slowly accreting some of Perl 6’s features. Eventually, I think the jump to Perl 6 won’t look so big, and Perl 6 will finally get off the ground. Perhaps even soon.
holding off on python and ruby until they have a similar history of not breaking your existing work with incompatible updates.
Those counters both got reset relatively recently: the Python 3 (née 3000) and Ruby 1.9 transitions were fairly traumatic.
I’m sure Oracle wishes Java 6 would disappear finally, too.
There is no sanctuary, Les. Change is everywhere in computing.
Or were you planning on demanding that EL5 be supported forever, with no changes, except for magical cost-free features?
No, what I wish is that every change would be vetted by people actively managing large sets of systems
EL7 had a long beta period. Where were you?
Yeah, I know, not your job. But since you’re choking throats over here on the CentOS ML instead of choking Red Hat’s throat via a RHEL support contract, I think that puts the responsibility on your shoulders. You can’t expect someone else to scratch your itch just because you loudly demand it on the Internet. You either have to pay for it somehow, or hope that someone else also has your itch *and* wishes to pay for it themselves.
Is that all this is? Trying to get someone else riled up enough that they’ll fork EL6 for you?
On Tue, 2015-01-06 at 16:07 -0700, Warren Young wrote:
"There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.” — William Thomson, Lord Kelvin, 1900
Now means the current time. Now is not, and never will be, The (unknown) Future.
In the real world of using computers productively for repetitive tasks, people want stability and perhaps faster running programmes. No one ever wants a major upset of being forced to use a different method to perform the same tasks.
Young men are enthusiastic about implementing new ideas. Old men with substantially more experience wisely want to avoid disrupting well-running systems. Time is money. Disruptions waste money and cause errors.
On Jan 6, 2015, at 5:06 PM, Always Learning centos@u62.u22.net wrote:
On Tue, 2015-01-06 at 16:07 -0700, Warren Young wrote:
"There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.”
— William Thomson, Lord Kelvin, 1900
Now means the current time. Now is not, and never will be, The (unknown) Future.
Yyyyeah.
Let’s rewrite the quote with that interpretation:
“We have already discovered everything we are going to discover up to and including February 18, 1900.”
— William Thomson, Lord Kelvin, 1900
Not exactly the sparkling sort of statement we expect from one of the most brilliant scientists ever to walk the planet, is it? Rather on the vapid side, yes? Could it be that that is not actually what he meant?
Don’t like Kelvin? ‘Kay, how about this one:
“The advancement of the arts, from year to year, taxes our credulity and seems to presage the arrival of that period when human improvement must end.”
— Henry L. Ellsworth, Commissioner of the U.S. Patent Office, in the office's 1843 Annual Report
I think we’ll figure out something new to do with computers tomorrow. Certainly by Friday at latest.
On Tue, 2015-01-06 at 18:51 -0700, Warren Young wrote:
I think we’ll figure out something new to do with computers tomorrow. Certainly by Friday at latest.
You seem to forget. Computers were invented to perform repetitive tasks. Computer usage should be serving mankind - not making it more difficult for mankind.
On 07 January 2015 @01:37 zulu, Always Learning wrote:
You seem to forget. Computers were invented to perform repetitive tasks.
Or maybe, some of us just seem to remember it differently. In my opinion, robots/automatons were invented to perform repetitive tasks; computers were invented to perform logic operations faster and more-reliably than humans.
On Wed, Jan 7, 2015 at 7:31 AM, Darr247 darr247@gmail.com wrote:
On 07 January 2015 @01:37 zulu, Always Learning wrote:
You seem to forget. Computers were invented to perform repetitive tasks.
Or maybe, some of us just seem to remember it differently. In my opinion, robots/automatons were invented to perform repetitive tasks; computers were invented to perform logic operations faster and more-reliably than humans.
There's still a very odd mix of art and science involved. This is part of the fun, but still it seems like when everyone has the same problem from the same causes there would be some way to automate or re-use the knowledge of the fix instead of making everyone spend time on their own new creative version.
On Jan 7, 2015, at 7:02 AM, Les Mikesell lesmikesell@gmail.com wrote:
There's still a very odd mix of art and science involved.
Yes. This is part of what I was getting at with my definition of “technology.” Once a thing becomes reliable, it stops being technology. It’s been reduced to the point Mr. Always Learning wants, something that merely serves as a tool to improve our lives.
CentOS is far from perfect, however; it’s definitely still “technology.” That means it’s going to change.
still it seems like when everyone has the same problem from the same causes there would be some way to automate or re-use the knowledge of the fix instead of making everyone spend time on their own new creative version.
Let’s reflect on two maxims often heard in computing:
1. Don’t reinvent the wheel.
2. To one who only has a hammer, every problem looks like a nail.
Both capture a slice of truth, but there’s a tension between them. If the hammer is solving the problem, why replace it? Because it isn’t the right tool for the job. What if a better tool exists, but it still isn’t the *right* tool?
Which text editor should we throw out of the distribution as redundant: emacs, vi, nano, joe, or gedit?
This is one of the consequences of openness. It’s why Debian has over 100 forks.
Try this concept on for size: creative destruction. Sometimes you have to tear something down in order to build something better in its place.
GNOME, KDE, XFCE, or OpenBox?
Would the world really be a better place if CDE had never been replaced? Me, I’ll take GNOME 3 and all its warts over CDE any day of the week. CDE never would have *evolved* to be the equal of GNOME; it had to be destroyed to make room.
Perl, Python, Ruby, C, C++, or Java?
A whole lot of brilliance has gone into replacing FORTRAN, COBOL and Lisp, but is the current zoo of high-level languages a net improvement? As a programmer, I think so, but I’ll bet we can find someone who will say that brilliance would have been better redirected to other pursuits, even if it meant continuing to program in those languages. These people usually aren’t programmers, though, so I’m not inclined to give their opinions much weight.
PostgreSQL, MySQL, or SQLite?
How many DBMSes do we really need? SQLite simply cannot replace a lot of PostgreSQL installations. Likewise, it’s usually a blunder to try to use PostgreSQL in places where SQLite is most often found. (I’ve only seen it tried once; it was indeed a blunder.) MySQL can stretch to touch either extreme, but it’s not best used at these extremes.
Subversion, Git, or Fossil?
Do we really need all the version control systems we have? Can’t we all just agree on git and call it “done”? Again, programmer here: I’m glad we have a choice. The three I listed fit into very different niches. I’m glad we’ve destroyed CVS and SourceSafe; the world is a better place now. The time it’s taken people to convert to something better doesn’t bother me in the slightest. I gladly spent some of that time myself.
Trump card: Linux. Need I say more? That’s creative destruction writ large.
On Thu, Jan 8, 2015 at 10:35 AM, Warren Young wyml@etr-usa.com wrote:
Would the world really be a better place if CDE had never been replaced? Me, I’ll take GNOME 3 and all its warts over CDE any day of the week. CDE never would have *evolved* to be the equal of GNOME; it had to be destroyed to make room.
But it doesn't matter how pretty Gnome3 is on some other box. I use remote connections through NX/freenx or x2go exclusively. Gnome3 won't work that way. And that's typical of the changes.
On Thu, January 8, 2015 10:52 am, Les Mikesell wrote:
On Thu, Jan 8, 2015 at 10:35 AM, Warren Young wyml@etr-usa.com wrote:
Would the world really be a better place if CDE had never been replaced? Me, I’ll take GNOME 3 and all its warts over CDE any day of the week. CDE never would have *evolved* to be the equal of GNOME; it had to be destroyed to make room.
But it doesn't matter how pretty Gnome3 is on some other box. I use remote connections through NX/freenx or x2go exclusively. Gnome3 won't work that way. And that's typical of the changes.
Let me second you. I for one have fled from Gnome (on my FreeBSD workstation, once the upgrade made me switch from Gnome 2 to Gnome 3). I need the job done, and want my GUI user interface to be what was perfectly suitable for a long time. I do not care about its looks or the fanciness of an "ultimately different" user experience. Therefore I fled from Gnome to MATE. (The decision was made after two weeks of frustration trying to do in Gnome 3 the work I usually was doing on my workstation.)
This is just my $0.02 (and note, this is from me, the one who is "not ready to join ipad generation").
Just on a side note: I question the intelligence of an attitude that something (that works for some people) has to be destroyed to make room for something else one thinks to be more appropriate.
Valeri
++++++++++++++++++++++++++++++++++++++++ Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247 ++++++++++++++++++++++++++++++++++++++++
On Thu, Jan 08, 2015 at 11:11:10AM -0600, Valeri Galtsev wrote:
Just on a side note: I question the intelligence of an attitude that something (that works for some people) has to be destroyed to make room for something else one thinks to be more appropriate.
Let me start by saying I'm also not a fan of Gnome3, and prefer MATE. However, I believe the new interface provided by Gnome3 is both well thought out and based on the results of research on Human-Computer interactions. Gnome has published their GNOME Human Interface Guidelines here:
https://developer.gnome.org/hig-book/3.2/
https://developer.gnome.org/hig/stable/
The idea is to have a uniform and consistent interface that is intuitive to all potential users. You'll probably agree with me that UNIX/Linux interfaces tend to be extremely inconsistent between programs, and even between elements of a display interface. Most of us who have been using UNIX for decades are familiar with many of the quirks and have long since adapted. I don't fault Gnome for trying to actually provide some guidelines for design. Apple has been praised for many years over its easy-to-use interface, largely because they have very strict control over their interfaces and a walled garden approach to apps. It would be very difficult to duplicate the ease of use from Apple while maintaining the free/open spirit in FOSS, so Gnome has a difficult path to tread.
On Thu, January 8, 2015 11:27 am, Jonathan Billings wrote:
On Thu, Jan 08, 2015 at 11:11:10AM -0600, Valeri Galtsev wrote:
Just on a side note: I question the intelligence of an attitude that something (that works for some people) has to be destroyed to make room for something else one thinks to be more appropriate.
Let me start by saying I'm also not a fan of Gnome3, and prefer MATE. However, I believe the new interface provided by Gnome3 is both well thought out and based on the results of research on Human-Computer interactions.
My only point there was: do not destroy what works in the name of building something better. On the other hand, if it is open source software one cannot ask the developers to continue to maintain what they do not want to maintain. Luckily (for some of us), MATE forked off Gnome at that fundamental decision point. My only disagreement was with "one needs to destroy something to build the new fancy thing...". That doesn't sound to me like a sound way to do things. (Putting up a new building in downtown Chicago is a rare exception ;-)
Gnome has published their GNOME Human Interface Guidelines here:
https://developer.gnome.org/hig-book/3.2/
https://developer.gnome.org/hig/stable/
The idea is to have a uniform and consistent interface that is intuitive to all potential users. You'll probably agree with me that UNIX/Linux interfaces tend to be extremely inconsistent between programs, and even between elements of a display interface. Most of us who have been using UNIX for decades are familiar with many of the quirks and have long since adapted. I don't fault Gnome for trying to actually provide some guidelines for design. Apple has been praised for many years over its easy-to-use interface,
Yes, and no. On the "no" part: have you ever heard someone saying "sometimes you need to trick macintosh into doing what you actually want to do". That is: when you are using that nice GUI interface, not on the level of command line, of course.
Valeri
largely because they have very strict control over their interfaces and a walled garden approach to apps. It would be very difficult to duplicate the ease of use from Apple while maintaining the free/open spirit in FOSS, so Gnome has a difficult path to tread.
++++++++++++++++++++++++++++++++++++++++ Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247 ++++++++++++++++++++++++++++++++++++++++
On Jan 8, 2015, at 10:11 AM, Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
I question the intelligence of an attitude that something (that works for some people) has to be destroyed to make room for something else one thinks to be more appropriate.
The amount of actively-maintained software has always matched the available brainpower given over to its maintenance. Therefore, if we are going to add more features, some old things have to be left to die.
I do not mean “destroyed” in the physical sense. CDE is still there, if you want it. There just hasn’t been any new development on it in many years now, because the people who were doing that have all moved on.
If you want new features *and* everything old to continue to be maintained, you have a couple of options:
1. Grow the number of active programmers.
Every open source project I monitor closely frequently sends out calls for patches and other contributions, which are usually answered with crickets. Shortly after the call goes out, the crickets are drowned out by people demanding more bug fixes, and more features, and behavior changes, and better docs, and… I don’t know *any* open source project that has more active developers than it knows what to do with.
2. Reduce the amount of effort it takes to maintain a given feature set.
A lot of work has gone into that. It’s one reason software is moving to higher- and higher-level languages. Much of the Red Hat specific code in RHEL is written in Python, for example, not C, the traditional language of Linux.
Then we get old farts complaining that the new software is less efficient, because it isn’t written in C. That’s the tradeoff: computer efficiency for programmer efficiency, because programmers are more expensive and harder to come by.
On Thu, Jan 8, 2015 at 11:42 AM, Warren Young wyml@etr-usa.com wrote:
- Reduce the amount of effort it takes to maintain a given feature set.
A lot of work has gone into that. It’s one reason software is moving to higher- and higher-level languages. Much of the Red Hat specific code in RHEL is written in Python, for example, not C, the traditional language of Linux.
Yeah, that's going to be fun when the 'push incompatible changes' mentality reaches the Python interpreter. Oh, wait - when it affects RHEL's own work, they hold back...
Then we get old farts complaining that the new software is less efficient, because it isn’t written in C. That’s the tradeoff: computer efficiency for programmer efficiency, because programmers are more expensive and harder to come by.
You get programming efficiency with well designed high level library component support with stable interfaces regardless of the language itself. How long would it take you to hook a perl or java program to a new sql database (assuming you aren't the first one to ever do it)? Or parse some xml? The things that kill programmer time are when you have to do tedious tasks like that from scratch or deal with interface changes in the code that was supposed to handle it.
On Thu, 2015-01-08 at 09:35 -0700, Warren Young wrote:
Once a thing becomes reliable, it stops being technology.
Oh No. Just because something works well it does not stop being "technology" unless the USA people, who have decimated my language (English), have a new definition for "technology".
Warren are you serious that things that do not work well are "technology" but things that do work well are *not* technology ?
Quoting Always Learning centos@u62.u22.net:
On Thu, 2015-01-08 at 09:35 -0700, Warren Young wrote:
Once a thing becomes reliable, it stops being technology.
Oh No. Just because something works well it does not stop being "technology" unless the USA people, who have decimated my language (English), have a new definition for "technology".
ah, yes. Two great nations, divided by a common language...
Dave
Warren are you serious that things that do not work well are "technology" but things that do work well are *not* technology ?
-- Regards,
Paul. England, EU. Je suis Charlie !
On Fri, 2015-01-09 at 14:20 -0800, Dave Stevens wrote:
Quoting Always Learning centos@u62.u22.net:
Oh No. Just because something works well it does not stop being "technology" unless the USA people, who have decimated my language (English), have a new definition for "technology".
ah, yes. Two great nations, divided by a common language...
The one east of the Atlantic continues to decline and is only "great" when compared in size to Brittany in the north-west of La France.
Great Britain actually means a Brittany bigger than the French area called Bretagne.
The USA has certainly damaged my language. These days (ever since George W) one no longer devises a "plan". Instead one makes a "road map" :-)
On Jan 9, 2015, at 3:15 PM, Always Learning centos@u62.u22.net wrote:
unless the USA people, who have decimated my language (English), have a new definition for "technology”.
If you roll back all the changes made to English since colonial times, you’re left with Middle English. So, how do you feel about Chaucer?
Those who try to fight the inevitable changes that language goes through end up as laughing stocks:
https://en.wikipedia.org/wiki/Acad%C3%A9mie_fran%C3%A7aise#Conservatism
On Thu, 2015-01-08 at 09:35 -0700, Warren Young wrote:
Once a thing becomes reliable, it stops being technology.
Warren are you serious that things that do not work well are "technology" but things that do work well are *not* technology ?
It’s a bit of a glib observation, but there’s a serious core to it.
Pencils, paper, carpet, and toasters were once technology. They were luxury items, manufactured by skilled artisans. Now they come off an assembly line, durable and perfect, every time.
The original sense of the word “technology” comes from its Greek roots, meaning a treatise on some art, such as a book on how to brew beer. We don’t use the word that way any more. In the 1970s, we started to distinguish some technology as “high” technology, then high-tech, and now just tech.
I’m not really serious about this new definition for technology, but it is a useful one. It is entirely within the normal scope of language evolution for it to become the new sense of the word.
On Wed, 2015-01-07 at 08:31 -0500, Darr247 wrote:
On 07 January 2015 @01:37 zulu, Always Learning wrote:
You seem to forget. Computers were invented to perform repetitive tasks.
Or maybe, some of us just seem to remember it differently. In my opinion, robots/automatons were invented to perform repetitive tasks; computers were invented to perform logic operations faster and more-reliably than humans.
My recollection was influenced by my discovery in 1966 of Power Samas (later acquired by ICT) 40? 36? column small punch cards fed into a printing machine to produce invoices for a then major international publisher. The replacement/upgrade was a Honeywell 1200 with tapes, 80 column card reader, printer and the ubiquitous air conditioning and humidity control system. Luxuries like keyboards, disks and screens had not been invented. It was plain, simple, effective and fairly reliable Data Processing.
Two years later on a smaller H-120 I was writing commercial Cobol programmes for a Norwegian angling distributor - orders, invoices, statements, stock control. Again it was essentially repetitive processing done much faster than before the usage of mainframes.
Computers cannot function without logic. One of the most important advancements was the "stored programme" feature, instead of having to reload the data processing programme, contained on punched cards in strict sequence, every time a job needed doing.
Never bumped into "robots/automatons" anywhere at all in Data Processing, nor encountered anyone using that term. Computers evolved very slowly from automated machine processing. A major advancement was made at the USA's famous Bell Labs (subsequently destroyed in the interests of shareholder profits), which invented DTMF. It was the IC.
SPAM was another USA invention and so too were Microsoft-suitable viruses.
On 01/07/2015 01:06 PM, Always Learning wrote:
On Tue, 2015-01-06 at 16:07 -0700, Warren Young wrote:
"There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.” — William Thomson, Lord Kelvin, 1900
Now means the current time. Now is not, and never will be, The (unknown) Future.
In the real world of using computers productively for repetitive tasks, people want stability and perhaps faster running programmes. No one ever wants a major upset of being forced to use a different method to perform the same tasks.
Young men are enthusiastic about implementing new ideas. Old men with substantially more experience wisely want to avoid disrupting well-running systems. Time is money. Disruptions waste money and cause errors.
New, disruption-causing ideas are also the food of innovation and progress. Bring them on; we need them. I think RH / CentOS strike a reasonable, measured balance: more or less freeze a set of function / capability for each major release, and only release security patches and changes that are necessary to keep things working across the www. Then with each major release do a jump. Yes, this is disruptive, and yes, it means learning new ways of doing old things, but hopefully those making these decisions at RH have the knowledge and perspective to make the necessary calls and evaluations. If you think you can do better, go get a job at RH.
On Tue, Jan 6, 2015 at 5:07 PM, Warren Young wyml@etr-usa.com wrote:
There are more JavaScript interpreters in the world than Dalvik, ART,[2] and Java ® VMs combined. Perhaps we should rewrite everything in JavaScript instead?
I'm counting the running/useful instances of actual program code,
I rather doubt you’ve done anything like actual research on this.
Not really, but start with the number of running android devices - and I think it is reasonable to assume that they are running something, checking gmail, etc. It's safe enough to say that's a big number.
If you’re trying to say that there’s more CPU-hours of JVM use on *ix than JS in browsers, that sort of elitist argument has been repeatedly defeated. Big Iron vs minicomputers, Unix workstations vs the PC, “real” Unix vs Linux, the PC vs the smartphone…
No, I'm saying it pretty much owns the phone/tablet space and the examples of elasticsearch (and other lucene-based stuff), jenkins, etc. show it scales to the other end as well.
Time and time again, the forces driving the end-user market end up taking over the hrumph hrumph serious computing market. This has already happened with JS vs Java in the “3 billion” space Oracle wants to claim, and I see no reason why it can’t eventually steamroll the *ix world, too, via node.js.
node.js seems like the worst of all worlds - a language designed _not_ to speak to the OS directly with an OS library glued in. But it is a good example of how programmers think - if one says you shouldn't do something, it pretty much guarantees that someone else will.
I think the real difference we have on this point is that I’m not actually serious when I propose that the whole world rewrite all software in My Favorite Language just to make my job easier.
I'd settle for just not changing the languages in ways that break already written software. But even that seems too much to expect.
JavaScript is on the rise mostly because the interpreters upgrade transparently and the hassle is somewhat hidden.
No, JavaScript is on the rise because the web is on the rise, and it managed to secure the spot as the web’s scripting language through an accident of history. That is all.
I think you are underestimating the way the working interpreters get to the users. And the way the code libraries mask the differences. If users were exposed to version incompatibilities the popularity would vanish. In fact, being able to actively detect and work around incompatibilities is a large part of the reason it is used at all.
I thought CPAN was approaching that long ago, or at least getting to the point where the new code you have to write to do about anything would take about half a page of calls to existing modules.
Ah yes, the ever-near future where everyone will plug a few Lego-like software blocks together and thereby construct any useful program they wish, while lacking any clue about what’s going on. That future’s been 5 years away for the past 5 decades.
Not in the sense that there's nothing new to invent, but that every sysadmin in every organization and every home user should not need to invent new processes to manage their machines.
It’s as likely today as ever — which is to say, “not” — and it isn’t the Fedora development community’s fault that we have not yet achieved this nirvana.
Really? How have they made it any easier to manage your 2nd machine than your first?
Software is “pure thought-stuff,” to quote Fred Brooks. We have very little constraint on the scope and range of things we can create in software. Any attempt to package software up into a usefully-small set of precisely-characterized little black boxes is a Quixotic dream.
It is also purely reproducible. Do something right once and no one should ever have to waste that thought again.
Why even bother with ksh or Bash extensions, for that matter? The original Bourne shell achieved Turing-completeness in 1977. There is literally nothing we can ask a computer to do that we cannot cause to happen via a shell script. (Except run fast.)
Well, Bourne didn't deal with sockets.
BSD Sockets didn’t exist in 1977.
Precisely. Once there was close to a 1-1 mapping of shell and external commands to system calls. Now there isn't.
In any case, you’re willfully ignoring the nature of *ix type shell script programming. A shell script’s “function library” is the set of all programs in the PATH. Drop nc/ncat/netcat/socat in the PATH, and suddenly your shell script knows how to talk to BSD sockets. It’s really no different than installing libnsl on a Solaris box.
I wasn't ignoring it. I just was avoiding another rant about how those have not been maintained in a backwards compatible way either. Even though shell syntax is stable, the programs are usually just glue around an assortment of tools that no longer match section 1 of the unix manuals.
No nc-alike here? Okay, echo the machine code to disk for an ELF binary that makes the necessary syscalls, using octal escapes. No, don’t cheat and echo out a C or assembly language program, go straight to the metal, you softie.
I might have done that for 8-bit z80 code, but I've since learned that it is rarely worth getting that intimate with the hardware.
You can’t just wish it away by telling everyone to switch to Java, or JavaScript, or whatever else. All you’re going to do is create another “standard” in the XKCD sense:
https://xkcd.com/927/
Well no, at this point you can't say that java is yet another new thing - probably not even groovy since that's mix/match with java. Maybe scala.
Perl 6 was an attempt to achieve this in one big jump. Perl 5 is in the process of slowly accreting some of Perl 6’s features. Eventually, I think the jump to Perl 6 won’t look so big, and Perl 6 will finally get off the ground. Perhaps even soon.
I think everyone involved with perl is sane enough to know that perl 6 is not a replacement for perl 5. It's something different. I just hope the Fedora people get it.
holding off on python and ruby until they have a similar history of not breaking your existing work with incompatible updates.
Those counters both got reset relatively recently: the Python 3 (née 3000) and Ruby 1.9 transitions were fairly traumatic.
Well, I've avoided a couple of traumas then. But those transitions don't seem to have completed. And they probably won't until all distros ship multiple versions so programs can co-exist long enough to fix the broken parts.
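For what it's worth, here is a minimal sketch of what "co-existing" code looked like during that transition; the __future__ imports and the version check are real, the rest is purely illustrative:

    # One file that runs unchanged on Python 2 and Python 3 while both
    # interpreters ship in the distro.
    from __future__ import print_function, division
    import sys

    if sys.version_info[0] >= 3:
        text_type = str
    else:
        text_type = unicode  # only evaluated on Python 2

    def describe(value):
        if isinstance(value, text_type):
            return "text: " + value
        return "half: " + str(value / 2)  # true division on both versions

    print(describe(u"hello"))
    print(describe(7))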
Is that all this is? Trying to get someone else riled up enough that they’ll fork EL6 for you?
No - mostly hoping someone would point out something I had overlooked that makes the transition easy. I thought the computers were supposed to work for us instead of the other way around.
On Jan 6, 2015, at 5:07 PM, Les Mikesell lesmikesell@gmail.com wrote:
On Tue, Jan 6, 2015 at 5:07 PM, Warren Young wyml@etr-usa.com wrote:
There are more JavaScript interpreters in the world than Dalvik, ART,[2] and Java ® VMs combined. Perhaps we should rewrite everything in JavaScript instead?
I'm counting the running/useful instances of actual program code,
I rather doubt you’ve done anything like actual research on this.
Not really, but start with the number of running android devices - and I think it is reasonable to assume that they are running something, checking gmail, etc. It's safe enough to say that's a big number.
Sigh…
I said I didn’t want to get into Oracle’s bogus 3 billion number, or how it’s exceeded by actual JS deployments. Just take it as read that I’ve thought through it, and I’m still sure JS has more deployments.
Then apply my Tiobe argument to JS instead of Java: even if JS is significantly bigger than Java, it *still* doesn’t prove anything because it’s still a minority player, so there will continue to be a pressure to use something incompatible with it.
There. Is. No. Nirvana.
it pretty much owns the phone/tablet space and the examples of elasticsearch (and other lucene-based stuff), jenkins, etc. show it scales to the other end as well.
:shakes head in disbelief:
What continuum is a smartphone the other end of, exactly? The one in my pocket has multiple general-purpose GHz-class CPU cores, a few specialized coprocessors, several hundred megs of RAM, dozens of gigs of fast local storage, and several high-tech radios. Its raw processing power is on the order of 100 GFLOPS.
This is the low end of…what…the Top 500 List from 1998?
…except that my phone achieves that parity on a few watts, and doesn’t require a staff of acolytes to tend to its needs. This is a device that would make Captain Kirk jealous, but it’s just one of a billion. Booooring.
We are *so* spoiled.
You want to talk about stupid-high deployment numbers? Let’s talk embedded. 18.1 billion eight-bitters shipped *last year*.
The PIC10F200 is somewhere near what *I* call the low end: 16 bytes of RAM and a bit under 256 usable words of program memory.
http://www.microchip.com/wwwproducts/Devices.aspx?dDocName=en019863
It’s definitely not the actual bottom, though. They still make 4-bitters.
node.js seems like the worst of all worlds - a language designed _not_ to speak to the OS directly with an OS library glued in.
JavaScript’s a better language than a lot of other scripting languages that have found widespread use.
(Please, don’t ask. I *still* don’t want to start an advocacy flamewar.)
The economic drive behind it is making it uncommonly fast for a dynamic scripting language. And if the resource hits of dynamic programming are a problem, why were we just talking about Perl? :)
But it is a good example of how programmers think - if one says you shouldn't do something, it pretty much guarantees that someone else will.
No, the thing you should not do is Jaxer:
http://en.wikipedia.org/wiki/Aptana#Aptana_Jaxer
node.js is a vastly better plan.
it isn’t the Fedora development community’s fault that we have not yet achieved this nirvana.
Really? How have they made it any easier to manage your 2nd machine than your first?
I didn’t ask them to solve that problem. My company already solved it adequately, years ago, in a massive shell script. Thus my thoughtful treatment of the problem of the gap between shell and C; it was on my mind prior to this exchange.
I’m kind of surprised to find that Puppet and Chef are not in EL7, though. I don’t really need either, having solved my particular automatic deployment problem already, but I thought they achieved sufficient popularity before the EL7 feature freeze.
Oh, and in case you were wondering, yes, that massive shell script did require some adjustment to move it from EL5 to EL7. It took me a day or two, during which time I was also working on portability differences in the rest of the software. A few days of work to get back on track for the next 7 years; not too shabby.
Look, I doubt I’m any happier about how they moved my cheese than you are. I’ve just chosen to go hunt down the cheese and hack a few blocks off, instead of yelling about how they shouldn’t have moved it.
Software is “pure thought-stuff,” to quote Fred Brooks. We have very little constraint on the scope and range of things we can create in software. Any attempt to package software up into a usefully-small set of precisely-characterized little black boxes is a Quixotic dream.
It is also purely reproducible. Do something right once and no one should ever have to waste that thought again.
Arguably Smalltalk was GUIs and OO done right. Go spin up a Squeak VM and see if you still feel at home in that world:
Not only does technology change, so does our perception of what “good technology” looks like.
I suspect most of those using Squeak are using it for the same reason that I continue to pour a lot of time into Apple ][ emulators: nostalgia, not utility.
Why even bother with ksh or Bash extensions, for that matter? The original Bourne shell achieved Turing-completeness in 1977. There is literally nothing we can ask a computer to do that we cannot cause to happen via a shell script. (Except run fast.)
Well, Bourne didn't deal with sockets.
BSD Sockets didn’t exist in 1977.
Precisely. Once there was close to a 1-1 mapping of shell and external commands to system calls. Now there isn’t.
What’s your solution to that? Never invent sockets, because Saint Thompson didn’t think of it first?
I can already predict that you didn’t like the other plan — Plan 9, that is — where they redesigned Unix to include sockets, ioctls and more at the filesystem level:
Plan 9 broke compatibility, after all.
Bash fakes /dev/tcp (http://goo.gl/bbRysJ) but Bash isn’t everywhere, and you’re likely to run into problems if you try to share that socket with another program, since it isn’t a real /dev node.
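For contrast, here is a minimal sketch of what a scripting language with native sockets buys you; it uses Python's standard socket module, and the host and port are just placeholders:

    # A plain TCP client: the thing a Bourne-style script has to delegate
    # to nc/socat or a faked /dev/tcp.
    import socket

    def fetch_banner(host, port, timeout=5.0):
        """Connect, read whatever the server says first, and return it."""
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(1024)

    if __name__ == "__main__":
        print(fetch_banner("localhost", 22))  # e.g. an SSH banner, if sshd is up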
No nc-alike here? Okay, echo the machine code to disk for an ELF binary that makes the necessary syscalls, using octal escapes. No, don’t cheat and echo out a C or assembly language program, go straight to the metal, you softie.
I might have done that for 8-bit z80 code, but I've since learned that it is rarely worth getting that intimate with the hardware.
That was a reductio ad absurdum argument. The point is that if backwards compatibility is the be-all and end-all of software design, we had everything we needed decades ago, so why not just fire up a 4.3BSD VM, and do everything that way?
You can’t just wish it away by telling everyone to switch to Java, or JavaScript, or whatever else. All you’re going to do is create another “standard” in the XKCD sense:
Well no, at this point you can't say that java is yet another new thing
What’s it really accomplished? It’s still just a big fish in a pond full of many other fishes, smaller than some, and not all that much bigger than a lot of others.
That sounds like XKCD’s “soon” scenario to me.
On Tue, Jan 06, 2015 at 06:37:42PM -0700, Warren Young wrote:
Noise removed.
Quick question, if I may? What does this have to do with CentOS?
John -- Spring is nature's way of saying, "Let's party!"
-- Robin Williams (1952-), American actor and comedian
On Jan 6, 2015, at 6:49 PM, John R. Dennison jrd@gerdesas.com wrote:
On Tue, Jan 06, 2015 at 06:37:42PM -0700, Warren Young wrote:
Noise removed.
Quick question, if I may? What does this have to do with CentOS?
Some people are annoyed that CentOS keeps changing on them, and keep going to greater and greater lengths to try and argue that CentOS should not change.
I am explaining to them why this is not a productive view.
On Tue, Jan 6, 2015 at 7:52 PM, Warren Young wyml@etr-usa.com wrote:
On Jan 6, 2015, at 6:49 PM, John R. Dennison jrd@gerdesas.com wrote:
On Tue, Jan 06, 2015 at 06:37:42PM -0700, Warren Young wrote:
Noise removed.
Quick question, if I may? What does this have to do with CentOS?
Some people are annoyed that CentOS keeps changing on them, and keep going to greater and greater lengths to try and argue that CentOS should not change.
No one has said it should not change. Just that it breaks all users' existing work when it changes in non-backwards-compatible ways.
I am explaining to them why this is not a productive view.
The non-productive part is that every user who has ever built anything of non-stable components has to deal with the problem on an individual basis. Is there any centralized approach to converting something that worked on CentOS6 to run on CentOS7? Does the program that is supposed to try to automatically upgrade versions have any tricks hidden away to fix things so they work after the upgrade, and could any of them be run separately?
On Tue, 2015-01-06 at 20:19 -0600, Les Mikesell wrote:
Is there any centralized approach to converting something that worked on CentOS6 to run on CentOS7? Does the program that is supposed to try to automatically upgrade versions have any tricks hidden away to fix things so they work after the upgrade, and could any of them be run separately?
Brilliant task to assign to Warren Young. That will keep him away from his disruptive "improvements" philosophy.
On Jan 6, 2015, at 7:40 PM, Always Learning centos@u62.u22.net wrote:
On Tue, 2015-01-06 at 20:19 -0600, Les Mikesell wrote:
Is there any centralized approach to converting something that worked on CentOS6 to run on CentOS7?
Brilliant task to assign to Warren Young.
You’re awfully free with the disposition of my time. Why is it that you feel you have a claim on it?
In any case, I thought it was you and Les who were annoyed about the current state of things, not me. It’s your itch. Why are you demanding that someone else scratch it for you, for free?
That will keep him away from his disruptive "improvements" philosophy.
I only have one package in the CentOS package repo, MySQL++. Here is its documentation detailing the incompatible changes made over the decade I’ve been maintaining it:
http://tangentsoft.net/mysql++/doc/html/userman/breakages.html
Some questions:
0. Which one of these “disruptive improvements" broke something for you?
1. You wanted documentation detailing changes made and ways to move forward. Since this is the only piece of CentOS that I control — and that only loosely, since the RPMs are maintained by someone else — what more did you want from this document?
2. Are you aware of a MySQL++ breakage that isn’t documented here?
3. How many of those breakages do you view as capricious?
4. How many of the changes were done in a backward-incompatible way, not counting those done at major version transitions?
Yes, I’m aware that the answer to #4 is not “zero,” to my chagrin. Some of those regressions were reverted shortly after the breakage was noticed by someone, while others were allowed to slide, as they didn’t seem to affect any existing code, based on mailing list traffic.
We now run an ABI compatibility checker as part of the release process, to prevent this from happening again.
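The checker itself isn't named here; as a crude illustration of the idea only (far weaker than a real ABI checker), even diffing the symbols exported by two builds of a shared library catches the grossest removals. The library paths below are hypothetical:

    # Sketch: list dynamic symbols with binutils' nm and report anything
    # the new build no longer exports. No type or layout checks, just names.
    import subprocess

    def exported_symbols(lib_path):
        out = subprocess.check_output(
            ["nm", "-D", "--defined-only", lib_path], text=True
        )
        return {line.split()[-1] for line in out.splitlines() if line.strip()}

    def removed_symbols(old_lib, new_lib):
        return sorted(exported_symbols(old_lib) - exported_symbols(new_lib))

    for sym in removed_symbols("old/libmysqlpp.so", "new/libmysqlpp.so"):
        print("removed:", sym)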
Does any of this match up with your preconception of me as a developer? I’m pretty sure you slagged off on me without knowing about any of this.
On Tue, Jan 06, 2015 at 06:52:48PM -0700, Warren Young wrote:
Some people are annoyed that CentOS keeps changing on them, and keep going to greater and greater lengths to try and argue that CentOS should not change.
I am explaining to them why this is not a productive view.
It's not relevant in _any_ sense. CentOS is nothing more than (at its core) a rebuild of RHEL. This type of nonsense should be directed to Red Hat in a Red Hat venue. It's nothing but off-topic noise here as CentOS will not deviate from its upstream in its core offerings.
John
On Tue, Jan 06, 2015 at 08:45:29PM -0600, John R. Dennison wrote:
It's not relevant in _any_ sense. CentOS is nothing more than (at its core) a rebuild of RHEL. This type of nonsense should be directed to Red Hat in a Red Hat venue. It's nothing but off-topic noise here as CentOS will not deviate from its upstream in its core offerings.
Agreed.
If you want to participate in how the upstream OS is being shaped, I suggest looking at the Fedora Project, which is driven by volunteers.
If you notice the Subject: of this thread, it is "Design changes are done in Fedora". Pretty clear message.
On Jan 6, 2015, at 7:45 PM, John R. Dennison jrd@gerdesas.com wrote:
On Tue, Jan 06, 2015 at 06:52:48PM -0700, Warren Young wrote:
I am explaining to them why this is not a productive view.
It's not relevant in _any_ sense. CentOS is nothing more than (at it's core) a rebuild of RHEL. This type of nonsense should be directed to Red Hat in a Red Hat venue.
I’m answering here because the messages keep coming here, and there doesn’t seem to be anyone else here willing to take up the other side of the argument.
You are, however, correct, that yelling about the problems here is unlikely to change anything. I’ve made that point at least twice in prior messages. Yet, the replies keep coming.
If we can’t get people to go scratch their own itches, maybe we can at least get them to stop complaining about things that cannot be changed at this level.
John R. Dennison писал 2015-01-07 04:49:
Quick question, if I may? What does this have to do with CentOS?
I for one read this thread with interest. Let it be. And IMHO the topics are relevant for anybody professionally involved with computers.
On Jan 2, 2015, at 4:52 PM, Warren Young wyml@etr-usa.com wrote:
I’m not interested in the reverse case, where an old server could not take over from a newer one, because there’s no good reason to manage the upgrade that way. You drop the new one in as a backup, take the old one offline, upgrade it, and start it back up, whereupon it can take over as primary again.
Most professional upgrades require a written and tested roll-back plan — and a published set of criteria (usually down-time) where the roll-back out of the failed “upgrade” MUST occur.
Just sayin’… that’s a professional upgrade. What you described left out a lot of steps.
An upgrade where the admin is “not interested” in going backward, is not what most of our employers will allow these days.
Most of the ways to manage that on most OSes are truly awful if you try to do it without virtual machines or entire filesystem snapshots. The package managers sure aren't written for it. It's virtually impossible to find package management systems that can handle a properly done professional upgrade/test/revert cycle on any OS… (well, not without buying them, anyway… and they're still slim pickin's on *nix.)
The other area where most OS package managers suck badly is creating IDENTICAL systems. It takes a LOT of sysadmin work and coddling of the package management tools currently en vogue to make them do what businesses really want:
e.g. If the QA machine has these exact 1000 packages installed on it, by god, that’s what I want on the Pre-Production server, and later on the Production server when they eventually get updates. I don’t want “yum update” grabbing whatever’s current, on those three different dates.
Most of us are leveraging things like Ansible and Puppet to force such things…
Things a good package management system would know how to do… since every business I’ve ever worked at needs both the ability to roll back, and the ability to make identical systems/package sets.
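As a rough sketch of the "identical package sets" idea (the rpm and yum invocations are real; the glue around them is only illustrative): freeze the QA machine's exact package list, then replay it on the next box instead of letting "yum update" grab whatever is current that day.

    # Capture the QA machine's exact package set...
    import subprocess

    def capture_package_set(path="qa-packages.txt"):
        out = subprocess.check_output(
            ["rpm", "-qa", "--qf", "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n"]
        )
        with open(path, "wb") as f:
            f.write(out)

    # ...and install precisely that set on the target host (run there with
    # sufficient privileges).
    def replay_package_set(path="qa-packages.txt"):
        with open(path) as f:
            packages = [line.strip() for line in f if line.strip()]
        subprocess.check_call(["yum", "-y", "install"] + packages)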
Ah well, rant off…
— Nate
On 12/29/2014 09:04 PM, Warren Young wrote:
On Dec 29, 2014, at 4:03 PM, Les Mikesell lesmikesell@gmail.com wrote:
On Mon, Dec 29, 2014 at 3:03 PM, Warren Young wyml@etr-usa.com wrote:
the world where you design, build, and deploy The System is disappearing fast.
Sure, if you don't care if you lose data, you can skip those steps.
How did you jump from incremental feature roll-outs to data loss? There is no necessary connection there.
In fact, I’d say you have a bigger risk of data loss when moving between two systems released years apart than two systems released a month apart. That’s a huge software market in its own right: legacy data conversion.
If your software is DBMS-backed and a new feature changes the schema, you can use one of the many available systems for managing schema versions. Or, roll your own; it isn’t hard.
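A roll-your-own schema versioner really is small; here is a minimal sketch (sqlite3 is used for brevity, and a real system would add transactions, locking and down-migrations):

    # Keep a schema_version table, apply numbered migrations in order,
    # and record each one as it is applied.
    import sqlite3

    MIGRATIONS = {
        1: "CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)",
        2: "ALTER TABLE accounts ADD COLUMN balance REAL DEFAULT 0",
    }

    def migrate(conn):
        conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
        current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
        for version in sorted(v for v in MIGRATIONS if v > current):
            conn.execute(MIGRATIONS[version])
            conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
            conn.commit()

    migrate(sqlite3.connect("app.db"))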
You test before rolling something to production, and you run backups so that if all else fails, you can roll back to the prior version.
None of this is revolutionary. It’s just what you do, every day.
when it breaks it's not the developer answering the phones if anyone answers at all.
Tech support calls shouldn’t go straight to the developers under any development model, short of sole proprietorship, and not even then, if you can get away with it. There needs to be at least one layer of buffering in there: train up the secretary to some basic level of cluefulness, do everything via email, or even hire some dedicated support staff.
It simply costs too much to break a developer out of flow to allow a customer to ring a bell on a developer’s desk at will.
The world is moving toward incrementalism, where the first version of The System is the smallest thing that can possibly do anyone any good. That is deployed ASAP, and is then built up incrementally over years.
That works if it was designed for rolling updates. Most stuff isn’t,
Since we’re contrasting with waterfall development processes that may last many years, but not decades, I’d say the error has already been made if you’re still working with a waterfall-based methodology today.
The first strong cases for agile development processes were first made about 15 years ago, so anything started 7 years ago (to use the OP’s example) was already disregarding a shift a full software generation old.
some stuff can't be.
Very little software must be developed in waterfall fashion.
Avionics systems and nuclear power plant control systems, for example. Such systems make up a tiny fraction of all software produced.
A lot of commercial direct-to-consumer software also cannot be delivered incrementally, but only because the alternative messes with the upgrade treadmill business model.
Last time I checked, this sort of software only accounted for about ~5% of all software produced, and that fraction is likely dropping, with the moves toward cloud services, open source software, subscription software, and subsidized software.
The vast majority of software developed is in-house stuff, where the developers and the users *can* enter into an agile delivery cycle.
Where did you get the 5% from according to google there are
"over 200 billion lines of existing COBOL code, much of it running mission-critical 24/7 applications, it is simply too costly (in the short run) for many organizations to convert."
And what about Fortran, RPG etc.
Also how big is the outfit you work for? Sounds like you have no shortage of help, a lot of place don't have unlimited resources like you seem to have.
Instead of trying to go from 0 to 100 over the course of ~7 years, you deliver new functionality to production every 1-4 weeks, achieving 100% of the desired feature set over the course of years.
If you are, say, adding up dollars, how many times do you want that functionality to change?
I’m not sure what you’re asking.
If you’re talking about a custom accounting system, the GAAP rules change several times a year in the US:
http://www.fasb.org/jsp/FASB/Page/SectionPage&cid=1176156316498
The last formal standard put out by FASB was 2009, and they’re working on another version all the time. Chances are good that if you start a new 7-year project, a new standard will be out before you finish.
If instead you’re talking about the cumulative cost of incremental change, it shouldn’t be much different than the cost of a single big-bang change covering the same period.
In fact, I’d bet the incremental changes are easier to adopt, since each change can be learned piecemeal. A lot of what people are crying about with EL7 comes down to the fact that Red Hat is basically doing waterfall development: many years of cumulative change gets dumped on our HDDs in one big lump.
Compare a rolling release model like that of Cygwin or Ubuntu (not LTS). Something might break every few months, which sounds bad until you consider that the alternative is for *everything* to break at the same time, every 3-7 years.
I’m not arguing for CentOS/RHEL to turn into Ubuntu Desktop. I’m just saying that there is a cost for stability: every 3-7 years, you must hack your way through a big-bang change bolus.
(6-7 years being for those organizations that skip every other major release by taking advantage of the way the EL versions overlap. EL5 was still sunsetting as EL7 was rising.)
This isn’t pie-in-the-sky theoretical BS. This is the way I’ve been developing software for decades, as have a great many others. Waterfall is dead, hallelujah!
How many people do you have answering the phone about the wild and crazy changes you are introducing weekly?
The burden of tech support has more to do with proper QA and roll-out strategies than with the frequency of updates.
For the most part, we roll new code to a site in response to a support call, rather than field calls in response to an update. The new version solves their problem, and we don’t hear back from them for months or years.
We don’t update all sites to every new release. We merely ship *a* new release every 1-4 weeks, which goes out to whoever needs the new features and fixes. It’s also what goes out on each new server we ship.
How much does it cost to train them?
Most of our sites get only one training session, shortly after the new system is first set up.
We rarely get asked to do any follow-up training. The users typically pick up on the incremental feature updates as they happen, without any additional help from us. We attribute that to solid UX design.
That first session is mostly about giving the new users an idea of what the system can do. We teach them enough to teach themselves.
How often do most people get trained to use a word processor? I’ll bet a lot of people got trained just once, in grade school. They just cope with changes as they come.
The worst changes are when you skip many versions. Word 97 to Word 2007, for example. *shudder*
I don’t mean that glibly. I mean you have made a fundamental mistake if your system breaks badly enough due to an OS change that you can’t fix it within an iteration or two of your normal development process. The most likely mistake is staffing your team entirely with people who have never been through a platform shift before.
Please quantify that. How much should a business expect to spend per person to re-train their operations staff to keep their systems working across a required OS update? Not to add functionality. To keep something that was working running the way it was?
If you hire competent people, you pay zero extra to do this, because this is the job they have been hired to do.
That's pretty much what IT/custom development is: coping with churn.
Most everything you do on a daily basis is a reaction to some change external to the IT/development organization:
Capacity increases
Obsolete ‘ware upgrades
New seat/site deployments
Failed equipment replacements
Compatibility breakage repair (superseded de facto standard, old de jure standard replaced, old proprietary item no longer available…)
Tracking business rule change (GAAP, regulations, mergers…)
Effecting business change (entering new markets, automation, solving new problems developing from new situations…)
Tracking business strategy change (new CEO, market shift…)
Setting aside retail software development, IT and internal development organizations *should* be chasing this kind of thing, not being “proactive.” We’re not trying to surprise our users with things they didn’t even ask for, we’re trying to solve their problems.
Maybe we solve problems in a *manner* our users did not expect — hopefully a better way — but we’re rarely trying to innovate, as such.
how much developer time would you expect to spend to follow the changes and perhaps eventually make something work better?
Pretty much 100%, after subtracting overhead. (Meetings, email, breaks, reading…)
Again: This is what we do. Some new thing happens in the world, and we go out and solve the resulting problems.
The only question is one of velocity: the more staff you add, the faster you go. So, how fast do you want to go?
(Yes, I’ve read “The Mythical Man Month.” The truths within that fine book don’t change the fact that Microsoft can develop a new OS faster than I can all by my lonesome.)
The software system I’ve been working on for the past 2 decades has been through several of these platform changes.
How many customers for your service did you keep running non-stop across those transitions?
Most of our customers are K-12 schools, so we’re not talking about a 24/7 system to begin with. K-12 runs maybe 9 hours a day (7am - 4pm), 5 days a week, 9 months out of the year. That gives us many upgrade windows.
We rarely change out hardware or the OS at a particular site. We generally run it until it falls over, dead.
This means we’re still building binaries for EL3.
This also means our software must *remain* broadly portable. When we talk about porting to EL7, we don’t mean that it stops working on EL6 and earlier. We might have some graceful feature degradation where the older OS simply can’t do something the newer one can, but we don’t just chop off an old OS because a new one came out.
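The poster's product isn't shown here, but "graceful feature degradation" usually comes down to probing for a capability at runtime and falling back; a minimal sketch, where os.sendfile stands in for the newer-platform capability and the other names are illustrative:

    # Prefer the kernel's zero-copy sendfile where the Python/OS stack
    # offers it; fall back to a plain read/write loop on older systems.
    import os

    HAVE_SENDFILE = hasattr(os, "sendfile")  # absent on older platforms

    def copy_stream(dst_fd, src_fd, length, offset=0):
        if HAVE_SENDFILE:
            return os.sendfile(dst_fd, src_fd, offset, length)
        os.lseek(src_fd, offset, os.SEEK_SET)
        sent = 0
        while sent < length:
            chunk = os.read(src_fd, min(65536, length - sent))
            if not chunk:
                break
            os.write(dst_fd, chunk)
            sent += len(chunk)
        return sent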
All that having been said, we do occasionally roll a change to a site, live. We can usually do it in such a way that the site users never even notice the change, except for the changed behavior.
This is not remarkable. It’s one of the benefits you get from modern centralized software development and deployment stacks.
Everyone’s moaning about systemd...at least it’s looking to be a real de facto standard going forward.
What you expect to pay to re-train operations staff -just- for this change, -just- to keep things working the same..
You ask that as if you think you have a no-cost option in the question of how to address the churn.
Your only choices are:
Don’t upgrade
Upgrade and cope
Switch to something else
Each path carries a cost.
You think path 1 is free? If you skip EL7, you’re just batching up the changes. You’ll pay eventually, when you finally adopt a new platform. One change set plus one change set equals about 1.9 change sets, plus compound penalties.
Penalties? Yes.
You know the old joke about how you eat an elephant? [*] By the time you eat 1.9 elephants, you’ve probably built up another ~0.3 change sets worth of new problems. Time you spend grinding through nearly two full change sets is time you don’t spend keeping your current backlog short.
We call this technical debt in the software development world. It’s fine to take out a bit of technical debt occasionally, as long as you don’t let it build up too long. The longer you let it build, the more the interest & penalties accrue, so the harder it is to pay down.
We've got lots of stuff that will drop into Windows server versions spanning well over a 10 year range.
Yes, well, Linux has always had a problem with ABI stability. Apparently the industry doesn’t really care about this, evidenced by the fizzling of LSB, and the current attacks on the work at freedesktop.org. Apparently we’d all rather be fractious than learn to get along well enough that we can nail down some real standards.
Once again, though, there’s a fine distinction between stable and moribund.
And operators that don't have a lot of special training on the differences between them.
I’ve never done much with Windows Server, but my sense is that they have plenty of churn over in their world, too. We’ve got SELinux and SystemD, they’ve got UAC, SxS DLLs, API deprecation, and tools that shuffle positions on every release. (Where did they move the IPv4 configuration dialog this time?!)
We get worked up here about things like the loss of 32-bit support, but over in MS land, they get API-of-the-year. JET, ODBC, OLE DB, or ADO? Win32, .NET desktop, Silverlight, or Metro? GDI, WinG, DirectX, Windows Forms or XAML? On and on, and that’s just if you stay within the MSDN walls.
Could it be that software for these other platforms *also* manages to ride through major breaking changes?
Were you paying attention when Microsoft wanted to make XP obsolete? There is a lot of it still running.
Were you paying attention when Target’s XP-based POS terminals all got pwned?
Stability and compatibility are not universal goods.
What enterprise can afford to rewrite all of its software every ten years?
Straw man.
Not really. Ask the IRS what platform they use. And estimate what it is going to cost us when they change.
Monopolies are inherently inefficient and plodding. Government is special only because it is the biggest monopoly.
(That’s why we have antitrust law: not because it’s good for the consumer, but because it fights the trend toward zaibatsu rule.)
Few organizations are working under such stringent constraints, if only because it’s a danger to the health of the organization. Only monopolies can get away with it.
(The long dragging life of XP is an exception. Don’t expect it to occur ever again.)
No, that is the way things work. And the reason Microsoft is in business.
Microsoft stopped retail sale of Windows 7 a few months ago, and Vista back in April.
A few months ago, there was a big stink when MS killed off Windows 8.0 updates, requiring that everyone upgrade to 8.1.
Yes, I know about downgrade rights for pro versions of Windows.
Nevertheless, the writing is on the wall.
while your resources aren’t as extensive as Google’s, your problem isn’t nearly as big as Google’s, either.
So again, quantify that. How much should it cost a business _just_ to keep working the same way?
Google already did that cost/benefit calculation: they tried staying on RH 7.1 indefinitely, and thereby built up 10 years of technical debt. Then when they did jump, it was a major undertaking, though one they apparently felt was worth doing.
There’s a cost to staying put, too.
And why do you think it is a good thing for this to be a hard problem or for every individual user to be forced to solve it himself?
I never said it was a good thing. I’m just reporting some observations from the field.
—————
[*] One bite at a time.
On Jan 1, 2015, at 9:52 AM, Steve Clark sclark@netwolves.com wrote:
On 12/29/2014 09:04 PM, Warren Young wrote:
The vast majority of software developed is in-house stuff, where the developers and the users *can* enter into an agile delivery cycle.
Where did you get the 5% from
An industry publication, probably 10-15 years ago. (You know, back when they printed these things on paper.) I haven’t bothered looking into it again since.
If you want a fuzzy but more current picture of the state of things, go to a technology job board, and narrow the search to an area of the country that isn’t Silicon Valley, New York City, Boston, or Austin. (i.e. Not one of the hotbeds of commercial packaged software development.) Ideally, somewhere you don’t live now, never have lived in, never have visited, and have no desire to visit/live in. Anchorage, Buffalo, Cleveland...
Now count how many jobs are for positions on a team creating packaged commercial software, then make a ratio of that vs the total job count. Then discount for the fact that internal software development has a lot less turnover than all these flash-in-the-pan App Store darlings.
Don’t mistake a job for a NASDAQ Top 100 company for one developing commercial packaged software. Apple, IBM, and Microsoft need internal software, too.
I think you’ll find a lot of companies you’ve never heard of. Obscurity doesn’t remove these companies’ need for custom software.
One of the biggest movements in software development in recent years (Agile/XP) got started as a result of work done for *Chrysler.*
according to google there are
"over 200 billion lines of existing COBOL code, much of it running mission-critical 24/7 applications, it is simply too costly (in the short run) for many organizations to convert."
And what about Fortran, RPG etc.
Yeah, that’s all going to be in-house stuff.
The fact that this software is irreplaceable doesn’t affect my point at all. If this software is neither perfect nor dying, there is an in-house staff working on its maintenance and enhancement.
The developers could instead be consultants, but that’s a question of outsourcing. There is still a tight coupling between user requests and what gets developed.
My point is that that loop is much larger and looser with packaged commercial software, to the point that user requests are nearly disconnected from visible effects.
I don’t know about you, but my success rate at convincing commercial packaged software producers to add my pet features and fixes is probably under 5%, and it often takes 12-24 months to get a change into a released version. At least with open source, I have the *option* of developing the fix/feature myself.
With in-house software, you should be getting nearly 100% of your requests implemented, within a reasonable schedule given the staffing provided.
(If you put 1,000 “top priority” features on a wish list and give it to a single developer, it will necessarily take a long time to get a feature through the pipeline.)
Sounds like you have no shortage of help
Ahaha. No.