On Mon, Jan 5, 2015 at 9:22 PM, Warren Young <wyml@etr-usa.com> wrote:
> Docker will eat away at this problem going forward. You naturally will not already have Dockerized versions of apps built 10 years ago, and it may not be practical to create them now, but you can start insisting on getting them today so that your future OS changes don’t break things for you again.
Yes, it is just sad that isolating your work from the disruptive nature of the OS distribution is necessary. But it is clearly becoming so.
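In practice, something like this is presumably what that looks like - a minimal sketch, assuming the new host runs Docker and an old CentOS 5 base image can still be pulled; the /srv/legacy-app paths and the binary name are made up for illustration:

    # Run a legacy binary against a CentOS 5 userland, isolated from
    # whatever the host OS has moved on to (paths and image tag are
    # illustrative only):
    docker pull centos:5
    docker run --rm -v /srv/legacy-app:/srv/legacy-app centos:5 \
        /srv/legacy-app/bin/legacy-app --config /srv/legacy-app/etc/app.conf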
>> I built a CentOS 7 box to match my old CentOS 5 pair. It can do the same thing, but there is no way to make them actively cluster together so that the new one is aware of the outstanding leases at cutover, or to keep the ability to revert if the new one introduces problems.
> I believe that was ISC’s fault, not Red Hat’s.
Agreed - and some of the value of Red Hat shows in the fact that the breakage is not packaged into a mid-rev update.
> Perhaps your point is that Red Hat should have either a) continued to distribute an 8-year-old version of dhcpd with EL7 [1], or b) somehow given you the new features of ISC dhcpd 4.2.5 without breaking anything? If so, I take this point back up at the end.
Maybe document how you were supposed to deal with the situation, keeping your lease history intact and the ability to fail over during the transition. My point here is that there are people using RHEL who care about this sort of thing, but the system design is done in Fedora, where I'm convinced that no one actually manages any machines doing jobs that matter or cares what the change might break. That is, the RHEL/Fedora split divided the community that originally built RH into people who need to maintain working systems and people who just want change. And they let the ones who want change design the next release.
>> And those things span much longer than 10 years. If you are very young you might not understand that.
> My first look at the Internet was on a VT102. I refer not to a terminal emulator, but to something that crushes metatarsals if you drop it on your foot.
> I think I’ve got enough gray in my beard to hold my own in this conversation.
So, after you've spent at least 10 years rolling out machines to do things as fast as you can, and teaching the others in your organization to spell 'chkconfig' and use 'system ...' commands, wouldn't you rather continue to be productive and do more, instead of having to start over and redo the same set of things just to keep the old stuff working?
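For concreteness, the retraining in question is roughly this mapping (httpd is just a stand-in for any service):

    # EL5/EL6 habits:
    chkconfig httpd on        # start the service at boot
    service httpd restart     # restart it now
    # The EL7/systemd equivalents everyone now has to relearn:
    systemctl enable httpd
    systemctl restart httpd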
>> But if numbers impress you: if you count Android/Dalvik, which is close enough to Java to be the stuff of lawsuits, there are probably more running instances of Java programs than of anything else.
> There are more JavaScript interpreters in the world than Dalvik, ART,[2] and Java® VMs combined. Perhaps we should rewrite everything in JavaScript instead?
I'm counting the running/useful instances of actual program code, not the interpreters that might be able to run something. But JavaScript is on the rise mostly because the interpreters upgrade transparently and the hassle is somewhat hidden.
> If we consider only the *ix world, there are more Bourne-compatible shell script interpreters than Perl, Python, or Ruby interpreters. Why did anyone bother to create these other languages, and why do we spend time maintaining these environments and writing programs for them?
We spend time 'maintaining' because the OS underneath churns. Otherwise we would pretty quickly have all of the programs anyone needs completed. I thought CPAN was approaching that long ago, or at least getting to the point where the new code you have to write to do just about anything would take about half a page of calls to existing modules.
> Why even bother with ksh or Bash extensions, for that matter? The original Bourne shell achieved Turing-completeness in 1977. There is literally nothing we can ask a computer to do that we cannot cause to happen via a shell script. (Except run fast.)
Well, Bourne didn't deal with sockets. My opinion is that you'd be better off switching to perl at the first hint of needing arrays/sockets or any library modules that already exist in CPAN instead of extending a shell beyond the basic shell-ish stuff. But nobody asked me about that.
> If you think I’m wrong about that, you probably didn’t ever use sharchives.
Of course I used sharchives - and I value the fact that (unlike most other languages) things written in Bourne-compatible shell syntax at any time in the past will still run, and fairly portably - except perhaps for the plethora of external commands that were normally needed. Perl doesn't have quite as long a history but is just about as good at backwards compatibility. With the exception of interpolating @ in double-quoted strings starting around version 5, pretty much anything you ever wrote in perl will still work. I'm holding off on python and ruby until they have a similar history of not breaking your existing work with incompatible updates.
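That @ change is about the only gotcha I can think of. A tiny sketch of what broke (the address is just a placeholder, and the exact behavior of the first form - a warning, a compile error, or a silent empty interpolation - has varied by perl version):

    # Perl 4 habit - since Perl 5 the bare @ is taken as an array to
    # interpolate, so this warns/errors or quietly prints "les.com":
    perl -e 'print "mail me at les@example.com\n";'
    # The fix: escape the @ to keep the literal character:
    perl -e 'print "mail me at les\@example.com\n";'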
> If I *did* have to distribute libboost*.so, I’d just ship them in the same RPM that contains the binaries that reference those libraries.
> This is no different than what you often see on Windows or Mac: third-party DLLs in c:\Program Files that were installed alongside the .exe, or third-party .dylib files buried in foo.app/Contents/Frameworks or similar.
Yes, on Windows you can just pick a Boost version out of the several available, and Jenkins, the compiler, and the rest of the tools seem to do the right thing. On Linux it may be possible to do that, but you have to fight with everything that knows where the stock shared libraries and include files are supposed to be. While every developer almost certainly has his own way of maintaining multiple versions of things through development and testing, the distribution pretends that only one version of anything should ever be installed. Or, if multiple versions are installed, there must be a system-wide default for which one executes, set by symlinks pointing at obscure real locations. Never mind that Linux is multi-user and different users might want different versions. Or at least it was that way before 'software collections'.
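To be fair, you can win that fight by being explicit about every path - a rough sketch, where /opt/boost-1.55 stands in for wherever a hypothetical side-by-side Boost install lives:

    # Compile and link against a specific Boost instead of the distro copy,
    # and bake the lookup path into the binary so it keeps resolving the
    # same libraries at runtime (all paths here are illustrative):
    g++ -I/opt/boost-1.55/include -L/opt/boost-1.55/lib \
        -Wl,-rpath,/opt/boost-1.55/lib \
        -o myapp myapp.cpp -lboost_system -lboost_filesystem

Those same libboost*.so files can then ride along in the application's own RPM, as suggested above, instead of colliding with whatever the system default happens to be this release.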
> Or were you planning on demanding that EL5 be supported forever, with no changes, except for magical cost-free features?
No, what I wish is that every change would be vetted by people actively managing large sets of systems, with some documentation about how to handle the necessary conversions to keep things running. I don't believe anyone involved in Fedora and their wild and crazy changes actually has anything running that they care about maintaining or a staff of people to retrain as procedures change. There's no evidence that anyone has weighed the cost of backwards-incompatible changes against the potential benefits - or even knows what those costs are or how best to deal with them.