On Wed, April 19, 2017 16:22, Chris Murphy wrote:
Apple has had massively disruptive changes on OS X and iOS. Windows has had a fairly disruptive set of changes in Windows 10. About the only things that don't change are industrial OSes.
I have no idea how this reference applies to my earlier post. We do not use Apple or Windows servers and the desktop environment is stabilised at Win7pro. There will be no Windows 10 here ever. OS X / iOS employment is limited to personal devices, none of which are permitted on premises in any case.
When it comes to breaking user space, there are explicit rules against that in Linux kernel development. And internally consistent API/ABI stability is something you're getting in CentOS/RHEL kernels; it's one of the reasons the distributions exist. But the idea that Windows and OS X have better overall API stability I think is untrue, having spoken to a very wide assortment of developers who build primarily user space apps.
This may be true. It is likely important to software developers. It is also totally irrelevant to a business.
Businesses, other than software development houses and consultants, are software users. When a vendor massively rearranges things in their software, deprecates scripting syntax that has existed for years if not decades, and fundamentally changes the way the administration of an operating system is presented, it matters not a whit to a business that the internal kernel-level API remains unchanged. It is the accumulated administrative experience lost in consequence that concerns a business, given that replacing that loss will cost either directly in retraining or indirectly in error and resultant disruption, or both.
What does happen is that kernel ABI changes can break your driver, as there's no upstream promise of ABI compatibility within the kernel itself. The effect of this is very real on, say, Android, and might be one of the reasons for Google's Fuchsia project, which puts most of the drivers, including video drivers, into user space. And Microsoft also rarely changes things in their kernel, so again drivers tend not to break.
And this illustrates the point that I was attempting to make. A business owner assumes that whatever OS is used will deal with the various devices that make up its hardware environment; if it does not, then they seek an OS that does. However, vanishingly few firms in my experience (i.e. NONE) have ever had operational programming staff write or even modify a device driver. A business is in existence to make money for its owners, not to dick around with esoteric computer theory and practice.
Red Hat, again in my sole opinion, increasingly appears to me to be emulating another company notorious for shuffling the user interface to little evident purpose other than profit. That is good business for them. It is not good for us.
Bear in mind that we have been RedHat/Whitebox/CA-OS/CentOS users since 1998, so it is not like we are moving away from Linux with anything like enthusiasm. But this upgrade treadmill that has developed within RH is simply too costly for us to bear any longer. The idea that one has to rebuild from scratch entire host systems and then laboriously port over data and customised portions to a new host simply to upgrade the underlying OS is absolutely ludicrous. Consider the tremendous labour costs regularly incurred in accomplishing what amounts to maintaining the status quo.
We just upgraded a FreeBSD host from 10.3 to 11.0 in situ without problem, and with very little downtime (three reboots in the space of 30 minutes). This was no standalone device either. It was the OS running the metal for multiple bhyve virtual machines, themselves running various operating systems (but mainly FreeBSD-11), one of said VMs being our Samba-4 AD-DC. And, had it all gone south then, given we use ZFS in FreeBSD, and that we snapshot regularly, getting back to 10.3 would have been, and still could be, nearly instantaneous.
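For the curious, the procedure amounts to roughly this. A sketch from memory, and the pool and dataset names (zroot, zroot/ROOT/default) are just the stock FreeBSD ZFS-on-root layout, so adjust to your own:

    # recursive safety snapshot of the whole pool first
    zfs snapshot -r zroot@pre-11.0

    # fetch and merge the new release, then stage it in pieces
    freebsd-update -r 11.0-RELEASE upgrade
    freebsd-update install    # installs the new kernel
    shutdown -r now           # reboot onto it
    freebsd-update install    # installs the new userland
    # rebuild or reinstall third-party packages here, then:
    freebsd-update install    # clears out stale libraries
    shutdown -r now

    # worst case, from single-user mode or rescue media:
    zfs rollback zroot/ROOT/default@pre-11.0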
Think about what that would take in terms of man hours to accomplish moving from EL6 to 7. And moving from 5 to 6 was not much better. This is just too expensive to repeat every three years.
And allow me to forestall any claims that the chimera that is 'cloud computing' is the answer. All that does is make creating the requisite new platforms marginally less tedious. And that small advantage is purchased at the cost of handing over control of all your data to entities who are thoroughly discredited with respect to security and privacy.
I am not anti or pro systemd, upstart, or /etc/rc (or any other software, although I admit to holding a generally dim view of things from Redmond). I do not really care what is used so long as it works and introducing it does not greatly diminish the value of existing user skills and knowledge. However, I am past the point of patience with gratuitous changes that offer no appreciable benefit to the parties tasked with dealing with them. Systemd is not the problem. It is a symptom of a deeper malaise, indifference.
Think about what that would take in terms of man hours to accomplish moving from EL6 to 7. And moving from 5 to 6 was not much better. This is just too expensive to repeat every three years.
So why do it? There is absolutely nothing wrong with sticking with EL6 for a long time, certainly for the lifetime of the hardware - EL5 has only just gone EoL, and if you pay RH you can still have it on support. Just because EL7 exists, it doesn't mean that you have to upgrade to it. I've only just started to seriously roll out CentOS7, and then mostly only on new machines.
P.
On Thu, Apr 20, 2017 at 09:33:30AM -0400, James B. Byrne wrote:
Red Hat, again in my sole opinion, increasingly appears to me to be emulating another company notorious for shuffling the user interface to little evident purpose other than profit. That is good business for them. It is not good for us.
From my perspective as a Red Hat customer who supports hundreds of RHEL7 Workstation systems, Red Hat really doesn't seem to care about or test their Workstation product. Their support doesn't seem to have much training when it comes to problems with the GUI. Since GNOME itself moves along at a much faster pace than RHEL, I always end up looking for archives of documentation and trawling through GNOME's bugzilla.
Red Hat makes its business on the Server side. They don't really care about graphical user interfaces apart from the installer.
On Apr 20, 2017, at 7:33 AM, James B. Byrne byrnejb@harte-lyne.ca wrote:
When a vendor ... fundamentally changes the way the administration of an operating system is presented
I’ve gotten the sense from this other part of the thread that the answer to my question, “What are you moving to?” is FreeBSD.
If you think FreeBSD system administration hasn’t changed over the past 10 years, you must not have been using it that long. What makes you think it won’t change again in the next 10 years, possibly in very large breaking ways?
vanishingly few firms in my experience (i.e. NONE) have ever had operational programming staff write or even modify a device driver.
My company is very small. I’ve modified device drivers to make them work properly on Linux, purely in a “scratch my own itch” kind of way.
I assure you, many larger organizations also do this or something similar. Netflix is famous for using FreeBSD on their streaming servers and for tuning the FreeBSD kernel heavily for that purpose.
A business is in existence to make money for its owners, not to dick around with esoteric computer theory and practice.
I’m not glorifying change for its own sake. I’m just saying it happens, and however inessential it may be to your business’s operations, that is really not on point. The fact is that it happens everywhere in this industry, so your only choice is in which bag of changes you want to deal with, not whether you get a bag of changes.
The idea that one has to rebuild from scratch entire host systems and then laboriously port over data and customised portions to a new host simply to upgrade the underlying OS is absolutely ludicrous.
I find that most hardware is ready to fall over by the time the CentOS that was installed on it drops out of support anyway.
That is to say, I think the right way to use CentOS is to install one major version on the hardware when it’s built, and then ride it for the 7-10 years until that OS version drops out of support. (7 being the worst case, when you install a new system juuuust before the next major OS version comes out.)
Then there’s all the change that is outside the OS proper. For example, there’s all the current changes in the way encryption is handled, which would require operational changes anyway. You can’t keep running BIND 4 on your public-facing DNS servers, for example, even if all the security problems were somehow fixed without changing any user interface.
Ditto mail, HTTP, and many other critical services, since old versions often don’t even speak today’s required protocols. (TLS 1.1 minimum, DMARC, DKIM, SPF, etc.)
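To make that concrete, here is the sort of thing a modern mail setup is expected to publish in DNS. These are hypothetical records for a placeholder domain; a real deployment would also need a DKIM selector record carrying a public key, which I won’t invent here:

    example.com.         IN TXT  "v=spf1 mx -all"
    _dmarc.example.com.  IN TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"

DMARC, for instance, didn’t even exist when EL6 shipped, so no amount of interface stability saves you from that operational change.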
FreeBSD, this supposed bastion of stability, now actively discourages you from using BIND in the first place, for example. Now they want you to migrate to NSD + Unbound. Oh noes, more change!
Consider the tremendous labour costs regularly incurred in accomplishing what amounts to maintaining the status quo.
If you only wanted the status quo ante, why upgrade at all?
Obvious answer: because you actually do want *some* change.
We just upgraded a FreeBSD host from 10.3 to 11.0 in situ without problem
Lucky you. I’ve had such upgrades take a system out for a day, working around all the breakages.
Upgrading FreeBSD is historically one of the most painful things about it. It’s getting better, but only by changing how everything about packaging was done. Holy ChangeLogs, Batman!
Don’t get the wrong idea that I don’t like FreeBSD, by the way. I know these things about it because I use it regularly. This is one of those “bags of changes” I referred to above. Sometimes I want the Linux bag, and sometimes I want the FreeBSD bag, and I know going into the decision that each bag implies a future bag of changes I’ll have to deal with.
It was the OS running the metal for multiple bhyve virtual machines
Ah, more change. Bhyve only goes back to FreeBSD 10, so if you were using FreeBSD prior to that, you’d have had to either drag forward whatever VM manager you were using or migrate to bhyve.
given we use ZFS in FreeBSD, and that we snapshot regularly, getting back to 10.3 would have been, and still could be, nearly instantaneous.
That’s a great reason to pick FreeBSD. Just don’t fool yourself that by switching that you’ve somehow gotten off the upgrade treadmill. You’ve only switched bags.
Systemd is not the problem. It is a symptom of a deeper malaise, indifference.
systemd offers benefits to certain classes of end users which could not have been achieved without *some* kind of change.
We can argue about how well systemd did its job — I share many of the negative opinions about it — but I think you’ll have a very tough time convincing me that we could have gotten all the benefits without changing the user interface.
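To be concrete about the class of benefit I mean, here are a few questions you simply could not ask a SysV init system, with httpd as a stand-in service name:

    systemctl status httpd             # state, PID tree, and recent log lines in one view
    journalctl -u httpd -b             # everything the service has logged since boot
    systemctl list-dependencies httpd  # the declared dependency graph
    systemd-analyze blame              # which services slowed the boot, and by how much

You can dislike how it got there and still admit that’s useful.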
Again it comes back to the bag of features: if you didn’t want any of the features systemd brought, then you may be right to abandon Linux. (“May” because it feels like being a one-issue voter, to me.) It is good that we still have substantially different OSes to choose from.
And that’s why I use *all* the major OSes and several weird ones besides. None of it is perfect, yet it all has its place.
On 04/20/2017 05:55 PM, Warren Young wrote:
... I find that most hardware is ready to fall over by the time the CentOS that was installed on it drops out of support anyway. ...
James' point isn't the hardware cost, it's the people cost for retraining. In many ways the Fedora treadmill is easier, being that there are many more, smaller jumps than the huge leap from C6 to C7. For the most part, however, I agree with your post. I strongly disagree with the paragraph above, though.
I have worked for non-profits for most of my career thus far, which spans almost 30 years. Non-profits by their very nature live on the slimmest of margins, and donations of hardware by individuals and companies have been in my experience the bread and butter for obtaining server-quality hardware. The typical donation will be at least one or two generations old before the non-profit gets it; my current employer is just putting in production some IBM BladeCenters with the dual-socket Opteron LS20 blades (10+ years old). Given the spiky workload, these blades are suitable for the targeted use, and the electrical requirements aren't a problem (I've done the math; it would take ten years or more to justify the purchase price of a new blade based on power savings alone, and our power is quite inexpensive here). At least I can use very recent blades, and the eBay prices for 5-year-old blades are pretty good, so when I need that much more power I can get it.
Oh, and the LS20 blades are built like tanks. We have a couple hundred of them that were donated, and we're going to use them.
For what it's worth, CentOS 7, once installed, works great as long as the lack of a GUI console isn't a problem (something with the BladeCenter's KVM switch and C7's kernel keeps the keyboard from working properly).
And don't even get me started on networking equipment, where I still have Catalyst 5500-series hardware in production. (going on 20 years old and still trucking!)
And having said that, I just pulled out of service a server for another non-profit that had a power supply fan seize. I posted about moving its application Friday. It is an AMD K6-2/400 with a Western Digital 6GB boot drive and a Maxtor 30GB data drive, running Red Hat Linux 5.2. The Antec power supply was put into service in 1999. It stopped working Friday, and could have probably been put back into operation with a new power supply without a huge amount of work, but I decided it was time. Heh, it was time ten years ago!
The 6GB WD drive was only 19 years old; while I honestly wanted to see it turn 20, it was time (power supply glitches caused by overheating of the power supply; worst-case for hard disk death in my experience). Yeah, 24x7 operation for 19 years with minimal downtime. I'm going to personally put it back into service for hysterical raisins, since the VA-503+ board doesn't need re-cap and it runs very well for what it is. I'm not sure what I'm going to run on it yet. (It will be in service for the same reasons I'm going to put a Reh CPU280 running UZI280 into service.....).
And that’s why I use *all* the major OSes and several weird ones besides. None of it is perfect, yet it all has its place.
I couldn't agree more.
On Apr 24, 2017, at 7:53 AM, Lamar Owen lowen@pari.edu wrote:
James' point isn't the hardware cost, it's the people cost for retraining.
Unless you’ve hired monkeys so that you must train them to do their tasks by rote, that is a soft cost, not a hard cost. If you’ve hired competent IT staff, they will indeed need some time to work out the differences, but they will do that on their own if only given that time.
Note also that Byrne’s solution was to move to an entirely different OS, but we don’t hear about the “retraining cost” involved with that. Surely it was a larger jump from C6 or C7 to FreeBSD 10 than from C6 to C7?
He also seems to be sweeping aside the fact that FreeBSD major releases generally stay in support for about half the span of RHEL and its derivatives. If he wants to stay on a supported OS the whole time that C7 remains in support, he’s probably looking at 2 major OS version upgrades.
It’ll be interesting to see how much change FreeBSD gets in the next 7 years.
In many ways the Fedora treadmill is easier, being that there are many more, smaller jumps than the huge leap from C6 to C7.
That depends on the organization and its goals.
If you have a true IT staff that exists just to keep servers up to date and working properly, then yes, you’re right, smaller upgrades every 3-6 months are often easier to handle than trying to choke down 2-10 years of changes all at once, depending on the LTS release strategy and how many major upgrades you skip.
If you’re trying to treat the OS as a base atop which you do something else, and you just need something that will keep working for 2-10 years despite being continually patched, then choking that big ball of changes down every 2-10 years might be preferable.
My main point is that if you’re going to take the second path, don’t cry about how much change there is to choke down when you’re finally forced to move forward. You chose to put off dealing with it for many years; the chickens have come home to roost, so there will of course be a lot of work to do.
...dual-socket Opteron LS20 blades (10+ years old)...CentOS 7, once installed, works great...
That doesn’t really contradict my point.
First, I said “most” hardware, but you’ve gone and cherry-picked uncommonly durable hardware here; you’re probably out in +3 sigma territory. A lot of commodity PC-grade SOHO “server” hardware won’t even last the 3 years between major CentOS upgrades before dying of something. There was a period where I’d budget 1-2 years for a Netgear switch, for example. (They appear to be lasting longer now.)
Second, the application of my quoted opinion to your situation is that you should run that hardware with CentOS 7 through the EOL of the hardware or software, whichever comes first. That is, I’m advising the change-averse members of the audience to opt into the second group above, taking OS changes in big lumps when it’s time to move to new hardware anyway.
On Mon, April 24, 2017 10:52 am, Warren Young wrote:
On Apr 24, 2017, at 7:53 AM, Lamar Owen lowen@pari.edu wrote:
James' point isn't the hardware cost, it's the people cost for retraining.
Unless you’ve hired monkeys so that you must train them to do their tasks by rote, that is a soft cost, not a hard cost. If you’ve hired competent IT staff, they will indeed need some time to work out the differences, but they will do that on their own if only given that time.
I've been through that, and I agree with James Byrne on almost all counts, so I can give some comments from my chair here. Yes, I do consider myself a sysadmin a notch more intelligent than a monkey, and it does cost me time to adjust to the differences, and it is annoying; most annoying is adjusting to some changes in philosophy (whoever considers that last point non-existent is allowed to re-qualify me back to the level of monkey ;-)
Note also that Byrne’s solution was to move to an entirely different OS, but we don’t hear about the “retraining cost” involved with that. Surely it was a larger jump from C6 or C7 to FreeBSD 10 than from C6 to C7?
Yes and no. Maybe it is just my case, as I started migrating servers to FreeBSD even before C7 was released. FreeBSD feels closer to C5, whereas the difference between C5 and C7 is more dramatic (in my by no means objective feeling). So everyone who maintained C5 may, after a quick "jump start", feel at home with FreeBSD. My case may be even simpler as, like many older sysadmins, I maintained a few UNIXes in the past, including FreeBSD.
He also seems to be sweeping aside the fact that FreeBSD major releases generally stay in support for about half the span of RHEL and its derivatives.
True, but keeping your system current through smaller steps that happen more often is not a big deal. This is a question of taste: the long support life of RHEL and CentOS and the shorter but smoother changes of FreeBSD or some Linuxes (Debian and its clone Ubuntu come to mind) both have their advantages and their place where they shine.
If he wants to stay on a supported OS the whole time that C7 remains in support, he’s probably looking at 2 major OS version upgrades.
I've been through several FreeBSD major version upgrades on the servers I migrated to FreeBSD earliest, and they went smoothly, requiring just 3 reboots in the process. They all had a bunch of jails that were upgraded as well. Not a single major issue that I had to resolve in the process (call me lucky... knocking on wood ;-)
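In case it is useful to someone: the jails can be upgraded from the host with freebsd-update's -b flag. A sketch, with a hypothetical jail root at /usr/jails/web1 (the jail runs the host's kernel, so only its userland needs updating):

    freebsd-update -b /usr/jails/web1 \
        --currently-running 10.3-RELEASE -r 11.0-RELEASE upgrade
    freebsd-update -b /usr/jails/web1 install  # re-run after the host reboots
    service jail restart web1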
It’ll be interesting to see how much change FreeBSD gets in the next 7 years.
It really is. Unless my usual luck in choosing what to expect of the future fails me, not much change will happen to FreeBSD. I was thanking my luck big time for choosing RedHat (and continuing to Fedora, then CentOS) instead of Debian when the big flop in Debian (and all its clones) was discovered, one that had been sitting there for over two years (search for Debian predictable keys). My Debian friend was re-creating all his certificates, re-generating ssh keys, and rebuilding systems from scratch (as you don't know who might have had root access to your box). And I kept repeating that RedHat never had such a big flop. So I hope I will be as lucky with my choice of FreeBSD as I was with my choice of RedHat (and clones) back then.
And while we are here: My big thanks to RedHat, and big thanks to CentOS team for the great job you guys are doing!! I wish I could help you more than just maintaining CentOS and centosvault public mirrors.
Valeri
On 04/24/2017 11:52 AM, Warren Young wrote:
On Apr 24, 2017, at 7:53 AM, Lamar Owen lowen@pari.edu wrote:
James' point isn't the hardware cost, it's the people cost for retraining.
Unless you’ve hired monkeys so that you must train them to do their tasks by rote, that is a soft cost, not a hard cost.
Dollars are dollars. An hour spent in training is one hour less to 'do work.' (I'm intentionally playing devil's advocate here; I personally don't have a problem with the changes, other than that I now have to remember to check the OS type and version every time I log in to a server prior to issuing commands.)
Note also that Byrne’s solution was to move to an entirely different OS, but we don’t hear about the “retraining cost” involved with that. Surely it was a larger jump from C6 or C7 to FreeBSD 10 than from C6 to C7?
Guaranteed that it was a much larger jump. Although I am tangentially reminded of Apollo Domain/OS 10 where the SysV/BSD/Aegis behavior was settable by changing an environment variable.....
It’ll be interesting to see how much change FreeBSD gets in the next 7 years.
What is interesting to me, having just worked on a 20-year-old server stack last week, is how much hasn't changed as well as how much of what gets used a lot has changed (remember life before yum? How about early yum that needed to download individual headers?). But 90% of what I learned 30 years ago on Xenix System 3 for the Tandy 6000 still works (mainly because I still use vi.... :-) ).
That depends on the organization and its goals.
Very much true. The IT department I run has a bit of a reputation; our 'stock' answer to any IT question is rumored to be 'it depends.' YMMV, etc.
...dual-socket Opteron LS20 blades (10+ years old)...CentOS 7, once installed, works great...
That doesn’t really contradict my point.
First, I said “most” hardware, but you’ve gone and cherry-picked uncommonly durable hardware here; you’re probably out in +3 sigma territory.
Hey, I just picked what I have here, that's all. I could also talk about our 2007, 2009, and 2010-vintage donated EMC Clariion hardware. We have gotten many Dell PowerEdge servers and Optiplex/Precision desktops donated to us; got 19 Dell PE1950's donated in a lot three years ago, and those are some of our best servers. The last servers we actually bought were a pair of Dell PE6950's in 2007; a grant funded two of them plus VMware VI3 and a couple of EMC Clariion CX3-10c SANs. (All of those are still running and still doing their jobs.)
I'd rather have a five-year-old Precision than a 2017-model generic desktop. A bit slower, but it's going to last a whole lot longer. For my own personal use I never buy new; I'll take the same money that would buy a low-end current-year marvel and buy a three to five year-old Precision that will run faster and much longer. My current laptop is a Precision M6700 with a Core i7-3740QM. It was $600 and will run rings around anything built today at that price point (and even twice or thrice that price point I dare say!).
But we're talking servers here, and the LS20 blade for the BladeCenter is middle-of-the-road as far as server hardware is concerned. The PE1950 is on the lower side of MOR.
A lot of commodity PC-grade SOHO “server” hardware won’t even last the 3 years between major CentOS upgrades before dying of something. There was a period where I’d budget 1-2 years for a Netgear switch, for example. (They appear to be lasting longer now.)
I haven't looked at the lower end of the server hardware scale in a long time, although we did get some older low-end Dell PE SC1425's donated to us a while back. They run C7 quite well, too. I'd rather buy a used higher-end box than a new low-end box, which is going to both cost more and wear out sooner.
But that's just SOP for a non-profit.
Second, the application of my quoted opinion to your situation is that you should run that hardware with CentOS 7 through the EOL of the hardware or software, whichever comes first. That is, I’m advising the change-adverse members of the audience to opt into the second group above, taking OS changes in big lumps when it’s time to move to new hardware anyway.
There is no easy solution. The sysadmin's work and continuing education is never done. I don't mind learning new things nor is my budgeted time so tight that I can't spend company time getting familiar with newer admin paradigms. I understand that everyone is not like me (which is probably a good thing).
The sysadmin 'political landscape' is not too different from the 'regular' political landscape, really. You have conservatives, and you have progressives. They both think they're right, and they both tend to demonize those who disagree. And both are growing more extremist with time. Is there no middle ground to be had (in the sysadmin world, at least)?
I certainly understand and sympathize with James' point of view. I also understand that if we never try something new we might never find something we like better than what we've already got. (As an example: I've always thought the 'service' invocation was slap-backwards, and always thought it was a bit inane to have 'service' to control the running of the services and 'chkconfig' to enable or disable them. For that matter, how does 'chkconfig' translate to 'enable or disable services'? The systemctl invocation is cleaner and more consistent by far, at least in my opinion. I wish it had come first!)
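For anyone who hasn't made the jump yet, the mapping is simple enough, httpd being just an example service:

    # EL6 and earlier: two tools for two halves of one job
    service httpd start     # affect the running state
    chkconfig httpd on      # affect the boot-time state

    # EL7: one tool, consistent verbs
    systemctl start httpd   # affect the running state
    systemctl enable httpd  # affect the boot-time state
    systemctl status httpd  # see both facts at once, plus recent logs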