Once upon a time, Peter Duffy <peter@pwduffy.org.uk> said:
Ultimately it's all software, and software can be written/changed/updated to do anything required - all that's needed is the skill and the motivation.
Well sure, and I can build a rig to replace a wheel on your car while you're driving down the highway; that doesn't mean it is practical to do so. You could also build a distribution with both Linux and FreeBSD kernels (and IIRC somebody tried with Debian), but that doesn't make it a practical thing, especially for a commercially supported, long-term distribution like RHEL.
If systemd is so "core" that it can't be unplugged and plugged easily, and glues together a lot of otherwise unrelated components, then it's just bad software - end of story.
Nope. I said "the init system" is core, not "systemd". Someone building a coherent distribution has to make choices about what is practical to support (and nobody has unlimited man-hours to build magic tools that can swap out init systems with zero outside impact). And once you include systemd, there are features that it makes sense to take advantage of, rather than ignore because somebody has a "multiple init systems" requirement.
Sun and Apple already figured out that a "know nothing" super-simple init didn't handle all that was really needed for a modern OS. The Linux world had some earlier attempts, like Upstart (used in RHEL 6), but it never gained the critical mass needed for its functionality to be widely used (and IIRC fairly early on, it became apparent it took some wrong approaches).
The init system, being PID 1, does have a bunch of "magic" abilities on a Unix-like system, so trying to strip it down to a minimal thing turns out not to be the best approach. Of course, a lot of the crap that is in systemd-the-package (and there is a bunch, although RHEL ignores some of that, at least for now) is not in PID 1.
- parallel startup of services. Not sure that I'd want that anyway - too much risk of two services trying to grab the same resource at the same time - I'm absolutely happy with the sysvinit approach of one service startup completing/failing before the next one happens. That way, things are nice and orderly.
So, the parallel startup has shown a few issues along the way, where there were undefined dependencies. But there have always been dependency issues with SysV-style init - service dependencies can't always be described properly as an ordered list; they really form a directed graph (which systemd's unit files allow and handle - see the sketch below). Was it annoying if you encountered such a bug? Yes, but those types of bugs came up with SysV-style init repeatedly over the years anyway. You had poor solutions like init scripts calling other init scripts to make sure they had the things they needed (and "soft" deps, like on a database server, were really a mess).
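As a rough sketch of what that graph looks like in a unit file (the service names and paths here are hypothetical, made up for illustration):

    # /etc/systemd/system/myapp.service - hypothetical example
    [Unit]
    Description=Example app with graph-style dependencies
    # Hard requirement: myapp fails if the database can't be started
    Requires=postgresql.service
    # "Soft" dep: pull in memcached if present, but don't fail without it
    Wants=memcached.service
    # Ordering is declared separately from the requirements
    After=network-online.target postgresql.service memcached.service

    [Service]
    ExecStart=/usr/local/bin/myapp

Requires=/Wants= are the edges of the dependency graph, and After= is the ordering, kept separate - which covers exactly the "soft dependency on a database server" case that an ordered list of SysV scripts handled so poorly.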
For example, AFAIK it is still the case that RHEL 6 and before don't enable quotas on a filesystem on an iSCSI device. The only way to "fix" that would be to copy all the quota code from rc.sysinit to another, post-netfs, script. With systemd, I'm pretty sure the same quota unit can re-trigger after new filesystems are "discovered".
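A hedged sketch of how that could work (the mount point and unit name are invented for illustration; I haven't checked exactly how the stock quota units are wired up) - a oneshot unit hooked to the mount unit itself fires whenever that filesystem shows up, even long after boot:

    # /etc/systemd/system/quotaon-srv.service - hypothetical example
    [Unit]
    Description=Enable quotas on /srv once the iSCSI filesystem is mounted
    RequiresMountsFor=/srv
    After=srv.mount

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/sbin/quotaon /srv

    [Install]
    # Wanted by the mount unit, so a late-appearing mount still
    # triggers quota activation
    WantedBy=srv.mount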
On the flip side, when I need to add a new one-off service, I can write a dozen-line (often less) systemd unit file much more easily than writing an init script. All the odd corner cases are handled for me: I don't have to reach for something like daemontools if I want the service restarted on failure (one line in the unit file), standard out and/or error can be redirected to logging (without having to pipe to logger), etc.
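Here's what that dozen lines typically looks like (the binary path and names are made up for illustration):

    # /etc/systemd/system/oneoff.service - hypothetical example
    [Unit]
    Description=Small one-off service

    [Service]
    ExecStart=/usr/local/bin/oneoff --foreground
    # This one line replaces a daemontools-style supervisor
    Restart=on-failure
    # Stdout/stderr go to the journal, no piping to logger needed
    StandardOutput=journal
    StandardError=journal

    [Install]
    WantedBy=multi-user.target

Then "systemctl enable oneoff.service" and "systemctl start oneoff.service", and the double-fork/pidfile/restart-on-crash corner cases an init script has to get right are systemd's problem instead of mine.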
As we all know (don't we just?) sysadmin work and responsibilities are heavy, and frequently eat into evenings, nights, weekends and (so-called) holidays. Anything which increases the sysadmin workload - e.g. suddenly faced with a vertical learning curve just to do the tasks they did yesterday
Okay, but change is the only constant in this business. I agree you should not be facing much of the learning curve on production systems, but if you are running systems with thousands of users, you should always be looking ahead to new technologies. I've always worked in the Internet service provider "world"; when I started, a T1 and a router you could fit in a backpack made you an ISP. Now we have a router that is a third of a rack, requires a lift to move, with a couple of 10 gigabit Ethernet links to the world, and that's still considered a "small" ISP.
It is so much easier now to lab up new versions for testing and learning (just fire up some VMs). If you want to have an idea of "what is coming", run some Fedora releases now and then. I personally have used Fedora on my desktop since the project started (and Red Hat Linux for many years before that).
It is called professional education; lots of jobs require you to learn new skills on an ongoing basis.
I've been running CentOS 7 on all my new server installs for a year now. Is it perfect? No, but I haven't seen the perfect OS yet. Am I still learning? Yes, always. Do I think it is worth the steeper learning curve compared to, say, CentOS 5->6? Yes, I do.