On Fri, Jul 11, 2014 at 11:34 PM, Nico Kadel-Garcia nkadel@gmail.com wrote:
But I don't think there is anything that will even tell you the differences between two systems, or verify that they have matching installations at the same update levels.
This has drifted profoundly from the delta rpms question, and might be better over in the userland. I'll toss in a last note, if no one minds.
I just brought it up here on the slim chance that the rpm-building events could be used as a repository transaction id to constrain updates.
Everyone in the business for a while runs into this. Almost everyone winds up writing their own, because they have subtle differences in what they want, and I'm afraid that most of them are quite amateurish and do not scale well.
But that's a horrible thing to do. When the person who wrote it leaves, no one else will know how to deal with it, and there will always be subtle changes in the underlying tools that make it need maintenance.
Full-blown package management with complete RPM version and release control is theoretically straightforward. We've discussed a basic "select a template environment, record it, and propagate that package list" approach, which I've had fairly good success with in environments of up to 200 hosts. And the "do it in a chroot cage" approach, which mock-based setups support these days, is very helpful because it allows fast, efficient template setup and modification without actually building new hosts.
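As a rough sketch of the "record it and propagate it" step (the file name here is hypothetical, and it assumes the exact versions are still available in the target's enabled repositories):

    # On the template host: capture the exact package set, one NVRA per line
    rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort > template-packages.txt

    # On a target host: install exactly that set
    yum -y install $(cat template-packages.txt)

A names-only variant of the same list can also go into a kickstart %packages section or a mock chroot config to stamp out the template in the first place.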
The part I don't understand about this is that no one would consider source code work without using a version control system that could assemble the set of file versions they want on demand. Yet they ignore exactly the same need for the resulting binaries.
Auditing system packages is straightforward: no one I know of does it without enormous infrastructure used for other reasons, or without simply running "ssh $hostname rpm -qa" from an authorized central server across all the hosts. You don't need yum for that; yum is a very network-burdensome tool for this and blocks other yum activities.
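A minimal sketch of that kind of central sweep (hosts.txt, manifests/, reports/, and the template file are hypothetical names, and it assumes key-based ssh from the central server):

    # Pull each host's package list and diff it against the template
    mkdir -p manifests reports
    while read -r host; do
        # -n keeps ssh from eating the rest of hosts.txt on stdin
        ssh -n "$host" "rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort" > "manifests/$host.txt"
        diff -u template-packages.txt "manifests/$host.txt" > "reports/$host.diff"
    done < hosts.txt

    # Any non-empty file under reports/ is a host that has drifted from the template
    find reports -type f -size +0

diff exits non-zero when the files differ, which is harmless here since nothing checks the exit status.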
I run ocsinventory-ng so I actually have a central database of all installed software. But nothing really uses it and it doesn't have a 'compare' operation.
But there's additional infrastructure needed. For example, the CentOS releases go out of date. If you want to replicate a stable CentOS 6.1 environment, or pull in even *one* expired package from that distribution to keep your test setup and your production setup in sync, you either have to enable and talk to the CentOS 6.1 archive server, which is not widely mirrored and therefore quite slow, or you have to set up and configure an internal CentOS 6.1 mirror.
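For the archive-server route, the yum side is just a .repo file, something along these lines (the vault URLs are from memory, so check them against the actual layout before relying on this):

    # /etc/yum.repos.d/centos-6.1-vault.repo
    [c6.1-os]
    name=CentOS 6.1 - os (vault)
    baseurl=http://vault.centos.org/6.1/os/$basearch/
    enabled=0
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

    [c6.1-updates]
    name=CentOS 6.1 - updates (vault)
    baseurl=http://vault.centos.org/6.1/updates/$basearch/
    enabled=0
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

Then the one expired package is a "yum --enablerepo=c6.1-os --enablerepo=c6.1-updates install <package>" away, and the internal-mirror variant is the same file pointing at a reposync'd copy on your own web server.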
And worse, the usual scenario is that you have to replicate some other company's back-rev system to reproduce and track down a bug.
That's... a bunch of extra work. A lot of admins wind up simply ignoring it and hoping the problem will not surface, much like they skip putting passphrases on their SSH keys, or just disable SELinux instead of learning it. They just don't consider it worth the effort.
Yes, there is that nasty bottom-line issue. It is not cheap to retrain groups of engineers to deal with the subtle problems of any single platform when they are supposed to be doing other things. And CentOS is a fairly small percentage of systems for us.
And until they get bit, hard, by subtle package discrepancies, they're often quite correct. It's not worth the effort; just do 'yum install' and hope for the best. It's not my approach, but it's understandable.
The point of using an 'enterprise' OS is that it is never supposed to have subtle package discrepancies. I mentioned the Java versions to show there are rare exceptions. But, going back to that bottom-line issue, realistically we would have just grabbed an Oracle binary if we had to fix that one quickly.