The ongoing discussion regarding Perl modules and RPMs prompted me
to post an issue I am currently facing. I support a development team
that uses various third-party tools, some free, some not; some are
installed via RPMs and some are not. The development
cycle is 18-24 months long, and then there are many years of support
(continuing engineering) after that, for which the various versions
of the tools need to be available. There are a few dozen engineers
on the project.
Let's say that I want to install SuperTool V1.1, and it's only
available as an RPM. After thorough testing/qualification on one
workstation, I need to install it on the few dozen workstations,
which is straightforward but potentially time-consuming (and you have to
deal with workstations that may be down when you are doing the
installs). When V1.2 comes out, I can again test it on one
workstation, and then deploy it on the remaining workstations. But
what if there is a bug, and we need to test against the old
version of the tool? With RPMs, it is cumbersome to go back and
forth between versions. Contrast that with a tar-based install,
where I can untar the tool into a directory that includes a version
number in the path, and then automount that directory on all the
workstations. There is only one install, and I can easily move back
and forth among the various versions.
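For example, a versioned tar install plus an automount map might
look something like this (the tool name, paths, and server below are
made up):

    # On the file server: one install per version, side by side
    mkdir -p /export/tools/supertool-1.1
    tar -xzf supertool-1.1.tar.gz -C /export/tools/supertool-1.1

    # On the workstations: an indirect autofs map serves every version
    # /etc/auto.master entry:
    #   /tools   /etc/auto.tools
    # /etc/auto.tools entry:
    #   supertool-1.1   -ro   fileserver:/export/tools/supertool-1.1
    #   (or a wildcard entry, *   -ro   fileserver:/export/tools/&,
    #   which picks up new versions automatically)

    # Users select a version just by the path
    /tools/supertool-1.1/bin/supertool

Compare that with RPMs, where reverting means running something like
rpm -Uvh --oldpackage on every workstation.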
Now I understand the benefits of RPMs, but when it comes to
supporting vendor tools, dealing with tar-based installations and
automounts really simplifies things. I am thinking about installing
the RPM-based tools on one system, copying the entire install tree
to a NAS, and automounting it (rough sketch below). I know this is not
ideal, but I can't think of a better solution. Any thoughts from the
members of this list? Have you tried to solve similar problems in
the past? If so, how did you do it?
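For concreteness, here is roughly what I have in mind; the package
names, paths, and NAS export are made up, and it assumes the tool
still works when its tree is copied wholesale:

    # Stage the RPM into a versioned root on one machine (an empty
    # root has no packages in its database, so --nodeps is usually
    # needed, and scriptlets that expect a real system may complain)
    mkdir -p /stage/supertool-1.2
    rpm -ivh --nodeps --root /stage/supertool-1.2 supertool-1.2.rpm

    # Or, if the package is relocatable, install under a versioned
    # prefix instead
    rpm -ivh --prefix /stage/supertool-1.2 supertool-1.2.rpm

    # Copy the staged tree to the NAS and automount it as before
    rsync -a /stage/supertool-1.2/ nas:/export/tools/supertool-1.2/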
Alfred