[CentOS] A BIG Thank You

Thu Nov 3 18:41:39 UTC 2005
Pasi Pirhonen <upi at iki.fi>

Hi,


On Thu, Nov 03, 2005 at 08:33:38AM -0800, Robert wrote:
> 
> ummmmm because i have not specifically done what you folks are doing, maybe
> you can give us some specific insights and time tables.
> 
> in general, how many hours does it take from start to finish to do whatever
> it is that you do (gather/compile/whatever else etc) to finished product of
> 4 ISO's etc for downloading on the site???

I don't know about others, but I'd assume it's pretty much the same for
all of us (across the different arches and versions of CentOS).

Usually CPU power isn't an issue at all. Sometimes, when you have to
iterate things over and over again (in case of problems), one might
think that an 'infinite amount of CPU speed' would be nice, but it's
not about that at all. At least not for me.

As I see it, it's more about keeping a 'certified system' up for
building. I know that most things could be done in a chroot, but some
things have been pretty weird under chroot, so personally I only trust
builds done on a real installation. That's why I have pretty much
dedicated environments for everything I support. Those are
incrementally updated from U-level to U-level, with security updates
in between.
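
In practice the incremental part is nothing fancy. Roughly something
like this on each build host (a sketch only; the repo configuration is
whatever the host normally uses):

  # roll the reference build host forward to the next U-level
  yum clean all
  yum -y update
  # quick sanity check of what actually changed
  rpm -qa --last | head -20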

That's why, for example, my CentOS-3/s390(x) hosts are still named
taos390x and taos390. Those are very much the same installations
still, and they have evolved from the initial build to what they are
now. I have no intention of tampering with them. For the s390(x)
arches it's just some wasted disk space (18GB per emulated platform
for me) while they are not actively doing anything, so it's not a big
deal to keep them around.

For ia64 it's actually two rx1600 boxes (one for CentOS-3 and the
other for CentOS-4), but I have kind of loads of those anyway, so it's
not an issue there either :)

The build process itself depends a lot on how much stuff has been
updated, but generally I'd say it's maybe 12h to do all the
compilation for a U-level. Then comes making the 'centos mods where
needed' (which might be interleaved while waiting for other stuff to
compile).
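
The compilation itself is mostly just mechanically rebuilding the
updated source packages. A minimal sketch (the paths and the failure
log are made up for illustration):

  # rebuild every updated SRPM; log failures for the
  # 'centos mods where needed' round
  for srpm in /build/SRPMS.updates/*.src.rpm; do
      rpmbuild --rebuild "$srpm" || echo "$srpm" >> /build/failed.list
  done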

When all the needed RPMS are there, one must merge the previous
release, all the post-U-level security updates and the current U-level
updates together, and start the image generation process. I'd say
that's about an hour (if one uses neat utilities like repomanage by
Seth Vidal).
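
The merge step is roughly the following (directory names are just
examples; repomanage is from the yum-utils collection, and the mkisofs
line is the usual isolinux-style invocation for the x86 case, not
necessarily what the other arches need):

  # throw previous release + security updates + new U-level RPMS
  # into one tree, then drop whatever repomanage says is superseded
  cp -l prev/RPMS/* security/* u-level/RPMS/* merged/
  repomanage --old merged/ | xargs rm -f
  # then the usual bootable-iso run per disc tree
  mkisofs -R -J -T -b isolinux/isolinux.bin -c isolinux/boot.cat \
      -no-emul-boot -boot-load-size 4 -boot-info-table \
      -o CentOS-disc1.iso disc1-tree/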

Personally, I find the testing phase the most time consuming and
boring (after those damn announcements :). I find it necessary to test
all the available install methods: mini-boot.iso combined with
NFS-install, CD-install, DVD-install. I just don't want to push out
images that I don't know to be working. That will take maybe 6h or so.
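
The NFS-install test setup is about as simple as it gets; something
along these lines (the export path is just an example):

  # export the finished tree read-only, then point the
  # mini-boot.iso installer at server:/export/centos
  echo '/export/centos  *(ro,sync)' >> /etc/exports
  exportfs -ra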

If we talk about 'a lot faster machines', I don't see much difference
between a 1h glibc compilation and some 30 mins. That usually just
isn't an issue at all. As mentioned, IMO it's more about having a
dedicated reference system which is used to stack all those years
together.

If we talk about 'donated power', that would be OK for normal
maintenance, but I personally feel I need the hardware here, as the
actual installation testing has to be done locally (even though I
don't move my ass from the couch for ia64 netboot + install testing or
anything related to s390(x) :). For physical CD/DVD insertion one
usually needs to get one's ass moving a few feet .....

So, summarizing it: I'd say it's some 24 wall-clock hours to crunch
out a U-level release from start. That doesn't mean that one would be
100% occupied with the task the whole time.

Then we probably need some 24h more to get the images online,
unpacked, torrented, and distributed to some more machines to somehow
cope with the load/demand at release (which hasn't been so successful
for the past two update cycles anymore :)
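
Pushing the bits out to the extra boxes is mostly just rsync runs like
this (the host and paths are examples):

  # seed a mirror box before flipping the release public
  rsync -av --partial isos/4.2/ mirror1.example.org:/srv/centos/4.2/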



> 
> i am wondering how much time could be saved per distribution if you had A
> LOT faster hardware?

As above. It would be nice, but it only helps in some very narrow
situations.

> 
> is it a small, medium, or large percentage of time that could be saved if we
> as the CentOS community came up with A LOT faster hardware for the cause
> etc...

Off the top of my head I'd say some 10%, maybe even less. Mostly it's
about interleaving things anyway. Even while writing this email, the
machines are working on things. Sometimes one just can't do anything
but wait for the current task to complete, but that is the insertion
point for <the movies/tv-series I've been holding> :)

> 
> i guess the bottom line is, how much "machine time" is there now per
> platform and how much time can be saved if we were to try to pony up better,
> faster, newer, whatever hardware?
> 


From my point of view, it's more a question of how much compilation
power I can keep online at any given time. If I even try to power
everything up at the same time, I'll just blow my fuses. That's why I
prefer machines that I can ether-wake or start some other way
remotely: I do have my build platforms, but I don't need to keep them
online all the time (I pay some 200 euros/month for electricity
alone :)
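
Waking a box up for a build round is then a single command (the MAC
address here is obviously a made-up example):

  # send the wake-on-lan magic packet out of eth0 (as root)
  ether-wake -i eth0 00:11:22:33:44:55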

That's why, for example, I keep a little AlphaStation as the reference
build host instead of the ES45, which is a big box - the small box is
always on and available for compiling security fixes from a remote
location.

The raw numbers for me would be something like:

ia64:

5 rx1600
2 rx2600

s390/s390x:

usually all four are running on a single Athlon64 X2,
sometimes on a dual-Opteron, an Athlon64 and .... :)

axp:

ES45 - 4x1000MHz EV68CB + 12GB mem
<other ancient alpha hardware>

sparc:

Netra T1405 (quad-440MHz)
Netra T1405 (dual-440MHz)
Netra X1
2 U30
1 U10
1 U2
1 U1
and loads of those 32-bit boxes, which aren't of much use anymore.


Then there is 'the support hardware', which for me is mainly an
NFS/iSCSI server: dual-1.4GHz Opteron, 2GB mem, Areca/3ware 8-port
controllers and 16x250GB SATA disks, an HP ProCurve 4000M w/ 56 ports,
a couple of 8-port gigabit switches and such stuff.

Generally so much hardware and so little juice to run it all :P


-- 
Pasi Pirhonen - upi at iki.fi - http://iki.fi/upi/