Seeing the latest round of updates rolling in.... I just 'again' felt the need to say "Hats Off" (pun intended) to the CentOS team! The organization behind the scenes isn't readily apparent, but it very much must exist. It almost looks like there is an 'internal' competition between members, each trying to roll out the updates faster than the others. I know this must be a bit of a grueling task and one that interrupts the day's activities.
Again, GREAT job folks!!! And a big THANKS!!
John Hinton
On Wed, 2005-11-02 at 13:47 -0500, John Hinton wrote:
> Seeing the latest round of updates rolling in.... I just 'again' felt the need to say "Hats Off" (pun intended) to the CentOS team! The organization behind the scenes isn't readily apparent, but it very much must exist. It almost looks like there is an 'internal' competition between members, each trying to roll out the updates faster than the others. I know this must be a bit of a grueling task and one that interrupts the day's activities.
> Again, GREAT job folks!!! And a big THANKS!!
Thanks John,
Not much of a competition though ... Pasi always wins the update race :)
} Not much of a competition though ... Pasi always wins the update race :)
Does Pasi have the fastest workstations and servers??
- rh
--
Robert Hanson - Abba Communications
Computer & Internet Services
(509) 624-7159 - www.abbacomm.net
Hi,
On Wed, Nov 02, 2005 at 10:59:09PM -0800, Robert wrote:
> } Not much of a competition though ... Pasi always wins the update race :)
> Does Pasi have the fastest workstations and servers??
Actually, most likely not. All the ia64 updates are compiled on puny 1GHz rx1600 boxes. s390(x) is done under an emulator. alpha is normally maintained on a 266MHz EV4 (== old and slow). The beta for sparc is now security-maintained on a 500MHz US-II (Netra X1) even though it's in its beta stages, so there are no announcements for that.
I am just dedicated to the work I do. I make it a priority to handle security updates as they arrive. The same applies to quarterly updates. We can pretty much guess beforehand the week those will arrive, and I am already prepared to make them when they come up.
Usually I have all the beta stage compiled and sorted out already, and I just look at what I need to compile and try to push out the updates I already have. This especially helps the s390(x) arches, which are under an emulator (== translates to ~48 hours to compile gcc again).
I know other people are pretty committed to this too. In particular, I know that Johnny also tends to 'just make time' to get quarterly updates out, which is why we usually have 'only a few days round-trip time' for 4.x quarterly updates. Karanbir usually just 'is available' too when it's time to crunch something out.
Generally speaking, the sooner you do it, the less prone the process is to 'oh! I forgot that again' mistakes. And when you get it out, you're done with it. The time spent, for example, rolling quarterly updates won't change a bit when you delay it.
The above is not meant to say that only the people above work hard on the project. We all do. There is so much happening 'behind the scenes' which is not visible while it's all working. The number of installations is so big that our own infra needs a lot of work to be able to push out everything that is needed. Most people must have seen those 'no more mirrors' errors over the past month or so, when we weren't quite able to deliver what was demanded. That kind of issue takes a lot of effort to handle when you try to keep up a sustained rate of some 400Mbit/s to all over the world from a non-centralized network infra (hint: Gbit-connected host required :).
The above is just random thought coming out of my keyboard, which is just how I see it at this very moment.
SNIP
} I am just dedicated to the work I do. I make it a priority to handle
} security updates as they arrive. The same applies to quarterly updates.
} We can pretty much guess beforehand the week those will arrive, and I
} am already prepared to make them when they come up.
}
} Usually I have all the beta stage compiled and sorted out already, and
} I just look at what I need to compile and try to push out the updates I
} already have. This especially helps the s390(x) arches, which are under
} an emulator (== translates to ~48 hours to compile gcc again).
}
} I know other people are pretty committed to this too. In particular, I
} know that Johnny also tends to 'just make time' to get quarterly
} updates out, which is why we usually have 'only a few days round-trip
} time' for 4.x quarterly updates. Karanbir usually just 'is available'
} too when it's time to crunch something out.
SNIP
} --
} Pasi Pirhonen - upi@iki.fi - http://iki.fi/upi/
Ummmm, because I have not specifically done what you folks are doing, maybe you can give us some specific insights and timetables.
In general, how many hours does it take from start to finish to do whatever it is that you do (gather/compile/whatever else etc.) to get to the finished product of 4 ISOs etc. for downloading on the site???
I am wondering how much time could be saved per distribution if you had A LOT faster hardware?
Is it a small, medium, or large percentage of time that could be saved if we as the CentOS community came up with A LOT faster hardware for the cause etc...?
I guess the bottom line is, how much "machine time" is there now per platform, and how much time could be saved if we were to try to pony up better, faster, newer, whatever hardware?
Please let us know, and thanks.
- rh
--
Robert Hanson - Abba Communications
Computer & Internet Services
(509) 624-7159 - www.abbacomm.net
Hi,
On Thu, Nov 03, 2005 at 08:33:38AM -0800, Robert wrote:
> Ummmm, because I have not specifically done what you folks are doing, maybe you can give us some specific insights and timetables.
> In general, how many hours does it take from start to finish to do whatever it is that you do (gather/compile/whatever else etc.) to get to the finished product of 4 ISOs etc. for downloading on the site???
I don't know about the others, but I'd assume it's pretty much the same for all of us (across the different arches and versions of CentOS).
Usually the CPU power isn't an issue at all. Sometimes, when you have to iterate things over and over again (in case of problems), one might think that an 'infinite amount of CPU speed' would be good, but it's not about that at all. At least not for me.
As I see it, it's more about keeping a 'certified system' up for building. I know that most things could be done in a chroot, but some things have been pretty weird under chroot, so personally I only trust building on a real installation. That's why I have pretty much dedicated environments for everything I support. Those are incrementally updated from U-level to U-level, with security updates in between.
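For illustration, a minimal sketch (modern Python, with a hypothetical manifest directory - not the actual tooling) of what keeping track of such a dedicated environment can look like: record the build host's package list after each update round so any drift between U-levels is easy to audit.

    #!/usr/bin/env python
    # Sketch: record the build host's package manifest so drift between
    # U-levels is easy to audit.  The manifest directory is hypothetical.
    import glob
    import os
    import subprocess
    import time

    MANIFEST_DIR = "/root/build-manifests"   # hypothetical location

    def current_manifest():
        out = subprocess.run(
            ["rpm", "-qa", "--qf", "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n"],
            capture_output=True, text=True, check=True).stdout
        return sorted(set(out.splitlines()))

    def save_and_diff():
        os.makedirs(MANIFEST_DIR, exist_ok=True)
        older = sorted(glob.glob(os.path.join(MANIFEST_DIR, "*.list")))
        packages = current_manifest()
        path = os.path.join(MANIFEST_DIR, time.strftime("%Y%m%d-%H%M%S") + ".list")
        with open(path, "w") as f:
            f.write("\n".join(packages) + "\n")
        if older:
            with open(older[-1]) as f:
                previous = set(f.read().splitlines())
            current = set(packages)
            for pkg in sorted(current - previous):
                print("added:  ", pkg)
            for pkg in sorted(previous - current):
                print("removed:", pkg)

    if __name__ == "__main__":
        save_and_diff()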
That's why, for example, my CentOS-3/s390(x) hosts are still named taos390x and taos390. They are very much the same installations still, and they have evolved from the initial build to what they are now. I have no intention of tampering with them. For the s390(x) arches it's just some wasted disk space (18GB per emulated platform for me) while they are not actively doing anything, so it's not a big deal to keep them.
For ia64 it's actually two rx1600 boxes (one for CentOS-3 and the other for CentOS-4), but I have kind of loads of those anyway, so it's not an issue at all there either :)
The build process itself depends a lot on how much stuff is updated, but generally I'd say it's maybe some 12 hours to do all the compilation for a U-level. Then comes making some 'CentOS mods where needed' (which might be interleaved while waiting for other stuff to compile).
When all the needed RPMS are there, one must merge the previous release, all the post-U-level security updates and the current U-level updates together, and start the image generation process. I'd say that's about an hour's work (if one uses neat utilities like repomanage by Seth Vidal).
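As a rough sketch of that merge step (directory names are made up, modern Python; repomanage here is the utility from yum-utils): dump the previous release, the security updates issued since it, and the freshly built U-level packages into one tree, then drop everything repomanage marks as superseded so only the newest version of each package survives.

    #!/usr/bin/env python
    # Sketch of the merge step before image generation.  All paths are
    # hypothetical; repomanage (yum-utils) picks the superseded packages.
    import glob
    import os
    import shutil
    import subprocess

    MERGED = "/srv/build/u-merge/RPMS"           # hypothetical scratch tree
    SOURCES = [
        "/srv/dist/previous-release/RPMS",       # previous U-level
        "/srv/dist/security-updates/RPMS",       # security updates since then
        "/srv/build/fresh-u-level/RPMS",         # freshly built U-level packages
    ]

    os.makedirs(MERGED, exist_ok=True)
    for src in SOURCES:
        for rpm in glob.glob(os.path.join(src, "*.rpm")):
            shutil.copy2(rpm, MERGED)

    # 'repomanage --old <dir>' prints the packages that are superseded by a
    # newer version in the same tree; removing them leaves only the newest.
    old = subprocess.run(["repomanage", "--old", MERGED],
                         capture_output=True, text=True, check=True).stdout.split()
    for path in old:
        os.remove(path)
    print("kept %d packages" % len(glob.glob(os.path.join(MERGED, "*.rpm"))))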
Personally I find the testing phase the most time-consuming and boring (after those damn announcements :). I find it necessary to test all the methods available, like the mini-boot.iso combined with an NFS install, CD install, and DVD install. I don't want to push out images that I don't know to be working. That takes maybe some 6 hours or so.
If we talk about 'a lot faster machines', I don't see much relevance between a 1-hour glibc compilation and some 30 minutes. That usually just isn't an issue at all. As mentioned, it's IMO more about having a dedicated reference system which is used to stack all those years together.
If we talk about 'donated power', it would be OK for normal maintenance, but I personally feel I need the hardware here, as the actual installation testing has to be done locally (even though I don't move my ass from my couch for ia64 netboot + install testing or anything related to s390(x) :). For physical CD/DVD insertion one usually needs to get one's ass moving a few feet.....
So summarizing it: I'd say it takes some 24 wall-clock hours to crunch up a U-level release from start. That doesn't mean that one would be 100% occupied with the current task.
Then we probably need some 24 hours more to get the images online, unpacked, torrented and distributed to some more machines, to even somehow cope with the load/demand for the release (which hasn't been so successful for the past two update cycles anymore :)
> I am wondering how much time could be saved per distribution if you had A LOT faster hardware?
As above. It would be nice, but only good for some very narrow situations.
> Is it a small, medium, or large percentage of time that could be saved if we as the CentOS community came up with A LOT faster hardware for the cause etc...?
Off the top of my head I'd say some 10%, maybe even less. Mostly it's about interleaving things anyway. Even while writing this email, the machines are working on things. Sometimes one just doesn't find anything else to do than wait for the current task to complete, but that is the insertion point for <the movies/tv-series I've been holding> :)
> I guess the bottom line is, how much "machine time" is there now per platform, and how much time could be saved if we were to try to pony up better, faster, newer, whatever hardware?
From my point of view, it's more like how much compilation power I can keep online at a specific time. If I even try to power everything up at the same time, I'll just blow my fuses. I prefer machines that I can ether-wake or start some other way remotely, so I do have my build platforms, but I don't need to keep them online all the time (I pay some 200 euros/month for electricity alone :)
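The ether-wake part boils down to sending the standard Wake-on-LAN 'magic packet'. A minimal sketch (the MAC and broadcast addresses below are made up) that does the same thing from any box on the LAN:

    #!/usr/bin/env python
    # Minimal Wake-on-LAN sketch: the magic packet is 6 x 0xFF followed by
    # the target MAC repeated 16 times, sent as a UDP broadcast.
    import socket

    def wake(mac, broadcast="192.168.1.255", port=9):
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))
        sock.close()

    if __name__ == "__main__":
        wake("00:11:22:33:44:55")   # hypothetical build box MAC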
That's why, for example, I keep a little AlphaStation as the reference build host instead of the ES45, which is a big box - the small box is always on and available for compiling security fixes from a remote location.
The raw numbers for me would be something like:

ia64:
  5 rx1600
  2 rx2600

s390/s390x:
  usually all four are running on a single Athlon64/X2, sometimes on a dual-Opteron, an Athlon64 and .... :)

axp:
  ES45 - 4x1000MHz EV68CB + 12GB mem
  <other ancient alpha hardware>

sparc:
  Netra T1405 (quad-440MHz)
  Netra T1405 (dual-440MHz)
  Netra X1
  2 U30
  1 U10
  1 U2
  1 U1
  and loads of those 32-bit boxes which aren't of much use anymore.

Then there is 'the support hardware', which for me is mainly an NFS/iSCSI server: a dual 1.4GHz Opteron with 2GB mem, Areca/3ware 8-port controllers and 16x250GB SATA disks, an HP ProCurve 4000M w/ 56 ports, a couple of 8-port gigabit switches and such stuff.
Generally so much hardware and so little juice to run it all :P
On Thu, 2005-11-03 at 20:41 +0200, Pasi Pirhonen wrote:
> Hi,
> On Thu, Nov 03, 2005 at 08:33:38AM -0800, Robert wrote:
> > Ummmm, because I have not specifically done what you folks are doing, maybe you can give us some specific insights and timetables.
> > In general, how many hours does it take from start to finish to do whatever it is that you do (gather/compile/whatever else etc.) to get to the finished product of 4 ISOs etc. for downloading on the site???
> I don't know about the others, but I'd assume it's pretty much the same for all of us (across the different arches and versions of CentOS).
> Usually the CPU power isn't an issue at all. Sometimes, when you have to iterate things over and over again (in case of problems), one might think that an 'infinite amount of CPU speed' would be good, but it's not about that at all. At least not for me.
I agree with Pasi here ... by far, the biggest issue is the boring part of the process (release announcements, spinning and testing the installs, maintaining the build machines in a pristine build environment, comparing the linked libraries to upstream and investigating/correcting differences, fixing things that are broken {like the dhcpd and glade2 issues and, before that, thunderbird}).
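As an illustration of the 'comparing the linked libraries to upstream' step, one way to spot differences is to diff the automatically generated library requires of our rebuilt package against the upstream binary package. A rough sketch (the package filenames are made up):

    #!/usr/bin/env python
    # Sketch: compare the soname-style requires of a locally built package
    # against the upstream binary package.  Filenames are made up.
    import subprocess

    def lib_requires(rpm_path):
        out = subprocess.run(["rpm", "-qp", "--requires", rpm_path],
                             capture_output=True, text=True, check=True).stdout
        # crude filter: keep only soname-style entries such as libssl.so.4
        return set(line.split()[0] for line in out.splitlines()
                   if line.startswith("lib") and ".so" in line)

    ours = lib_requires("foo-1.0-1.centos.i386.rpm")     # hypothetical rebuild
    theirs = lib_requires("foo-1.0-1.i386.rpm")          # hypothetical upstream

    for dep in sorted(theirs - ours):
        print("missing vs upstream:", dep)
    for dep in sorted(ours - theirs):
        print("extra vs upstream:  ", dep)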
I also think that we need to do the build process on local machines, as that allows us more control over the private keys used for signing and prevents accidental release of development files.
The biggest problem we have had during the last (2) CentOS-4 update cycles is being able to handle the distribution load. That is where we could use some major help :)
Donations of servers with large space (>200GB), fast connections (100Mbit/1Gbit to the internet) and unlimited bandwidth (we served 24TB in the last 3 weeks) that we can control would help the cause a great deal.
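For a sense of scale, some rough arithmetic (decimal units, assuming the traffic were spread evenly): 24TB over 3 weeks works out to roughly a 100Mbit/s pipe saturated around the clock, before any release-day peaks.

    # Back-of-envelope: 24TB served over 3 weeks as a sustained average rate.
    terabytes = 24
    seconds = 3 * 7 * 24 * 3600                  # three weeks
    bits = terabytes * 1e12 * 8
    print("average: %.0f Mbit/s" % (bits / seconds / 1e6))   # ~106 Mbit/s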
If you have a machine like that to donate (especially Hosting Providers and ISPs who are now using a free enterprise-level distro in CentOS), please see:
http://www.centos.org/donate/
[snip]
} I agree with Pasi here ... by far, the biggest issue is the boring part
} of the process (release announcements, spinning and testing the
} installs, maintaining the build machines in a pristine build
} environment, comparing the linked libraries to upstream and
} investigating/correcting differences, fixing things that are broken
} {like the dhcpd and glade2 issues and, before that, thunderbird}).
}
} I also think that we need to do the build process on local machines, as
} that allows us more control over the private keys used for signing and
} prevents accidental release of development files.
}
} The biggest problem we have had during the last (2) CentOS-4 update
} cycles is being able to handle the distribution load. That is where we
} could use some major help :)
}
} Donations of servers with large space (>200GB), fast connections
} (100Mbit/1Gbit to the internet) and unlimited bandwidth (we served 24TB
} in the last 3 weeks) that we can control would help the cause a great
} deal.
}
} If you have a machine like that to donate (especially Hosting Providers
} and ISPs who are now using a free enterprise-level distro in CentOS),
} please see:
}
} http://www.centos.org/donate/
}
} [snip]
Great info, gentlemen...
One thing I was specifically getting at in terms of machine processing speed was this.....
If machines for your _local_ use _were_ to be donated to the cause, that would help decrease your compilation and other machine processing time factors.... umm, what would you need compared to what you have now in and for the various architectures?
I know there are folks out there that should and can afford to donate bigger, better processors, I/O backplanes, and storage devices to the cause.
To me every little bit helps, to an extent, so why not take advantage of it if possible?
Take care,
- rh
--
Robert Hanson - Abba Communications
Computer & Internet Services
(509) 624-7159 - www.abbacomm.net
On Fri, 2005-11-04 at 07:26 -0800, Robert wrote:
> Great info, gentlemen...
> One thing I was specifically getting at in terms of machine processing speed was this.....
> If machines for your _local_ use _were_ to be donated to the cause, that would help decrease your compilation and other machine processing time factors.... umm, what would you need compared to what you have now in and for the various architectures?
Well, I would love to have better / faster local build machines than I have for the x86_64 and i386 arches....
Currently I build i386 on a dual-processor 1.8GHz Xeon machine and x86_64 on a single-processor AMD Athlon 64 3000+
(both of which I provided myself, and which are only used to build CentOS).
If someone wants to donate a top-of-the-line multi-processor Opteron box for x86_64 and/or a nice new multi-processor Xeon box for i386 ... that would be fine by me :)
> I know there are folks out there that should and can afford to donate bigger, better processors, I/O backplanes, and storage devices to the cause.
> To me every little bit helps, to an extent, so why not take advantage of it if possible?
We would certainly take advantage of it if something like that were provided to build on as well, and it would be an appreciated donation.
BUT ... it would be much better overall for the project (and the CentOS community) if we could get donations of fast and big boxes to distribute the updates with instead.
Hi,
On Fri, Nov 04, 2005 at 07:26:33AM -0800, Robert wrote:
> One thing I was specifically getting at in terms of machine processing speed was this.....
> If machines for your _local_ use _were_ to be donated to the cause, that would help decrease your compilation and other machine processing time factors.... umm, what would you need compared to what you have now in and for the various architectures?
As I said before, the machine speeds aren't much of an issue at the moment. The issues, sadly, are with the bandwidth for distributing.
I don't even have to take bets on this when I say I am sure that Johnny and Lance have spent more time keeping the distribution up over the past three weeks than it took to crunch up the 3.6/4.2 releases - making new scripts that can monitor the status of the external mirrors etc. to get some kind of picture of where it goes wrong.
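As a rough illustration of what such a mirror-status script could look like (the mirror URLs and the marker file below are made up, and it's modern Python): fetch a small known file from each external mirror and flag the ones that are unreachable or stale.

    #!/usr/bin/env python
    # Sketch of an external-mirror check: fetch a small marker file from
    # each mirror and flag the unreachable or out-of-date ones.  The mirror
    # list and marker path are made up for illustration.
    import urllib.request

    MIRRORS = [
        "http://mirror.example.org/centos",
        "http://ftp.example.net/pub/centos",
    ]
    MARKER = "/4.2/os/i386/.discinfo"   # hypothetical small per-release file
    EXPECTED = None                     # optionally the master copy's contents

    for base in MIRRORS:
        url = base + MARKER
        try:
            data = urllib.request.urlopen(url, timeout=15).read()
        except Exception as exc:
            print("DOWN   %s (%s)" % (base, exc))
            continue
        if EXPECTED is not None and data != EXPECTED:
            print("STALE  %s" % base)
        else:
            print("OK     %s" % base)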
> I know there are folks out there that should and can afford to donate bigger, better processors, I/O backplanes, and storage devices to the cause.
I'd be willing to receive a FibreChannel-connected box which would eat SATA disks any day ;-)
Actually, most of the 'would be neat to have available to be able to solve problems better' hardware would, for me, be something other than CPU.
> To me every little bit helps, to an extent, so why not take advantage of it if possible?
Network bandwidth? :P
I do believe that we were pretty well covered on the torrent side last time. I have my own little feeder, and after the initial rush I didn't actually feed at the full 10Mbit/s anymore, as there were so many fast feeds covering the new downloads - so big thanks to whoever those are.
On Friday 04 November 2005 10:51, Pasi Pirhonen wrote:
> Actually, most of the 'would be neat to have available to be able to solve problems better' hardware would, for me, be something other than CPU.
> > To me every little bit helps, to an extent, so why not take advantage of it if possible?
> Network bandwidth? :P
> I do believe that we were pretty well covered on the torrent side last time. I have my own little feeder, and after the initial rush I didn't actually feed at the full 10Mbit/s anymore, as there were so many fast feeds covering the new downloads - so big thanks to whoever those are.
I hosted the bittorrent on my dedicated server and gave up a constant 7Mbit/s for the first week and a bit, then capped it at 5Mbit/s, and served up about 700GB. I'm sure there were others that did the same.
And I'll be happy to do it again with each and every release.
Dennis
Hi,
On Fri, Nov 04, 2005 at 07:09:11AM -0600, Johnny Hughes wrote:
> On Thu, 2005-11-03 at 20:41 +0200, Pasi Pirhonen wrote:
> The biggest problem we have had during the last (2) CentOS-4 update cycles is being able to handle the distribution load. That is where we could use some major help :)
> Donations of servers with large space (>200GB), fast connections (100Mbit/1Gbit to the internet) and unlimited bandwidth (we served 24TB in the last 3 weeks) that we can control would help the cause a great deal.
> If you have a machine like that to donate (especially Hosting Providers and ISPs who are now using a free enterprise-level distro in CentOS), please see:
Yep. I did mention this briefly myself too, but as I don't really do that part of the project, I am not so sure about the details myself (unless I ask Johnny or Lance etc.).
Sadly, I have a few extra Sun A1000s with 12x18GB disks just lying around in a stack, which would still be quite kickass boxes for really pushing stuff out of the pipe. Then again, the pipe would need to be in Finland for me to be able to make things better on the hardware side.
The really sad part is that those are too much heat for me normally, for only some 180GB of space per box :)
It's quite funny that when things get 'more popular', it's the distribution that is the first thing that doesn't scale up. The work on the distro itself doesn't actually change at all - it's the very same amount of work per release whether there is 1 user or 1 million users.
In short: I have to agree 100% with Johnny. We're in serious trouble with the network distribution, and it seems to be worse now than it was with U1, so U3 ....