[CentOS] Dag's comment at linuxtag

Mihai T. Lazarescu mtlagm at gmail.com
Tue Jun 30 10:07:03 UTC 2009


On Mon, Jun 29, 2009 at 06:51:54PM -0700, Radu-Cristian FOTESCU wrote:

> > led to the great compiler we have today.  The same
> > would hold for any large project (the kernel, firefox, etc.)
> 
> And... are you happy with the quality of the huge $h1t which
> is Firefox? Because I am not.

Firefox was better than Mozilla.  Epiphany is less bloated
than Firefox.  It's definitely worth noting that Epiphany and
Firefox popped up so quickly because they built on Mozilla's
rendering engine, etc.

That's the powerful FLOSS idea: get inspired and build upon
previous work to suit current needs.  The other very important
ingredient is people putting effort into common projects *and*
even more people using those projects and giving *constructive*
feedback.

> As for the Linux kernel, they pushed in all kinds of crap.
> Back in 1996, I was running Linux with X in only 8 Megs of RAM!
> Now, I doubt I could even boot with so little memory...

Things get pushed into the kernel, Xorg, etc. for a good
reason, even if we fail to see it.

The 2.6 kernels boot and run just fine in maybe as little as
1 MB on embedded systems, and they bring features and
performance the 1996 version simply lacked.  That's a
flexibility you don't easily find elsewhere, not to mention
you get it for free.

Besides, HW is getting cheaper and more efficient fast.
I started programming on a 1 MHz, 8-bit system with 64 KB of
RAM, shared with the BIOS and the OS (maybe half of it was
left for applications).  Nowadays even a mouse driver may need
much more memory.

I write this email on HW that was in the supercomputer range
10 years ago or so.  But I don't know of people who double
their "SW development efficiency" every 18 months the way
Moore's law goes for HW.  That's why I value so much the
creative efforts pushing forward all kinds of features,
whether I need them or not.  These efforts give me an
environment that helps my productivity and stimulates my
creativity like nothing else.
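
Just to put a rough number on that gap (a back-of-the-envelope
sketch, using only the 18-month doubling period and the ~10-year
span mentioned above; the tidy doubling is of course an
idealization):

  # Rough scale of the HW/SW gap: capability doubling every
  # 18 months (Moore's law, as cited above) compounded over
  # ~10 years.  Purely illustrative arithmetic, not a precise claim.
  months = 10 * 12           # ~10 years since that "supercomputer" HW
  doubling_period = 18       # months per doubling
  growth = 2 ** (months / doubling_period)
  print(f"~{growth:.0f}x hardware capability in {months // 12} years")
  # prints "~102x" -- nothing remotely like that happens for
  # SW development efficiency.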

> > I fail to see why tens of micro repos are easier
> > to maintain consistent than a large one. 
> 
> They're not. But at least you don't have to make people
> get along. 

And you also get a nightmare of packages from different
sources that do not get along.  Such a system may produce
daily problems that are multiplied by tens of thousands of end
users, each of them having to spend time fixing the problems
themselves.  That's a huge waste of value, in my view.

I have been using Dag's repo since the RH7 days.  Over the
years I explored alternatives such as ATrpms, livna, etc., but
I was always very glad to come back to the richness and
stability of the Dag, Matthias and Dries repos.  For me, they
did a huge and wonderful job of putting up so much sheer value
with so few resources.  But things change, and it's a pity to
see that eroded by narrow choices, regardless of the efforts
still thrown at it.

> > > 7,600 packages is really too much for a couple of
> > > people to maintain. Unless it's scaled *down*...
> > 
> > ...or scale the maintainers up.
> 
> Still, 7,600 is unmaintainable. For their ~20k packages,
> both Debian and Ubuntu use dozens and dozens of packagers.
> (And I won't mention the quality of Ubuntu's packages.)
> As for TUV, they decided they can only support ~2.5k packages,
> regardless of the fact that they're the #1 Linux company.
> 
> I maintain that RF is way too large to be properly maintainable.

Well, you just said a few lines up that enough maintainers
have been shown to keep up with even 3x this size.  Not to
mention the (PLD, I think) examples someone else brought up in
the thread.

I see this whole issue as a matter of perceiving the real
value of a vast, well-maintained repo.  Once that value is
properly perceived, the required effort looks a lot more
worthwhile.

Mihai


