Hello Producers
"Longevity of Support" is an attractive drawcard for CentOS if it means the exact opposite of Fedora's "short support cycle" that does not provide updating of infrastructural libraries for very long, libraries which newer versions of applications (like Firefox, Thunderbird, Opera etc) depend on and which wont install unless the libraries are also newer versions? But is that what it means -- ie that those infrastructural libraries (libpango, libcairo etc) are continuously updateable to fairly recent versions?
If so, the problem is in reconciling that meaning with the reputation of CentOS for only supporting older versions of applications (e.g. Firefox 1.5, Thunderbird 1.0 etc.). It does reconcile, of course, if the implication is merely that the CentOS user must compile and install the later versions of such applications from source, rather than having the luxury of pre-packaged binaries. It doesn't reconcile if there is some other critical reason why newer applications just won't install. But which is it?
I ask here because the profusion of vague mission statements and 'target-end-user-profile' claims that litter the internet regarding '*nix distros' seldom actually address those real issues. Hopefully someone can enlighten me. My complex production & development desktop takes months to fully port to a new OS (or OS version), so OS updates just to get library updates (à la the Fedora philosophy) become increasingly untenable.
Then there is a further question, I'm afraid. Since CentOS also specifically targets the profile of a so-called 'enterprise/server user', what does that actually entail? Does it mean concrete security strictures which bolt down non-root users, or does it merely mean the availability of SELinux (which can be turned OFF)? For instance, with SELinux OFF, can a user still: (a) su to root via Kterm anytime? (b) access services-admin anytime via Menu+PAM to control printers, modems, daemons etc.? (c) compile? (d) have 6 to 8 desktops running? (e) call up 'konquerorsu.desktop' (root Konqueror with embedded root Kterm)? (f) have normal cron scheduling? Maybe more, but that's a start.
Thanks for listening.
Sean
On Fri, 17 Dec 2010, Sean wrote:
<snip>
You might be interested in giving my ALI scripts a whirl on a spare machine (even an old laptop) to start with, so you get used to how they work.
I wrote these especially to deal with doing a fresh linux installation.
http://www.karsites.net/centos/anyuser/auto-linux-installer.php
I can set up the services I want running in under 10 seconds. Beats sitting there doing it manually for 3 days!
The general idea is that you modify the installer scripts to work with a particular system - just do it one time. Then you can replay the scripts as often as you want, to re-install your system.
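To make the replay idea concrete, here is a minimal sketch of what one idempotent step could look like -- written in Perl (which comes up later in this thread) rather than the shell the ALI scripts apparently use, and with an invented service list, so it is an illustration rather than actual ALI code:

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Invented example list of services to enable.
  my @services = qw(sshd crond httpd);

  for my $svc (@services) {
      # chkconfig/service were the CentOS-era tools; both calls are
      # idempotent, so the whole script can be replayed safely.
      system('chkconfig', $svc, 'on') == 0
          or warn "chkconfig $svc failed (exit $?)\n";
      system('service', $svc, 'restart') == 0
          or warn "service $svc restart failed (exit $?)\n";
  }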
Please let the list know if they help with your installation/update woes.
BTW, some applications such as Firefox need to be updated to their latest versions, otherwise some websites will not work with the older version. I had these issues running an old version of FF on Fedora 8. I went from F8 to F12 using my ALI scripts without any problems.
Kind Regards,
Keith Roberts
Interesting, and probably worth a play indeed, although I tend to steer clear of Bash (unhappy with it) whenever possible and do the same in Perl (happy with it). I imagine there is machine-level stuff involved that would rule out a pure Perl version? However, my difficulties with OS replacement are not so much the OS setup itself but the 'production' stuff that needs to go on top, and a raft of dependencies -- compilers, BerkeleyDB, myriad Perl modules etc. etc. Since the system is 'live', I usually have to run two versions in parallel for a long time... so lots of rollbacks, synchronising overhead and so on. Usually newer versions of some things have to be replaced with older versions, and then inter-dependency issues arise... some of the stuff I upgraded specifically for suddenly stops working. You are familiar with the general picture, I'm sure. But thanks for the thought. Sean
<div class="moz-text-flowed" style="font-family: -moz-fixed">On Fri, 17 Dec 2010, Sean wrote:
To: centos@centos.org From: Sean soso@orcon.net.nz Subject: [CentOS] two cents or not two cents
Hello Producers
"Longevity of Support" is an attractive drawcard for CentOS if it means the exact opposite of Fedora's "short support cycle" that does not provide updating of infrastructural libraries for very long, libraries which newer versions of applications (like Firefox, Thunderbird, Opera etc) depend on and which wont install unless the libraries are also newer versions? But is that what it means -- ie that those infrastructural libraries (libpango, libcairo etc) are continuously updateable to fairly recent versions?
If so, the problem is in reconciling that meaning with the reputation of CentOS to only support older versions of applications (eg Firefox-1.5, Thunderbird-1.0 etc). It does reconcile, of course, if the implications are merely that the CentOS user must compile and install the later versions of such applications from source, rather than having the luxury of pre-packaged binaries. It doesn't reconcile if there is some other critical reason why newer such applications just wont install. But which?
I ask here because the profusion of vague mission statements and 'target-enduser-profile' claims that litter the internet re '*nix distros' seldom actually address those real issues. And hopefully someone can enlighten. My complex production & developement desktop takes months to fully port to a new OS (or OS-version), so OS updates to get library updates (ala Fedora philosophy) becomes increasingly untenable.
You might be interested in giving my ALI scripts a whirl on a spare machine (even an old laptop) to start with, so you get used to how they work.
I wrote these especially to deal with doing a fresh linux installation.
http://www.karsites.net/centos/anyuser/auto-linux-installer.php
I can set up the services I want running in under 10 seconds. Beats sitting there doing it manually for 3 days!
The general idea is that you modify the installer scripts to work with a particular system - just do it one time. Then you can replay the scripts as often as you want, to re-install your system.
Please let the list know if they help with your installation/update woes.
BTW. Some applications such as Firefox need to be updated to their latest versions, otherwise websites will not work with an older version. I had these issues with running an old version of FF on Fedora 8. I went from F8 to F12 using my ALI scripts without any problems.
Kind Regards,
Keith Roberts
On 12/17/10 2:12 PM, Sean wrote:
<snip>
You didn't exactly make it clear whether you've used CentOS or not, but keeping interfaces from changing in ways that break things that used to work is the whole point of 'enterprise' distributions. CentOS inherits from RHEL the work of backporting bug/security fixes without introducing behavior changes over its long life span.
You might also do your own homework and avoid components with a history of breaking backwards compatibility (like BerkeleyDB...). As you have probably noticed, core perl has excellent historical stability -- interpolating unquoted @ in strings is just about the only change in perl 5 that might require altering code all the way back from perl 1. But the modules are done by lots of other people and are occasionally re-factored in ways that require coordinated changes. If you are getting them from a 3rd-party repository, someone else has usually done the work of vetting the dependencies among them.
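A two-line illustration of that one incompatibility (the address is invented):

  # perl 1 left a literal @ inside a double-quoted string alone; since
  # perl 5, @bar would interpolate as an array, so it must be escaped:
  my $addr = "foo\@bar.com";    # or single-quote it: 'foo@bar.com'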
Or, you might move to java for a more self-contained, OS/distribution independent way of doing things.
Les Mikesell wrote:
<div class="moz-text-flowed" style="font-family: -moz-fixed">On 12/17/10 2:12 PM, Sean wrote: > Interesting, and probably worth a play with indeed, although I tend to > steer clear of Bash (unhappy with) whenever possible to do the same in > Perl (happy with). I imagine there is machine level stuff involved that > would rule out a pure Perl version? > However, my difficulties for OS replacement are not so much the OS setup > itself but the 'production' stuff that needs to go on top and a raft of > dependencies -- compilers, BerkeleyDB, myriad Perl modules etc etc etc. > Since the system is 'live', I usually have to run 2 versions in parallel > for a long time... so lots of rollbacks, synchronising overhead and so > on. Usually newer versions of some things have to be replaced with older > versions and then inter-dependency issues arise... some of the stuff I > upgraded specifically for suddenly stops working. You are familiar with > the general picture, I'm sure. > But thanks for the thought.
You didn't exactly make it clear whether you've used CentOS or not, but keeping those interfaces from changing in ways that break things that used to work is the whole point of 'enterprise' distributions and CentOS inherits the work of backporting bug/security fixes without introducing behavior changes over the long life span from RHEL.
You might also do your own homework and avoid components with a history of breaking backwards compatibility (like BerkeleyDB...). As you have probably noticed, core perl has excellent historical stability - interpolating unquoted @ in strings is just about the only change in perl 5 that might require a change all the way back from perl1 code. But the modules are done by lots of other people and occasionally are re-factored in ways that require coordinated changes. If you are getting these from a 3rd party repository, someone else has usually done the work of vetting the dependencies among them.
Or, you might move to java for a more self-contained, OS/distribution independent way of doing things.
Why Perl? Because writing/maintaining 20,000 lines of terse Perl code is manageable, whereas the equivalent 200,000+ in Java ruled itself out at the very beginning (even at a time when I knew some Java but no Perl). A practical decision I clap myself on the back for every single day, despite knowing that had I gone with Java (and had this project fallen over long ago) I could now be getting big quids from some corporate developer who needs a team of new Java graduates overseen..... (hmmmm, or was it the right decision?).....
Core Perl stability? I agree.
Why BerkeleyDB? I don't know of an embedded-db equivalent that will store 'any and every data exactly as is'.
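For illustration, that 'exactly as is' property seen through Perl's tied-hash interface to Berkeley DB -- a sketch only, with an invented file name, key and payload:

  use strict;
  use warnings;
  use DB_File;
  use Fcntl;

  # Keys and values are opaque byte strings: no types, no encoding rules.
  tie my %db, 'DB_File', 'capture.db', O_CREAT|O_RDWR, 0644, $DB_HASH
      or die "tie failed: $!";

  $db{'=?utf-8?Q?odd=20key?='} = "\x00\x01 raw binary \xff";  # stored verbatim
  print length($db{'=?utf-8?Q?odd=20key?='}), " bytes back, untouched\n";
  untie %db;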
Originally on RH8 for 3-4 years, the first attempt to port onto a brand-new release of FC4 broke everywhere. The second attempt a year or so later went better and remains. In the meantime the FC support philosophy has tightened/altered to the point where I simply must abandon it. I believe it has become just an 'alpha test ground' for RHEL. Reminds me of the painful DOS saga -- the first version to work properly (DOS 6) was just about irrelevant when finally released. So yes, CentOS has come into my sights... (and I'm a bit long in the tooth to tiptoe around, as you may have gathered!).
Sean
On 12/18/10 3:24 PM, Sean wrote:
<snip>
Starting from scratch now or recently, it would be hard to argue maintainability for perl vs. java, but back in the java 1.4 days or before it was probably the right choice. But java sort of isolates you from changes in the rest of the platform. And Groovy eliminates most of the unnecessary verbosity, if you don't mind a bit of a performance hit.
Core Perl stability? I agree.
Why BerkeleyDB? I don't know of an embedded-db equivalent that will store 'any and every data exactly as is'.
I'd think sqlite first -- these days anyway. BerkeleyDB had bugs in growing existing items way too long for me to ever trust it again. Or use a server instead of embedding anything. PostgreSQL and MySQL are both fairly trouble-free, although they've had their own version-specific issues. Or if you need scale, look at something like Riak.
<snip>
If you had moved to Centos3 as the first step, you could have run that with nothing more drastic than a periodic 'yum update' for years, then jumped to Centos5 with no rush to change again even now.
Les Mikesell wrote:
<div class="moz-text-flowed" style="font-family: -moz-fixed">On 12/18/10 3:24 PM, Sean wrote: > >> >> Or, you might move to java for a more self-contained, OS/distribution >> independent way of doing things. >> > Why Perl? Because writing/maintaining 20,000 lines of terse Perl code is > manageable, whereas the equivalent 200,000+ in Java ruled itself out at > the very beginning, (even at a time when I knew some Java but no Perl). > A practical decision I clap myself on the back for every single day > despite knowing that had I gone with Java (and this project fallen over > long ago) I could now be getting big quids from some corporate developer > who needs a team of new Java graduates overseen.....(hmmmmm or was it > the right decision?).....
Starting from scratch now or recently, it would be hard to argue maintainability for perl vs. java, but back in java 1.4 days or before, it was probably the right choice. But java sort of isolates you from changes in the rest of the platform. And groovy eliminates most of the unnecessary verbosity if you don't mind a bit of a performance hit.
Groovy is a new one on me -- what is it? And surely the driver behind widespread Java adoption is still that others maintain your code more easily (i.e. the corporate/factory model), implying a price still to pay for a developer who just needs to maintain his own code suite? Besides being anathema to me, strong data typing, for example, is just one feature that explodes code size, but it fits perfectly with the factory model. In 5+ years of intense coding with untyped R/Basic I recall a total of maybe 3 crashes from trying to do math on a string (seriously, a non-issue for the die-hard maverick!). Is code size under-rated? Conveniently swept under the carpet?
<snip>
I'd think sqlite first -- these days anyway. BerkeleyDB had bugs in growing existing items way too long for me to ever trust it again. Or use a server instead of embedding anything. PostgreSQL and MySQL are both fairly trouble-free, although they've had their own version-specific issues. Or if you need scale, look at something like Riak.
I do use PostgreSQL for data that is person-entered, i.e. where interactivity facilitates on-the-spot correction of rejected inputs. The inbuilt constraints of the server db model clearly target multi-person updaters who may or may not be focusing on what they are doing. Great for keeping mega-stores of artificially structured (simple) stuff like phone lists; not so good at accepting all the vagaries the real world may throw at it in automated background-capture scenarios, sometimes from suspect sources. BerkeleyDB may break occasionally, but it is recoverable with basic OS tools and a text editor if the provided recovery tools fail (not locked in a proprietary binary closet -- been there, done that, still hurting!).
<snip>
If you had moved to Centos3 as the first step, you could have run that with nothing more drastic than a periodic 'yum update' for years, then jumped to Centos5 with no rush to change again even now.
Ah, now you tell me!
Sean
On 12/19/10 2:40 PM, Sean wrote:
<snip>
Groovy is a new one on me -- what is it?
Groovy is java to the extent that it runs under the same JVM and has access to compiled code in standard jars. But it can be run without compiling, and it is more dynamic and less verbose. See http://groovy.codehaus.org/ -- their description is that it is the language java would be if it had been invented in this century. It adds a runtime method dispatcher, so there's a bit of overhead compared to straight java. There's also a companion, 'grails', which is the equivalent of rails for ruby (web sites mostly by convention) but, again, running under a stable JVM.
And surely the driver behind widespread Java adoption is still that others maintain your code more easily (i.e. the corporate/factory model), implying a price still to pay for a developer who just needs to maintain his own code suite?
And that you get some massive libraries -- spring/hibernate, lucene, etc. -- plus JDBC drivers for about every DB known to man. And there are IDEs like eclipse that do a lot of the boilerplate grunt work for you, and maven to manage components as you scale up.
I do agree personally -- I can't think in java, and I do much better when you can squeeze the logic of a routine onto one page where you can see it all at once.
Besides being anathema to me, strong data typing, for example, is just one feature that explodes code size, but it fits perfectly with the factory model. In 5+ years of intense coding with untyped R/Basic I recall a total of maybe 3 crashes from trying to do math on a string (seriously, a non-issue for the die-hard maverick!). Is code size under-rated? Conveniently swept under the carpet?
I'd relate the importance of code size to the amount of RAM you can afford. For a long time now it has been cheaper to buy RAM than to hire someone capable of shrinking your code base - unless maybe you have a mass-market application that will run on millions of boxes.
<snip>
I do use PostgreSQL for data that is person-entered, i.e. where interactivity facilitates on-the-spot correction of rejected inputs. The inbuilt constraints of the server db model clearly target multi-person updaters who may or may not be focusing on what they are doing. Great for keeping mega-stores of artificially structured (simple) stuff like phone lists; not so good at accepting all the vagaries the real world may throw at it in automated background-capture scenarios, sometimes from suspect sources.
You can always do input into temporary tables structured more like the input data and process to normalized form later (if needed).
BerkeleyDB may break occasionally, but it is recoverable with basic OS tools and a text editor if the provided recovery tools fail (not locked in a proprietary binary closet -- been there, done that, still hurting!).
Sqlite should be equally usable - and easier to convert to/from server backends. That might not have been true long ago, though.
<snip>
Ah, now you tell me!
You should have asked sooner. I still have a few CentOS 3 boxes going strong. I had problems with perl modules and a few other things in the early stages of CentOS 4 and skipped over that release for most systems.
And there are IDEs like eclipse that do a lot of the boilerplate grunt work for you, and maven to manage components as you scale up.
eclipse froze my first FC4 tryout ... is for me what BerkeleyDB is for you.
I do agree personally -- I can't think in java, and I do much better when you can squeeze the logic of a routine onto one page where you can see it all at once. ..... I'd relate the importance of code size to the amount of RAM you can afford. For a long time now it has been cheaper to buy RAM than to hire someone capable of shrinking your code base - unless maybe you have a mass-market application that will run on millions of boxes.
By 'size' I was actually referring to 'source size': (1) you say it above, "..[all micro] logic..[on] one page.."; (2) the same idea in a project-macro-logic sense, vis-à-vis the sheer quantity of code lines to manage overall. I do rate java for designing GUI interfaces. No argument there. GUI components ARE objects. But most of the real world ain't... (malfits being the reason for extensibility in OOP)... and it turns out, I think, that re-usability mostly goes hugely custardly. (And as an aside, if the best programming is the truly 'creative' kind, not just spending time finding the right lego blocks to make new combinations with, then OOP fits badly anyway.)
Why BerkeleyDB? I don't know of an embedded-db equivalent that will store 'any and every data exactly as is'.
I'd think sqlite first - these days anyway.
You can always do input into temporary tables structured more like the input data and process to normalized form later (if needed).
Not if normalising implies any user interactivity (the usual scenario). An early input in the automated background input stream off the wire generates some KEY which a later input from the same stream, hours later, may try to match against. Would the latest sqlite accept a KEY that was, say (and maybe badly), both QP-encoded and HTML-encoded? And even if it did so now, would that be guaranteed to endure through future versions of those encodings without ever rejecting? Even if the 'normalising' could be automated satisfactorily to avoid all rejections for sqlite right now within the capture process, it would be biased to sqlite's constraints -- an unwanted extra layer that may well also corrupt the heavy-duty search-engine normalising already being performed on difficult data (e.g. stemming, scoring, indexing).
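For what it's worth, SQLite declares no encoding rules of its own for BLOB columns, so an oddly encoded key is stored and matched byte-for-byte. A minimal Perl sketch, assuming DBD::SQLite is available; the key, file name and payload are invented:

  use strict;
  use warnings;
  use DBI;    # assumes DBD::SQLite is installed

  my $dbh = DBI->connect('dbi:SQLite:dbname=capture.sqlite', '', '',
                         { RaiseError => 1 });
  $dbh->do('CREATE TABLE IF NOT EXISTS capture (k BLOB PRIMARY KEY, v BLOB)');

  # A key that is (badly) QP- and HTML-encoded goes in verbatim...
  my $key = '=3Dlt=3B&amp;odd=20key';
  $dbh->do('INSERT OR REPLACE INTO capture (k, v) VALUES (?, ?)',
           undef, $key, "payload\x00bytes");

  # ...and matches byte-for-byte later, regardless of what the bytes mean.
  my ($v) = $dbh->selectrow_array('SELECT v FROM capture WHERE k = ?',
                                  undef, $key);
  $dbh->disconnect;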
In a sense, BDB serves the "temporary tables" suggestion already, but so much more so as to be sufficient in itself. You seem unduly anti-BDB? Quite frankly I have had far less trouble with it than with any other db, ever. In the past year I have had to do one dump/(re)load [about 1 hour], and twice delete the environment files [about 1 minute] so that they would self-rebuild on next access. That's it!
Which doesn't mean I'm not also always open to suggestions.
<snip>
You should have asked sooner. I still have a few CentOS 3 boxes going strong. I had problems with perl modules and a few other things in the early stages of CentOS 4 and skipped over that release for most systems.
And that proves that the time I can make available for discovery falls short by heaps. Nearing closure on a very long project right now, thinking ahead to next steps and reviewing the robustness of past decisions is very enjoyable and an unusual luxury for me. Playing the 'distro-hopping' game that many seem to indulge in, for instance, has just been out of the question.

I have some processes shared with win boxes over RPC (producing Excel charts) which require that both the Perl version and the versions of modules like Storable.pm exactly match, so I am largely at the mercy of what the ActiveState repo provides as to what must run on the linux box too. I need lots of browsers too (alongside Firefox) for day-to-day work. The old versions of Mozilla, Konqueror and Opera which will run under FC4 are critically dysfunctional on some operations I need, so I am looking to try a more up-to-date team.

CentOS is beginning to look more & more like my cup of tea, and since I gather that a new major is imminent, maybe it will support the new Google Chrome (along with Seamonkey, Opera 11+)? I wonder if there is a list of packages somewhere. If the repo web page for CentOS provided the actual repo address I was going to try directing my FC4 yum there for listings, but I cannot seem to find it. It may still be the case that I cannot have 'both worlds' on one box, or maybe I can try a CentOS + VM-XXX configuration.... hmmm.
Sean
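On the Storable point above: the wire format is versioned, which is why the Storable releases on both ends of the RPC link have to agree. A small sketch with an invented payload; nfreeze() at least takes byte order out of the equation:

  use strict;
  use warnings;
  use Storable qw(nfreeze thaw);

  # nfreeze() writes network byte order, portable across architectures;
  # the Storable versions on each end must still agree on the format.
  my $frozen = nfreeze({ title => 'chart', series => [1, 2, 3] });
  # ... ship $frozen over the RPC link to the win box ...
  my $data = thaw($frozen);
  print "local Storable version: $Storable::VERSION\n";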
On Tue, Dec 21, 2010 at 11:56:26AM +1300, Sean wrote:
CentOS is beginning to look more & more like my cup of tea, and since I gather that a new major is imminent, maybe it will support the new Google Chrome (along with Seamonkey, Opera 11+)?
Just for the record, someone just made a very nice build of google-chrome for CentOS 5.x that works quite well--as does Opera 11.
See the CentOS forums--if you are interested, but can't find it, post again and I'll find the thread.
On Mon, Dec 20, 2010 at 3:05 PM, Scott Robbins scottro@nyc.rr.com wrote:
<snip>
See the CentOS forums--if you are interested, but can't find it, post again and I'll find the thread.
What do you mean you cannot find it :-P
https://www.centos.org/modules/newbb/viewtopic.php?topic_id=23746&forum=...
Akemi
On Mon, Dec 20, 2010 at 03:12:46PM -0800, Akemi Yagi wrote:
What do you mean you cannot find it :-P
https://www.centos.org/modules/newbb/viewtopic.php?topic_id=23746&forum=...
I didn't look at that time, I was multi-tasking. :)
However, thank you as always. :)
On 12/20/10 4:56 PM, Sean wrote:
By 'size' I was actually referring to 'source size': (1) you say it above, "..[all micro] logic..[on] one page.."; (2) the same idea in a project-macro-logic sense, vis-à-vis the sheer quantity of code lines to manage overall.
Agreed, but long term the main thing is that you can trust the components you've used not to change interfaces and make you re-visit them once something is working. If you can treat something as a black box and trust it, the size of the component isn't that important.
You can always do input into temporary tables structured more like the input data and process to normalized form later (if needed).
Not if normalising implies any user-interactivity (the usual scenario).
Normalizing is for database efficiency and reducing redundancy and usually doesn't relate well to the set of information a user would have at once anyway.
An early input in the automated background input stream off the wire generates some KEY which a later input from the same stream, hours later, may try to match against. Would the latest sqlite accept a KEY that was, say (and maybe badly), both QP-encoded and HTML-encoded? And even if it did so now, would that be guaranteed to endure through future versions of those encodings without ever rejecting?
I'm not sure how the database involved matters for that. Or why you wouldn't fix it at the first opportunity.
In a sense, BDB serves the "temporary tables" suggestion already, but so much more so as to be sufficient in itself. You seem unduly anti-BDB? Quite frankly I have had far less trouble with it than with any other db, ever. In the past year I have had to do one dump/(re)load [about 1 hour], and twice delete the environment files [about 1 minute] so that they would self-rebuild on next access. That's it!
I suppose I hold a grudge longer than necessary. Long ago I inherited a bulletin board system that held user data (login/password/read-message list) in a bdb. It had been put in production, but the author left before the 'read lists' grew to the point where a bug in bdb made them regularly overwrite random adjacent data, including other people's accounts. It was not a fun experience. Also, you might note that subversion was originally released with bdb as the default backend storage. It isn't now, due to regular problems -- although those were probably file locking/sharing issues in multiuser use more than anything else. And the other problem is that if you are just tossing blobs of data in your application's native representation into the DB, it doesn't give you a chance to access it from other languages or platforms when you want to make evolutionary changes or add different components.
I have some processes shared with win boxes over RPC (producing Excel charts) which require that both the Perl version and the versions of modules like Storable.pm exactly match, so I am largely at the mercy of what the ActiveState repo provides as to what must run on the linux box too. I need lots of browsers too (alongside Firefox) for day-to-day work. The old versions of Mozilla, Konqueror and Opera which will run under FC4 are critically dysfunctional on some operations I need, so I am looking to try a more up-to-date team.
I'd think java would do a better job of charting. Have you looked at the Pentaho tool set? Their site is somewhat confusing about what is commercial and what is free, but look for the 'community edition'. Among other things there is a GUI report-writing tool to do the layout you want with data from an assortment of sources. Then there is the 'bi-server' that will render the report as a web service with html/pdf/excel/csv/text output -- and you can schedule reports to be run at pre-set times into a cache or email. Jasper Reports and BIRT have similar capabilities.
CentOS is beginning to look more & more like my cup of tea, and since I gather that a new major is imminent, maybe it will support the new Google Chrome (along with Seamonkey, Opera 11+)? I wonder if there is a list of packages somewhere. If the repo web page for CentOS provided the actual repo address I was going to try directing my FC4 yum there for listings, but I cannot seem to find it. It may still be the case that I cannot have 'both worlds' on one box, or maybe I can try a CentOS + VM-XXX configuration.... hmmm.
Look at what RHEL 6 has. I think you can still download their beta -- or Scientific Linux has their alpha release out (SL is very similar to CentOS, rebuilding from the same source, but they add a few things). http://distrowatch.com/?newsid=06401
Les Mikesell wrote:
If you can treat something as a black box and trust it, the size of the component isn't that important.
"If" or "IFF" ..(IF AND ONLY IF)..? A deep scepticism forces me to treat all boxes as grey no matter how long since last visited... (including my own, which are a sort of dark grey!?).
Sean
On 12/21/2010 1:06 PM, Sean wrote:
If you can treat something as a black box and trust it, the size of the component isn't that important.
"If" or "IFF" ..(IF AND ONLY IF)..? A deep scepticism forces me to treat all boxes as grey no matter how long since last visited... (including my own, which are a sort of dark grey!?).
Yes, especially my own. That's the value of using components that are maintained by others and widely used. The code gets much better QA than I could ever do myself, and all you have to do is peek at the mailing list once in a while to know if previously working interfaces are going to be broken when you update. For things from the base CentOS package repositories and, to a slightly lesser extent, EPEL, you can assume someone else has already made sure that the updates aren't behavior-changing and that required dependencies are met.
Java stuff seems to be more self-contained so there is a little more freedom to mix component versions between applications and you aren't completely tied to someone else's update schedule.
On Tue, Dec 21, 2010 at 2:29 PM, Les Mikesell lesmikesell@gmail.com wrote:
<snip>
Although sometimes the one finding the bug is *me*, or people like me.
Java stuff seems to be more self-contained so there is a little more freedom to mix component versions between applications and you aren't completely tied to someone else's update schedule.
Hmm. Getting it to play nicely with others can be an adventure, particularly getting the bits installed cleanly. (Working on an RPM for glassfish right now...)
Les Mikesell wrote:
<div class="moz-text-flowed" style="font-family: -moz-fixed">On 12/21/2010 1:06 PM, Sean wrote: > >> If you can treat something as a black box and trust it, the size of >> the component isn't that important. > "If" or "IFF" ..(IF AND ONLY IF)..? A deep scepticism forces me to > treat all boxes as grey no matter how long since last visited... > (including my own, which are a sort of dark grey!?).
Yes, especially my own. That's the value of using components that are maintained by others and widely used. The code gets much better QA than I could ever do myself and all you have to do is peek at the mail list once in a while to know if previously-working interfaces are going to be broken if you update. For things from the base CentOS package repositories and to a slightly lesser extent EPEL, you can assume someone else has already made sure that the updates aren't behavior-changing and required dependencies are met.
Java stuff seems to be more self-contained so there is a little more freedom to mix component versions between applications and you aren't completely tied to someone else's update schedule.
Yes, superior exploitation must be granted to Java (over, say, CPAN, C libraries etc.) in scenarios that are naturally exploitation-heavy, such as you indicate. But for everything? Hmmm. A long-ago tale goes thus: there was once a problem I would have attacked with half a page of Prolog had I known I would end up writing all the code myself, no matter how hard it was to actually get it right. I conceded to Java for the sake of team effort and wrote my portion as far as I could, but was unable to test properly without the other 3 portions which, as it turned out, never eventuated. Towards the death knell I stayed up and wrote them myself, chapter after chapter... on... and on... and on. It ran, but, no surprise, it produced incorrect results, and there was too much code to go back through and try to fix all that spaghetti logic in the time available. A lesson learnt, and I haven't written a line of Java since!
On 12/23/2010 2:23 PM, Sean wrote:
<snip>
I see two different ways this applies. One is that large applications typically assemble their own collections of jars instead of expecting them in the system classpath, so they don't affect each other's updates (like using static libs for everything). The other is that java developers seem to buy into writing unit tests and doing full regression testing before releases more than anyone else (my impression, anyway...). Maybe it is because development is so cumbersome already that adding testing overhead doesn't make that much difference -- or they've had to automate it with something like Hudson.
a bug in bdb made them regularly overwrite random adjacent data, including other people's accounts. It was not a fun experience.
Ouch! I wonder if a Perl 'tied-hash' interface was being used along with BDB 'duplicate keys'? A definite no-no. You would certainly get overwrites, though not quite random ones.
Sean
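A sketch of the tied-hash-plus-duplicate-keys combination Sean wonders about above (file name and data invented). With R_DUP enabled, the hash interface does something surprising, though in current DB_File it shows up as silently duplicated records rather than overwrites:

  use strict;
  use warnings;
  use DB_File;
  use Fcntl;

  my $info = DB_File::BTREEINFO->new;
  $info->{flags} = R_DUP;    # allow duplicate keys in the btree

  tie my %h, 'DB_File', 'dups.db', O_CREAT|O_RDWR, 0644, $info
      or die "tie failed: $!";

  $h{user} = 'alice';
  $h{user} = 'bob';      # adds a SECOND record instead of replacing
  print "$h{user}\n";    # the tied read still returns 'alice'
  untie %h;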
I'm sorry (I know, don't feed the trolls), but recently there have been quite a few remarks resembling this. Also, I'm beginning to believe the remark made earlier by ???, which roughly stated "Each time a new release is due, the flame wars erupt".
Just what part of "CentOS is a mirror of Redhat OS" do you miss?
Now please, return to the rpm building and raid/lvm discussions, as I find them very interesting and educational.
michael...
<snip>
Ah, a reminder that it is always dangerous to unveil the vague? Sorry ... I should have pre-read 6000 pages from Redhat ... (but maybe I did!). Sean
Michael R. Dilworth wrote:
<snip>
On Thursday, December 16, 2010 11:45:36 pm Sean wrote:
"Longevity of Support" is an attractive drawcard for CentOS if it means the exact opposite of Fedora's "short support cycle" that does not provide updating of infrastructural libraries for very long, libraries which newer versions of applications (like Firefox, Thunderbird, Opera etc) depend on and which wont install unless the libraries are also newer versions? But is that what it means -- ie that those infrastructural libraries (libpango, libcairo etc) are continuously updateable to fairly recent versions?
Longevity (things continue to work without breakage for a long time): this kind of implies "don't keep stuff continuously updated to recent versions", don't you think?
Support (help if it breaks, security updates etc.): this is often realised by fixing bugs in the shipped versions and/or backporting fixes.
If so, the problem is in reconciling that meaning with the reputation of CentOS for only supporting older versions of applications (e.g. Firefox 1.5, Thunderbird 1.0 etc.).
"yum list firefox" on CentOS-5 as of right now: ... firefox.x86_64 3.6.13-2.el5.centos updates
It does reconcile, of course, if the implication is merely that the CentOS user must compile and install the later versions of such applications from source, rather than having the luxury of pre-packaged binaries. It doesn't reconcile if there is some other critical reason why newer applications just won't install. But which is it?
It's very hard to get both "I want to run the latest software" and "I want it to be stable for many years". When you run something like, for example, CentOS-5 you get stability (meaning things do not change completely from one month to the next) and a long life (you can run it with updates enabled for many years).
What you _don't_ get is the latest upstream version of libfoobar that would allow you to build or install application-whatever.
...
Then there is a further question, I'm afraid. Since CentOS also specifically targets the profile of a so-called 'enterprise/server user', what does that actually entail?
It means pretty much what I've outlined above.
Does it mean concrete security strictures which bolt down non-root users, or does it merely mean the availability of SELinux (which can be turned OFF)?
Enterprise vs. non-enterprise linux has very little to do with default security behaviour. It has more to do with lifetime, support, and what kind of 3rd-party software and hardware it has been tested and qualified with.
/Peter
On 12/17/10 8:18 AM, Peter Kjellström wrote:
Longevity (things continue to work without breakage for a long time): this kind of implies "don't keep stuff continuously updated to recent versions", don't you think?
It could work that way if the upstream developers of the thousands of included projects understood the need for backwards compatibility to keep things working. They don't.
On Friday, December 17, 2010 10:55:58 am Les Mikesell wrote:
<snip>
It could work that way if the upstream developers of the thousands of included projects understood the need for backwards compatibility to keep things working. They don't.
In some cases the breakage is intentional. In others, components become unmaintained, or worse. Case in point: way back in KDE 1.x or 2.x days I made up some documents in KWord that included some embedded diagrams using a component included in that old KDE but not in newer KDE. Result? While KWord opens the files ok, there are no longer any embedded diagrams.
So I actually keep a really old Linux dist (Mandrake 5.3, or maybe Red Hat 6.2; can't remember at the moment, been too long) around just in case I need to open one of those files; none of the export choices in KWord of that day include the ability to export the diagrams, and I just haven't had time to convert the diagrams (it's been a long time since I needed one of those anyway, long enough that I forget the name of the component....argh....).
On 12/17/10 10:54 AM, Lamar Owen wrote:
<snip>
To overgeneralize, that's one of the big differences between free and commercial software. Commercial software that has a customer base it can't afford to lose will rarely break backwards compatibility, or if it does, the vendor will provide conversion tools to manage the migration. But free software developers have nothing to lose from wild and crazy changes, which apparently are what they like to do. That's what makes 'enterprise' distributions so important: they help manage the changes. Linux would be much less popular (if you can call its tiny share that) without them, especially after the kernel dropped the convention of putting its experimental changes on an odd-numbered branch.
Les Mikesell wrote:
<snip>
To overgeneralize, that's one of the big differences between free and commercial software. Commercial software that has a customer base it can't afford to lose will rarely break backwards compatibility, or if it does, the vendor will provide conversion tools to manage the migration.
<snip>
Either you never dealt with Apple abruptly terminating support for your hardware -- the CPU, for instance -- or the memory was so painful that you blocked it out :). On the flip side, Apple fans seem to be unusually resilient. I'm not bitter...
On 12/18/10 1:25 AM, cpolish@surewest.net wrote:
<snip>
Apple is not really a software company. Everything you buy from them is tied/bundled with hardware. I think their goal in updating software is always to force you to buy new hardware.
On Friday, December 17, 2010 04:55:58 pm Les Mikesell wrote:
<snip>
It could work that way if the upstream developers of the thousands of included projects understood the need for backwards compatibility to keep things working. They don't.
While fine in theory, this wouldn't work in real life, since they would have to be backwards compatible not only for their official features but also for bugs/quirks/unintended features.
So even if those thousands of upstream projects managed to remain (from their perspective) perfectly backwards compatible things would still break.
Not to mention the need to break backwards compatibility once in a while to move projects along (read: major versions).
/Peter
On 12/17/10 11:11 AM, Peter Kjellström wrote:
<snip>
Not to mention the need to break backwards compatibility once in a while to move projects along (read: major versions).
That 'need' kind of depends on how bad your original interface designs were. How much has the kernel needed to break from either Posix or the SysVr4 spec?
On Thursday, December 16, 2010 05:45:36 pm Sean wrote:
If so, the problem is in reconciling that meaning with the reputation of CentOS for only supporting older versions of applications (e.g. Firefox 1.5, Thunderbird 1.0 etc.).
Where do people get this? On one of my up-to-date CentOS 5 VMs:

  [root@zoneminder1 ~]# cat /etc/redhat-release
  CentOS release 5.5 (Final)
  [root@zoneminder1 ~]# rpm -qi firefox
  Name    : firefox          Relocations: (not relocatable)
  Version : 3.6.13           Vendor: CentOS
  [snip]
  [root@zoneminder1 ~]# yum list thunderbird
  Available Packages
  thunderbird.x86_64    2.0.0.24-13.el5.centos    updates
  [root@zoneminder1 ~]#
On one of my CentOS 4 boxes, fully up to date:

  [root@pachyderm ~]# cat /etc/redhat-release
  CentOS release 4.8 (Final)
  [root@pachyderm ~]# yum list firefox
  Available Packages
  firefox.i386    3.6.13-3.el4.centos    update
  [root@pachyderm ~]# yum list thunderbird
  Available Packages
  thunderbird.i386    1.5.0.12-34.el4.centos    update
  [root@pachyderm ~]#
Hmmm, how about CentOS 3? (Of course, I have DAG enabled on that box, so it shows up):

  [root@campus root]# cat /etc/redhat-release
  CentOS release 3.9 (Final)
  [root@campus root]# yum list firefox
  [snip]
  Looking in Available Packages:
  Name       Arch    Version            Repo
  ------------------------------------------
  firefox    i386    0.8-3.1.el3.dag    dag
And thunderbird isn't available. Not surprised at the age, though, as that's Fedora Core 1 timeframes for C3.
So on the currently supported CentOS releases, 4 and 5, Firefox 3.6.13 is available on both. So where does this FUD of 'FF 1.5 only' come from?
Lamar Owen wrote:
<snip>
(Nod). Wish I had time to work on Scott Shawcroft's distrology (http://www.oswatershed.org/) to add Red Hat and CentOS to the evaluations. The ability to compare current metrics ranking distribution package freshness is pretty cool.