[CentOS] looking for cool, post-install things to do on a centos 5.5 system

Fri Sep 17 17:18:14 UTC 2010
Les Mikesell <lesmikesell at gmail.com>

On 9/17/2010 10:47 AM, m.roth at 5-cent.us wrote:
>
> Ah, no. I wrote 30 scripts around '91-'92 to take datafiles from 30
> sources and reformat them, to feed to the C program I'd written with
> embedded sql, in place of the d/b's sqlloader (*bleah*). Then, 11 years
> ago, I wrote a validation program for data that was being loaded by
> another program that I didn't want to change; the data had been exported
> from ArcInfo, and had to go into our Oracle d/b.
>
> Really simple to do in awk - just so much of it, and no, perl would have
> offered no improved/shorter way to do it,

I don't get it.  Why wouldn't you just talk to the db directly with 
perl's DBI/DBD, replacing both the awk and C parts?  I do that all the 
time.  Or was that before DBI existed - or before the DBD you needed?

> and yes, I do know perl - in
> '04, for example, I rewrote a call routing and billing system from perl
> (written by my then-manager, who'd never studied programming, can you say
> spaghetti?) into reasonable perl. Actually, I just wrote a scraper in
> perl, using HTML::Parser.  Anyway, the point of that was to demonstrate
> that I know both, and awk is better, IMO, for some jobs.

That depends on how you define better.  I can see how it could save a 
microsecond of loading time on tiny jobs, but not how it can do anything 
functionally better.  Have you tried feeding one of your long scripts to 
a2p and timing some job with enough input to matter?  I'd expect perl to 
win anything where there is enough actual work to make up for the 
compile/tokenize pass.

-- 
   Les Mikesell
    lesmikesell at gmail.com