On Fri, Feb 18, 2011 at 11:08 PM, Larry Vaden vaden@texoma.net wrote:
> On Fri, Feb 18, 2011 at 9:51 PM, Nico Kadel-Garcia nkadel@gmail.com wrote:
>> I've repeatedly seen this sort of "I can do it better myself, just the way I think it should work!" with system auditing tools, source control systems, and software building structures. It's usually far, far more efficient to learn the existing structure well and build on it than to start from scratch: a lot of hard-won lessons are very expensive to relearn.
> And, since I've been around since the ASR 33 days of paper tape, when you had to really think straight and maintain good relations with the operators in order to get 7 compiles a day, I wonder why we still spend time waiting on files to be compressed and decompressed when you can't fill up a modern-day disk drive with a project's code, much less the array of such drives most modern build systems would have.
> Maybe you could ease up on the attitude while you're at it.
There are several answers. One is that sending hundreds of megabytes or full DVDs of material still costs time, money, and bandwidth, and the result ties up disk space better reserved for data that is sensitive to streaming performance: 10 blocks are more likely to be contiguous, and to perform better, than 20 blocks. Even the bandwidth of reading off local disk matters for high-performance components, such as the many megabytes of dynamically decompressed Java ".jar" files and RPM packages in their compressed CPIO format. And even simple operations like checksumming and PGP signing take longer on larger files, and that performance penalty can grow quite large.
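To put rough numbers on that last point, here's a quick Python sketch. The sizes, the 10 Mbit/s uplink, and zlib standing in for whatever compressor a build system actually uses are all just assumptions for illustration, but it shows how hashing and transfer time scale with the bytes you ship:

    import hashlib
    import time
    import zlib

    # Hypothetical payload: roughly 58 MB of repetitive data, standing in
    # for a large, compressible build artifact.
    payload = (b"some repetitive build output\n" * 1024) * 2048

    def sha256_seconds(data):
        # Time a single SHA-256 pass over the data.
        start = time.perf_counter()
        hashlib.sha256(data).hexdigest()
        return time.perf_counter() - start

    compressed = zlib.compress(payload, 6)

    print("raw size:        %.2f MB" % (len(payload) / 2**20))
    print("compressed size: %.2f MB" % (len(compressed) / 2**20))
    print("sha256 raw:        %.3f s" % sha256_seconds(payload))
    print("sha256 compressed: %.3f s" % sha256_seconds(compressed))

    # Transfer time on an assumed 10 Mbit/s uplink: every byte you don't
    # compress away is a byte you wait for, checksum, and sign.
    uplink_bytes_per_s = 10e6 / 8
    print("transfer raw:        %.1f s" % (len(payload) / uplink_bytes_per_s))
    print("transfer compressed: %.1f s" % (len(compressed) / uplink_bytes_per_s))

Run it and the checksum and transfer numbers shrink roughly in proportion to the file, which is the whole argument in miniature.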
And while disks are cheap, splitting content across multiple disks (whether DVD or external USB) is more expensive and more awkward, and the compression of streaming media such as audio and video lets a quite modest network connection carry your desired content far more effectively. So compression is still quite useful.
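The streaming arithmetic is just as easy to sketch. The figures below are illustrative assumptions (CD-quality PCM versus a common lossy rate on a modest ~1.5 Mbit/s link), not measurements:

    # Raw CD-quality stereo audio versus a typical lossy-compressed stream,
    # and how many of each fit on a modest link.
    raw_audio_kbps = 44100 * 16 * 2 / 1000   # ~1411 kbit/s uncompressed PCM
    compressed_audio_kbps = 128              # a common lossy encoding rate
    link_kbps = 1500                         # a modest ~1.5 Mbit/s connection

    print("raw streams per link:        %.1f" % (link_kbps / raw_audio_kbps))
    print("compressed streams per link: %.1f" % (link_kbps / compressed_audio_kbps))

Roughly one uncompressed stream fits where a dozen compressed ones would, which is why the compression step is still worth the CPU time.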