On 6 April 2014 17:45, Les Mikesell <lesmikesell@gmail.com> wrote:
On Sun, Apr 6, 2014 at 3:01 PM, Stephen John Smoogen <smooge@gmail.com> wrote:
But OSTree is much more flexible than traditional block replication because:
It is very unlike the unix philosophy to need a lot of ways to do something. Why isn't there one tool that does it right in the first place - and can work appropriately for 2 systems the same as 2,000?
What Unix philosophy or religion?
Ken Thompson's.
That is the one-liner elevator pitch. It gets much more nuanced when you get into the details and the complexity of where things go. Plus it was written when text was universal and was the main thing people dealt with. Today text is a deep-down thing that gets dealt with, but not what people actually interact with.
The whole 'one tool does one thing' is not what Unix has ever been.. it is a rosy-glasses view forgetting all the arguments over whether awk+sed+sh or sh+sed+awk or special program G was the best way to solve a problem. Which gets down to the fundamental issue: humans are not lock-step creatures. They will think differently and see different ways to solve a problem, and the universe is complex enough that most of those ways work. That is exactly why people get frustrated with each other: they would each like just one way to do things (their way).
That's a good description of what has kept unix out of the mainstream for decades. I can understand having all the different incompatible flavors when the driving force was the attempt to lock users of the commercial versions in with their own different minor extensions, but it doesn't make any sense in the free/reusable software world. It's not that I want something that works 'my' way, it is that I want something that I can learn once - or train staff to do once, not monthly. And I want it to work the same way for 2 machines as for thousands, and without rebuilding a bunch of supporting infrastructure every time someone has a different idea. Is that too much to ask? Why does every change have to make you throw out your existing knowledge and infrastructure instead of just fixing the small remaining problems?
Rule 3 of his philosophy:
Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
Rule 4 of the Unix philosophy:
Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.
And then
When in doubt use brute force.
These are the core issues that we all run into and that cause us to froth and foam about change. We had a lovely 30 years where the following factors buffered us from those rules:
1) The number of programmers solving these problems was in the hundreds, versus the millions it is today. Out of those hundreds you had camps, which meant the central architects could probably be counted on one hand. That meant group think would allow for 'consensus' about how to solve problems and keep the number of different solutions to a minimum.
2) Due to the cost of leased lines and mailing tapes, it would take years for the churn those rules generate to filter out. Thus your 2-4 Unix boxes could go for years without anything but software patches, and might not see the next architecture change to the OS until the next hardware rollout. [Unless you were adventurous and could have the VAX down for a week while you worked on the phone with some tech over why cc didn't produce the kernel you wanted.]
3) Your site was pretty isolated, which reinforced group think via purchasing. You might be a VAX shop or a Sun shop or an HP shop, but in most cases the majority of the boxes would be one architecture and one vendor would supply the OS. You might have other systems, but they would be small fry compared to the main architecture.. and normally you would build tooling to make the one-offs look like the main vendor via symlinks etc.
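For readers who never had to do this, here is a minimal sketch of the symlink shimming described in point 3. The paths and tool names are hypothetical (a real site would shim something like /usr/local/bin); temp directories stand in for the vendor and site paths so the sketch is self-contained:

```shell
#!/bin/sh
# Hypothetical sketch: make a "one-off" box's tool layout mirror the layout
# the main vendor's systems use, so site scripts run unchanged.
set -e

VENDOR_BIN=$(mktemp -d)          # stands in for where this vendor installs tools
SITE_BIN=$(mktemp -d)/site-bin   # stands in for the path site scripts expect

# Pretend the vendor shipped its own awk and sed in its private directory.
for tool in awk sed; do
    printf '#!/bin/sh\nexec /usr/bin/%s "$@"\n' "$tool" > "$VENDOR_BIN/$tool"
    chmod +x "$VENDOR_BIN/$tool"
done

# Shim the expected layout: symlink each vendor tool into the site path,
# skipping anything that already exists there.
mkdir -p "$SITE_BIN"
for tool in awk sed; do
    [ -e "$SITE_BIN/$tool" ] || ln -s "$VENDOR_BIN/$tool" "$SITE_BIN/$tool"
done

ls -l "$SITE_BIN"
```

Scripts written against the main vendor's layout can then just put the shim directory on PATH, which is the whole trick: one canonical layout, many differing boxes underneath.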
4) Buying systems was hard because they were extremely expensive.. adjusting for inflation, we are talking millions of dollars of investment for 1-2 systems, while today we can have hundreds of boxes for that amount. You bought 1 box and made it do everything you needed.
However, none of these factors exist anymore. Systems are cheaper, which means we can have tons of boxes dedicated to one task, which means you can write your tools for that task very well even though they won't work as well on other stuff. The internet and other changes mean that you are no longer isolated: you can see all the different ways to solve a problem, without the outside filter that said if you go with vendor/architecture X you will solve all your problems X's way versus Y's way. The internet also makes it easier for the churn to be pushed out. Where this stuff might once have sat in a lab for years and only shipped when the vendor decided to push out a new hardware system, you can see it now. And finally, we have much less group think than we had back then, because the barrier to thinking differently is much lower. [That doesn't mean it isn't there, but it is a lot less.] Thus instead of 5-10 brute force solutions to a problem you have 100s or 1000s.
It isn't nice or pretty, and there are days when I feel like a dinosaur wishing the damn meteor would hurry up and get here.. but it is also just how things have changed, and I just need to figure out ways to filter stuff better so it works in the environments I need it to work in.