I can't resist. Read the thread that was pointed to on lkml. ROTFLMAO.
*Real* UNIX addressed these problems long ago. I guess the "Gurus" suffer from NIH (Not Invented Here) syndrome.
Given a "general purpose" system, tunability is a must. UNIX, as delivered by USL in such examples as Sys V, had tunables that let admins tune to their needs. A single "swappiness" value is woefully inadequate.
Among the tunables were how much memory for cache, how much for buffers, how much for X/Y/Z, high and low water marks for all sorts of memory-related stuff, and a very valuable attribute bit for executables called the "sticky bit". It is not the "sticky bit" as used now. It said: lock this app in memory and never swap it. A variation on that (you couldn't keep the original semantics with the size of apps these days) would address some of the "responsiveness" issues raised by some. Some admin tunables would address the other issues.
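(For concreteness, here is a minimal C sketch -- an illustration, not anything from the thread -- of the closest modern Linux/POSIX analogue to that lock-in-memory semantic: a process pinning its own pages with mlockall(2). On Linux this needs CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK; the work the process does is, of course, a placeholder.)

    /* Pin this process in RAM with mlockall(2) -- roughly the
     * per-process analogue of the old "lock this app in memory and
     * never swap it" sticky-bit semantics described above. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Lock all pages mapped now (MCL_CURRENT) and any mapped
         * later (MCL_FUTURE) so the kernel won't swap them out. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return 1;
        }

        /* ... latency-sensitive work here; pages stay resident ... */

        munlockall();   /* release the locks before exiting */
        return 0;
    }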
The "Gurus" need to learn something my father taught me. "A smart man learns from his mistakes. A wise man learns from the mistakes of others". I'm really smart. :-( And, apparently, so are the "Gurus". To think they have VM this long and no one has thought to swipe these good ideas from real UNIX. And they're still argueing about all that as if it has never been hashed out and addressed before.
On Mon, Jun 05, 2006 at 05:00:52PM -0400, William L. Maltby wrote:
I can't resist. Read the thread that was pointed to on lkml. ROTFLMAO.
<snip>
Well, Bill, I'm sure they will be happy to receive a patch from you, with a new VM implementation that actually works (including on SMP machines and multiple platforms).
No? Well, talk is cheap, isn't it?
--
Rodrigo Barbosa <rodrigob@suespammers.org>
"Quid quid Latine dictum sit, altum viditur"
"Be excellent to each other ..." - Bill & Ted (Wyld Stallyns)
On Mon, 2006-06-05 at 18:14 -0300, Rodrigo Barbosa wrote:
On Mon, Jun 05, 2006 at 05:00:52PM -0400, William L. Maltby wrote:
<snip>
Well, Bill, I'm sure they will be happy to receive a patch from you, with a new VM implementation that actually works (including on SMP machines and multiple platforms).
No? Well, talk is cheap, isn't it?
Yep. And so is snide, baiting commentary. I did my stint and now write only the code that I want to. And tolerate idiocy only as I desire. And comment as I see fit. And laugh at irony. And ... well, it doesn't matter, does it?
Having worked on *IX systems since 1978, I've gained an appreciation for some of the alternatives, and for the methods used to accomplish things.
<snip>
On Mon, 2006-06-05 at 17:00 -0400, William L. Maltby wrote:
<snip>
Given a "general purpose" system, tunability is a must. UNIX, as delivered by USL in such examples as Sys V, had tunables that let admins tune to their needs. A single "swappiness" value is woefully inadequate.
Actually, having these computed dynamically is much better than having to manually tune them every time your mix of programs changes or you add memory, except in some very unusual circumstances, like a server that does a single job forever. In the general case, consider whether you'd rather hire an expert admin to keep your system tuned or buy an extra gig of ram and let the OS figure out how to use it.
Les Mikesell wrote:
On Mon, 2006-06-05 at 17:00 -0400, William L. Maltby wrote:
<snip>
Actually, having these computed dynamically is much better than having to manually tune them every time <snip> ... consider whether you'd rather hire an expert admin to keep your system tuned or buy an extra gig of ram and let the OS figure out how to use it.
Well, technology marches on. These days, it's extremely cheap to throw hardware at the problem. It really wasn't that long ago that a gig of RAM would have cost a month or two (or more) of a typical admin's salary. :-)
Cheers,
On Mon, 2006-06-05 at 17:20 -0400, Chris Mauritz wrote:
Les Mikesell wrote:
On Mon, 2006-06-05 at 17:00 -0400, William L. Maltby wrote:
<snip my blather>
Well, technology marches on. These days, it's extremely cheap to throw hardware at the problem. It really wasn't that long ago that a gig of RAM would have cost a month or two (or more) of a typical admin's salary. :-)
I can't tell you how many times I've marveled at that. And I've had to make decisions using that guideline too many times to suit my "old school" "craftsmanship" mentality. But it does say something (and I'll not get into that, as I have nothing original there, I'm sure) when folks with 9ms HDs, DDR RAM in the GB range, CPUs at 3+GHz, GB networking, SANs, NASes, etc. still complain about lack of responsiveness, inadequate load capability, etc.
Just proves ol' Albert right, "Everything's Relative".
Anyway, regardless of cost, "Those who ignore history are doomed to repeat it". And the good solutions that allowed a blending of excellent code, human decision-making, and use of external factors are ignored. And so, as I pointed out (apparently to the irritation of Rodrigo), the discussion goes on for more generations.
Look at the upside of a *possible* POV that only code is needed. If that is carried to the extreme in all possible areas, lots of admins are unemployed, to the benefit of business. So they won't ship jobs to low-wage third-world countries anymore, because there won't be any need for the jobs.
The computer will do it all.
<snip sig stuff>
On Mon, 2006-06-05 at 16:17 -0500, Les Mikesell wrote:
On Mon, 2006-06-05 at 17:00 -0400, William L. Maltby wrote:
<snip>
Actually, having these computed dynamically is much better than having to manually tune them every time <snip>
I agree, sort of. The problem is that the OS is only somewhat heuristic. It learns, but only a small amount. The admin can run SAR reports and use other tools that profile activities, and he can use that information to "pre-bias" the VM so that it better achieves *long-term* performance and responsiveness goals and is less sensitive to "spikes" in one factor or another.
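(To make "pre-bias" concrete: a minimal C sketch -- illustration only, with an invented policy value -- that reads and overrides the single Linux swappiness knob criticized above, the kind of adjustment an admin might derive from SAR data. It assumes the standard /proc/sys/vm/swappiness interface and needs root to write.)

    /* Read /proc/sys/vm/swappiness and write back a value the admin
     * derived from profiling data. The "derived" value here is a
     * hard-coded placeholder, not a recommendation. */
    #include <stdio.h>

    static int read_tunable(const char *path)
    {
        int v = -1;
        FILE *f = fopen(path, "r");
        if (f) {
            if (fscanf(f, "%d", &v) != 1)
                v = -1;
            fclose(f);
        }
        return v;
    }

    static int write_tunable(const char *path, int v)
    {
        FILE *f = fopen(path, "w");    /* needs root */
        if (!f)
            return -1;
        fprintf(f, "%d\n", v);
        return fclose(f);
    }

    int main(void)
    {
        const char *knob = "/proc/sys/vm/swappiness";
        int old = read_tunable(knob);
        int biased = 10;    /* placeholder "SAR-derived" policy */

        printf("swappiness: %d -> %d\n", old, biased);
        if (write_tunable(knob, biased) != 0)
            perror(knob);
        return 0;
    }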
For many business applications, there is a list of performance requirements that can be prioritized. Priorities can vary, even within a node, based on many things, including time of day. An admin with access to the means can help ensure that his installation is meeting the goals necessary for the business.
When the best of the code is combined with the best of an admin's effort and data analysis, a good outcome is likely. Code only or admin with tools/means both produce less optimal results.
On Mon, 2006-06-05 at 17:37 -0400, William L. Maltby wrote:
On Mon, 2006-06-05 at 16:17 -0500, Les Mikesell wrote:
On Mon, 2006-06-05 at 17:00 -0400, William L. Maltby wrote:
<snip>
and data analysis, a good outcome is likely. Code only or admin with
s/with/without/
tools/means both produce less optimal results.
<snip sig stuff>
On Mon, 2006-06-05 at 17:37 -0400, William L. Maltby wrote:
Given a "general purpose" system, tunability is a must. UNIX, as delivered by USL in such examples as Sys V, had tunables that let admins tune to their needs. A single "swappiness" value is woefully inadequate.
Actually, having these computed dynamically is much better than having to manually tune them every time your mix of programs change or you add memory except in some very unusual circumstances like a server that does a single job forever. In the general case consider whether you'd rather hire an expert admin to keep your system tuned or buy an extra gig of ram and let the OS figure out how to use it.
I agree, sort of. The problem is that the OS is only somewhat heuristic. It learns, but only a small amount. The admin can run SAR reports and use other tools that profile activities, and he can use that information to "pre-bias" the VM so that it better achieves *long-term* performance and responsiveness goals and is less sensitive to "spikes" in one factor or another.
If you remember those tunables, you might also remember that there was a whole fairly large (and boring) book that told an administrator how to interpret various permutations of SAR reports and which way to adjust the tunable values. Even when Linux had more hard-coded tunables I was never able to find any equivalent reference to use them correctly.
For many business applications, there is a list of performance requirements that can be prioritized. <snip>
When the best of the code is combined with the best of an admin's effort and data analysis, a good outcome is likely. Code only or admin with tools/means both produce less optimal results.
Yes, but if you can define the solution, the computer could probably implement it better and certainly faster by itself. I think these days the developers would rather write the code to do it instead of the documentation to tell you what needs to be done.
On Mon, 2006-06-05 at 18:40 -0500, Les Mikesell wrote:
On Mon, 2006-06-05 at 17:37 -0400, William L. Maltby wrote:
<snip>
<snip> ... Sys V, had tunables that let admins tune to their needs. A single "swappiness" value is woefully inadequate.
Actually, having these computed dynamically is much better than having to manually tune them every time <snip>
... consider whether you'd rather hire an expert admin to keep your system tuned or buy an extra gig of ram and let the OS figure out how to use it.
<snip>
If you remember those tunables, you might also remember that there was a whole fairly large (and boring) book that told an administrator how to interpret various permutations of SAR reports and which way to adjust the tunable values. Even when Linux had more hard-coded tunables I was never able to find any equivalent reference to use them correctly.
Amen to that. When several flavors of real UNIX were being aggressively marketed by several corporations, when "craftsmanship" and engineering were somewhat important in keeping your job, when marketing had to offer "The Complete Package" (TM) in order to woo customers from a competitor, corporations saw value in investing in good documentation. Plus, your performance evaluation as an engineer might depend on producing documents that your PHB could appreciate and that helped to sell the product. Development was "structured" (in *some* fashion, yes it *really* was!) and there was peer pressure to "Do a Good Job". Complete opposite of what seems to be predominant today. I *think* - no way to tell with my limited view of the virtual world. From what I see, it seems to be "throw the s.*t out there and fix the reported bugs. Later".
That's a side-effect of labor becoming more and more expensive, hardware becoming cheaper, and "free software" and "open source" reducing the returns to business on proprietary products. Now add the fact that business can *cheaply* put something out and have it "maintained" for free, and they rake in the profits...
For many business applications, there is a list of performance requirements that can be prioritized. <snip>
When the best of the code is combined with the best of an admin's effort and data analysis, a good outcome is likely. Code only or admin with tools/means both produce less optimal results.
Yes, but if you can define the solution, the computer could probably implement it better and certainly faster by itself. I think these days the developers would rather write the code to do it instead of the documentation to tell you what needs to be done.
That is so true. None of us (myself, as a long-time developer, included) enjoyed the documentation process. It's like washing the dishes after eating a fine meal. Just no interest in that part of the evening.
The trouble with the real UNIX params, aside from the learning-curve problem you mention, was that the parameters were, in effect, the instantiation of the *results* of the calculations that the admin had to perform and the programmer had to envision in the first place. Since the programmer had envisioned what data had to be processed for what processing scenarios, an ideal solution would have been a data entry sheet that accepted various expected load parameters, time frames, performance objectives, ... and generated the set of appropriate tunables, which is what you suggest.
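(A toy C sketch of that "data entry sheet" idea -- the inputs, outputs, and formulas are invented placeholders, not real Sys V or Linux tuning rules; the point is only the shape of the tool: declared workload in, tunable set out.)

    /* Hypothetical "data entry sheet": turn declared workload
     * characteristics into a suggested tunable set. Every formula
     * below is a made-up placeholder. */
    #include <stdio.h>

    struct workload {
        int users;      /* expected concurrent users    */
        int io_heavy;   /* 1 if mostly file I/O, else 0 */
        int ram_mb;     /* installed memory in MB       */
    };

    int main(void)
    {
        struct workload w = { 32, 1, 2048 };

        /* Placeholder heuristics standing in for the SAR-driven
         * calculations the admin would otherwise do by hand. */
        int buffer_pct = w.io_heavy ? 30 : 10;
        int swappiness = w.users > 16 ? 20 : 60;

        printf("buffer cache:  %d%% of %d MB\n", buffer_pct, w.ram_mb);
        printf("swappiness:    %d\n", swappiness);
        return 0;
    }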
The often overlooked point in these retrospectives is that hardware was much less powerful (and still expensive) when all this was developed (my 1st UNIX PC, a 186 IIRC, with 64K RAM, 5MB HD, 40ms seek?, 12" mono monitor, nada for graphics, ... ran about $4,000). And I specifically recall a DOS-only PC advertised in a 1985 PC Tech Journal with a "huge fast 10MB HD", 2 5.25" FDs, for only $10,995. With a 12MHz 286, IIRC. Sold like hotcakes.
The point of that is there was good reason the developers did not generate the software to automatically determine the correct parameters. Too labor intensive and too hardware/cost intensive to implement. Programmers weren't a dime-a-dozen back then and time constraints also often limited what could be provided in a given release cycle.
Now, post a request on the web somewhere and there is already free software from someone, or someone will develop it for you for free if you're patient enough.
The documentation will still suck (if you want the truth, read the code) and a substantial portion of the user community will still be dissatisfied.
William L. Maltby wrote:
I can't resist. Read the thread that was pointed to on lkml. ROTFLMAO.
<snip>
When I started out in UNIX, it was Interactive 1.3, if I remember, and it was a dog. Coming from a DOS world, I guess I was overwhelmed by all the things that could be tuned in the kernel. I probably didn't learn much about them then, and still don't know a lot about it, but unless you actively try some of the suggestions and see how things work or behave, then I guess you still don't know. There seem to have been some built-in tradeoffs over the course of the years in the UNIX-like OS's, but it still seems to work the same. I wonder how well Interactive 3.4 would stack up to today's versions of CentOS, all the *BSD's and such.
On Mon, 2006-06-05 at 17:18 -0400, Sam Drinkard wrote:
William L. Maltby wrote:
<snip>
When I started out in UNIX, it was Interactive 1.3, if I remember, and it was a dog.
In performance or trying to understand it? On my 1st 8088 (an Onyx machine) and 286 I was so impressed that I could carry 4/5/... 8 users on dumb terminals and still have very good performance on an interactive application (accounting data entry, purchasing,...). They were performing almost as well as the DEC PDP 11/70 minis.
Coming from a DOS world, I guess I was overwhelmed by all the things that could be tuned in the kernel. I probably didn't learn much about them then, and still don't know a lot about it, but unless you actively try some of the suggestions and see how things work or behave, then I guess you still don't know.
I was in a fog a long time as I tried to learn (what seemed like) thousands of different components that resulted from the "do one thing and do it well" philosophy. I now see that philosophy is severely under-appreciated.
There seem to have been some built-in tradeoffs over the course of the years in the UNIX-like OS's, but it still seems to work the same. I wonder how well Interactive 3.4 would stack up to today's versions of CentOS, all the *BSD's and such.
Consider the number of "brains and bodies" that have attacked the Linux kernel, GNU utilities, etc. If even a small portion of those resources had been diligently applied to the continued development, enhancement and maintenance of "traditional" OSs, I'm sure they would hold up well.
But because big (and small) business is the biggest beneficiary of the "open source" and "free software" movement, they had no reason to continue with their efforts. They can sit back and take from the open source community and reduce their costs.
And, as could be predicted, only a small percentage give back to the community.