On Sat, Apr 07, 2007 at 03:21:45PM -0400, William L. Maltby wrote:
> On Sat, 2007-04-07 at 15:37 -0300, Rodrigo Barbosa wrote:
> > Your solution would have a precision of 5 to 10 seconds, I
> > estimate. If that is good enough, it is a simple way to do it.
> > That should give higher than 95% precision, usually higher than
> > 98%. Not bad for a small script.
> 5 - 10 seconds =:-O I think it would be better than that... if we
> have the right "trigger". Knowing, for instance, that the last
> "setup" issue would be some distinct event (like opening a new
> output file, probably not /var/pid because that should be early)
> would then allow us to consider all remaining activities to be
> "processing". Then, if wall clock was the only criterion, we should
> be pretty accurate. Naturally, on heavily loaded servers, some other
> mile-marker, like user CPU time, would be better. But on single-user
> workstations, the simple remedies we have touched on will certainly
> do better than an 8 to 10 second window. That's betting the user
> doesn't run the stuff during his nightly full-copy-backup operation.
My chief worry is other processes generating I/O, not the target itself. That is why I assign that kind of precision.
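Just so we are measuring the same thing: the simple approach I was assigning that precision to is plain wall clock around the child, roughly like this (a sketch only; /usr/bin/target is a made-up stand-in):

    /* sketch: wall-clock timing around a child, nothing more */
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        struct timeval t0, t1;
        pid_t pid;

        gettimeofday(&t0, NULL);
        pid = fork();
        if (pid == 0) {
            execl("/usr/bin/target", "target", (char *)NULL);
            _exit(127);                     /* exec failed */
        }
        waitpid(pid, NULL, 0);
        gettimeofday(&t1, NULL);
        printf("wall clock: %.3f s\n",
               (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6);
        return 0;
    }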
It IS overkill :) I'm just considering a generic implementation, not the OP's specific need. Actually, I'm considering creating a small GPLed program to do this, so I have to cover as many situations as possible.
> > I think you misunderstood me when you said the target has to have
> > this facility. I was talking about a calibration child, not the
> > exec()'d one.
> Yes, I misunderstood. But along that vein, it seems to me then that
> the calibration should be a concurrent process, so that it can
> determine in "real time" when enough "wall clock" time has passed.
> As long as wall clock is the criterion, we are stuck with using any
> pre-acquired benchmarks as reference data only. The current
> processing environment/load would need to be blended in *throughout*
> the life of the target application, and the "calibrator" would then
> decide in real time that mayhem was due.
The idea is for the calibrator to get a general feeling for the machine load before starting the target. The process I meant to keep open is the one that dlopen()s for preloading.
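For the pre-calibration pass I mean nothing fancier than reading the load before the fork, along these lines (a sketch; getloadavg() is just one convenient way, reading /proc/loadavg would do as well):

    /* sketch: sample the load average before starting the target,
     * so the calibrator knows the current machine state */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        double load[3];

        if (getloadavg(load, 3) != 3) {
            fprintf(stderr, "getloadavg failed\n");
            return 1;
        }
        /* 1-, 5- and 15-minute averages */
        printf("load: %.2f %.2f %.2f\n", load[0], load[1], load[2]);
        return 0;
    }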
> If we get to that point, it *seems* to me that the only reliable
> metric becomes user CPU time and/or I/O completions, depending on
> the application's profile.
And the general system state.
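User CPU time at least is cheap to collect once the child is reaped, something like this (same made-up target as above):

    /* sketch: user CPU time of the finished child via getrusage(),
     * which other processes' load does not inflate, unlike wall clock */
    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        struct rusage ru;
        pid_t pid = fork();

        if (pid == 0) {
            execl("/usr/bin/target", "target", (char *)NULL);
            _exit(127);
        }
        waitpid(pid, NULL, 0);
        getrusage(RUSAGE_CHILDREN, &ru);   /* usage of waited-for children */
        printf("user cpu: %ld.%06ld s\n",
               (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
        return 0;
    }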
> And that would tend to indicate that relatively high precision (I
> hope my understanding of your concept of that is "good enough") can
> only be approached (ignoring real-time acquisition of the target's
> accounting, I/O, CPU, ...) by the calibrator running concurrently
> and watching the current deviation from the database of captured
> test runs from the past.
That would be the ideal case. And coupled with the pre-calibration I proposed earlier, it would yield even more precise results.
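And the concurrent calibrator would be shaped roughly like this -- the baseline figure and the scaling rule here are completely invented, only the structure is the point (the baseline would come from the database of earlier runs):

    /* sketch: sample the load while the target runs and keep a
     * running average to adjust a stored baseline estimate;
     * the 10.0 s baseline and the scaling rule are invented */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        double load[1], sum = 0.0, baseline = 10.0;
        int samples = 0;
        pid_t pid = fork();

        if (pid == 0) {
            execl("/usr/bin/target", "target", (char *)NULL);
            _exit(127);
        }
        /* poll until the child exits, sampling the 1-minute load */
        while (waitpid(pid, NULL, WNOHANG) == 0) {
            if (getloadavg(load, 1) == 1) {
                sum += load[0];
                samples++;
            }
            sleep(1);
        }
        if (samples > 0)
            printf("estimate: %.1f s (baseline %.1f, avg load %.2f)\n",
                   baseline * (1.0 + sum / samples),
                   baseline, sum / samples);
        return 0;
    }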
> I enjoy this sort of stuff. Don't get to do it often enough. And in
> the last umpteen years, I haven't kept my shell/C skills up to
> snuff. Essentially retired from the race after enough decades of it.
I second that. Since I started my own company, the business side of it is taking so much time I'm getting really rusty on shell/C skills.
> Ditto when I had my own biz. Plus, I found I didn't like "managing"
> others. They weren't enough like me! =:O
TELL ME ABOUT IT :) ehehehehe
[]s
--
Rodrigo Barbosa
"Quid quid Latine dictum sit, altum viditur"
"Be excellent to each other ..." - Bill & Ted (Wyld Stallyns)