On 8/12/2010 4:15 PM, John R Pierce wrote:
On 08/12/10 2:51 PM, Warren Young wrote:
Only one server on a given LAN should be running ntpd. It's overkill for every machine to keep itself synced with such a complex and fussy daemon. All the others should just call ntpdate or msntp every hour or so from a cron job to keep their own time close to that of the LAN time server.
I disagree.
Simply setting a system's time at fixed intervals will result in discontinuities in delta-time measurements. If the system's local clock is fast, a given time will occur twice, and a delta between two time readings could be negative. If the clock is slow, a delta between two readings will jump forward by the size of the correction.
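The failure mode described there, a negative delta between two wall-clock readings, is exactly why interval measurements should use a monotonic clock rather than the wall clock. A minimal sketch using Python's standard `time` module (not from the original thread; the comments describe the step scenario rather than reproduce it):

```python
import time

# Measuring an interval with the wall clock is vulnerable to steps:
# if a cron-driven ntpdate sets the clock backward between the two
# readings, this delta can come out negative.
start_wall = time.time()
# ... work happens here; a clock step could occur at any point ...
delta_wall = time.time() - start_wall  # may be negative after a backward step

# A monotonic clock is immune to wall-clock steps, so it is the right
# tool for delta-time measurements.
start_mono = time.monotonic()
# ... same work ...
delta_mono = time.monotonic() - start_mono  # always >= 0
```

Code that only needs elapsed time (timeouts, rate limiting, profiling) can tolerate hourly wall-clock steps this way; code that needs absolute timestamps cannot, which is the distinction the cited paper draws between uses of clocks.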
This is one of the points from the paper I referenced: there are three main uses for clocks, and a single implementation isn't appropriate for all uses. Only an ideal absolute time clock would work for all three cases. Since we don't have that, you have to consider your own case before deciding on a clock synchronization strategy.
The strategy I recommended is based on the fact that its worst case behavior (a small negative jump every hour) is not a problem for me. If it is a problem for your application, you need a different design.
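For concreteness, the hourly-cron strategy could look something like the fragment below. This is a sketch, not from the original thread; the file path follows the usual cron.d convention, and `timeserver.example.com` is a placeholder for the LAN time server:

```shell
# /etc/cron.d/lan-timesync -- once an hour, set the clock from the LAN
# time server. The -u flag tells ntpdate to use an unprivileged source
# port, so it works even when a local ntpd holds port 123.
0 * * * *  root  /usr/sbin/ntpdate -u timeserver.example.com >/dev/null 2>&1
```

The worst-case backward step this produces is bounded by one hour of local clock drift, which on typical hardware is well under a second.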
Once ntpd syncs and stabilizes to its reference, it's very low overhead.
True only as long as it's being given stable time input. See figure 5 in the paper for the kind of wild, damped oscillations you get with ntpd when the input is not stable.
The time series plot is crystal clear, but don't overlook the fact that the IQR plots use different axes. There's a 4x difference hiding behind that bad visual display of quantitative information. (Yes, that's my inner Tufte you're seeing poking out there.)