Hi!
We have encountered a rather weird problem on a machine that we use for our network latency checks (smokeping with fping). For an hour (or so) after boot it reports ping times with microsecond accuracy (as it should). After that, it only reports with millisecond accuracy. We have yet to identify the cause, and the logs currently do not show anything. Please find an example of a good and a bad result below.
Regards, Mitja
When it works (microsecond accuracy):

[root@server ~]# ping test.example.net
PING test.example.net (123.1.2.3) 56(84) bytes of data.
64 bytes from test.example.net (123.1.2.3): icmp_seq=1 ttl=62 time=0.562 ms
64 bytes from test.example.net (123.1.2.3): icmp_seq=2 ttl=62 time=0.452 ms
64 bytes from test.example.net (123.1.2.3): icmp_seq=3 ttl=62 time=0.579 ms

--- test.example.net ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.452/0.531/0.579/0.056 ms
And when it does not (millisecond accuracy):

[root@server ~]# ping test.example.net
PING test.example.net (123.1.2.3) 56(84) bytes of data.
64 bytes from test.example.net (123.1.2.3): icmp_seq=1 ttl=62 time=1.99 ms
64 bytes from test.example.net (123.1.2.3): icmp_seq=2 ttl=62 time=0.999 ms
64 bytes from test.example.net (123.1.2.3): icmp_seq=3 ttl=62 time=0.000 ms
64 bytes from test.example.net (123.1.2.3): icmp_seq=4 ttl=62 time=0.000 ms
64 bytes from test.example.net (123.1.2.3): icmp_seq=5 ttl=62 time=0.000 ms

--- test.example.net ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.000/0.599/1.999/0.800 ms
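
In case it helps, below is a minimal sketch of the kind of watcher we are thinking of running to catch the moment the precision changes. It assumes a standard Linux sysfs layout (the /sys/devices/system/clocksource path) and uses the same target host as above; the timing loss after boot could coincide with a kernel clocksource change, which this would show, but that is only a guess at this point.

  #!/bin/bash
  # Log a timestamp, the kernel's current clocksource, and one sample RTT
  # once a minute, so the log shows whether the drop to millisecond
  # precision lines up with a clocksource change.
  # NOTE: test.example.net is our test target from the pings above.
  CS=/sys/devices/system/clocksource/clocksource0/current_clocksource
  while true; do
      rtt=$(ping -c 1 -W 2 test.example.net | awk -F'time=' '/time=/{print $2}')
      printf '%s clocksource=%s rtt=%s\n' "$(date -Is)" "$(cat "$CS")" "$rtt"
      sleep 60
  done

Comparing that log against "dmesg | grep -i clocksource" around the transition time should tell us whether the kernel switched timers or whether the problem is elsewhere.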