Discussion:
HowTo calibrate system clock frequency using NTP
Daniel Kabs
2006-01-27 16:35:32 UTC
Hi,

I'd like to measure the system clock drift and use the resulting value to
correct the system clock using the "adjtimex" tool (-t and -f option).

I can think of at least two ways to measure the drift and I'd like to
ask you whether these are correct ways to do it.

Plan A)

Run ntpd using a reliable time server. ntpd will measure and record the
intrinsic clock frequency offset in the so called "drift file".

Depending on the computer clock oscillator's frequency error this may
take some hours (or even days?) to stabilize. When the value has
converged, the "drift file" contains the frequency offset measured in
parts-per-million (PPM).

Plan B)

Run ntpd using local clock (127.127.1.0) as server. Execute "ntpdate -q"
on a synchronized system against the system you want to measure. ntpdate
will output the precise time offset in seconds. If you record the offset
(and time) periodically, you can fit a straight line to the data points.
The slope * 86400 will give the estimated offset in seconds per day.
This can be converted into ppm (100 ppm == 8.64 sec/day). Measuring for
one hour should be enough to get a reasonably accurate value.
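The straight-line fit in Plan B can be sketched as follows. This is a
minimal illustration, not code from this thread; the sample data is
fabricated, and in practice the (time, offset) pairs would come from
repeated "ntpdate -q" runs against the machine under test:

```python
# Estimate clock frequency error (ppm) from periodically logged offsets.
# samples: list of (t_seconds, offset_seconds) pairs.
# A slope of 1e-4 s/s equals 100 ppm, i.e. 8.64 s/day.

def drift_ppm(samples):
    """Least-squares slope of offset vs. time, returned in ppm."""
    n = len(samples)
    sum_t = sum(t for t, _ in samples)
    sum_o = sum(o for _, o in samples)
    sum_tt = sum(t * t for t, _ in samples)
    sum_to = sum(t * o for t, o in samples)
    slope = (n * sum_to - sum_t * sum_o) / (n * sum_tt - sum_t ** 2)
    return slope * 1e6  # dimensionless s/s -> parts per million

# Fabricated data: a clock gaining 8.64 s/day (100 ppm), sampled hourly.
data = [(h * 3600, h * 3600 * 100e-6) for h in range(6)]
print(drift_ppm(data))  # about 100.0
```

More samples over a longer window average out the measurement jitter in
each individual offset reading.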



Now use adjtimex to correct the system clock for systematic drift as
described in the example of the man page.

I tried B) and will now run plan A) over the weekend to record a drift
file. I hope this will give a similar result.

Cheers
Daniel
--
Refactor, don't archive! - SamHasler - 28 Aug 2004 - twiki.org
David L. Mills
2006-01-27 18:54:58 UTC
Daniel,

Plan A

1. Run ntptime -f 0 to remove any leftover kernel bias.

2. Configure for a reliable server over a quiet network link.

3. Remove the frequency file ntp.drift.

4. Start the daemon and wait for at least 15 minutes until the state
shows 4. Record the frequency offset shown with the ntpq rv command. It
should be within 1 PPM of the actual frequency offset. For enhanced
confidence, wait until the first frequency file update after one hour or so.

Plan B

1. Run ntptime -f 0 to remove any leftover kernel bias.

2. Configure for a reliable server over a quiet network link.

3. Start the daemon with disable ntp in the configuration file.

4. Record the offset over a period of hours. Do a least-squares fit; the
regression line slope is the frequency.

Dave
Post by Daniel Kabs
Hi,
I like to measure the system clock drift and use the resulting value to
correct the system clock using the "adjtimex" tool (-t and -f option).
I can think of at least two ways to measure the drift and I'd like to
ask you whether this is the correct way to do it.
Plan A)
Run ntpd using a reliable time server. ntpd will measure and record the
intrinsic clock frequency offset in the so called "drift file".
Depending on the computer clock oscillator's frequency error this may
take some hours (or even days?) to stabilize. When the value has
converged, the "drift file" contains the frequency offset measured in
parts-per-million (PPM).
Plan B)
Run ntpd using local clock (127.127.1.0) as server. Execute "ntpdate -q"
on a synchronized system against the system you want to measure. ntpdate
will output the precise time offset in seconds. If you record the offset
(and time) periodically, you can fit a straight line to the data points.
The slope * 86400 will give the estimated offset in seconds per day.
This can be converted into ppm (100 ppm == 8.64 sec/day). Measuring for
one hour should be enough to get a reasonably accurate value.
Now use adjtimex to correct the system clock for systematic drift as
described in the example of the man page.
I tried B) and will now run plan A) over the weekend to record a drift
file. I hope this will give a similar result.
Cheers
Daniel
Daniel Kabs
2006-01-31 15:56:24 UTC
Hello Professor Mills,

now I have a plan :-) Thank you.

I copied your valuable instructions into the NTP wiki at

https://ntp.isc.org/bin/view/Support/HowToCalibrateSystemClockUsingNTP


I wonder why your procedures do not use "ntpdate". Is it because
"ntpdate" is to be retired soon from your ntp distribution? Or will
"ntpdate" fail to provide data as precise as ntpd? Indeed, I see a
difference when I compare the drift file value against the offset
measured by ntpdate:

A) I had ntp running for one day against a time server. The value in the
drift file converged monotonically to -268.173. This gives a clock error
of -23.17 s/day.

B) I configured ntp in order to serve the pseudo local clock. I
periodically measured the time offset from a time synchronized PC using
"ntpdate". A least-squares fit gives a slope of 23.87 s/day.
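For reference, the unit conversion behind both numbers can be sketched
in Python (1 ppm of frequency error accumulates 86.4 ms per day):

```python
# Convert between frequency error in ppm and accumulated seconds per day.
# 1 ppm = 1e-6 s/s; over 86400 s that is 86.4 ms/day.

def ppm_to_s_per_day(ppm):
    return ppm * 1e-6 * 86400

def s_per_day_to_ppm(s_per_day):
    return s_per_day / 86400 * 1e6

print(ppm_to_s_per_day(-268.173))  # Plan A's drift file value: about -23.17 s/day
print(s_per_day_to_ppm(23.87))     # Plan B's slope: about 276.3 ppm
```

So beyond the sign, the two plans disagree by roughly 8 ppm in magnitude.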

Any idea why is this?

Cheers
Daniel
--
Refactor, don't archive! - SamHasler - 28 Aug 2004 - twiki.org
David L. Mills
2006-02-01 03:44:50 UTC
Daniel,

Looks like you have a sign reversal. Better recheck your math.

The ntpdate is deprecated and replaced by sntp in recent versions. I
hear it was eaten by a Grue and if you know what that means, you have
some idea how ancient ntpdate had become.

Dave
Post by Daniel Kabs
Hello Professor Mills,
now I have a plan :-) Thank you.
I copied your valuable instructions into the NTP wiki at
https://ntp.isc.org/bin/view/Support/HowToCalibrateSystemClockUsingNTP
I wonder why your procedures do not use "ntpdate". Is it because
"ntpdate" is to be retired soon from your ntp distribution? Or will
"ntpdate" fail to provide data as precise as ntpd? Indeed, I see a
difference when I compare the drift file value against the offset
A) I had ntp running for one day against a time server. The value in the
drift file converged monotonically to -268.173. This gives a clock error
of -23.17 s/day.
B) I configured ntp in order to serve the pseudo local clock. I
periodically measured the time offset from a time synchronized PC using
"ntpdate". A least-squares fit gives a slope of 23.87 s/day.
Any idea why is this?
Cheers
Daniel
Daniel Kabs
2006-02-01 11:02:49 UTC
Hello David!

Na, there's no time reversal as only Chief Engineer "Scotty" could do
this! :-)

I figure the sign depends on whether I measure on the system itself or
from a remote system.


Cheers
Daniel

PS: What is a "Grue"?
--
Refactor, don't archive! - SamHasler - 28 Aug 2004 - twiki.org
Brian Utterback
2006-02-01 14:10:19 UTC
Post by Daniel Kabs
PS: What is a "Grue"?
Something that eats you if you insist on remaining "unenlightened".
--
blu

Quidquid latine dictum sit, altum sonatur.
----------------------------------------------------------------------
Brian Utterback - OP/N1 RPE, Sun Microsystems, Inc.
Ph:877-259-7345, Em:brian.utterback-at-ess-you-enn-dot-kom
David L. Mills
2006-02-02 16:03:15 UTC
Daniel,

Yes the sign reverses when measuring from the local or the remote
machine. However, your message compared one measurement with another
very similar magnitude but sign reversed. Go figure.

Dave
Post by Daniel Kabs
Hello David!
Na, there's no time reversal as only Chief Engineer "Scotty" could do
this! :-)
I figure the sign depends whether I measure on the system or from a
remote system.
Cheers
Daniel
PS: What is a "Grue"?
Daniel Kabs
2006-02-03 09:26:32 UTC
Post by David L. Mills
Yes the sign reverses when measuring from the local or the remote
machine.
Sorry, all my fault. My question "why is this?" was just too vague. :-)

I was not asking about the sign reversal but about the differing results
(23.17 s/day vs. 23.87 s/day).
Post by David L. Mills
However, your message compared one measurement with another
very similar magnitude but sign reversed. Go figure.
Comparing results from Plan A vs. Plan B, I currently fail to see why
there is this difference.

Maybe it has to do with acquiring the offset for Plan B. I read the
"offset" variable using

ntpq -c 'rv <assoc_id_of_peer>'

but which variable reflects the time when this offset was measured: is
it the value "reftime", "org", "rec" or "xmt"?

Cheers
Daniel
David L. Mills
2006-02-03 14:28:46 UTC
Daniel,

Your question does not parse. The frequency correction has nothing to do
with the state variables you quote. The time the frequency correction is
monitored is prominently displayed on the ntpq rv billboard.

Dave
Post by Daniel Kabs
Post by David L. Mills
Yes the sign reverses when measuring from the local or the remote
machine.
Sorry, all my fault. My question "why is this?" was just too vague. :-)
I was not asking about the sign reversal but about the differing results
(23.17 s/day vs. 23.87 s/day).
Post by David L. Mills
However, your message compared one measurement with another very
similar magnitude but sign reversed. Go figure.
Comparing results from Plan A vs. Plan B, I currently fail to figure why
there is this difference.
Maybe it has to do with acquiring the offset for Plan B. I read the
"offset" variable using
ntpq -c 'rv <assoc_id_of_peer>'
but which variable reflects the time when this offset was measured: is
it the value "reftime", "org", "rec" or "xmt"?
Cheers
Daniel
Daniel Kabs
2006-02-06 11:09:00 UTC
Good afternoon David,

I think we are talking at cross-purposes: in the case of Plan B, you
suggested using the option "disable ntp" (so there should be no
frequency correction) and reading the time offset at intervals using
ntpq -c 'rv <assoc_id_of_peer>'

If I configure ntpd that way, this is how the billboard looks on my
system after running for some days:

# ntpq -c 'rv 16532'
status=9634 reach, conf, sel_sys.peer, 3 events, event_reach,
srcadr=10.0.0.254, srcport=123, dstadr=0.0.0.0, dstport=123, keyid=0,
stratum=1, precision=-18, rootdelay=0.000, rootdispersion=6.989,
refid=DCFa, reftime=c7919b26.416249a1 Mon, Feb 6 2006 10:53:42.255,
delay=0.688, offset=-114389.792, jitter=75.264, dispersion=0.947,
reach=377, valid=7, hmode=3, pmode=4, hpoll=6, ppoll=6, leap=00,
flash=00 ok, org=c7919b4f.ed9e6256 Mon, Feb 6 2006 10:54:23.928,
rec=c7919bc2.517e5647 Mon, Feb 6 2006 10:56:18.318,
xmt=c7919bc2.514cec41 Mon, Feb 6 2006 10:56:18.317,
filtdelay= 0.69 0.68 0.77 0.79 0.88 0.77 0.69 0.69,
filtoffset= -114389 -114372 -114353 -114335 -114316 -114299 -114282 -114264,
filtdisp= 0.01 0.97 1.96 2.92 3.91 4.87 5.83 6.82


I was asking about reading the "offset" variable:
offset=-114389.792
What is the corresponding time when this variable was set? The
billboard shows several candidates ("reftime", "org", "rec" or "xmt"),
but I don't know which one is the correct one.
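For periodic logging, the offset field can be pulled out of such a
billboard with a small script. A sketch (the sample line is taken from
the output above; running ntpq and capturing its output is left out):

```python
# Extract the peer offset (milliseconds) from "ntpq -c rv" billboard text.
# Minimal sketch; in practice the text would come from invoking ntpq.
import re

def parse_offset_ms(billboard):
    # \b avoids matching the "offset" inside "filtoffset="
    m = re.search(r"\boffset=(-?[\d.]+)", billboard)
    return float(m.group(1)) if m else None

sample = "delay=0.688, offset=-114389.792, jitter=75.264, dispersion=0.947,"
print(parse_offset_ms(sample))  # -114389.792
```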


Cheers
Daniel
Post by David L. Mills
Daniel,
Your question does not parse. The frequency correction has nothing to do
with the state variables you quote. The time the frequency correction is
monitored is prominently displayed on the ntpq rv billboard.
Dave
--
Refactor, don't archive! - SamHasler - 28 Aug 2004 - twiki.org
David L. Mills
2006-02-06 22:15:39 UTC
Daniel,

The peer offset is recorded for each client/server exchange. Let it run
for a few hours and record the time of day and the offsets at the
beginning and end. Subtract and divide by the interval. It's really not
complicated.
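The subtract-and-divide step can be written out as follows (a trivial
sketch; the numbers in the example are made up, not from this thread):

```python
# Two-point drift estimate: offset change divided by elapsed time.
# With "disable ntp" the offset grows linearly, so two samples suffice.

def two_point_ppm(t0, off0, t1, off1):
    """Frequency error in ppm from two (time, offset) samples in seconds."""
    return (off1 - off0) / (t1 - t0) * 1e6

# Example: offset went from -0.070 s to +0.650 s over two hours.
print(two_point_ppm(0, -0.070, 7200, 0.650))  # about 100 ppm
```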

Dave
Post by Daniel Kabs
Good afternoon David,
I think we are talking at cross-purposes: in case of Plan B, you
suggested to use the option "disable ntp" (so there should be no
frequency correction) and to read the time offset at intervals using
ntpq -c 'rv <assoc_id_of_peer>'
If I configure ntp that way, this is how the billboard looks on my
# ntpq -c 'rv 16532'
status=9634 reach, conf, sel_sys.peer, 3 events, event_reach,
srcadr=10.0.0.254, srcport=123, dstadr=0.0.0.0, dstport=123, keyid=0,
stratum=1, precision=-18, rootdelay=0.000, rootdispersion=6.989,
refid=DCFa, reftime=c7919b26.416249a1 Mon, Feb 6 2006 10:53:42.255,
delay=0.688, offset=-114389.792, jitter=75.264, dispersion=0.947,
reach=377, valid=7, hmode=3, pmode=4, hpoll=6, ppoll=6, leap=00,
flash=00 ok, org=c7919b4f.ed9e6256 Mon, Feb 6 2006 10:54:23.928,
rec=c7919bc2.517e5647 Mon, Feb 6 2006 10:56:18.318,
xmt=c7919bc2.514cec41 Mon, Feb 6 2006 10:56:18.317,
filtdelay= 0.69 0.68 0.77 0.79 0.88 0.77 0.69 0.69,
filtoffset= -114389 -114372 -114353 -114335 -114316 -114299 -114282 -114264,
filtdisp= 0.01 0.97 1.96 2.92 3.91 4.87 5.83 6.82
offset=-114389.792
What is the corresponding time when this variable was set? The
billboard show several candidates ("reftime", "org", "rec" or "xmt"),
but I don't know which one is the correct one.
Cheers
Daniel
Post by David L. Mills
Daniel,
Your question does not parse. The frequency correction has nothing to
do with the state variables you quote. The time the frequency
correction is monitored is prominently displayed on the ntpq rv
billboard.
Dave
Daniel Kabs
2006-02-15 11:32:52 UTC
Hello!
Post by David L. Mills
The peer offset is recorded for each client/server exchange. Let it run
for a few hours and record the time of day and the offsets at the
beginning and end. Subtract and divide by the interval. It's really not
complicated.
"record the time of day" sounds easy, but it is not if I want to measure
with good precision. Actually, that's what NTP is all about, and it
takes a lot to get the time right :-)

I have NTP running using the option "disable ntp". I read the time
offset using ntpq -c 'rv <assoc_id_of_peer>' at two points in time.

Example:
# ntpq -c "rv 32468"
assID=32468 status=9614 reach, conf, sel_sys.peer, 1 event, event_reach,
srcadr=10.0.0.254, srcport=123, dstadr=10.0.2.100, dstport=123, leap=00,
stratum=1, precision=-18, rootdelay=0.000, rootdispersion=16.220,
refid=DCFa, reach=377, unreach=0, hmode=3, pmode=4, hpoll=6, ppoll=6,
flash=00 ok, keyid=0, ttl=0, offset=-69.648, delay=0.685,
dispersion=2.849, jitter=61.865,
reftime=c79d992c.40e63a5c Wed, Feb 15 2006 13:12:28.253,
org=c79d9943.421cac08 Wed, Feb 15 2006 13:12:51.258,
rec=c79d9943.58e27e0e Wed, Feb 15 2006 13:12:51.347,
xmt=c79d9943.57713272 Wed, Feb 15 2006 13:12:51.341,
filtdelay= 5.58 0.68 4.16 5.30 9.84 0.70 0.69 14.76,
filtoffset=-86.17 -69.65 -51.84 -35.91 -20.53 1.51 18.26 29.64,
filtdisp= 0.03 1.02 1.98 2.93 3.89 4.86 5.82 6.80

The offset in this example is offset=-69.648 milliseconds.

I'd like to know: what is the corresponding time of day when the
client/server exchange took place that led to this offset measurement?

The above output contains the timestamps "reftime", "org", "rec" and
"xmt". Which one is the correct one? Maybe it doesn't matter as long as
I use the same one?

Cheers
Daniel Kabs
David L. Mills
2006-02-15 14:44:50 UTC
Daniel,

As to which timestamp is "correct" you will need to read the
architecture briefing on the NTP project page. While at it, understand
the raw offset measurement does not reflect the actual clock offset, as
the latter is determined by the clock discipline algorithm described in
the briefings on the project page. The discipline acts as a lowpass
filter where the real offset is typically a fraction of the measured
offset, usually on the order of a thousand times smaller. To justify
that claim, it is necessary to plow the mathematics of phase-locked
loops. One of the appendices of rfc1305 and any of several documents
listed on the project page explores these things.

Dave
Post by Daniel Kabs
Hello!
Post by David L. Mills
The peer offset is recorded for each client/server exchange. Let it
run for a few hours and record the time of day and the offsets at the
beginning and end. Subtract and divide by the interval. It's really
not complicated.
"record the time of day" sounds easy, but it is not if I want to measure
with good precision. Actually that's everything NTP is about and it
takes a lot to get the time right :-)
I have NTP running using the option "disable ntp". I read the time
offset using ntpq -c 'rv <assoc_id_of_peer>' at two points in time.
# ntpq -c "rv 32468"
assID=32468 status=9614 reach, conf, sel_sys.peer, 1 event, event_reach,
srcadr=10.0.0.254, srcport=123, dstadr=10.0.2.100, dstport=123, leap=00,
stratum=1, precision=-18, rootdelay=0.000, rootdispersion=16.220,
refid=DCFa, reach=377, unreach=0, hmode=3, pmode=4, hpoll=6, ppoll=6,
flash=00 ok, keyid=0, ttl=0, offset=-69.648, delay=0.685,
dispersion=2.849, jitter=61.865,
reftime=c79d992c.40e63a5c Wed, Feb 15 2006 13:12:28.253,
org=c79d9943.421cac08 Wed, Feb 15 2006 13:12:51.258,
rec=c79d9943.58e27e0e Wed, Feb 15 2006 13:12:51.347,
xmt=c79d9943.57713272 Wed, Feb 15 2006 13:12:51.341,
filtdelay= 5.58 0.68 4.16 5.30 9.84 0.70 0.69 14.76,
filtoffset=-86.17 -69.65 -51.84 -35.91 -20.53 1.51 18.26 29.64,
filtdisp= 0.03 1.02 1.98 2.93 3.89 4.86 5.82 6.80
The offset in this example is offset=-69.648 milliseconds.
I'd like to know: what is the corresponding time of day when the
client/server exchange took place that lead to this offset measurement.
The above output contains the timestamps "reftime", "org", "rec" and
"xmt". Which one is the correct one? Maybe it doesn't matter as long as
I use the same one?
Cheers
Daniel Kabs
Daniel Kabs
2006-02-16 08:50:47 UTC
Hello David!
Post by David L. Mills
As to which timestamp is "correct" you will need to read the
architecture briefing on the NTP project page.
That's not fair. I just wanted to use NTP as a tool to measure the time
drift of my system clock and now you pull the dreaded "read the
architecture briefing" weapon on me. What have I done to you to deserve
this? :-)
Post by David L. Mills
While at it, understand
the raw offset measurement does not reflect the actual clock offset,
as the latter is determined by the clock discipline algorithm
described in the briefings on the project page. [...]
You are talking about "clock discipline". That confuses me, as I am
running ntpd with the option "disable ntp", which (according to your
implementation documentation) should disable time and frequency discipline.

Cheers
Daniel
--
Refactor, don't archive! - SamHasler - 28 Aug 2004 - twiki.org
David L. Mills
2006-02-16 12:45:39 UTC
Daniel,

I don't see any productive process in responding to your message other
than to suggest you read the available documentation. The four
timestamps are developed at very specific events in the protocol
operations and you have to interpret the time and other data with
respect to the operations involved. If this is "unfair", that indeed is
the case. If the issue is about actual accuracy versus the clock
discipline algorithm, I have no definitive advice other than to study
the design in the available documentation.

Dave
Post by Daniel Kabs
Hello David!
Post by David L. Mills
As to which timestamp is "correct" you will need to read the
architecture briefing on the NTP project page.
That's not fair. I just wanted to use NTP as a tool to measure the time
drift of my system clock and now you pull the dreaded "read the
architecture briefing" weapon on me. What have I done to you to deserve
this? :-)
Post by David L. Mills
While at it, understand
the raw offset measurement does not reflect the actual clock offset,
as the latter is determined by the clock discipline algorithm
described in the briefings on the project page. [...]
You are talking about "clock discipline". That's confusing me as I am
running ntpd using option "disable ntp" which (according to your
implementation documentation) should disable time and frequency discipline.
Cheers
Daniel
Richard B. Gilbert
2006-02-16 15:13:22 UTC
Post by Daniel Kabs
Hello David!
Post by David L. Mills
As to which timestamp is "correct" you will need to read the
architecture briefing on the NTP project page.
That's not fair. I just wanted to use NTP as a tool to measure the time
drift of my system clock and now you pull the dreaded "read the
architecture briefing" weapon on me. What have I done to you to deserve
this? :-)
Post by David L. Mills
While at it, understand
the raw offset measurement does not reflect the actual clock offset,
as the latter is determined by the clock discipline algorithm
described in the briefings on the project page. [...]
You are talking about "clock discipline". That's confusing me as I am
running ntpd using option "disable ntp" which (according to your
implementation documentation) should disable time and frequency discipline.
Cheers
Daniel
You could probably use any of the four time stamps if you measure over a
long enough period.

The four timestamps are:
Reference: the time the local clock was last set. (This makes no sense
to me but that's what the RFC says! It would make more sense if it were
the time the reply was received by the client.)
Originate: the time the request packet left your system
Receive: the time the request packet arrived at the server
Transmit: the time the reply packet departed the server

I would guess that the transmit timestamp would provide the best
accuracy if you have to measure over a short interval. Since the drift
of your local clock is changing constantly, if slowly, I would recommend
measuring over an interval of at least twenty-four hours.

If the environment in which your devices are used is not similar in
temperature and temperature variation to the one in which you calibrate,
the whole effort is probably a waste of time!!
David L. Mills
2006-02-17 00:31:07 UTC
Richard,

The reference timestamp is not one of the four timestamps used to
calculate offset and delay. It is intended for use when calculating the
maximum error should some other method than dispersion be used to
determine it. The next three timestamps you mention are struck at the
times you mention; however, the fourth (destination) timestamp is struck
upon arrival of the message at the client. I can understand your
confusion, since there are four timestamps in the header; however, the
various briefings and RFCs are very clear on which four are intended,
and that's why I suggested Daniel see the briefing.

Dave
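For reference, the four on-wire timestamps combine into offset and delay
as follows; a sketch of the standard NTP client/server calculation, with
illustrative numbers:

```python
# Standard NTP on-wire calculation:
# T1 = request left the client, T2 = request arrived at the server,
# T3 = reply left the server, T4 = reply arrived back at the client.

def offset_and_delay(t1, t2, t3, t4):
    """Clock offset and round-trip delay, in seconds."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Client 50 ms behind the server, 10 ms network delay each way:
off, dly = offset_and_delay(0.000, 0.060, 0.061, 0.021)
print(off, dly)  # about 0.05 s offset, 0.02 s delay
```

Note that the symmetric-delay assumption is built in: half the round
trip is attributed to each direction.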
Post by Richard B. Gilbert
Post by Daniel Kabs
Hello David!
Post by David L. Mills
As to which timestamp is "correct" you will need to read the
architecture briefing on the NTP project page.
That's not fair. I just wanted to use NTP as a tool to measure the
time drift of my system clock and now you pull the dreaded "read the
architecture briefing" weapon on me. What have I done to you to
deserve this? :-)
Post by David L. Mills
While at it, understand
the raw offset measurement does not reflect the actual clock offset,
as the latter is determined by the clock discipline algorithm
described in the briefings on the project page. [...]
You are talking about "clock discipline". That's confusing me as I am
running ntpd using option "disable ntp" which (according to your
implementation documentation) should disable time and frequency discipline.
Cheers
Daniel
You could probably use any of the four time stamps if you measure over a
long enough period.
Reference: the time the local clock was last set. (This makes no sense
to me but that's what the RFC says! It would make more sense if it were
the time the reply was received by the client.)
Originate: the time the request packet left your system
Receive: the time the request packet arrived at the server
Transmit: the time the reply packet departed the server
I would guess that the transmit timestamp would provide the best
accuracy if you have to measure over a short interval. Since the drift
of your local clock is changing constantly, if slowly, I would recommend
measuring over an interval of at least twenty-four hours.
If the environment in which your devices are used is not similar in
temperature and temperature variation, the whole effort is probably a
waste of time!!
Richard B. Gilbert
2006-02-17 01:07:28 UTC
Dave,

In that case, I think RFC 1305 needs some clarification. Page 100
refers to these times as "T1, T2, T3, and T4" and they are not otherwise
defined. Page 50 defines the timestamps in the NTP packet as Reference,
Originate, Receive, and Transmit. The reader is left to guess where
T1 - T4 come from. I guessed wrong. Sorry about that.

Yes, I have read the darned thing, though a lot of the math is over my head.
Post by David L. Mills
Richard,
The reference timestamp is not one of the four timestamps used to
calculate offset and delay. It is intended for use when calculating the
maximum error should some other method than dispersion be used to
determine it. the next three timestamps you mention are struck at the
times you mention; however, the fourth (destination) timestamp is struck
upon arrival of the message at the client. I can understand your
confusion, since there are four timestamps in the header; however, the
various briefings and RFCs are very clear on which four are intended,
and that's why I suggested Daniel see the briefing.
Dave
Post by Richard B. Gilbert
Post by Daniel Kabs
Hello David!
Post by David L. Mills
As to which timestamp is "correct" you will need to read the
architecture briefing on the NTP project page.
That's not fair. I just wanted to use NTP as a tool to measure the
time drift of my system clock and now you pull the dreaded "read the
architecture briefing" weapon on me. What have I done to you to
deserve this? :-)
Post by David L. Mills
While at it, understand
the raw offset measurement does not reflect the actual clock offset,
as the latter is determined by the clock discipline algorithm
described in the briefings on the project page. [...]
You are talking about "clock discipline". That's confusing me as I am
running ntpd using option "disable ntp" which (according to your
implementation documentation) should disable time and frequency discipline.
Cheers
Daniel
You could probably use any of the four time stamps if you measure over
a long enough period.
Reference: the time the local clock was last set. (This makes no sense
to me but that's what the RFC says! It would make more sense if it
were the time the reply was received by the client.)
Originate: the time the request packet left your system
Receive: the time the request packet arrived at the server
Transmit: the time the reply packet departed the server
I would guess that the transmit timestamp would provide the best
accuracy if you have to measure over a short interval. Since the
drift of your local clock is changing constantly, if slowly, I would
recommend measuring over an interval of at least twenty-four hours.
If the environment in which your devices are used is not similar in
temperature and temperature variation, the whole effort is probably a
waste of time!!
David L. Mills
2006-02-17 02:50:51 UTC
Richard,

Well, I wrote 1305 fourteen years ago when I was just a kid. The on-wire
>draft< protocol spec for NTPv4 now on the project page at
http://www.eecis.udel.edu/~mills/database/brief/flow/ntp4.pdf is
hopefully much more explicit.

Dave
Post by Richard B. Gilbert
Dave,
In that case, I think RFC 1305 needs some clarification. Page 100
refers to these times as "T1, T2, T3, and T4" and they are not otherwise
defined. Page 50 defines the timestamps in the NTP packet as Reference,
Originate, Receive, and Transmit. The reader is left to guess where
T1 - T4 come from. I guessed wrong. Sorry about that.
Yes, I have read the darned thing though a lot of the math is over my head.
Post by David L. Mills
Richard,
The reference timestamp is not one of the four timestamps used to
calculate offset and delay. It is intended for use when calculating
the maximum error should some other method than dispersion be used to
determine it. the next three timestamps you mention are struck at the
times you mention; however, the fourth (destination) timestamp is
struck upon arrival of the message at the client. I can understand
your confusion, since there are four timestamps in the header;
however, the various briefings and RFCs are very clear on which four
are intended, and that's why I suggested Daniel see the briefing.
Dave
Post by Richard B. Gilbert
Post by Daniel Kabs
Hello David!
Post by David L. Mills
As to which timestamp is "correct" you will need to read the
architecture briefing on the NTP project page.
That's not fair. I just wanted to use NTP as a tool to measure the
time drift of my system clock and now you pull the dreaded "read the
architecture briefing" weapon on me. What have I done to you to
deserve this? :-)
Post by David L. Mills
While at it, understand
the raw offset measurement does not reflect the actual clock offset,
as the latter is determined by the clock discipline algorithm
described in the briefings on the project page. [...]
You are talking about "clock discipline". That's confusing me as I
am running ntpd using option "disable ntp" which (according to your
implementation documentation) should disable time and frequency discipline.
Cheers
Daniel
You could probably use any of the four time stamps if you measure
over a long enough period.
Reference: the time the local clock was last set. (This makes no
sense to me but that's what the RFC says! It would make more sense
if it were the time the reply was received by the client.)
Originate: the time the request packet left your system
Receive: the time the request packet arrived at the server
Transmit: the time the reply packet departed the server
I would guess that the transmit timestamp would provide the best
accuracy if you have to measure over a short interval. Since the
drift of your local clock is changing constantly, if slowly, I would
recommend measuring over an interval of at least twenty-four hours.
If the environment in which your devices are used is not similar in
temperature and temperature variation, the whole effort is probably a
waste of time!!
Richard B. Gilbert
2006-02-17 04:26:57 UTC
Dave,

Thanks for the pointer to the draft protocol spec. It does explain
things a little more clearly.

If I may get very picky, I spotted a couple of problems with the
document. The first was the word "ant" where I believe you meant
"and". The other was the use of "decimal point" when referring to a
binary word. I think that "binary point" would be a better choice.
Post by David L. Mills
Richard,
Well, I wrote 1305 fourteen years ago when I was just a kid. The on-wire
draft< protocol spec for NTPv4 now on the project page at
http://www.eecis.udel.edu/~mills/database/brief/flow/ntp4.pdf is
hopefully much more explicit.
Dave
Dave,
In that case, I think RFC 1305 needs some clarification. Page 100
David L. Mills
2006-02-17 21:14:51 UTC
Richard,

Thanks for the review. I'm happy to hear from others.

Dave
Post by Richard B. Gilbert
Dave,
Thanks for the pointer to the draft protocol spec. It does explain
things a little more clearly.
If I may get very picky, I spotted a couple of problems with the
document. The first was the word "ant" where I believe you meant
"and". The other was the use of "decimal point" when referring to a
binary word. I think that "binary point" would be a better choice.
Post by David L. Mills
Richard,
Well, I wrote 1305 fourteen years ago when I was just a kid. The
on-wire >draft< protocol spec for NTPv4 now on the project page at
http://www.eecis.udel.edu/~mills/database/brief/flow/ntp4.pdf is
hopefully much more explicit.
Dave
Post by Richard B. Gilbert
Dave,
In that case, I think RFC 1305 needs some clarification. Page 100
Per Hedeland
2006-02-17 08:25:13 UTC
Post by Richard B. Gilbert
In that case, I think RFC 1305 needs some clarification. Page 100
refers to these times as "T1, T2, T3, and T4" and they are not otherwise
defined. Page 50 defines the timestamps in the NTP packet as Reference,
Originate, Receive, and Transmit. The reader is left to guess where
T1 - T4 come from. I guessed wrong. Sorry about that.
Yes, I have read the darned thing though a lot of the math is over my head.
Hm, T1 - T4 are defined in figure 14. Of course the .txt version of 1305
doesn't have figures, and trying to read the formulas is futile even if
you do understand the math. Try the PDF version, e.g. at
http://www.faqs.org/rfc/rfc1305.pdf (I guess the PostScript version can
also be found somewhere).

Incidentally, RFC 2030 (SNTP) has "textual" definitions of this basic
stuff.

--Per Hedeland
***@hedeland.org
Richard B. Gilbert
2006-02-17 16:47:44 UTC
Post by Per Hedeland
Post by Richard B. Gilbert
In that case, I think RFC 1305 needs some clarification. Page 100
<snip>
Post by Per Hedeland
Hm, T1 - T4 are defined in figure 14. Of course the .txt version of 1305
doesn't have figures, and trying to read the formulas is futile even if
you do understand the math. Try the PDF version, e.g. at
http://www.faqs.org/rfc/rfc1305.pdf (I guess the PostScript version can
also be found somewhere).
<snip>

I have the PDF version! Figure 14 does "define" them but does not
connect them to the corresponding timestamps and/or variables in the
packet or in the software.
Per Hedeland
2006-02-17 19:53:32 UTC
Permalink
Post by Richard B. Gilbert
Post by Per Hedeland
Post by Richard B. Gilbert
In that case, I think RFC 1305 needs some clarification. Page 100
<snip>
Post by Per Hedeland
Hm, T1 - T4 are defined in figure 14. Of course the .txt version of 1305
doesn't have figures, and trying to read the formulas is futile even if
you do understand the math. Try the PDF version, e.g. at
http://www.faqs.org/rfc/rfc1305.pdf (I guess the PostScript version can
also be found somewhere).
<snip>
I have the PDF version! Figure 14 does "define" them but does not
connect them to the corresponding timestamps and/or variables in the
packet or in the software.
Not formally perhaps, but it does clearly define the timestamps in the
packet:

Originate Timestamp: This is the local time at which the request
departed the client host for the service host, in 64-bit timestamp
format.
Receive Timestamp: This is the local time at which the request arrived
at the service host, in 64-bit timestamp format.
Transmit Timestamp: This is the local time at which the reply departed
the service host for the client host, in 64-bit timestamp format.

Mapping this description to figure 14 shouldn't be insurmountable...
Mapping to the/some software is clearly outside the scope of any RFC.

--Per Hedeland
***@hedeland.org
David Woolley
2006-02-17 07:18:34 UTC
Permalink
Post by Richard B. Gilbert
Reference: the time the local clock was last set. (This makes no sense
to me but that's what the RFC says! It would make more sense if it were
the time the reply was received by the client.)
Reference time is the time on the *server* when the last change in best
offset measurement happened. It can be relatively far in the past and is
useless for the current purpose. It is not a time on the local machine.
Post by Richard B. Gilbert
Originate: the time the request packet left your system
Receive: the time the request packet arrived at the server
Transmit: the time the reply packet departed the server
The measurement takes a finite time to make (delay I think, but it might
be twice delay). Basically it takes between Originate and Originate + delay
(the client receive time isn't recorded here). Therefore you cannot state
one specific time at which the measurement was made. Using any of the
latter 3 should be OK as long as you always use the same one. Note that,
as the clock isn't being disciplined, Originate may differ drastically
from Receive and Transmit (i.e. by offset).

I think you asked how I measured the standing frequency error to calibrate
to 30 seconds a year. At the time, the office didn't have internet
access, so I used a radio controlled wristwatch and ran the command
"netdate localhost" on an exact second. To get the exact second, I typed
all of the command except for the carriage return, then got my finger
tapping lightly in time with the seconds, and, once I had the rhythm, did
one final hard tap. That seemed to be repeatable to better than 100ms.
(I think I first forced the watch to update.)

The final fine tuning was done over about a week. I'd now do ntpdate
over a baseline of about a week. In that case, I then set the drift
value, so subsequent calibrations were of the residual error. Nowadays,
I use ntptime, to set the kernel parameters, and don't run ntpd at all.
When I'm correcting phase, I make sure my modem is idle before issuing
the ntpdate command. I don't use ntpd in one shot mode because I don't
want the frequency disturbed.
David L. Mills
2006-02-17 21:24:01 UTC
Permalink
David,

Been there. Type the command except the CR. Listen to WWV and swiggle
the second finger to the tick. At the long tone poke the CR. I can get
it to within 20 ms. Can anybody else do better?

Dave
Post by David Woolley
Post by Richard B. Gilbert
Reference: the time the local clock was last set. (This makes no sense
to me but that's what the RFC says! It would make more sense if it were
the time the reply was received by the client.)
Reference time is the time on the *server* when the last change in best
offset measurement happened. It can be relatively far in the past and is
useless for the current purpose. It is not a time on the local machine.
Post by Richard B. Gilbert
Originate: the time the request packet left your system
Receive: the time the request packet arrived at the server
Transmit: the time the reply packet departed the server
The measurement takes a finite time to make (delay I think, but it might
be twice delay). Basically it takes between Originate and Originate + delay
(the client receive time isn't recorded here). Therefore you cannot state
one specific time at which the measurement was made. Using any of the
latter 3 should be OK as long as you always use the same one. Note that,
as the clock isn't being disciplined, Originate may differ drastically
from Receive and Transmit (i.e. by offset).
I think you asked how I measured the standing frequency error to calibrate
to 30 seconds a year. At the time, the office didn't have internet
access, so I used a radio controlled wristwatch and ran the command
"netdate localhost" on an exact second. To get the exact second, I typed
all of the command except for the carriage return, then got my finger
tapping lightly in time with the seconds, and, once I had the rhythm, did
one final hard tap. That seemed to be repeatable to better than 100ms.
(I think I first forced the watch to update.)
The final fine tuning was done over about a week. I'd now do ntpdate
over a baseline of about a week. In that case, I then set the drift
value, so subsequent calibrations were of the residual error. Nowadays,
I use ntptime, to set the kernel parameters, and don't run ntpd at all.
When I'm correcting phase, I make sure my modem is idle before issuing
the ntpdate command. I don't use ntpd in one shot mode because I don't
want the frequency disturbed.
Daniel Kabs
2006-02-22 07:47:28 UTC
Permalink
Hello David!
Post by David Woolley
Post by Richard B. Gilbert
Originate: the time the request packet left your system
Receive: the time the request packet arrived at the server
Transmit: the time the reply packet departed the server
The measurement takes a finite time to make (delay I think, but it might
be twice delay). Basically it takes between Originate and Originate + delay
(the client receive time isn't recorded here). Therefore you cannot state
one specific time at which the measurement was made. Using any of the
latter 3 should be OK as long as you always use the same one. Note that,
as the clock isn't being disciplined, Originate may differ drastically
from Receive and Transmit (i.e. by offset).
Thank you for clarifying the meanings and usage of the timestamps. I'll
try to include the explanations which everybody contributed to this
thread on
https://ntp.isc.org/bin/view/Support/HowToCalibrateSystemClockUsingNTP
Post by David Woolley
... Nowadays, I use ntptime, to set the kernel parameters,
and don't run ntpd at all....
Using "ntptime" is a great suggestion. I planned to use "adjtimex" for
this job but it's more than twice the size of "ntptime". So I'd rather
use the latter one to save some kBytes :-)

Cheers
Daniel
--
Refactor, don't archive! - SamHasler - 28 Aug 2004 - twiki.org
Danny Mayer
2006-02-17 03:36:45 UTC
Permalink
Post by Richard B. Gilbert
Post by Daniel Kabs
Hello David!
Post by David L. Mills
As to which timestamp is "correct" you will need to read the
architecture briefing on the NTP project page.
That's not fair. I just wanted to use NTP as a tool to measure the
time drift of my system clock and now you pull the dreaded "read the
architecture briefing" weapon on me. What have I done to you to
deserve this? :-)
Post by David L. Mills
While at it, understand
the raw offset measurement does not reflect the actual clock offset,
as the latter is determined by the clock discipline algorithm
described in the briefings on the project page. [...]
You are talking about "clock discipline". That's confusing me as I am
running ntpd using option "disable ntp" which (according to your
implementation documentation) should disable time and frequency
discipline.
Cheers
Daniel
You could probably use any of the four time stamps if you measure over a
long enough period.
Reference: the time the local clock was last set. (This makes no sense
to me but that's what the RFC says! It would make more sense if it were
the time the reply was received by the client.)
Originate: the time the request packet left your system
Receive: the time the request packet arrived at the server
Transmit: the time the reply packet departed the server
I would guess that the transmit timestamp would provide the best
accuracy if you have to measure over a short interval. Since the drift
of your local clock is changing constantly, if slowly, I would recommend
measuring over an interval of at least twenty-four hours.
If the environment in which your devices are used is not similar in
temperature and temperature variation, the whole effort is probably a
waste of time!!
No, that doesn't matter. Different devices are expected to have not only
different temperatures but also different clock accuracies,
fluctuations, etc.

An interesting variation on having NTP discipline your clock: you could
instead use it to measure the temperature fluctuations of your clock!

Danny
Hal Murray
2006-02-17 21:31:28 UTC
Permalink
Post by Danny Mayer
An interesting variation on having NTP discipline your clock: you could
instead use it to measure the temperature fluctuations of your clock!
It's pretty easy to see the correlations between temperature and drift.
They are much better if you measure the temperature of the crystal
rather than the temperature of the air leaving your box.

The HP/Agilent 2804A Quartz Thermometer used a crystal as a temperature
probe. I gather it was pretty good, but I can't find a manual online.
--
The suespammers.org mail server is located in California. So are all my
other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited
commercial e-mail to my suespammers.org address or any of my other addresses.
These are my opinions, not necessarily my employer's. I hate spam.
Danny Mayer
2006-02-17 04:34:44 UTC
Permalink
Post by David L. Mills
Richard,
Well, I wrote 1305 fourteen years ago when I was just a kid. The on-wire
>draft< protocol spec for NTPv4 now on the project page at
http://www.eecis.udel.edu/~mills/database/brief/flow/ntp4.pdf is
hopefully much more explicit.
Dave
Meaning that Dave is only about 30 now! :) There are rumors that he used
NTP to turn back the clock to make himself younger! :)

Danny
David L. Mills
2006-02-17 21:19:14 UTC
Permalink
Danny,

That would make my daughter born 11 years after me. Awesome.

Once upon a time I talked my way around the world and crossed the date
line from the west. That gave me an extra day in my life and I didn't
even need to fake the time.

Dave
Post by Danny Mayer
Post by David L. Mills
Richard,
Well, I wrote 1305 fourteen years ago when I was just a kid. The on-wire
>draft< protocol spec for NTPv4 now on the project page at
http://www.eecis.udel.edu/~mills/database/brief/flow/ntp4.pdf is
hopefully much more explicit.
Dave
Meaning that Dave is only about 30 now! :) There are rumors that he used
NTP to turn back the clock to make himself younger! :)
Danny
_______________________________________________
questions mailing list
https://lists.ntp.isc.org/mailman/listinfo/questions
Daniel Kabs
2006-02-16 09:25:43 UTC
Permalink
Hello Professor Mills!
Post by David L. Mills
The peer offset is recorded for each client/server exchange. Let it run
for a few hours and record the time of day and the offsets at the
beginning and end. Subtract and divide by the interval. It's really not
complicated.
The calculation is not complicated, that's correct. Getting the precise
time is. Please see the example as follows.

According to "Plan B", NTP is configured with "disable ntp". I ran a
cron job to query the peer clock variables using ntpq -c 'rv
<assoc_id_of_peer>' at 9:00 and 10:00. The billboards are attached below
as reference.

During the two "measurements", the offset (peer.offset) increased by
975.174 ms.

Given the cron jobs were started on time (which I doubt :-), the
"measurement" interval was 3600 s, thus the daily time offset of my
system clock will be:

975.174 / 3600 * 86400 / 1000 s = 23.40 s

However, the difference in peer.reftime (8:58:38.253 to 9:58:47.253)
gives an interval of 3609 s, thus the extrapolation yields:

975.174 / 3609 * 86400 / 1000 s = 23.35 s

The difference in peer.org is 3615.973 s, so the rule of proportions gives:

975.174 / 3615.973 * 86400 / 1000 s = 23.30 s

And finally peer.rec gives a time difference of 3616.977 s, making a
daily time offset of

975.174 / 3616.977 * 86400 / 1000 s = 23.29 s
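The four extrapolations above can be reproduced with a short script. This is purely illustrative: the numbers are copied from the message, and the variable names are mine, not ntpq's.

```python
# Extrapolate daily clock drift from the change in peer.offset over
# one "measurement" interval. Offsets are in milliseconds, intervals
# in seconds, all taken from the billboards quoted below.
delta_offset_ms = 975.174           # increase in peer.offset

intervals_s = {
    "cron":    3600.0,              # nominal cron-job spacing
    "reftime": 3609.0,              # difference in peer.reftime
    "org":     3615.973,            # difference in peer.org
    "rec":     3616.977,            # difference in peer.rec
}

for name, interval_s in intervals_s.items():
    drift_s_per_day = delta_offset_ms / 1000.0 / interval_s * 86400.0
    drift_ppm = drift_s_per_day / 86400.0 * 1e6
    print(f"{name:8s} {drift_s_per_day:6.2f} s/day = {drift_ppm:6.2f} PPM")
```

The spread across the four interval choices is a few hundredths of a second per day, which is the point made later in the thread about measurement uncertainty.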


I'd like to know which calculation is most accurate. What is the
timestamp to record when reading the offset?


Cheers
Daniel Kabs


PS: The billboards look like this
# Reading at 9:00 AM
assID=32468 status=9614 reach, conf, sel_sys.peer, 1 event, event_reach,
srcadr=10.0.0.254, srcport=123, dstadr=10.0.2.100, dstport=123, leap=00,
stratum=1, precision=-18, rootdelay=0.000, rootdispersion=6.516,
refid=DCFa, reach=377, unreach=0, hmode=3, pmode=4, hpoll=6, ppoll=6,
flash=00 ok, keyid=0, ttl=0, offset=-19583.268, delay=0.718,
dispersion=3.282, jitter=49.301,
reftime=c79eaf2e.40eef1ba Thu, Feb 16 2006 8:58:38.253,
org=c79eaf2e.c3a5c1c6 Thu, Feb 16 2006 8:58:38.764,
rec=c79eaf42.628d5410 Thu, Feb 16 2006 8:58:58.384,
xmt=c79eaf42.5a03d577 Thu, Feb 16 2006 8:58:58.351,
filtdelay= 33.29 2.73 0.72 2.75 2.80 2.54 5.90 2.70,
filtoffset= -19604. -19601. -19583. -19565. -19547. -19531. -19516. -19497.,
filtdisp= 0.03 1.01 1.95 2.90 3.86 4.80 5.79 6.75

# Reading at 10:00 AM
assID=32468 status=9614 reach, conf, sel_sys.peer, 1 event, event_reach,
srcadr=10.0.0.254, srcport=123, dstadr=10.0.2.100, dstport=123, leap=00,
stratum=1, precision=-18, rootdelay=0.000, rootdispersion=6.714,
refid=DCFa, reach=377, unreach=0, hmode=3, pmode=4, hpoll=6, ppoll=6,
flash=00 ok, keyid=0, ttl=0, offset=-20558.442, delay=0.679,
dispersion=3.734, jitter=45.116,
reftime=c79ebd47.40cf6be3 Thu, Feb 16 2006 9:58:47.253,
org=c79ebd4e.bcd4562e Thu, Feb 16 2006 9:58:54.737,
rec=c79ebd63.5c88509b Thu, Feb 16 2006 9:59:15.361,
xmt=c79ebd63.5a01abd1 Thu, Feb 16 2006 9:59:15.351,
filtdelay= 9.81 0.70 9.66 0.68 9.13 2.84 0.69 1.74,
filtoffset= -20618. -20595. -20581. -20558. -20538. -20524. -20508. -20490.,
filtdisp= 0.03 1.02 2.01 2.99 3.93 4.88 5.82 6.80
David L. Mills
2006-02-16 12:49:40 UTC
Permalink
Daniel,

Have you considered what an engineer means by "almost exact?" There is
always an uncertainty in any physical measurement. Yours is no exception.

Dave
Post by Daniel Kabs
Hello Professor Mills!
Post by David L. Mills
The peer offset is recorded for each client/server exchange. Let it
run for a few hours and record the time of day and the offsets at the
beginning and end. Subtract and divide by the interval. It's really
not complicated.
The calculation is not complicated, that's correct. Getting the precise
time is. Please see the example as follows.
According to "Plan B", NTP is configured with "disable ntp". I ran a
cron job to query the peer clock variables using ntpq -c 'rv
<assoc_id_of_peer>' at 9:00 and 10:00. The billboards are attached below
as reference.
During the two "measurements", the offset (peer.offset) increased by
975.174 ms.
Given the cron jobs were started on time (which I doubt :-), the
"measurement" interval was 3600 s, thus the daily time offset of my
975.174 / 3600 * 86400 / 1000 s = 23.40 s
However, the difference in peer.reftime (8:58:38.253 to 9:58:47.253)
975.174 / 3609 * 86400 / 1000 s = 23.35 s
975.174 / 3615.973 * 86400 / 1000 s = 23.30 s
And finally peer.rec gives a time difference of 3616.977 s, making a
daily time offset of
975.174 / 3616.977 * 86400 / 1000 s = 23.29 s
I'd like to know which calculation is most accurate. What is the
timestamp to record when reading the offset?
Cheers
Daniel Kabs
PS: The billboards look like this
# Reading at 9:00 AM
assID=32468 status=9614 reach, conf, sel_sys.peer, 1 event, event_reach,
srcadr=10.0.0.254, srcport=123, dstadr=10.0.2.100, dstport=123, leap=00,
stratum=1, precision=-18, rootdelay=0.000, rootdispersion=6.516,
refid=DCFa, reach=377, unreach=0, hmode=3, pmode=4, hpoll=6, ppoll=6,
flash=00 ok, keyid=0, ttl=0, offset=-19583.268, delay=0.718,
dispersion=3.282, jitter=49.301,
reftime=c79eaf2e.40eef1ba Thu, Feb 16 2006 8:58:38.253,
org=c79eaf2e.c3a5c1c6 Thu, Feb 16 2006 8:58:38.764,
rec=c79eaf42.628d5410 Thu, Feb 16 2006 8:58:58.384,
xmt=c79eaf42.5a03d577 Thu, Feb 16 2006 8:58:58.351,
filtdelay= 33.29 2.73 0.72 2.75 2.80 2.54 5.90 2.70,
filtoffset= -19604. -19601. -19583. -19565. -19547. -19531. -19516. -19497.,
filtdisp= 0.03 1.01 1.95 2.90 3.86 4.80 5.79 6.75
# Reading at 10:00 AM
assID=32468 status=9614 reach, conf, sel_sys.peer, 1 event, event_reach,
srcadr=10.0.0.254, srcport=123, dstadr=10.0.2.100, dstport=123, leap=00,
stratum=1, precision=-18, rootdelay=0.000, rootdispersion=6.714,
refid=DCFa, reach=377, unreach=0, hmode=3, pmode=4, hpoll=6, ppoll=6,
flash=00 ok, keyid=0, ttl=0, offset=-20558.442, delay=0.679,
dispersion=3.734, jitter=45.116,
reftime=c79ebd47.40cf6be3 Thu, Feb 16 2006 9:58:47.253,
org=c79ebd4e.bcd4562e Thu, Feb 16 2006 9:58:54.737,
rec=c79ebd63.5c88509b Thu, Feb 16 2006 9:59:15.361,
xmt=c79ebd63.5a01abd1 Thu, Feb 16 2006 9:59:15.351,
filtdelay=    9.81    0.70    9.66    0.68    9.13    2.84    0.69    1.74,
filtoffset= -20618. -20595. -20581. -20558. -20538. -20524. -20508. -20490.,
filtdisp= 0.03 1.02 2.01 2.99 3.93 4.88 5.82 6.80
Daniel Kabs
2006-02-17 09:43:25 UTC
Permalink
Hello Professor Mills!

Apropos engineers: any engineer strives for high precision measurements
and thus tries to minimize uncertainty to a reasonable level. So for any
method of measurement, the magnitude of uncertainty has to be determined
and assessed. That's what I am trying to do currently :-)

Enter Plan B: You suggested to "record the time of day and the offset",
without saying how you'd record the time. So I tried different methods
(for "recoding the time") and found

cron : 23.40 s = 270.83 PPM
reftime: 23.35 s = 270.25 PPM
org : 23.30 s = 269.67 PPM
rec : 23.29 s = 269.56 PPM

Mh, maybe I should have converted the values to PPMs earlier! They look
much nicer now. Leaving out the "cron job", they agree within +/- 0.5
PPM. Of course, this should be backed by repeating the "experiment" and
checking how the values scatter (disperse? I don't know the proper
English word here).

You are right, this uncertainty is reasonable. So I should stop bugging
you about timestamps :-)

Mission accomplished! Thanks.
Bye now,
Daniel
Post by David L. Mills
Daniel,
Have you considered what an engineer means by "almost exact?" There is
always an uncertainty in any physical measurement. Yours is no exception.
David L. Mills
2006-02-17 22:02:25 UTC
Permalink
Daniel,

You might get a better understanding of the error model from the
briefings at the NTP project page. The statistic of interest is the
Allan variance, which describes the clock frequency stability as a
function of averaging time.
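For readers unfamiliar with the term, the simplest (non-overlapping) form of the Allan variance is half the mean squared difference between successive fractional-frequency averages. A minimal sketch, with fabricated sample values:

```python
def allan_variance(freqs):
    """Non-overlapping Allan variance of successive fractional-frequency
    averages y_i, each taken over the same fixed averaging time tau."""
    diffs = [(b - a) ** 2 for a, b in zip(freqs, freqs[1:])]
    return sum(diffs) / (2 * len(diffs))

# Fabricated fractional-frequency samples (dimensionless, ~270 PPM):
y = [270.1e-6, 270.4e-6, 269.9e-6, 270.2e-6]
print(allan_variance(y) ** 0.5)  # Allan deviation
```

Plotting the Allan deviation against the averaging time tau is what reveals the stability floor of the oscillator that Dave refers to.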

Dave
Post by Daniel Kabs
Hello Professor Mills!
Apropos engineers: any engineer strives for high precision measurements
and thus tries to minimize uncertainty to a reasonable level. So for any
method of measurement the magnitude of uncertainty has to be determined
and valued. That's what I am trying to do currently :-)
Enter Plan B: You suggested to "record the time of day and the offset",
without saying how you'd record the time. So I tried different methods
(for "recoding the time") and found
cron : 23.40 s = 270.83 PPM
reftime: 23.35 s = 270.25 PPM
org : 23.30 s = 269.67 PPM
rec : 23.29 s = 269.56 PPM
Mh, maybe I should have converted the values to PPMs earlier! They look
much nicer now. Leaving out the "cron job", they agree within +/- 0.5
PPM. Of course, this should be backed by repeating the "experiment" and
checking how the values scatter (disperse? I don't know the proper
English word here).
You are right, this uncertainty is reasonable. So I should stop bugging
you about timestamps :-)
Mission accomplished! Thanks.
Bye now,
Daniel
Post by David L. Mills
Daniel,
Have you considered what an engineer means by "almost exact?" There is
always an uncertainty in any physical measurement. Yours is no exception.
Daniel Kabs
2006-02-01 18:36:21 UTC
Permalink
Hello Professor Mills,

I tried both of your suggestions and the results differ slightly:

Plan A)

After running NTP daemon for two days, the frequency converges to 268.3
PPM, i.e. 23.2 seconds per day.


Plan B)

Running NTP daemon using "disable ntp", I recorded the offset of the
associated peer periodically for a couple of hours. A least-squares fit
gave a slope of 23.7 seconds per day. (At the same time I recorded the
offset using deprecated ntpdate and got 23.8 seconds per day).

Please see the diagrams on

https://ntp.isc.org/bin/view/Support/HowToCalibrateSystemClockUsingNTPDev


I wonder if this difference shows the maximum precision (i.e. 500
ms/day) I will achieve with these calibration procedures or if I'm doing
something systematically wrong.


Cheers
Daniel
Post by David L. Mills
Daniel,
Plan A
1. Run ntptime -f 0 to remove any leftover kernel bias.
2. Configure for a reliable server over a quiet network link.
3. Remove the frequency file ntp.drift.
4. Start the daemon and wait for at least 15 minutes until the state
shows 4. Record the frequency offset shown with the ntpq rv command. It
should be within 1 PPM of the actual frequency offset. For enhanced
confidence, wait until the first frequency file update after one hour or so.
Plan B
1. Run ntptime -f 0 to remove any leftover kernel bias.
2. Configure for a reliable server over a quiet network link.
3. Start the daemon with disable ntp in the configuration file.
4. Record the offset over a period of hours. Do a least-squares fit; the
regression line slope is the frequency.
Dave
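Step 4 of Plan B can be sketched in a few lines: fit a straight line to (elapsed time, offset) samples and read the frequency error off the slope. The sample data here is invented for illustration, roughly matching the ~270 PPM drift discussed in this thread.

```python
# Least-squares fit of clock offset vs. elapsed time. The slope is the
# fractional frequency error (dimensionless); multiply by 1e6 for PPM,
# or by 86400 for seconds per day.
# Fabricated samples: (elapsed seconds, offset in seconds).
samples = [(0, 0.000), (3600, 0.975), (7200, 1.952), (10800, 2.928)]

n = len(samples)
sum_t = sum(t for t, _ in samples)
sum_o = sum(o for _, o in samples)
sum_tt = sum(t * t for t, _ in samples)
sum_to = sum(t * o for t, o in samples)

slope = (n * sum_to - sum_t * sum_o) / (n * sum_tt - sum_t ** 2)
print(f"frequency error: {slope * 1e6:.1f} PPM ({slope * 86400:.1f} s/day)")
```

With real data one would collect offsets over several hours, as the recipe says, so that measurement jitter averages out of the fit.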
Terje Mathisen
2006-02-01 19:57:33 UTC
Permalink
Post by Daniel Kabs
Hello Professor Mills,
Plan A)
After running NTP daemon for two days, the frequency converges to 268.3
PPM, i.e. 23.2 seconds per day.
Plan B)
Running NTP daemon using "disable ntp", I recorded the offset of the
associated peer periodically for a couple of hours. A least-squares fit
gave a slope of 23.7 seconds per day. (At the same time I recorded the
offset using deprecated ntpdate and got 23.8 seconds per day).
This is well within the expected precision for such an experiment, the
final ntp.drift value (23.2 or 268.3) probably reflects the current
drift rate, not the average.

These two values are different because the environmental temperature
varies, often diurnally, so if you log the changes in ntp.drift then
you'll probably notice that the average corresponds closely to the
23.7/23.8 numbers.

Terje
--
- <***@hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"
Daniel Kabs
2006-02-02 09:34:38 UTC
Permalink
Hello Terje!
Post by Terje Mathisen
This is well within the expected precision for such an experiment, the
final ntp.drift value (23.2 or 268.3) probably reflects the current
drift rate, not the average.
These two values are different because the environmental temperature
varies, often diurnally, so if you log the changes in ntp.drift then
you'll probably notice that the average corresponds closely to the
23.7/23.8 numbers.
Did I understand you correctly: you are suggesting that least-squares
fitting the time offset yields an average value, whereas the
frequency error (ntp.drift value) represents a "live" value.

I expected it to be the other way round: I thought the frequency error
is a "slow" value that takes hours or days to converge as a result of
the control loop phasing in and as such can only slowly react to
environmental changes (e.g. change in temperature). This contrasts to
measuring the time offset over a short period which gives a "snapshot"
of the current clock drift and as such represents current environmental
effects.

What am I getting wrong here?

Cheers
Daniel
Richard B. Gilbert
2006-02-02 14:25:37 UTC
Permalink
Post by Daniel Kabs
Hello Terje!
Post by Terje Mathisen
This is well within the expected precision for such an experiment,
the final ntp.drift value (23.2 or 268.3) probably reflects the
current drift rate, not the average.
These two values are different because the environmental temperature
varies, often diurnally, so if you log the changes in ntp.drift then
you'll probably notice that the average corresponds closely to the
23.7/23.8 numbers.
Did I understand you correctly: You are insinuating that least-squares
fitting the time offset is getting an average value whereas the
frequency error (ntp.drift value) represents a "live" value.
I expected it to be the other way round: I thought the frequency error
is a "slow" value that takes hours or days to converge as a result of
the control loop phasing in and as such can only slowly react to
environmental changes (e.g. change in temperature). This contrasts to
measuring the time offset over a short period which gives a
"snapshot" of the current clock drift and as such represents current
environmental effects.
What am I getting wrong here?
Cheers
Daniel
Ntpd begins correcting the frequency about five minutes after it is
started (about twenty seconds with iburst). Thereafter, the error is
recomputed at each poll interval and corrections made if necessary; by
default, this would be not less often than every 1024 seconds. The
current value is written to the drift file once each hour, to provide a
reasonable starting value if ntpd is restarted.

So, yes, it is a "live" value. It would react slowly to large "steps"
in the oscillator frequency but large steps are not the expected
behavior because temperature changes do not normally occur in large steps.
Terje Mathisen
2006-02-02 21:36:14 UTC
Permalink
Post by Daniel Kabs
Hello Terje!
Post by Terje Mathisen
This is well within the expected precision for such an experiment, the
final ntp.drift value (23.2 or 268.3) probably reflects the current
drift rate, not the average.
These two values are different because the environmental temperature
varies, often diurnally, so if you log the changes in ntp.drift then
you'll probably notice that the average corresponds closely to the
23.7/23.8 numbers.
Did I understand you correctly: You are insinuating that least-squares
fitting the time offset is getting an average value whereas the
frequency error (ntp.drift value) represents a "live" value.
I expected it to be the other way round: I thought the frequency error
is a "slow" value that takes hours or days to converge as a result of
the control loop phasing in and as such can only slowly react to
environmental changes (e.g. change in temperature). This contrasts to
measuring the time offset over a short period which gives a "snapshot"
of the current clock drift and as such represents current environmental
effects.
What am I getting wrong here?
Afaik ntp.drift normally carries about an hour's worth of history.

It is rewritten every hour.

Terje
--
- <***@hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"
David Woolley
2006-02-03 07:32:51 UTC
Permalink
Post by Terje Mathisen
Afaik ntp.drift normally carries about an hour's worth of history.
The complete control loop for ntpd has an infinite impulse response,
so the ntp.drift value has an infinite history. The period that
accounts for most of the contribution to the value depends on an
adaptive algorithm that starts by making the response fast and slows
it down as confidence increases, reducing it again if confidence drops.
This is related to the poll interval, but not in a simple way.
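As an illustration only (this is not ntpd's actual discipline, which is an adaptive phase/frequency-locked loop), an exponentially weighted average shows how an infinite-impulse-response estimate carries unbounded history while weighting recent samples most heavily:

```python
# Exponentially weighted moving average: each update keeps a fraction
# (1 - alpha) of the old estimate, so every past sample contributes
# forever, with geometrically decaying weight.
def ewma(samples, alpha=0.125):
    estimate = samples[0]
    for s in samples[1:]:
        estimate += alpha * (s - estimate)
    return estimate

# Fabricated frequency readings (PPM) drifting with temperature:
readings = [268.3, 268.9, 269.6, 270.4, 271.0]
print(ewma(readings))
```

A smaller alpha lengthens the effective memory; this loosely mirrors how a longer poll interval slows the loop's response.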

Nick McClaren has, in the past, proposed the use of finite impulse
response processing, using a statistical fit rather than the current
(to a first approximation) linear feedback loop. That's
basically having it continually do linear regressions.

(The big problems seem to be that the response is still too slow when
initially acquiring lock and the frequency response time is reduced too
much after a subsequent time step (lost interrupt, server hop, or people
breaking the clock to test the ability to track the time).)

Incidentally, I find a simple average is good enough to get 30 second
a year accuracy in an air conditioned environment, providing you do it
over about a week.
Daniel Kabs
2006-02-03 09:38:12 UTC
Permalink
Post by David Woolley
Incidentally, I find a simple average is good enough to get 30 second
a year accuracy in an air conditioned environment, providing you do it
over about a week.
I'd like to know: how do you measure the frequency error of your system
clock? Are you reading the ntp.drift file after running ntpd for some days?


Cheers
Daniel
David L. Mills
2006-02-03 14:33:23 UTC
Permalink
Daniel,

The frequency measurement uses the feedback loop described in detail on
the architecture and clock discipline briefings on the NTP project page
linked via www.ntp.org. A simple average is not good enough as it does
not account for the heavy-tail effects due to the random-walk character
of the clock oscillator frequency.

Dave
Post by Daniel Kabs
Post by David Woolley
Incidentally, I find a simple average is good enough to get 30 second
a year accuracy in an air conditioned environment, providing you do it
over about a week.
I'd like to know: how do you measure the frequency error of your system
clock? Are you reading the ntp.drift file after running ntpd for some days?
Cheers
Daniel
Daniel Kabs
2006-02-06 10:15:32 UTC
Permalink
Hello Professor Mills,

I have a lot of respect for the delicate and sophisticated NTP
internals. Actually too much respect to get in touch with them.

Rather I'm using NTP as a tool. A tool that measures (or helps me to
measure) the system clock's frequency error.

Cheers
Daniel
Post by David L. Mills
Daniel,
The frequency measurement uses the feedback loop described in detail on
the architecture and clock discipline briefings on the NTP project page
linked via www.ntp.org. A simple average is not good enough as it does
not account for the heavy-tail effects due to the random-walk character
of the clock oscillator frequency.
Terje Mathisen
2006-02-03 11:02:16 UTC
Permalink
Post by David Woolley
Post by Terje Mathisen
Afaik ntp.drift normally carries about an hour's worth of history.
The complete control loop for ntpd has an infinite impulse response,
so the ntp.drift value has an infinite history. The period that
accounts for most of the contribution to the value depends on an
adaptive algorithm that starts by making the response fast and slows
it down as confidence increases, reducing it again if confidence drops.
This is related to the poll interval, but not in a simple way.
Right, I should possibly have mentioned how most of the statistical
measures use exponential averaging, but I was just trying to persuade
the OP that it was unrealistic to expect better than PPM agreement
between various methods to determine the "average" drift value.

Mea culpa.

(My MS is in EE, so I'm supposed to know a bit about control theory as
well. :-)
Post by David Woolley
Nick McClaren has, in the past, proposed the use of finite impulse
response processing, using a statistical fit rather than the
current (to a first approximation, linear) feedback loop. That's
basically having it continually do linear regressions.
(The big problems seem to be that the response is still too slow when
initially acquiring lock and the frequency response time is reduced too
much after a subsequent time step (lost interrupt, server hop, or people
breaking the clock to test the ability to track the time).)
Incidentally, I find a simple average is good enough to get 30 second
a year accuracy in an air conditioned environment, providing you do it
over about a week.
I first did this around 1985 when I wrote a DOS TSR program (remember
those?) which took over all timing-related functions in the OS & BIOS. I
calibrated it with two modem calls to the Swedish UTC clock facility,
with 24 hours between them, then let it free-run for a week.

The resulting offset was 60 ms. :-)

Terje
--
- <***@hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"
Daniel Kabs
2006-02-06 11:39:23 UTC
Permalink
Hello Terje!
... I was just trying to persuade the OP that it was unrealistic to expect
better than PPM agreement between various methods to determine the
"average" drift value.

Of course I took your comments about the precision that one can
reasonably expect from these experiments seriously. I didn't mean to
argue against them. I just wondered whether the two experiments showed a
difference of 500 ms per day because I was actually measuring different
values (live vs. averaged drift value).
[two modem calls ... free-run for a week]
The resulting offset was 60 ms. :-)
You achieved an exceptionally good precision of 60 ms per week with just
two phone calls. I have all the "intelligence" of NTP and only achieve
500 ms per day. I should try harder, don't you think? :-)


Cheers
Daniel
Terje Mathisen
2006-02-06 13:51:03 UTC
Permalink
Post by Daniel Kabs
[two modem calls ... free-run for a week]
The resulting offset was 60 ms. :-)
You achieved an exceptionally good precision of 60 ms per week with just
two phone calls. I have all the "intelligence" of NTP and only achieve
500 ms per day. I should try harder, don't you think? :-)
Maybe.

I suspect that my 60 ms was simply a case of beginner's luck. :-)

Terje
Richard B. Gilbert
2006-02-02 00:25:50 UTC
Permalink
Post by Daniel Kabs
Hello Professor Mills,
Plan A)
After running NTP daemon for two days, the frequency converges to
268.3 PPM, i.e. 23.2 seconds per day.
Plan B)
Running NTP daemon using "disable ntp", I recorded the offset of the
associated peer periodically for a couple of hours. A least-squares
fit gave a slope of 23.7 seconds per day. (At the same time I recorded
the offset using deprecated ntpdate and got 23.8 seconds per day).
Please see the diagrams on
https://ntp.isc.org/bin/view/Support/HowToCalibrateSystemClockUsingNTPDev
I wonder if this difference shows the maximum precision (i.e. 500
ms/day) I will achieve with these calibration procedures or if I'm
doing something systematically wrong.
Cheers
Daniel
What problem are you trying to solve?

If you want to make a one-time correction to your clock frequency, 500
ms/day may be a reasonable objective. As Terje pointed out, the
frequency varies with temperature and the temperature varies with the
time of day, season of the year, whether the heat is on or off, etc.
The frequency will also change, slowly, as the crystal ages.

Whatever you set the frequency to, today, will probably be not quite
right for tomorrow, next week, etc.

This is why we run ntpd on our computers to synchronize our clocks to an
atomic clock somewhere. The atomic clock is several orders of
magnitude better than the undisciplined local clock and ntpd can
generally hold your local clock within +/- 20 milliseconds of the
correct time using servers on the internet. With a hardware reference
clock (GPS timing receiver) and a judicious choice of hardware and
operating system it is possible to hold the local clock within, perhaps,
+/-50 microseconds of the correct time.
Wolfgang S. Rupprecht
2006-02-02 08:37:12 UTC
Permalink
Post by Richard B. Gilbert
If you want to make a one-time correction to your clock frequency, 500
ms/day may be a reasonable objective. As Terje pointed out, the
frequency varies with temperature and the temperature varies with the
time of day, season of the year, whether the heat is on or off, etc.
The frequency will also change, slowly, as the crystal ages.
Modern motherboards all seem to have the ability to read the ambient
temperature. It might be possible to null out some of the temperature
variation of the xtal by generating a table of ppm-offset vs. reported
temperature while ntpd is running. Then when the system runs open
loop at some later time, some fairly simple hack can load different
ppm corrections depending on the reported temperature.
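The table-lookup idea above could be sketched roughly as follows (the table values and function names are hypothetical; ntpd itself has no such mechanism): record the disciplined frequency offset at several observed temperatures, then interpolate a correction when running open loop.

```python
import bisect

# Hypothetical calibration table built while ntpd was disciplining the
# clock: (ambient temperature in C, measured frequency offset in PPM).
CAL_TABLE = [(20.0, 11.2), (25.0, 12.0), (30.0, 13.1), (35.0, 14.5)]

def ppm_for_temp(temp_c):
    """Linearly interpolate a PPM correction for the reported temperature,
    clamping to the table endpoints outside the calibrated range."""
    temps = [t for t, _ in CAL_TABLE]
    i = bisect.bisect_left(temps, temp_c)
    if i == 0:
        return CAL_TABLE[0][1]
    if i == len(CAL_TABLE):
        return CAL_TABLE[-1][1]
    (t0, p0), (t1, p1) = CAL_TABLE[i - 1], CAL_TABLE[i]
    return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)
```

The interpolated value could then be fed to something like adjtimex as the open-loop frequency correction.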

-wolfgang
--
Wolfgang S. Rupprecht http://www.wsrcc.com/wolfgang/
Direct SIP URL Dialing: http://www.wsrcc.com/wolfgang/phonedirectory.html
Hal Murray
2006-02-02 10:28:24 UTC
Permalink
Post by Wolfgang S. Rupprecht
Modern motherboards all seem to have the ability to read the ambient
temperature. It might be possible to null out some of the temperature
variation of the xtal by generating a table of ppm-offset vs. reported
temperature while ntpd is running. Then when the system runs open
loop at some later time, some fairly simple hack can load different
ppm corrections depending on the reported temperature.
http://www.ijs.si/time/temp-compensation/
--
The suespammers.org mail server is located in California. So are all my
other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited
commercial e-mail to my suespammers.org address or any of my other addresses.
These are my opinions, not necessarily my employer's. I hate spam.
David L. Mills
2006-02-02 19:16:37 UTC
Permalink
Wolfgang,

You might be missing an opportunity.

I have several times defended ntpd as a valuable diagnostic tool in that
it directly measures the frequency and indirectly measures the
environmental temperature. Use it to detect failed A/C or machine fans,
even fires in the machine room. Use it to detect a Halon release without
endangering machine room staff. Use it as a fingerprint for each
particular CPU; it might be useful in spam source identification. Of
course, we all use it to detect network and server outages. Who needs
the time? We have a quite sensitive network sounding and reporting tool.

Dave
Post by Wolfgang S. Rupprecht
Post by Richard B. Gilbert
If you want to make a one-time correction to your clock frequency, 500
ms/day may be a reasonable objective. As Terje pointed out, the
frequency varies with temperature and the temperature varies with the
time of day, season of the year, whether the heat is on or off, etc.
The frequency will also change, slowly, as the crystal ages.
Modern motherboards all seem to have the ability to read the ambient
temperature. It might be possible to null out some of the temperature
variation of the xtal by generating a table of ppm-offset vs. reported
temperature while ntpd is running. Then when the system runs open
loop at some later time, some fairly simple hack can load different
ppm corrections depending on the reported temperature.
-wolfgang
Daniel Kabs
2006-02-02 08:53:42 UTC
Permalink
Post by Richard B. Gilbert
Post by Daniel Kabs
Please see the diagrams on
https://ntp.isc.org/bin/view/Support/HowToCalibrateSystemClockUsingNTPDev
What problem are you trying to solve?
I want to calibrate the system clock on a number of embedded systems. To
measure the frequency error, I'm looking for a method that only takes a
few hours.

I'd prefer to use the frequency value that ntpd writes into the drift
file but my test (see URL above) showed too slow a convergence. I have
to add, I ran the test without the "iburst" server option so updating to
a new ntpd version that supports "iburst" will speed things up.
Post by Richard B. Gilbert
If you want to make a one-time correction to your clock frequency, 500
ms/day may be a reasonable objective.
It's good to have this recommendation. So 6 PPM is the goal I should
strive for.


Cheers
Daniel
Hal Murray
2006-02-03 10:44:35 UTC
Permalink
Post by Daniel Kabs
I want to calibrate the system clock on a number of embedded systems. To
measure the frequency error, I'm looking for a method that only takes a
few hours.
I'd prefer to use the frequency value that ntpd writes into the drift
file but my test (see URL above) showed too slow a convergence. I have
to add, I ran the test without the "iburst" server option so updating to
a new ntpd version that supports "iburst" will speed things up.
What sort of accuracy are you expecting?

The main contribution to changes in drift is temperature.

Does your environment have a stable temperature?

Are you calibrating your systems in the same location that
you will be running them in? (Or calibrating them on a test
bench and shipping them to a customer?)

If there are any significant daily temperature changes, you will
probably have to calibrate over a day or several. You might be
able to work out a procedure to calibrate your calibration procedure
to account for the current temperature.
--
Daniel Kabs
2006-02-06 12:36:01 UTC
Permalink
Hello Hal!
Post by Hal Murray
What sort of accuracy are you expecting?
I hoped to measure the time drift within (or better than) 1 PPM
Post by Hal Murray
The main contribution to changes in drift is temperature.
Right, I cooled the system down by about 30° and the clock drift
increased by about half a second per day (measured with Plan B).
Post by Hal Murray
Does your environment have a stable temperature?
As stable as "room temperature" gets :-)
Post by Hal Murray
Are you calibrating your systems in the same location that
you will be running them in? (Or calibrating them on a test
bench and shipping them to a customer?)
The latter. While the system is on the test bench, I wanted to measure
the time drift in order to calibrate the system clock. I also have a
temperature-controlled cabinet available so I may try this again in a
more stable environment, if needed. But up to now, I am faced with two
different methods for gathering the frequency offset: both give a
precise reading if repeated, but as the results differ, only one can be
accurate :-)


Cheers
Daniel
Richard B. Gilbert
2006-02-06 16:19:49 UTC
Permalink
Post by Daniel Kabs
Hello Hal!
Post by Hal Murray
What sort of accuracy are you expecting?
I hoped to measure the time drift within (or better than) 1 PPM
To what purpose? The drift will vary with the temperature and you are
not planning to operate these devices at a controlled temperature. If
they MUST keep the correct time then you have two choices as I see it:

1. Use a much better clock; e.g. an Oven Controlled Crystal Oscillator
(OCXO) or a Temperature Compensated Crystal Oscillator (TCXO). OCXO is
the better of the two. The "oven" part is not what you would bake a
pizza in, it's a tiny thing, thermostatically controlled, to maintain
the temperature of the oscillator (especially the crystal) at a constant
value that will always be greater than the ambient temperature.

2. Correct the clock periodically: ntpd, sntp, rdate, or set it by hand.
If the devices do not have a network connection or dialup capability,
you are pretty much limited to setting the clock by hand.

The clocks will be in error no matter what you do! All you can control
is whether they are off by microseconds, milliseconds, seconds, minutes,
etc.

<snip>
Daniel Kabs
2006-02-17 08:49:40 UTC
Permalink
Post by Richard B. Gilbert
Post by Daniel Kabs
Post by Hal Murray
What sort of accuracy are you expecting?
I hoped to measure the time drift within (or better than) 1 PPM
To what purpose? The drift will vary with the temperature and you are
not planning to operate these devices at a controlled temperature. <...>
I'd like to calibrate the system clock, i.e. correct the frequency
offset (at a fixed temperature and with certain accuracy, say 5 PPM).

I also want to measure how the time drift varies with temperature.

To do this, I think, I'll have to find out to what uncertainty I can
measure the time drift of my system clock and how I can improve on this.

I know I can always resort to running NTP on my system configured for a
reliable time server and then reading the drift file. As this process
takes some time (at least one day for reasonable convergence) and I
wanted to compare it with other methods of measurement, I am also trying
to determine the time drift using offset readings.

Cheers
Daniel
David Woolley
2006-02-02 21:25:01 UTC
Permalink
Post by Richard B. Gilbert
If you want to make a one-time correction to your clock frequency, 500
ms/day may be a reasonable objective. As Terje pointed out, the
500ms per week is perfectly reasonable for an air-conditioned machine
room. Even with the temperature swinging between 13C and 21C in my
living room, the error was only about 130ms a day. In summer it is
much less.
David L. Mills
2006-02-02 16:09:09 UTC
Permalink
Daniel,

Beam me sideways, Scotty.

I'm not making sense of your mission. The measurements you quote are
quite reasonable in that they agree to within a fraction of a PPM.
That's what I would expect. However, what's with the 500 ms per day?
There is no such provision or expectation in the specification or
implementation. The maximum frequency tolerance is 500 PPM, which works
out to about 43 seconds per day. Your measurements of about 23 seconds
per day are well within that tolerance.
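The PPM-to-seconds-per-day arithmetic used throughout this thread is just a linear scaling, which a one-line helper makes explicit:

```python
SECONDS_PER_DAY = 86400

def ppm_to_sec_per_day(ppm):
    """1 PPM of frequency error accumulates 86.4 ms of time offset per day."""
    return ppm * 1e-6 * SECONDS_PER_DAY

# 500 PPM (the NTP tolerance) -> 43.2 s/day
# 268.3 PPM (the measured drift) -> about 23.2 s/day
# 100 PPM -> 8.64 s/day
```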

Dave
Post by Daniel Kabs
Hello Professor Mills,
Plan A)
After running NTP daemon for two days, the frequency converges to 268.3
PPM, i.e. 23.2 seconds per day.
Plan B)
Running NTP daemon using "disable ntp", I recorded the offset of the
associated peer periodically for a couple of hours. A least-squares fit
gave a slope of 23.7 seconds per day. (At the same time I recorded the
offset using deprecated ntpdate and got 23.8 seconds per day).
Please see the diagrams on
https://ntp.isc.org/bin/view/Support/HowToCalibrateSystemClockUsingNTPDev
I wonder if this difference shows the maximum precision (i.e. 500
ms/day) I will achieve with these calibration procedures or if I'm doing
something systematically wrong.
Cheers
Daniel
Post by David L. Mills
Daniel,
Plan A
1. Run ntptime -f 0 to remove any leftover kernel bias.
2. Configure for a reliable server over a quiet network link.
3. Remove the frequency file ntp.drift.
4. Start the daemon and wait for at least 15 minutes until the state
shows 4. Record the frequency offset shown with the ntpq rv command.
It should be within 1 PPM of the actual frequency offset. For enhanced
confidence, wait until the first frequency file update after one hour or so.
Plan B
1. Run ntptime -f 0 to remove any leftover kernel bias.
2. Configure for a reliable server over a quiet network link.
3. Start the daemon with disable ntp in the configuration file.
4. Record the offset over a period of hours. Do a least-squares fit;
the regression line slope is the frequency.
Dave
Daniel Kabs
2006-02-03 09:11:32 UTC
Permalink
Hello!
Post by David L. Mills
I'm not making sense of your mission. The measurements you quote are
quite reasonable in that they agree to within a fraction of a PPM. That's
what I would expect.
You are referring to Plan B, I guess:
My tests show that acquiring the time offset using either ntpdate
(remotely) or the offset value (locally) and then least-squares fitting
both data sets gives a slope that agrees within the sub-PPM regime.
Post by David L. Mills
However, what's with the 500 ms per day?
That's the 6 PPM difference I get comparing the results from Plan A
(have ntpd record the drift file) and Plan B (use the time offset and do
a least-squares fit). I wonder if this is the expected precision in
measuring the system clock's frequency error.
Post by David L. Mills
There is
no such provision or expectation in the specification or implementation.
The maximum frequency tolerance is 500 PPM, which works out to about 43
seconds per day. Your measurements about 23 seconds per day are well
within that tolerance.
What's the "frequency tolerance" you are referring to? I reckon that's
the system clock's maximum frequency error that ntpd can compensate.


Cheers
Daniel
David L. Mills
2006-02-03 14:57:49 UTC
Permalink
Daniel,

Yes.

Dave
Post by Daniel Kabs
Hello!
Post by David L. Mills
I'm not making sense of your mission. The measurements you quote are
quite reasonable in that they agree to within a fraction of a PPM. That's
what I would expect.
My tests show that acquiring the time offset using either ntpdate
(remotely) or the offset value (locally) and then least-squares fitting
both data sets gives a slope that agrees within the sub-PPM regime.
Post by David L. Mills
However, what's with the 500 ms per day?
That's the 6 PPM difference I get comparing the results from Plan A
(have ntpd record the drift file) and Plan B (use the time offset and do
a least-squares fit). I wonder if this is the expected precision in
measuring the system clock's frequency error.
Post by David L. Mills
There is no such provision or expectation in the specification or
implementation. The maximum frequency tolerance is 500 PPM, which
works out to about 43 seconds per day. Your measurements of about 23
seconds per day are well within that tolerance.
What's the "frequency tolerance" you are referring to? I reckon that's
the system clock's maximum frequency error that ntpd can compensate.
Cheers
Daniel
Daniel Kabs
2006-03-20 16:29:30 UTC
Permalink
Hello!
Post by Daniel Kabs
Plan B
1. Run ntptime -f 0 to remove any leftover kernel bias.
Here one should also remove the frequency file "ntp.drift". Otherwise
the NTP daemon reads it and configures the kernel clock adjustment
parameters. In that case one ends up measuring only a residual frequency
error, as part of the frequency error has already been compensated.
Post by Daniel Kabs
2. Configure for a reliable server over a quiet network link.
3. Start the daemon with disable ntp in the configuration file.
4. Record the offset over a period of hours. Do a least-squares fit; the
regression line slope is the frequency.
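Step 4 above can be sketched as follows (a minimal illustration, not the tooling used in the thread; the offset samples are synthetic): fit a straight line to (time, offset) pairs, and the dimensionless slope scaled by 1e6 is the frequency error in PPM.

```python
def fit_frequency_ppm(samples):
    """Least-squares slope of offset vs. time, returned in PPM.

    samples: list of (t_seconds, offset_seconds) pairs recorded
    while ntpd runs with "disable ntp" (free-running clock).
    """
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_o = sum(o for _, o in samples) / n
    num = sum((t - mean_t) * (o - mean_o) for t, o in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    return (num / den) * 1e6  # dimensionless slope -> PPM

# Synthetic data: a clock running fast by 268.3 PPM gains
# 0.2683 ms of offset per elapsed second. One sample per minute
# for an hour recovers the frequency exactly.
data = [(t, 268.3e-6 * t) for t in range(0, 3600, 60)]
ppm = fit_frequency_ppm(data)
```

With real offset readings the fit will be noisy, which is why measuring over several hours (or at several temperatures) helps.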
Please see URL below for the full story:

https://ntp.isc.org/bin/view/Support/HowToCalibrateSystemClockUsingNTP

Cheers
Daniel
