RTC accuracy


pawihte

My computer's RTC is quite accurate as computer clocks go, being
off by no more than a few seconds over a period of several days
without syncing, whereas I've seen some computers be off by
minutes in 24 hours even with a healthy battery.

I've heard that the accuracy of a computer clock not only depends
on the oscillator in the RTC hardware, but is also influenced by
interrupt calls (or something like that) in the OS and application
environment. If this is true, do I just happen to have good RTC
hardware, or does my OS and software installation also have
something to do with it?
 
pawihte said:
My computer's RTC is quite accurate as computer clocks go, being
off by no more than a few seconds over a period of several days
without syncing, whereas I've seen some computers be off by
minutes in 24 hours even with a healthy battery.

I've heard that the accuracy of a computer clock not only depends
on the oscillator in the RTC hardware, but is also influenced by
interrupt calls (or something like that) in the OS and application
environment. If this is true, do I just happen to have good RTC
hardware, or does my OS and software installation also have
something to do with it?

In my experience, the RTC varies a bit depending on the quality and
frequency response of the oscillator they choose to use. It also
sometimes varies depending on the quality of power supplied to the
system. I had thought that since it draws from a battery it wouldn't be
an issue, but the only system I've ever had with a timing issue seemed
to be fixed with a PSU swap.
 
MCheu said:
In my experience, the RTC varies a bit depending on the quality
and frequency response of the oscillator they choose to use.
It also sometimes varies depending on the quality of power
supplied to the system. I had thought that since it draws from
a battery it wouldn't be an issue, but the only system I've
ever had with a timing issue seemed to be fixed with a PSU
swap.


It seems the RTC draws power from the PSU even in the presence of
a battery, since it keeps time and BIOS settings without a battery
as long as the computer is powered on. It seems strange, though,
that it should be noticeably affected by voltage variations, even
spikes, in the output of a regulated PSU, especially as it's not
seriously affected by a change of several hundred millivolts in
battery voltage.
 
pawihte said:
My computer's RTC is quite accurate as computer clocks go, being
off by no more than a few seconds over a period of several days
without syncing, whereas I've seen some computers be off by
minutes in 24 hours even with a healthy battery.

I've heard that the accuracy of a computer clock not only depends
on the oscillator in the RTC hardware, but is also influenced by
interrupt calls (or something like that) in the OS and application
environment. If this is true, do I just happen to have good RTC
hardware, or does my OS and software installation also have
something to do with it?

The real-time clock runs constantly, but is used only to preset
the date/time when the operating system boots. At this point,
an operating system task begins counting out time. Too many
interrupts may cause the CPU to skip the time-keeping task now
and then, allowing the operating system's time to fall behind.

ATX power supplies include a 5 V DC standby output that remains energized
as long as the power supply is connected to AC power. The RTC
runs from this power, falling back to the button battery only
if the power supply source is lost.
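
To make the interrupt-counting idea concrete, here is a minimal sketch
(purely illustrative - the 100 Hz tick rate and the miss probability are
made-up numbers, not measurements from any real system) of a software
clock that only advances when a timer tick actually gets serviced:

import random

TICK_HZ = 100          # assumed timer interrupt rate (ticks per second)
MISS_PROB = 0.001      # assumed fraction of ticks the OS fails to service
SIM_SECONDS = 24 * 3600

random.seed(0)
total_ticks = TICK_HZ * SIM_SECONDS
serviced = 0
for _ in range(total_ticks):
    # The software clock only advances when the tick interrupt is serviced.
    if random.random() > MISS_PROB:
        serviced += 1

os_clock = serviced / TICK_HZ       # seconds the OS thinks have elapsed
real_time = total_ticks / TICK_HZ   # seconds that actually elapsed
print(f"software clock behind by {real_time - os_clock:.1f} s after 24 h")

With those made-up numbers the software clock ends up roughly a minute
and a half behind after a day, which is the kind of error the RTC itself
never shows.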
 
MCheu said:
In my experience, the RTC varies a bit depending on the quality and
frequency response of the oscillator they choose to use. It also
sometimes varies depending on the quality of power supplied to the
system. I had thought that since it draws from a battery it wouldn't be
an issue, but the only system I've ever had with a timing issue seemed
to be fixed with a PSU swap.

To see how it is powered, have a look at 25281202.pdf reference schematic
here. It contains a relatively modern motherboard schematic (some bits are
not realistic enough for my tastes - they should have used a real commercial
device for the Super I/O).

http://www.intel.com/design/chipsets/schematics/252812.htm

PDF page 82 has the diode-OR of the CMOS battery and onboard regulator output.
PDF page 85 has the +5VSB to V_3P3_STBY onboard regulator (three terminal).
PDF page 79 shows V_5PO_STBY is the +5VSB coming from the power supply.

The voltage regulator on page 85 should reduce the impact a PSU would
have on timekeeping. But if +5VSB drops low enough, I suppose there
could be a less stable voltage feeding the RTC (which lives inside
the Southbridge). The voltage really shouldn't step radically out
of bounds.

On page 85, the circuit is a dual footprint, allowing two different
devices to be used. If you look at Figure 12 here, you can see
the equation for the adjustable regulator. They're trying to set
the thing to around 3.32 volts. The CMOS battery is probably
closer to ~3.0 or so.

http://www.onsemi.com/pub_link/Collateral/MC33269-D.PDF
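
For what it's worth, the adjustable version sets its output with the
usual two-resistor divider, Vout = Vref x (1 + R2/R1), assuming the
usual 1.25 V reference. The resistor values below are hypothetical -
I haven't copied them from the Intel schematic - they just show how a
target near 3.32 volts falls out of the equation:

# Illustrative only: resistor values are hypothetical, not taken from
# the Intel reference schematic.
V_REF = 1.25        # typical adjust-pin reference voltage, volts
R1 = 121.0          # ohms, divider resistor (hypothetical value)
R2 = 200.0          # ohms, divider resistor (hypothetical value)

v_out = V_REF * (1 + R2 / R1)   # ignores the small adjust-pin current term
print(f"regulator output ~ {v_out:.2f} V")   # prints ~3.32 V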

If the power supply is ON, then one leg of the diode OR gets
3.32 volts. If the power supply is OFF at the back, the CMOS
battery delivers around 3.0 volts. The operating frequency
of the RTC 32768Hz crystal might not be exactly the same
for those two cases. If the power supply is weak, the voltage
could be anywhere between 3.32 and 3.0 volts. If the regulator
output drops below 3.0 volts, then the CMOS battery takes
over. Based on that example, I don't see a mechanism for the
timekeeping to be grossly affected.
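
As a toy illustration of that switchover (the 0.2 V diode drop is an
assumption, not a value from the schematic, and real battery sag is
messier), the rail feeding the RTC simply follows whichever source is
higher after its diode drop:

DIODE_DROP = 0.2   # assumed Schottky drop, volts

def rtc_rail(v_regulator, v_battery=3.0):
    # Diode-OR: the higher source (after its drop) feeds the RTC rail.
    return max(v_regulator - DIODE_DROP, v_battery - DIODE_DROP)

for v_reg in (3.32, 3.10, 2.80, 0.0):   # 0.0 = PSU switched off at the back
    source = "regulator" if v_reg > 3.0 else "battery"
    print(f"regulator={v_reg:4.2f} V -> RTC rail {rtc_rail(v_reg):4.2f} V ({source})")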

*******

Timekeeping while the OS is running is, AFAIK, done by
counting clock tick interrupts. That is traceable to a different
crystal than the one used by the RTC. The RTC is there mainly
for timekeeping when the OS is not running. The characteristics
of the OS-maintained time will be different from the RTC's, both
due to the different crystal used (the one on the clockgen)
and due to the possibility of problems with clock
tick interrupt servicing.

http://www.maxim-ic.com/appnotes.cfm/an_pk/632

"Each PC contains two clocks. Although they are known by several
different names, we will call them the "hardware clock" and the
"software clock." The software clock runs when the PC is turned
on and stops when the PC is turned off. The hardware clock uses
a backup battery and continues to run even when the PC is turned
off."

The CK409 clockgen on the Intel reference schematic is on PDF page 21.
The quartz crystal used is 14.318MHz (4 x color burst 3.579545MHz).
The OS-maintained clock will be traceable to the properties of
that 14.318MHz quartz crystal. The RTC time, on the other hand,
is traceable to its 32768Hz quartz crystal.

I don't think computer designers care too much about refining
either of these. My digital watch, for example, has a trimmer
capacitor inside to adjust the frequency up or down. Computers
don't have that.

Paul
 
The OS and software can affect the clock time. Most programs won't, but some older programs would cause the clock to lose time while they were running.
 
Paul said:
To see how it is powered, have a look at 25281202.pdf reference
schematic here. It contains a relatively modern motherboard schematic
(some bits are not realistic enough for my tastes - they should have
used a real commercial device for the Super I/O).

http://www.intel.com/design/chipsets/schematics/252812.htm

PDF page 82 has the diode-OR of the CMOS battery and onboard regulator output.
PDF page 85 has the +5VSB to V_3P3_STBY onboard regulator (three terminal).
PDF page 79 shows V_5PO_STBY is the +5VSB coming from the power supply.

The diode-OR switchover circuit is very basic and indicates that
they don't consider stringent regulation necessary at that point.
The voltage regulator on page 85 should reduce the impact a PSU
would have on timekeeping. But if +5VSB drops low enough, I suppose
there could be a less stable voltage feeding the RTC (which lives
inside the Southbridge). The voltage really shouldn't step radically
out of bounds.

And of course, if the +5V rail fell below 4.3V, which would
leave less than the 1V dropout margin needed by the onboard
regulator IC, that would probably affect other circuits more
vital than the RTC.

On page 85, the circuit is a dual footprint, allowing two different
devices to be used. If you look at Figure 12 here, you can see
the equation for the adjustable regulator. They're trying to set
the thing to around 3.32 volts. The CMOS battery is probably
closer to ~3.0 or so.

I wonder why they don't just use the fixed 3.3V version of the
MC33269. It would provide a tighter (1%) tolerance on the output
voltage than would be guaranteed by using two 1% resistors
(possible 2% error) plus the tolerance of the bandgap on-chip
reference.

http://www.onsemi.com/pub_link/Collateral/MC33269-D.PDF

If the power supply is ON, then one leg of the diode OR gets
3.32 volts. If the power supply is OFF at the back, the CMOS
battery delivers around 3.0 volts. The operating frequency
of the RTC 32768Hz crystal might not be exactly the same
for those two cases. If the power supply is weak, the voltage
could be anywhere between 3.32 and 3.0 volts. If the regulator
output drops below 3.0 volts, then the CMOS battery takes
over. Based on that example, I don't see a mechanism for the
timekeeping to be grossly affected.

The in-circuit voltage of my CMOS battery is currently about 3.0V
vs. about 3.3V when new. I normally turn off power at the plug
for 8-10 hours a day, sometimes up to 20 hrs, but my clock is
still accurate to within a few seconds in between weekly online
synchronisations.

*******

Timekeeping while the OS is running is, AFAIK, done by
counting clock tick interrupts. That is traceable to a different
crystal than the one used by the RTC. The RTC is there mainly
for timekeeping when the OS is not running. The characteristics
of the OS-maintained time will be different from the RTC's, both
due to the different crystal used (the one on the clockgen)
and due to the possibility of problems with clock
tick interrupt servicing.

This is where my limited knowledge of computer technology lets me
down. If OS timekeeping depends on the base CPU clock, won't
factors like spread-spectrum clocking also cause inaccuracies in
the OS clock?

http://www.maxim-ic.com/appnotes.cfm/an_pk/632

"Each PC contains two clocks. Although they are known by several
different names, we will call them the "hardware clock" and the
"software clock." The software clock runs when the PC is turned
on and stops when the PC is turned off. The hardware clock uses
a backup battery and continues to run even when the PC is turned
off."

The CK409 clockgen on the Intel reference schematic is on PDF page 21.
The quartz crystal used is 14.318MHz (4 x color burst 3.579545MHz).
The OS-maintained clock will be traceable to the properties of
that 14.318MHz quartz crystal. The RTC time, on the other hand,
is traceable to its 32768Hz quartz crystal.

I don't think computer designers care too much about refining
either of these. My digital watch, for example, has a trimmer
capacitor inside to adjust the frequency up or down. Computers
don't have that.

That's probably the root of it all.
 
Bryce said:
The real-time clock runs constantly, but is used only to preset
the date/time when the operating system boots. At this point,
an operating system task begins counting out time. Too many
interrupts may cause the CPU to skip the time-keeping task now
and then, allowing the operating system's time to fall behind.

You can test this experimentally by rebooting. See if the
time error corrects itself from the RTC.

Don't know anything about current technology, but back in the old
days, I had one system where a few microamps of capacitor leakage
dragged the RTC voltage down to the point that time was unstable.
Was easy to find with "freeze spray"...but, today, a can of freon
costs more than a new motherboard.
 
pawihte said:
My computer's RTC is quite accurate as computer clocks go, being
off by no more than a few seconds over a period of several days
without syncing, whereas I've seen some computers be off by
minutes in 24 hours even with a healthy battery.

I've heard that the accuracy of a computer clock not only depends
on the oscillator in the RTC hardware, but is also influenced by
interrupt calls (or something like that) in the OS and application
environment. If this is true, do I just happen to have good RTC
hardware, or does my OS and software installation also have
something to do with it?

Saying you have an RTC component in your system says nothing about
which device it actually is or how its circuit is designed (beyond
the chip and what crystal is used).

http://en.wikipedia.org/wiki/Real-time_clock

http://www.intel.com/Assets/PDF/appnote/292276.pdf
Sections 3.3 and 3.4. They don't give actual accuracy measurements
because so much depends on the circuit design and environmental
conditions. A lot depends on the quality of the crystal used and its
variation over temperature and its drift (seconds of drift per month).

http://www.maxim-ic.com/appnotes.cfm/an_pk/504
"An error of 23ppm is about 1 minute per month."

http://www.maxim-ic.com/appnotes.cfm/an_pk/632
"PC clocks are not particularly good at keeping accurate time. Simple
clocks like a wristwatch and most of the clocks in your home keep better
time than a standard PC clock."

http://www.maxim-ic.com/view_press_release.cfm/release_id/1101
Whether it utilizes a much more accurate (i.e., lower ppm error rate)
clock depends on how old your motherboard is and on its design. The
one described here (published in 2005) has a very low error rate that
results in a discrepancy of 2 minutes per year.

I use an NTP client to sync my clock once per hour.
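
If you just want to see how far off the clock is without installing
anything, a bare-bones SNTP query only takes a few lines. Sketch only:
pool.ntp.org is just an example server, there is no retry or error
handling, and network delay and the fractional-second field are
ignored, so treat the result as rough:

import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"    # example server
NTP_DELTA = 2208988800         # seconds between 1900-01-01 and 1970-01-01

packet = b"\x1b" + 47 * b"\0"  # 48-byte SNTP request: version 3, client mode

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(5)
    s.sendto(packet, (NTP_SERVER, 123))
    data, _ = s.recvfrom(48)

# Transmit timestamp (integer seconds) is at bytes 40-43 of the reply.
server_secs = struct.unpack("!I", data[40:44])[0] - NTP_DELTA
print(f"local clock is off by roughly {server_secs - time.time():+.1f} s")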
 
VanguardLH said:
http://www.maxim-ic.com/view_press_release.cfm/release_id/1101
Whether it utilizes a much more accurate (i.e., lower ppm error rate)
clock depends on how old your motherboard is and on its design. The
one described here (published in 2005) has a very low error rate that
results in a discrepancy of 2 minutes per year.

I use an NTP client to sync my clock once per hour.

As you experience more and more birthdays, precise time becomes less
compelling. I'm satisfied knowing what month it is!
 
Bryce said:
As you experience more and more birthdays, precise time becomes less
compelling. I'm satisfied knowing what month it is!

When I was much younger and working 12-hour days on a rotating shift, I
was always losing track of what day of the week it was. Now I'm 30 years
older and nowhere near as busy, but I'm again losing track of what day
of the week it is. I'm still doing okay on the months, though.
 
VanguardLH said:
Saying you have an RTC component in your system says nothing about
which device it actually is or how its circuit is designed (beyond
the chip and what crystal is used).

http://en.wikipedia.org/wiki/Real-time_clock

http://www.intel.com/Assets/PDF/appnote/292276.pdf
Sections 3.3 and 3.4. They don't give actual accuracy measurements
because so much depends on the circuit design and environmental
conditions. A lot depends on the quality of the crystal used and its
variation over temperature and its drift (seconds of drift per month).

http://www.maxim-ic.com/appnotes.cfm/an_pk/504
"An error of 23ppm is about 1 minute per month."

http://www.maxim-ic.com/appnotes.cfm/an_pk/632
"PC clocks are not particularly good at keeping accurate time. Simple
clocks like a wristwatch and most of the clocks in your home keep better
time than a standard PC clock."

http://www.maxim-ic.com/view_press_release.cfm/release_id/1101
Whether it utilizes a much more accurate (i.e., lower ppm error rate)
clock depends on how old your motherboard is and on its design. The
one described here (published in 2005) has a very low error rate that
results in a discrepancy of 2 minutes per year.

I'm not sure I get what you're trying to say here. The point of
my post was that, as I seem to have an accurate clock (as computer
clocks go), I wondered whether this is entirely due to the hardware
or partly to my software installation and usage.
I use an NTP client to sync my clock once per hour.

For the level of accuracy I need, the once-a-week auto-sync
provided by WinXP is good enough for me with my current
motherboard. In fact, I think I'll turn off auto-sync and see how
far off it goes.
 
pawihte said:
For the level of accuracy I need, the once-a-week auto-sync
provided by WinXP is good enough for me with my current
motherboard. In fact, I think I'll turn off auto-sync and see how
far off it goes.

This thread has some fun and games with clocks, and notes a couple
applications you might try out. This thread is associated with the
Nforce2 and its problems maintaining anything resembling a correct
software clock.

http://www.nforcershq.com/forum/real-time-clock-not-so-real-t19631-190.html

The ClockMon one has moved here.

http://www.softdevlabs.com/ClockMon/ClockMon.html

For the small pieces of software, I usually throw them into the grinder here.
Seems clean. MD5SUM = 6a05b340f7b9431f68dfd5496f42f079 for ClockMon.2.3.0.291.zip
(MD5SUM is so people can compare the download at a later date.)

http://www.virustotal.com/analisis/...2f9a33ba5eb339375724ed7888b2928177-1244019029

Clockmon uses "giveio.sys" for hardware access. That is a way of gaining
access to hardware. When you're finished with Clockmon, you can check
the hidden devices in Device Manager, and it might be in there. On
my other machine, I collected quite a pile of cruft in the hidden
devices, and I don't understand the security implications of having
too much stuff like that.

Have fun,
Paul
 
pawihte said:
I'm not sure I get what you're trying to say here. The point of
my post was that, as I seem to have an accurate clock (as computer
clocks go), I wondered whether this is entirely due to the hardware
or partly to my software installation and usage.

The point made by another poster is that the RTC chip is only used to
keep track of time for the hardware and is read when you boot up. It is
NOT used by the operating system after that. If you severely stress the
OS, and/or depending on the accuracy of the software clock therein, you
will lose more time while the host is running than when it is not. The
RTC chip is accurate; the OS is not, hence the need for a periodic time
sync.

If you leave your computer powered up 24x7 then, yeah, the RTC is
still accurate, but it's not getting used.
For the level of accuracy I need, the once-a-week auto-sync
provided by WinXP is good enough for me with my current
motherboard. In fact, I think I'll turn off auto-sync and see how
far off it goes.

Once per week might be okay if your clock doesn't get off by more than
a minute in that time. SSL connections rely on the two ends of a
connection having nearly the same time. This is used to time out the
handshaking used to establish an SSL session. If your time is too far
off from the other end's clock, you won't be able to make SSL
connections to it.

I don't worry much about timestamps on files being super accurate. I do
want to ensure my time is accurate so that I don't run into problems
connecting to SSL-secured sites. I don't recall how different the times
can be, but a timestamp is encoded into the hash used in the SSL
handshaking sequence. SSL connections aren't only for web sites. You
might be using SSL for your e-mail connections. If the times are too far
apart between you and the mail server, your e-mail client might start
reporting vague errors, like "Your outgoing (SMTP) server does not
support SSL-secured connections". That's because the handshaking failed
due to too much difference in timestamps, not because the mail server
does not support SSL connections.
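
A quick way to spot that kind of skew is to compare your clock against
the Date header a web server returns. Rough sketch only: the header has
one-second resolution, network latency is ignored, and www.example.com
is just a placeholder for any reachable site:

import urllib.request
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

URL = "http://www.example.com/"   # placeholder host

with urllib.request.urlopen(URL, timeout=10) as resp:
    server_time = parsedate_to_datetime(resp.headers["Date"])

skew = (datetime.now(timezone.utc) - server_time).total_seconds()
print(f"local clock appears to be about {skew:+.0f} s from the server's")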
 
VanguardLH said:
When I was much younger and working 12-hour days on a rotating
shift, I was always losing track of what day of the week it was. Now
I'm 30 years older and nowhere near as busy, but I'm again losing
track of what day of the week it is. I'm still doing okay on the
months, though.

I have a simpler method of telling time. If the radio starts
spouting BBC news at 5 a.m. it's a weekday. Otherwise it is
Saturday or Sunday. On Saturday it doesn't spout anything until 6
am.

Regardless, I get breakfast and go back to bed until noon. Except
on Monday and Tuesday, when I have an alarm set for 8 am, to go and
play bridge at 9.
 
pawihte said:
My computer's RTC is quite accurate as computer clocks go, being off by no
more than a few seconds over a period of several days without syncing,
whereas I've seen some computers be off by minutes in 24 hours even
with a healthy battery.

I've heard that the accuracy of a computer clock not only depends on the
oscillator in the RTC hardware, but is also influenced by interrupt calls
(or something like that) in the OS and application environment. If this is
true, do I just happen to have good RTC hardware, or does my OS and
software installation also have something to do with it?



I had one PC that would maintain good accuracy when left running on a BIOS
screen, or booted from a Linux CD, or for that matter if left off.
Timekeeping in Windows was just as good until one particular application
was started, when it would immediately start losing 2-3 hours per day.

As it was a till monitoring system it needed to be running most of the day.
It wasn't convenient to reinstall the OS and the software, so I just set it to
get its time from another PC on the LAN that wasn't lumbered with the
problem software.
Eventually a new PC was installed and the problem went away.

So all that goes to confirm software can interfere with the RTC.

Best
Paul.
 
PeeCee wrote (on Fri, 12 Jun 2009 16:36:18 +1200):
I had one PC that would maintain good accuracy when left running on a BIOS
screen, or booted from a Linux CD, or for that matter if left off.
Timekeeping in Windows was just as good until one particular application
was started, when it would immediately start losing 2-3 hours per day.

As it was a till monitoring system it needed to be running most of the day.

Which means the RTC was *not* being used to maintain the clock. You are
using the OS clock at that point, NOT the real-time chip on the mobo
that is used to keep track of time while the host is powered down.
It wasn't convenient to reinstall the OS and the software, so I just set it to
get its time from another PC on the LAN that wasn't lumbered with the
problem software.
Eventually a new PC was installed and the problem went away.

So all that goes to confirm software can interfere with the RTC.

No, you just proved the point that intensive applications running under
the OS can affect the OS clock. That says nothing about the time
being recorded in the RTC. If you had a host whose system clock was way
off and powered it down and back up, it would sync back to the time that
is being separately tracked by the RTC.

The "system clock" in Windows is Windows maintaining that counter. It
doesn't poll the RTC to find out what it says is the current time. The
RTC is used to generate a Real-Time Clock Interrupt that can update the
OS counter. Programs can issue stop clock interrupts plus the
instructions for the applications are being processed by the same CPU
receiving the interrupts (which get queued as to when they are handled).
RTC interrupts to the CPU can be missed (because they were ignored when
stopped). The OS' "system time" is based on the clock interrupts from
the CPU that it sees, not the ones that are missed, stopped, nor from
the RTC itself. That the RTC sends an interrupt that the OS can use is
NOT the same as what the RTC tracks for time. An interrupt simply tells
the CPU to tick away another increment of time. It does not report the
recorded time.

Think of it this way: once per minute someone nudges you in the side to
remind you that one minute has elapsed. They don't report the time to
you. They just give you a nudge at 1-minute intervals. They get
distracted, need a bathroom break, need a 2nd cup of coffee to wake up,
get a phone call while trying to recover from their computer crashing
while their boss is standing over them demanding a status report while a
hot cup of coffee spilled on their lap, or they get called to a meeting
so your nudging gets put on hold, so they don't get around to nudging
you for awhile. Not until you get another nudge do you increment your
counter again. The next day you arrive and your friend tells you what
time is on his watch and the 1-minute but possibly interrupted nudging
begins again. You start with a known time but thereafter all you have
to track the passage of time is to know when those 1-minute nudges
happen but which may be longer than a minute apart at times.

http://en.wikipedia.org/wiki/Intel_8254

The time gets updated when the system powers up and uses the RTC.
Thereafter, clock ticks (interrupts) are used to monitor the passage of
time. The RTC still knows what time it is tracking. Everyone else
thereafter (i.e., the OS) has to use a counter based on the clock ticks.
Also, it is possible to disable interrupts from devices in the system,
including the clock interrupt from the timer; see

http://www.arl.wustl.edu/~lockwood/class/cs306/books/artofasm/Chapter_17/CH17-3.html.

http://www.intel.com/technology/itj...Thermal_Management/p04_experiment_results.htm
"The OS uses a periodic clock interrupt to keep track of time, trigger
timer objects, and schedule application threads. While the default
interrupt rate is set by the OS, applications can increase the interrupt
rate to any desired frequency (as low as 1ms)."

So it is by the nudges that the OS tracks time, not by getting the
reported time tracked by the RTC itself. With 1-minute nudges, how much
time has elapsed after getting 10 nudges? Maybe 10 minutes. Could be
longer if that nudging was stopped or delayed for awhile.
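
As an aside on that last quoted bit: on Windows, the usual knob an
application turns to raise the tick rate is the multimedia timer API.
A hedged sketch (Windows-only; whether and how much it changes the
global interrupt rate depends on the Windows version and on what else
is running):

import ctypes
import sys

if sys.platform == "win32":
    winmm = ctypes.WinDLL("winmm")
    winmm.timeBeginPeriod(1)    # request 1 ms timer resolution
    try:
        pass  # ...timing-sensitive work would go here...
    finally:
        winmm.timeEndPeriod(1)  # always pair with the matching end call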
 