What kind of proof do you want? Do you know what the Arrhenius
equation is? That ought to be proof enough. Or how about the diffusion
equation? When things get hot, especially when they get *near* their
critical temperature, they self-destruct, albeit more slowly than when
they are at the critical temperature. The approach to this destructive
region is governed by the Arrhenius equation, as any physical chemist
would know.
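As a rough numeric illustration of Arrhenius behavior (the activation energy and prefactor below are hypothetical stand-ins, not measured values for any real CPU), here is a minimal sketch comparing degradation rates at 58C and 68C:

```python
import math

# Arrhenius rate law: k = A * exp(-Ea / (R * T))
# Ea and A below are illustrative assumptions, not CPU measurements.
R = 8.314        # gas constant, J/(mol*K)
Ea = 80_000.0    # assumed activation energy, J/mol
A = 1.0          # arbitrary prefactor (cancels in the ratio)

def rate(temp_c: float) -> float:
    """Relative degradation rate at a temperature given in Celsius."""
    return A * math.exp(-Ea / (R * (temp_c + 273.15)))

# How much faster does degradation proceed at 68C than at 58C?
ratio = rate(68.0) / rate(58.0)
```

With these assumed constants the ratio comes out roughly 2.3, i.e. a 10C rise more than doubles the rate; a different activation energy would give a different ratio.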
That generalized concept is no proof of what the threshold
temp is, the temp that would reduce service life below an
acceptable level.
You can torture your microelectronics all you want, but I do not
sanction it.
Nor do you have a real reference for what it'd take to
"torture" them, if you still discount even what the
manufacturer specs as acceptable for ALL of them (stability
even with the very worst yields they sell), not even
considering the further elevation of temp it would take to
cause damage.
Let's say that an egg begins to denature at X degrees. That's its
critical temperature. Below that temperature it remains uncooked
initially. However, if the temp is close to the critical temperature,
the egg will begin to cook albeit more slowly. IOW, if you are in a
hurry to fry your CPU, then exceed the critical temperature. If you
are not in a hurry but do want to fry your CPU, then operate it under
but close to the critical temperature.
I suggest that you gain more experience with CPUs, so you
have a better frame of reference. Your idea about "too hot,
long term damage" is certainly correct, but your assumption
about that particular temp is a random unfounded guess.
How much is close? You can decide that for yourself. But for me, I
want my CPU never to reach 58C even though its critical temperature
is 68C. That gives me the margin of error I want. And since it is not
rocket science to keep a CPU at a lower temp, then I see no reason why
people should not implement the available solutions.
True, it's not hard to get one lower. It's a noble goal,
yet completely unnecessary if two conditions are met:
1) It's cool enough to achieve acceptable lifespan. Who
needs to keep their CPU running 25 years?
2) It's cool enough to remain stable in the worst
(realistically) possible ambient condition the specific
system may encounter... it requires testing, just as any
presumption of stability does no matter what the CPU (or any
other) temps, voltages, etc, are.
You sure do love your strawmen. Do you have names for them?
It's sad when someone never tries anything, doesn't have the
experience, then guesses, then argues to defend a guess. Get
some evidence, do it, then come back and report on it. Right
now, at this very moment systems have been running for
several years with their CPUs wearing cooling solutions that
do not keep them under that temp at full load.
No one claims there is a contest involved. There is, however, a
realization that the cooler the CPU stays, the less likely it is to
get damaged. The benchmark I set is the one which can be achieved on a
practical basis.
The key issue you keep ignoring is the actual temp necessary
to cause degradation enough to be significant.
My son has the 3.2 GHz Prescott CPU with a Zalman 7700. He rips DVDs
with DVD Shrink, which I found is more of a torture test than that
mathematical algorithm you have touted - Stress 95.
Yes you keep citing ONE WHOLE SYSTEM. What evidence is it?
None, because you haven't even run it at that temp for
years. You have no logical reason to draw the conclusion
you have.
Perhaps you meant Prime95's Torture Test, not "Stress 95".
With the variability in CPU designs, some can be stressed
more by specific tests, including one Intel doesn't even
release to end-users, though the community at large does
generally consider Prime95 to be more of a universal test,
as much or more load than MPEG de/recompression as that app
does, but MOST IMPORTANTLY, Prime95 checks the results. A
stress test is worthless if you can't determine whether
there are any errors. DVD/MPEG/etc play back fine with a
few random errors. That is not a stress test at all, unless
the errors were so severe and frequent that it crashes the
whole app or OS... which when doing encoding, is less than
1% of the time.
But that's another
matter. Suffice it to say that Shrink will peg the CPU at least as
much as Stress 95. Under long periods of such stress, his CPU runs
below 58C.
Suffice to say that it depends on the app, and that DVD
Shrink is worthless for anything related to stress testing
except to see how much that particular app, running
particular jobs, on a particular CPU, will raise the temp.
If that use is the target for the box, it could be a way to
evaluate sufficient cooling such that it stays below a
maximal value, but it cannot do the more important thing:
check for errors at the _lower_ temp at which errors
typically begin to occur.
My pokey 2.4 GHz Celeron D with the retail box heatsink runs well
below 58C when I run Shrink or Stress 95. It runs at most around 54C.
And? What does that have to do with the price of tea in
China? Your CPU is "X" number of degrees, therefore you
disallow anyone else to have a hotter-running one, even if
it's at a higher frequency or has a core
design/bus-speed/cache that allows more MIPS, so it'll
naturally generate more heat?
So I believe shooting for 58C - 10C cooler than critical at 68C - is
achievable in low heat (Celeron) and high heat (P4) environments. So
why not advise people to play it safe, especially when it is possible
to do so without getting exotic?
It's not unreasonable to suggest 58C as a target,
particularly when a system has (what seems to be)
temp-related instability. That's a bit different than
presuming something about a system based on guesses AND lack
of evidence that it's unstable, or that the temp will result
in any significant lifespan depreciation.
Remember, if the CPU were to work for 28 years at one temp,
it may not matter if the lifespan is reduced to 15 years at
another. Your personal interpretation of the data is random
because it's not based on any experience with CPUs operating
above your "ideal" temp.
More strawmen. The prize is CPU longevity.
I encourage you to get some hands-on experience, a few
orders of magnitude more than a handful of systems.
"Strawmen" is a laughable concept coming from someone who at
most has guessed about what they read in a spec sheet.
I would not waste the time. When my son discovered that his P4 was
running 70C, he shut down and we got the Zalman. He had fried three
other computers with Shrink. Yep - 3 machines bit the dust because of
overheating the CPU. And they were OEM machines too. One was a Dell
and two were Acers. ZAP! Three machines all in a row got fried because
of overheating.
"3 machines bit the dust". You have not attributed cause.
DVD Shrink hasn't even existed for the number of years most
systems have run at 68C.
To have 3 systems fail (in a row???) is no evidence of
having some kind of grand insight about cooling relative to
someone who has been doing video encoding for years. Sure
you can kill a system by letting it overheat, I never
claimed otherwise. There is a difference between starving an
entire system of airflow and having only the CPU running a
little hot at full load. CPUs are designed to tolerate
heat. Your idea about 68C being so much worse than 58C is
about as arbitrary as suggesting 38C instead of 48C, or 48C
instead of 58C.
So you strive to keep them cool too. We both put in extra fans to make
sure things are cool inside.
Yes, as a general concept that's true. It's not all one big
blob of wax with the same melting point though, individual
components have different thermal margins.
Then why did it fail in three instances?
Maybe they were above 68C. Maybe the CPU wasn't even a
failure point, or only a secondary failure. Maybe you
should supply more information, as simply claiming "DVD
Shrink" and "fail" is evidence of little more than that one
(or both) of you can't build a box that stays running at full
load, long-term. I can, and do.
How much sooner will an automobile engine fail if it is operated just
under the red line all the time?
How much time will it take for you to understand that you,
personally, are guessing at what this supposed "red-line" on
a CPU is?
Do you note your car is running at 2200 RPM and immediately
let off the gas so it drops, merely because you've had some
car failures in past years? I hope you stay in the
right-hand lane if that's the case.
The Arrhenius equation is highly non-linear. That's why cooking speed
doubles for small increases in temperature, ...
Now if only it was an egg instead of a CPU...
One popular rule of thumb is 1/2 the life for every 10C.
... especially near the
critical temperature.
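The "1/2 life for 10C" rule of thumb mentioned above can be sketched directly (the 20-year baseline lifespan is an assumed figure for illustration only):

```python
# Rule of thumb: lifespan halves for every 10C rise in operating temp.
# The 20-year baseline below is an assumption, not measured data.

def relative_lifespan(base_life_years: float, base_temp_c: float,
                      temp_c: float) -> float:
    """Estimated lifespan under the 'half-life per 10C' rule of thumb."""
    return base_life_years * 0.5 ** ((temp_c - base_temp_c) / 10.0)

# If a CPU would last 20 years at 58C, the same rule gives 10 at 68C.
life_58 = relative_lifespan(20.0, 58.0, 58.0)  # 20.0 years
life_68 = relative_lifespan(20.0, 58.0, 68.0)  # 10.0 years
```

Note the rule only gives relative lifespan; the absolute baseline is the part under dispute here.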
Here's where you start going off on a tangent.
You are assuming 68C is a critical temp for this
heat-induced-damage to the extent that it will reduce CPU
life below the useful life of the system, or below the life
of the other components.
And that's why it is prudent to stay as far away
from the critical temperature as possible. Fortunately there are
straightforward solutions to achieve that.
You really do need more experience. Only then will you
fully appreciate how much you assume; a few moments of
reading can't be taken in proper context if you haven't the
background to interpret it.
For example, note in spec sheets how the Tc (case temp) goes
UP with higher wattage. What does it mean? Does it mean
the same exact CPU core design can run hotter before being
damaged because it's running at a higher frequency? OF
COURSE NOT! Those are thermal design targets, what they
expect heatsink and system cooling to "target", not avoid.
Further, the spec sheets state (direct quote):
"These temperature specifications are meant to help ensure
proper operation of the processor."
Do you not yet have enough experience to know that the
threshold for stability is lower than the threshold for
thermal damage? Have you not read their spec sheets for
years, long enough to interpret their attempts to cover
absolute worst cases with worst cores *just in case* they
need to get good yields that they wouldn't otherwise?
Perhaps this is the case, since you can't recognize that the
temps are a target, to keep the CPU stable per Intel's
(other) ratings. Further, this is expected with their
retail heatsink.
Let's put it another way... If you feel the retail heatsink
is inadequate to keep the CPU it was bundled with cool
enough, the product is defective by your account. No point
in your arguments here; notify Intel and push for a recall
of half their current CPUs!
One final point: no Intel CPU made in the last several years
should've been damaged by gradual overheating (as would
happen from any somewhat-marginal cooling), as they've had
thermal shutdown and/or throttling for several years.
Notebooks with P4s in them are throttling quite often
because of the heat; should we completely ignore those too?
In summary, it's not a BAD thing to have a cool CPU, nor to
plan for it within reason. That's quite different from
jumping the gun about a perceived problem based on a guess
and misinterpretation of information, then arguing backwards
to defend it.
68C is not the "ideal". It's not alarmingly hot for a
Prescott either. Suppose the CPU dies unexpectedly early,
say 6 years from now. Look around the web- 6 year old CPUs
sell for less than the cost of any proposed heatsinks good
enough to significantly lower the temps. I stand by my
original comment, that a higher-end heatsink is of most
benefit for noise-reduction (or overclocking, a topic I
dismiss as it hasn't been mentioned by the OP).