AMD's newest chip

  • Thread starter: Flasherly
And this is old news. I've had an A10-7850 APU since the spring.

Ah. I read a recent article, switched to a review, and forgot I'd
switched. Sorry about that.
 
Ah. I read a recent article, switched to a review, and forgot I'd
switched. Sorry about that.

No apology necessary :-) They've added a couple of new APUs to the
range recently.
 
How many watts does it burn up with 12 cores going hammer and tongs?

28nm die technology... not much - without charts, around 20 W in an
idle state, 80 W loaded, if that.
 
How many watts does it burn up with 12 cores going hammer and tongs?

That's one of the interesting things about the new 12-"core" APU: it has
two modes, a 65W mode and a 45W mode. In the higher-consumption mode,
it's full balls-out. In the lower-consumption mode, it reduces the
performance, for sure, but it relies more on turbo to make up for the
performance differential, and apparently it's only 6-7% lower
performance than in 65W mode. But you get a 30% reduction in power
consumption for your troubles too.

Yousuf Khan
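Working through the quoted figures (taking the drop as roughly 6.5% and the power saving as 30% - back-of-the-envelope only), the 45W mode comes out about a third ahead in performance per watt:

```python
# Rough perf-per-watt comparison of the two TDP modes, using the
# figures quoted above (~6-7% performance drop, ~30% power cut).
perf_65w, power_65w = 1.00, 1.00        # baseline: 65W mode
perf_45w = 1.00 - 0.065                 # ~6-7% lower performance
power_45w = 1.00 - 0.30                 # ~30% lower power draw

efficiency_gain = (perf_45w / power_45w) / (perf_65w / power_65w) - 1
print(f"perf/watt gain in 45W mode: {efficiency_gain:.0%}")  # ~34%
```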
 
but it relies more on turbo to make up for the
performance differential, and apparently it's only 6-7% lower
performance than in 65W mode. But you get a 30% reduction in power
consumption for your troubles too.


Geeky. America's server racks are the most intensive consumer of its
nuclear/fossil/hydro fuel sources. Not that 8 graphics cores are
variously needed to shuffle bandwidths, though billions of dollars and
30-percent returns are astronomical figures in cumulative amounts.
 
Geeky. America's server racks are the most intensive consumer of its
nuclear/fossil/hydro fuel sources. Not that 8 graphics cores are
variously needed to shuffle bandwidths, though billions of dollars and
30-percent returns are astronomical figures in cumulative amounts.

I wonder if this AMD trickery would be applicable to HPC clusters.
There you typically have all cores going 100% on some parallel crunching.
On a desktop, you tend to get different loads on each core, so you can
do the turbo/overdrive on some and low-power on others.
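That trade-off can be sketched with a toy shared-TDP-budget model (the wattage numbers below are illustrative placeholders, not AMD's actual specs): idle cores free up budget that busy cores can spend on turbo, which is why uneven desktop loads benefit more than an HPC all-cores-pegged workload.

```python
# Toy model of a shared package power budget (illustrative numbers only).
# Each core either idles (cheap) or runs busy; busy cores may turbo if
# the remaining budget allows it.
TDP_BUDGET = 45.0                 # watts for the whole package
CORES = 4
TURBO_W, IDLE_W = 16.0, 2.0       # assumed per-core draw at turbo / idle

def max_turbo_cores(busy_cores):
    """How many busy cores fit at turbo, given the rest sit idle."""
    idle = CORES - busy_cores
    spare = TDP_BUDGET - idle * IDLE_W
    return min(busy_cores, int(spare // TURBO_W))

# HPC-style: all cores busy -> only some can turbo within the budget.
# Desktop-style: one or two busy cores -> all of them can turbo.
for busy in (4, 2, 1):
    print(busy, "busy ->", max_turbo_cores(busy), "can turbo")
```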
 
I wonder if this AMD trickery would be applicable to HPC clusters.
There you typically have all cores going 100% on some parallel crunching.
On a desktop, you tend to get different loads on each core, so you can
do the turbo/overdrive on some and low-power on others.

Low power is still low power regardless of intent, once taken a step
lower with die-process advancements: it's getting greener and greener
every day, a "green world" these days, dunchano...

I've used stepping instruction sets in a desktop environment, dropping
the multiplier via system/load-detection software on early (sub-2GHz)
AMD Athlons. (Then came my first Intel - well, second; I'd rather not
say what I had before a NEC V30 - which was an unstoppably
overclockable Celeron, Revision D. One hell of a one-trick pony.)

Extensively, theoretically, that sounds precisely where it should be,
exactly as you say: obviously, an intent of efficiency through shared
resources, with code-level load-to-cores (sic) distribution - well,
that's more about what it is, so far. Really. Like running smack into
a brick wall. All or a large portion of the advancements being touted
are from the chipmakers, Intel and AMD. (Predictive branching routines
are notoriously difficult to write, so they say.)
 