Flasherly said:
Device Manager shows it as ACPI under Computer.
Further down, two instances are shown under Processors.
Yet Task Manager shows two graphs, as well as allowing Affinity for
Core #1 / #2 under the Processes tab.
Everest, under CPU -- CPU Utilization, shows Multi CPU Units #1/#2,
though each is qualified as an HTT type (Hyper-Threading, I
suspect).
Similarly, SpeedFan shows two CPU graphs.
I can start two instances of a program, MP3Gain, divide up
some MP3s between them, and both appear to run concurrently faster than
if all the MP3s are given to one process; heat generated by the CPU with
two instances is 121F, and with one instance 116F.
"Intel Processor Identification Utility - Windows Version"
http://downloadcenter.intel.com/Detail_Desc.aspx?ProductID=1881&DwnldID=7838&lang=eng&iid=dc_rss
File name: pidenu31.msi <--- English version, or select another language
There is a screenshot here of the utility while it is running:
http://i1-win.softpedia-static.com/screenshots/Intel-Processor-Identification-Utility_2.png
That one shows "Hyper-Threading No", whereas your processor is
probably "Hyper-Threading Yes". Hyper-Threading means there is one
core, but with a physical and a virtual presence. The processor
contains enough registers that, if one set of registers blocks
because of an outstanding memory access, the processor switches to a
second set of registers. That second set counts as the "virtual"
core. Such a scheme (Hyper-Threading) yields about 10% more
performance from a single core under best-case conditions. If the
two threads thrash on memory access, performance actually drops.
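
As a quick way to see this from software, here is a minimal Python
sketch (assuming the third-party psutil package is installed) that
compares the logical processor count the OS reports with the physical
core count; on a hyperthreaded single-core chip it would typically
print 2 logical and 1 physical.

# Compare the logical CPU count (what the OS schedules against) with the
# physical core count. Assumes "pip install psutil" has been done.
import os
import psutil

logical = os.cpu_count()                    # logical processors seen by the OS
physical = psutil.cpu_count(logical=False)  # actual cores on the package

print("Logical processors:", logical)
print("Physical cores    :", physical)
if logical and physical and logical > physical:
    print("Hyper-Threading (or some form of SMT) appears to be enabled.")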
The OS thinks there are two cores, but there really aren't two cores
present. The OS can schedule tasks to each side, but since the
sides fight for resources, one side (and its set of registers)
will block while the other side runs. It's only when resource
contention is avoided that Hyper-Threading is a win from a
performance perspective. The physical and virtual core can't
both run at the same time. Making them look like two cores is
just a way to get the OS to schedule tasks for each side.
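
(Incidentally, if you want to do from a program what Task Manager's
"Set Affinity" dialog does by hand, a sketch along these lines works.
It again assumes the third-party psutil package, whose affinity calls
are supported on Windows and Linux.)

# Pin the current process to logical CPU #1 (index 0), the programmatic
# equivalent of Task Manager's "Set Affinity". Assumes psutil is installed.
import psutil

me = psutil.Process()                          # the current process
print("Affinity before:", me.cpu_affinity())   # e.g. [0, 1] on an HT chip
me.cpu_affinity([0])                           # restrict to the first logical CPU
print("Affinity after: ", me.cpu_affinity())

Roughly, the arrangement inside the chip looks like this: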
Program Counter #1      Program Counter #2
Other registers...      Other registers...
        |                       |
        +-----------+-----------+
                    |
               rest of the
             processor core
                    |
          To memory controller
           and/or Northbridge
So only one side or the other can be running at any one time.
But if the processor would normally be stalled waiting for
a memory access, the other set of registers may be able to
keep working using information already in the L1/L2 caches (from a
previous fetch, perhaps). Since the cache has good hit rates,
fewer computing opportunities are lost, and the scheme squeezes
about 10% more performance from the processor as a result.
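
To repeat your two-instances-of-MP3Gain experiment in miniature, a
sketch like the one below times the same total amount of work done by
one worker process versus two. It uses only the Python standard
library; the busywork() function is just a made-up CPU-bound stand-in
for processing a batch of MP3s. On a Hyper-Threading chip the
two-worker run is usually somewhat faster, but nowhere near twice as
fast.

# Time one worker vs. two workers doing the same total amount of busywork,
# roughly mimicking the "split the MP3s between two MP3Gain instances" test.
import time
from multiprocessing import Pool

def busywork(n):
    # stand-in for processing one batch of MP3s: a pure CPU-bound loop
    total = 0
    for i in range(n):
        total += i * i
    return total

WORK = 5_000_000    # total loop iterations, split evenly among the workers

if __name__ == "__main__":
    for workers in (1, 2):
        start = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(busywork, [WORK // workers] * workers)
        elapsed = time.perf_counter() - start
        print(workers, "worker(s):", round(elapsed, 2), "seconds")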
If you want to learn more (from an article better than my simple
summary), try this one. It gives details on the real deal.
http://www.xbitlabs.com/articles/cpu/display/replay.html
Paul