mobile_step_chip=> cheap results ?


Guest

I've been surprised to notice, using VC.NET on an Intel Pentium 4 Mobile (laptop), that a scientific calculation running in a separate thread never uses more than 50% of the CPU, whatever thread priority I choose...?
Any ideas?
Thanks
Xav
 
I've been surprised to notice, using VC.NET on an Intel Pentium 4 Mobile (laptop), that a scientific calculation running in a separate thread never uses more than 50% of the CPU, whatever thread priority I choose...?
Any ideas?

Is it a hyperthreaded CPU? If so, a single thread will never use more than
50%.

-cd
 
That's news to me! (I should read more about these things - hardware, I mean.)
Is there any way to bypass this limitation - without the considerable effort of writing parallel-style code, synchronizing threads with mutexes, semaphores, or pipes as in C on Unix - when it leaves half the CPU unused unnecessarily?
Is hyperthreading a way for chip manufacturers to avoid admitting that they can no longer increase CPU speed (as they did at a rate of roughly 1.5x per year in the past)?
I mean, did they put two chips in one (as the 50% suggests), which the user (programmer) is supposed to coordinate himself?
Thanks
Xav
 
Thanks, but...
Is there any way to bypass it without creating two separate threads communicating by semaphores, mutexes, pipes (...), i.e. without writing parallel computation software?
 
Thanks, but...
Is there any way to bypass it without creating two separate threads communicating by semaphores, mutexes, pipes (...), i.e. without writing parallel computation software?

No. 1 thread -> 1 CPU.

You can disable hyperthreading, in which case you'll see 100% utilization, but your program will only run slightly faster. The only way to saturate both CPUs is to use two threads. With a hyperthreading CPU, saturating both virtual CPUs may actually make your program run more slowly, due to contention for the cache shared between the virtual CPUs.
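To make that concrete, here is a minimal sketch of splitting an independent computation across two threads so that both virtual CPUs have work. I've used standard C++ `std::thread` for portability; in the VC.NET era you would reach for `_beginthreadex` or `CreateThread` instead, and the function names here are illustrative:

```cpp
// Each half of the range is independent, so no mutex or semaphore is
// needed until the final combine step.
#include <thread>

// Sum of squares over [begin, end), written into *out.
static void sum_squares(long begin, long end, long long* out) {
    long long acc = 0;
    for (long i = begin; i < end; ++i)
        acc += static_cast<long long>(i) * i;
    *out = acc;
}

long long parallel_sum_squares(long n) {
    long long lo = 0, hi = 0;
    std::thread t1(sum_squares, 0L, n / 2, &lo);  // first half
    std::thread t2(sum_squares, n / 2, n, &hi);   // second half
    t1.join();
    t2.join();
    return lo + hi;  // combine only after both threads have finished
}
```

On a hyperthreaded CPU both virtual processors will show activity with this pattern, though for the reasons above the speedup may be modest.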

-cd
 
Ok, many thanks!
But... if displaying 100% only means running slightly faster... what does the 50% mean?
And is it possible to disable hyperthreading in software (if it's even possible to detect it with VC.NET primitives)?
X
 
Ok, many thanks!
But... if displaying 100% only means running slightly faster... what does the 50% mean?

Choose the option in Task Manager to display one CPU usage graph per processor, then run your process with and without hyperthreading enabled and look at the graphs: you'll see the difference.

And is it possible to disable hyperthreading in software (if it's even possible to detect it with VC.NET primitives)?

No: HyperThreading is a system-wide setting, available in the BIOS.

You can force your app to run on only one processor, but this is meaningless if your program is single-threaded (since it will use only one processor by definition).
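As an aside: while hyperthreading can't be toggled from software, a program can at least query how many logical processors the OS reports. A sketch in modern standard C++ (in the VC.NET era the equivalent was `GetSystemInfo()` and the `SYSTEM_INFO::dwNumberOfProcessors` field):

```cpp
#include <thread>

unsigned logical_processor_count() {
    // On a hyperthreaded Pentium 4 this reports 2 logical processors
    // even though there is only one physical core. The standard allows
    // a return of 0 when the count cannot be determined.
    return std::thread::hardware_concurrency();
}
```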

Arnaud
MVP - VC
 
Arnaud Debaene said:
Choose the option in Task Manager to display one CPU usage graph per processor, then run your process with and without hyperthreading enabled and look at the graphs: you'll see the difference.

No: HyperThreading is a system-wide setting, available in the BIOS.

You can force your app to run on only one processor, but this is meaningless

Obviously.

if your program is single-threaded

Of course it isn't: all scientific codes need at least two threads - one for the computation, and one to control the first, so that the computation can be exited cleanly if necessary (without resorting to the Windows task killer) while saving the current progress of the calculations.

The point is not to use only one CPU, but to use the whole machine's capability to make the computation shorter. Watching Task Manager report only 50% usage during computations that last hours or more is frustrating!...
I also wonder about the deeper meaning of hyperthreading.
For instance, computers used to improve their performance by roughly 1.5x per year in the recent past, simply through frequency increases; now they have (durably or not, I don't know) reached a sort of asymptote (I just bought a new 2.8 GHz laptop, which is only 15% faster than the 2.4 GHz one I bought in mid-2002), which doesn't allow (or I don't know how to obtain) much improvement in my computation times.
Is the philosophy of hyperthreading (I'm not a computer scientist but a scientist, so not aware of these things) a way to bypass the performance limit of a single CPU by implementing two in one chip, making it impossible to use the whole system's power in a single task unless the programmer coordinates the work of the different CPUs himself with separate threads, in parallel-style code, as I did long ago on Cray hardware (I worked on an X-MP at the end of the '80s)?
Sorry for my ignorance, and many thanks for the answers.
Xav
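The compute-plus-control pattern described above can be sketched roughly like this. I've used standard C++ (`std::atomic` for the stop flag); on VC.NET at the time one would use `InterlockedExchange` or a Win32 event instead, and the names here are illustrative:

```cpp
// A worker grinds through a long computation and periodically checks a
// stop flag; a controlling thread can request a clean exit, after which
// the partial result (the "current advance") is still available.
#include <atomic>
#include <thread>

struct Computation {
    std::atomic<bool> stop{false};
    long long partial = 0;   // checkpointed progress, valid after join()

    void run(long n) {
        for (long i = 0; i < n && !stop.load(); ++i)
            partial += i;    // stand-in for one step of real work
    }
};

long long compute_with_control(long n, bool cancel_early) {
    Computation c;
    std::thread worker(&Computation::run, &c, n);
    if (cancel_early)
        c.stop.store(true);  // the "control" side requests an exit
    worker.join();           // join() makes `partial` safe to read
    return c.partial;        // partial (or full) result survives
}
```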
 
Is the philosophy of hyperthreading (I'm not a computer scientist but a scientist, so not aware of these things) a way to bypass the performance limit of a single CPU by implementing two in one chip, making it impossible to use the whole system's power in a single task unless the programmer coordinates the work of the different CPUs himself with separate threads, in parallel-style code, as I did long ago on Cray hardware (I worked on an X-MP at the end of the '80s)?
Sorry for my ignorance, and many thanks for the answers.

If I understand correctly, Hyperthreading came about as a result of studies
performed by Intel (and others, no doubt) that found that typical code mixes
only utilized about 30% of the CPU core capacity and were largely bound by
memory bandwidth. A hyperthreading CPU still has only 1 set of functional
units (adders, multipliers, etc), but has a double set of registers. With
that extra set of registers (and a second instruction stream) it's possible
to simulate a second CPU in the same chip. The two separate virtual CPUs
compete for the functional units in the core, so having both virtual CPUs
compute-bound frequently results in less throughput than having a single
virtual CPU compute-bound due to internal contention for resources. The big
advantage of hyperthreading in my experience is that a single compute-bound
thread can't completely monopolize the machine - all the other (mostly
blocked) threads can use the second virtual CPU to get work done while the
compute bound thread grinds away. For today's hyperthreaded CPUs, you're
actually getting better throughput at 50% utilization than you would at 100%
in most cases.
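One way to see this effect for yourself is to time the same compute-bound work on one thread versus split across two, and compare. A rough sketch (standard C++; the helper names are mine, and on a hyperthreaded P4 the two-thread run may be far from 2x faster, for exactly the contention reasons above):

```cpp
#include <chrono>
#include <thread>

// Arbitrary compute-bound floating-point work over `iters` iterations.
static void spin(long iters, double* out) {
    double x = 0.0;
    for (long i = 0; i < iters; ++i)
        x += 1.0 / (1.0 + i);
    *out = x;
}

// Wall-clock time in milliseconds for the work, on 1 or 2 threads.
double time_ms(int nthreads, long iters) {
    double a = 0.0, b = 0.0;
    auto t0 = std::chrono::steady_clock::now();
    if (nthreads == 2) {
        std::thread t1(spin, iters / 2, &a);
        std::thread t2(spin, iters - iters / 2, &b);
        t1.join();
        t2.join();
    } else {
        spin(iters, &a);
    }
    auto t1c = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1c - t0).count();
}
```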

Multicore CPUs are the next generation. In a multicore, internal resources
are not shared between cores, so contention is reduced. A multicore design
might have (for example) 4 cores, each with L1 cache, each supporting 2-way
hyperthreading, with a large, shared L2 cache that all cores access, yielding
8 virtual CPUs in a single chip.

As multicore/hyperthreading become more common, advances are needed in
inter-thread coordination and communication. The current lock-based
strategies are simply too inefficient and don't scale well - code that runs
efficiently on a 100-CPU machine needs to be structured very differently
from code that's suitable for 2 processors.

-cd
 
Great, I've read it. It confirms what I've noticed over the last two years about the evolution of CPUs, and taught me a lot about hyperthreading. Thanks!
Xav
 