Is a Dual Core actually acting like 2 processors?

  • Thread starter: Kardon Coupé

Kardon Coupé

Dear All,

I'm having a running debate at the moment with a family member. I've just
upgraded my PC to an Intel dual core, and I was telling them it is like
having 2 processors in one... and after explaining it is a 2.13GHz, they
were saying, "does that mean that if a program is using 1.99GHz of
processor power, another program only has 0.14GHz to play with?" and I
was saying (and I'm thinking I'm right with this), "The machine thinks it
has two 2.13GHz chips, thus having a combined speed of 4.26GHz."

Am I right in what I'm saying? Or, if I'm wrong, can someone explain how it
does work, so when the conversation next arises I can explain it and it
will actually make sense...

Regards
Paul
 
You are correct. The Core Duo CPU has two separate processors on the die
and EACH runs at 2.13 GHz.
 
DaveW said:
You are correct. The Core Duo CPU has two separate processors on the die
and EACH runs at 2.13 GHz.

But because of the overhead of shared cache and memory bus it won't be
quite twice the speed of a single but under optimum conditions it will
be very close.
 
But because of the overhead of shared cache and memory bus it won't be
quite twice the speed of a single but under optimum conditions it will
be very close.

The 'programs' also have to be multithreaded to use both processors.

For single threaded applications, you can bump up against full CPU
utilization on one core/CPU, leaving the second core/CPU virtually idle.

Intel Core Duo outscores discrete dual-processor system
Serdar Yegulalp, Contributor:
http://tinyurl.com/2czvcf
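To make the multithreading point concrete, here is a small sketch (an editorial addition, not from any poster): the same CPU-bound job run twice back to back versus farmed out to two worker processes. On a dual core the OS can schedule one worker per core, so the parallel run can finish in roughly half the wall-clock time; a single-threaded program gets no such benefit.

```python
import multiprocessing as mp
import time

def busy(n):
    # A purely CPU-bound job: sum of squares up to n.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    N = 2_000_000

    # One after the other: a single core does all the work.
    t0 = time.perf_counter()
    serial = [busy(N), busy(N)]
    t_serial = time.perf_counter() - t0

    # Two worker processes: the OS can put one on each core.
    t0 = time.perf_counter()
    with mp.Pool(processes=2) as pool:
        parallel = pool.map(busy, [N, N])
    t_parallel = time.perf_counter() - t0

    assert serial == parallel  # same answers, different wall-clock time
    print(f"serial: {t_serial:.2f}s  parallel: {t_parallel:.2f}s")
```

The answers are identical either way; only the elapsed time changes, and only if there is more than one core to schedule onto.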
 
Kardon Coupé said:
Dear All,

I'm having a running debate at the moment with a family member. I've just
upgraded my PC to an Intel dual core, and I was telling them it is like
having 2 processors in one... and after explaining it is a 2.13GHz, they
were saying, "does that mean that if a program is using 1.99GHz of
processor power, another program only has 0.14GHz to play with?" and
I was saying (and I'm thinking I'm right with this), "The machine
thinks it has two 2.13GHz chips, thus having a combined speed of 4.26GHz."

Am I right in what I'm saying? Or, if I'm wrong, can someone explain how it
does work, so when the conversation next arises I can explain it and it
will actually make sense...

It doesn't just 'think' it has 2 processors, it genuinely does have 2
processors. It just feels like 1 because both 'cores' are in the same
physical square component.

It is not accurate to say the machine has a combined speed of 4.26GHz. This
would be great, but all you have is 2 processors running at 2.13GHz. They
cannot be combined to do 1 task (thread) at 4.26GHz! They can do 2 things at
once at 2.13GHz, but the processors cannot be pooled into 1.
 
It doesn't just 'think' it has 2 processors, it genuinely does have 2
processors. It just feels like 1 because both 'cores' are in the same
physical square component.

It is not accurate to say the machine has a combined speed of 4.26GHz.
This would be great, but all you have is 2 processors running at 2.13GHz.
They cannot be combined to do 1 task (thread) at 4.26GHz! They can do 2
things at once at 2.13GHz, but the processors cannot be pooled into 1.

And then only if the program is a multi-threaded one, which hardly any
programs seem to be. Hence it's somewhat more of a marketing coup than a
practical computing improvement. In fact, I didn't even notice any
improvement when I swapped my single 2GHz core for my dual 2GHz core. And as
a consumer, I was expecting a near-double speed improvement (after factoring
in motherboard considerations - bus bandwidths, memory limitations, etc.).

Incidentally, does anyone know if Windows XP Pro genuinely runs programs on
separate cores when you choose Task Manager > Processes > Right-click on a
process and then choose Set Affinity?

Or is this just a hypothetical, interestingly worded concept rather than a
technical reality at present?

I've several times set the affinity of a separate high-CPU-consuming task to
one processor whilst setting other tasks to complete on the other. Neither
task ever seems to complete any quicker... at least not noticeably so. Makes
me wonder what all the fuss is about.
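For what it's worth, affinity is a real mechanism, not just interesting wording: the scheduler genuinely restricts the process to the chosen core(s). A hedged sketch (Linux-only; `os.sched_setaffinity` is the syscall-level counterpart of what I'm assuming Task Manager's Set Affinity does on Windows):

```python
import os

def pin_to_cpu(cpu_index):
    """Restrict the current process to a single CPU (Linux only).

    This is the same idea Windows exposes via Task Manager's
    'Set Affinity'; here it's done with the sched_setaffinity syscall.
    """
    if not hasattr(os, "sched_setaffinity"):
        return None  # e.g. Windows/macOS Python builds lack this call
    os.sched_setaffinity(0, {cpu_index})  # 0 means "this process"
    return os.sched_getaffinity(0)        # read it back to confirm

if __name__ == "__main__":
    print(pin_to_cpu(0))
```

Note that pinning only helps when two CPU-hungry tasks would otherwise fight over one core; pinning an already mostly idle task won't make anything finish faster, which may explain the lack of a visible difference.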
 
Dear All,

I'm having a running debate at the moment with a family member. I've
just upgraded my PC to an Intel dual core, and I was telling them it is
like having 2 processors in one... and after explaining it is a 2.13GHz,
they

Hi Paul,

Well, everyone has pretty well mentioned that it really is 2 CPUs in one
socket. You need to get one and try it out.

Imagine a ditch digger, Fred. He has his shovel and he just digs all day
long. That man is a 4GHz ditch digger for sure. But it seems like every
15 minutes the boss wants him to crawl out of the ditch and do something
(coffee, donuts, help John, move this, whatever). One day they hire a
second, 4GHz, ditch digger. Now they only have the one shovel so they
both can't very well dig at the same time. But now every time the boss
needs something, not only does a digger come quickly, the ditch is still
getting dug by the other digger. The second digger can also help with the
ditch by moving the dirt. And if they happen to get a job they can both
work on at the same time...

I cannot testify about Windows in any flavor with SMP. But with Linux, if
you run one app at a time, you will hardly notice the difference. If you
multitask, you will never go back to a single CPU.

A couple of years ago I would read the posts in the forums at
http://2cpu.com/ (and drool). They'd always say how much more responsive a
dual cpu system (smp) was. Then someone with vast knowledge would chime in
about how it couldn't be any better as the apps........well you get the
picture.

I have a DH800 with dual 2.4GHz Xeons and 1 Gig Kingston ValueRAM (1
stick). I can easily encode 7 videos at a time using the command line.
That's running 7 separate instances of ffmpeg at the same time. I can surf
the web at the same time with Firefox and you can't even tell the videos
are encoding in the background. Audios will play with no distortion.
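The "7 instances at once" pattern is just launching independent processes and waiting for them all. Roughly like this sketch (the filenames and the exact ffmpeg arguments are placeholders, not Steve's actual command line):

```python
import subprocess

def encode_all(files, runner=subprocess.Popen):
    # Start one encoder process per input file; with more than one
    # core, the OS schedules the instances in parallel.
    procs = [
        runner(["ffmpeg", "-i", name, name.rsplit(".", 1)[0] + ".mpg"])
        for name in files
    ]
    for p in procs:  # then block until every encode has finished
        p.wait()

# encode_all(["clip1.avi", "clip2.avi", "clip3.avi"])  # needs ffmpeg installed
```

Each instance is single-threaded, so the parallelism comes entirely from running several of them at once - which is exactly the multitasking case where SMP shines.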

BTW: I stopped at 7 because I got tired of typing. It is great. Pretty
much the only time my machine slows down, WAY DOWN, is when my RAM gets
used up and it starts hitting the swap partition. K3b burning a DVD eats my
RAM, as does searching my file system with Konqueror.

Using the command-line script multiburn, I can easily burn 2 DVDs at a
time while surfing the web, and again you can't even tell, except for the
trays of the burner hitting your knees every so often.

The boss is pleased. ;-)

I actually did a few tests with RAM usage.

http://www.linuxquestions.org/questions/showthread.php?s=&threadid=377255

And on smp.

http://www.linuxquestions.org/questions/showthread.php?t=361486

Good luck,
Steve
 
SteveSch said:
Hi Paul,

Well, everyone has pretty well mentioned that it really is 2 CPUs in one
socket. You need to get one and try it out.

Imagine a ditch digger, Fred. He has his shovel and he just digs all day
long. That man is a 4 Ghz ditch digger for sure. But it seems like every
15 minutes the boss wants him to crawl out of the ditch and do something
(coffee, donuts, help John, move this, whatever). One day they hire a
second, 4 Ghz, ditch digger. Now they only have the one shovel so they
both can't very well dig at the same time. But now every time the boss
needs something not only does a digger come quickly, the ditch is still
getting dug by the other digger. The second digger can also help with the
ditch by moving the dirt. And if they happen to get a job they can both
work on at the same time...............

What sort of company would hire a second ditch digger and not get a second
shovel! That would be like investing large sums of money into devising a
processor that can do 2 things at once, each one slightly slower than its
predecessor could, even though there is only one task to focus on!

They would have been better employing a general dogsbody - cheaper than a
4GHz ditch digger!
 
What sort of company would hire a second ditch digger and not get a
second shovel!

Maybe it was a government shop?
That would be like investing large sums of money into
devising a processor that can do 2 things at once, each one slightly
slower than its predecessor could, even though there is only one task to
focus on!

In a computer there is always more than one thing to focus on. My
understanding is that's why video cards with their own CPUs became so
popular: to offload some of the tasks from the main CPU. Right?
They would have been better employing a general dogsbody - cheaper than
a 4GHz ditch digger!

It's a union shop. This position can only be filled with a ditch digger.

Sometimes when they go on a job, someone brings a second shovel.

The company plans on buying a second shovel in the near future.

Steve
 
Maybe it was a government shop?

.... and that's exactly what did happen (for the same $).
Now AMD is stopping production of single cores, but before
then (and when Intel first released Core 2 Duo), you would
get lower performance per $ especially on typical "PC" uses.

However there was the other issue, of how fast, how high
performing either company could get one core to run (as a
cost effective product per the market), so adding a second
core started to make more sense when process size was small
enough to do so.

In a computer there is always more than one thing to focus on. My
understanding is that's why video cards with their own CPUs became so
popular: to offload some of the tasks from the main CPU. Right?

Not really - most PCs have one dominant thread, which is what
the user is actively doing, and most systems don't run at
100% CPU utilization continuously; even of those that do,
most are not time-critical. For example, someone
had mentioned compressing(?) several videos in the
background and it not affecting browsing or audio playback.
A single core CPU can do that job fine; the only reason it
would have been a problem in the past several generations of
processors would be if the jobs running were not assigned a
proper priority level.

It is just unrealistic to think that (today) a typical PC
would have a separate core for every single thread, so there
is still going to be task switching going on constantly.
The biggest difference is just as mentioned above: once you
exceed the contemporary performance possible from one core, the
only two alternatives are to wait for a future technological
improvement in processors to gain more performance, or to add
another (and another, etc.) of those cores.
 
kony said:
... and that's exactly what did happen (for the same $).
Now AMD is stopping production of single cores, but before
then (and when Intel first released Core 2 Duo), you would
get lower performance per $ especially on typical "PC" uses.

However there was the other issue, of how fast, how high
performing either company could get one core to run (as a
cost effective product per the market), so adding a second
core started to make more sense when process size was small
enough to do so.

And as many overclockers have shown - 4GHz is possible with the parts
manufactured to run at 3GHz, so I for one would like to see AMD, Intel etc
pursuing the faster single core speeds already achieved by a few with money
and time to burn on cooling - there are lots of us out here that run single
threaded software and use 1 application at a time! I would benefit much more
from a single core running at 4GHz, as opposed to my dual core running at
2.13GHz - the majority of the time the second core just sits there with the
occasional 5% blip! Only people lucky enough to use multithreaded software
can really benefit from this dual core technology. Software development is
not one of those areas (MS Visual Studio 2005). Gaming is also not
multi-threaded yet.

How about one core at 3.5GHz and the other at 0.5GHz? The 0.5GHz core can
run menial/system tasks, while the main core can be my application core.
 
And as many overclockers have shown - 4GHz is possible with the parts
manufactured to run at 3GHz, so I for one would like to see AMD, Intel etc
pursuing the faster single core speeds already achieved by a few with money
and time to burn on cooling

There are several problems with this, depending on
perspective.

1) Just because "many" do o'c that well, the yields may not
allow enough of them to do so.

2) They want to be able to later ramp up the clockspeed to
*create* new models, thus extending the product line until
its successor is ready.

3) They want you to use their heatsink, if they are
carrying the warranty on the part. Too many variables are
involved when Joe Sixpack becomes a customer.

4) Faster speeds have higher (heat) losses, while a dual
core spreads out the area generating heat so the thermal
density is lower while the thermal conduction to heatsink is
higher.

5) Why spoil it for overclockers by making it cost more to
get that high speed chip and not have any margin left in
it? Intel is in the business to make money too.



there are lots of us out here that run single
threaded software and use 1 application at a time! I would benefit much more
from a single core running at 4GHz, as opposed to my dual core running at
2.13GHz - the majority of the time the second core just sits there with the
occasional 5% blip! Only people lucky enough to use multithreaded software
can really benefit from this dual core technology. Software development is
not one of those areas (MS Visual Studio 2005). Gaming is also not
multi-threaded yet.

That's where the overclocking comes in, or, for the lowest-cost
alternative, snatch up an Athlon 64 single core while a
few are still in the new/retail market. Granted, they don't
generally hit 4GHz, but there is something to be said for
letting technology make things cheaper too, as
faster/faster/faster is a neverending quest, a treadmill, an
unquenched thirst that won't go away, rather just costing a
lot more money to be a year or two ahead of where you'd
otherwise be, performance-wise.
 
In message <[email protected]> kony
5) Why spoil it for overclockers by making it cost more to
get that high speed chip and not have any margin left in
it? Intel is in the business to make money too.

huh?

Other way around, why make it easy for overclockers to buy the lower end
chips and run them faster? Intel makes far more money by testing and
rating each chip and selling it at the maximum speed where it will
remain stable, killing off the overclocking market entirely.
 
In message <[email protected]> "GT"
And as many overclockers have shown - 4GHz is possible with the parts
manufactured to run at 3GHz, so I for one would like to see AMD, Intel etc
pursuing the faster single core speeds already achieved by a few with money
and time to burn on cooling - there are lots of us out here that run single
threaded software and use 1 application at a time! I would benefit much more
from a single core running at 4GHz, as opposed to my dual core running at
2.13GHz - the majority of the time the second core just sits there with the
occasional 5% blip! Only people lucky enough to use multithreaded software
can really benefit from this dual core technology. Software development is
not one of those areas (MS Visual Studio 2005). Gaming is also not
multi-threaded yet.

Even with single threaded software, not many people only do "one" thing
at a time. You might be an exception.

My kids, for example, only do one thing at a time most of the time.
Well, except for encoding MP3s in the background while they chat or
web-browse, and their systems stay responsive.

I'm not sure what sort of coding you do, but on anything other than a
trivially small project, portions of the compiling process could use
both cores for several stages.
How about one core at 3.5GHz and the other at 0.5GHz? The 0.5GHz core can
run menial/system tasks, while the main core can be my application core.

That's roughly what happens with power management turned on, although in
reality both step up when you load up either, as Windows tends to move
tasks between CPUs an awful lot more than would make sense for a single
active thread without anything else substantial on the system. Windows'
algorithm works well when the system is loaded, though.
 
In message <[email protected]> kony


huh?

Other way around, why make it easy for overclockers to buy the lower end
chips and run them faster?

They don't "make it easy"; rather, they have not taken
additional steps to make it particularly hard (beyond the locked
multiplier). There is a difference here, in policy.

Intel makes far more money by testing and
rating each chip and selling it at the maximum speed where it will
remain stable, killing off the overclocking market entirely.

No, they don't. The low-end market is the vast majority of
sales, and it is price, not rating, that ultimately sells the
chips. To then maintain value in the upper end that sells
for more, the lower end has to be artificially limited by
the low spec.
 
DevilsPGD said:
In message <[email protected]> "GT"


Even with single threaded software, not many people only do "one" thing
at a time. You might be an exception.

My kids, for example, only do one thing at a time most of the time.
Well, except for encoding MP3s in the background while they chat or
web-browse, and their systems stay responsive.

Because they are not doing anything that is CPU intensive - the MP3 encoding
is the only CPU-hungry task there, and it is probably a low-priority task, so
it stands down for other tasks.
I'm not sure what sort of coding you do, but on anything other than a
trivially small project, portions of the compiling process could use
both cores for several stages.

Not in the professional Microsoft product that we use!

As I said in my last post - I use the latest version of Microsoft Visual
Studio 2005 and work on a medium-sized project. There is a large DLL and a
main EXE, amongst other small components. The EXE requires the DLL to be
built first, so the build is sequential and cannot run in parallel. A clean,
full build takes around 10 minutes on my Intel Core 2 Duo 2.13GHz and took
around 10 minutes on the Athlon 2400+ that the Intel replaced. The difference
is that the Core 2 Duo has a second core that sits and watches the first core
doing all the work! MS Visual Studio 2005 doesn't make use of multithreading
for its C++/MFC compiling!
That's roughly what happens with power management turned on, although in
reality both step up when you load up either as Windows tends to move
tasks between CPUs an awful lot more than would make sense for a single
active thread without anything else substantial on the system. Windows'
algorithm works well when the system is loaded though.

No, that isn't what happens at all - the cores always run synchronously. I
think you are talking about mobile CPUs in laptops.
 
There are several problems with this, depending on
perspective.

1) Just because "many" do o'c that well, the yields may not
allow enough of them to do so.

You are talking about overclocking existing parts. I was talking about
Intel/AMD learning from the overclockers and manufacturing parts that
consistently run at the speeds that overclockers can already achieve. My old
Athlon 2400+ would happily run undervolted at its stock speed. Mobile
processors run at significantly lower voltages, so why not start there -
take low-voltage processor technology and develop it so that, with a small
increase in voltage, a large increase in GHz could be achieved while still
staying within the heat spec of the higher-voltage part, thereby still
producing a chip that can be cooled with the stock HSF.
2) They want to be able to later ramp up the clockspeed to
*create* new models, thus extending the product line until
its successor is ready.

That's marketing for you!
3) They want you to use their heatsink, if they are
carrying the warranty on the part. Too many variables are
involved when Joe Sixpack becomes a customer.

See my point following 1.
4) Faster speeds have higher (heat) losses, while a dual
core spreads out the area generating heat so the thermal
density is lower while the thermal conduction to heatsink is
higher.

Yes, faster speeds *contribute* to higher heat, but as you know there are
other ways around the heat problem.
5) Why spoil it for overclockers by making it cost more to
get that high speed chip and not have any margin left in
it? Intel is in the business to make money too.

Overclockers are a small percentage of the market for CPUs. They are actually
spoiling it for everyone else by selling chips that aren't set to run as
fast as they 'can'.

My point was just that I would like a really fast single threaded/cored CPU
that will compile my work very quickly and play games very quickly. I don't
want to have to invest time and money in cooling so I can overclock and mess
about and I would prefer not to pay for a second core that I really don't
need! However, I have a Core 2 Duo as it was the fastest processor (on
paper) that I could afford at the time of my upgrade.
 
You are talking about overclocking existing parts. I was talking about
Intel/AMD learning from the overclockers and manufacturing parts that
consistently run at the speeds that overclockers can already achieve.

If you have some special insight on what it would take to
change their manufacturing process to achieve this, I'm sure
they'd love to hear from you.

Remember that any good engineer engineers in as much margin
as possible when the environment of use is unknown. A
processor that might overclock well might instead be put
into a poor generic case, wear a poor heatsink, or be run in a
fairly warm environment. AMD/Intel know, as do the
overclockers, that keeping the temperature below a certain
level is important. If they spec'd a CPU for the highest it
could run in *ideal* conditions, it would be a terrible
mistake, as the vast majority of CPUs are not run in ideal
conditions. Remember that some people don't even open up
their systems to clean the dust out, ever, rather taking it
to a shop in a few years saying "it's broken/crashes, must
be a virus".



My old
Athlon 2400+ would happily run undervolted at its stock speed.

I too have undervolted processors, but remember that it may
not run like this for the "potential" life of the system, 10
years give or take. As the motherboard capacitors and PSU
age, they will have more ripple, and the low-voltage threshold
for stable operation will tend to rise. Even so, someone
familiar with aging and able to periodically recheck
stability might get a good result undervolting, but this is
a bit opposite of the previously desired goal of higher
speed.

Also remember that AMD and Intel have been in the biz for
quite a few years; I'm sure they have some consideration of
what the public perception is when they can't supply parts
to meet demand because their process won't allow enough of part
X at speed Y to fill orders. They don't want a public
perception of instability either, by allowing too little
margin.

Do you recall the Pentium 3 @ 1.13GHz? In stock
configuration it wasn't stable, and that became a bit
notorious; it looked bad for Intel. If one cooled them better
and raised the voltage, they could be run stably, some even
past 2GHz, but the industry in general was not ready for this
amount of attention put into cooling. Educating the system
integrators in smaller shops as well as end users is a slow
process; some are not so keen on the finer details of
implementation, rather slapping the system together and
installing Windows as fast as possible.



Mobile
processors run at significantly lower voltages, so why not start there -

Because they cost more and the market has already shown it
wants low cost more than max performance per power. What
sells in high volume, has an additional edge when it comes
to driving down prices by concentrating on that market and
devoting manufacturing to that core... and then later rating
them to fill the orders per part, not necessarily based on
max speed any one specimen could support. They have to meet
X # of orders for part Y and differentiate the price between
part Y and part Z, even when the two parts might be physically
identical except for being set to a different multiplier for
those willing to pay more.

take low voltage processor technology and develop it so that with a small
increase in voltage, a large increase in GHz could be achieved and still
stay within the heat spec of the higher voltaged part, thereby still
producing a chip that can be cooled with the stock HSF.

This is already the case - have you bought these mobile parts
and done this? Your Athlon 2400, for example, in mobile
Barton form did tend to allow it.
That's marketing for you!


See my point following 1.

Yes, but it still doesn't resolve system aging of
capacitors, dust buildup, or a less-than-ideal environment.
Be glad there is so much margin instead of trying to wring
every last bit out. It does tend to make computers more
reliable in general, and if a system isn't reliable and stable,
it becomes little more than a toy.

Yes, faster speeds *contribute* to higher heat, but as you know there are
other ways around the heat problem.

You're trying to get elaborate, which always tends to cause
more problems once the end users become part of the
equation. If Intel only made sealed systems with the
processors installed, it could be more viable but they are
not interested in doing this. What is possible through
attention to detail and what is practical when we can't
assume there will be this level of attention to detail are
always opposing factors. Likewise, you can wring the last
foot-pound of torque out of a car, but a car run more
conservatively gives the public better reliability long-term.

The general public doesn't need faster processors as much as
they need their present system to keep running reliably.
Just the other day a friend of the family brought me her old
K6/2-500 based system which was (several years ago) built by
me for her out of parts I was ready to throw away at the
time. Even now years later all she wanted was for it to
work still (after she had basically created multiple
hardware configurations in Windows and, seeing a prompt come
up at each boot, fiddled around trying to fix it until
she had managed to set the boot partition as inactive).


Overclockers are a small percentage of the market for CPUs. They are actually
spoiling it for everyone else by selling chips that aren't set to run as
fast as they 'can'.

They're not spoiling anything. I've given several reasons
why the chips aren't engineered to so thin a margin that they
won't work reliably a large % of the time or remain stable
for the life of the system.

Intel and AMD don't take extra steps to "change" anything
for overclockers; when it comes to design decisions, for
practical purposes overclockers are ignored as if they don't
exist. It is the basic requirements of being in THIS
business, this market, making the parts they do, that
dictate the decisions made.

My point was just that I would like a really fast single threaded/cored CPU
that will compile my work very quickly and play games very quickly. I don't
want to have to invest time and money in cooling so I can overclock and mess
about and I would prefer not to pay for a second core that I really don't
need! However, I have a Core 2 Duo as it was the fastest processor (on
paper) that I could afford at the time of my upgrade.

Why are you in such a *rush*? Gamers manage to use existing
processors and typically find the video card the primary
bottleneck. Businesses have run fine for years with far
slower processors than we have today. Enjoy the tech
instead of finding fault; it is meant to make things easier,
not wrought with hopes for something more.

I have no idea which architecture performs best at your (was
it Visual Studio?) app, but seeking benchmarks would be the
place to start. Since you demonstrate a basic
understanding of overclocking, it is up to you whether to do
so. Intel and AMD can't guarantee what level of competence
you will have doing so, therefore they can't spec a part to
be guaranteed to run as fast as you might be able to run it.
Margin is a good thing.
 