Intel engineer discusses their dual-core design

  • Thread starter: YKhan
Rob said:
Finally ? Where have you been hiding for the last 4 or 5 years ? AMD
has had the better CPUs for desktops and 2-way servers and workstations
since the Athlon XP and MP transitioned from 0.18 to 0.13 microns.
Even before then the Athlon XP and MP outperformed the P4 and Xeon - but
also ran pretty danged hot.

The only CPU market Intel has held the technological edge in for the
past 4 or 5 years has been the mobile market, where the Pentium M has
been king and looks like it will reign for a while longer.

While I tend to agree with you, the perception among the masses has been
different, IMHO. But being "the hottest thing right now" changes that.
 
Robert Redelmeier said:
In comp.sys.ibm.pc.hardware.chips Terje Mathisen wrote:
There are several reasons why engineers are very poor at lying:
-) "I'm an engineer, my credibility is my main capital."
-) "Salesmen, la[w]yers, PHBs and several other types that I really
don't like do it, so I want to distance myself from them."
-) It is just so inelegant. :-(
If I absolutely _have_ to lie, it must be by omission: I'll
still tell the truth and nothing but the truth (as I understand
it, of course), but unless you ask me specific questions about
those parts I'm skipping, I might not tell you all of the truth.

So how do you answer when your wife asks: "Does this dress make
me look fat?" :)

"Of course it doesn't, dear!"
(Well, maybe pleasingly plump, chubby ... but not "fat", of course.)
The concept of a "duty of truth" is a practical justification.
One really should not lie (even by omission) when one owes information
to someone, and they may be reasonably expected to rely upon it.

For example, I have no trouble lying to a salesman by saying "I'm busy"
rather than telling him "Your product is grossly overpriced,
I'm insulted you think I'm so stupid as to fall for it, and I
find you obnoxious." The latter may be entirely true, but it is
valuable information (feedback) the salesman has not earned.

A certain amount of lying also eases social interactions.
See the Jim Carrey movie "Liar, liar". Of course, you may
claim that engineers are poor at social interactions :)

--

... Hank

http://home.earthlink.net/~horedson
http://home.earthlink.net/~w0rli
 
Bill said:
Here's a more interesting question: Intel built the D/C chips on P4
rather than P-M, presumably so they could offer the ht model at a huge
premium. Given the low power and far better performance of the P-M in
terms of work/watt and work/clock, why not a dual core Pentium-M? Then
when the better P4 D/C chip is ready they could offer that?

I can't see Intel even having worried about whether it had HT or not.
They just wanted a dual-core in working condition, HT or no HT. The
fact that they were able to get some HT-enabled DC processors out of it
is a bonus as far as they are concerned.

Regarding why P4 instead of P-M? My assumption is that the P-M is too
complicated to simply join together side-by-side to get a dual core. I
think the P4 is a major hack job anyway, dual-core or no dual-core. They
mentioned that they laid out the circuit patterns of the P4 on a
computer, and let the computer take care of rearranging it. So the
P4 may have had a lot of places free to attach communication lines
between the cores, and if it didn't, all they had to do was have the
computer re-lay out the design so that places to put comm lines appeared
in convenient locations.

Yousuf Khan
 
Bill Davidsen said:
Here's a more interesting question: Intel built the D/C chips on P4
rather than P-M, presumably so they could offer the ht model at a huge
premium. Given the low power and far better performance of the P-M in
terms of work/watt and work/clock, why not a dual core Pentium-M? Then
when the better P4 D/C chip is ready they could offer that?

<speculation> The Pentium M has no multi-processor capabilities, so
making it dual core would have required more work and have had a
longer time-to-market than for the Pentium 4/Xeon. Why does the
Pentium M not have these capabilities when the P6 had them originally?
In the change from the original P6 to the Pentium M they replaced the
bus interface with a Pentium 4 style one; maybe they added one that is
not multiprocessor capable. Or they eliminated multiprocessing
capabilities in other places in order to save power and chip area.
</speculation>

Also, the dual-core chips are for the performance-hungry users. And
the performance chip in Intel's marketing is still the Pentium 4 (I'm not
sure if one core of the fastest Pentium D is faster than the fastest
Pentium M, though).

Followups set to comp.arch.

- anton
 
Robert said:
So how do you answer when your wife asks: "Does this dress make
me look fat?" :)

"I think that other one is even nicer."
The concept of a "duty of truth" is a practical justification.
One really should not lie (even by omission) when one owes information
to someone, and they may be reasonably expected to rely upon it.
Sure.

For example, I have no trouble lying to a salesman by saying "I'm busy"
rather than telling him "Your product is grossly overpriced,
I'm insulted you think I'm so stupid as to fall for it, and I
find you obnoxious." The latter may be entirely true, but it is
valuable information (feedback) the salesman has not earned.
:-)

Terje
 

Hmm. I have no difficulty in telling him the latter, though I would
leave out the remark about his obnoxiousness as unprofessional.

I have several times told salesmen and even technical staff that
their product would be cancelled, on the grounds that it would fail
in testing, including once when it had a shipment date under
6 months off. I told them I had seen it attempted N times before,
it had failed every time for fundamental reasons, and I didn't care
how senior the executive was who had told them it would succeed
THIS time - he didn't know what he was rabbiting on about and I did.
I may have used those words :-)

That particular product was virtual shared memory, as a platform
to run arbitrary SMP programs on a distributed memory system.


Regards,
Nick Maclaren.
 
keith said:
Uh, right. Since that's (microprocessor development) what I do for a
living, I suggest that *you* read the article again. This time you
might try reading for comprehension. Intel blew it big time, as did you.

Mudslinging apart, the admission is not new. It is a long time ago that
Intel top brass talked about a sudden 90 degree right turn.

What is new, is that we now know that Intel wanted to be able to match
cores with the same performance per watt in a package. Without that
ability, you have to disable the core that runs, but not well enough. Or
waste a good core. We now know that that package is cheaper for Intel
than disabling a core.
 
keith said:
Spin? Like to load them words, eh? One reason to do SMP testing is that
it's "easy" to test cache coherence with two processors. Two processors
can bang the caches pretty hard against each other.

Do you really think Intel has no SMP verification capabilities? Do they
not "borrow" tools from one organization to another? There's more here
than meets the eye. Someone dropped the ball, big time!

I'm a bit confused. You are talking "verification", but the article
talks about "testing tools and processes".

I thought "verification" meant determining the correctness of the
design, but "testing" often refers to the part of the manufacturing
process that tries to determine whether a newly manufactured chip
conforms to the design.

In those senses, verification of an SMP design should include simulation
of multiple processors. Testing should apply to each chip separately,
because if you put two or more chips together and the result fails, you
only know that one of a set of chips is bad, not which one it is.

Are you using the words that way? If not, could you define
"verification" and "testing"?

Patricia
 
keith skrev:

The only amazing thing here is that you don't seem to understand the
article and appear to know nothing about microprocessor development.

Uh, right. Since that's (microprocessor development) what I do for a
living, I suggest that *you* read the article again. This time you
might try reading for comprehension. Intel blew it big time, as did you.
 
If these cores are the desktop versions rather than Xeon, they were not
planned to be used in SMP, much less in dual core. I'd be interested to
get your spin on why they *would* test the desktop chip SMP.

Spin? Like to load them words, eh? One reason to do SMP testing is that
it's "easy" to test cache coherence with two processors. Two processors
can bang the caches pretty hard against each other.

Do you really think Intel has no SMP verification capabilities? Do they
not "borrow" tools from one organization to another? There's more here
than meets the eye. Someone dropped the ball, big time!
Here's a more interesting question: Intel built the D/C chips on P4
rather than P-M, presumably so they could offer the ht model at a huge
premium. Given the low power and far better performance of the P-M in
terms of work/watt and work/clock, why not a dual core Pentium-M? Then
when the better P4 D/C chip is ready they could offer that?

Absolutely. ...which makes the SMP verification issue even more
strange.
Just curious as to the logic for the decision if anyone has any insight.

Search me. I've given up trying to explain Intel's moves; too bizarre.
 
I'm a bit confused. You are talking "verification", but the article
talks about "testing tools and processes".

I don't think so. "Testing" is a function of ATE widgets. There's
comparatively little development needed to "test" a dual-core
processor. Verification was what the article was referring to.
I thought "verification" meant determining the correctness of the
design, but "testing" often refers to the part of the manufacturing
process that tries to determine whether a newly manufactured chip
conforms to the design.

Fine, if you want to skin the onion that thin. Why should it take that
much work to "test" a dual core processor?
In those senses, verification of an SMP design should include simulation
of multiple processors. Testing should apply to each chip separately,
because if you put two or more chips together and the result fails, you
only know that one of a set of chips is bad, not which one it is.

One doesn't "test" multiple processors. One tests products. If one has
it together, the "tests" fall out of the verification suites (more or less).
Are you using the words that way? If not, could you define
"verification" and "testing"?

I won't argue much with your definitions, just that they're normally
confused. "Verification" includes both simulation and engineering "test"
(we have both "verification" and "hardware verification" groups).
Manufacturing test is something else. Once the design is shown to work,
making sure the one shipped to the customer works is induction. ;-)
 
Rob Stow said:
Better put on your flame retardant suit.

You, as a newbie to this group with no credentials established
are telling Keith, with well established creds, that he knows
nothing about microprocessor development ?

Go look at the variable bit cpu thread, where Keith is trolling as a
mindless donut about "base".

In my book he is a troll.

Bye,
Skybuck

P.S.: Revenge is sweet.
 
keith said:
I don't think so. "Testing" is a function of ATE widgets. There's
comparatively little development needed to "test" a dual-core
processor. Verification was what the article was referring to.

Fine, if you want to skin the onion that thin. Why should it take that
much work to "test" a dual core processor?

One doesn't "test" multiple processors. One tests products. If one has
it together, the "tests" fall out of the verification suites (more or less).

I won't argue much with your definitions, just that they're normally
confused. "Verification" includes both simulation and engineering "test"
(we have both "verification" and "hardware verification" groups).
Manufacturing test is something else. Once the design is shown to work,
making sure the one shipped to the customer works is induction. ;-)

I just thought of an interesting angle. Intel seems to have two
organizations, one for desktops and one for servers. Could all the
knowledge and testcases for SMP>2 reside in the server group, and the
chip being discussed reside in the desktop group? Might they have
trouble sharing?

Or was it just a schedule and resource thang?

Just some idle speculation
 
Finally ? Where have you been hiding for the last 4 or 5 years ?

Intel Inside was more about perception than reality. The reality might
have been tarnished for a while, but I'm now noticing too for the first
time that the general perception (outside geekier circles) is finally
taking some damage.

But by the time it really starts to bite, Intel will probably have some
nice Pentium M based stuff to offer and past mistakes will soon be
forgotten.

The main difference I see in perceptions of both AMD and Intel is that the
market will ignore or forget Intel's mistakes while punishing AMD for
theirs, and won't forget them quickly.
 
Go look at the variable bit cpu thread, where Keith is trolling as a
mindless donut about "base".

Really? You propose a bit of marker per bit of data and you complain
about people suggesting that base-2 may not be ideal? You then go on to
swear, bitch, and moan (the latter only in your more lucid moments), and
then complain that people call _you_ a waste? Note folks that I'm not
alone.
In my book he is a troll.

I suggest a google on sky, if you want a definition of a troll.
Bye,
Skybuck

P.S.: Revenge is sweet.


Revenge? Please!
 
Mudslinging apart, the admission is not new. It is a long time ago that
Intel top brass talked about a sudden 90 degree right turn.

After the Itanic hits the iceberg, the captain orders a 90 degree turn?
...good plan!
What is new, is that we now know that Intel wanted to be able to match
cores with the same performance per watt in a package. Without that
ability, you have to disable the core that runs, but not well enough. Or
waste a good core. We now know that that package is cheaper for Intel
than disabling a core.

Huh? I don't follow that paragraph at all. One wants to run *both*
cores. Otherwise what's the point?

Intel's stacking two cores in an MCM and calling it a "dual core" tells
all. They were caught with their pants down, even after *knowing* what
the score was for a couple of years.
 
I just thought of an interesting angle. Intel seems to have two
organizations, one for desktops and one for servers. Could all the
knowledge and testcases for SMP>2 reside in the server group, and the
chip being discussed reside in the desktop group? Might they have
trouble sharing?

NIH on steroids? I don't buy it. The first Xeons were Pentiums with the
SMP-limiting fuses unblown. The Pentium design was SMP capable, but the
desktop versions had the function disabled.

That's a *lot* of NIH you're proposing!
Or was it just a schedule and resource thang?

I don't buy that either. I smell an excuse for some really bad management
decisions. ...blame the techies!
 
keith said:
Intel's stacking two cores in an MCM and calling it a "dual core" tells
all. They were caught with their pants down, even after *knowing* what
the score was for a couple of years.

No, the package wasn't ready so they had to put both on the same die.

Why don't you read the article?
 
AD. said:
Intel Inside was more about perception than reality. The reality might
have been tarnished for a while, but I'm now noticing too for the first
time that the general perception (outside geekier circles) is finally
taking some damage.

But by the time it really starts to bite, Intel will probably have some
nice Pentium M based stuff to offer and past mistakes will soon be
forgotten.

The main difference I see in perceptions of both AMD and Intel is that the
market will ignore or forget Intels mistakes while punishing AMD for
theirs and won't forget them quickly.

This question of perceptions is an interesting one.
I remember ten years ago I thought of Sony as a premium brand; when I
had to buy a TV or a VCR my default assumption was to buy Sony (and pay
a little more) unless there were a compelling reason not to.
A series of US-management-style clusterfucks over the last ten years
have, in my mind, completely destroyed that perception. Now if I have to
buy an electronics item, either Sony just doesn't make anything that
doesn't suck (iPod space), or I'll compare them with someone like
Samsung (flat panel, DVD player), and chances are Samsung will win.
The last Sony things I bought (a pair of small speakers), while they
looked very cool, were basically all look and no substance; they were
too wimpy in power output to do the job I needed, which didn't do much
to improve my perception of them. When it came to buying some high-end
noise-cancelling headphones, the net reviews agreed: go with Bose and
don't waste your time on Sony.

My question then is:
(1) The readers of this group are probably more tech savvy than average,
so would you say that you have followed my downward opinion of Sony over
the past 10 yrs?
(2) Now, going into the wider population, do you think Sony has fallen
from a premium brand to just one among many?

(Oh, I suspect that Disney, in a different demographic, has undergone
the same sort of slide as Sony over about the same period.)

As for improvement of a brand, it's obviously possible. Apple has
certainly done so in the last few years, and before them I think IBM did
so. So I think Intel probably could improve their brand quality, but
also that they have reached the point where it won't happen with a
louder volume of ads; it will actually require a sustained period of
products that are actually better than AMD's in some way.
 