"Pentium 4" brandname ready to be dropped

  • Thread starter: Yousuf Khan
Because we are soon to be hitting physical limits
in how small things can be and how much power we
can pump through those itty bitty things without
melting them.

Up until now and for the near future, the physical
limits haven't been what was holding us back - it
was our manufacturing technology. Now manufacturing
technology is just about caught up to the physical
limits, so what will be left except to manufacture
bigger or better chips instead of chips that are
merely clocked faster?

You might want to take a look at what Keith Williams
had to say on Monday in the thread
"Re: Processor heat dissipation, Leakage current, voltages & clockspeed"

Oh, my... I'm (in)famous! ;-)
Here's part of it ...
<quote typos=fixed>

Thank you. I gotta get a speel checker.
Face the facts. *We* are getting perilously close
to atomic dimensions and the voltage gradients are
constantly flirting with the MV/cm "limit". *We*
now have 100A on a chip, not much bigger across than
the wire supplying power to your electric stove ...
and the current is all on the "surface". The power
density of these things is on the order of a *BILLION*
times that of ol' Sol.
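As a sanity check on that *BILLION* claim, here's a rough back-of-envelope in Python (the 100 W dissipation, 1 cm² die, and 0.5 mm die thickness are assumed round numbers, not figures from the post):

```python
import math

# Assumed round numbers for a hot desktop CPU of this era
chip_power_w = 100.0           # ~100 W dissipated
chip_volume_m3 = 1e-4 * 5e-4   # 1 cm^2 die, 0.5 mm thick

# The Sun: luminosity averaged over its entire volume
sun_luminosity_w = 3.85e26
sun_radius_m = 6.96e8
sun_volume_m3 = (4.0 / 3.0) * math.pi * sun_radius_m ** 3

chip_density = chip_power_w / chip_volume_m3    # ~2e9 W/m^3
sun_density = sun_luminosity_w / sun_volume_m3  # ~0.27 W/m^3

print(chip_density / sun_density)  # on the order of 1e9 to 1e10
```

So "a *BILLION* times ol' Sol" holds up if you compare against the Sun's volume-averaged power density; against the Sun's *surface* flux (~6,000 W/cm²) the chip actually loses.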


AMD is currently at 2.4 GHz with their 90 nm chips, and
the two that I've had my hands on have had little room
for overclocking - even when the fan is blowing -10°C
air over the heatsink.

Do I think AMD is going to solve this and get up to the
kinds of clocks Intel hit with the P4? Yes - eventually.
Do I think 65 nm is going to be a magic cure-all that's
going to burst through the 3.6 GHz ceiling Intel hit?
I'll believe it when I see it.

You sum up my thoughts rather well. Sure, we're going to get
better and faster, but it's not going to be easy, nor cheap. The big
question is whether *we* want to pay for it. These questions have been
kicking around for many years, but it seems the question can no longer be
avoided.

<snip>
 
Well, they're going dual core; that should be a clue.


That's a logical fallacy. If in the transition from 130nm to 90nm and 90nm
to 65nm scaling became suddenly easier and they could get a 3x frequency
increase for each, would that mean that neither AMD nor Intel would go
dual core? I think the answer is an emphatic "NO WAY", of course they'd
still do dual core, and reap even larger performance benefits!

Let me give you a hint: 200mm^2 cores won't WORK in 90nm...
You can spend transistors in many fashions:
1. Improving the core
2. Improving the cache
3. Replicating the core
1 causes problems if you use too much area. 2 is pretty inexpensive,
from the area, defect and design standpoint, and 3 is inexpensive from
the design standpoint (I suspect).
Large area cores take too long for signals to traverse and are also
problems from the yield standpoint.


None of what you said refutes my point that dual cores were done due to
the additional available transistors, and have little or nothing to do
with whether scaling the clock rate is still working fine, has been
slowed down or has stopped entirely. No matter what was happening or
not happening with scaling, they'd do dual cores because the transistors
available allowed it and there wasn't any better/easier use of those
transistors.

Actually, IMHO that has more to do with heat and scaling issues
driving the need for more cache, and the inventory issues making larger
die sizes more palatable.
BTW, I believe that Intel will be releasing Dual core Xeons at the
same time as dual core desktops.


Nope, latest info has them doing the desktops in Q3 2005, and the Xeons
in H1 2006. Given their recent slips I wouldn't put much stock into
those dates, of course...

That's a good point, as dual core on the desktop would halve AMD's
capacity. Maybe Intel should have thought about going quadcore...that
would sure put some pressure on AMD.
Did AMD state that they would release a dual core server chip before
the desktop, or do they simply not intend to release a dual core
desktop chip?
Sorry, I stopped caring about the desktop market a while ago...


Yes, AMD has plans for dual core desktops in H1 2006, according to the
latest rumors.

I doubt this will happen. Intel can easily push dual cores as a huge
desktop advantage (twice as good). It will show up in benchmarks,
unlike HT, which was just a blip. I would expect to hit ~30-70% gains
on certain benchmarks, and also the systems will be more responsive
since normal users have quite a few processes running at once.
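Those 30-70% figures line up with Amdahl's law for two cores. A quick sketch (the parallel fractions are illustrative assumptions, not measurements):

```python
def speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: the serial part runs once, the parallel part
    splits evenly across cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A ~30% gain on two cores implies roughly half the workload parallelizes;
# a ~70% gain implies roughly 80% of it does.
for p in (0.46, 0.65, 0.82):
    print(f"{p:.0%} parallel -> {speedup(p, 2):.2f}x on two cores")
```

The flip side, of course, is that a benchmark with a small parallel fraction barely moves, which is why only "certain benchmarks" show the gain.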


Yes, Intel can definitely do that. But consider this: Right now AMD
is selling everything they can make, supplying a desktop market that's
growing slowly, and a server market that's growing quickly (easy to do
considering it was essentially zero 18 months ago). 90nm gives them
more chips due to the smaller die sizes, but they have to supply their
existing desktop market, fast growing server market, and plan to attack
the mobile market as well in 90nm. They may simply not have the capacity
to attack the dual core desktop market in any meaningful way until they
move to their new 300mm fab at 65nm in 2006. Sure, they might sell some
dual core Athlon FXs, since those are just Opterons with a different
pinout and they will be selling dual core Opterons next summer. But if
Intel moves aggressively to dual cores across their whole desktop range
by this time next year, AMD probably won't be able to answer.

It will be interesting to see how consumers perceive the choice between
a dual core CPU with each core running at 3 GHz or so, versus a single
core Athlon 64 5000+ running at 3 GHz or so. It'll come down to marketing
of course, and Intel always wins there, but the benchmarking war ought to
be fun. There will be some things that Intel will win going away but for
other things that don't parallelize as well AMD will totally dominate.

Even though Intel is dropping HT, at least for the new dual cores which
will not have HT enabled, it will have done its job as it got some
developers interested in threading their applications. More importantly
for Intel, they concentrated on making sure all the apps used for
benchmarks supported HT, which will help their showing with their dual
core CPUs next year. A dual core CPU released in 2001 would have looked
useless on the desktop benchmarks of the time, but now there is a lot of
multitasking built into most of the apps used for testing, courtesy of
Intel.
 
Ah, sounds like "Death of the Internet predicted, film at 11pm"...

I think you're being a tad hyperbolic.
We're an order of magnitude away from hitting the limit in terms of "how
small things can be".

Assuming you're right (order of magnitude), that's less than a decade
before the world ends. ...not a great proposition for people investing
several billions in semiconductor research. We indeed *are* getting close
to atomic sizes for things like oxide thicknesses.
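For a sense of just how close: a gate oxide around 1.2 nm (a typical figure for that era, assumed here) is only a handful of SiO2 monolayers (~0.35 nm each, also an assumed textbook value):

```python
oxide_thickness_nm = 1.2  # approximate gate oxide around the 90 nm node
sio2_monolayer_nm = 0.35  # roughly one SiO2 "layer" (assumed textbook value)

layers = oxide_thickness_nm / sio2_monolayer_nm
print(f"gate oxide is ~{layers:.1f} atomic layers thick")  # ~3-4 layers
```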
Though it's possible quantum effects might hurt
us before then,

There already are! Gate tunneling is a quantum effect. ...then again,
what isn't in semiconductor physics?
but we might find a way to harness them for further
improvements. I remember 10 years ago when it was fashionable to claim
that we were nearing the physical limits using optics and we'd need to
be using X-rays or electron beams by now. But damned if those smart
boffins didn't find a way around that with new materials with amazing
refractive indices so we can continue using optics to the 45nm and
perhaps even the 32nm generation. It's even possible that by then we
might have new materials that are even better, extending the life of
optics even further.

You certainly have a point. IBM spent in excess of a GigaBuck for a
synchrotron for x-ray lithography. Interference masks made this useless
(well, that and the fact that it never worked...)
As for power density limits, we are still in the caveman stages when it
comes to handling stuff like that, blowing air over pieces of metal
pressed down on the chip surface...c'mon! There are plenty of ways to
help with the cooling of localized heat buildup, from simple liquid
cooling like overclockers (and Apple) are doing now, to integrated
Peltiers (AMD's patent from 2001 that was recently publicized) to stuff
that's further out, like routing thousands of microscopic channels
through the chip for cooling fluid flow.

Your optimism is admirable. Sure we can do all that IBM mainframe stuff
from thirty years ago, but do you really want a cooling tower on your
roof? (ok, I'm being hyperbolic now ;-).
True, wires won't shrink as fast as feature sizes (and I think that's
been true for a while now and hasn't hurt us). That just means more
metal layers as we get smaller and more complex routing -- luckily we
have faster computers to help with that more complex routing.

If you don't think that wires are hurting us, perhaps you'd like to have
my job for a day. Wiring is a *huge* problem, and not getting better.
We've already paid the pain with copper, and ten levels of it. What are
you suggesting as the breakthrough here?
Yes, and this is their very first batch of 90nm chips. I remember a lot
of the same complaints about the first 130nm K8s that were shipped being
unable to exceed 2 GHz and there was a lot of worrying about AMD's SOI
process, but since they are shipping some parts at 2.6GHz in 130nm now
it seems like they licked that problem pretty well.

You are an optimist. ;-)
 
David said:
BTW, I believe that Intel will be releasing Dual core Xeons at the
same time as dual core desktops.


That's a good point, as dual core on the desktop would halve AMD's
capacity. Maybe Intel should have thought about going quadcore...that
would sure put some pressure on AMD.

Intel had some huge inventory problems. Too many 300mm, 90nm fabs. Probably
originally intended to produce a lot of Itaniums, probably now slated to
produce a lot of EM64T's instead.
Did AMD state that they would release a dual core server chip before
the desktop, or do they simply not intend to release a dual core
desktop chip?

Sorry, I stopped caring about the desktop market a while ago...

Yeah, actually AMD's original intention was to never release dual-core chips
for the desktop. It was originally only going to release it for servers.
However, after Intel's about-face on dual-cores, AMD decided to add
dual-core desktops too. It doesn't cost AMD anything, as the desktop and
server processors are the exact same things, only in different packaging
(Socket 939 vs. 940).
I doubt this will happen. Intel can easily push dual cores as a huge
desktop advantage (twice as good).

I don't think AMD would delay dual-core desktops till 65nm either. Intel
would have a huge publicity advantage over it. However, look for AMD to sell
dual-core desktops at nearly server prices, like it does with the FX series.
Maybe, the dual-cores will become the GX-series? :-)
It will show up in benchmarks,
unlike HT, which was just a blip. I would expect to hit ~30-70% gains
on certain benchmarks, and also the systems will be more responsive
since normal users have quite a few processes running at once.

The HThread programming interface will become very popular now as the
gateway to multicores, not just SMT. Intel's HT API has room in its
registers for up to 256 virtual processors. Some of them can be real cores,
some can be SMT cores. Unfortunately, MS Windows only assumes up to 2 virtual
processors, so it looks like Intel is going to have to sell multicore
processors without SMT, as that would make the processors look like 4
virtuals, which would confuse Windows.

Yousuf Khan
 
Douglas said:
Yes, Intel can definitely do that. But consider this: Right now AMD
is selling everything they can make, supplying a desktop market that's
growing slowly, and a server market that's growing quickly (easy to do
considering it was essentially zero 18 months ago). 90nm gives them
more chips due to the smaller die sizes, but they have to supply their
existing desktop market, fast growing server market, and plan to
attack the mobile market as well in 90nm. They may simply not have
the capacity to attack the dual core desktop market in any meaningful
way until they move to their new 300mm fab at 65nm in 2006. Sure,
they might sell some dual core Athlon FXs, since those are just
Opterons with a different pinout and they will be selling dual core
Opterons next summer. But if Intel moves aggressively to dual cores
across their whole desktop range by this time next year, AMD probably
won't be able to answer.

I don't think Intel would want to move whole-hog into dual-cores for the
desktop. Dual cores on the desktop would be expensive to build and therefore
expensive to purchase. Dual cores will become their top-line processors to
make up for performance no longer available through continuous clockspeed
increases. First they'll start out with bigger caches, but that will quickly
come to a point of diminishing returns, and then they will try dual-core to
further increase performance.

Yousuf Khan
 
Yousuf Khan said:
I don't think AMD would delay dual-core desktops till 65nm either. Intel
would have a huge publicity advantage over it. However, look for AMD to sell
dual-core desktops at nearly server prices, like it does with the FX series.
Maybe, the dual-cores will become the GX-series? :-)

All I know is that dual-cores will be hideously expensive. Think
P4-EE pricing, with NO ramp-down to mainstream pricing, at least not
for a few years.
 
I don't think Intel would want to move whole-hog into dual-cores for the
desktop. Dual cores on the desktop would be expensive to build and therefore
expensive to purchase. Dual cores will become their top-line processors to
make up for performance no longer available through continuous clockspeed
increases. First they'll start out with bigger caches, but that will quickly
come to a point of diminishing returns, and then they will try dual-core to
further increase performance.


They will produce as many as necessary to use up their spare fab capacity.
Having $2 billion fabs partially idle is a poor business decision, and it
isn't as if they couldn't sell all the dual cores with one failed core as
a single core, so the cost of making a dual core is at worst double that of
making a single core. And the true cost of making a P4 is probably
something on the order of $25, so they could certainly sell their dual
core CPUs at pretty much the same price points as they sell single cores,
if they wished. What pricing scheme they actually choose will, like the
way they price all their CPUs, have to do with marketing and maximizing
their margins rather than having anything much to do with their production
cost.
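The salvage argument can be made concrete with a simple binomial model (the 80% per-core yield is an illustrative assumption; real defects cluster on the wafer, so this is if anything pessimistic):

```python
def die_bins(per_core_yield: float) -> tuple[float, float, float]:
    """Fractions of dual-core dies with 2, exactly 1, and 0 working cores,
    assuming independent per-core defects."""
    y = per_core_yield
    both_good = y * y
    one_good = 2.0 * y * (1.0 - y)   # either core can be the bad one
    none_good = (1.0 - y) ** 2
    return both_good, one_good, none_good

both, one, none_ = die_bins(0.8)
print(f"sell as dual: {both:.0%}, salvage as single: {one:.0%}, "
      f"scrap: {none_:.0%}")
```

With 80% per-core yield, 96% of dual-core dies are sellable in some form, which is why yield is largely a non-issue for the dual-core cost argument.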

I wouldn't be surprised to see them match up the prices to the single
core line. So the fastest dual core (3.2 GHz/1MB) would be priced the
same as their fastest single core at the time, which is always around
$600-ish. Then $400-ish for the next, and $275-ish for the last (2.8
GHz/1MB).
 
In comp.arch Rob Stow said:
Because we are soon to be hitting physical limits
in how small things can be and how much power we
can pump through those itty bitty things without
melting them.

Those limits are far off. The present limits are economic and
price limits.
Up until now and for the near future, the physical
limits haven't been what was holding us back - it
was our manufacturing technology. Now manufacturing
technology is just about caught up to the physical
limits, so what will be left except to manufacture
bigger or better chips instead of chips that are
merely clocked faster?

And then the markets will re-differentiate again, there will be
faster clockspeeds and what are you going to say then?
 
Those limits are far off. The present limits are economic and
price limits.

You keep saying this, but it's not true. Gate oxides are only a few atoms
thick now. Making them thinner isn't useful. Sure there are some things
that can be done (e.g. hi-K dielectrics) but physics only takes that so
far too. Are we done today? No, but it's not clear if we've bought much
from the last few billion$ we've spent.

The "economic" limits have been discussed for at least a decade. ...and
these predictions have been optimistic! Physics isn't easy to get around.
And then the markets will re-differentiate again, there will be faster
clockspeeds and what are you going to say then?

Next year? ...or in twenty years? Silicon is here to stay, like the IC
engine. Where is the breakthrough?
 
All I know is that dual-cores will be hideously expensive. Think
P4-EE pricing, with NO ramp-down to mainstream pricing, at least not
for a few years.

You *know* this? I'd like to see your evidence for this "knowledge". I
don't see a dual-core in 90nm as being a nickel more expensive to produce
than a uni in 130nm. Indeed, the market will *demand* dual-cores, so the
cost is irrelevant, the price differential will soon be *zero*.
 
Douglas said:
They will produce as many as necessary to use up their spare fab
capacity. Having $2 billion fabs partially idle is a poor business
decision, and it isn't as if they couldn't sell all the dual cores
with one failed core as a single core, so the cost of making a dual
core is at worst double that of making a single core. And the true
cost of making a P4 is probably something on the order of $25, so
they could certainly sell their dual core CPUs at pretty much the
same price points as they sell single cores, if they wished. What
pricing scheme they actually choose will, like the way they price all
their CPUs, have to do with marketing and maximizing their margins
rather than having anything much to do with their production cost.

Selling all dual-core processors will hurt their margins, if they sell them
for the same price as single-cores. If they price them up, then their
margins will not be affected, but not so many people will want to buy them.
They could do better to fill up their fab capacity with increased production
of Celerons.
I wouldn't be surprised to see them match up the prices to the single
core line. So the fastest dual core (3.2 GHz/1MB) would be priced the
same as their fastest single core at the time, which is always around
$600-ish. Then $400-ish for the next, and $275-ish for the last (2.8
GHz/1MB)

However, their dual cores will be at least 2MB (each core will have its own
1MB). The die size on that will be enormous, which would mean it would
likely have to price its dual-cores at least $100 higher than its fastest
single-cores.

Yousuf Khan
 
keith said:
You *know* this? I'd like to see your evidence for this "knowledge". I
don't see a dual-core in 90nm as being a nickel more expensive to produce
than a uni in 130nm. Indeed, the market will *demand* dual-cores, so the
cost is irrelevant, the price differential will soon be *zero*.

A pretty safe forecast, since you posted the same day Via announced a
dual-core model. Yes, Via. Not known for high-priced CPUs! ;-)
 
Felger Carbon said:
A pretty safe forecast, since you posted the same day Via announced a
dual-core model. Yes, Via. Not known for high-priced CPUs! ;-)
Nah, Keith was reasoning from first principles. Or as an old guru around
here used to say, "a chip's a chip. And they all cost five dollars" (it
was a few years ago). As for the P4 "costing" 25 dollars, the reasoning
behind that assertion would be interesting.

del cecchi
 
Del said:
Nah, Keith was reasoning from first principles. Or as an old guru around
here used to say, "a chip's a chip. And they all cost five dollars" (it
was a few years ago).

Isn't it three dollars? And doesn't it apply to DRAM chips only? ;-)
As for the P4 "costing" 25 dollars, the reasoning
behind that assertion would be interesting.

Hm, probably valid for the Celeron version, naked die only. When the retail
price starts at ~$60 including taxes, the chip (die) can't cost more than
$25.
 
keith said:
You *know* this? I'd like to see your evidence for this "knowledge".

Yeah, I hesitated before using that word. Obviously I don't "know"...
I don't see a dual-core in 90nm as being a nickel more expensive to produce
than a uni in 130nm.

But still a lot more expensive than a 90nm uni.
Indeed, the market will *demand* dual-cores, so the
cost is irrelevant, the price differential will soon be *zero*.

Will the market, near-term, demand dualies at mainstream prices? Or
will dualies (for the next few years) be relegated to the high end? I
hope you're right. Time will tell...
 
Tried the XPC route, though with an Athlon XP 2000+. It would probably
be OK if sufficiently well hidden, but as a personal machine I found it
too noisy. Now, this was an SK41G, one of the early ones, so it's quite
possible that they've gotten better since then.



Thus my dilemma. ;-)



Pricey - *within reason* is OK.

$300 is about what low-noise water cooling solutions cost today,
and I think it's within reason for high-end desktops.
Getting the heat from one or two CPUs, the VGA block, and the chipset out
of the case and radiating it without a fan makes the system noiseless;
then get a quality low-noise PSU and it's practically silent. Remember to
get heatsinks for the graphics card's memory chips.

Yes, $300 may seem costly, but it's good for high-end systems that cost a
lot anyway, and that cooling solution improves the availability of the
system by letting it run all the time. And it doesn't include the cost of
the PSU. There are also fanless PSUs for extremists, but I think it might
be better to have your 120mm fan in the PSU than to get a fanless PSU and
add a slow 120mm case fan to cool all the components that are not water
cooled. [A fanless PSU still needs SOME air circulation inside of it.]
One advantage of water cooling, besides being quiet, is that less air
(and dust) is moved through your system. Another is that if you upgrade
(your power usage), you can still use the same cooling system, since it
can handle a LOT more than what current systems produce.
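The "could handle a LOT more" point follows from simple thermal resistance: die temperature rise above ambient is power times resistance. A minimal sketch with assumed (not measured) resistance values:

```python
def die_temp_rise(power_w: float, theta_c_per_w: float) -> float:
    """Steady-state temperature rise above ambient: dT = P * theta."""
    return power_w * theta_c_per_w

air_theta = 0.35    # assumed heatsink + fan, degrees C per watt
water_theta = 0.15  # assumed water loop + large passive radiator, degC/W

for power in (100, 150):
    print(f"{power} W: air +{die_temp_rise(power, air_theta):.0f} C, "
          f"water +{die_temp_rise(power, water_theta):.0f} C")
```

With the lower thermal resistance, the same loop keeps an upgraded, hotter CPU at the temperature the old air cooler managed for the original one.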

Of course then there is insanely pricey solution of TNN500A that doesn't
provide enough juice for highend gfx cards, or dual processors ;)

Jouni Osmala
Helsinki University of Technology

Ps. These are just opinions, not based on hands-on experience but on
multiple review sites. And I'd get such a system if I could afford a
high-end PC. [My current one is much cheaper than Dell's cheapest
offerings around here.]
 
Yeah, I hesitated before using that word. Obviously I don't "know"...


But still a lot more expensive than a 90nm uni.

Probably going to be on the order of $35 vs. $45 to produce. Of
course, it's always very important to remember that the cost of
production has next to nothing to do with the price that the chips
will sell for.
Will the market, near-term, demand dualies at mainstream prices? Or
will dualies (for the next few years) be relegated to the high end? I
hope you're right. Time will tell...

I suspect that there will be a big enough market for at least some
reasonable-priced dual-core chips that they'll get down in the
$200-$300 price range in short order. It may take a while for them to
get down into the $100-$200 price range, and some could argue that
this is where "mainstream" prices are these days, but even that price
will probably eventually go dual-core.
 
Selling all dual-core processors will hurt their margins, if they sell them
for the same price as single-cores. If they price them up, then their
margins will not be affected, but not so many people will want to buy them.
They could do better to fill up their fab capacity with increased production
of Celerons.

No, their production is based on what they can SELL, not the other way
around. They can't just increase Celeron production and tell people to
buy more of them at high prices. They maximize profits, and they think
they can sell dual cores at higher margins than Celerons!
A Celeron from the previous process generation is about the same size as
this process generation's dual core. They have extra capacity, they have
saturated the market with their products, and they have a smaller
competitor grabbing some of their market share. So what they need to do
is increase the attractiveness of their processors, and dual core is the
only reasonable way to do it. They can't just make more and sell at
lower prices, since that would REDUCE their profits: profits are
margins*units, and lowering the Celeron price would probably bring in
less extra sales than they would lose by selling those Celerons at lower
prices. They need to move their mainstream up with SOMETHING, so that
they can improve the Celeron's capabilities without hurting mainstream
sales and lowering their ASP.
However, their dual cores will be at least 2MB (each core will have its own
1MB). The die size on that will be enormous, which would mean it would
likely have to price its dual-cores at least $100 higher than its fastest
single-cores.

HOW SO? (π·R²)/wafer cost ≈ 30 mm²/$ for Intel, and the Prescott die
size is 109 mm². So that should be about $4 of silicon; Intel finally
figured out that die area is pretty cheap.
The yield on Intel chips can be assumed to be well over 60%, as Intel
has a history of going for high-yield production, even on big chips.
So my estimate is that it costs about $30 for Intel to make a dual core
vs. $24 for a single core.
Yes, per-die-area silicon manufacturing costs are SMALL and everything
else costs a LOT. And the dual core isn't huge; actually it's about the
same size as the original Pentium II or Pentium 4. And they need
SOMETHING to compete against the A64, so they throw their superior
manufacturing muscle at it, so that AMD simply cannot match it with
volume production!
Most of the price of a processor is [a gross margin, for R&D including
process development, marketing, profit, company overhead, etc., and not
the manufacturing! Prices are not in any way related to manufacturing
costs today, unless the costs happen to be really huge for some reason.]
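Those back-of-envelope numbers can be reproduced like this (the wafer cost and the yields are assumptions, chosen to match the ~30 mm²/$ figure):

```python
import math

wafer_radius_mm = 150.0  # 300 mm wafer
wafer_cost_usd = 2350.0  # assumed cost of a fully processed wafer

wafer_area_mm2 = math.pi * wafer_radius_mm ** 2   # ~70,700 mm^2
mm2_per_dollar = wafer_area_mm2 / wafer_cost_usd  # ~30 mm^2/$

def silicon_cost(die_area_mm2: float, yield_frac: float) -> float:
    """Raw silicon cost per good die (ignores edge loss, test, packaging)."""
    return die_area_mm2 / mm2_per_dollar / yield_frac

print(f"{mm2_per_dollar:.1f} mm^2 per dollar")
print(f"single core, 109 mm^2 at 80% yield: ${silicon_cost(109, 0.8):.2f}")
print(f"dual core,   218 mm^2 at 70% yield: ${silicon_cost(218, 0.7):.2f}")
```

A few dollars of silicon either way, which is the point: nearly all of a CPU's price is R&D, overhead, marketing and margin, not wafer cost.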

Jouni Osmala
Helsinki University of Technology.
 
Selling all dual-core processors will hurt their margins, if they sell them
for the same price as single-cores. If they price them up, then their
margins will not be affected, but not so many people will want to buy them.
They could do better to fill up their fab capacity with increased production
of Celerons.


How would selling dual core processors hurt their margins, other than for
the extra few dollars for the additional silicon area? Yield is a
non-issue here, since a dual core CPU with one bad core sells well as a single
core CPU.

Your suggestion to fill up fab capacity with Celerons is the LAST thing
they'd want to do if they are worried about hurting their margins! They
sell P4s for $150-$600, Celerons from $60-$110. The cost of production
is very similar, with maybe a dollar's worth of additional die area, all
of which is cache and thus yield isn't a problem for that additional area.
Please explain to me how producing more low margin products helps their
margins?

Not to mention that if they increase the supply of Celerons in the market,
they are either stuck with them if no one wants to buy them (and Intel is
already in an oversupply situation this year as it is) or they discount
them. And that's almost worse, because that would hurt their margins
more as well as lower the price of Celeron based systems compared with P4
based systems even more so it could hurt their high margin P4 sales.

Someone else in this thread calculated that the silicon costs a dollar
per 30 sq mm. Given the prices DRAMs sell for, it couldn't be much
higher than that. So using that figure, a dual core CPU has a materials
cost about $2 higher than a single core. Yield is not an issue, but
testing would be a bit more involved so let's round up the difference to
be $5. If Intel used a $15 premium like they did for the HT versus non-HT
P4s, they'd make more money on the dual core CPUs, and price-wise they'd
be very desirable. We will probably be able to tell a lot about how
much excess fab capacity they have next year by the size of the dual core
premium. I just don't understand why people think that if a low end P4
sells for $150, that a dual core would have to sell for $300 for Intel
to make the same amount of money. The production cost and selling price
for CPUs have very little to do with each other!
 
In the section you snipped you missed the part about
wanting an ATX motherboard.

The AOpen one is just another one of those
less-than-full-featured micros. Only 2 DIMM
slots and only 3 PCI slots just doesn't cut it -
particularly when it costs twice as much as a
full-featured ATX board. A lot of people - but
not me - would also be disappointed by no AGP 3.0.

http://www.x86-secret.com/articles/cm/dfi855/dfi855-1.htm

this one has AGP :) and the P-M has huge potential, beating the A64 4000
a few times.

Regards.
 