So Jobs gets screwed by IBM over game consoles, thus Apple-Intel?

Yousuf said:
Well, yield does not equal capacity. About two years ago, AMD's
market share had fallen to about 15%; that had nothing to do with how much
they could supply and everything to do with how much demand there was
for their product.


... and also the fact that the economy had been in the tank around that
time; tech markets were very sluggish, particularly for all the chip
makers, and processor sales were down for some time.


-Rick
 
Tony Hill said:
The die sizes for AMD's current processors range from about 85mm^2 for
their Sempron chips up to about 199mm^2 for their new dual-core chips.
Currently they're split somewhere in the middle of the transition from
130nm to 90nm production (I believe they're past the half-way point in
this transition). Yield numbers are a rather tightly guarded secret,
though ranges from 60-80% would be typical. I figure that 35M to
45M chips per year would be a decent rough estimate of their total
capacity at Fab 30.


Unless you've got a lot more info than the rest of us mere mortals,
chances are that any yield analysis would be rather pointless anyway.
As for the dual-core chips, they are indeed fairly large. On the flip
side, so was the original Athlon64/Opteron when it was first released.
The first Athlon64/Opteron chips were all built from the same die,
weighing in at 193mm^2 on a 130nm production line. The new dual-core
Opteron and Athlon64 X2 chips have a 199mm^2 die on a 90nm production
line. Big chips and, not surprisingly, a big price tag to match.

My yield analysis was based on wafer starts out of Fab30 (given), wafer size
(given), die size (given), die production of 100mm2 die from fab 30 (given
for 2003) and a simple assumption that yield = wafer area shipped/wafer area
started.

A simple area defect model would allow you to scale yield from the original
100 mm2 die, upon which the 1M die/week production numbers were based, to a
200 mm2 die. The yield would drop to about 20%, or 1/3 of the production
volume they had with the original Athlon. Yield improvements since 2003 would
help - but 90nm is a fairly mature process. Moving to larger wafers and a 65
nm process will help - in the future.

http://www.micromagazine.com/archive/98/03/roadmap.html
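The area scaling described above can be sketched with a simple Poisson defect model, Y = exp(-D0 * A), under which doubling the die area squares the yield. The numbers below are illustrative only, not AMD's actual yields; a baseline of ~45% at 100 mm^2 is assumed because it reproduces the "about 20%" figure quoted in the post.

```python
import math

def scaled_yield(y_ref: float, area_ref: float, area_new: float) -> float:
    """Scale die yield with area under a simple Poisson defect model.

    Y = exp(-D0 * A) implies Y_new = Y_ref ** (area_new / area_ref).
    """
    return y_ref ** (area_new / area_ref)

# Illustrative: a ~45% yield on a 100 mm^2 die falls to about 20%
# on a 200 mm^2 die, the rough figure quoted above.
print(f"{scaled_yield(0.45, 100.0, 200.0):.2f}")  # 0.20
```

As the thread goes on to note, this single-population model ignores redundancy in cache arrays, so it is a lower bound of sorts.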
Very true. Apple has even less incentive to use alternative part
suppliers than Dell does. However I would think that they like to
keep AMD around as a sort of safety blanket.

Agreed - and a warmer safety blanket than Freescale.
I'm sure one could argue one way or the other about the visibility of
AMD's research effort, but the fact of the matter is that it most
definitely does exist. In one extremely crude measure of research,
the number of patents issued to each company, AMD has been
consistently leading Intel for several years now. How this translates
into real research is another matter, but it at least shows that AMD
is doing *something*.

Leading for several years??
http://www.uspto.gov/main/homepagenews/bak11jan2005.htm - not in 2004

http://www.uspto.gov/web/offices/ac/ido/oeip/taf/top03cos.htm - AMD had 60%
of Intel's count in 2003.


But I was talking more about the highly visible research program that Intel
has established. Whatever their real research effort is, AMD hasn't made it
an identifiable area of effort. In contrast, Intel makes a big deal of
research labs spread across the world and associations with prestigious
universities.

James
 
My yield analysis was based on wafer starts out of Fab30 (given), wafer size
(given), die size (given), die production of 100mm2 die from fab 30 (given
for 2003) and a simple assumption that yield = wafer area shipped/wafer area
started.

I hate to say it, but you're making *WAY* too many assumptions there
to get even remotely meaningful numbers.
A simple area defect model would allow you to scale yield from the original
100 mm2 die, upon which the 1M die/week production numbers were based, to a
200mm2 die. The yield would drop to about 20%, or 1/3 the production volume
they had with the original Athlon. Yield improvements since 2003 would
help - but 90nm is a fairly mature process. Moving to larger wafers and a 65
nm process will help - in the future.

http://www.micromagazine.com/archive/98/03/roadmap.html

That model seems rather dated. For example, it doesn't take into
account the differences between cache and logic transistors (at best
you would require two yield analysis runs per die, one for logic and
one for cache). And of course, simple area defect models are just
that, simple. Their accuracy leaves MUCH to be desired.
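A two-population version of the kind Tony describes is easy to sketch. The numbers below are hypothetical: cache is given a lower *effective* defect density to stand in for row/column redundancy repairing some defects.

```python
import math

def die_yield(d_logic: float, area_logic: float,
              d_cache: float, area_cache: float) -> float:
    """Poisson yield with logic and cache treated as separate defect
    populations (densities in defects/cm^2, areas in cm^2). Repairable
    cache is modelled as a lower effective defect density."""
    return math.exp(-d_logic * area_logic) * math.exp(-d_cache * area_cache)

# Hypothetical 2 cm^2 die, half logic and half cache, where redundancy
# lets the cache tolerate defects 5x better than the logic:
one_run = math.exp(-0.5 * 2.0)           # single-population model
two_run = die_yield(0.5, 1.0, 0.1, 1.0)  # split logic/cache model
print(round(one_run, 3), round(two_run, 3))  # 0.368 0.549
```

The gap between the two results is the point of the objection: treating a cache-heavy die as uniform logic area understates its yield.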

Hmm.. I think my numbers were a bit dated, looks like Intel reversed
the trend a couple of years ago. It looks like it was just 1999
through to 2002 calendar years that AMD had more patents. Either way,
as mentioned above, this is an extremely crude measure of research
that companies are doing.
But I was talking more about the highly visible research program that Intel
has established. Whatever their real research effort is, AMD hasn't made it
an identifiable area of effort. In contrast, Intel makes a big deal of
research labs spread across the world and associations with prestigious
universities.

Very true, Intel is more visible in the R&D field, and indeed they
spend more money on R&D by a fairly large amount. For 2004 AMD
reported $935M R&D spending vs. $4.78B for Intel. As a percentage of
total revenue though, AMD is a bit higher at 18.7% vs. 14.0% for
Intel.


Of course, I'm not sure that the R&D really figures into this whole
discussion much at all. While Intel may have a decently large R&D
effort, unquestionably the world-leader in R&D would have to be IBM,
and it was Apple's switch away from IBM that started all of this.
 
Tony Hill said:
I hate to say it, but you're making *WAY* too many assumptions there
to get even remotely meaningful numbers.

So, what matters to yield except the number of wafers started vs. the number
of chips shipped? The die size and wafer size determine (to within 5-10%)
the number of dies per wafer. The die size for the Athlon product being
shipped then was 100 mm2 on a 90nm process with 200 mm wafers. That
translates to ~314 die/wafer. Let's knock off 10% for unused area - 280
die. A publication in 2003 stated the wafer starts and the volumes: 10K/wk
wafer starts. So Fab 30 had the potential to ship 140M Athlons/yr. The same
article claimed 50M/year processors shipped for Fab 30. That is larger than
your claim that AMD could ship ~25% of 175M/yr total volume. The yield,
using these numbers, is an embarrassing 36%. I'll give you 50-60% on
underutilized capacity. To get up to a respectable 80% for this small die on
a mature process, they would have been operating at only 57% utilization.
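The arithmetic above can be replayed in a short sketch, using the figures exactly as quoted in the post (the slight difference from the quoted 36% comes only from rounding the per-wafer die count):

```python
import math

WAFER_DIAMETER_MM = 200.0
DIE_AREA_MM2 = 100.0
EDGE_LOSS = 0.10                 # knock off ~10% for unusable edge area
WAFER_STARTS_PER_WEEK = 10_000   # figure quoted from the 2003 article
SHIPPED_PER_YEAR = 50e6          # likewise quoted from that article

wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2        # ~31,416 mm^2
gross_die_per_wafer = wafer_area / DIE_AREA_MM2            # ~314
usable_die_per_wafer = gross_die_per_wafer * (1 - EDGE_LOSS)
potential_per_year = usable_die_per_wafer * WAFER_STARTS_PER_WEEK * 50

implied_yield = SHIPPED_PER_YEAR / potential_per_year
print(f"{implied_yield:.0%}")  # 35%
```

Note this "yield" folds in fab utilization as well as defectivity, which is exactly the objection raised elsewhere in the thread.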

Do you believe that AMD capacity is underutilized? Then why, when Riaz is
questioned about the issue, does he refer to efforts for FUTURE increases -
Fab 36, Chartered?
That model seems rather dated. For example, it doesn't take into
account the differences between cache and logic transistors (at best
you would require two yield analysis runs per die, one for logic and
one for cache). And of course, simple area defect models are just
that, simple. Their accuracy leaves MUCH to be desired.

The idea that there is some intrinsic yield difference for transistors
configured into a memory circuit from those structured into logic, all
fabricated on the same process, is fallacious. These basic elements all
have attributes termed "critical area". One can try to measure that area
from a layout. However, the yield itself is a rather accurate measure. If
you have measured the yield on one product, and then build another one with
a larger number of transistors, then the critical area will increase
similarly. If the ratio of transistor types changes markedly, then the
scaling will have to be corrected, too. Now you can count the number of
transistors, rather than area, if you wish, but area will be a crude
estimate of the former. How much would a more sophisticated model change
the result from my estimate of 20%? To 25%? 15%? It doesn't really
matter, does it? You are polishing the turd.
Hmm.. I think my numbers were a bit dated, looks like Intel reversed
the trend a couple of years ago. It looks like it was just 1999
through to 2002 calendar years that AMD had more patents.

Hmm. Just before the dates I provided citations for. Can you?
Either way,
as mentioned above, this is an extremely crude measure of research
that companies are doing.


Very true, Intel is more visible in the R&D field, and indeed they
spend more money on R&D by a fairly large amount. For 2004 AMD
reported $935M R&D spending vs. $4.78B for Intel. As a percentage of
total revenue though, AMD is a bit higher at 18.7% vs. 14.0% for
Intel.

If I were funding an R&D program, I would prefer $ over % :^).
Of course, I'm not sure that the R&D really figures into this whole
discussion much at all. While Intel may have a decently large R&D
effort, unquestionably the world-leader in R&D would have to be IBM,
and it was Apple's switch away from IBM that started all of this.

1. I was discussing why Jobs chose Intel over AMD, not IBM.

2. It is my understanding, from what I have heard in the industry, that Jobs
originally saw the relationship with IBM (and Moto) as bringing something
more than CPUs. He saw a replacement for PARC. But it didn't turn out as he
had hoped.

3. It is my judgement that the highly visible research program at Intel
would appear to have the elements of a replacement for IBM.

4. While IBM does spend far more money on research than does Intel, it is
often observed that Intel spends far more money on research on the areas of
critical interest to Intel than does IBM. I suggest that areas of research
at Intel are probably pretty well aligned with those of Jobs.

James
 
The idea that there is some intrinsic yield difference for transistors
configured into a memory circuit from those structured into logic, all
fabricated on the same process, is fallacious.

Not true at all. Redundancy can easily be built into arrays. Not so
with logic.
 
James said:
Hmm. Just before the dates i provided citations for. Can you?

It gets harder and harder to find information for subjects that are
several years old. So far, I've found this one from a speech by AMD
Chairman Jerry Sanders in 2000 talking about 1999:

"What is perhaps the world's most important "innovation index" -
patents issued by the United States Patent and Trademark Office -
provides additional evidence of the value of our investment in R&D. In
1999, with 825 new U.S. patents, AMD ranked number 18 among all the
companies in the world in the number of patents issued. We ranked only
one place behind the Silicon Valley company whose name is synonymous
with invention, Hewlett Packard, and one place ahead of Intel. We will
continue our aggressive pursuit of innovation."

http://www.pcstats.com/releaseview.cfm?releaseID=198

Yousuf Khan
 
So. What matters to yield except the number of wafers started vs the number
of chips shipped? The die size and wafer size determines (to within 5-10%)
the number of dies per wafer. The die size for the Athlon product being
shipped then was 100 mm2 on a 90nm process with 200 mm wafers. That

AMD is only just now at somewhat over 50% 90nm wafer starts, they're
still producing a decent number of 130nm wafers.

AMD has pretty much never just had a single processor in production at
Fab30, they have always had multiple chips being pumped out at the
same time. 100mm^2 is, generally speaking, a fairly small die these
days. Yeah, the late AthlonXP/Sempron chips were under 100mm^2, and
the current 90nm fab line Socket 754 Sempron chips are around 100mm^2,
but that's about it. Many of their chips are larger.
translates to ~314 die/wafer. Let's knock off 10% for unused area - 280
die. A publication in 2003 stated the wafer starts and the volumes 10K/wk
wafer starts. So Fab 30 had the potential to ship 140M Athlons/yr. The same

Fab 30 has a maximum output of 5K wafers/week.
article claimed 50M/year processors shipped for Fab 30. That is larger than
your claim that AMD could ship ~ 25% of 175M/yr total volume.

Indeed, since the numbers are totally inaccurate.
The idea that there is some intrinsic yield difference for transistors
configured into a memory circuit from those structured into logic, all
fabricated on the same process, is fallacious.

Not in the least. Cache transistors damn near always come with
redundant blocks. The same is rarely true for logic transistors.
Also, my understanding of it is that the fairly even and structured
nature of cache allows for generally lower defect rates in that part
of the die.
Hmm. Just before the dates i provided citations for. Can you?

I'll see if I can track them down...

Here's a list from one of AMD's press releases:

http://www.amd.com/us-en/assets/content_type/DownloadableAssets/patents_chart.pdf

Original press release is here:

http://www.amd.com/us-en/Corporate/VirtualPressRoom/0,,51_104_543_8001~13966,00.html


For the 2002 numbers you can find them on your previous link:

http://www.uspto.gov/web/offices/ac/ido/oeip/taf/top03cos.htm

The two right-most columns show number of patents from 2002.
If I were funding an R&D program, I would prefer $ over % :^).
Indeed!


1. I was discussing why Jobs chose Intel over AMD, not IBM.

I understand that, it just seems like this couldn't have been too high
of a priority or else they wouldn't have left IBM in the first place.
2. It is my understanding, from what I have heard in the industry, that Jobs
originally saw the relationship with IBM (and Moto) as bringing something
more than CPUs. He saw a replacement for PARC. But it didn't turn out as he
had hoped.

Things in the computer industry have a funny way of not turning out
how any of us hoped!
3. It is my judgement that the highly visible research program at Intel
would appear to have the elements of a replacement for IBM.

4. While IBM does spend far more money on research than does Intel, it is
often observed that Intel spends far more money on research on the areas of
critical interest to Intel than does IBM. I suggest that areas of research
at Intel are probably pretty well aligned with those of Jobs.

Intel's research has, traditionally, focused on the actual
manufacturing of processors. Certainly they do other things, but that has
always been their real core area of expertise.

I wouldn't even want to pretend to know what Steve Jobs is after
though, so I certainly don't know if this is aligned with what he's
interested in or not!
 
Tony Hill said:
AMD is only just now at somewhat over 50% 90nm wafer starts, they're
still producing a decent number of 130nm wafers.

I was referring to Fab 30, which I understood to be running 200 mm. They
don't have a mixed process in one fab, do they?
AMD has pretty much never just had a single processor in production at
Fab30, they have always had multiple chips being pumped out at the
same time. 100mm^2 is, generally speaking, a fairly small die these
days.

My numbers were based on data from 2003.

Yeah, the late AthlonXP/Sempron chips were under 100mm^2, and
the current 90nm fab line Socket 754 Sempron chips are around 100mm^2,
but that's about it. Many of their chips are larger.

Fab 30 has a maximum output of 5K wafers/week.

It was stated to be 10K in 2003. 5K is a relatively small number of wafer
starts.
Indeed, since the numbers are totally inaccurate.

They would be if the claimed wafer starts were off by 50%.
Not in the least. Cache transistors damn near always come with
redundant blocks. the same is rarely true for logic transistors.
Also, my understanding of it is that the fairly even and structured
nature of cache allows for generally lower defect rates in that part
of the die.

The redundant blocks would have an impact. Other than that, the critical
area analysis would account for layout differences.
I'll see if I can track them down...

Here's a list from one of AMD's press releases:
http://www.amd.com/us-en/assets/content_type/DownloadableAssets/patents_chart.pdf

That's fine.
Original press release is here:

http://www.amd.com/us-en/Corporate/VirtualPressRoom/0,,51_104_543_8001~13966,00.html


For the 2002 numbers you can find them on your previous link:

http://www.uspto.gov/web/offices/ac/ido/oeip/taf/top03cos.htm

The two right-most columns show number of patents from 2002.


I understand that, it just seems like this couldn't have been too high
of a priority or else they wouldn't have left IBM in the first place.

Probably a third-level issue. But it is my contention (and always has been)
that IBM's interests are not narrowly enough focused on technologies - and
businesses - directly related to Apple's business. That misalignment would
extend to research.
Things in the computer industry have a funny way of not turning out
how any of us hoped!


Intel's research has, traditionally, focused on the actual
manufacturing of processors. Certainly they do other things, that has
always being their real core area of expertise.

A look at their web site suggests that they have added some "IBM-like"
activities.
http://www.intel.com/research/index.htm
I wouldn't even want to pretend to know what Steve Jobs is after
though, so I certainly don't know if this is aligned with what he's
interested in or not!
My model of the relationship is a broader one than most assume. The use of
Intel processors is a first step. Intel will use Apple as a test bed for a
broad range of technologies, hoping that Apple use will pull them into wider
use. It would be interesting to speculate how WiMax would fit into this
relationship, for example. In turn, Apple will provide input into Intel's
R&D effort. Who knows, Intel's capital may even enter into the picture.

James
 
I was referring to Fab 30, which I understood to be running 200 mm. They
don't have a mixed process in one fab, do they?

Yes they do. All are 200mm diameter wafers, but they have been
progressing from 130nm production to 90nm production. I just checked
some AMD pages and they say that they have now completed that
transition.
My numbers were based on data from 2003.

In 2003 Fab30 was 100% 130nm (or close enough for approximation's sake).
Throughout that year they were producing Thoroughbred and Appaloosa
core AthlonXP chips, with a die size of about 85mm^2 (+/- 5mm^2
depending on which stepping you're talking about), Barton core
AthlonXP chips at 100mm^2 and Clawhammer/Sledgehammer Athlon64 and
Opteron processors at 193mm^2. Exact product mix shifted throughout
the year from the smaller AthlonXP cores towards the larger
Athlon64/Opteron cores.
It was stated to be 10K in 2003. 5K is a relatively small number of wafer
starts.

According to AMD themselves, 5K/week:

http://www.amd.com/us-en/Corporate/JobOpportunities/0,,51_82_621_628^502^509,00.html


Not the largest plant in the world by any means, but still reasonably
hefty.
 
Tony Hill said:
Yes they do. All are 200mm diameter wafers, but they have been
progressing from 130nm production to 90nm production. I just checked
some AMD pages and they say that they have now completed that
transition.
Yikes!


In 2003 Fab30 was 100% 130nm (or close enough for approximation's sake).
Throughout that year they were producing Thoroughbred and Appaloosa
core AthlonXP chips, with a die size of about 85mm^2 (+/- 5mm^2
depending on which stepping you're talking about), Barton core
AthlonXP chips at 100mm^2 and Clawhammer/Sledgehammer Athlon64 and
Opteron processors at 193mm^2. Exact product mix shifted throughout
the year from the smaller AthlonXP cores towards the larger
Athlon64/Opteron cores.


According to AMD themselves, 5K/week:

http://www.amd.com/us-en/Corporate/JobOpportunities/0,,51_82_621_628^502^509,00.html


Not the largest plant in the world by any means, but still reasonably
hefty.

They are farther behind the power curve than I had realized. I would expect
that there may be 65 nm capacity available soon from some of the foundries.
It is no wonder that they are looking into foundry support.

All that given, isn't their fab technology lag a good reason alone to choose
Intel?

James
 
They are farther behind the power curve than I had realized. I would expect
that there may be 65 nm capacity available soon from some of the foundries.
It is no wonder that they are looking into foundry support.

All that given, isn't their fab technology lag a good reason alone to choose
Intel?

What technology lag? Have you forgotten Intel's agonies with 90nm over a
period of months well into 3Q last year... Dothan, Prescott, et al.? AMD is
using a more advanced process with Dual Stress Liner, which they
"co-developed" with IBM, than the strained silicon which gave Intel so much
trouble. AMD did as Intel did when 90nm didn't give the
yield/performance they'd hoped for - they produced both... their advanced
production capability allowed them to do that in a single fab and it's been
behind them for months. The current process is what Chartered will be
taking up I believe.

As for 65nm AMD is well into the prototype production stage with that at
Fab 36. If they say it's "going well" is there any less reason to believe
that than Intel's crowing how they will be first at 65nm? Let's wait and
see.
 
What technology lag? Have you forgotten Intel's agonies with 90nm over a
period of months well into 3Q last year... Dothan, Prescott, et al.? AMD is
using a more advanced process with Dual Stress Liner, which they
"co-developed" with IBM, than the strained silicon which gave Intel so much
trouble.

How does one tell whether one process is "more advanced" than another?
Higher yield? Faster, lower-power transistors? Fewer mask steps? Or press
releases?

AMD did as Intel did when 90nm didn't give the
yield/performance they'd hoped for - they produced both... their advanced
production capability allowed them to do that in a single fab and it's been
behind them for months. The current process is what Chartered will be
taking up I believe.

What was stated is that AMD has now finally converted one modest-sized 200 mm
fab to 90 nm. That doesn't sound like a leadership position to me. There are
many fabs across the world, at Intel and elsewhere, that have been running
90 nm 300 mm fabs in high-volume manufacture for a year.
As for 65nm AMD is well into the prototype production stage with that at
Fab 36. If they say it's "going well" is there any less reason to believe
that than Intel's crowing how they will be first at 65nm? Let's wait and
see.

Intel announced a prototype 65 nm line in 2004. But as with conversion to 90
nm, the proof will be in the high volume delivery of product, not prototype
operation. We can afford to wait and see. My point was, however, that Jobs
made his decision on the existing record, not hopes for the future.

James
 

How does one tell whether one process is "more advanced" than another?
Higher yield? Faster, lower-power transistors? Fewer mask steps? Or press
releases?

One reads about it, fits it in with one's pool of knowledge and looks at
the results. Since I'm not a process expert, I have to use my varied
background in chemistry and computers - the Dual Stress Liner, which brings
switching enhancement on both p- and n- channels seems to be more
advanced... a view with which others more expert than I concur. Perhaps
one of the IBM or Intel people who post can comment but certainly Intel is
still stuck with heat/dissipation problems. Seems like their NIH sneers at
Cu/SOI are coming back to haunt them.
AMD did as Intel did when 90nm didn't give the

What was stated is that AMD has now finally converted one modest-sized 200 mm
fab to 90 nm. That doesn't sound like a leadership position to me. There are
many fabs across the world, at Intel and elsewhere, that have been running
90 nm 300 mm fabs in high-volume manufacture for a year.

No, it was you who said it was modest-sized; I understood Tony thought it
was suitably-sized.:-) Of course there are many 90nm plants but not many
of them producing complex, dense logic circuit CPUs. Even Intel's success
there is barely a year. The AMD Fab 30 has been producing usable 90nm
since August/September last year.
Intel announced a prototype 65 nm line in 2004. But as with conversion to 90
nm, the proof will be in the high volume delivery of product, not prototype
operation. We can afford to wait and see. My point was, however, that Jobs
made his decision on the existing record, not hopes for the future.

I did *not* say a prototype "line", which I'd consider more of a pilot
plant project - my understanding is prototype *production* which
usually/often precedes the "sampling" stage by 6months or so, with a
further couple of months or so before actual production is achieved... all
assuming that 65nm is actually going to work, which is no given for
anybody, AIUI right now.

Jobs?... As has already been stated by others, he "followed the money".:-[]
 
George Macdonald said:
One reads about it, fits it in with one's pool of knowledge and looks at
the results. Since I'm not a process expert, I have to use my varied
background in chemistry and computers - the Dual Stress Liner, which brings
switching enhancement on both p- and n- channels seems to be more
advanced... a view with which others more expert than I concur. Perhaps
one of the IBM or Intel people who post can comment but certainly Intel is
still stuck with heat/dissipation problems. Seems like their NIH sneers at
Cu/SOI are coming back to haunt them.

In other words, IBM and AMD press releases. No functional comparisons made.

Intel, and the rest of the industry, are stuck with heat/dissipation
problems. However, we do have one indicator of the relative degree of the
problem. After years of extolling the superiority of IBM technology (which
you seem to be so much impressed with) Jobs announced that he was abandoning
it, precisely because it did NOT match up to Intel technology with regards
to heat/dissipation problems. Moreover, he saw nothing on their technology
roadmap that would give him any hope that they would improve it.
No, it was you who said it was modest-sized; I understood Tony thought it
was suitably-sized.:-)

No. He said it was "reasonably hefty". Whether it was suitably sized or not
would depend on AMD's business. A 5K/week fab is a modest-sized fab. The
largest ones can produce 15K 300 mm wafers/wk. That is 6 times the capacity
of Fab 30.

Of course there are many 90nm plants but not many
of them producing complex, dense logic circuit CPUs. Even Intel's success
there is barely a year. The AMD Fab 30 has been producing usable 90nm
since August/September last year.

There have been other fabs "producing" 90 nm product for several years now.
The point was that AMD finally converted their fab completely to 90 nm just
recently. That is at least a year behind Intel and others. And, it is only a
200 mm line, compared to the industry standard of 300 mm.
I did *not* say a prototype "line", which I'd consider more of a pilot
plant project - my understanding is prototype *production* which
usually/often precedes the "sampling" stage by 6months or so, with a
further couple of months or so before actual production is achieved... all
assuming that 65nm is actually going to work, which is no given for
anybody, AIUI right now.

A line that produces prototype product is a prototype line. For example,
typical prototype lines run SRAM wafers. When it finishes the prototype
stage then, if it is suitably sized, it can start ramping production. The
Intel announcement, for example, referred to a new production-sized fab in
Ireland. AFAIK, nobody builds pilot plant fabs any more. It is just too
expensive and time consuming. Fabs built for developing a new process are
sized to be useful for production. If not, you would encounter yet another
set of problems when you attempted to scale the process.
Jobs?... As has already been stated by others, he "followed the
money".:-[]

This ubiquitous "Others" makes many statements. "Others" claimed to know
that the AMD 90 nm process is superior to the Intel process, for example.
But if Mr. Others is saying that AMD couldn't deliver equivalent product at
a competitive price, I would agree with him.

James
 
They are farther behind the power curve than I had realized. I would expect
that there may be 65 nm capacity available soon from some of the foundries.
It is no wonder that they are looking into foundry support.

All that given, isn't their fab technology lag a good reason alone to choose
Intel?

Err... what lag? Intel will be the first to start shipping chips
built on a 65nm process in about 6-10 months time, the foundries
aren't on track for 65nm until at least this time next year and
probably not until next fall. AMD has already started test-runs in
their new Fab36 facility and should be roughly tied with IBM for
second to ship 65nm products, probably next spring.

Yes, they are usually a couple months behind Intel in starting a
switch over to a new process, but they tend to have a shorter
switchover time (where Intel usually takes about a year to switch
their processors from one fab generation to the next, AMD usually
finishes in about 8 months). We saw this same sort of thing when
Intel and AMD transitioned to 90nm this time last year. Intel
"released" (using the term loosely) their first 90nm chips in Feb. of
2004. They first hit widespread availability in about May or June of
2004 and then it took until about the end of the year before that was
all they were producing. AMD didn't release their first 90nm chips
for a few months later, about this time last year, but had
availability pretty much right from the get-go and had completed their
transition to 90nm at the end of the year as well.
 
Tony Hill said:
Err... what lag? Intel will be the first to start shipping chips
built on a 65nm process in about 6-10 months time, the foundries
aren't on track for 65nm until at least this time next year and
probably not until next fall. AMD has already started test-runs in
their new Fab36 facility and should be roughly tied with IBM for
second to ship 65nm products, probably next spring.

I was speaking more to the track record. As you pointed out to me, AMD has
just finally completed their conversion of Fab 30 to 90 nm. As it is only a
200 mm fab, they are still behind Intel (and much of the rest of the industry
leaders). Perhaps AMD will be able to make up some of this lag with their
new Fab, but that is a hope for the future, not the present.

As to 65 nm, Intel claims to have been busy characterizing that process
since late 2003.
http://www.itnews.com.au/newsstory.aspx?CIaNID=17324
Yes, they are usually a couple months behind Intel in starting a
switch over to a new process, but they tend to have a shorter
switchover time (where Intel usually takes about a year to switch
their processors from one fab generation to the next, AMD usually
finishes in about 8 months). We saw this same sort of thing when
Intel and AMD transitioned to 90nm this time last year. Intel
"released" (using the term loosely) their first 90nm chips in Feb. of
2004. They first hit widespread availability in about May or June of
2004 and then it took until about the end of the year before that was
all they were producing. AMD didn't release their first 90nm chips
for a few months later, about this time last year, but had
availability pretty much right from the get-go and had completed their
transition to 90nm at the end of the year as well.

And I thought that their fab wasn't completely converted until recently.
There is a common confusion between first shipped and high volume
production. When a fab is pushing a full volume of wafers through the line,
at targeted yields, then HVP has been achieved.

And still running only 200 mm. There IS a difference between a 200 and a 300
mm line - about 2X.
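The "about 2X" can be pinned down with one line of arithmetic: wafer area scales with the square of the diameter, so a 300 mm wafer has 2.25x the area of a 200 mm one (edge effects push the usable-die advantage a bit higher still).

```python
# Usable-area ratio between 300 mm and 200 mm wafers: area goes as the
# square of the diameter, before accounting for edge effects.
ratio = (300 / 2) ** 2 / (200 / 2) ** 2
print(ratio)  # 2.25
```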

James
 
In other words, IBM and AMD press releases. No functional comparisons made.

I am able to "read" more than press releases - you? I'm not going to do
your research for you.
Intel, and the rest of the industry, are stuck with heat/dissipation
problems.

AMD is not in the same position as Intel here. Please try to get some
relevant info before spouting.
However, we do have one indicator of the relative degree of the
problem. After years of extolling the superiority of IBM technology (which
you seem to be so much impressed with) Jobs announced that he was abandoning
it, precisely because it did NOT match up to Intel technology with regards
to heat/dissipation problems. Moreover, he saw nothing on their technology
roadmap that would give him any hope that they would improve it.

It's my understanding that the power management, or lack of it in PowerPC,
is the main(?) problem for Apple... something which AMD brings to the
IBM/AMD alliance and possibly some technology they obtained in part from
Transmeta. Why IBM has not taken that up I have no idea.

According to what I read of the principal reasons -- lack of mobile capable
G5 -- Apple has been seduced by a big lie: they talk of Intel notebooks
running 3GHz and higher and point to their miserly 1.67GHz chip; trouble is
Apple is not going to get those Intel chips, since they are obsolescent
mobile P4s. Intel's future lies with P-M and developments thereof... *and*
what speed are P-Ms running at?... 2.0GHz but there are damned few of them
and they run hot at full tilt. The most common P-M notebook runs at
1.7/1.8GHz... in fact very close to the iBooks at 1.67.

By the time Apple starts selling Intel-based systems, I'd say a fair
estimate of where Intel will be, clock-wise, is ~2.6GHz with their new chip
derived from P-M. Sorry but I can't keep up with all the stupid code names
they dream up. At any rate, Apple's published(?) or imagined technical
reasons for switching are either a lie or a smokescreen.
No. He said it was "reasonably hefty". Whether it was suitably sized or not
would depend on AMD's business. A 5K/week fab is a modest-sized fab. The
largest ones can produce 15K 300 mm wafers/wk. That is 6 times the capacity
of fab 30.
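For what it's worth, the "6 times" figure checks out on raw silicon area (taking the 5K and 15K weekly wafer-start numbers quoted above at face value):

```python
import math

def weekly_area_mm2(wafers_per_week, wafer_diameter_mm):
    """Total wafer area started per week, ignoring edge exclusion."""
    return wafers_per_week * math.pi * (wafer_diameter_mm / 2.0) ** 2

fab30 = weekly_area_mm2(5_000, 200)       # Fab 30: ~5K 200 mm wafers/wk
big_300mm = weekly_area_mm2(15_000, 300)  # large fab: ~15K 300 mm wafers/wk
print(round(big_300mm / fab30, 2))        # 6.75x the silicon area per week
```

Three times the wafer starts times 2.25x the area per wafer gives 6.75x, i.e. "6 times" is, if anything, slightly conservative.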

Intel may have such fabs at such high density logic - nobody else I can
think of has... but I don't follow TI and some of the other non-CPU fabs so
closely. Certainly Moto/Freescale turned out to be a big loser for AMD
*and* Apple so count them out. The fact is that 5K in the design rules and
process technology of Fab 30 *is* a respectable count... and certainly well
capable of supplying more than AMD's current market share - the addition of
Apple's piddly amount would not have been a big additional burden.
Of course there are many 90nm plants but not many

There have been other fabs "producing" 90 nm product for several years now.
The point was that AMD finally converted their fab completely to 90 nm just
recently. That is at least a year behind Intel and others. And, it is only a
200 mm line, compared to the industry standard of 300 mm.

I believe you are wrong here - Intel did not complete their conversion to
all-90nm CPU production a year ago - they had first usable 90nm chips about a year ago.
As for "industry standard", the transition to 300mm has happened slowly
over the past year in HDL, where it is necessary and useful - it's hardly a
"standard". The fact that there are a bunch of flash, SRAM and DRAM plants
at 300mm is irrelevant.
A line that produces prototype product is a prototype line. For example,
typical prototype lines run SRAM wafers. When it finishes the prototype
stage then, if it is suitably sized, it can start ramping production. The
Intel announcement, for example, referred to a new production-sized fab in
Ireland. AFAIK, nobody builds pilot plant fabs any more. It is just too
expensive and time consuming. Fabs built for developing a new process are
sized to be useful for production. If not, you would encounter yet another
set of problems when you attempted to scale the process.

Whether it was produced on a full production line or not, the early Intel
65nm could by no means be considered prototype production - call it what
you want but it was a pilot project... a bit above proof of concept if you
like, from which a decision to "build" was taken. Hell the initial
"demonstration" around August 2004, *was* on SRAM chips... hardly prototype
CPUs. The infrastructure, including floorspace, is currently in various
states of construction in 3 locations.

I've no idea which Intel announcement you're referring to nor when it was
made. There have been several PR releases, news items, on the Ireland
plant situation and the go ahead for what is termed an expansion of an
existing plant was only taken in Feb/March this year, after an application
for EU aid was turned down. Ireland's economy is now considered "fixed"
and there are many other EU regions which need the stimulation more... like
East Germany, e.g. :-)
Jobs?... As has already been stated by others, he "followed the
money".:-[]

This ubiquitous "Others" makes many statements. "Others" claimed to know
that the AMD 90 nm process is superior to the Intel process, for example.

Look it up - Dual Stress Liner on Cu+SOI *is* generally considered more
advanced. The results are apparent in the higher performance,
cooler-running product.
But if Mr. Others is saying that AMD couldn't deliver equivalent product at
a competitive price, I would agree with him.

And yet they do - you can buy a better, very competitively priced product
right now and they make a profit from it, despite all the Intel marketing
chicanery. You're wrong again... and apparently out of touch with
reality.:-[]
 
George Macdonald said:

I am able to "read" more than press releases - you? I'm not going to do
your research for you.

It was you that made this assertion. If you can't back it up with anything
but some vague, unsupported claims, then we will have to take your admission
of ignorance on the subject more seriously than the claims themselves.
AMD is not in the same position as Intel here. Please try to get some
relevant info before spouting.

Another assertion. Perhaps you will share with us the basis for your claim -
or do we have to find your supporting evidence ourselves?
It's my understanding that the power management, or lack of it in PowerPC,
is the main(?) problem for Apple... something which AMD brings to the
IBM/AMD alliance and possibly some technology they obtained in part from
Transmeta. Why IBM has not taken that up I have no idea.

The problem is, in part a power management architecture issue. However, it
is also strongly dependent on leakage currents in the basic structures. A
successful solution, such as the Centrino design, depends on both
approaches.
According to what I read of the principal reasons -- lack of mobile capable
G5 -- Apple has been seduced by a big lie: they talk of Intel notebooks
running 3GHz and higher and point to their miserly 1.67GHz chip; trouble is
Apple is not going to get those Intel chips, since they are obsolescent
mobile P4s.

Who is this "they"? Is it the same source that you referred to as "others"?

Intel's future lies with P-M and developments thereof... *and*
what speed are P-Ms running at?... 2.0GHz but there are damned few of them
and they run hot at full tilt.

Strange. I just bought a Toshiba Tecra M3 with a 2 GHz P4-M. They didn't
have any trouble delivering it.

The most common P-M notebook runs at
1.7/1.8GHz... in fact very close to the iBooks at 1.67.

Jobs clearly stated that a G4 running at 1.6GHz wasn't competitive with a
P4-M running at 1.8 GHz.
By the time Apple starts selling Intel-based systems, I'd say a fair
estimate of where Intel will be, clock-wise, is ~2.6GHz with their new chip
derived from P-M. Sorry but I can't keep up with all the stupid code names
they dream up. At any rate, Apple's published(?) or imagined technical
reasons for switching are either a lie or a smokescreen.

No doubt! It contradicts your views on the technology, therefore they must
be lies.
Intel may have such fabs at such high density logic - nobody else I can
think of has...

TSMC, UMC for example. They are producing high performance GPUs on it. Take
a look at a summary of the situation, as of over a year ago. No mention of
AMD.
http://www.geek.com/news/geeknews/2004May/bch20040524025287.htm


but I don't follow TI and some of the other non-CPU fabs so
closely. Certainly Moto/Freescale turned out to be a big loser for AMD
*and* Apple so count them out. The fact is that 5K in the design rules and
process technology of Fab 30 *is* a respectable count... and certainly well
capable of supplying more than AMD's current market share - the addition of
Apple's piddly amount would not have beeen a big additional burden.
No it isn't respectable. A 200 mm fab just now coming into full production
at 90 nm is at least a year behind Intel and the Taiwanese foundries.

Many, many fabs have been "producing" for longer than that. What separates
the men from the boys is the ability to run the fab at HV production and
high yield - and at 300 mm. AMD still isn't there yet.


I believe you are wrong here - Intel did not complete their conversion to
all-90nm CPU production a year ago - they had first usable 90nm chips about a year ago.

They didn't have all of their fabs converted, but they had one Irish fab
fully converted over a year ago.
As for "industry standard", the transition to 300mm has happened slowly
over the past year in HDL, where it is necessary and useful - it's hardly a
"standard". The fact that there are a bunch of flash, SRAM and DRAM plants
at 300mm is irrelevant.


Whether it was produced on a full production line or not, the early Intel
65nm could by no means be considered prototype production - call it what
you want but it was a pilot project... a bit above proof of concept if you
like, from which a decision to "build" was taken. Hell the initial
"demonstration" around August 2004, *was* on SRAM chips... hardly prototype
CPUs. The infrastructure, including floorspace, is currently in various
states of construction in 3 locations.

If you build a full-blown fab to develop the process on, the decision has
already been made. This idea that you run the process in a pilot line before
building a production line is archaic. I would assume that nobody (except
IBM) does this any more. Certainly Intel and the Taiwanese use the above
strategy.
I've no idea which Intel announcement you're referring to nor when it was
made. There have been several PR releases, news items, on the Ireland
plant situation and the go ahead for what is termed an expansion of an
existing plant was only taken in Feb/March this year, after an application
for EU aid was turned down. Ireland's economy is now considered "fixed"
and there are many other EU regions which need the stimulation more... like
East Germany, e.g. :-)
No doubt the EU input will impact future expansion in Ireland. I don't
think it will affect existing plans, however.

Jobs?... As has already been stated by others, he "followed the
money".:-[]

This ubiquitous "Others" makes many statements. "Others" claimed to know
that the AMD 90 nm process is superior to the Intel process, for example.

Look it up - Dual Stress Liner on Cu+SOI *is* generally considered more
advanced. The results are apparent in the higher performance,
cooler-running product.

Mr. Generally made such a claim? Where was this? In the AMD marketing
office?
But if Mr. Others is saying that AMD couldn't deliver equivalent product at
a competitive price, I would agree with him.

And yet they do - you can buy a better, very competitively priced product
right now and they make a profit from it, despite all the Intel marketing
chicanery. You're wrong again... and apparently out of touch with
reality.:-[]

AMD does indeed make an attractive product - for the desktop. Jobs stated
that he was motivated by the poor power performance of PPC and its impact on
the high margin, high growth laptop market. He stated it was this
consideration that moved him to Intel - but those are all lies, of course.
It was all some secret plot.

It's really hard to tell if AMD is making a profit or not from their
processors. Their overall gross margins are mediocre, but they are saddled
with failing businesses, such as the flash business. I guess Intel's much
larger flash business somehow doesn't impact their gross margins in the same
way.

James
 
James said:
Strange. I just bought a Toshiba Tecra M3 with a 2 GHz P4-M. They didn't
have any trouble delivering it.

Read it again, he's talking about P-M's not P4-M's. Totally different
architectures.
No it isn't respectable. A 200 mm fab just now coming into full production
at 90 nm is at least a year behind Intel and the Taiwanese foundries.

You're misunderstanding the meaning of "coming into full production at
90nm". Up until now, the mix of chips has been 130nm and 90nm. The 130nm
chips were the last of their production of K7 chips, which have now been
phased out completely. That's why it's now into full production on 90nm,
they are now only manufacturing K8 chips at 90nm. They used the same
lines for 130nm and 90nm, just like they used the same lines for 180nm
and 130nm previously. Their line is flexible enough to do that.
AMD does indeed make an attractive product - for the desktop. Jobs stated
that he was motivated by the poor power performance of PPC and its impact on
the high margin, high growth laptop market. He stated it was this
consideration that moved him to Intel - but those are all lies, of course.
It was all some secret plot.

It's really hard to tell if AMD is making a profit or not from their
processors. Their overall gross margins are mediocre, but they are saddled
with failing businesses, such as the flash business. I guess Intel's much
larger flash business somehow doesn't impact their gross margins in the same
way.

You mean Intel's much /smaller/ flash business.


Yousuf Khan
 
It was you that made this assertion. If you can't back it up with anything
but some vague, unsupported claims, then we will have to take your admission
of ignorance on the subject more seriously than the claims themselves.

The only ignorance on display here is by you - adding unfounded speculative
accusations only weakens your position. I've told you the info is out
there but you seem incapable of using a search engine with the appropriate
string(s).
Another assertion. Perhaps you will share with us the basis for your claim -
or do we have to find your supporting evidence ourselves?

It is fact - Intel has serious heat problems with their top-end CPUs; AMD
doesn't. You can look up any of the Web sites which do benchmarks - the
infamous THG even had to re-hash their recent long-term stability tests,
with restarts, so as not to make the Intel chips look too bad. This is all
common fact. You could also buy an Athlon64 system and check it out for
yourself.:-[]
The problem is, in part a power management architecture issue. However, it
is also strongly dependent on leakage currents in the basic structures. A
successful solution, such as the Centrino design, depends on both
approaches.

AMD has uhh, mastered the problem and they are apparently using the same
process technology. Sorry that doesn't wash... and in fact Intel is
suffering just as badly with power density and quiescent leakage. Uhh, BTW
Centrino is *NOT* a CPU - it's a, err, platform brand.
Who is this "they"? Is it the same source that you referred to as "others"?

Are you reading challenged? APPLE... IT... THEY!... unless the reports of
Jobs statements, by every media outlet I've seen on the subject, are all
lies.
Intel's future lies with P-M and developments thereof... *and*

Strange. I just bought a Toshiba Tecra M3 with a 2 GHz P4-M. They didn't
have any trouble delivering it.

Geez I hope you didn't buy a 2.0GHz P4-M but a P-M (Pentium M)... often
known by the rabble as a Centrino.
The most common P-M notebook runs at

Jobs clearly stated that a G4 running at 1.6GHz wasn't competitive with a
P4-M running at 1.8 GHz.

Nonsensical statement. Please try to stay on track.
No doubt! It contradicts your views on the technology, therefore they must
be lies.

I'm afraid your statements in previous paras above have just disqualified
you from further discussion of the subject - you haven't a clue what you're
talking about.
TSMC, UMC for example. They are producing high performance GPUs on it. Take
a look at a summary of the situation, as of over a year ago. No mention of
AMD.
http://www.geek.com/news/geeknews/2004May/bch20040524025287.htm

500MHz GPUs - a different animal altogether... as are the VIA CPUs TSMC(?)
is making.
but I don't follow TI and some of the other non-CPU fabs so
No it isn't respectable. A 200 mm fab just now coming into full production
at 90 nm is at least a year behind Intel and the Taiwanese foundries.

But it is not just now coming into full production. Lying about the facts
only indicts *you*.
Many, many fabs have been "producing" for longer than that. What separates
the men from the boys is the ability to run the fab at HV production and
high yield - and at 300 mm. AMD still isn't there yet.

No, there are *not* many fabs which have been producing 300mm HDL CPUs at
90nm for years. AMD made a decision that they did not need 300mm - it's
not a BFD and your insistence on 300mm separating men from boys is
picayune.
If you build a full-blown fab to develop the process on, the decision has
already been made. This idea that you run the process in a pilot line before
building a production line is archaic. I would assume that nobody (except
IBM) does this any more. Certainly Intel and the Taiwanese use the above
strategy.

Why are you insisting on misquoting me? Is this your tedious ploy to win
an argument? I said it was "more of a pilot plant project". Whether the
pilot project was on a separate pilot line or not matters not. The idea
that you can make SRAMs at 65nm and just plunge into HDL CPU production is
off the wall.
Jobs?... As has already been stated by others, he "followed the
money".:-[]

This ubiquitous "Others" makes many statements. "Others" claimed to know
that the AMD 90 nm process is superior to the Intel process, for example.

Look it up - Dual Stress Liner on Cu+SOI *is* generally considered more
advanced. The results are apparent in the higher performance,
cooler-running product.

Mr. Generally made such a claim? Where was this? In the AMD marketing
office?

Google for evidence - I do not save every site address I visit to satisfy
Usenet sceptics like you... though I do recall www.xbitlabs.com had
something for the non-expert reader. For the performance, as already
stated, compare the numbers.

Oh, and just because you believe every piece of blurb put out by Intel does
not mean that I also follow similar info paths.
But if Mr. Others is saying that AMD couldn't deliver equivalent product at
a competitive price, I would agree with him.

And yet they do - you can buy a better, very competitively priced product
right now and they make a profit from it, despite all the Intel marketing
chicanery. You're wrong again... and apparently out of touch with
reality.:-[]

AMD does indeed make an attractive product - for the desktop. Jobs stated
that he was motivated by the poor power performance of PPC and its impact on
the high margin, high growth laptop market. He stated it was this
consideration that moved him to Intel - but those are all lies, of course.
It was all some secret plot.

200MHz in today's CPUs makes hardly any difference in the same architecture
- across architectures it's hard to tell but, though I've never owned one,
it had always been my impression that clock for clock, a PowerPC would
thrash any x86. At any rate, it sure makes ya wonder about Apple's
marketing BS of the past few years. Dunno what they're going to do about
that. I also wonder what Mr. Jobs is going to think when the honeymoon is
over and he starts getting threats from Intel and watches as Dell buys CPUs
for a fraction of what he has to pay.:-)
It's really hard to tell if AMD is making a profit or not from their
processors. Their overall gross margins are mediocre, but they are saddled
with failing businesses, such as the flash business. I guess Intel's much
larger flash business somehow doesn't impact their gross margins in the same
way.

Intel lost a bunch on flash when they flooded the market 3-4Q 2004 - err,
they called it building market share - RIGHT! It is *not* hard to tell if
AMD is making a profit on processors - they *are* and their prices are
holding remarkably well. Their flash business is in the process of being
sold off... IPO'd. The info on this stuff is easy to find - arguing hard
facts is grotesque.
 