Intel follows lead of AMD, introduces model numbers

  • Thread starter: Black Jack
Maybe not "same problems" but IBM has admitted that they have serious
"yield" problems with their foundry business chips at 90nm, e.g. nVidia.
Figures as low as 5% have been mentioned.

They were mentioned, but totally incorrectly. IBM wasn't talking
about yields at all, they were talking about the percentage of chip
*designs* that functioned on their first run. Their own internal
chips were working right out of the gate most of the time, while their
OEM customers were having to redesign the chip 2 or more times before
they worked.

Nothing to do with yields at all. It also had nothing to do with 90nm
production, the story was dealing with 130nm production.

That being said, IBM doesn't seem to be getting any large quantity of
their 90nm chips out. Apple has delayed shipments of their Xserve
boxes using the new PPC 970FX chip. To the best of my knowledge, that
PPC 970FX processor is the only chip that IBM is shipping from their
90nm fab lines at this time.
 
On 15 Mar 2004 20:18:48 GMT, (e-mail address removed) (Nick Maclaren) wrote:

More seriously, is there anywhere that the principles of this are
written up in a fashion that those of us who gave physics up at
17 might understand?

Practically any exposition that isn't just a snow job is going to
assume that you understand effective mass:

http://en.wikipedia.org/wiki/Effective_mass

If you swallow hard, that page contains a big piece of the puzzle. A
hole or an electron in a solid can be imagined to be accelerated by an
imposed electric field as if it were a free particle with positive or
negative charge, respectively, and with an effective mass m* (equation
given).

The effective mass can be calculated as a second derivative of the
dispersion relation E(k), where E is energy and k is the wavenumber
(equation also given).
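For reference, the two equations being alluded to, in their simplest
one-dimensional, parabolic-band textbook form (my paraphrase, not a
quote from that page), are:

    a = \frac{q\,\mathcal{E}}{m^*}, \qquad
    m^* = \hbar^2 \left( \frac{d^2 E}{dk^2} \right)^{-1}

where q is the carrier charge, \mathcal{E} is the imposed electric
field (written that way to avoid a clash with the energy E(k)), and
\hbar is the reduced Planck constant. A sharply curved band (large
d^2E/dk^2) means a small m^* and therefore a large acceleration for a
given field.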

If you need a deeper understanding of effective mass, you probably
want to consult a book on solid state physics, like Kittel, but, since
you gave up on qm, that's probably not going to get you very far.

What does the straining of silicon have to do with this? The crystal
structure scatters waves just like the rulings of a diffraction
grating. The details of that scattering produce the shape of the
energy surface E(k).

Just as you can change the scattering properties of a diffraction
grating by changing the spacing of the rulings, stretching or
compressing the silicon crystal lattice changes the scattering
properties and thus the dispersion relation and thus the effective
mass of holes and electrons.

If you do it right, you can reduce the effective mass, thus increasing
the acceleration of the hole or electron for any given imposed
electric field--the "increased mobility" that is constantly being
referred to in the snow jobs.

The stretching and compression are achieved by growing the strained
silicon on a crystal substrate whose natural spacing is greater or
lesser than the natural spacing of the silicon crystal. Assuming the
substrate is thicker than the strained layer, it will tend to impose
its spacing on the strained silicon layer, thus changing the
dispersion relation, thus changing the effective mass.
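To put a rough number on "greater or lesser spacing" (my own
illustrative gloss, not part of the post above): if a_sub and a_Si are
the relaxed lattice constants of the substrate and of silicon, a thin,
fully strained silicon layer takes on a misfit strain of roughly

    \epsilon \approx \frac{a_{\mathrm{sub}} - a_{\mathrm{Si}}}{a_{\mathrm{Si}}}

which, for the SiGe virtual substrates usually quoted, works out to
something on the order of a percent; small, but enough to reshape E(k)
noticeably.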

The floor is now open to nit-pickers.

RM
 
This is all getting very Bohring ....

More seriously, is there anywhere that the principles of this are
written up in a fashion that those of us who gave physics up at
17 might understand?

I can give it a shot. The entire reason for mechanically straining
the underlying silicon is to modify (i.e., increase) the mobility (speed)
of the electrons and holes, which in turn makes the transistors
faster.

Now, why does mechanical strain on the silicon lattice change the
mobility of the majority and minority carriers? Well, the speed of
the carriers is limited by several factors, including how often they
hit the crystal lattice and how much energy the carriers lose when
they do hit the lattice.

So by changing the lattice (compressing, tensioning) you change the
inherent probabilities that govern how often the carriers hit the
lattice and hence the average speed of the carriers.
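A convenient way to tie this scattering picture to the effective-mass
picture from the earlier post is the Drude-style mobility expression
(standard textbook form, quoted here from memory):

    \mu = \frac{q\,\tau}{m^*}, \qquad
    v_{\mathrm{drift}} = \mu\,\mathcal{E}

where \tau is the mean time between the scattering events described
above. Strain helps if it lengthens \tau (fewer or weaker scattering
events), lowers m^*, or both; either change raises the mobility \mu.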

Putting Germanium (which has a lower energy bandgap than Silicon) into
the Silicon will also change the energy gap of the material, and
this can change the transport mechanisms too. (Here my memory is
getting hazy, so I would not like to get further out on a limb here)

Was it something like this you were looking for, Nick?

Regards,


Kai
 
Nick said:
More seriously, is there anywhere that the principles of this are
written up in a fashion that those of us who gave physics up at
17 might understand?

After reading this question this morning, the April issue of
Scientific American came this afternoon. It has an article that
includes many of the effects discussed, including the strain
effect, at a level that many should be able to understand.

-- glen
 
Robert Myers said:
Already suggested that. When megabytes instead of megawatts start
flowing south, then we'll know that "utility computing" is more than
just marketing hype.

RM

Just wait for the whining that starts then.
It's bad enough when jobs get outsourced --- now you want to outsource
computation too!
Remember, logic is not the strongest feature of most politicians, few of
whom understand anything as basic as conservation of energy --- or they
might be less enthusiastic about the idea that a hydrogen economy somehow
solves the energy problems of the world.

Maynard
 
Ever been to Rochester, MN in January? Plenty of free AC.

No, but I spent a few of the coldest winters ever in Champaign-Urbana,
IL. The winters may be cold, but the summers are hot.

Canada also has something else in abundance that's of interest: cheap
hydro power.

RM
 
Thanks for the replies; I will chase them up.

The amusing thing is that I actually did a quantum mechanics course
as part of my degree, but to describe it as applied mathematics
would be a gross terminological inexactitude :-)


Regards,
Nick Maclaren.
 
One kWh is less than 10 cents on the US average. Compared to $30/hr
fully burdened cost for the lowest imaginable level of office labor, a
slower computer only has to cost 12 sec/day in lost productivity to
eat up the cost savings.

RM
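For anyone who wants to check that 12 sec/day figure, here is the
arithmetic as a small Python sketch. The 100 W power difference and
the 10-hour daily on-time are my own illustrative assumptions; only
the $0.10/kWh and $30/hr figures come from the post above.

# Back-of-the-envelope check of the "12 seconds per day" break-even claim.
# The power saving and daily on-time are illustrative assumptions.
power_saving_w = 100        # assumed W saved by the slower machine
hours_per_day = 10          # assumed on-time per working day
price_per_kwh = 0.10        # $/kWh, roughly the US average quoted above
labor_per_hour = 30.0       # $/hr fully burdened office labor, as quoted

energy_saved_kwh = power_saving_w / 1000 * hours_per_day   # 1.0 kWh/day
dollars_saved = energy_saved_kwh * price_per_kwh           # $0.10/day
labor_per_second = labor_per_hour / 3600.0                 # ~$0.0083/s
breakeven_seconds = dollars_saved / labor_per_second       # ~12 s/day

print(f"Power savings of ${dollars_saved:.2f}/day are wiped out by "
      f"{breakeven_seconds:.0f} s/day of lost productivity")

Doubling or halving the assumed wattage or on-time only scales the
answer by the same factor, so the conclusion is not very sensitive to
the assumptions.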

Conversely, the faster computer has only to cause 12 seconds
loss of concentration, because of its fan going into loud hyperdrive
at an inopportune moment, to justify the quieter machine.
 
I am also getting vibes that this is a lot more complex than many of
the causes of previous delays, but mine are probably more indirect
than yours.

What I haven't heard is that IBM and/or AMD have the same problems.
Nick Maclaren.

Intel is doing strained silicon as discussed already.

AMD and IBM are doing silicon-on-insulator in either partially depleted
or fully depleted form.

These processes are different enough that one can expect different
results in terms of transistor properties {transconductance, leakage,
subthreshold effects, ...} from the same lithographic capabilities.

Mitch
 
Thanks for the replies; I will chase them up.

The amusing thing is that I actually did a quantum mechanics course
as part of my degree, but to describe it as applied mathematics
would be a gross terminological inexactitude :-)

If you read the history of the development of quantum mechanics, which
you surely have, it becomes more bearable. Poisson brackets are
mathematically elegant. Commutators are mathematically elegant.
Making an analogy between Poisson Brackets and Commutators was an act
of inspired courage. Presenting that analogy as a formal mathematical
theory long after better ways of understanding what was really going
on (which is what I was confronted with) is a travesty. Teach
history. Teach mathematics. Don't jumble them up and call them
physics.

RM
 
That being said, IBM doesn't seem to be getting any large quantity of
their 90nm chips out. Apple has delayed shipments of their Xserve
boxes using the new PPC 970FX chip. To the best of my knowledge, that
PPC 970FX processor is the only chip that IBM is shipping from their
90nm fab lines at this time.

Aren't IBM making Xilinx's Spartan 3 FPGAs at 90nm? I have a couple on
my desk, but don't know if they're from IBM or UMC.

Cheers,
JonB
 
Conversely, the faster computer has only to cause 12 seconds
loss of concentration, because of its fan going into loud hyperdrive
at an inopportune moment, to justify the quieter machine.

Unless you can dispense with a fan altogether, quietness and power
consumption are only weakly related. The quietest machine I own is a
P4 with a ducted CPU fan. On the other hand, my Pentium-M laptop has
exactly the annoying behavior you describe. There would be no point
in blaming or crediting the CPU in either case.

RM
 
Unless you can dispense with a fan altogether, quietness and power
consumption are only weakly related. The quietest machine I own is a
P4 with a ducted CPU fan. On the other hand, my Pentium-M laptop has
exactly the annoying behavior you describe. There would be no point
in blaming or crediting the CPU in either case.

Again I have to agree; my laptop does the same thing. Sometimes I even
suspect the fan is faulty: it tries to spin up from too low a voltage,
fails, and makes a periodic whining noise every few seconds. I'd rather
it stayed at full speed; that would be less distracting.
--
L.Angel: I'm looking for web design work.
If you need basic to med complexity webpages at affordable rates, email me :)
Standard HTML, SHTML, MySQL + PHP or ASP, Javascript.
If you really want, FrontPage & DreamWeaver too.
But keep in mind you pay extra bandwidth for their bloated code
 
Now, why does mechanical strain on the silicon lattice change the
mobility of the majority and minority carriers? Well, the speed of
the carriers is limited by several factors, including how often they
hit the crystal lattice and how much energy the carriers lose when
they do hit the lattice.

So by changing the lattice (compressing, tensioning) you change the
inherent probabilities that govern how often the carriers hit the
lattice and hence the average speed of the carriers.

Thanks, this and RM's bit make it clear what's going on... at least
on a practical level, never mind that I wouldn't be able to explain
this with diagrams and numbers :PPpPP

--
L.Angel: I'm looking for web design work.
If you need basic to med complexity webpages at affordable rates, email me :)
Standard HTML, SHTML, MySQL + PHP or ASP, Javascript.
If you really want, FrontPage & DreamWeaver too.
But keep in mind you pay extra bandwidth for their bloated code
 
Robert said:
Unless you can dispense with a fan altogether, quietness and power
consumption are only weakly related. The quietest machine I own is a
P4 with a ducted CPU fan. On the other hand, my Pentium-M laptop has
exactly the annoying behavior you describe. There would be no point
in blaming or crediting the CPU in either case.

A Pentium M in a reasonably roomy case wouldn't need any
active cooling - a Zalman "flower" heatsink would do the job
very nicely. I've used one of those with an Athlon XP1900+
which puts out one heck of a lot more heat than a Pentium M.

Combine that with a fanless PSU, quiet hard drives, and a little
sound insulation, and it shouldn't be too hard to build a virtually
silent system with P4-2400 or P4-2600 class performance. I have
actually seen one such system.

It was a totally fanless system built around a mini-ITX board.
Quiet laptop drives were used. It does the job very well for
the programming/software design tasks the owner does. It seems
silent until you get *very* close to it. CD/DVD was USB so that
the motherboard's single IDE port could be used for two hard drives.
Case was a regular mid-tower with sound proofing added to muffle
hard drive noise.

The owner's goal when he had this thing built for him was silence
because the fan and hard drive noise from his previous system kept
driving him batty when he was trying to think - he figures the
$2400 system easily paid for itself in the first month just by
giving him a quieter work environment.
 
Unless you can dispense with a fan altogether, quietness and power
consumption are only weakly related. The quietest machine I own is a
P4 with a ducted CPU fan. On the other hand, my Pentium-M laptop has
exactly the annoying behavior you describe. There would be no point
in blaming or crediting the CPU in either case.

Not really. Overall, that is true, but it isn't the issue. For any
given design (equating to cost, in some sense), the noise will
increase with the heat dissipation. I don't know what the practical
minimum noise level for removing 300 watts from a desktop case is,
but it is almost certainly audible to other people. Me, I have to
put my hand on a large server if I want to tell if the fans are
running ....

The heat itself is another major issue, because 300 watts is the
equivalent of 5 sedentary people. That is very likely to mean that
forced air conditioning is needed, with all of the attendant problems
THAT causes. As well as the cost, effect on climate and so on.



Regards,
Nick Maclaren.
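To put rough numbers on the air-conditioning point, here is a small
Python sketch. The machine and occupant counts are my own illustrative
assumptions; the 300 W per box and the 60 W per sedentary person
(300 W / 5 people) come from the post above.

# Rough cooling-load estimate for an office full of 300 W desktops.
# Machine and occupant counts are illustrative assumptions.
machines = 20                 # assumed number of desktops in the room
watts_per_machine = 300       # heat per desktop, from the post above
people = 20                   # assumed occupants
watts_per_person = 60         # sedentary person, per the 300 W = 5 people figure

heat_w = machines * watts_per_machine + people * watts_per_person  # 7200 W
btu_per_hour = heat_w * 3.412              # 1 W = 3.412 BTU/hr
tons_of_cooling = btu_per_hour / 12000.0   # 1 ton of AC = 12,000 BTU/hr

print(f"Heat load: {heat_w/1000:.1f} kW = {btu_per_hour:.0f} BTU/hr "
      f"= {tons_of_cooling:.1f} tons of air conditioning")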
 
On 16 Mar 2004 18:26:51 GMT, (e-mail address removed) (Nick Maclaren) wrote:

...For any given design (equating to cost, in some sense), the noise will
increase with the heat dissipation. I don't know what the practical
minimum noise level for removing 300 watts from a desktop case is,
but it is almost certainly audible to other people.

That noise can be as pleasant as the wind rustling quietly through the
leaves of a tree outside the window. Unpleasantly noisy desktop
computers are the result of careless, ignorant, or penny-wise,
pound-foolish engineering.

The heat itself is another major issue, because 300 watts is the
equivalent of 5 sedentary people.

The load on air conditioning is probably the most important cost
factor for most realistic office situations, since the air
conditioning has to be sized for the worst possible conditions. It
turns into a discomfort and lost productivity factor when the
air-conditioning is unequal to the load.

That is very likely to mean that
forced air conditioning is needed, with all of the attendant problems
THAT causes.

You can design quiet air-conditioning, too, but it is much more
expensive and much harder to maintain quality control than a desktop
computer cooling system.

As well as the cost, effect on climate and so on.

Let's work on the boss' company-bought gas-guzzler first.

RM
 
Robert said:
No, but I spent a few of the coldest winters ever in Champaign-Urbana,
IL. The winters may be cold, but the summers are hot.

If you want a _lot_ of free cooling, as well as high-bandwidth
connections, I have a suggestion for you:

Place your cluster in Longyearbyen, on Svalbard: N78 13'

This is almost certainly the northernmost point in the world with the
infrastructure to support such a facility.

I just (yesterday evening) came back from a short trip to visit friends
up there.

A couple of months ago, dual subsea fiber cables connecting Svalbard
with the mainland were installed; most of the internal fiber pairs are
currently dark.

I think they are currently using just one or two pairs, at less than a
Gbit/s or so, but you can obviously increase this by an order of
magnitude or more.

There is a local university (a branch of Tromsø uni on the mainland), as
well as a big research establishment at Ny Ålesund a bit further north.

Year-round average temperature is low enough that all buildings are
constructed on pylons rammed 3-4 m into the permafrost, the Longyearbyen
Glacier is just 5 km away, and the local coal mine and power plant can
deliver enough juice to power a _lot_ of x86 or Power CPUs.

Canada also has something else in abundance that's of interest: cheap
hydro power.

That's the thing Svalbard does _not_ have. (It doesn't get warm enough
most of the year to support liquid water. :-()

Terje
 
They were mentioned, but totally incorrectly.

That's a bit strong IMO. There are some who believe there's more to this
than meets the eye. I know I read the initial article at Electronic News
and, after spending much too long trying to re-find it, I even found a URL
link to it elsewhere. It turns out Electronic News has removed the bloody
article from their site. <GRRR> The only quoted paragraph from it I can find
does mention .13u specifically for IBM's own parts, but 90nm was part of the
story.

IBM wasn't talking
about yields at all, they were talking about the percentage of chip
*designs* that functioned on their first run. Their own internal
chips were working right out of the gate most of the time, while their
OEM customers were having to redesign the chip 2 or more times before
they worked.

Low first time yield?:-) Having to redesign to fit the process will
possibly drive the customer to his alternate supplier... as appears to be
happening with Qualcomm, nVidia and Xilinx... at least for the moment.

Nothing to do with yields at all. It also had nothing to do with 90nm
production, the story was dealing with 130nm production.

No - 90nm *has* been part of the yield story in general. TSMC & Qualcomm
claim to have working 90nm production. IBM has definitely stumbled here
and TSMC and UMC are getting a windfall as a result.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 
|>
|> That's a bit strong IMO. There are some who believe there's more to this
|> than meets the eye. ...
|>
|> Low first time yield?:-) Having to redesign to fit the process will
|> possibly drive the customer to his alternate supplier... as appears to be
|> happening with Qualcomm, nVidia and Xilinx... at least for the moment.

Collating statements and rumours, it does appear that the story is
something like that. It sounds as if there is a lot more black art
to designing for 90 nm than for 130, and that most people involved
underestimated that.

|> >Nothing to do with yields at all. It also had nothing to do with 90nm
|> >production, the story was dealing with 130nm production.
|>
|> No - 90nm *has* been part of the yield story in general. TSMC & Qualcomm
|> claim to have working 90nm production. IBM has definitely stumbled here
|> and TSMC and UMC are getting a windfall as a result.

Hmm. Well, so do Intel and IBM at least, but any or all of them
may have been referring to relatively simple test designs. Do you
know exactly what the various foundries claimed they could do?

IBM has definitely stumbled, but my guess is that it is NOT because
they have more process problems. My guess is that they were charging
extra because they would deliver a 'slicker' operation for designers
who did not want to have to be process experts as well - and that
they failed to deliver that extra value for 90 nm.

I am basing that purely on IBM's general corporate attitude, and
not on any knowledge of that area of IBM.


Regards,
Nick Maclaren.
 