Dell, Gateway, etc. never choose AMD -- why?

  • Thread starter: jdobb2001
Yeah, thinking back to the Pentium/K6 days, Dell's pricing model
would've really benefited from being able to continue selling Socket 7
processors for an additional year or so until Socket 370 became more
commonplace. At that point they probably could've continued selling
Socket 7 parts for another year beyond that, just as AMD did.

Yup, though I think Dell kind of shot themselves in the foot on this
one. They followed Intel's lead rather blindly and went all out for
Slot 1, then paid a bit of a price when AMD had very competitive
Socket 7 processors that came in with a much lower price tag. I don't
think it really hurt Dell all that much.
I doubt that six months is enough of a head start for Dell to start
feeling any kind of pain. It'll probably take six months just for
dual-cores to catch on with the public.

6 months on its own isn't enough, but if Intel's 6-month-late answer
is a noticeably slower processor, then things won't look so hot.
They seem to be trying to keep power consumption contained. That might
be the smart thing to start concerning themselves with these days, i.e.
power requirements rather than speed. The 130nm parts stay within an
89W envelope, while the 90nm parts fit under a 67W envelope.

Could be. They do seem to be making some very attractive notebook
processors, some of which are coming in with TDPs of only 25W. That
puts the chips squarely in Pentium-M territory.
Do the dual-core Xeons that IBM makes use their own Summit chipset?
Perhaps it's trying to make more sales of the Summit chipset rather than
of the Xeons themselves?

Err, I assume you mean "dual-processor" and not "dual-core" above?
Either way the answer is no as far as I can tell, though IBM doesn't
seem too eager to provide this info. To the best of my knowledge
though, Summit is only used on their higher-end 4 and 8-way Xeon
systems.
 
Tony said:
Yup, though I think Dell kind of shot themselves in the foot on this
one. They followed Intel's lead rather blindly and went all out for
Slot 1, then paid a bit of a price when AMD had very competitive
Socket 7 processors that came in with a much lower price tag. I don't
think it really hurt Dell all that much.

Oh, that's right, I forgot: it wasn't even Socket 370 at first, it was
Slot 1 for Intel. Socket 370 didn't come out until the middle of the
Pentium III generation. You just get used to thinking that Socket 370
must've been with the P6 series of processors right from the beginning,
but it wasn't.

And actually that makes it even more likely that it would've benefitted
Dell. Dell not only had to switch away from Socket 7, but eventually
it had to switch away from Slot 1 too. During all of that time, there
was a steady supply of Socket 7's running K6's and Cyrixes available.

But yes, it didn't hurt Dell all that much. It's likely that Dell was
already the beneficiary of Intel's /special/ marketing campaigns by
that time. Other manufacturers like HP and Gateway (and even IBM at
that time) adopted the K6's and to a lesser extent the Cyrixes, and
took some good cost savings off of it, but Dell still managed to stay
competitive without the benefit of these cost savings.
Could be. They do seem to be making some very attractive notebook
processors, some of which are coming in with TDPs of only 25W. That
puts the chips squarely in Pentium-M territory.

25W?!? I don't think they're quite that good yet; I think AMD's parts
might be in the 35W range, which is good but still a little higher than
the Pentium-M's 27W range.

There is talk now that AMD will be bringing out 25W mobile processors
next year. I posted it up in another thread recently.
Err, I assume you mean "dual-processor" and not "dual-core" above?
Either way the answer is no as far as I can tell, though IBM doesn't
seem too eager to provide this info. To the best of my knowledge
though, Summit is only used on their higher-end 4 and 8-way Xeon
systems.

Actually, no, I was referring to dual-core Xeons since that's what we
were talking about previously. But yes, it doesn't matter if we're
talking about dual- or single-core Xeons, as the Summit is equally
applicable to either one.

Yousuf Khan
 
keith said:
Even if those consequences make the product not suitable for the market
(I'm thinking power here, but also P4 architecture)?

Well, I'm sure they thought it would be suitable for the market, but it
turned out wrong; that's why they call it a bad decision. They chose
wrongly.

Perhaps the management's priority was on getting to the miniaturization
stage first so that they could tell the world they reached that stage
before everybody else? And of course, every time *in the past*, just
getting to a particular miniaturization stage also helped power
consumption and speed. Management may have simply thought their process
engineers could add a special ingredient as the process matured to make
the process all better again, without major disruptions. Leave the
experimenting with complex new process technologies for the next major
stage.

These days, in a lot of aspects of manufacturing, Intel is not
perceived as the leader. For example, AMD is the perceived leader in
the field of automation, with its APM (Automated Precision
Manufacturing) process. Of course, IBM came up with everything from
copper to SOI to strained silicon first.

APM lets AMD find defects in its microprocessors as they're being made,
apparently. It looks like Chartered had to buy APM from AMD in order to
even begin manufacturing Opterons and Athlons. And AMD is already on
APM 3.0 now.
That's how companies fail! Like George (I think it was the G-man) said,
Intel is trapped by their own success. They're also trapped by the silly
mistake of running the iceberg field at full speed with an "unsinkable"
chip. They've dug a very deep hole that AMD has done a rather good job of
taking advantage of. *THIS* is a marketing/executive mistake not seen
since DEC was torched by its insiders.

It's management having priorities other than just basic
get-the-job-done priorities. I will bet that Intel will be the first
one to have a 65nm process; if anybody else gets there first, it will
be deeply embarrassing for them -- it's as if every little online
newspaper article hurts their feelings and they won't let anyone else
get to a miniaturization stage first. Whereas AMD won't even bother
to compete against Intel for those bragging rights; it will get to 65nm
when 65nm is ready for them. To them, the race is the metric by which
they are measured: the race to the GHz and higher, the race to
miniaturization. But they end up sacrificing basic needs to get to some
goal fast, and as a consequence losing out on the overall race.

Intel's big head-bonking club has been having less and less effect as
time goes on. In the K6 days, after AMD got a bit of a lead on the
Pentium classic, Intel released the Celeron and bonked AMD on the head
real good; AMD was down for a long time. In the K7 days, AMD introduced
copper interconnects and got to 1GHz first; Intel bonked back with P4,
but this one didn't hurt AMD as bad as before, and AMD for the most part
kept competitive. In the current K8 days, it doesn't even look like
Intel can reach high enough to bonk AMD on the head this time.
Yousuf Khan
 
Even if those consequences make the product not suitable for the market
(I'm thinking power here, but also P4 architecture)?


That's how companies fail! Like George (I think it was the G-man) said,
Intel is trapped by their own success. They're also trapped by the silly
mistake of running the iceberg field at full speed with an "unsinkable"
chip. They've dug a very deep hole that AMD has done a rather good job of
taking advantage of. *THIS* is a marketing/executive mistake not seen
since DEC was torched by its insiders.

I'm baffled as to why "they" (the board?) are letting Barrett hang around
to complete the F/U. I think he's done a great job for AMD. :-) Funny how
Hector seems to be so, umm, dexterous... considering his background of
coming from an outfit like Moto - no?

Did ya catch this one:
http://yahoo.reuters.com/financeQuo...tfh75946_2004-12-28_22-10-45_n28725043_newsml
- $10M!!!... still a pretender and he's selling the company down the
river. :-) Ahh, life in the fast lane.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 
Oh, that's right, I forgot: it wasn't even Socket 370 at first, it was
Slot 1 for Intel. Socket 370 didn't come out until the middle of the
Pentium III generation. You just get used to thinking that Socket 370
must've been with the P6 series of processors right from the beginning,
but it wasn't.

Not even close to the beginning; it missed the start of the P6 series
by just over 3 years! Socket 8 first, then Slot 1, then Socket 370
with the Celeron 366 chip. I think the first Socket 370 chips came
out on Jan. 9, 1999 (as per www.sandpile.org), just over a month
before the first PIII processor. The PIII didn't find its way to
Socket 370 (albeit a slightly different Socket 370 than used by the
original Celeron) until Oct. of 1999.

I would hazard a guess that Dell wasn't overly impressed with the way
this transition went. From May 1997, when the PII first came out, until
Oct. 1999, things probably were not quite what they would have liked.
On the flip side, though, that was right smack dab in the middle of
the tech bubble, so it probably wasn't too bad.
And actually that makes it even more likely that it would've benefitted
Dell. Dell not only had to switch away from Socket 7, but eventually
it had to switch away from Slot 1 too. During all of that time, there
was a steady supply of Socket 7's running K6's and Cyrixes available.

Yup. Good socket 7 chips with a relatively stable and VERY low cost
platform relative to Slot 1. Only downside was that the Socket 7 AGP
chipsets were, initially at least, rather crappy. It took a year or
two before SiS, VIA and ALi finally got their act together and
produced some reliable chipsets that didn't cause troubles with the
majority of video cards.
25W?!? I don't think they're quite that good yet; I think AMD's parts
might be in the 35W range, which is good but still a little higher than
the Pentium-M's 27W range.

The Thin and Light Sempron chips are rated for only 25W. Now, their
clock speed is a tiny bit lower than the Athlon64 chips (the Sempron
T&L parts top out at 1.8GHz at the moment) and their L2 cache is
smaller (128KB or 256KB, depending on the model), but they are indeed
down to 25W.


IMO this is actually one of the most interesting mobile processors
around today, though it doesn't seem to be getting any sales. Not
only is its power consumption very low, but it also has PowerNow!
dynamic clock changing (something that is lacking in the Celeron-M),
and it's dirt-cheap!
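
(Aside: if anyone wants to actually watch PowerNow! stepping the clock,
the Linux cpufreq interface makes that easy to poll. Below is a minimal
sketch in C; it assumes a Linux box with a cpufreq-aware kernel, and the
sysfs path is the generic cpufreq one, nothing AMD-specific.)

    #include <stdio.h>

    /* Print the current core clock as reported by the Linux cpufreq
     * subsystem (the same interface the powernow-k8 driver plugs into).
     * Assumes cpufreq support is present in the kernel. */
    int main(void)
    {
        FILE *f = fopen(
            "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq", "r");
        long khz;

        if (f == NULL) {
            perror("cpufreq not available");
            return 1;
        }
        if (fscanf(f, "%ld", &khz) == 1)
            printf("cpu0 is currently at %ld MHz\n", khz / 1000);
        fclose(f);
        return 0;
    }

Run it once at idle and once under load and you should see the clock
jump between the chip's P-states.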
Actually, no, I was referring to dual-core Xeons since that's what we
were talking about previously. But yes, it doesn't matter if we're
talking about dual- or single-core Xeons, as the Summit is equally
applicable to either one.

I'm sure Summit could apply to dual-core Xeons, though it might need a
bit of tweaking. I haven't heard a definitive answer yet on whether the
dual-core Xeons are going to be a drop-in replacement for single-core
ones or not (my guess is 'no' on that one, but I suspect the next
generation of Xeon chipsets will support both and arrive before
dual-core chips are available). Err... actually, I'm not sure if Summit
will support the CURRENT Xeon chips for that matter; I don't think it
supports the 800MT/s bus speed, though for the time being that doesn't
really matter, since Summit is only for 4- and 8-socket servers, and
those chips are still running at 400MT/s or 533MT/s bus speeds.
 
Well, I'm sure they thought it would be suitable for the market, but it
turned out wrong; that's why they call it a bad decision. They chose
wrongly.

Or, as in choosing the Holy Grail, "badly". The results could be similar.
Perhaps the management's priority was on getting to the miniaturization
stage first so that they could tell the world they reached that stage
before everybody else?

I'm thinking management followed marketing, in lock-step.
And of course, every time *in the past*, just
getting to a particular miniaturization stage also helped power
consumption and speed.

Yep, and woe be to the engineering manager that spouts reality. He's
replaced before his chair gets cold. "We've always figured it out before,
you can't?" I'm quite sure this is the way things went down, and I've
seen such. Do you want to die today, or *maybe* in six months? If
you admit defeat today, you die today. If not, perhaps Caesar will die
first and the whole thing will be forgotten. Worst case: you live six
months longer.
Management may have simply thought their process
engineers could add a special ingredient as the process matured to make
the process all better again, without major disruptions. Leave the
experimenting with complex new process technologies for the next major
stage.

Risk? Not *ME*! I gotta pay the mortgage. The fact is, that's the
position the leader is in in the tech sector. Risk is bad for the lead
dog, but a necessity for the hungry. Intel simply hasn't been hungry for
years, and has even lost their paranoid gene.
These days, in a lot of aspects of manufacturing, Intel is not perceived
as the leader. For example, AMD is the perceived leader in the field of
automation, with its APM (Automated Precision Manufacturing) process. Of
course, IBM came up with everything from copper to SOI to strained
silicon first.

I'm not sure what APM means, other than the obvious Wall Street buzzword.
Also remember that Intel said it didn't need SOI or copper until at least
the 65nm node. Too late.
APM lets AMD find defects in its microprocessors as they're being made,
apparently. It looks like Chartered had to buy APM from AMD in order to
even begin manufacturing Opterons and Athlons. And AMD is already on APM
3.0 now.

....whatever that means.

It's management having priorities other than just basic
get-the-job-done priorities. I will bet that Intel will be the first one
to have a 65nm process; if anybody else gets there first, it will be
deeply embarrassing for them

Maybe. I have no idea how many 90nm processors they've shipped (AMD?).
Being first doesn't necessarily buy any money points. Bragging points
don't translate well into profits.
it's as if every little online newspaper
article hurts their feelings and they won't let anyone else get to a
miniaturization stage first. Whereas AMD won't even bother to compete
against Intel for those bragging rights; it will get to 65nm when 65nm
is ready for them. To them, the race is the metric by which they are
measured: the race to the GHz and higher, the race to miniaturization.
But they end up sacrificing basic needs to get to some goal fast, and
as a consequence losing out on the overall race.

Huh? I'm lost. AMD is concentrating on Intel's weak flank, which is
architecture (and the fact that Intel is locked into Itanic), not
manufacturing. AMD isn't about to go head-to-head with Intel's production
capabilities. They "simply" have to execute on their architecture.
Intel's big head-bonking club has been having less and less effect as
time goes on. In the K6 days, after AMD got a bit of a lead on the Pentium
classic,

Huh? The K6 came out about the same time as the PII. AMD "bonked" Intel
because the PII architecture sucked, as long as the L2 was on-card.
Intel released the Celeron and bonked AMD on the head real
good;

Riiiiggghht. The cacheless Celeron 266 was such a wunnerful product.
Please! It was Intel's first AMD panic attack.
AMD was down for a long time.

Hogwash. The K6-2 and K6-III were very competitive with anything Intel
sold on the desktop.
In the K7 days, AMD introduced copper interconnects

What??? Come on, Yousuf! You really gotta get it together. IBM
introduced copper interconnects on the PPC750. AMD was working with
Moto at the time. (Good plan, that.)
and got to 1GHz first; Intel bonked back with P4,

Oh, the P4 was just a peachy rush-job. No barrel-shifter and no integer
multiplier. What a great micro-architecture. That worked out well, eh?
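
(For the gallery: a barrel shifter is the piece of hardware that lets a
CPU do a variable-length shift in a single cycle. Without one, shifts
get cracked into a slow multi-cycle sequence, which is why shift-heavy
inner loops like the rotate below were painful on the early P4s. A
minimal sketch in C; the loop is just an illustration, not a benchmark.)

    #include <stdint.h>
    #include <stdio.h>

    /* Rotate-left: the bread and butter of hash and crypto inner loops.
     * With a barrel shifter this compiles to a cycle or two; on
     * Willamette, variable shifts were handed to the slow ALU and
     * took several cycles each. */
    static uint32_t rotl32(uint32_t x, unsigned n)
    {
        n &= 31;                      /* keep the shift count in range */
        return (x << n) | (x >> ((32 - n) & 31));
    }

    int main(void)
    {
        /* A shift-heavy inner loop of the sort that suffered on P4. */
        uint32_t h = 0x9e3779b9u;
        for (unsigned i = 1; i < 1000000; i++)
            h = rotl32(h, i & 31) ^ i;
        printf("%08x\n", h);
        return 0;
    }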
but this one didn't hurt AMD as bad as before, and AMD for the most part
kept competitive. In the current K8 days, it doesn't even look like
Intel can reach high enough to bonk AMD on the head this time.

Try selling your "story" to someone else. I've been around this block,
Yousuf.
 
I'm baffled as to why "they" (the board?) are letting Barrett hang around
to complete the F/U. I think he's done a great job for AMD. :-) Funny how
Hector seems to be so, umm, dexterous... considering his background of
coming from an outfit like Moto - no?

Hector came from TI, IIRC. There *is* a difference. TI is doing rather
well in its market. Moto? Then again, I could be wrong (and am too
tired to look it up).
 
keith said:
I'm thinking management followed marketing, in lock-step.

The steps probably lock right up at the top, in the executive suites.
But I don't think this is so much the traditional marketing department
driving this as it is executive-suite politics and their image of
themselves.
Risk? Not *ME*! I gotta pay the mortgage. The fact is, that's the
position the leader is in in the tech sector. Risk is bad for the lead
dog, but a necessity for the hungry. Intel simply hasn't been hungry for
years, and has even lost their paranoid gene.

I don't think it's so much that they think it's a risky move as that
it's a move that will put them behind schedule. A company like Intel
should be able to assign a lot of engineers to investigate specific
technologies, and they probably do. They just decide not to pursue a
technology if it's going to interfere with their next miniaturization
stage, even if it could help that stage immensely. They did that with
copper at 180nm, and now SOI at 90nm.
I'm not sure what APM means, other than the obvious Wall Street buzzword.
Also remember that Intel said it didn't need SOI or copper until at least
the 65nm node. Too late.

It's probably similar to what a lot of other companies are doing within
their own processes, but it's been perceived that AMD has taken some
extra steps towards it that they've been able to patent to an extent.
Maybe. I have no idea how many 90nm processors they've shipped (AMD?).
Being first doesn't necessarily buy any money points. Bragging points
don't translate well into profits.

Exactly what a responsible company with a responsible management team
should know about. It's not getting to a miniaturization node for the
sake of miniaturization that's important, it's the products that come
out of that node that matter.
Huh? I'm lost. AMD is concentrating on Intel's weak flank, which is
architecture (and the fact that Intel is locked into Itanic), not
manufacturing. AMD isn't about to go head-to-head with Intel's production
capabilities. They "simply" have to execute on their architecture.

Yeah, sorry, I just re-read what I wrote above. I switched from talking
about AMD to talking about Intel in the middle of the paragraph. Wasn't
paying attention. When I was talking about the "race is the important
thing to them", I was referring to Intel not AMD.
Huh? The K6 came out about the same time as the PII. AMD "bonked" Intel
because the PII architecture sucked, as long as the L2 was on-card.

Yeah, the K6 series was competitive, but remember they had some trouble
manufacturing them at the higher speeds each step along the way. They
could only manufacture the slower-speed parts, which would compete
against Celerons rather than P2s and P3s.
Riiiiggghht. The cacheless Celeron 266 was such a wunnerful product.
Please! It was Intel's first AMD panic attack.

That one was a mistake from Intel, but they quickly corrected it with
the 300A.
What??? Come on, Yousuf! You really gotta get it together. IBM
introduced copper interconnects on the PPC750. AMD was working with
Moto at the time. (Good plan, that.)

Yeah, but who cares what IBM was doing with PPC? That's still the case
today; I'm only concerned with x86 processors. Motorola was competitive
with IBM on copper technology at the time.
Oh, the P4 was just a peachy rush-job. No barrel-shifter and no integer
multiplier. What a great micro-architecture. That worked out well, eh?

Marketing-wise, yes. A lot of people were impressed by the GHz speeds it
reached, even if they did know that it wasn't actually as competitive
with the Athlon or P3 MHz-for-MHz. Eventually (after some more
miniaturizations), the P4 was indeed overall competitive with the Athlon,
just by sheer MHz advantage.

Yousuf Khan
 
keith said:
Hector came from TI, IIRC. There *is* a difference. TI is doing rather
well in its market. Moto? Then agin, I could be wrong (and am too
tired to look it up).

He was at TI a long time ago; his last gig was as president of Moto's
semiconductor division.

Yousuf Khan
 