Intel concedes server business to AMD ;-)

  • Thread starter: Robert Myers

Robert Myers

The actual title of the article is "Intel playing catch-up to AMD
business"

http://www.networkworld.com/news/2005/050905-intel-amd.html

<quote>

Intel's top server executive recently acknowledged the disparity
between his company's server processor road maps and Advanced Micro
Devices, but said Intel plans to close the gap soon with a revitalized
product line.

<snip>

Users will find dual-core Opteron servers intriguing when compared
with single-core Xeon servers, Gelsinger says. "There will clearly be
some tire-kickers, and maybe some losses," he says, referring to Intel
customers who might switch to servers based on AMD's chips.

However, enterprise customers are generally conservative when it comes
to technology changes, Gelsinger says. Users interested in servers
with four or more processors currently have the option of Intel's new
Truland platform, which will protect any current investments by
letting customers plug dual-core Xeon chips into their current Truland
servers when these chips become available next year, he says.

</quote>

"Itanium" nowhere to be found in the article.

RM
 
Robert said:
The actual title of the article is "Intel playing catch-up to AMD
business"

http://www.networkworld.com/news/2005/050905-intel-amd.html

Tom's Hardware Guide Processors: AMD's Dual Core Athlon 64 X2 Strikes
Hard - Here Comes The King: Athlon 64 X2 Reviewed
http://www.tomshardware.com/cpu/20050509/index.html

AMD's new dual-core CPUs trounce Intel's - CNET.com
http://www.cnet.com/4520-6022_1-6217968-1.html?tag=prmo1

And here's Intel's response:

» Intel on AMD's early dual-core wins: "Not so fast" | Between
the Lines | ZDNet.com
http://blogs.zdnet.com/BTL/index.php?p=1356

Yousuf Khan
 
Tom's Hardware Guide Processors: AMD's Dual Core Athlon 64 X2 Strikes
Hard - Here Comes The King: Athlon 64 X2 Reviewed
http://www.tomshardware.com/cpu/20050509/index.html

AMD's new dual-core CPUs trounce Intel's - CNET.com
http://www.cnet.com/4520-6022_1-6217968-1.html?tag=prmo1

And here's Intel's response:

» Intel on AMD's early dual-core wins: "Not so fast" | Between
the Lines | ZDNet.com
http://blogs.zdnet.com/BTL/index.php?p=1356

Wow.. that's an extraordinarily WEAK response by Intel.

"AMD's integrated memory controller is bad because a year from now
OEMs will have to requalify their systems for DDR2 memory"?

Intel better hope that they've got a better answer than that!
 
Wow.. that's an extraordinarily WEAK response by Intel.

"AMD's integrated memory controller is bad because a year from now
OEMs will have to requalify their systems for DDR2 memory"?

Intel better hope that they've got a better answer than that!

I'd say it's a fairly shrewd response, and it's more than what you
quoted:

1. AMD has made a short-sighted decision by sticking with DDR when
DDR2 is the longer term solution. Your response: buyers don't or
shouldn't care, because all the hassle will be someone else's problem.
My only quibble with Intel's response would be that they didn't
_quite_ close the loop by saying: you remember how hard it's been
sometimes to get platforms for AMD processors you could really count
on? Each time you change memory technology, you have to start over.
FUD? Of course. This is marketing.

2. AMD processors don't have hyperthreading. Hyperthreading is
already in the benchmarks? Well, yes, but one of these days, they're
going to figure out how really to take advantage of HT. Honest.
Meanwhile, the manager stuck with Intel processors that are
outperformed in benchmarks can say, "Yeah, but because of
hyperthreading, this is really like an 8-way box."

3. AMD doesn't have the capacity to be Intel. That's the one that
really counts for managers with their thinking caps on.

RM
 
And here's Intel's response:

Too bad 64-bit SW is not further along. From what I've read, the Itanium
really flies on that stuff. Just sucks really bad for 32-bit. That, and
it's a bit pricey.

~Jason
 
Jason Gurtz said:
Too bad 64-bit SW is not further along. From what I've read, the Itanium
really flies on that stuff. Just sucks really bad for 32-bit. That, and
it's a bit pricey.

~Jason
64-bit Linux has been available for some time. Now perhaps the compilers
could be claimed to not be up to snuff for the weirdness that is Itanium,
but that is pretty weak. What's your next excuse?

It still isn't clear that the Itanium architecture was a good idea, or
whether it was Intel's PS/2.

del cecchi
 
64-bit Linux has been available for some time. Now perhaps the compilers
could be claimed to not be up to snuff for the weirdness that is Itanium,
but that is pretty weak. What's your next excuse?

Yep, I've been running 64-bit software at home for a year.
It still isn't clear that the Itanium architecture was a good idea, or
whether it was Intel's PS/2.

I'd say it was more like Intel's FS, or perhaps their latest
incarnation of the iAPX 432. ;-)
 
64-bit Linux has been available for some time. Now perhaps the compilers
could be claimed to not be up to snuff for the weirdness that is Itanium,
but that is pretty weak. What's your next excuse?

Sure, 64-bit OSes have been available for a bit. But how many 64-bit
applications--designed for 64-bit--are you running?
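One rough way to answer that question on a Linux box is to ask `file` what's actually installed. This is a hypothetical sketch, not a definitive census; the path `/usr/bin` and the exact `file` output strings are assumptions that vary by distro:

```shell
# Rough tally of 64-bit vs. 32-bit ELF binaries in /usr/bin.
# -L follows symlinks so we classify the real binary, not the link.
file -L /usr/bin/* 2>/dev/null | awk '
  /ELF 64-bit/ { n64++ }
  /ELF 32-bit/ { n32++ }
  END { printf "64-bit: %d, 32-bit: %d\n", n64 + 0, n32 + 0 }'
```

On a 2005-era "64-bit" desktop, the second number was often the larger one, which was exactly Jason's point.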

Yeah, then there's the compiler problem...

~Jason
 
64-bit Linux has been available for some time. Now perhaps the compilers
could be claimed to not be up to snuff for the weirdness that is Itanium,
but that is pretty weak. What's your next excuse?

How is that weak? It's a well-known fact that not having a good tool
chain can be a serious issue. I mean, let's be honest, when MIPS,
PowerPC, SPARC, etc. debuted, it was pretty darn obvious that their
compilers were far less mature than those for S/360; don't you think
that made adoption of those architectures a little hard to swallow for
those used to S/360's development tools?
It still isn't clear that the Itanium architecture was a good idea, or
whether it was Intel's PS/2.

Agreed...it will take time to tell.

David
 
David Kanter said:
How is that weak? It's a well-known fact that not having a good tool
chain can be a serious issue. I mean, let's be honest, when MIPS,
PowerPC, SPARC, etc. debuted, it was pretty darn obvious that their
compilers were far less mature than those for S/360; don't you think
that made adoption of those architectures a little hard to swallow for
those used to S/360's development tools?


Agreed...it will take time to tell.

David
It's weak because they have had what, 4 or 5 years to get some compilers out
there that work. If they haven't yet succeeded in doing so, either it is
much harder than other architectures, lending credence to the idea that
Itanium was a bad idea, or Intel is not trying very hard to make it happen,
lending credence to the idea that they are inept.

I know that there is not much demand for Itanium stuff, but some cash from
HP and Intel and some contributions to GCC or whatever would go a long way
in enhancing the offerings.

del cecchi
 
Del said:
It's weak because they have had what, 4 or 5 years to get some compilers out
there that work. If they haven't yet succeeded in doing so, either it is
much harder than other architectures, lending credence to the idea that
Itanium was a bad idea, or Intel is not trying very hard to make it happen,
lending credence to the idea that they are inept.

Why do you think that 4-5 years is an appropriate time frame to develop
compilers? I mean, MSVC was crap for a while, from what I hear, and
didn't get good until recently. How long has that been around?
I know that there is not much demand for Itanium stuff, but some cash from
HP and Intel and some contributions to GCC or whatever would go a long way
in enhancing the offerings.

Agreed.

David
 
It's weak because they have had what, 4 or 5 years to get some compilers out
there that work. If they haven't yet succeeded in doing so, either it is
much harder than other architectures, lending credence to the idea that
Itanium was a bad idea, or Intel is not trying very hard to make it happen,
lending credence to the idea that they are inept.
Almost anybody would have to vote for inept at this point, wouldn't
they? Were I a professor at the Harvard Business School, I'd probably
be looking around for some smart young whip to write a thesis on how
this always happens to technology companies, with Intel (and
Microsoft, btw) as only the latest case studies.

A software type (take it easy, Keith, he worked for a do-it-all chip
company that I'll bet you've worked with, but no names) opined to me
recently that AMD's advantage is that they really have no choice but
to get outside help, so they get it. My interpolation is that, given
the state of the industry, the best is generally available for hire by
AMD unless they happen to work for Intel.

I'd love to be able to go back and look at the historical documents at
Intel. What did they know and when did they know it? Even knowing
what they _thought_ they knew would make a fascinating read.

It isn't so much that they didn't get it right in four or five
years...the problem is that hard. It's hard to comprehend what they
were doing in terms of risk management, though. Not much, apparently.
I know that there is not much demand for Itanium stuff, but some cash from
HP and Intel and some contributions to GCC or whatever would go a long way
in enhancing the offerings.
There was recently a Gelato meeting dealing with getting some serious
help for gcc with Itanium. I haven't followed the details since the
meeting to see whether anything really happened.

I'll venture that no compiler solution to Itanium's problems is
forthcoming. Not that significant progress isn't possible. It just
won't be enough.

That means death for Itanium? I'm less sure of that. It depends on
how adventurous Intel is willing to be. The fact that execution pipes
stall frequently really shouldn't be a serious problem for server
applications. The idea that you have to keep a single pipe stuffed
full and running all the time is mainframe and HPC thinking.

The only thing Intel really has to preserve is the ISA. If you can
add more execution paths without blowing the power consumption through
the roof, you should be able to make practically any architecture get
any kind of throughput you want as a multi-threaded server.

...That's the path (lots of threads, who cares if they stall) I
thought Intel was going to take with the core that I gather was being
designed at Marlborough. That core, apparently, is dead, so who knows
what Intel is going to do.

I should be posting to comp.arch so I could get major flames.

RM
 
Robert said:
2. AMD processors don't have hyperthreading. Hyperthreading is
already in the benchmarks? Well, yes, but one of these days, they're
going to figure out how really to take advantage of HT. Honest.
Meanwhile, the manager stuck with Intel processors that are
outperformed in benchmarks can say, "Yeah, but because of
hyperthreading, this is really like an 8-way box."

Well, I just posted a story talking about how the Netburst architecture
is likely going to disappear by 2006. Without Netburst's long pipelines,
Hyperthreading becomes difficult on the short-pipeline architecture
that's going to replace it. But Intel is going to try it anyway.
3. AMD doesn't have the capacity to be Intel. That's the one that
really counts for managers with their thinking caps on.

For servers, AMD has the capacity right now to supply server chips for
the entire market including the non-x86 server market. Of course,
people in the non-x86 market aren't going to be able to switch that
simply.

Yousuf Khan
 
Robert said:
I should be posting to comp.arch so I could get major flames.

Oh please, copy the thread over there if you like, but don't crosspost
it there! That way we don't have to deal with comp.arch regulars over
here.

Yousuf Khan
 
Jason said:
Sure, 64-bit OSes have been available for a bit. But how many 64-bit
applications--designed for 64-bit--are you running?

Yeah, then there's the compiler problem...

If the OS is Linux, then probably all of the applications are already
64-bit, whether they needed to be or not. It's simply a matter of
recompiling under Linux. A bit more difficult under Windows.

The Windows experience is going to be much more interesting to watch.
This is really what x64 was designed to address -- the ability to
seamlessly run 32-bit apps under 64-bit OSes. In the closed Windows
world, recompiling isn't all that simple, so the chip has to take the
responsibility to handle a variety of code bases.

And Microsoft hasn't made life easier on itself by not supporting
32-bit drivers alongside 32-bit apps. That means the class of software
that is the most tricky to create and maintain (i.e. device drivers) is
the one class of software that definitely has to be recompiled in order
to even work in the 64-bit OS.

Yousuf Khan
 
Oh please, copy the thread over there if you like, but don't crosspost
it there! That way we don't have to deal with comp.arch regulars over
here.
It was a joke, Yousuf.

Some of the participants here are well-respected comp.arch regulars in
any case.

RM
 