Interesting read about upcoming K9 processors

  • Thread starter: Yousuf Khan
Dean said:
Foster, Cascades and Tanner are Xeon parts, not desktop x86. You are
either being intentionally disingenuous, or your shades are letting
through only what will support your argument. The article
specifically states:

No one said anything about desktop or server x86; we were just talking about
x86 in general.

Actually, neither of these articles made any mention of these being Xeon
parts. Yes, Foster eventually did turn out to be a P4 Xeon, but at that time
it wasn't yet known which market it was aimed at. It was just a name on a
roadmap.
"Graylish meanwhile confirms that Intel doesn't want people to get too
excited about IA-64 too soon. As reported here earlier, Intel's plans
for the continuation of IA-32 make it clear that it anticipates this
being the volume platform for some time"

However, the 32-bit Intel roadmaps ended at Foster at that time, while a lot
of names appeared on the IA-64 roadmap well beyond Foster. Why so little
visibility on the IA-32 roadmap when IA-64 was so visible?
Um. Exactly which 'masses' are you referring to? Opteron has 6-7%
of the market. A64 has even less.

What would you call them, boutique chips? Athlon 64 with a small market share
in the desktop space could still mean millions of chips in a year. And of
course, Opteron at 6-7% would still mean that it outsells all non-x86 server
chips. And of course none of this is static, as both of those processors are
increasing their market share, not decreasing it.

Yousuf Khan
 
Well, the fact that they spent an effort on x86 compatibility seems to
make it pretty clear that they wanted it to target at least some of
the same market.

If you look at what the software people actually did, having the
ability to execute 32-bit code was very useful in the early life of
the chip. For example, the initial Intel compiler under Linux was a
32-bit app, and some important Winblows apps weren't (aren't?) 64-bit
clean. When you produce a chip and run Linux or Winblows on it, those
damn customers expect a lot of wild stuff to actually work on it,
no matter which market you think you're attacking.

This would justify some of that compatibility work, although it could
have been a pure software scheme.

Followups reduced.

-- greg
 
That, by itself, is not an indication of the success or failure of Itanium.
First you have to provide the revenue numbers, and market percentage.
Otherwise you could state that, because Intel gets almost 85% of all x86
revenues, x86 is a failure...

Not indicative of success vs. failure at all (nor was it at all
intended to be), simply indicative that Itanium is pretty much just an
HP and SGI chip. Sure, there are at least a half-dozen other
companies selling the chips, but the quantities that they sell are
small enough that they can pretty much be ignored.
 
The explanation is most likely that the Windows OS is a huge pile of
spaghetti code that is a nightmare to maintain - including full 64-bit
operation.

That's probably got a lot to do with it, but it's definitely not just
64-bit code that's causing the holdup. MS announced that they were
delaying Win2K3 Server SP1 (32-bit version) at the same time, and for
the same reason, that they are delaying WinXP 64-bit and Win2K3 64-bit.
All of it seems to tie back to getting the changes and fixes for WinXP
SP2 implemented properly first and then streaming them into the other
products.
For those who have never worked on a commercial product of any
size, all it takes is a few customers complaining about a bug that 95% will
never encounter to extend a beta - and that 95% will scratch their heads and
claim that the product is *so* stable. Sure, you can get the 'core'
features to work fine, but the corner cases can be a major bitch... :-).

Certainly! It's that classic 90/10 principle of project work (i.e. 10%
of the work takes 90% of the time to complete), although for computers
I think it's more like a 95/5 principle!
 
Dean said:
Is it possible that Mike was wrong, or perhaps his source? Isn't the
Israel lab the one that developed Banias? Would someone there have
direct knowledge of IA-64 plans, or would it most likely be rumors?

Or perhaps, it's possible that Intel is rather flexible with codenames
during the conceptual phase, and doesn't actually lock them down until the
design phase?
I wonder if there has been any other time Intel has re-used a
codename for two different processors?

I can think of at least one recent instance. Clackamas vs. Yamhill
technology, which eventually became EM64T technology. Just goes to show how
flexible Intel really is with its codenames.
So, if the Northwood was
being planned as IA64 in mid-1999, but by early 2001 it was released
as a P4, that seems like a very short amount of time to do such a
redesign. I'm no EE, but it seems that if Intel has such release
cycles it would be pretty difficult to just change them on the fly
like this.

And I'm no Intel manager, but isn't it entirely possible that these Intel
project codenames are actually attached to management cost centres rather
than to actual technology? First you get funding for a project, and then you
lay down the technology. So perhaps during the initial conceptual phase they
were toying with the idea of making Northwood either IA-64 or IA-32,
and in 1999 it looked like a good bet that IA-64 was the way to go with it,
but by 2000 it was looking like IA-32 was the way to go.

Let's look at what the Northwood actually ended up becoming, shall we? It
was just a small evolution of the Willamette core: a die shrink, an L2
cache increase, and the enabling of the dormant Hyperthreading technology.
These evolutions were therefore paid for through the Northwood cost centre.
The Willamette itself was rushed out the door as soon as possible to do
battle against the Athlon, which was starting to hurt Intel. Once Willamette
was out the door, incomplete as it was, it's likely that its cost centre was
closed, and thus further evolution had to be paid for through the next
available cost centre which was the Northwood project. Basically, just a big
management relay race, rather than showcases of technology.

Yousuf Khan
 
Dean said:
I really am trying to find the evidence. It just seems so damned
hard to find that I must question whether it ever existed...

You go ahead and re-interpret history as much as you like. But you were the
one who wanted to know why so many of us have this memory of Intel wanting
to phase out x86 by now. Notwithstanding clauses notwithstanding, I think
the case has now been made for why so many of us have this memory of things
past.

Yousuf Khan
 
Yousuf said:
I can think of at least one recent instance. Clackamas vs. Yamhill
technology, which eventually became EM64T technology. Just goes to
show how flexible Intel really is with its codenames.

Which, in light of the previous cost centres discussion, can be
reinterpreted too. Yamhill was supposedly cancelled by Intel a long time
ago, because it threatened future IA-64 sales. So the Yamhill cost centre
was closed. However, none of the designs that were worked on are ever thrown
away. So if they ever want to revive a project, they have to open up a new
cost centre with a new name. Thus Yamhill was cancelled, and its name could
never be reopened.

Yousuf Khan
 
Yousuf Khan said:
http://www.theregister.co.uk/1999/04/28/secrets_of_intels_ia64_roadmap/

<quote>
Updated Reliable sources said yesterday that a future Intel IA-64 chip
called Northwood would hit 3000MHz at its release. ....
Northwood, like Madison and Deerfield will be X60
compactions of the IA-64 but for the Willamette architecture.

So Northwood was described as a shrink of the Willamette, and
writing that it would be an IA-64 implementation was probably a
mistake (misinterpretation, or typo).

Anyway, wrt the intents for IA-64: Intel would not have participated
in its development if the intent had not been to replace the IA-32
architecture with it eventually.

It certainly was not intended as a high-end-only architecture, as they
saw very well that the high-end markets were being eaten from below,
so if they just did one general successful 64-bit architecture, the
high-end markets would fall to them eventually. I fear that they see
that now and will pull the plug on IA-64 now that it does not seem to
catch on widely.

Followups to comp.arch.

- anton
 
Yousuf Khan said:
lay down the technology. So perhaps during the initial conceptual phase they
were toying with the idea of making Northwood either IA-64 or IA-32,

Didn't some ex-Intel person comment on comp.arch, not so long ago,
that that was indeed the case, but the idea was later canned?
Let's look at what the Northwood actually ended up becoming, shall we? It
was just a small evolution of the Willamette core: a die shrink, an L2
cache increase, and the enabling of the dormant Hyperthreading technology.

Would it be plausible that initially, Hyperthreading (two threads) was
invented so one IA-32 and one IA-64 process could share the processor
at the same time, different decoders feeding the same trace cache and
execution resources?

best regards
Patrick
 
Ah so... If an application I bought yesterday doesn't work today, it's
*my* fault? I don't think IBM became a giant with that attitude.

If the reason it broke was some accidental feature of the previous chip,
it is the application programmer's/vendor's fault, yes. It is very similar
to writing a multi-threaded program with a race condition that just doesn't
happen to occur in one particular context - the one you have been using up
until now - but now occurs and makes WW III happen.
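
To make that concrete, here is a minimal sketch of such a latent race -
assuming a POSIX threads environment; it is only an illustration, not code
from any product discussed here. Two threads bump a shared counter with no
lock, and whether the bug ever shows up depends entirely on the context it
runs in:

/* Minimal sketch of a latent data race: two threads increment a shared
 * counter without synchronization.  Build with: cc -pthread race.c */
#include <pthread.h>
#include <stdio.h>

#define ITERS 1000000L

static long counter = 0;               /* shared, deliberately unprotected */

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++)
        counter++;                     /* non-atomic read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Expected 2*ITERS; without a mutex the result depends on timing. */
    printf("counter = %ld (expected %ld)\n", counter, 2 * ITERS);
    return 0;
}

On some machine/compiler combinations the loss may never show up (for
instance if the increment happens to compile to a single instruction on a
uniprocessor); on an SMP box it usually fails visibly. Same bug, different
context - which is exactly the point above.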

Jan
 
You seem to have a bit of an odd view of backward compatibility.

I do. Have you seen the talk in question, and understood the situation
Colwell described?

Jan
 
In article <[email protected]>, "Dean Kent" wrote:
Check this link to MDR. This is the 2nd half 2000 Intel forecast. That
means it was likely compiled in the first half, or perhaps even late 1999.
It shows Northwood as a P4 part, not IA-64. Can we presume that Mike or
his source were out in left field?

64-bit uops did show up in Prescott and, according to Andy Glew, IA-64
support was proposed for Tejas (but turned down). Even if rumors have a
basis in fact, it is common for planned features to creep a generation or
two upstream (versus reality). Makes for much better drama.
 
I saw charts in 1997 also - and none of them showed Itanium moving out of
the high end server segment. Though I was only given reseller roadmaps
directly, I was given OEM roadmaps from a motherboard maker associated with
a very, very large Asian OEM. I don't recall seeing any of those things -
and I still have those roadmaps. I wonder if anyone 'recalling' such things
does?

As for George's argument, as usual it is a fallacy. It is called
"argumentum ad ignorantiam". Just because it cannot be proven to be false
does not mean that it is true. The burden of proof is upon those trying to
make the claim. Public information says Intel did not intend to replace
x86 anytime soon, so it will take a bit more than 'recollections' to make
the case that they did. Sorry.

As usual the Kentster's way of impolitely calling someone a liar... and not
only me. The roadmaps *did* exist! Were they official roadmaps like those
issued to the i-Stooges in your quixotic, privileged position?... nope!
Were they published in magazines and Web sites?... yup! The evidence has
vanished along with bubble memory cheers and i860 effervescence - seems
like you were not paying attention.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 
Yousuf said:
In this article from late 1998, it was thought that the IA-32 line of
processors would end in 2003 with the Foster, and from that point onwards
IA-64 would take over, starting with Deerfield. It's also interesting to note
that back then, Intel thought 64-bit for the masses would take off starting
in 2003. It turned out that they were absolutely right, but it just wasn't
with one of their chips; it was the Athlon 64 and Opteron.

http://www.theregister.co.uk/1998/10/22/madison_and_deerfield_to_split/

http://www.theregister.co.uk/1998/10/15/intel_doctors_foster_to_extend/

Now, it's obvious that Intel's plans didn't actually live up to its original
roadmaps. That's not surprising or unexpected. However, it's also not
important; it is only their intention that is being discussed here. It was
obvious that in 1998, Intel was hinting at replacing IA-32 at least by 2003.

Yousuf Khan

For those of you who seem to enjoy beating a dead horse in your K9
thread, take a look at the links above!
 
Yousuf Khan said:
No one said anything about desktop or server x86; we were just talking about
x86 in general.

Actually, neither of these articles made any mention of these being Xeon
parts. Yes, Foster eventually did turn out to be a P4 Xeon, but at that time
it wasn't yet known which market it was aimed at. It was just a name on a
roadmap.

Now you *are* being disingenuous. It was very well known that Foster was a
Xeon at the time. I *have* the roadmaps, and the roadmaps very clearly
segment the market. Foster was at the lower end of the server segment by
2003 (which was supposed to be why Deerfield would be the low-end
replacement).
However, the 32-bit Intel roadmaps ended at Foster at that time, while a lot
of names appeared on the IA-64 roadmap well beyond Foster. Why so little
visibility on the IA-32 roadmap when IA-64 was so visible?

Yousuf - this is a *SERVER* roadmap. Period. Not an IA-32 roadmap.
What would you call them, boutique chips? Athlon 64 with a small market share
in the desktop space could still mean millions of chips in a year. And of
course, Opteron at 6-7% would still mean that it outsells all non-x86 server
chips. And of course none of this is static, as both of those processors are
increasing their market share, not decreasing it.

Itanium outsold Opteron last year (100K to about 70K, I believe). So, if
Opteron is a mass-market 64-bit CPU, what is Itanium? Your comments apply
to Itanium as well as Opteron, and yet you relegate Itanium to the scrap
heap? Disingenuous does not seem to properly describe the argument being
presented.

I believe that by this time the point should have been made, and recognized.
I doubt most care much about this particular point anymore, however.

Regards,
Dean
 
Yousuf Khan said:
I think a lot of us can remember Intel's predictions about IA64 eventually
replacing IA32 by some point in time. You don't need some archival webpage
in order to prove it. Just the fact that so many of us who have been in this
business for so long can recall these statements is more than enough.


I'm sure most or even all of us remember these things.

Some of the problem is in semantics: introduction, general
availability, wide use, replacement. I take "replacing" to
mean that IA32 is no longer available, and IA64 is used
in all cases including embedded devices.

My particular memory puts the date of IA64 replacing
IA32 well into the second decade of this century.

My memory puts the date of widespread use of
IA64 in the first decade of this century.

This is from an investment viewpoint: what I thought
Intel was saying about the evolution of the portions
of their product line that produced the most profit.

But the fact that I think these things is probably rather
uninteresting, since I cannot produce any actual
documents that say the specific things I think will be true ;-)

--

... Hank

http://horedson.home.att.net
http://w0rli.home.att.net
 
In comp.sys.intel Ketil Malde said:
(e-mail address removed) (Nick Maclaren) writes:

I've wondered about this; if true, I think it's a combination of
several factors:
- the fact that SPEC is now an in-cache benchmark.
- 4 flops/cycle on linpack -> good scores for top500.
- before Xeon/Opteron shipped with 128b ddr, it2 had the only
6.4 GB/s memory system.
- big discounts from vendors.
- some people swallowing spin about the next-big-thing.
I guess I should emphasize that I'm just guessing - I know about a few
Superdomes that do number crunching (which anyway is what IA64 is good
at). Are there any numbers, anywhere?

I've done some testing, and Superdomes are OK - comparable to Altix,
not surprisingly.

the ironic thing is that ia64 owes nearly all of its good performance to
running benchmarks in-cache. out of cache, it's dramatically less
impressive. I re-analyzed a batch of specFP results, by sorting the
individual components by score. you'll immediately notice that ia64
has some real outliers, which just happen to be the smallest (RSS) SPEC
components, that just happen to be in-cache on ia64, and not on most
other processors. iirc, if you drop the top 4, Opterons are actually
faster than ia64.
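
For the curious, a rough sketch of that kind of re-analysis is below - the
input format (per-component ratios on stdin) and the cutoff of 4 are just
assumptions for illustration, not the actual script or data referred to
above:

/* Rough sketch: read per-component SPECfp ratios from stdin, drop the
 * 4 highest, and print the geometric mean of the rest.
 * Build with: cc respec.c -lm */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define MAXSCORES 64
#define DROP      4                  /* number of top outliers to discard */

static int cmp_desc(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x < y) - (x > y);        /* highest ratios sort first */
}

int main(void)
{
    double scores[MAXSCORES];
    int n = 0;

    while (n < MAXSCORES && scanf("%lf", &scores[n]) == 1)
        n++;
    if (n <= DROP) {
        fprintf(stderr, "need more than %d scores\n", DROP);
        return 1;
    }

    qsort(scores, n, sizeof scores[0], cmp_desc);

    double logsum = 0.0;
    for (int i = DROP; i < n; i++)   /* skip the DROP largest components */
        logsum += log(scores[i]);

    printf("geometric mean without top %d components: %.1f\n",
           DROP, exp(logsum / (n - DROP)));
    return 0;
}

Nothing fancy - sort, drop, geometric mean - but run against two sets of
published component ratios it makes the in-cache outlier effect easy to see.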

there are realistic codes which have small working sets. the real problem
is that these codes tend to run pretty well on a 3GHz Xeon, so why pay
incredible prices for a 1.3 GHz It2? for memory-intensive codes, the
standard is pretty much Opteron.

regards, mark hahn.
 
Dean said:
Now you *are* being disingenuous. It was very well known that Foster
was a Xeon at the time. I *have* the roadmaps, and the roadmaps very
clearly segment the market. Foster was at the lower end of the
server segment by 2003 (which was supposed to be why Deerfield would
be the low-end replacement).

And as has been said, it doesn't matter; we're just talking about x86 in
general. The x86 roadmap's visibility ended at Foster in one case, and
Northwood in another. I'm not going to get into a pissing contest with you,
Dean (I'll let Keith do that :-), but you asked for reasons why the beliefs
are so widespread, so now you know.
Itanium outsold Opteron last year (100K to about 70K, I believe).
So, if Opteron is a mass-market 64-bit CPU, what is Itanium? Your
comments apply to Itanium as well as Opteron, and yet you relegate
Itanium to the scrap heap? Disingenuous does not seem to properly
describe the argument being presented.

Quite the accomplishment for Itanium too, that was. It beat the sales of a
chip that first came out four months into the year. This year Opteron is
already slated to sell 100,000 in one quarter, let alone the whole year.
Quite the wide-selling boutique chip, isn't it?
I believe that by this time the point should have been made, and
recognized. I doubt most care much about this particular point
anymore, however.

Oh, that point was already reached a long time ago; you were the only one
denying what the rest of us saw as quite obvious. This is just the education
of Dean Kent at this point.

Yousuf Khan
 
George said:
As usual the Kentster's way of impolitely calling someone a liar...
and not only me. The roadmaps *did* exist! Were they official
roadmaps like those issued to the i-Stooges in your quixotic,
privileged position?... nope!
Were they published in magazines and Web sites?... yup! The evidence
has vanished along with bubble memory cheers and i860 effervescence -
seems like you were not paying attention.

We've now even dug up some old historical webpages (possibly written in
parchment or papyrus or something) from the early days of the commercial
Internet which state exactly why we thought Intel's plans were to go
towards IA-64. Yet, he still needs to argue. Some people are just beyond
quixotic!

Yousuf Khan
 
Patrick said:
Would it be plausible that initially, Hyperthreading (two threads) was
invented so one IA-32 and one IA-64 process could share the processor
at the same time, different decoders feeding the same trace cache and
execution resources?

It's possible, but we have no way of knowing that right now. Anyway, that
would sound more like dual-processing than Hyperthreading.

Yousuf Khan
 