Intel strikes back with a parallel x86 design

  • Thread starter: Jim Brooks
Nathan said:
That's like saying none of the knowledge gained from
the P6 design carried over to the Willamette design.

About the Mac, Jobs looked at the roadmaps of both Intel and AMD.
There were other factors, but Jobs was more intrigued by Intel's.
Why?

Another thing to note: a familiar refrain is that Intel can outspend
AMD in R&D several times over. This is true. The next refrain is
usually that, because of this, Intel can catch up with AMD any time it
wants. That has not been proven.

They may say that two heads are better than one. However, nobody has
ever proven that five hundred heads are better than one hundred heads.

Yousuf Khan
 
That's like saying none of the knowledge gained from
the P6 design carried over to the Willamette design.

About the Mac, Jobs looked at the roadmaps of both Intel and AMD.

Hector was quoted as saying that Jobs/Apple never talked to AMD... and that
they were much too busy anyway. :-)
There were other factors, but Jobs was more intrigued by Intel's.
Why?

Hey maybe this is not a joke at all:
http://www.electric-chicken.co.uk/itoilet.html
 
And that's using overclocked CPUs too. It's still more power-efficient.

They're not "overclocked"; they're a special speed grade Sun asked for
and got.

Casper
 
Looks to me like some of the regulars don't exactly know their hardware
basics either!
Along with the (transputer | iAPX-432 | TMS9900) coulda-ruled-the-world
types. I only mention the TMS9900 because, even though it had memory-mapped
registers, to a programmer it looked real good compared to the 8088 ISA.
You can substitute your favorite failed obscure processor there.

You probably know this but

If my memory serves me well, two of those were not failures; at least
during their heyday they were quite successful, selling in the millions
back when a million actually meant something. One of them is alive and
well inside your set-top box (which must mean many, many millions at 70%
market share), but don't ask ST to name it; it hurts too much to say the
word.

The iAPX 432, designed by a bunch of PhDs with no clue about hardware
costs, never reached the market AFAIR, and the 8086 backup plan went
into effect. Eventually Intel must have forgotten that lesson.

TI abandoned the 9900 as one of too many product lines and eventually
rationalized down to its DSP and mixed-signal businesses. Burning one's
fingers in the commodity biz tends to make one refocus.

Inmos couldn't explain to the masses what seemed easy or obvious to it,
namely how to compose processes, but CSP is still around.

BTW, a modern Transputer wouldn't look anything like the old Transputer;
it might even run on the x86 ISA or ARM or anything mainstream. It just
boils down to an occam interpreter or compiler hosted on otherwise
common hardware.

A specially designed processor to support pervasive communicating
processes with objects might look quite different though, with shades of
Niagara, Rekursiv, etc.

johnjakson at usadt com
transputer2 at yahoo
 
|>
|> You probably know this but
|>
|> If my memory serves me well, two of those were not failures; at least
|> during their heyday they were quite successful, selling in the millions
|> back when a million actually meant something. One of them is alive and
|> well inside your set-top box (which must mean many, many millions at 70%
|> market share), but don't ask ST to name it; it hurts too much to say the
|> word.

Yes. What the x86 fanatics miss is that there are a large number
of designs that could perfectly well have prevented its rise, or
toppled it from its perch and taken over during one of its more
vulnerable periods. Its success was always more a matter of luck
(and incompetence by the opposition) than merit.

In addition to those systems and the Alpha, there was the 68K
range and PowerPC, which both came VERY close to blocking the
rise of the x86 and toppling it, respectively. We know why they
didn't, too, and the reasons were not architectural.

Nowadays, with the patent system preventing innovation by new
companies and established companies not being prepared to tackle
new general-purpose architectures, I doubt that anything could
make headway until the x86 collapses of its own accord. Unless,
of course, China says "sod you" to the USA over patents and
starts innovating itself.


Regards,
Nick Maclaren.
 
Jim said:
AMD has only a fraction of the resources that Intel has,
so AMD will have a hard time catching up

Under the assumption that having more resources makes you faster. Another
Brooks (Fred) thought differently. There's a lower limit on the time a
project needs to finish, and it depends only on the number of people
involved, not on the inherent complexity (maybe an overstaffed team can
finish earlier by delivering a skunkworks project instead of the planned
one). The complexity only gets exposed when you have an understaffed team
(and even then, half the people doesn't mean twice the time).
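
As a back-of-the-envelope illustration of that lower limit (a toy model of my own, not Fred Brooks's formula, and with made-up numbers): assume the work splits into an inherently serial part plus a divisible part, and every pair of people costs a little coordination time. Then adding people stops helping surprisingly early.

/* Toy staffing model (illustrative assumption only):
 *   time(n) = serial + parallel/n + pair_cost * n*(n-1)/2
 * where n*(n-1)/2 is the number of communication paths in the team.
 */
#include <stdio.h>

static double project_time(int n, double serial, double parallel, double pair_cost)
{
    double comm_pairs = (double)n * (n - 1) / 2.0;   /* communication paths */
    return serial + parallel / n + pair_cost * comm_pairs;
}

int main(void)
{
    const double serial    = 3.0;   /* months of inherently sequential work */
    const double parallel  = 24.0;  /* months of divisible work             */
    const double pair_cost = 0.05;  /* coordination cost per pair, months   */

    for (int n = 1; n <= 64; n *= 2)
        printf("%2d people: %.1f months\n",
               n, project_time(n, serial, parallel, pair_cost));
    return 0;
}

With these made-up numbers the schedule bottoms out around eight people and gets worse beyond that, and halving a small team costs nowhere near a factor of two.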

Read in isolation, the comment makes as much sense as saying "these guys
from Kenya have only a tiny fraction of the resources all the first-world
people have, so they'll have a hard time catching up in the New York City
marathon." If you look at the list
(http://www.mistupid.com/sports/nymarathon.htm), you'll see that since
1982 no US American has won, and over the last decade Kenya has five wins
out of ten.

To do a job in a short time, three things are necessary:

* Excellence (If you can't run, you can't win)

* The necessary resources (You can win the NYC marathon barefoot - it has
been done - but sneakers help)

* Knowing the direction (If you get lost on the way, you'll never win)

The last item may be the least important for the NYC marathon, but it's the
most important for chip development. And here, despite its resources, Intel
is years behind AMD.
 
Nick said:
Nowadays, with the patent system preventing innovation by new
companies and established companies not being prepared to tackle
new general-purpose architectures, I doubt that anything could
make headway until the x86 collapses of its own accord. Unless,
of course, China says "sod you" to the USA over patents and
starts innovating itself.

Actually, if China is smart they won't say "sod you" but wait until
the US has made the world safe for IP and then do a swap of US debt
for US IP, the US having no other assets to sell at that point.
 
|> >
|> > Nowadays, with the patent system preventing innovation by new
|> > companies and established companies not being prepared to tackle
|> > new general-purpose architectures, I doubt that anything could
|> > make headway until the x86 collapses of its own accord. Unless,
|> > of course, China says "sod you" to the USA over patents and
|> > starts innovating itself.
|>
|> Actually, if China is smart they won't say "sod you" but wait until
|> the US has made the world safe for IP and then do a swap of US debt
|> for US IP, the US having no other assets to sell at that point

Speaking as an IP developer, I am unaware that the world is unsafe
for IP.

It is unclear how much IP is honestly owned by the USA, as many
of the claims are legally void and used primarily for extortion, as
an obstruction to innovation, or as a defence against those.


Regards,
Nick Maclaren.
 
Casper said:
They're not "overclocked"; they're a special speed grade Sun asked for
and got.

Casper

Yes, yes, we know, professionally "designed for extra speed" at the
factory by AMD so that Sun can win all benchmarks a few months ahead of
other people's Opteron boxes. Because in a few months AMD will have
those same speed grades available at 90W instead of 120W. :-)

Yousuf Khan
 
JJ said:
The iAPX 432, designed by a bunch of PhDs with no clue about hardware
costs, never reached the market AFAIR, and the 8086 backup plan went
into effect. Eventually Intel must have forgotten that lesson.

the last asilomar acm sigops (before they started letting the
conference wander around, there was a midnight session bemoaning that
the penniless mit students always had to pay for the coast-to-coast trip,
and that it would only be fair if the berkeley students sometimes had
to pay coast-to-coast fare for sigops conferences) ... there was a
presentation on the iapx432 effectively moving some number of operating
system features into silicon ... features that have had a somewhat
significant rate of change ... and the requirement for change didn't stop
when the features were in silicon (but the iapx432 silicon lacked the
ability to make such changes).
 
Nick said:
|> >
|> > Nowadays, with the patent system preventing innovation by new
|> > companies and established companies not being prepared to tackle
|> > new general-purpose architectures, I doubt that anything could
|> > make headway until the x86 collapses of its own accord. Unless,
|> > of course, China says "sod you" to the USA over patents and
|> > starts innovating itself.
|>
|> Actually, if China is smart they won't say "sod you" but wait until
|> the US has made the world safe for IP and then do a swap of US debt
|> for US IP, the US having no other assets to sell at that point

Speaking as an IP developer, I am unaware that the world is unsafe
for IP.

It is unclear how much IP is honestly owned by the USA, as many
of the claims are legally void and used primarily for extortion, as
an obstruction to innovation, or as a defence against those.

That's what I meant by "safe for IP". :)
 
Nick said:
In addition to those systems and the Alpha, there was the 68K
range and PowerPC, which both came VERY close to blocking the
rise of the x86 and toppling it, respectively. We know why they
didn't, too, and the reasons were not architectural.

If what you really mean by "came very close to blocking the rise of x86
and toppling it" is "NOT coming very close at all", then I agree with
you. :-)

x86 was a 100 million machine/year business already by the time Alpha
and PowerPC came onto the scene; they had no chance of blocking, let
alone toppling. 68K had the misfortune of being saddled with Macintosh,
and Macintosh alone.
Yes. What the x86 fanatics miss is that there are a large number
of designs that could perfectly well have prevented its rise, or
toppled it from its perch and taken over during one of its more
vulnerable periods. Its success was always more a matter of luck
(and incompetence by the opposition) than merit.

The only thing that can take on x86 is another x86. No blind luck or
incompetence by the opposition required here. Just a simple matter of
reading the market and delivering what it wants.

Alpha would still be with us, if it had a full, non-emulated x86
compatibility mode. PowerPC toyed with the idea of including an x86
compatibility mode, but that project disappeared. Itanium tried x86
through emulation and that was deemed not good enough, so a real x86
core is required.
Nowadays, with the patent system preventing innovation by new
companies and established companies not being prepared to tackle
new general-purpose architectures, I doubt that anything could
make headway until the x86 collapses of its own accord. Unless,
of course, China says "sod you" to the USA over patents and
starts innovating itself.

Don't see what the patent system has gotta do with it. And innovation
still exists even within the x86 framework. I personally foresaw the
end of the x86 line at the 64-bit boundary. That was because too many
memory management structures in the 32-bit x86 ISA prevented any
further expansion into the 64-bit domain. I was completely surprised to
find that AMD was able to extend it out to 64-bit -- by completely
deleting these memory management structures -- never thought they'd
figure out how to expand to 64-bit by *removing* things. Very
innovative.

Yousuf Khan
 
YKhan said:
If what you really mean by "came very close to blocking the rise of x86
and toppling it" is "NOT coming very close at all", then I agree with
you. :-)

Nick is referring to the time, back in the day, when IBM chose the 8088
to power the PC. They could have chosen the 68k instead. The reasons
they didn't had to do with business concerns which were more important
than processor details. IBM even bailed Intel out a few years later
when they were about to go under.
x86 was a 100 million machine/year business already by the time Alpha
and PowerPC came onto the scene; they had no chance of blocking, let
alone toppling. 68K had the misfortune of being saddled with Macintosh,
and Macintosh alone.

The only thing that can take on x86 is another x86. No blind luck or
incompetence by the opposition required here. Just a simple matter of
reading the market and delivering what it wants.

And to think I was doubting there were such things as x86 fanatics. :-)
Alpha would still be with us, if it had a full, non-emulated x86
compatibility mode. PowerPC toyed with the idea of including an x86
compatibility mode, but that project disappeared. Itanium tried x86
through emulation and that was deemed not good enough, so a real x86
core is required.

Alpha would never have gotten off the ground if it had a "full
non-emulated x86 mode", and what would DEC have done with such a mode?
In fact, what would anyone have done with it? IBM tried pretty hard to
make an x86/PowerPC thang (it's a floor wax and a dessert topping) and
didn't have much luck. It is only in the last few years that density has
gotten to the point where one could just toss in an x86 core as an
"emulation assist engine".
China already sort of ignores patents: what's theirs is theirs and
what's yours is negotiable.
Don't see what the patent system has gotta do with it. And innovation
still exists even within the x86 framework. I personally foresaw the
end of the x86 line at the 64-bit boundary. That was because too many
memory management structures in the 32-bit x86 ISA prevented any
further expansion into the 64-bit domain. I was completely surprised to
find that AMD was able to extend it out to 64-bit -- by completely
deleting these memory management structures -- never thought they'd
figure out how to expand to 64-bit by *removing* things. Very
innovative.

Yousuf Khan

So folks who know what they are doing fooled you, eh?
 
Again, another reason why AMD is doing so well these days. They brought
64-bit x86 out first, which was intriguing,

A relatively simple architectural hack.
64-bit effective addresses contribute to code bloat.
but they really ignited the
rockets once dual-core was introduced.

What an outrageous AMD bias Khan suffers from.
Yeah, right. AMD is doing "good" with phony, misleading MHz ratings
(e.g. "Athlon 4400+") to conceal a 2 GHz gap behind the P4. So good that AMD
had to resort to crying and whimpering to the European trade
commission.
 
Bernd said:
Under the assumption that having more resources makes you faster. Another
Brooks (Fred) thought differently. There's a lower limit on the time a
project needs to finish, and it depends only on the number of people involved (

Silicon Valley is relearning that lesson from the elder Brooks.
One good, experienced SV engineer at $60/hour is worth way more than
four rookie Indian CS grads at $15/hour in Bangalore.
 
If what you really mean by "came very close to blocking the rise of x86
and toppling it" is "NOT coming very close at all", then I agree with
you. :-)

Don't be silly.
x86 was a 100 million machine/year business already by the time Alpha
and PowerPC came onto the scene; they had no chance of blocking, let
alone toppling. 68K had the misfortune of being saddled with Macintosh,
and Macintosh alone.

Either you are very young, or your memory is failing. Let me remind
you.

Back in the early 1980s, there were several chips fighting it out
for the workstation market. Intel's 8086 wasn't up to it and even
the 80286 was pretty marginal - the 68000 and 68010 were rapidly
becoming the dominant chips in the high-end market. What stopped
them from becoming dominant was that no company produced a 68K-based
workstation that was both relatively cheap and had working software.
Sun and Apollo established themselves because their systems
more-or-less worked. There were much cheaper 68K boxes, but their
software was crap, and they didn't have the third-party support.
There were at least a dozen companies in the world who could have
put together a decent 68K-based system over a year before the 80386
became viable. None did.

Only a little bit later, IBM and Motorola set up the PowerPC project.
Let's skip over the long and complex details, but the facts of the
matter were that IBM had viable systems a full year before the 80386
became viable, but wouldn't release them. And the PowerPC consortium
was really quite big at that stage, including Apple and most of the
Tier 2 vendors at least having taken out options. And please note
that Intel and Microsoft combined were small compared to IBM back in
the days of the 80286.
The only thing that can take on x86 is another x86. No blind luck or
incompetence by the opposition required here. Just a simple matter of
reading the market and delivering what it wants.

You may not be aware (or may have forgotten) that IBM put one hell
of a lot of marketing money in to make that turkey fly. And then
discovered that they had been taken for a ride, contractually, by
Intel and Microsoft. I can assure you that IBM was right royally
pissed off.
Don't see what the patent system has gotta do with it.

If I were to get $1 billion together, design a CPU that outperformed
the x86 10:1 and build it into a system, what chance would I have of
selling it without being smothered in lawsuits? Get real.
And innovation
still exists even within the x86 framework. I personally foresaw the
end of the x86 line at the 64-bit boundary. That was because too many
memory management structures in the 32-bit x86 ISA prevented any
further expansion into the 64-bit domain. I was completely surprised to
find that AMD was able to extend it out to 64-bit -- ...

Well, I wasn't, but I had been there before. That scarcely counts
as innovation - it is such a routine task.


Regards,
Nick Maclaren.
 
Jim Brooks wrote on 09/23/05 11:11:
Signs and portents, as JMS would say.

Steve Jobs does a 180 and enthusiastically becomes
Intel's bedfellow on the basis of a compelling roadmap.
That roadmap has to be pretty darned interesting.

Intel claims they aren't developing Hyperthreading anymore.
But Intel now knows all the issues involved in hw threading.
Why not exploit that know-how as an advantage over AMD?
AMD has only a fraction of the resources that Intel has,
so AMD will have a hard time catching up

My speculation is that Intel will build on their HyperThreading experience
to design a "parallel x86". x86 CPUs have become superscalar machines.
The next evolutionary step is a parallel machine. Dual cores are only
an inefficient stop-gap design that wastes transistors on duplicated
or unnecessary resources (e.g. coherency logic between the cores' caches).

My ideas for a parallel x86:

- thread quantums

The idea is to move today's coarse-granularity, timer-driven time-slicing
into the hw so that time-slices can become instruction-granular.

The multithreaded model already runs each thread opportunistically.
Timer interrupts already occur on instruction boundaries.
- thread prioritization

The OS assigns static priorities to threads.
The hw computes dynamic priorities according to static priority
and instruction issue per thread per quantum.

Move a broken priority model from software to hardware. For what
speed advantage?
- sub-threads

Support for parallel programming.
A reduced 80386 Task-State Segment (TSS) will be defined
(avoiding the saving of unnecessary registers such as ES/FS/GS).
A variant of JUMP [TSS], with a new Thread bit defined in the TSS,
will spawn a sub-thread (analogous to a UNIX child process; a
user-space sketch of this spawn/wait pattern follows below).
The sub-thread can stop itself by IRET [TSS].
A new WAIT [TSS] will synchronize the parent with its sub-thread.

- thread exceptions

A thread can raise exceptions to end or suspend itself.

These are all software features. There is no reason to move the
slowest parts of thread management, such as scheduling, into hardware.
- cache lines have thread bits in addition to LRU bits

When one cache line has to be evicted, victimize the line owned
by a lower-priority thread.

Doubt it, but hey...
- ALUs: 8 simple, 4 complex.

SUN. :-)
- FPUs: 4 FADD, 4 FMUL, 2 FLDST.

Yeah, let's optimize the least-used section of the instruction set.
- deprecation of FP SIMD instruction set

SIMD was a good idea for a single-threaded CPU, as it let the control unit
issue a single instruction over multiple data without resource hazards.
But a multi-threaded control unit would function optimally with
a wide window of decomposed (SISD) instructions.

Wow, you sure use big words. Read a book?
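
For what it's worth, the "sub-threads" proposal above maps closely onto the spawn/wait pattern that already exists in user space. Here is a minimal sketch, assuming nothing beyond POSIX fork()/waitpid(); the JUMP/IRET/WAIT [TSS] instructions are the original poster's hypothetical hardware, modelled here with a plain child process.

/* User-space analogue of the proposed sub-thread primitives (sketch only;
 * the [TSS] instructions named above are hypothetical, not real x86):
 *   JUMP [TSS]  ~ fork()     - spawn a sub-thread
 *   IRET [TSS]  ~ _exit()    - sub-thread stops itself
 *   WAIT [TSS]  ~ waitpid()  - parent synchronizes with its sub-thread
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();                      /* "JUMP [TSS]": spawn */
    if (child == -1) {
        perror("fork");
        return EXIT_FAILURE;
    }

    if (child == 0) {                          /* the sub-thread's work */
        printf("sub-thread %d running\n", (int)getpid());
        _exit(0);                              /* "IRET [TSS]": stop */
    }

    int status;
    if (waitpid(child, &status, 0) == -1) {    /* "WAIT [TSS]": sync */
        perror("waitpid");
        return EXIT_FAILURE;
    }
    printf("parent %d: sub-thread done, status %d\n",
           (int)getpid(), WEXITSTATUS(status));
    return EXIT_SUCCESS;
}

Whether burning opcodes and TSS bits on this buys any speed over the existing software path is exactly the question being raised above.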
 
Nick said:
Either you are very young, or your memory is failing. Let me remind
you.

Back in the early 1980s, there were several chips fighting it out
for the workstation market. Intel's 8086 wasn't up to it and even
the 80286 was pretty marginal - the 68000 and 68010 were rapidly
becoming the dominant chips in the high-end market. What stopped
them from becoming dominant was that no company produced a 68K-based
workstation that was both relatively cheap and had working software.
Sun and Apollo established themselves because their systems
more-or-less worked. There were much cheaper 68K boxes, but their
software was crap, and they didn't have the third-party support.
There were at least a dozen companies in the world who could have
put together a decent 68K-based system over a year before the 80386
became viable. None did.

Workstations at $10k+ a pop are never going to let you rule the world.
And as it turned out, it wasn't workstations that anyone was looking
for, it was PCs. The only viable PC-like solution on a 68K was the
Macintosh, not Suns or Apollos. The unfortunate thing with the Macintosh
was that Apple planned its PC design better than IBM did, locking down
its design so that nobody else could copy it. IBM, through poor
planning or looming deadlines, forgot about keeping things proprietary,
and came up with a surprisingly open design, which promptly got copied.
This produced a monster of a market that became bigger than its father,
IBM.
Only a little bit later, IBM and Motorola set up the PowerPC project.

Only a little bit later? PowerPC was introduced in '91 or '92.
Let's skip over the long and complex details, but the facts of the
matter were that IBM had viable systems a full year before the 80386
became viable, but wouldn't release them. And the PowerPC consortium
was really quite big at that stage, including Apple and most of the
Tier 2 vendors at least having taken out options. And please note
that Intel and Microsoft combined were small compared to IBM back in
the days of the 80286.

You're seriously trying to feed me this crap about IBM having PowerPCs
around 1985, when the 386 was first introduced, and you expect anyone
to take you seriously? By the time the PowerPC was introduced, it was
competing against 486s and the first of the Pentiums.

As for what size Intel and Microsoft were in the days of the 286, who
cares? At that time IBM was fully behind the x86 architecture and was
in fact pushing it. IBM was also fully behind the later 386. It didn't
start feeling threatened by Intel or Microsoft until after the 386, when
it became clear that startups like Compaq could take on IBM with the aid
of Intel and Microsoft, who showed no loyalty towards IBM.
You may not be aware (or may have forgotten) that IBM put one hell
of a lot of marketing money in to make that turkey fly. And then
discovered that they had been taken for a ride, contractually, by
Intel and Microsoft. I can assure you that IBM was right royally
pissed off.

IBM took itself for a ride. It forgot to make the PC design
proprietary, and by the time it remembered (hello, PS/2 & MCA), it was
already too late; there were cloners everywhere.

IBM spent a lot of money marketing the PC, not to make it a success
against the high-end 68K machines, but to make it a success against things
like the Commodore 64 and Apple II. IBM's main market for this product was
offices and homes. It would push the message that businesses use IBM PCs
at the office, and now you can buy your own PC at home so you can do
your work from home -- so don't get that Commodore 64 toy; you can't
take your office work home on it.
If I were to get $1 billion together, design a CPU that outperformed
the x86 10:1 and build it into a system, what chance would I have of
selling it without being smothered in lawsuits? Get real.

Don't know what lawsuits you'd be expecting. But before I'd start
worrying about any hypothetical lawsuits, I'd first be worried about whether
anyone would want your chip at all for your hypothetical $1 billion
spent. Nobody would care if it's 10 times faster, or 100 times faster,
in some aspect of computation or another. If it isn't x86-compatible,
then its market is limited.
Well, I wasn't, but I had been there before. That scarcely counts
as innovation - it is such a routine task.

Sure it is, Nick. :-)

Yousuf Khan
 