Intel strikes back with a parallel x86 design

  • Thread starter: Jim Brooks
Bill said:
These are not similar in potential. The 432 was just slow for real world
problems. The only time the slow rule the world is when they're elected
President.

My personal belief is that if/when the Linux desktop is really ready for
prime time, there will be a chance for a non-x86 processor to be very
popular. Or if there's some reason to port Windows to a non-x86
platform. 64 bit Windows is really not much of a leap.

It is not clear what this hypothetical non x86 desktop processor will
be. Cell? XBOX engine? Itanium? that is about the list of things
that will have the volume or potential to compete with x86. Got any
others? The Chinese have something up their sleeves for the domestic
market?
 
IBM was only behind the 386 to the extent that it didn't cannibalise
any of their existing products markets.

They came from `Entry Level Systems' remember, and were considered a
slightly brain-dead terminal by almost all of IBM. IBM *HAD* to be very
open, and touted as open, as IBM knew that a large number of the early
adopters were on the watch for `lock-in' and would scream about it.

--
Paul Repacholi 1 Crescent Rd.,
+61 (08) 9257-1001 Kalamunda.
West Australia 6076
comp.os.vms,- The Older, Grumpier Slashdot
Raw, Cooked or Well-done, it's all half baked.
EPIC, The Architecture of the future, always has been, always will be.
 
chrisv said:
At least back then, it seemed that alternate architectures had at
least a chance of becoming viable. Think Amiga and Atari ST. If
those efforts had been combined into one, and handled properly... who
knows?

Yeah, and actually let's not forget the Macintosh. I remember there was
a lot of talk about Amiga and ST being Mac-compatible. If they had just
combined efforts with each other, then that would've expanded the
market for Motorola 68K into the mass market. And now we'd be seeing
Motorola 68K processors still being updated and at least still
competing against x86 processors, if not supplanting them. That window
of opportunity has long since closed.

Yousuf Khan
 
Del Cecchi said:
Don't claim later that what you wrote didn't mean what it clearly said.

Nick's been doing that consistently for years. I doubt
he's going to change just because you tell him to stop,
but I guess you can always hope.

Speaking as someone who has done microprocessor
architecture for a living for quite a few years, I don't
think his posts contribute anything anyway. I've
shared many a laugh over his posts with co-workers.
YMMV.
 
Yes. But those systems were too complex to take off.

Hmm, I wouldn't say too complex - we sold several with our software while
waiting to see what 80386 would do when it arrived and you just plugged the
bugger in and ran the software with a stub loader: LOADM Progname or some
such; we did have a couple of folks who wanted to know why it didn't speed
up Lotus 1-2-3.:-) As indicated, we were not sure the 80386/387 was going
to match them and that was the shock - it far surpassed them.
That was a factor, and made it hard for garage shops to build a
68K-based system, but I don't think that is relevant. If one HAD
been built to take off, it would have been by a fairly serious
vendor. Apple, Sun, Apollo and others had few problems.

There were a lot of talented engineers -- well above garage shops -- who
tore their hair out trying to get NS 32032 systems working; Moto 68K was
not quite that bad but real engineers drive the price up... i.e. Apollo and
Sun. The 68K Macs were dogs... to the point that Apple actually made an
announcement, late 80s IIRC, that they would not be participating in the
workstation market, just as Intel was (hopefully) squaring up for it with
late 486s and later, Pentium.
 
In comp.sys.ibm.pc.hardware.chips YKhan said:
People were using their PC chips as server chips even before Intel had
any official server solution available, because Netware was available
pretty much within a few years of DOS.

Good heavens, it really is that old (released in 1983!) ... though as I
recalled, the first version ran on a proprietary box (68k based, according
to Wikipedia: http://en.wikipedia.org/wiki/Novell_Netware ) and not on a
standard PC.
 
Other examples: DOS vs CP/M and whatnot, Windows vs. OS/2, Linux vs
Windows (okay, the jury's still out, I guess), SGI vs. 3dfx and
nVidia, Toyota vs. Mercedes, Wal-mart/Aldi/Lidl/IKEA. Compaq vs. IBM,
later Dell vs. Compaq.

Seems that monetary success is very often grown from the mass-market
by undercutting the established competition. Often better products in
the end as well.

Any counterexamples?

Erm, Yugo? IOW it takes a certain commitment to excellence - it's taken
Hyundai 15-20 years to wake up to this but they have apparently finally
cracked it.

From the other angle, Apple has gone from a unique high-end product, with a
certain cachet, into churning out cheap toys. Does this mean they are due
for a tumble?:-)
 
Good heavens, it really is that old (released in 1983!) ... though
as I recalled, the first version ran on a proprietary box (68k
based, according to Wikipedia:
http://en.wikipedia.org/wiki/Novell_Netware )
and not on a standard PC.

there was this datahub project started around summer of '81 that had a
work-for-hire contract with a bunch of people in provo (for a time,
one of the people in san jose was commuting almost weekly to provo).

for a time, there was a conference room/lab in bldg. 61G ... with a
number of machines interconnected with datahub

at some point it was decided to cancel the project and allow the work
product produced by people in provo to be retained by the people that had
produced it. total conjecture how it was used by the people in provo?

random past postings mentioning datahub
http://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
http://www.garlic.com/~lynn/2000g.html#40 No more innovation? Get serious
http://www.garlic.com/~lynn/2002f.html#19 When will IBM buy Sun?
http://www.garlic.com/~lynn/2002g.html#79 Coulda, Woulda, Shoudda moments?
http://www.garlic.com/~lynn/2002o.html#33 Over-the-shoulder effect
http://www.garlic.com/~lynn/2003e.html#26 MP cost effectiveness
http://www.garlic.com/~lynn/2003f.html#13 Alpha performance, why?
http://www.garlic.com/~lynn/2004f.html#16 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2005p.html#23 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005q.html#9 What ever happened to Tandem and NonStop OS ?
 
|>
|> [ PowerPC ]
|>
|> Coulda woulda shoulda? It failed to dislodge x86 because they tried
|> to force a hardware solution without software to support it. How many
|> applications ever got ported to Windows/PowerPC?

That was one of the points I was making. Application vendors and
others were lined up, and many had joined the consortium, but IBM
dithered and dithered. By the time that a system was released,
they had lost interest as the 386/486 had established itself in
the market that the PowerPC was aimed at.

|> Hardware = Cheap. Software = Expensive. This is why x86 dominated
|> the market once it got it's HUGE lead in software.

Well, vaguely. But it hadn't got its huge lead back then. The
number of critical programs that ran on everything major EXCEPT
x86 systems (largely because of the crocks that passed for operating
systems) was legion.

In the case of the 68K, no. However by the time the PowerPC and Alpha
came to be, x86 had an *enormous* lead.

Workstation and server apps were aplenty for non-x86, but mass-market
desktop software? I doubt that any other architecture had even 10% of
the desktop applications that x86 had by 1990!
|> What chances would you have to sell it, period? With no software
|> support you're already dead in the water. Best look towards the
|> embedded market for your design and hope that you can get the power
|> consumption down.

Why do you think that I have forgotten that? Look at my record.

As has been proved time and time again, it is FAR easier to port
to a new architecture than is often made out. Compiler writers
and adequately competent programmers are cheap, and Linux and BSD
are fairly portable. I wouldn't need more than $10 million of
that $1 billion to get Linux up, and wouldn't need more than
perhaps $20 million to bribe a significant number of major
application vendors to support the system.

Intel and HP measure their spending on Itanium software development
money in the hundreds of millions, and just look how far that's got
them. (the IA-64 development fund was $250M alone, and that doesn't
touch things like compilers, porting Linux, HP-UX, etc). You really
think your measly $30M is going to get anyone to blink? You're
dreaming!

I told ya, software is EXPENSIVE!
 
That's interesting, I didn't know that Netware was available on 68K
processors at one time. Anyways, Netware quickly became x86-only when
it was clear which direction the industry was going.

Yousuf Khan
 
Jim said:
FADD/FMUL/FLDST mean floating-point adder, multiplier, load/store units,
resp. Not to be confused with eponymous x87 instructions (sorry).

And yet, curiously, that still doesn't change the fact that it is the
least used section of the processor hardware.

Yousuf Khan
 
Nathan said:
A relatively simple architectural hack.
64-bit effective addresses contributes to code bloat.

Not really; in this case it won't contribute to code bloat. The AMD64
architecture defaults to 32-bit for data and 64-bit for addresses.
Therefore only pointers will become 64-bit, data structures can remain
32-bit and relatively efficient.
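The point that only the pointers grow can be sketched with Python's struct module (an illustration I'm adding, not from the thread; the record layout is invented):

```python
import struct

# A hypothetical record with two 32-bit integer fields plus one
# pointer-sized field, packed without padding. Moving from a 32-bit
# to a 64-bit pointer only doubles the pointer field; the 32-bit
# data fields keep their size.
rec_32bit_ptr = struct.calcsize("<iiI")  # 4 + 4 + 4 bytes
rec_64bit_ptr = struct.calcsize("<iiQ")  # 4 + 4 + 8 bytes

print(rec_32bit_ptr, rec_64bit_ptr)  # 12 16
```

So a structure that is mostly data and only occasionally a pointer grows far less than a naive "everything doubles" estimate would suggest.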
What an outrageous AMD-bias Khan suffers from.
Yeh, right. AMD is doing "good" with phony misleading Mhz ratings
(eg "Athlon 4400+") to conceal a 2 Ghz gap behind P4. So good AMD
had to resort to crying and whimpering to the European trade
commission.

Oh, I'm sorry Bates, didn't know you were living on the Moon all of this
time, and that you weren't aware of the patently basic facts. AMD did in
fact introduce the first dual-core x86 server chips, which Intel still
hasn't been able to counter yet (though it's trying to get OEMs to pass
off a Pentium D as a server chip). And also it is the first to introduce
dual-core desktop x86's, because even though Intel *announced* its
Pentium EE first (and eventually Pentium D too), AMD was actually the
first to ship its Athlon 64 X2.

As for all of that irrelevant stuff about the MHz gap to P4 and its
antitrust case in front of the EU, don't know what your point is in
relation to dual-cores. Pretty sure that you don't know what your point
is either, you're just basically talking to keep yourself happy, which
is of course your right.

Yousuf Khan
 
Erm, Yugo?

Well - I don't claim that *any* cheap product is sufficient to take over
the world. I agree with the "commitment to excellence" requirement.
For mass-produced, low margin goods, it may actually be more important
to maintain high quality than for low volume/high margin products.
From the other angle, Apple has gone from a unique high-end product, with a
certain cachet, into churning out cheap toys. Does this mean they are due
for a tumble?:-)

This is more the counterexample I was thinking of. Apple, and I guess
Sony, focus on the slightly high end (even Apple's "cheap toys" are
more expensive than the competition), and manage to capture and hold a
significant part of the market. Both rely heavily on design and
strong branding, and (arguably perhaps?) maintain a "commitment to
excellence".

-k
 
Hmm, I wouldn't say too complex - we sold several with our software while
waiting to see what 80386 would do when it arrived and you just plugged the
bugger in and ran the software with a stub loader: LOADM Progname or some
such; we did have a couple of folks who wanted to know why it didn't speed
up Lotus 1-2-3.:-) As indicated, we were not sure the 80386/387 was going
to match them and that was the shock - it far surpassed them.

I said "too complex to take off" - not "too complex to work". They
were fairly widespread here, too, and Acorn was into that approach
as well. They never were a viable design for dominating the market.


Regards,
Nick Maclaren.
 
In the case of the 68K, no. However by the time the PowerPC and Alpha
came to be, x86 had an *enormous* lead.

Sigh. For the Nth time, that was true by the time that IBM actually
delivered but was NOT true when they COULD have delivered (about 2
years before). For the first 5 years of its life, the 80386/486
design languished in the desktop and el cheapo (reliability and security
no target) commercial markets. Masses of sales but not much margin.
Intel was sweating blood to get out of that and break into the high
end desktop and medium to large server markets.

Those of us who knew something about both technologies and, more
importantly, those markets felt that IBM's original PowerPC design
(which was NOT just a CPU, but a complete system) would have quickly
dominated the high-end desktop and commercial server markets - as I
said, every company bar Intel had signed up, and most were actively
planning products. But the 2 year delay was long enough for Intel
to get its act together, and the rest is history.
Workstation and server apps were aplenty for non-x86, but mass-market
desktop software? I doubt that any other architecture had even 10% of
the desktop applications that x86 had by 1990!

So? The point is that we knew THEN that the costs would come down,
so that a viable 1990 workstation/server design would reach down
to desktop prices by 1995. Intel would have had to take over an
established market to get out of the low-margin desktop ghetto,
while being squeezed on its most profitable lines. As it was, with
IBM dithering, it had no major opposition.
Intel and HP measure their spending on Itanium software development
money in the hundreds of millions, and just look how far that's got
them. (the IA-64 development fund was $250M alone, and that doesn't
touch things like compilers, porting Linux, HP-UX, etc). You really
think you're measly $30M is going to get anyone to blink? You're
dreaming!

Sigh. Remember who you are responding to.

Back in 1996, I pointed out that the IA64 software was predicated on
solving at least three problems that had defeated the best computer
scientists and vendors for 25 years, and were possibly insoluble.
HP persuaded Intel that they could be solved "to order" - I said
that they couldn't be. I was right and HP/Intel were wrong.

$30 million is MASSES for a half-sane system if managed competently.
Both BSD and Linux distributions have done it for a tiny proportion
of the cost.


Regards,
Nick Maclaren.
 
YKhan said:
That's interesting, I didn't know that Netware was available on 68K
processors at one time. Anyways, Netware quickly became x86-only when
it was clear which direction the industry was going.

If you study/write to/implement some of the early lower-level Netware
protocols, you'll notice the big-endian/little-endian confusion pretty soon!

There are lots of packet formats in BE, as well as others that are LE.

There are even examples of LE words stored in BE order (or vice versa)!
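That mixed-endian confusion is easy to sketch with Python's struct module (my own illustration; the field names and layout are invented, not an actual Netware packet format):

```python
import struct

# A hypothetical two-field record where the first 16-bit field is
# big-endian and the second is little-endian, as the post describes.
payload = struct.pack(">H", 0x1234) + struct.pack("<H", 0x5678)

(be_field,) = struct.unpack(">H", payload[:2])   # big-endian field
(le_field,) = struct.unpack("<H", payload[2:4])  # little-endian field

print(hex(be_field), hex(le_field))  # 0x1234 0x5678
```

Read both fields with the same byte order and one of them comes back byte-swapped, which is exactly the trap a protocol implementer hits here.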

Terje
 
Well - I don't claim that *any* cheap product is sufficient to take over
the world. I agree with the "commitment to excellence" requirement.
For mass-produced, low margin goods, it may actually be more important
to maintain high quality than for low volume/high margin products.


This is more the counterexample I was thinking of. Apple, and I guess
Sony, focus on the slightly high end (even Apple's "cheap toys" are
more expensive than the competition), and manage to capture and hold a
significant part of the market. Both rely heavily on design and
strong branding, and (arguably perhaps?) maintain a "commitment to
excellence".

Agreed. I wonder though where Sony might be now if not for Trinitron? The
perception of the "brand" seems to be flagging somewhat and they have just
announced 10,000 layoffs over two years or so.
 
Yousuf said:
Not really in this case, it won't contribute to code bloat. The AMD64
architecture defaults to 32-bit for data and 64-bit for addresses.
Therefore only pointers will become 64-bit, data structures can remain
32-bit and relatively efficient.


LOL. "data structures can remain 32-bit".
Khan was either momentarily confused or is perpetually ignorant
of computer architecture. Instruction operands and data structures are
different concepts.

In x86-64 long mode, a REX byte prefix is required to access
a full 64-bit register. One of many cases is the fact that
C++ uses pointers pervasively, which means loading a 64-bit address
into a 64-bit register happens a zillion times.
Designing additional code bloat into the ISA is foolish
as it wastes memory and impedes code prefetching/decoding.
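The one-byte cost of the REX prefix can be shown with hand-assembled encodings (my own illustration, not from the thread): the same register-to-register move costs an extra byte in its 64-bit form.

```python
# Hand-assembled x86 machine code for a register-to-register move.
# The 64-bit form needs a REX.W prefix byte (0x48) ahead of the
# same opcode and ModRM byte, so it is one byte longer.
mov_eax_ebx = bytes([0x89, 0xD8])        # mov eax, ebx  (2 bytes)
mov_rax_rbx = bytes([0x48, 0x89, 0xD8])  # mov rax, rbx  (3 bytes, REX.W)

print(len(mov_eax_ebx), len(mov_rax_rbx))  # 2 3
```

One byte per instruction sounds trivial, but across the pointer loads described above it is where the code-bloat claim comes from.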

AMD did in fact introduce the first dual-core x86 server chips,
which Intel still hasn't been able to counter yet (though it's
trying to get OEMs to pass off a Pentium D as a server chip).
And also it is the first to introduce dual-core desktop x86's,
because even though Intel *announced* its Pentium EE first
(and eventually Pentium D too), AMD was actually the
first to ship its Athlon 64 X2.


We'll see if AMD can implement anything truly challenging such as
HyperThreading or a novel revolutionary design such as Itanium 2.
Intel implemented EMT64 and dual-cores without even sweating.
 
Nathan Bates said:
We'll see if AMD can implement anything truly challenging such as
HyperThreading or a novel revolutionary design such as Itanium 2.
Intel implemented EMT64 and dual-cores without even sweating.

Hyperthreading is perhaps "challenging" but it's also "performance
challenged". (Find a benchmark on a HT CPU and then read
"Hyperthreading disabled" in the fine print)

Hypertransport is an act that Intel still hasn't been able to
follow; the FSB bottleneck kills you at two CPUs.

Casper
 
|>
|> >We'll see if AMD can implement anything truly challenging such as
|> >HyperThreading or a novel revolutionary design such as Itanium 2.

One of the objectives of engineering design is to achieve an
objective while minimising how challenging the implementation is.

|> Hyperthreading is perhaps "challenging" but it's also "performance
|> challenged". (Find a benchmark on a HT CPU and then read
|> "Hyperthreading disabled" in the fine print)

Indeed. And, while there are still claims that the Montecito will
support it, there is good evidence that Intel are scrapping it in
their New Microarchitecture.

Also, the Itanium is still very implementation-challenged - for
example, the Register Stack Engine STILL isn't there after Intel
failed to implement it in at least the Merced and McKinley. No,
lazy mode does not count - LOTS of other vendors (Sun and Hitachi,
to name but two) have done that.

|> >Intel implemented EMT64 and dual-cores without even sweating.
|>
|> Hypertransport is an act that Intel still hasn't been able to
|> follow; the FSB bottleneck kills you at two CPUs.

My guess is that the New Microarchitecture will drop the FSB and
move to the same sort of design as AMD and most other vendors.
Does anyone know for certain?


Regards,
Nick Maclaren.
 