Intel strikes back with a parallel x86 design

  • Thread starter: Jim Brooks
Nick said:
A question for all you omniscient ones out there - what were the
worst computers of all time? IBM's candidate must surely be the
PC/RT, but AT&T is a strong competitor with the 3B2.

Regards,
Nick Maclaren.

GEC 4000 series, as installed in UCL's EUCLID system
from around 1980.

Apparently they were reliable enough to control railways,
but that reliability certainly wasn't in evidence in the
half-baked 'loosely-coupled multiprocessor' delivered as EUCLID.

There were some nice ideas in the 4000 but they invariably
looked vile by the time you came to program them. In a
hideous language called Babbage. With a primitive card-
oriented line editor ported from GEORGE 3 because GEC's
own editors were so terrible. On a temporary disk that was
by design cleared on reboot. Which was needed whenever an
interprocessor link failed. Which they did, frequently, and
having failed, couldn't be used to get your data to a
'permanent' disk.

Actually I'm sure they can't be the worst computers of
all time but they're certainly the ones I look back on
least fondly.

Nick
 
Del Cecchi said:
Here at IBM it is my belief that Programming and Programmers are
classified as engineering and scientific type activities in terms of job
descriptions and other HR type activities. I don't believe that any
distinctions are made relative to methods or output. At least that is how
it was a few years ago.

I don't disagree with that. But we were talking about the commercial
marketplace, the programs written for it, and the programmers writing those
programs, as opposed to the engineering/scientific market. As evidenced
by the old (pre-S/360) distinction between "scientific" computers and
"business or commercial" computers, there at least used to be a difference.
In the context of this difference, talking about all programmers as
"scientific" just confuses things and blurs the distinction.
There were some distinctions made between programmers writing code and
engineers writing code. Engineer code was considered hardware.

It may well have been that internal programming groups used tools and
facilities that were not part of shipped OS or application products.

Nick seems to make a distinction between the two types of
programs/programmers based on the tools they use. I don't see that
difference.
Here in Rochester we had and have many programmers working on a variety of
computers that were and are sold to the commercial marketplace almost
exclusively. They had debugging tools available.

Of course.
 
In comp.sys.ibm.pc.hardware.chips Bill Davidsen said:
Got a model number for an IBM rackmount server using the 386? We sure
never saw such a thing. You manage (as usual) to disparage without
providing a single verifiable fact.

Rackmount, no, but I saw quite a number of places using first generation
PS/2 386 systems as file servers or as the server for POS applications
around 1989-1990 or so. I never saw a rackmount *anything* until about
1993.
 
Sigh. The distinction I was using is the one that was used at the
time by IBM, Intel and others - and to a great extent still is.

"Business/commercial" (or "commercial", for short) includes everything
that is used in most offices etc. The fact that a lot of that was used
in academia long before and still is (e.g. email, text processing)
seems to have escaped the marketdroids and bean counters.

"Scientific/technical" includes most conventional programming, as well
as applications like Spice. It was and is regarded as much less
important by the marketdroids and bean counters, partly because it was
and is a much smaller market.

Grunt. I'm afraid your pigeon-holes are much too highly delineated - the
results are obvious. While there are marketroids who have to think in such
terms, fortunately their policies get bypassed by people who know better
and there are users who appreciate the lateral thinking. Besides, that
distinction is not really that important in terms of whether the 68K,
PowerPC or whatever was competent/competitive to go up against the 386/486
in the market.
The fact of the matter (whether you like it or not) is that Intel
established itself as the chip maker for the IBM PC, which was never
intended by IBM to be used as much more than a programmable terminal.

You can presume whether I like something or not; that was a minor time
slice in the PC evolution. It did not take long for even IBM to get the
msg that a 327x was not going to hack it for the "user experience" in the
brave new world of spreadsheets, word processing and... graphics, which IBM
did not even make a PC card/monitor for to begin with.
Intel and Microsoft were not so blinkered, and realised the wider
potential. However, in the days you are talking about, they were
targeting the small business/commercial market and trying to break
into the medium business/commercial market. While they would SELL
to the scientific/technical, they didn't regard it as worth changing
any plans for. And, to a large extent, they still don't.

The bottom line, which I was trying to get across, is that in a software
system which exercised the CPU across its whole spectrum of operations, the
68K was a dog. It was just as much a dog at pure "commercial" work as it
was at quasi-scientific "business" application work - the CPU was just slow
in general and saddled with an idealistic orthogonal ISA. The 386 was
simply a better performer, and the PowerPC was never going to be enough
better.
 
I read that Intel was developing an AMD compatibility chip about 18
months ago. I read it here, and many people said it was FUD.

More like 3 years ago -- Yamhill of course -- and the deniers were Intel
employees IIRC.
 
A question for all you omniscient ones out there - what were the
worst computers of all time? IBM's candidate must surely be the
PC/RT, but AT&T is a strong competitor with the 3B2.

Yeah those two are bad - 3B2 had the additional distinction of being the
artifice used to wreck NCR. I still think in terms of being ugly to
everyone, users, programmers, operators, etc., the worst was actually quite
successful in its, err, market: System/3.
 
George Macdonald said:
You can presume whether I like something or not; that was a minor time
slice in the PC evolution. It did not take long for even IBM to get the
msg that a 327x was not going to hack it for the "user experience" in the
brave new world of spreadsheets, word processing and... graphics, which IBM
did not even make a PC card/monitor for to begin with.

however, one of the reasons behind the big uptake of ibm/pcs ... was that
businesses could get a single machine (ibm/pc) for about the same
price as a 327x ... and it could provide both 327x terminal function and
also the capability of doing some amount of local processing on the desk ...
as well as having the footprint of a single display & keyboard (didn't
require two screens and two keyboards on each desk in order to have
both mainframe dataprocessing access and local desktop stuff).
http://www.garlic.com/~lynn/subnetwork.html#emulation

the big uptake created quite a large install base of terminal
emulation products ... which was being threatened as PCs evolved into
more advanced computing capability ... as well as client/server was
raising its ugly head. this sort of gave rise to SAA ... which
nominally claimed that it was going to make it transparent where
something actually executed (PC or mainframe) ... but there was a lot
of effort in SAA going on to port major PC applications to the
mainframe ... and use the PC purely as a sophisticated display device.

In this period, my wife had co-authored and presented a corporate
response to a large federal request for a secure, campus-like distributed
environment ... where she had formulated a lot of the pieces of 3-tier
architecture. we then expanded on that and were making 3-tier and
middle-layer customer executive presentations ... which didn't endear
us to any of the SAA proponents. recent postings going into more
detail
http://www.garlic.com/~lynn/2005q.html#18 Ethernet, Aloha and CSMA/CD
http://www.garlic.com/~lynn/2005q.html#19 HASP/ASP JES/JES2/JES3
http://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS?

some number of collected postings referencing 3-tier architecture
http://www.garlic.com/~lynn/subtopic.html#3tier
 
George Macdonald said:
Yeah those two are bad - 3B2 had the additional distinction of being the
artifice used to wreck NCR. I still think in terms of being ugly to
everyone, users, programmers, operators, etc., the worst was actually quite
successful in its, err, market: System/3.

What didn't you like about S/3? I suppose you didn't like its
descendants S/32, S/34, S/36? The instruction set was a stripped-down
storage-to-storage design resembling the S/360's. Or do you have something
against RPG? The cute little cards?

del
 
Rackmount, no, but I saw quite a number of places using first generation
PS/2 386 systems as file servers or as the server for POS applications
around 1989-1990 or so. I never saw a rackmount *anything* until about
1993.

I think they were called industrial PCs and possibly started with XTs
(or maybe even original PCs) ... we had some number of rackmount
PC/ATs with mainframe channel attach cards (PCCA). PCCA showed up in
spring '85 ... in configurations with PC/Net LAN cards.

A flavor of this evolved into the 8232 ... the mainframe tcp/ip
controller; an industrial pc/at and a mainframe channel interface card
with an 8232 nameplate (big hairy bus&tag tailgate interface).

there were some issues with how the 8232 TCP/IP support was actually
implemented ... and as a result the mainframe side could
burn 100 percent of one 3090 processor getting 44kbytes/sec. I then
added RFC1044 support to mainframe tcp/ip; in some tuning at cray
research between a 4341-clone and a cray, it was getting mbyte/sec
(4341 channel media thruput) using only a small amount of the 4341
(aka about 25 times more bits for maybe 1/100th the cpu
processing). misc. past rfc 1044 posts
http://www.garlic.com/~lynn/subnetwork.html#1044
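
just as a sanity check on that arithmetic (the 44kbytes/sec, mbyte/sec
and 1/100th figures are the ones quoted above; the ratios derived from
them are only illustration), a quick python sketch:

    # figures as quoted above; note the CPUs differ (a 4341 was a much
    # smaller processor than a 3090, so the gain is understated if anything)
    base_rate = 44 * 1024          # bytes/sec via 8232 path, ~100% of one 3090 CPU
    rfc1044_rate = 1024 * 1024     # ~1 mbyte/sec via the RFC 1044 path

    throughput_ratio = rfc1044_rate / base_rate
    print("throughput ratio: ~%.0fx" % throughput_ratio)     # ~23x ("about 25 times")

    cpu_fraction = 1.0 / 100       # "maybe 1/100th the cpu processing"
    per_byte_gain = throughput_ratio / cpu_fraction
    print("cpu cost per byte moved: ~%.0fx better" % per_byte_gain)  # roughly 2300x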

at the time there were some rumors and folklore about a project that
integrated some tcp/ip support into the standard SNA processing
environment (vtam, 37xx, etc), where they had hired an outside
contractor for the effort. Supposedly the first cut had tcp/ip running
with significantly higher thruput than LU6.2 thru the same
infrastructure. supposedly it was sent back as flawed and had to
be redone (at least until tcp/ip was no longer faster than LU6.2).

the industrial pc/at with PCCA had also started out with significantly
higher thruput than what was finally shipped in the 8232 support.
 
In comp.sys.ibm.pc.hardware.chips YKhan said:
Yeah, but at that time AMD was cooperating with Intel. My assumption is
if you're cooperating, then you're not competing. So they likely just
followed Intel's lead in pricing there. Nothing like hating someone's
guts to start a price war (or any war).

I think it was around 1991 or so when the 386/40 came out that AMD really
started to compete rather than cooperate as a second source... I know that a
lot of the faster late 80286 parts (16/20 MHz) were non-Intel but I don't
recall if Intel also made > 12MHz 80286s, and there certainly wasn't a
marketing effort.

OTOH, I don't generally recall much chance to buy 286 chip + motherboard
separately... that is something that came in a little with the 386, and
really became most common (along with ZIF sockets) when 486s started to
become cheaper. One thing that looking back seems to have helped AMD was
the chance to market the processor separately from the motherboard.
 
Nate said:
I think it was around 1991 or so when the 386/40 came out that AMD really
started to compete rather than cooperate as a second source... I know that a
lot of the faster late 80286 parts (16/20 MHz) were non-Intel but I don't
recall if Intel also made > 12MHz 80286s, and there certainly wasn't a
marketing effort.

Well, the start of the 386 era was also the start of the Intel/AMD war.
It was the basis for the first of their many lawsuits. Intel never made
286's faster than 12MHz, because they would have competed with its own
then-brand-new 386/16's. At the time, it looked like 286's ran 16-bit
apps a little bit faster than 386's at the same MHz. Nothing as
drastically mismatched as the Pentium 3 vs. Pentium 4, mind you, where
the P3 matched P4's of 40-50% higher clock speed. But it does now seem
to be a trend at Intel: each successive generation seems to be a
little less efficient at the same MHz than the previous one.
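
To put a rough number on that P3/P4 comparison (the 40-50% figure is
from above; the clock speeds below are hypothetical, chosen only to
make the ratio concrete):

    # hypothetical clocks illustrating "P3 matched P4's of 40-50% higher clock"
    p3_clock = 1000.0                  # MHz
    p4_clock = p3_clock * 1.45         # midpoint of the 40-50% claim

    # equal delivered performance at those clocks implies per-MHz efficiency:
    p4_work_per_mhz = p3_clock / p4_clock
    print("P4 per-MHz work relative to P3: ~%.2f" % p4_work_per_mhz)   # ~0.69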
OTOH, I don't generally recall much chance to buy 286 chip + motherboard
separately... that is something that came in a little with the 386, and
really became most common (along with ZIF sockets) when 486s started to
become cheaper. One thing that looking back seems to have helped AMD was
the chance to market the processor separately from the motherboard.

Yeah, I can see that.

Yousuf Khan
 
I don't understand what you are saying here. For IBM mainframes, they have
COBOL compilers, test data generators, interactive editors, profilers, code
repository systems, etc. Some or all of these are used by the aforementioned
hypothetical COBOL programmer. It is hard to see how they could do their
job without them.

Think back to the 1970s. As the Wheelers can witness, most of those
were not available under MVS (or, at least, were hopeless), and CMS
was extensively used for developing for MVS. Yet IBM classified CMS
as a system for scientific/technical use rather than for business/
commercial (though it wasn't that simple).

[ IBM's standard TSO editor was like an interactive version of IEBUPDTE,
to give just one horrible example. ]

And, again, in the mid-1980s. IBM's plans for SAA were that the
development of MVS applications would be done on OS/2, and that many
of those tools would not be available at all on MVS. Seriously.
So your claim is that debugging facilities were developed exclusively for
scientific/technical environments and any use by commercial programmers was
a "happy accident"? I find that an odd belief, especially from a company
whose middle name is "Business".

Sigh. No. Please don't think that IBM has a single corporate mind,
let alone a consistent plan. It hasn't had one since 1955, to my certain
knowledge. Think of it this way.

IBM had and has a collection of products, which are used together by
the technical staff and customers in complicated ways. For marketing
and administrative purposes, these are divided rather arbitrarily
into segments - of which two are the ones we are talking about. For
several good and bad reasons, most systems intended for developing
conventional programs (and I am including software, here) ended up
in the scientific/technical segment even when the final product was in
the business/commercial one. Not a major problem.

Where the issue became a problem is when some head-in-the-clouds
executive looked at the books, and decided to cut back on the less
profitable segment without considering the consequences, or imposed
a new structure that broke the ad-hoc links across segments that kept
them vaguely in step. It happened many times in IBM, and I have seen
it happen with other vendors.
Perhaps we have a different definition of a commercial environment and of
development tools. I regard a typical commercial environment as, say, a bank
where the programmers write COBOL programs using such environmental tools as
CICS and DB2 (or their equivalents from other vendors) to accomplish the
primary business of the bank (managing accounts). The tools, such as the
ones I mentioned above, support that.

Yes. One of the reasons that the above does not apply in the way that
it used to to MVS (sorry, z/OS) and OS/400 (whatever it is now) is that
there AREN'T any corresponding "scientific/technical" environments to
develop on. But I can assure you that the mindset remains.

And, while it was extremely well developed in IBM, it is fairly common
in other organisations. It remains less visible because few or none
have gone as far down the line of separate product ranges for the two
segments as IBM did. But I can assure you that it is there, as it is
an artificial barrier that I trip across regularly!

An aspect that has never concerned me directly, but I have observed
several times with several vendors, is that the fancy development
tools (especially debuggers) often work with Fortran and C, but not
Cobol, even when they run on the same system. This often has the
effect that the Cobol system ships with its own set of debugging
tools, and debugging Cobol+Fortran codes becomes a nightmare.

One of the more common problems I hit is that parallel support (e.g.
MPI), batch schedulers etc. are classed as "scientific/technical"
and "high-RAS" products (whether management environments, automated
log management or high-RAS file systems) as "business/commercial".
Are they validated/tested together? Don't be silly. Do they work
together? Only in demonstrations. This IS improving, as more of the
commercial customers start to use the parallel tools and schedulers,
but is still a problem.


Regards,
Nick Maclaren.
 
In comp.arch Nick Maclaren said:
True. It should be able to :-)

They were - I ran lots of large apps on Unix on them ;-) Windows 3.1
and apps were a different issue.
For some meaning of "could", I suppose.

Probably more for some meaning of "mainframe" - you know, a big
box doing lots of data processing.
 
They were - I ran lots of large apps on Unix on them ;-) Windows 3.1
and apps were a different issue.
Quite.


Probably more for some meaning of "mainframe" - you know, a big
box doing lots of data processing.

And damn the RAS, automatic diagnostics, I/O capacity and all the
other things traditionally associated with mainframes.

On this matter, did anyone ever build a fully cache coherent SMP
system with more than a couple of CPUs out of the 80386 or 486?
If I recall, the Sequent had a bit of implicit coherence, but
nothing like enough to support POSIX threads.


Regards,
Nick Maclaren.
 
Here at IBM it is my belief that Programming and Programmers are
classified as engineering and scientific type activities in terms of job
descriptions and other HR type activities. I don't believe that any
distinctions are made relative to methods or output. At least that is
how it was a few years ago.

There is (or at least was) a legal difference. Programmers are classified
differently than engineers because of the differences in overtime rules
forced by a few lawsuits in the '70s.
There were some distinctions made between programmers writing code and
engineers writing code. Engineer code was considered hardware.

Because of the above overtime rules. Programmers doing "coding" are
non-exempt (kinda/sorta, under certain circumstances). Engineers are
exempt. This wasn't really a technical distinction.
It may well have been that internal programming groups used tools and
facilities that were not part of shipped OS or application products.

Many. PL/AS was never shipped to customers, AFAIK. Many of the docs
(e.g. language reference) were Registered Confidential.
Here in Rochester we had and have many programmers working on a variety
of computers that were and are sold to the commercial marketplace almost
exclusively. They had debugging tools available.

Rochester always did things "differently". ;-)
 
Grunt. I'm afraid your pigeon-holes are much too highly delineated - the
results are obvious. While there are marketroids who have to think in such
terms, fortunately their policies get bypassed by people who know better
and there are users who appreciate the lateral thinking. Besides, that
distinction is not really that important in terms of whether the 68K,
PowerPC or whatever was competent/competitive to go up against the 386/486
in the market.


You can presume whether I like something or not; that was a minor time
slice in the PC evolution. It did not take long for even IBM to get the
msg that a 327x was not going to hack it for the "user experience" in the
brave new world of spreadsheets, word processing and... graphics, which IBM
did not even make a PC card/monitor for to begin with.

While I agree with your sentiment, there wasn't any lack of a graphics
card. The CGA card and monitor were announced and shipped with the 5150.
I had a CGA card (couldn't afford the monitor) on my "first day order"
5150. ...along with the monochrome card and monitor.

OTOH, contrary to Nick's point, the 5150s did not ship with a 3270
emulator card. Those came later (IIRC IBM wasn't even first), as it
became obvious the PC was a better and cheaper 3270. Saying that the
PC was originally intended to be a 3270 replacement is "new history", at
best.
The bottom line, which I was trying to get across, is that in a software
system which exercised the CPU across all its spectrum of operations,
the 68K was a dog. It was just as much a dog at pure "commercial" work
as it was at quasi-scientific "business" application work - the CPU was
just slow in general and saddled with an idealistic orthogonal ISA. The
386 was simply a better performer, and the PowerPC was never going to
be enough better.

It was supposed to be. ;-) Remember, RISC was the savior and CISC was a
dead end. Oops.
 
Yeah those two are bad - 3B2 had the additional distinction of being the
artifice used to wreck NCR. I still think in terms of being ugly to
everyone, users, programmers, operators, etc., the worst was actually quite
successful in its, err, market: System/3.


My vote for the worst in the IBM category is the System/7. It was the
turkey that kept the UTS (a sweet internal-only system designed and built
in Rochester) from making it out the door. ...too many S/7s in the
warehouse, or that was the legend, anyway.
 
was that blue lightning? I thought that was a 486, using the license we
got when we kept Intel from going bankrupt by buying 10 percent of their
stock (newly issued).

Blue Lightning was a clock-multiplied (2X and 3X) 486-class part, with no
FPU. The license restricted IBM from integrating the FPU or selling bare
processors. The Blue Lightning found its way into processor upgrade
"boards" from Kingston, Evergreen, etc.
 
Yup. It was an extreme example of where IBM surveyed its customers,
completely ignored the geeks, and regarded the technical responses
of the stratospheric suits as gospel. In the technical user groups,
IBM reps were amazed at the negative reaction when the project was
announced.

The project was never "announced". It died a silent, though excruciating
death.
There was a later example of this when the PowerPC was FINALLY
released. It was downgraded from SCSI to IDE, and I rang the head
of UK marketing. He said that customers had demanded it because
they said that they needed to reuse their old disks in new machines.
I told him that nobody had done that in commerce, and it would
cut the machines off from the high-quality peripherals. He didn't
believe me and said "well, we will see".

Are you talking about OpenPC, or some such? "PowerPC" is a processor
architecture. I don't believe it was used for a system, though I could
be wrong.
Well, we did. Guess who was right?

That didn't take any more prescience than your call on Itanic. ;-)
 