Intel strikes back with a parallel x86 design

  • Thread starter: Jim Brooks
Bill said:
As the number of cores goes up the bandwidth to memory better go up as
well, and that includes everything in the path including the memory
devices. Server memory has had 2 and 4 way interleave for a long time,
I'm guessing that width is easier to add than speed, and that will come
sooner than later. Plus there may be issues which need another
generation of fab to solve, regarding power usage per core.

They need to come up with more optimal memory models, unless the
extremely suboptimal ones they're using now are part of a deliberate
plot to kill off shared memory multi-processing. We all know how
beloved SMMP is with some here. :)
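Bill's interleave point can be made concrete with a toy model. The C
sketch below is not from the thread, and its constants (one request
issued per cycle, a hypothetical 8-cycle bank busy time) are
illustrative assumptions. With k-way low-order interleave, consecutive
line addresses map to different banks, so a streaming read overlaps
accesses instead of queueing on a single busy bank:

/* toy model of k-way memory interleave (illustrative constants) */
#include <stdio.h>

#define TBUSY 8                     /* cycles a bank stays busy per access (assumed) */

static long stream_cycles(int banks, int accesses)
{
    long free_at[16] = {0};         /* cycle at which each bank is next free */
    long now = 0;
    for (int i = 0; i < accesses; i++) {
        int b = i % banks;          /* low-order interleave: line address mod banks */
        if (free_at[b] > now)       /* stall until this bank is free again */
            now = free_at[b];
        free_at[b] = now + TBUSY;   /* bank is tied up for TBUSY cycles */
        now += 1;                   /* at most one request issued per cycle */
    }
    return now;
}

int main(void)
{
    for (int k = 1; k <= 8; k *= 2) /* 1-, 2-, 4-, 8-way interleave */
        printf("%d-way: %ld cycles for 1000 line reads\n",
               k, stream_cycles(k, 1000));
    return 0;
}

With these numbers the run time drops roughly in proportion to the
interleave factor until the bank busy time is fully covered, which is
the "width is easier to add than speed" trade Bill describes.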
 
A question for all you omniscient ones out there - what were the
worst computers of all time? IBM's candidate must surely be the
PC/RT, but AT&T is a strong competitor with the 3B2.

i wouldn't consider it even close to the 8100.

although there was the issue with the pc/rt ... what to do with all
the pl.8 programmers that had been on the displaywriter project
.... and coming up with VRM for the pl.8 coders to build an abstract
virtual machine ... for unix to be ported to (rather than the bare
iron) ... which in turn led to the fact that every new device ...
required a pl.8/vrm device driver in addition to a new (non-standard)
unix device driver

i think that it was evans who asked my wife to audit the 8100
.... which helped finally get it killed.

she also contributed to getting RP3 killed.

when we weren't allowed to bid on nsfnet
http://www.garlic.com/~lynn/2002k.html#12 ... nsfnet backbone RFP announce
http://www.garlic.com/~lynn/2000e.html#10 ... nsfnet rfp award announce

she went to the director of NSF and got an audit of the backbone we were
running. This led to a letter (from NSF, co-signed by some other fed
agencies) to the head of research (copying the ceo and a couple
others) saying that what we had operational was at least five years
ahead of all bids to build something new.

this contributed to her being asked to leave research ... and her
transfer to FSD. Shortly after, when research was making the rounds of
the organizations that had been funding RP3 (looking for more money),
FSD asked her to audit the project. Which in turn led to no new RP3
funding from FSD (which helped hasten RP3's demise).
 
ref:
http://www.garlic.com/~lynn/2005q.html#46

for arcane references ... 8100
http://en.wikipedia.org/wiki/IBM_8100

rp3 reference ... even slightly on-topic with respect to
parallel operation
http://www-03.ibm.com/ibm/history/exhibits/vintage/vintage_4506VV1005.html

of course with respect to earlier posts
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/2005q.html#38

the part about having stuff transferred to kingston for numerical
intensive computing and being told we couldn't work on anything with more than
four processors ... may have had some leftover issues because of RP3
.... in addition to the issue of encroaching on industrial strength
commercial data processing.
 
Nick Maclaren said:
I didn't mention it, because it wasn't relevant. Back in the days
of the 80386, no serious company used an IBM PC for that! Intel's
second success was breaking into that market, but that came after
the PowerPC had failed.


What those people do can't really be called conventional programming,
and quite a lot of the languages they use aren't even Turing complete
(ignoring finiteness restrictions). The conventional programming for
the "commercial" systems is done by a fairly small number of people
(e.g. the people who develop Oracle), and the vast number use those
higher-level programs.

Nonsense. Totally false distinction, and smells of academic elitism.

Whether a language is Turing complete or not is of zero interest.
The only thing of interest is whether the language will do the job at hand.
A language is just a tool. Programming is programming.
Is it only "real programmers, named Mel" who do "conventional
programming"? And what the heck does "conventional" mean
in any case? Plugboards? Loom cards?
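For what it's worth, the disputed distinction is easy to illustrate.
The C sketch below (mine, purely illustrative) evaluates a tiny formula
language with numbers, '+', '*' and parentheses. The language itself
has no loops or recursion, so every "program" in it terminates and it
is not Turing complete, yet it is perfectly adequate for a spreadsheet
cell or a pricing rule, i.e. for the job at hand:

/* evaluator for a deliberately non-Turing-complete formula language */
#include <stdio.h>
#include <stdlib.h>

static const char *p;                  /* cursor into the formula text */

static double expr(void);

static double factor(void)            /* factor := number | '(' expr ')' */
{
    if (*p == '(') {
        p++;                           /* consume '(' */
        double v = expr();
        if (*p == ')') p++;            /* consume ')' */
        return v;
    }
    char *end;
    double v = strtod(p, &end);        /* plain number */
    p = end;
    return v;
}

static double term(void)               /* term := factor { '*' factor } */
{
    double v = factor();
    while (*p == '*') { p++; v *= factor(); }
    return v;
}

static double expr(void)               /* expr := term { '+' term } */
{
    double v = term();
    while (*p == '+') { p++; v += term(); }
    return v;
}

int main(void)
{
    p = "2*(3+4)+1";                   /* a complete 'program' in the language */
    printf("2*(3+4)+1 = %g\n", expr());
    return 0;
}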
I can witness that IBM used to regard the actual programming of even
some of the most "commercial" codes as a "scientific/technical"
activity :-)

Of course. The discipline lives in science, not in religion.

--

... Hank

http://home.earthlink.net/~horedson
http://home.earthlink.net/~w0rli
 
Nick Maclaren said:
A question for all you omniscient ones out there - what were the
worst computers of all time? IBM's candidate must surely be the
PC/RT, but AT&T is a strong competitor with the 3B2.


Univac 1110 with the 40 MB disks and a too-small drum (FH-432) for swap.
Running early version of OS-1100 in a time-sharing environment.
Uptime tended to be measured in minutes.
"Hey, I managed to get logged in before it crashed again."

--

... Hank

http://home.earthlink.net/~horedson
http://home.earthlink.net/~w0rli
 
Nonsense. Totally false distinction, and smells of academic elitism.

So what? I wasn't saying that I agreed with the distinction. I was
explaining how those companies use the term, and how they orient
their plans around it. Why ON EARTH do you think that I, as an
academic, necessarily agree with everything I describe?

For heaven's sake, do you think that every mediaeval historian who
describes the viewpoint of the Inquisition AGREES with burning
people at the stake?
Whether a language is Turing complete or not is of zero interest.
The only thing of interest is whether the language will do the job at hand.
A language is just a tool. Programming is programming.
Is it only "real programmers, named Mel" who do "conventional
programming"? And what the heck does "conventional" mean
in any case? Plugboards? Loom cards?

Sigh. Do PLEASE read what I say. I was making no value judgments,
but merely commenting that such uses are not CONVENTIONAL programming.
They aren't. It wasn't long ago that they weren't included in
programming at all - and I am NOT, repeat NOT, referring to only
academic use of the word.

I used that description to try to explain the categorisation that was,
and to a large extent still is, used by large companies like IBM and
Intel.

My point was and is SOLELY that they categorise MOST conventional
programming of the type that I was describing as a scientific/
technical activity. That is ALL - get it? - ALL.


Regards,
Nick Maclaren.
 
Uptime tended to be measured in minutes.
"Hey, I managed to get logged in before it crashed again."

From what I have included, how would you tell what it was?


Regards,
Nick Maclaren.
 
Hank Oredson said:
Univac 1110 with the 40 MB disks and a too-small drum (FH-432) for swap.

You could have used the FH-1782, which at 17.82 ms access time was still
much faster than any disks of the day. It had, IIRC, 8 times the capacity
of the FH-432. As for disks, wasn't the 8440 out by then? They were about
110 MB.

The big problem with the 1110 was the two-level main memory, especially
since the primary was plated-wire technology, which never worked well at
all.
Running early version of OS-1100 in a time-sharing environment.

The first level of OS-1100 that ran the 1110 was level 30, so it had been
out a while. But I agree that stability was a problem, especially when you
pushed it. It got a lot better by level 36.
 
So what? I wasn't saying that I agreed with the distinction. I was
explaining how those companies use the term, and how they orient
their plans around it. Why ON EARTH do you think that I, as an
academic, necessarily agree with everything I describe?

If you're going to present a distinction, you need to either present
it as being somebody else's or expect people to think you agree with
it.
For heaven's sake, do you think that every mediaeval historian who
describes the viewpoint of the Inquisition AGREES with burning
people at the stake?

If the historian presented the bald statement, "witches should be
burnt at the stake", then yes I'd think they agreed.
Sigh. Do PLEASE read what I say. I was making no value judgments,
but merely commenting that such uses are not CONVENTIONAL programming.
They aren't. It wasn't long ago that they weren't included in
programming at all - and I am NOT, repeat NOT, referring to only
academic use of the word.

I did read what you said. You appear to be expecting us to read what
you meant, which can be much harder.
I used that description to try to explain the categorisation that was,
and to a large extent still is, used by large companies like IBM and
Intel.

My point was and is SOLELY that they categorise MOST conventional
programming of the type that I was describing as a scientific/
technical activity. That is ALL - get it? - ALL.

Then that is what you should have said.
 
snip
My point was and is SOLELY that they categorise MOST conventional
programming of the type that I was describing as a scientific/
technical activity. That is ALL - get it? - ALL.

OK. I am trying to understand. Would you say that someone who programs a
payroll application in COBOL, regardless of whether it uses Oracle or tape
files for its input, is engaging in a "scientific/technical activity"?
 
Got a model number for an IBM rackmount server using the 386? We sure
never saw such a thing. You manage (as usual) to disparage without
providing a single verifiable fact.

Who gives a rat's ass about the sheet-metal or the logo on the front?
A '386 is a '386! Sheesh!
 
Bill said:
I remember that when the 486 came out, it was a few months before the
dealer price for a system board with CPU dropped below $1000. And when
we started benchmarking them management wouldn't believe the numbers,
because they were faster than the mainframe.

Cyrix was not only a price competitor, their 387 was markedly faster
than Intel's at something we used, most probably transcendental
functions. For some programs the real time was about 2x faster with Cyrix.

Cyrix wasn't the only 387 competitor. I had an IIT 2x87 paired with my 386
processor. I had a 386 motherboard that could take either a 287 or a 387
coprocessor. I figured I could put a cheaper 287-class coprocessor on it,
and the IIT was actually software-compatible with the 387 coprocessors
rather than the 287s. I figured it was a nice compromise in between.
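The sort of comparison Bill mentions is easy to reproduce with a
trivial microbenchmark; the C sketch below is illustrative, not his
actual workload, and the loop count is arbitrary. On a 386/387 the
sin() and exp() calls reduce to x87 transcendental instructions (FSIN,
F2XM1 and friends), which is exactly where a clone FPU's speed
advantage would show up:

/* toy transcendental-throughput benchmark (illustrative, not Bill's code) */
#include <stdio.h>
#include <math.h>
#include <time.h>

int main(void)
{
    const long n = 1000000L;
    volatile double sink = 0.0;    /* keeps the loop from being optimised away */
    clock_t t0 = clock();
    for (long i = 0; i < n; i++) {
        double x = (double)i / (double)n;
        sink += sin(x) + exp(x);   /* transcendental-heavy inner loop */
    }
    clock_t t1 = clock();
    printf("%ld sin+exp pairs: %.2f s (sink=%g)\n",
           n, (double)(t1 - t0) / CLOCKS_PER_SEC, sink);
    return 0;
}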

Yousuf Khan
 
Hank said:
Univac 1110 with the 40 MB disks and a too-small drum (FH-432) for swap.
Running early version of OS-1100 in a time-sharing environment.
Uptime tended to be measured in minutes.
"Hey, I managed to get logged in before it crashed again."

OTOH, they did manage to reboot those beasts in something like 10% (or
maybe just 1%) of the time a current PC needs!

AFAIR, they could also checkpoint and restart many jobs after the
machine came back up. I.e. a crash didn't lose everything.

OTOH, being a first-year student with a stack of hand-punched cards,
every crash did mean "Resubmit the card pile and hope you get a printout
this time!"

(Most of those printouts were 4-6 pages to simply tell you about a
syntax error in one of the leading EXEC statements. :-( )

Terje
 
OK. I am trying to understand. Would you say that someone who programs a
payroll application in COBOL, regardless of whether it uses Oracle or tape
files for its input is engaging in a "scientific/technical activity"?

Believe it or not, the answer was/is often "yes"! Seriously.

The distinction was and is very often whether a development environment
is an essential component. This confusion is one of the many reasons
that IBM and others never went over to a pure business/commercial
product line. The commercial side depended more on the supposedly
scientific/technical line than the executive suits realised.

Sometimes, as with IBM's PS/2, SAA etc., there is an attempt to set
up a special category of development systems for commercial codes,
but they normally fold very fast, as the market is much smaller than
even the academic programming one. For reasons given in the next
paragraph ....

Both did, however, lead to the catastrophic state of affairs where there
may be no debugging facilities whatsoever on the systems that actually
run the 'developed' codes so that, when they bomb out in the field,
the official - and correct - response is that no assistance can
be given unless the customer develops a test that can repeat the
failure on the development system (which may not be marketed).

Been there - beat my head against that :-(


Regards,
Nick Maclaren.
 
If the historian presented the bald statement, "witches should be
burnt at the stake", then yes I'd think they agreed.

Yes, quite. The word "should" implies a value judgment, not a mere
categorisation. Please find such a word or expression in what I
wrote.
I did read what you said. You appear to be expecting us to read what
you meant, which can be much harder.

Sigh. I had assumed that more readers would be familiar with the IT
industry, and its often bizarre terminology. I had failed to notice
the cross-postings to the happy hacker newsgroups.

I shall try to do better next time, at the risk of talking down to
the more informed audience.


Regards,
Nick Maclaren.
 
Nick Maclaren said:
Believe it or not, the answer was/is often "yes"! Seriously.

OK, I will accept that as your opinion, though I disagree with the
implications. (see below)
The distinction was and is very often whether a development environment
is an essential component.

I don't understand what you are saying here. For IBM mainframes, they have
COBOL compilers, test data generators, interactive editors, profilers, code
repository systems, etc. Some or all of these are used by the aforementioned
hypothetical COBOL programmer. It is hard to see how they could do their
job without them.
This confusion is one of the many reasons
that IBM and others never went over to a pure business/commercial
product line.

They dropped the vector facility that they used to have for S/370. They
still have binary floating point (indeed, I think it is now standard), but
that is required to meet IBM's strong commitment to legacy code, and
doesn't cost much. I think they have pretty much dropped support for the
scientific programming library and program products like linear programming.
It appears that S/390 is essentially a commercial server/commercial
applications processor.
The commercial side depended more on the supposedly
scientific/technical line than the executive suits realised.

Sometimes, as with IBM's PS/2, SAA etc., there is an attempt to set
up a special category of development systems for commercial codes,
but they normally fold very fast, as the market is much smaller than
even the academic programming one. For reasons given in the next
paragraph ....

Both did, however, lead to the catastrophic state of affairs where there
may be no debugging facilities whatsoever on the systems that actually
run the 'developed' codes so that, when they bomb out in the field,
the official - and correct - response is that no assistance can
be given unless the customer develops a test that can repeat the
failure on the development system (which may not be marketed).

So your claim is that debugging facilities were developed exclusively for
scientific/technical environments and any use by commercial programmers was
a "happy accident"? I find that an odd belief, especially from a company
whose middle name is "Business".

Perhaps we have a different definition of a commercial environment and of
development tools. I regard a typical commercial environment as, say, a bank
where the programmers write COBOL programs using such environmental tools as
CICS and DB2 (or their equivalents from other vendors) to accomplish the
primary business of the bank (managing accounts). The tools, such as the
ones I mentioned above, support that.
Been there - beat my head against that :-(

I don't doubt your experience, but mine is different in that the major
vendors (in my case more experience with Univac/Sperry than IBM, but IBM as
well) were fairly responsive when it seemed that the problem was theirs and
not ours. We could sometimes elevate serious problems to the point where
"the skys were filled with airplanes" carrying vendor support people to our
site.
 
Stephen said:
snip

Here at IBM it is my belief that programming and programmers are
classified as engineering and scientific activities in terms of job
descriptions and other HR matters. I don't believe that any
distinctions are made relative to methods or output. At least that is
how it was a few years ago.

There were some distinctions made between programmers writing code and
engineers writing code. Engineer code was considered hardware.

It may well have been that internal programming groups used tools and
facilities that were not part of shipped OS or application products.

Here in Rochester we had and have many programmers working on a variety
of computers that were and are sold to the commercial marketplace almost
exclusively. They had debugging tools available.
 