The chance to break into Dell's supplier chain has passed.

  • Thread starter: Robert Myers
I snipped it all, although I can't believe that someone educated in
computers would be ignorant of both Watson's and Olson's remarks, along
with Gary Kildall flying and Gates' 640K.

Well, some of us aren't quite as *old* as you are. ;-)
I was just out at Sam's Club the other day, and they were selling, for
550 bucks retail or the cost of a nice middle-of-the-road TV, a Compaq
AMD system with a 17-inch flat CRT monitor (not LCD), 512MB, 180 GB
disk (might have been 250, don't remember for sure), XP, about 8 USB
ports, sound, etc., etc. Even a little reader for the memory cards out
of cameras right on the front.

I'm not surprised. My bet is that it was XP Home, though. I built quite a
nice Athlon XP system for a friend for $400, sans OS and monitor, a few
months ago.
Computers are already into the realm of home electronics.

They have been for quite some time. I find it amazing how little home
electronics costs, though. A decent TV is _well_ under $500.
 
You mean the work required to tune? People will optimize the hell out
of compute intensive code--to a point. The work required to get the
world-beating SpecFP numbers is probably beyond that point.

Of course. Anyone can write a compiler that works. Writing a compiler
that makes use of static parallelism in normal codes (what Intel promised)
is quite another thing. The good folks on comp.arch were laughing at the
attempt. ...yes, at the time. It had been tried too many times to assume
that Intel was going to crack the difficult nut that no one before had
come close to.
If Alpha and PA-RISC hadn't been killed off, I might agree with you
about Itanium. No one is going to abandon the high-end to an IBM
monopoly. Never happen (again).

...and you think they're about to jump into Itanic (Intel's proprietary
boat) with both feet? Look in your never-mirror again.
I gather that Future Systems eventually became AS/400.

Revisionism at its best. Yes, FS and AS/400 were/are both single-level
store machines. Yes, some of the FS architects were sent to Siberia to
find a niche. ;-) Saying that FS became AS/400 is a little bit of a
stretch. Accepting even that, AS/400 (really S/38) was in a new market.
Their customers didn't have the $kaBillions invested in software that was
the death of FS. The S/38 (and AS/400) were allowed to eat at the carcass
of DEC. ...not at the S/360 customers, who had *no* interest in it.

*THAT* is the lesson Intel hasn't learned. Their customers don't *want*
Itanic. They want x86. If Intel grew x86 into the server space, maybe.
As it is, AMD is dragging it (and Intel) kicking-and-screaming there.
We'll never know
what might have become of Itanium if it hadn't been such a committee
enterprise. The 8080, after all, was not a particularly superior
processor design, and nobody needed *it*, either.

Oh, please. It *was* a committee design, so that part is a matter of the
"existence theorem". It was a flagrant attempt to kill x86, taking
that portion of the market "private", since x86 had been cross-licensed
beyond Intel's control. Customers didn't buy in, though perhaps they
would have if it had delivered what it promised, when it promised...

I'm definitely not alone in the woods on this one, Keith.

Not alone, but without the money to make your dreams real. I follow the
money. Were you right, Cray wouldn't need the US government's support.
Go look at Dally's papers on Brook and Stream. Take a minute and visit gpgpu.org.
I could dump dozens of papers on you from people doing stuff other than
graphics on stream processors, and they are doing a helluva lot of
graphics, easily found with Google, gpgpu, or by checking out SIGGRAPH
conferences. Network processors are just another version of the same
story. Network processors are right at the soul of mainstream
computing, and they're going to move right onto the die.

Please. A few academic papers are making anyone (other than their
authors) any money? NPs are a special case. Show me an NP with a DP
FPU. Show me one that's making money.
With everything having turned into point-to-point links, computers have
turned into packet processors already. Current processing is the
equivalent of loading a container ship by hand-loading everything into
containers, loading them onto the container ship, and hand-unloading at
the other end. Only a matter of time before people figure out how to
leave things in the container for more of the trip, as the world already
does with physical cargo.

...and you still don't like Cell? I thought you'd be creaming your jeans
over it.
Power consumption matters. That's one point about BlueGene I've
conceded repeatedly and loudly.

Power consumption isn't something discussed in polite conversation. ;-)
It is indeed a huge thing. Expect to see some strange things come out of
this dilemma.
Stream processors have the disadvantage that it's a wildly different
computing paradigm. I'd be worried if *I* had to propose and work
through the new ways of coding. Fortunately, I don't. It's happening.

It's *not* like this is new. It's been done, yet for some reason not
enough people want it to pay the freight. If it happens, fine. That only
means that someone has figured out that it's good for something. Meanwhile...
The harder question is *why* any of this is going to happen. A lower
power data center would be a very big deal, but nobody's going to do a
project like that from scratch. PC's are already plenty powerful
enough, or so the truism goes. I don't believe it, but somebody has to
come up with the killer app, and Sony apparently thinks they have it.
We'll see.

Didn't you just contradict yourself? ...in one paragraph?
On the face of it, MS word doesn't seem like it should work because of a
huge number of unpredictable code paths. Turns out that even a word
processing program is fairly repetitive. Do you know if they included
exception and recovery in the analysis?

I've likely said more than I should have, but yes. ...as much as there is
in M$ Weird. IIRC it was rather well traced. Much of the work (not the
software/analysis) was done in the organization I was in, but I tried my
best to steer clear of it. Call me the original non-believer in "and then
a miracle happens". ;-)

It's still worth understanding why. The only way to make things go
faster, beyond a certain point, is to make them predictable.

Life isn't predictable though. Predictions that turn out to be false
*waste* power. ...and that is where we are now. We're trying to predict
tomorrow and using enough power for a year to do it.
 
Take Keith down? Wouldn't dream of it. Rather take my old coon dog
and go out hunting bear.

Oh, you've met me?

Story time: A (rather attractive) bartender once told me that a friend
was taking her out to hunt "bear". Says I (knowing exactly what she
*said* and meant), "I want to see *that*". She was rather taken aback that
I would doubt her hunting abilities. Says I, "nope, I just want to see
you hunting bare". After ten minutes or so (and keeping the entire
barroom in stitches) I did have to explain homophones. I would have gotten
slapped, but she knew that I'd have enjoyed it too much. ;-)
 
Of course. Anyone can write a compiler that works. Writing a compiler
that makes use of static parallelism in normal codes (what Intel promised)
is quite another thing.

By "static parallelism" I think you mean compile-time scheduling to
exploit parallelism.
The good folks on comp.arch were laughing at the
attempt. ...yes, at the time. It had been tried too many times to assume
that Intel was going to crack the difficult nut that no one before had
come close to.

I think I've said this before, and maybe even to this exact point: If
you think a problem can be done, and there is no plausible
demonstration that it can't be done (e.g. the computational complexity
of the game "go"), then it is an unsolved problem, not an impossible
problem.

How to handicap levels of implausibility? Compared to what is
actually accomplished in silicon, building the required compiler seems
like a plausible bet.

We all have our hobby horses: yours is latency, mine is predictability
(and whatever bandwidth is necessary to exploit it). Understanding
and exploiting predictability is ultimately a bigger win than any
other possible advance in computation that I know of. A poster to
comp.arch suggested, not entirely seriously, that working around the
latency of a cluster shouldn't be any harder than working around the
memory wall and that we should be using the same kinds of strategies
(OoO, prefetch, cache, speculative computation). Whether he intended
his remark to be taken seriously or not, we will eventually need that
level of sophistication, and it all comes down to the same thing:
understanding the predictability of a computation well enough to be
able to reach far enough into the future to beat latency.
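To make the memory-wall half of that concrete, here is a toy of my own
(not anything from that comp.arch thread): if the access pattern is
predictable, you can ask for data well before you need it and hide the
latency behind useful work. The function name and the lookahead of 16
iterations are mine, picked purely for illustration; __builtin_prefetch
is the GCC/Clang hint.

/* Toy sketch only: software prefetch as one "reach into the future"
 * trick.  The lookahead distance of 16 is an arbitrary illustration. */
double sum_gather(const double *a, const int *idx, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) {
        if (i + 16 < n)                                /* stay in bounds */
            __builtin_prefetch(&a[idx[i + 16]], 0, 1); /* read, low reuse */
        s += a[idx[i]];                                /* the real work  */
    }
    return s;
}

The same idea at cluster scale means knowing your communication pattern
far enough ahead to overlap it with computation, which is exactly the
predictability argument.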

Yes, I just made the problem even harder. Until there is a
demonstration that it can't be done, it is an unsolved problem, not an
impossible problem.

The business prospects of Intel in the meantime? I think they're mean
enough to survive.
...and you think they're about to jump into Itanic (Intel's proprietary
boat) with both feet? Look in your never-mirror again.

Power is not proprietary? Only one company builds boxes with Power.
Many will build boxes with Itanium.
Revisionism at its best. Yes, FS and AS/400 were/are both single-level
store machines. Yes, some of the FS architects were sent to Siberia to
find a niche. ;-) Saying that FS became AS/400 is a little bit of a
stretch. Accepting even that, AS/400 (really S/38) was in a new market.
Their customers didn't have the $kaBillions invested in software that was
the death of FS. The S/38 (and AS/400) were allowed to eat at the carcass
of DEC. ...not at the S/360 customers, who had *no* interest in it.

*THAT* is the lesson Intel hasn't learned. Their customers don't *want*
Itanic. They want x86. If Intel grew x86 into the server space, maybe.
As it is, AMD is dragging it (and Intel) kicking-and-screaming there.

Intel is certainly not happy about the success of Opteron.

Not alone, but without the money to make your dreams real. I follow the
money. Were you right, Cray wouldn't need the US government's support.
Cray has not much of anything to do with anything at this point.
Another national lab poodle. And I think you just moved the
goalposts.
Please. A few academic papers are making anyone (other than their
authors) any money? NPs are a special case. Show me an NP with a DP
FPU. Show me one that's making money.
The fundamentals in favor of streaming computation in terms of power
consumption are just overwhelming, and they become more so as scale
sizes shrink.
...and you still don't like Cell? I thought you'd be creaming your jeans
over it.
Who ever said I didn't like Cell? It doesn't do standard floating
point arithmetic and it isn't really designed for double precision
floating point arithmetic, but Cell or a Cell derivative could
revolutionize computation.
Power consumption isn't something discussed in polite conversation. ;-)

I see it discussed more and more. Blades have become more powerful
and they've become less unreasonable in price, but the resulting power
density creates a different problem for data centers.
It is indeed a huge thing. Expect to see some strange things come out of
this dilemma.


It's *not* like this is new. It's been done, yet for some reason not
enough people want it to pay the freight. If it happens, fine. That only
means that someone has figured out that it's good for something. Meanwhile...


Didn't you just contradict yourself? ...in one paragraph?
Don't know where you think the apparent contradiction is. An argument
could be made that VisiCalc made the PC. Whether that's exactly true
or not, VisiCalc made the usefulness of a PC as anything but a very
expensive typewriter immediately obvious.

I'm betting that the applications for streaming computation will come.
Whether it is Sony and Cell that make the breakthrough, and whether it
is imminent, is less clear than that the breakthrough will come.

RM
 
By "static parallelism" I think you mean compile-time scheduling to
exploit parallelism.

Yes. Static as in "will never change", rather than dynamic, as in "will
change, whether we want it to or not".
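A toy contrast, mine and not anything out of Intel's IA-64 toolchain
(the function names are made up for the example). Loop (a) is the kind
a compiler can schedule statically: independent iterations, known
latencies, no data-dependent control. Loop (b) is the kind where any
static schedule is a guess and an out-of-order core discovers the real
parallelism at run time.

/* (a) Everything the scheduler needs is visible at compile time, so an
 * EPIC-style compiler can software-pipeline and bundle this itself. */
void axpy(double *restrict y, const double *restrict x, double a, int n)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];    /* iterations independent */
}

/* (b) Which path each iteration takes depends on the data, so the
 * "never changes" schedule the compiler bakes in is only a guess. */
double threshold_sum(const double *x, int n, double t)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) {
        if (x[i] > t)              /* data-dependent branch */
            s += x[i] * x[i];
        else
            s -= x[i];
    }
    return s;
}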
I think I've said this before, and maybe even to this exact point: If
you think a problem can be done, and there is no plausible
demonstration that it can't be done (e.g. the computational complexity
of the game "go"), then it is an unsolved problem, not an impossible
problem.

True. I'd say it's an "academic" problem, so leave it to the
academics. Meanwhile, I'll make money with what's known to be solvable.
I'm not about to bet my life's savings (company, were I CEO) on something
that has shown itself to be an intractable problem for several decades.
How to handicap levels of implausibility? Compared to what is actually
accomplished in silicon, building the required compiler seems like a
plausible bet.

Doesn't to me! Better minds than mine have tried and failed. Intel
proved once again that it was a *hard* problem.
We all have our hobby horses: yours is latency, mine is predictability
(and whatever bandwidth is necessary to exploit it). Understanding and
exploiting predictability is ultimately a bigger win than any other
possible advance in computation that I know of.

Understanding that the world isn't predictable leads one to not waste
effort looking down that path.
A poster to comp.arch
suggested, not entirely seriously, that working around the latency of a
cluster shouldn't be any harder than working around the memory wall and
that we should be using the same kinds of strategies (OoO, prefetch,
cache, speculative computation). Whether he intended his remark to be
taken seriously or not, we will eventually need that level of
sophistication, and it all comes down to the same thing: understanding
the predictability of a computation well enough to be able to reach far
enough into the future to beat latency.

Yes, I just made the problem even harder. Until there is a
demonstration that it can't be done, it is an unsolved problem, not an
impossible problem.

You bet your life's savings. I'll pass.
The business prospects of Intel in the meantime? I think they're mean
enough to survive.

Survive, sure. I have no doubt about Intel's survival, but they have
pi$$ed away ten digits of their owners' money, while letting #2
define the next architecture.


Power is not proprietary? Only one company builds boxes with Power.
Many will build boxes with Itanium.

I didn't say it wasn't. You implied that Itanic was somehow less
proprietary. Actually, Power isn't proprietary. There are others in the
business. ...heard of Motorola?
Intel is certainly not happy about the success of Opteron.

{{{{BING}}}}

We have the winner for understatement of the year! ;-)
Cray has not much of anything to do with anything at this point. Another
national lab poodle. And I think you just moved the goalposts.

At this point? They *got* to this point by playing a role in your dreams.
I haven't moved *anything*. You love Crayish architectures. I love
businesses that make sense. Computers are no longer a toy for me.
They're a means to an end. I really don't care what architecture wins.
The fundamentals in favor of streaming computation in terms of power
consumption are just overwhelming, and they become more so as scale
sizes shrink.

Let me repeat, "you keep saying this", but if the problems can't be solved
by streaming, streaming doesn't save any power at all. The universe of
problems that are solvable by streaming is on the order of the size of
"embarrassingly parallel" problems that can be solved with an array of a
kabillion 8051s.
Who ever said I didn't like Cell? It doesn't do standard floating point
arithmetic and it isn't really designed for double precision floating
point arithmetic, but Cell or a Cell derivative could revolutionize
computation.

I thought you were one of the naysayers, like Felger. ;-)
I see it discussed more and more. Blades have become more powerful and
they've become less unreasonable in price, but the resulting power
density creates a different problem for data centers.

Note the smiley. I'd really like to go here, but I don't know where the
confidentiality edge is, so... Let me just say that you aren't the
only one noticing these things.

Don't know where you think the apparent contradiction is.

After re-reading the paragraph, I must have read it wrong at first.
Perhaps your dual (and contradictory) use of "power" threw me off.
An argument
could be made that VisiCalc made the PC. Whether that's exactly true or
not, VisiCalc made the usefulness of a PC as anything but a very
expensive typewriter immediately obvious.

Ok, but I'd argue that it was a worthwhile business machine even if it
were only an expensive typewriter (and a gateway into the mainframe and
later the network).
I'm betting that the applications for streaming computation will come.
Whether it is Sony and Cell that make the breakthrough, and whether it
is imminent, is less clear than that the breakthrough will come.

How much? What areas of computing?
 
At this point? They *got* to this point by playing a role in your dreams.
I haven't moved *anything*. You love Crayish architectures. I love
businesses that make sense. Computers are no longer a toy for me.
They're a means to an end. I really don't care what architecture wins.
I can't think of something appropriately compact and eloquent to say
in reply. Of course a computer has to make business sense. I, and
others, have argued for Cray-type machines because relative amateurs
can hack decent code for them. That makes them good for scientific
computation in more ways than one. You can't easily hack and you
can't easily debug cluster code. Cray-type machines also tend to have
decent bisection bandwidth, a department in which Blue Gene, at least
as installed at LLNL, is pathetic.
Let me repeat, "you keep saying this", but if the problems can't be solved
by streaming, streaming doesn't save any power at all. The universe of
problems that are solvable by streaming is on the order of the size of
"embarrassingly parallel" problems that can be solved with an array of a
kabillion 8051s.
We really don't know at this point. There was a long thread on
comp.arch, last summer I think, where we thrashed through a proposed
register-transfer architecture from someone who didn't really know
what he was doing (remember the WIZ processor architecture?). It got
weird enough to pull John Mashey out of the woodwork.

At one point, the thread attracted one Nicholas (sp?) Capens, who had
written some shader code to which he provided links. He talked about
"straightening out the kinks" so you could stream code. Those are the
right words. It's hard to get a compiler to do that sort of thing.
Compilers have mostly learned how to do what I could do as a Cray
Fortran programmer, but what I knew how to do is far from exhausting
what is possible. Using the vector mask register to merge
two streams when you don't know which of two results to use is an
example of streaming a computation that doesn't stream without some
trickery.
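In scalar C the shape of that trick looks something like this (my own
sketch, not actual Cray Fortran; the function name is made up): compute
both candidate results for every element, then merge them with a mask
instead of branching, so the loop keeps streaming.

/* Mask-merge sketch: no branch in the data path, so a vectorizer can
 * keep this loop streaming; the branchy original it generally cannot. */
void merge_streams(float *out, const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i++) {
        float hot  = a[i] * 2.0f + b[i];     /* result if the test holds */
        float cold = a[i] - b[i];            /* result if it doesn't     */
        out[i] = (a[i] > b[i]) ? hot : cold; /* the "vector mask" select */
    }
}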

People *are* doing that kind of stuff with shaders right now, and some
are playing around with doing things other than graphics that way.

There is no general set of transformations you can perform, no general
theory of coding to say what is possible. Putting a compiler to work
on naive C or Fortran and expecting it to make everything suitable for
a stream processor is a non-starter, but we don't yet know what are
the real limits of human cleverness. I think we have barely scratched
the surface.

As to embarrassingly parallel, I think the Kasparov chess match
provided one example of what is possible, and, again, we don't really
know what people will do when arbitrarily large numbers of
embarrassingly parallel operations are almost free.
I thought you were one of the naysayers, like Felger. ;-)

I picture Felger staying warm in winter with his racks of vacuum-tube
logic. ;-)
Note the smiley. I'd really like to go here, but I don't know where the
confidentiality edge is, so... Let me just say that you aren't the
only one noticing these things.

I had assumed so.

How much? What areas of computing?

Isn't 10x the standard for what constitutes a breakthrough?

1. Image processing (obvious)
2. Graphics (obvious)
3. Physics for games (obvious, discussed in another thread)
4. Physics for proteins (obvious, the only question is how big an
application it is and how much difference it will make).
5. Brute force searching (not obvious, outside my area of competence,
really)
6. Monte Carlo (any problem can be made embarrassingly parallel that
way; see the sketch after this list). Financial markets are an obvious application.
7. Information retrieval (n-grams, cluster analysis and such stuff,
outside my area of competence).
8. Bioinformatics (outside my area of competence).

There's more, I'm sure, but that should be a start.
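On item 6, a toy of what I mean by embarrassingly parallel (my own
sketch, estimating pi; the function name is made up, rand_r is plain
POSIX): every chunk of samples is independent, so each core, SPE, or
shader can grind away on its own, and you only combine results at the
very end.

#include <stdlib.h>

/* One independent chunk of a Monte Carlo pi estimate.  Give every
 * worker its own seed, run chunks in parallel with zero communication,
 * then average the returned estimates. */
double pi_chunk(unsigned int seed, long samples)
{
    long hits = 0;
    for (long i = 0; i < samples; i++) {
        double x = (double)rand_r(&seed) / RAND_MAX;
        double y = (double)rand_r(&seed) / RAND_MAX;
        if (x * x + y * y <= 1.0)      /* inside the quarter circle */
            hits++;
    }
    return 4.0 * (double)hits / (double)samples;
}

Pricing a basket of options works the same way; each simulated path is
its own little world.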

RM
 