Is Centrino brand all that strong?

  • Thread starter: Yousuf Khan
1) No, you're a streaming processor (a.k.a. Cray 1) bigot.

2) See smiley.

Oh, I didn't take offense, and I certainly didn't intend to give any.

Bandwidth and streaming processors kind of go together. It's a puzzle
as to how some of these new stream processors can possibly stay fed.
There's a recent comp.arch post on the subject. My eyes are already
jittery from a day in front of a monitor, so I don't want to go look
it up.
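
To put rough numbers on the "stay fed" puzzle (mine, strictly back of the
envelope): a streaming kernel like y[i] += a*x[i] does 2 flops for every 12
bytes moved (two 4-byte loads and a 4-byte store). A part claiming 100
GFLOP/s on that kind of code would need about (100/2) * 12 = 600 GB/s of
sustained memory bandwidth, a couple of orders of magnitude beyond what a
commodity DRAM interface delivers. Either the working set lives on chip or
the thing starves.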

In any case: Cray, vector processors, itanium, rambus, bandwidth,
bandwidth, bandwidth. It's all of a piece. How did itanium get in
there? It can act pretty much like a stream processor with software
pipelining.
You've figured that part out anyway. ;-)
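
To make the software-pipelining point concrete, here's a hand-rolled sketch
in plain C (illustrative only, not IA-64 code, and saxpy_pipelined is just a
name I made up): the load for the next element is issued while the current
one finishes its multiply-add, so the loop body behaves like a two-stage
stream pipeline. An IA-64 compiler gets the same overlap with rotating
registers and predication instead of doing it by hand.

void saxpy_pipelined(float *y, const float *x, float a, int n)
{
    if (n <= 0)
        return;
    float xi = x[0];                /* prologue: prime the pipeline */
    for (int i = 0; i < n - 1; i++) {
        float xnext = x[i + 1];     /* stage 1: load for iteration i+1 */
        y[i] += a * xi;             /* stage 2: compute/store iteration i */
        xi = xnext;
    }
    y[n - 1] += a * xi;             /* epilogue: drain the pipeline */
}

Stretch that overlap across a couple of hundred cycles of memory latency
and you have, in effect, a stream processor.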


If you have the money to spend, I'm sure you too can find someone willing
to accept it and give you what you dream of. Frankly, money talks and
science begs.

Oh, science is doing just fine these days. Aside from the oil
companies, I want to see if a company doing, say, drug discovery buys
one.

There is an interesting post on realworldtech by someone who authors
things like chess-playing software about the importance of having true
random access to memory for things like search (which is what much of
AI is coming down to). He also mentions the FFT. You can dismiss it
as my private obsession, if you like, but I prefer to think of it as a
really strong intuition as to what computing is really all about. Or,
rather, a strong intuition as to what a real measure of capability is.
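
For anyone who wants to see why the FFT makes a decent touchstone, the
access pattern is visible even in a bare-bones radix-2 sketch (textbook
code, nobody's production library): the bit-reversal shuffle is about as
close to true random access as you can get, and the butterfly distance
doubles every pass until the last passes reach halfway across the array.

#include <complex.h>
#include <math.h>

/* In-place iterative radix-2 FFT; n must be a power of two. The point
 * here is the memory traffic, not the arithmetic. */
void fft(double complex *x, unsigned n)
{
    /* Bit-reversal permutation: effectively random access over the array. */
    for (unsigned i = 1, j = 0; i < n; i++) {
        unsigned bit = n >> 1;
        for (; j & bit; bit >>= 1)
            j ^= bit;
        j |= bit;
        if (i < j) {
            double complex t = x[i];
            x[i] = x[j];
            x[j] = t;
        }
    }
    /* Butterfly passes: the pairing distance len/2 doubles each pass, so
     * the final passes pair elements n/2 apart -- global data movement. */
    const double PI = 3.14159265358979323846;
    for (unsigned len = 2; len <= n; len <<= 1) {
        double complex w = cexp(-2.0 * PI * I / (double)len);
        for (unsigned i = 0; i < n; i += len) {
            double complex wk = 1.0;
            for (unsigned k = 0; k < len / 2; k++) {
                double complex u = x[i + k];
                double complex v = wk * x[i + k + len / 2];
                x[i + k] = u + v;
                x[i + k + len / 2] = u - v;
                wk *= w;
            }
        }
    }
}

Plenty of arithmetic, but for large n it will not sit quietly in a small
cache, which is why it separates machines with real memory systems from
machines that merely have big ALU counts.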

You are absolutely right: the guy with the checkbook writes the order.
If the guy with the checkbook wants to keep doing what was already
done twenty years ago, only just more of it, there is not much I can
do about it.

RM
 
Oh I should have said Intel P4 processors are not going to go much faster
above... which further highlights the lack of EM64T for Pentium-M.
I made an intense effort to understand what was going on with process
technology when all the surprises came down at 90nm, but since then
I've lost track of process technology. If Intel really has lost the
playbook, that would be news, but I don't really believe it.

What's going on with process tech does not really have to be understood at
the detail level to see the picture. IBM chief technologists, among
others, have told us of the "end of scaling" - Intel has demonstrated the
effect with 90nm P4. We know, as Keith has said right here, that the two
critical issues involved are power density and leakage. OTOH nobody is
talking of abandoning 65nm and lower, though they do talk of increasing
difficulty.

One of the results is dual or twin core CPUs, in order to be able to offer
continuing levels of performance improvement. Intel is presenting
something this week on new power management involving 64 levels of control,
even for the next Itanium.

I wouldn't go so far as to say that Intel has "lost the playbook" but their
ego seems to be getting in the way when technology sharing is the way the
rest of the industry is moving.
As to performance, which I've also lost track of, Power5 and Itanium
seem to have run away from the pack on CFP2000. That's the horse race
that Intel wants. As to the pack, AMD is in the hunt, but only just.

Really? I'm baffled as to why that's what Intel wants.
CFP2000 not a realistic measure of real-world performance? Probably
not, but then what is, other than your own code? Yes, it is easier to
write naive code for AMD processors than it is to write naive code for
NetBurst or for Itanium.

Horses for courses!
Well of course it is.

There are predictions floating around now that world oil production
may have peaked or may be about to peak. Depends on who you ask. If
you look at the methodology of both sides of the argument, it's pure
voodoo. The petroleum geologists fit curves. The econometrics guys
use computer models (is your soul stained with these kinds of sins,
George?) that either have obvious problems or are so complicated that
no one understands them. It's a battle over prejudices. The
_results_ are quoted widely in the press, because the press has to
fill all those column inches with something.

"Peak oil" is a political club of the GW industry - nothing more nor less.
The media laps it up of course. As for my soul, it's much too naive to be
stained.:-)
I mention "peak oil" here because off-topic rambling is my style, and
because it reminds me a bit of the microprocessor business, which
seems to have run out of steam. Past predictions of the future of
world oil production and energy usage have been so far off the mark as
to be useless. At some point, the world will switch from oil to
something else. Nobody knows when, to what, or at what price. We are
similarly ignorant of the future of the microprocessor business.

There's so much "noise" in the numbers for the energy production and usage
business... further blurred by the media dishing out such fraudulent junk
as the "hydrogen economy" being a way forward... or soccer-Moms in Iowa
telling us how "green" they feel by burning ethanol in their FFV SUVs. ô_Ô

The way I see it, unless some mind-boggling new technology is discovered,
the only "something else" to switch to from oil is nuclear. Of course, the
way the media has put it, the masses seem to have this weird idea that oil
supply is just going to dry up one day/year/decade... which is absurd.
Small differences that don't seem to point anywhere don't seem very
interesting to me. The switch to SOC designs interests me. The
switch to stream processors interests me. A better interconnect
interests me. All those things interest me because they have the
potential to change the rules.

The problem with microprocessors right now isn't that they can't be
made to go faster. The problem is that the application space that can
be accessed with a conventional single-processor architecture seems to
have been pretty thoroughly explored. Just as three-D seismography
quietly changed the rules in petroleum exploration, though, new
technology can change the rules for microprocessors. I don't think
that AMD taking aim at the Centrino brand is movement in that
direction, though.

As I've said before, steady progress with the odd discontinuity is fine
with me; it's also the way that the application of science to engineering
solutions has traditionally worked, with few exceptions.

As for AMD, we'll see if they can come up with something to tackle the
notebook market... but there's nothing about Centrino which changes or
defines any rules.
 
...but I wouldn't lift a finger if France was run-over by the Germans,
once again.

Most of Europe is err, overrun by Germans now... in a slightly different
way of course but the effect is similar: they buy up companies that are
going through a weak spell, which have something they want, e.g. Bentley,
Rolls Royce et al. and move production to the err, Fatherland. Much of
this is against Euro-rules now of course, e.g. the Siemens division
transformation to Infineon and "move" from U.K. to Germany but apparently
there are "ways". Now the French car companies, after a period of
reasonable success, are showing signs of flagging a bit and I'm just
waiting for VW to put a move on Peugeot or Renault... talk about putting
the cat among the pigeons.... Sacre Bleu!!:-)
I didn't say it was in any way *new*. "Hemi" is a little more than
BFE though. Hell I've seen flat-head BFE's. ...doesn't make flat-head ==
BFE either.

OK there may be something to it in Detroit - I know that the engineers
there used to despair when accountants and/or unions would specify that a
wedge chamber would be the most err, "effective", to save a coupla bucks or
lighten the "work". Taking a global view though, to make a fuss about
"bringing back the Hemi", all seems a bit feeble. - I mean everybody
*knows* that's how you do it.
Haven't seen one. But they're not about to issue "expensive" to those
whose title doesn't start with an 'ex'. Indeed the only 'ex' title I'll
ever see has a hyphen after the 'x'.

Their view of top-down thinking?:-)

BTW, completely OT here but if you haven't come across
http://diplomadic.blogspot.com/ yet it's well worth a visit, since it's
being wound up and has some hair-raising stuff on GW, Oil-for-Food and yes,
tsunami relief... much of it straight from first-hand witness. I only
found it recently so if you're already in the know......
 
keith said:
What do you pay for an auto radio today? Just because you can buy a
"transistor" radio for $20 at K-Mart doesn't mean the radio in the car is
the same thing.

Yeah, the $20 K-Mart radio was probably better. I'm only being
slightly facetious when I say that.
 
What's going on with process tech does not really have to be understood at
the detail level to see the picture. IBM chief technologists, among
others, have told us of the "end of scaling" - Intel has demonstrated the
effect with 90nm P4. We know, as Keith has said right here, that the two
critical issues involved are power density and leakage. OTOH nobody is
talking of abandoning 65nm and lower, though they do talk of increasing
difficulty.
But I don't know whether to take "the end of scaling" seriously or
not. What about nanotubes?

It doesn't matter, anyway? Hell, I don't know. Suppose you could
raise the computational density by a factor of a thousand. What kinds
of robotic widgets might we see as a result, for example?
One of the results is dual or twin core CPUs, in order to be able to offer
continuing levels of performance improvement. Intel is presenting
something this week on new power management involving 64 levels of control,
even for the next Itanium.
Especially for the next Itanium, I would have thought.
I wouldn't go so far as to say that Intel has "lost the playbook" but their
ego seems to be getting in the way when technology sharing is the way the
rest of the industry is moving.
Intel is a cash cow. It's a weak defense, but they do behave better
than M$, which completely substitutes market domination for
competence.
Really? I'm baffled as to why that's what Intel wants.

Keith and I effectively already had that discussion. Intel wants
enterprise applications locked onto Itanium the way they are locked
onto IBM mainframes. In that horse race, x86 is a sideshow--or Intel
wants it to be a sideshow. ;-).
Horses for courses!
Yes, indeed. And that's why I get so bent out of shape about some of
the choices our esteemed national assets, er, laboratories, have been
making in hardware. Problems can define hardware, but it can (and
actually does) work the other way around.

There's so much "noise" in the numbers for the energy production and usage
business... further blurred by the media dishing out such fraudulent junk
as the "hydrogen economy" being a way forward... or soccer-Moms in Iowa
telling us how "green" they feel by burning ethanol in their FFV SUVs. ô_Ô

The way I see it, unless some mind-boggling new technology is discovered,
the only "something else" to switch to from oil is nuclear. Of course, the
way the media has put it, the masses seem to have this weird idea that oil
supply is just going to dry up one day/year/decade... which is absurd.
If you ignore the ravings of the Malthusians and just look at what the
U.S. govt. is putting out, there are some interesting things
happening, and I'm never quite sure what's real and what's show.
There is an ORNL report that says, effectively, that, if you include
things like tar sands, you can forget about ever seeing a peak in oil
production, unless something dramatic happens to affect human
longevity.

OTOH, some of the noise about the hydrogen economy is coming from
within the U.S. govt., and some from contractors funded by the DoD.

If you look over the history of oil since 1974, it's been a history of
the kinds of surprises that you (and Keith) apparently favor: small
individually but important in the sum. Because it has been immensely
profitable (much more profitable than developing renewable energy
sources) people have just gotten smarter and smarter about finding and
extracting oil. As far as I can tell, none of the technology
developments that have reshaped the industry (albeit very quietly)
were foreseen in 1974. Meanwhile, the revolutions that were supposed
to happen (nuclear, for example) still haven't happened.

One read is that "renewables" and "the hydrogen economy" coming from
Washington are really a message to oil-producing states: "We don't
need you." For all I know, some deeply cynical person inside the
government was thinking that way in 1974. The fear of even the
possibility of realistic alternatives to oil is what has kept OPEC in
line.

The lesson I draw from the oil business is one that Keith thinks I
don't understand: money drives everything. Until someone can count on
making the same kind of money displacing oil they can make by
producing it, people will continue to get smarter about producing oil
rather than look for ways to displace it.

That very same mentality, of course, meant that DEC, IBM, et al, were
completely caught off guard by the attack of the killer micros. By
the time *they* could see the money on the table, the swarm was
already all over them.
As I've said before, steady progress with the odd discontinuity is fine
with me; it's also the way that the application of science to engineering
solutions has traditionally worked, with few exceptions.

It's hard to argue with a statement like that since steady progress
with a finite number of discontinuities covers a pretty broad class of
functions. You do seem to be ruling out functions that aren't
Riemann-integrable. ;-).
As for AMD, we'll see if they can come up with something to tackle the
notebook market... but there's nothing about Centrino which changes or
defines any rules.

Oh, but I think it did. Everybody's got a wireless laptop, and
Centrino is the brand of choice. Big marketing score for Intel at a
time when they did just about everything else wrong.

Centrino isn't tied into connectivity in any kind of fundamental way,
but the drumbeat of the message is there: it isn't the processor
that's important, anymore, it's the whole platform. That's the battle
Intel has defined, and PCI Express, Advanced Switching, and heaven only
knows what else are going to stomp Hypertransport. I understand why
the crowd here isn't pleased emotionally, but, unless those emotions
gain wider acceptance (something like what probably is happening to
Microsoft), Intel will do just fine.

RM
 
Oh, I didn't take offense, and I certainly didn't intend to give any.

Big shoulders here. I was just stating a fact. ;-)
Bandwidth and streaming processors kind of go together. It's a puzzle
as to how some of these new stream processors can possibly stay fed.
There's a recent comp.arch post on the subject. My eyes are already
jittery from a day in front of a monitor, so I don't want to go look
it up.

Seeee, that's where we differ. I'm a "latency" bigot, and I understand
that my problem is bigger than yours. Bandwidth is too easy.
In any case: Cray, vector processors, itanium, rambus, bandwidth,
bandwidth, bandwidth. It's all of a piece. How did itanium get in
there? It can act pretty much like a stream processor with software
pipelining.

Boooring! All you need is money and you're happy. So convince your uncle
that you need some bux!


Oh, science is doing just fine these days. Aside from the oil
companies, I want to see if a company doing, say, drug discovery buys
one.

What they're doing (or not) should be instructive. They have the bux to
force the issue if they see some profit at the end of the tunnel. Since
apparently they don't (correct me if I'm wrong)...
There is an interesting post on realworldtech by someone who authors
things like chess-playing software about the importance of having true
random access to memory for things like search (which is what much of AI
is coming down to). He also mentions the FFT. You can dismiss it as my
private obsession, if you like, but I prefer to think of it as a really
strong intuition as to what computing is really all about. Or, rather,
a strong intuition as to what a real measure of capability is.

My *strong* intuition is opposite of yours, apparently. I really, really,
believe we're latency bound, not bandwidth bound. All the work seems to
be going into trying to excuse latency.
You are absolutely right: the guy with the checkbook writes the order.
If the guy with the checkbook wants to keep doing what was already done
twenty years ago, only just more of it, there is not much I can do about
it.

The guy with the checkbook wins. The guy with the biggest one can afford
to dabble in new things like Itanic or Cell. At least the jury is still
out on one of these. ;-)
 
Oh I should have said Intel P4 processors are not going to go much faster
above... which further highlights the lack of EM64T for Pentium-M.


What's going on with process tech does not really have to be understood at
the detail level to see the picture. IBM chief technologists, among
others, have told us of the "end of scaling" - Intel has demonstrated the
effect with 90nm P4. We know, as Keith has said right here, that the two
critical issues involved are power density and leakage. OTOH nobody is
talking of abandoning 65nm and lower, though they do talk of increasing
difficulty.

This isn't anything new. The press has been suggesting the end of the
world is here for at least twenty years, and that science wouldn't
kill Moore, rather the counters-of-beans would. So far they're twenty
years out of touch. The techies seem to come through, though the
counters-of-beans aren't too happy with the price tag either. I can see
perhaps another ten years, though the price tag isn't going to
be trivial. The croupier is still dealing, though there are fewer at the
table.
One of the results is dual or twin core CPUs, in order to be able to
offer continuing levels of performance improvement.

Huh? Dual cores are there because there isn't anything else useful to do
with the free transistors. Caches have played out their hand.
Intel is presenting
something this week on new power management involving 64 levels of
control, even for the next Itanium.
Yawn.

I wouldn't go so far as to say that Intel has "lost the playbook" but
their ego seems to be getting in the way when technology sharing is the
way the rest of the industry is moving.

Let's just say they took their eye off the ball. ...sorta like Philly last
weekend. ;-)
Really? I'm baffled as to why that's what Intel wants.

They've wanted to kill x86 for some time. Competition is *hard*. They
wanted to kill all the rest. ...almost made it happen, IMO.

<snip oil-politic stuff -- too tired>
 
Yeah, the $20 K-Mart radio was probably better. I'm only being
slightly facetious when I say that.

<serious mode> Tell me that again after 100K New England miles.
Automotive electronics is some pretty rugged stuff. The environment is
rather harsh.
 
On Wed, 09 Feb 2005 08:21:39 -0500, George Macdonald wrote:


But I don't know whether to take "the end of scaling" seriously or
not. What about nanotubes?

I think it's serious, all right -- we already have evidence -- and I'm not sure
which part of the problem nanotubes solve... besides which major change in
material like that is bound to have an extended development time.
It doesn't matter, anyway? Hell, I don't know. Suppose you could
raise the computational density by a factor of a thousand. What kinds
of robotic widgets might we see as a result, for example?

Given the way govt. is working these days I'm very suspicious of the way
robotic anything gets abused. Take a look at the last 3 paras here
http://www.edn.com/article/CA185948.html for what CARB was experimenting
with 3 years ago. Take a drive through the U.K. and you'll see their
highways lined with electronic snitches - the latest models are buried in
the road so you can't even see them.
Intel is a cash cow. It's a weak defense, but they do behave better
than M$, which completely substitutes market domination for
competence.

See my post on RHEL 4.0 on dual AMD64 - M$ and maybe even Sun must be
worried. As for Intel, I wonder how many $billions they've pissed away on
efforts to proprietarize the architecture?
Keith and I effectively already had that discussion. Intel wants
enterprise applications locked onto Itanium the way they are locked
onto IBM mainframes. In that horse race, x86 is a sideshow--or Intel
wants it to be a sideshow. ;-).

Keith likely said the same but.... ain't gonna happen. Anyway, FP matters
little for "enterprise applications" - sorry, I don't see it.
Yes, indeed. And that's why I get so bent out of shape about some of
the choices our esteemed national assets, er, laboratories, have been
making in hardware. Problems can define hardware, but it can (and
actually does) work the other way around.

Don't you think this is just a fact of the change in economics: *nobody*
can afford a modern day equivalent of a Cray. Even though the Japanese
have done it, it's basically a boat-anchor.
If you ignore the ravings of the Malthusians and just look at what the
U.S. govt. is putting out, there are some interesting things
happening, and I'm never quite sure what's real and what's show.
There is an ORNL report that says, effectively, that, if you include
things like tar sands, you can forget about ever seeing a peak in oil
production, unless something dramatic happens to affect human
longevity.

OTOH, some of the noise about the hydrogen economy is coming from
within the U.S. govt., and some from contractors funded by the DoD.

To paraphrase Eisenhower: beware the academic-bureaucratic complex!
If you look over the history of oil since 1974, it's been a history of
the kinds of surprises that you (and Keith) apparently favor: small
individually but important in the sum. Because it has been immensely
profitable (much more profitable than developing renewable energy
sources) people have just gotten smarter and smarter about finding and
extracting oil. As far as I can tell, none of the technology
developments that have reshaped the industry (albeit very quietly)
were foreseen in 1974. Meanwhile, the revolutions that were supposed
to happen (nuclear, for example) still haven't happened.

The technology of the petroleum industry hasn't really changed that much in
30 years - a couple of new processes to bolster fine tuning of existing
octane production... and banish aromatics and other unsaturates. The
biggest change has probably been the disappearance of the small
"tea-kettle" refiners.
One read is that "renewables" and "the hydrogen economy" coming from
Washington are really a message to oil-producing states: "We don't
need you." For all I know, some deeply cynical person inside the
government was thinking that way in 1974. The fear of even the
possibility of realistic alternatives to oil is what has kept OPEC in
line.

Could be but OPEC has scientists too - they must know that renewables and
hydrogen fail on umpteen fundamental, scientific/economic counts.
The lesson I draw from the oil business is one that Keith thinks I
don't understand: money drives everything. Until someone can count on
making the same kind of money displacing oil they can make by
producing it, people will continue to get smarter about producing oil
rather than look for ways to displace it.

And of course the petroleum companies are well placed to make that decision
to "displace".

It's hard to argue with a statement like that since steady progress
with a finite number of discontinuities covers a pretty broad class of
functions. You do seem to be ruling out functions that aren't
Riemann-integrable. ;-).


Oh, but I think it did. Everybody's got a wireless laptop, and
Centrino is the brand of choice. Big marketing score for Intel at a
time when they did just about everything else wrong.

OK - marketing score.... au suivant!;-)
Centrino isn't tied into connectivity in any kind of fundamental way,
but the drumbeat of the message is there: it isn't the processor
that's important, anymore, it's the whole platform. That's the battle
Intel has defined, and PCI Express, Advanced Switching, and heaven only
knows what else are going to stomp Hypertransport. I understand why
the crowd here isn't pleased emotionally, but, unless those emotions
gain wider acceptance (something like what probably is happening to
Microsoft), Intel will do just fine.

But Hypertransport and PCI-Express play together - stomping is not
required. When Intel does its on-chip memory controller they'll need
something equivalent to HyperTransport; no doubt AMD will develop from what
they have. Basically Intel has not been allowed to proprietarize their
"platform"... the game is open for the foreseeable future. I really don't
see anything to be displeased about... and certainly not emotionally.
 
keith said:
<serious mode> Tell me that again after 100K New England miles.
Automotive electronics is some pretty rugged stuff. The environment is
rather harsh.

I was referring to aftermarket automotive radios being quite
affordable. Is the OEM radio "better built" and thus more costly to
make? Maybe, but the point is, it was not a great "feature" for a car
to have a kool AM/FM cassette radio - they simply were not uncommon or
expensive.
 
I think it's serious, all right -- we already have evidence -- and I'm not sure
which part of the problem nanotubes solve...

Mobility. Faster gates at lower voltage, smallest possible voltage
being the goal of low power operation.

http://www.eetimes.com/at/news/OEG20031217S0020

I've got a decent physics education, but I'm not a solid state
physicist and certainly not a device engineer. I am pretty quick with
google:

nanotube transistor mobility electron OR carrier.

Carbon nanotubes also have very attractive thermal properties. They
also currently cost about as much, pound for pound, as industrial
diamonds.
besides which major change in
material like that is bound to have an extended development time.
Don't know how to evaluate that. There's a company nearby I could
walk to that thinks it's going to revolutionize memory (memory always
comes first, doesn't it?) surviving on venture capital. They'd better
come up with something pretty quick.
Given the way govt. is working these days I'm very suspicious of the way
robotic anything gets abused. Take a look at the last 3 paras here
http://www.edn.com/article/CA185948.html for what CARB was experimenting
with 3 years ago. Take a drive through the U.K. and you'll see their
highways lined with electronic snitches - the latest models are buried in
the road so you can't even see them.
Creepy. Embedded microelectronics in cars already don't work.

Mustn't confuse what you can do with existing embedded electronics
with what would be possible if the rules really changed. Advances in
AI would be nice, but there is, as far as I can tell, an essentially
inexhaustible demand for cycles in the business of motion dynamics and
kinematics.
See my post on RHEL 4.0 on dual AMD64 - M$ and maybe even Sun must be
worried.

AMD-based server doesn't even make it into the top ten on $/tpmc:

http://www.tpc.org/tpcc/results/tpcc_results.asp?print=false&orderby=priceperf&sortby=asc

Power doesn't. Xeon does. Itanium does. I doubt very much that
anybody at Intel is in a panic.
As for Intel, I wonder how many $billions they've pissed away on
efforts to proprietarize the architecture?

Watch for Intel to push RAS. And push, and push, and push. Think
Centrino.
Keith likely said the same but.... ain't gonna happen. Anyway, FP matters
little for "enterprise applications" - sorry, I don't see it.

No, but the $/tpmc numbers do. RIP Ken Olsen. I didn't mean to imply
that Keith agrees with me, but we have discussed Intel's mainframe-envy
and how it plays out as a business strategy.
Don't you think this is just a fact of the change in economics: *nobody*
can afford a modern day equivalent of a Cray. Even though the Japanese
have done it, it's basically a boat-anchor.

Oh, the cluster poster to comp.arch who has such contempt for my
wisdom got on my case for dissing the Earth Simulator, too.
Megabureaucrat projects: think Donald Trump. Real estate, staff,
power, ego. Earth simulator plainly does well on my touchstone
calculation, the FFT, but I've been told that on less ideal
calculations that require global communication it self-partitions into
"islands of performance."

As the alternative, think tiny, low power stream processors with a
sizzling commodity interconnect, not yesterday's embedded processor
with an undersized interconnect and a custom router. Custom Cray
processors don't make any sense? Probably not anymore. Commodity is
the right word. The national lab's latest pick just chose the wrong
commodity (out of date embedded microprocessor) to build on.

To paraphrase Eisenhower: beware the academic-bureaucratic complex!
The academic-bureaucratic complex can move the DoE and the national
labs, but not the energy industry.
The technology of the petroleum industry hasn't really changed that much in
30 years - a couple of new processes to bolster fine tuning of existing
octane production... and banish aromatics and other unsaturates. The
biggest change has probably been the disappearance of the small
"tea-kettle" refiners.
Seismic tomography just keeps getting better and better, the cost of
finding new oil has *dropped* over the last quarter century or so
(because people have gotten much smarter about where they look), and
people keep revising their estimates of what can be extracted upward
(sometimes, admittedly, not always with complete honesty).
Could be but OPEC has scientists too - they must know that renewables and
hydrogen fail on umpteen fundamental, scientific/economic counts.
I read somewhere (National Academy of Sciences, I think) that hydrogen
as a fuel has about the same time horizon as a mars mission. Biofuels
are more realistic. They're not getting the emphasis because they
don't keep the attention of the weapons scientists (hydrogen
economy==hot fusion). If the price of oil were stable at a
sufficiently high level, renewables would become much more of a
player.
And of course the petroleum companies are well placed to make that decision
to "displace".
Maybe not as well placed as they'd like, but, to the extent they're
playing, that's the game.
OK - marketing score.... au suivant!;-)
Itanium-based Enterprise servers have all the RAS features of z-series
and they are much, much less expensive.
But Hypertransport and PCI-Express play together - stomping is not
required. When Intel does its on-chip memory controller they'll need
something equivalent to HyperTransport;
no doubt AMD will develop from what
they have. Basically Intel has not been allowed to proprietarize their
"platform"... the game is open for the foreseeable future.

Right. As long as you're willing to stick a bridge in there, you can
hook up to infrastructure that's driven by Intel architecture.

RM
 
On Tue, 08 Feb 2005 22:45:53 -0500, Robert Myers wrote:

Seeee, that's where we differ. I'm a "latency" bigot, and I understand
that my problem is bigger than yours. Bandwidth is too easy.
The engineer's mistake: thinking a problem is important because it's
hard. The current memory latency to processor cycle time ratio is a
couple hundred. Did _anybody_ think we'd get away with that?
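
Back of the envelope, with round numbers of my own: ~70 ns to DRAM on a 3
GHz core is 70 x 3 = 210 clocks per miss, so one miss every couple of
hundred instructions is already enough to roughly double the run time of
code that would otherwise sustain an instruction per clock.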

Latency is not the enemy. Unpredictability is the enemy. With
sufficiently predictable dataflow, you can fake latency, but you
_cannot_ fake bandwidth.
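
Here is what "faking latency" looks like on a perfectly predictable stream,
as a sketch (__builtin_prefetch is the GCC/Clang builtin; the distance of 64
elements is an illustrative guess, not a tuned value):

/* Summing a long array: the addresses are known arbitrarily far in
 * advance, so a prefetch issued 64 elements ahead overlaps the DRAM
 * access with useful work and the miss latency drops out of the
 * critical path. */
double sum_stream(const double *a, long n)
{
    enum { PF = 64 };                 /* prefetch distance, in elements */
    double s = 0.0;
    for (long i = 0; i < n; i++) {
        if (i + PF < n)
            __builtin_prefetch(&a[i + PF], 0, 0);   /* read, low locality */
        s += a[i];
    }
    return s;
}

The latency is hidden, but every byte still crosses the memory bus, so the
bandwidth bill gets paid in full. A pointer chase, where the next address
isn't known until the current load returns, gives you nothing to prefetch;
that's the sense in which unpredictability, not latency as such, is the
enemy.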

You have unpredictable data and need global access with low latency?
Where can I buy something that does that...cheap?

Hardware is what you understand. Hardware is the topic of the group.
The limits to what you can do with hardware to beat latency are
really...hard.

On the other hand, you have to work really, really hard even to fake
randomness. Most of the gains to be made in beating latency are in
software.

What they're doing (or not) should be instructive. They have the bux to
force the issue if they see some profit at the end of the tunnel. Since
apparently they don't (correct me if I'm wrong)...
The people who are doing work where there's real money to be made
aren't necessarily advertising what they're doing. That leaves the
impression that the guys with all the color plots on the web are the
ones doing the real work. They aren't.

A clue to that reality came out on comp.arch when I took exception to
a DoE claim that Power was the leading architecture for HPC. That
exchange smoked out the existence of huge oil-company clusters of x86
that could show up on Top 500 but don't (why would they?). There is
custom hardware in use in biotech.
My *strong* intuition is opposite of yours, apparently. I really, really,
believe we're latency bound, not bandwidth bound. All the work seems to
be going into trying to excuse latency.
Latency is incredibly important for performance of big boxes where
unpredictable contention for shared objects is the bottleneck. Since
those big boxes are designed for such (commercial) applications,
that's where the money and the effort go.
The guy with the checkbook wins. The guy with the biggest one can afford
to dabble in new things like Itanic or Cell. At least the jury is still
out on one of these. ;-)

If you stand _way_ back, some important technologies have been frozen
for a long time: the internal combustion engine, rockets, jet engines,
turbines, electric motors and generators: the mainsprings of
industrialized civilization. Microprocessors, which are mostly just
shrunk down versions of what was pretty well developed by the sixties
are going to be the same way? Maybe. Intel made a bad bet on Itanium
changing the rules. It didn't and it's not going to, although Intel
might still use it successfully to fence off part of the market. I'm
betting on the whole paradigm of microprocessors to change from
fetching of instructions and data from memory to cache and registers
to on the fly processing of packets. I could be just as wrong about
that as Intel was about VLIW.

RM
 
rmyers1400 said:
The engineer's mistake: thinking a problem is important because it's
hard.
Bullshit.

The current memory latency to processor cycle time ratio is a
couple hundred. Did _anybody_ think we'd get away with that?

Irrelevant what "they" thought. "They" knew we'd have to because it's
a real problem.
Latency is not the enemy. Unpredictability is the enemy. With
sufficiently predictable dataflow, you can fake latency, but you
_cannot_ fake bandwidth.

Again, bullshit. If you know the answer, why are you calculating it?
You can *buy* bandwidth. ...and no you can't fake latency. You can
guess at what you'll need, but when you guess wrong you still have to
pay the piper.
You have unpredictable data and need global access with low latency?
Where can I buy something that does that...cheap?

That's the point. One can't buy latency. It's not even expensive.
One *can* buy bandwidth, but once you have enough, more doesn't matter.
That's not true with latency. Lower is *always* better.
Hardware is what you understand. Hardware is the topic of the group.
The limits to what you can do with hardware to beat latency are
really...hard.

Exactly. 299,792,458 m/s isn't just a good idea. It's the law.
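
Round numbers of my own, just to make the point: light covers about 30 cm
per nanosecond, so a 3 GHz clock buys roughly 10 cm per cycle; a round trip
across a 15 cm board is a nanosecond, three clocks, before a single
transistor has switched, and signals in copper traces move at roughly half
the speed of light, so in practice it's worse.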
On the other hand, you have to work really, really hard even to fake
randomness. Most of the gains to be made in beating latency are in
software.

It's easy to come close enough to stall a pipe.
The people who are doing work where there's real money to be made
aren't necessarily advertising what they're doing. That leaves the
impression that the guys with all the color plots on the web are the
ones doing the real work. They aren't.

I don't see you forking over a couple of hundred $Megabux to solve your
problems. I'm sure I could direct you to the appropriate people if
your pockets are that deep (as Del has told you before).
A clue to that reality came out on comp.arch when I took exception to
a DoE claim that Power was the leading architecture for HPC. That
exchange smoked out the existence of huge oil-company clusters of x86
that could show up on Top 500 but don't (why would they?). There is
custom hardware in use in biotech.

Latency is incredibly important for performance of big boxes where
unpredictable contention for shared objects is the bottleneck. Since
those big boxes are designed for such (commercial) applications,
that's where the money and the effort go.

Bingo! When you can convince the monkeys-with-money there's as much
money in Cray-1's, you'll have 'em coming out of your ears. I believe
strongly in the "existence theorem".
If you stand _way_ back, some important technologies have been frozen
for a long time: the internal combustion engine, rockets, jet engines,
turbines, electric motors and generators: the mainsprings of
industrialized civilization.

Sure. These things have already had the innovation squoze out of 'em.
When you're up against the Carnot efficiency, where's the money?
Microprocessors, which are mostly just
shrunk down versions of what was pretty well developed by the sixties
are going to be the same way? Maybe.

IMO, yes. Note that "shrunk down" improves latency. ;-)
Intel made a bad bet on Itanium
changing the rules. It didn't and it's not going to, although Intel
might still use it successfully to fence off part of the market.

Sure, and that's why I guessed it would fail six or seven years ago.
What does Itanic bring to the table that isn't already there? Why
would a money-monkey want to jump onto a proprietary platform? Indeed,
why would they ever consider the expense of *moving* to one?
I'm
betting on the whole paradigm of microprocessors to change from
fetching of instructions and data from memory to cache and registers
to on the fly processing of packets. I could be just as wrong about
that as Intel was about VLIW.

<shrug> Could be, not that it matters much at the end of the day.
 
@comcast.net says...

That's the point. One can't buy latency. It's not even expensive.
One *can* buy bandwidth, but once you have enough, more doesn't matter.
That's not true with latency. Lower is *always* better.

Lower latency just means you have more slop in scheduling, that's all.

There _are_ circumstances, like transaction processing, where you
can't do much to beat latency. I don't do those kinds of
applications. Neither does anybody doing computation, as opposed to
transaction processing.

The enemy, to repeat, is unpredictability. If you're stuck with
unpredictable data flow, you're stuck with unpredictable data flow.
With a 200:1 memory access:CPU cycle time, you're going to spend much
of your time stalled, anyway, which, in fact, is exactly the way that
CPU's involved in transaction processing behave.

Bingo! When you can convince the monkeys-with-money there's as much
money in Cray-1's, you'll have 'em coming out of your ears. I believe
strongly in the "existence theorem".

Money, fortunately, isn't the answer. The national labs have a very
short time horizon, despite their visionary claims. Venture
capitalists aren't interested, generally speaking--same reason. DARPA
might be interested if you could do an autonomous vehicle, but the
problem there is algorithms, not hardware. The streaming hardware is
coming, anyway, thanks to ATI, and nVidia for sure, maybe
IBM/Sony/Toshiba if only I understood Cell better.

IMO, yes.

Well, I'm hoping you're wrong. Evidence coming in for GPU's is that
there is a revolution coming there. If there is a show-stopper, it
will be _bandwidth_.

RM
 
Lower latency just means you have more slop in scheduling, that's all.

Or like most of the world, you *cannot* schedule. If scheduling were easy
Itanic would still be floating. The existence theorem says that it's not.
Thus latency is the lynchpin to performance. Maybe *you* have some
"embarasingly linear" data flow, but the real world doesn't. Were this
the general case we'd have a P-V with a hundred-stage pipe. We don't;
there's that ugly existence theorem at work again.
There _are_ circumstances, like transaction processing, where you can't
do much to beat latency. I don't do those kinds of applications.
Neither does anybody doing computation, as opposed to transaction
processing.

Really? You don't care about conditions? ...don't need precise
exceptions? You're by *far* in the minority, Robert. If your data is so
homogenous, what's the interest in computing the answer. It'll always be
the same.
The enemy, to repeat, is unpredictability. If you're stuck with
unpredictable data flow, you're stuck with unpredictable data flow. With
a 200:1 memory access:CPU cycle time, you're going to spend much of your
time stalled, anyway, which, in fact, is exactly the way that CPU's
involved in transaction processing behave.

Thank you. The processors doing this kind of work outnumber your
"workload" by 1E10:1, at least.
Money, fortunately, isn't the answer.

It is, if you want to make *your* widget. Otherwise others get to make
*their* widget: you lose.
The national labs have a very
short time horizon, despite their visionary claims. Venture capitalists
aren't interested, generally speaking--same reason. DARPA might be
interested if you could do an autonomous vehicle, but the problem there
is algorithms, not hardware.

Hmm, seems Itanic hit that same 'berg. There's that ugly "existence
theorem" again.
The streaming hardware is coming, anyway,
thanks to ATI, and nVidia for sure, maybe IBM/Sony/Toshiba if only I
understood Cell better.

Ah, but that pisses you off even more, because their market has no need
for a DP FPU. ...and you can't afford to have them make one for you that
does. There's that money thing again, and the dreaded "existence theorem".
Well, I'm hoping you're wrong. Evidence coming in for GPU's is that
there is a revolution coming there. If there is a show-stopper, it will
be _bandwidth_.

Nah, it'll be latency. The pipes will still be starved. The bandwidth
will match the *number* of pipes. The pipes themselves still will be
latency limited.
 
Mobility. Faster gates at lower voltage, smallest possible voltage
being the goal of low power operation.

We've been hearing the same tune from the optics guys for a quarter of a
century too. ...nothing interesting yet.
 
Most of Europe is err, overrun by Germans now... in a slightly different
way of course but the effect is similar: they buy up companies that are
going through a weak spell,

Like Chrysler? They're enough to put DB through a weak spell!
....forever. I still don't understand *what*they*were*thinking*. Sorta
like HP buying the 'Q' and then dumping Alpha. ...another of the
great corporate *what*they*were*thinking* moments. At least the latter
has been admitted now.
which have something they want, e.g. Bentley,
Rolls Royce et al. and move production to the err, Fatherland. Much of
this is against Euro-rules now of course, e.g. the Siemens division
transformation to Infineon and "move" from U.K. to Germany but
apparently there are "ways". Now the French car companies, after a
period of reasonable success, are showing signs of flagging a bit and
I'm just waiting for VW to put a move on Peugeot or Renault... talk
about putting the cat among the pigeons.... Sacre Bleu!!:-)

;-) Then there's Ford buying Jag. Huh? Volvo was almost as bad.
OK there may be something to it in Detroit - I know that the engineers
there used to despair when accountants and/or unions would specify that
a wedge chamber would be the most err, "effective", to save a coupla
bucks or lighten the "work". Taking a global view though, to make a
fuss about "bringing back the Hemi", all seems a bit feeble. - I mean
everybody *knows* that's how you do it.

But they *did* it. Note that I'm not about to buy one. I wouldn't touch
another of those Chrysler steaming piles of dung (I've had four in
twenty years).
Their view of top-down thinking?:-)

They give out the crap to the drones. After all a tier-one laptop costs
about what my chair costs for a week. Can't invest that much!

My real plan is to get-outta-Dodge. It seems the other half is having
second thoughts (son, future wife, maybe grand-kiddies??).
BTW, completely OT here but if you haven't come across
http://diplomadic.blogspot.com/ yet it's well worth a visit, since it's
being wound up and has some hair-raising stuff on GW, Oil-for-Food and
yes, tsunami relief... much of it straight from first-hand witness. I
only found it recently so if you're already in the know......

I looked briefly today, but didn't even have time to figure out how to
navigate the site.
 
I was referring to aftermarket automotive radios being quite
affordable.
$20?

Is the OEM radio "better built" and thus more costly to
make?

Generally. ...at least more so than a $20 radio.
Maybe, but the point is, it was not a great "feature" for a car
to have a kool AM/FM cassette radio - they simply were not uncommon or
expensive.

Sure, but not $20 either.
 
Mobility. Faster gates at lower voltage, smallest possible voltage
being the goal of low power operation.

http://www.eetimes.com/at/news/OEG20031217S0020

I've got a decent physics education, but I'm not a solid state
physicist and certainly not a device engineer. I am pretty quick with
google:

nanotube transistor mobility electron OR carrier.

Carbon nanotubes also have very attractive thermal properties. They
also currently cost about as much, pound for pound, as industrial
diamonds.

Yeah, yeah I know what's being said. The fact that nanotubes are going to
solve a multitude of other problems and bring us micro-motors and the like
arouses suspicion, from my POV... one solution fits all?
Don't know how to evaluate that. There's a company nearby I could
walk to that thinks it's going to revolutionize memory (memory always
comes first, doesn't it?) surviving on venture capital. They'd better
come up with something pretty quick.

Memory just looks easy I guess. What about organic cell memory which was
supposed to be commercialized 5 years ago? I've been hearing about optical
memory as a replacement for main memory for 30 years - the closest thing we
have is the DVD.:-)

AMD-based server doesn't even make it into the top ten on $/tpmc:

http://www.tpc.org/tpcc/results/tpcc_results.asp?print=false&orderby=priceperf&sortby=asc

Power doesn't. Xeon does. Itanium does. I doubt very much that
anybody at Intel is in a panic.

So the 1st Opteron is 12th - BFD! The rules are about to change as
evidenced by the 64-bit boost with RHEL 4.0. Hell even Dell is *saying*
it's tempted. Note we still do *not* have a 64-bit comparison between
EM64T and AMD64 - gotta wonder why!
Watch for Intel to push RAS. And push, and push, and push. Think
Centrino.

More err, marketing? What are they going to invent as a name here?
Eventually that wears thin and especially in the server space.
No, but the $/tpmc numbers do. RIP Ken Olsen. I didn't mean to imply
that Keith agrees with me, but we have discussed Intel's mainframe-envy
and how it plays out as a business strategy.

$/tpmc numbers are fungible enough and any disparity you see in what's
published is small enough, that I don't see any big deal here - Opteron is
in the premier league by that measure and is about to leap up the table.
BTW I meant that I thought Keith would have said the same as I... as far as
Intel locking enterprise apps onto Itanium... enter David Spade.:-)
Oh, the cluster poster to comp.arch who has such contempt for my
wisdom got on my case for dissing the Earth Simulator, too.
Megabureaucrat projects: think Donald Trump. Real estate, staff,
power, ego. Earth simulator plainly does well on my touchstone
calculation, the FFT, but I've been told that on less ideal
calculations that require global communication it self-partitions into
"islands of performance."

It reminds me how, in the 80s, the "pundits" of the U.S. specialist
computer press were wringing their hands over the Japanese promise to...
As the alternative, think tiny, low power stream processors with a
sizzling commodity interconnect, not yesterday's embedded processor
with an undersized interconnect and a custom router. Custom Cray
processors don't make any sense? Probably not anymore. Commodity is
the right word. The national lab's latest pick just chose the wrong
commodity (out of date embedded microprocessor) to build on.

Infiniband will rise from the ashes? I dunno.
The academic-bureaucratic complex can move the DoE and the national
labs, but not the energy industry.

That's the trouble - just noise. Hydrogen as a fuel is a red herring.
I read somewhere (National Academy of Sciences, I think) that hydrogen
as a fuel has about the same time horizon as a mars mission. Biofuels
are more realistic. They're not getting the emphasis because they
don't keep the attention of the weapons scientists (hydrogen
economy==hot fusion). If the price of oil were stable at a
sufficiently high level, renewables would become much more of a
player.

Look at the scale - bio-fuels are all wrong. I don't have what I'd call an
accurate number but I read somewhere that it takes 70% more energy to
produce bio-ethanol than you get back from it - sounds reasonable to me.
The cheapest ethanol is still produced by the petro-chemical industry from
ethylene and it's a much higher cost than gasoline. The bio-ethanol being
offered for FFV vehicles is/was Mr Daschle's pork barrel.
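
Taking that figure at face value, the arithmetic is ugly: 1.7 units of
energy in for every 1 unit out is an energy return of about 0.6, i.e. a net
loss of roughly 40% on every gallon before you even get to the price.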

As for hydrogen, I dunno where hot-fusion came into it, but I see no viable
solution to the production, distribution, transport, portability problems
as a fuel. Hell we hear people talking about "liquid hydrogen" as though
you can actually do it. Some himbo/bimbo on CNN trots it out and it
becomes folklore! It's at that point that the "expert" being "interviewed"
clams up, does a shit-eating grin... instead of telling the truth.
Maybe not as well placed as they'd like, but, to the extent they're
playing, that's the game.

And they'll play the game when the stakes are right - they know how far in
the future that is.
Itanium-based Enterprise servers have all the RAS features of z-series
and they are much, much less expensive.

And we've all paid, subsidized that "much, much less expensive" by buying
x86 systems for the last 10 years or so. Do you really think Itanium can
be self-financed? We'll see.
Right. As long as you're willing to stick a bridge in there, you can
hook up to infrastructure that's driven by Intel architecture.

So Hypertransport was too bitter a pill for Intel to swallow.<shrug>
 
We've been hearing the same tune from the optics guys for a quarter of a
century too. ...nothing interesting yet.

For the record, in light of the reply I just posted, let us stress that you
and I are not collaborating here.:-)
 