The chance to break into Dell's supply chain has passed.

  • Thread starter: Robert Myers
Intel thought it was taking the best ideas available at the time it
started the project. IBM had a huge investment in VLIW, and Elbrus
was making wild claims about what it could do.

IBM never had a "huge investment" in VLIW. It was a research project, at
best. OTOH, Intel has a *huge* investment in VLIW, and it's a bus
that isn't going anywhere. It's too easy for us hardware folks to toss off
the hard problems to the compiler folk. History shows that this isn't a
good plan. Even if Intel *could* have pulled it off, where was the
incentive for the customers? They have a business to run and
processor technology isn't generally part of it.
Somebody who doesn't actually do computer architecture probably has a
very poor idea of all the constraints that operate in that universe, but
I'll stick with my notion that Intel/HP's mistake was that they had a
clean sheet of paper and let too much coffee get spilled on it from too
many different people.

That was one, perhaps a big one. Intel's real problem, as I see it, is
that they didn't understand their customers. I've told the FS stories
here before. FS was doomed because the customers had no use for it and
they spoke *loudly*. Itanic is no different, except that Intel didn't
listen to their customers. They had a different agenda than their
customers; not a good position to be in.
Nobody needs a home computer, and worldwide demand for computers will be
five units.

640K is enough for anyone, yada-yada-yada. It's all about missing the
point. Customers rule, architects don't.
The advantages of streaming processors are low power consumption and high throughput.

You keep saying that, but so far you're alone in the woods. Maybe for the
codes you're interested in, you're right. ...but for most of us there are
surprises in life. We don't live it linearly.
The belief was (I think) that the front end part was sufficiently
repetitive that it could be massaged heavily to deliver a very clean
instruction stream to the back end. The concept isn't completely wrong,
just not sufficiently right.

I worked (tangentially) on the original TMTA product. The "proof of
concept" was on MS Word. Let's call it "VC irrational exuberance". Yes
there was lots learned there, some of it interesting, but it came at a
time when Dr. Moore was still quite alive. Brute force won.
That's what IBM (and Intel and probably Transmeta, although they never
admitted it) probably wanted to do. For free, you should get runtime
feedback-directed optimization to make up for the overhead of morphing.
That's the theory, anyway. Exception and recovery may not be the
biggest problem, but it's one big problem I know about.

As usual, theory says that it and reality are the same. Reality has a
different opinion.
 
Any servers that Lenovo sells won't be allowed to have the IBM name on
them, but they'll likely be able to sell Lenovo-branded servers nonetheless.

That's the way I read the tea-leaves too.
They'll be able to sell Lenovo IBM-branded products as add-ons to
server sales from both IBM and Lenovo.

Sure. As long as the products live within the contract. IBM will be
selling IBM-branded Lenovo stuff for some time too. I really don't see a
big change here, other than ownership of a marginally (un?)profitable
enterprise.
The only thing that will kill Dell is Intel's inability to support them
anymore.

I think their big vulnerability is getting squoze from the top and the
bottom. Sure they can make $400 systems with the best of 'em, but where
are they going to grow? HPaq has a worse problem, IMO. They can't go
either way, but they do have a lot of ink to sell.
 
Robert said:
Oh, come on, Yousuf, I was making a joking reference to the comments
of Watson of IBM about the worldwide need for computers (about five
should do it, he opined), and Olsen of DEC on the need for computers
in the home (not needed at all). I unsnipped your comment, without
which the exchange makes no sense at all.

And I snipped them again, because even with them in, it still makes no
sense whatsoever. How old do you think I am, to have gotten that
reference? Even if I were an old fogey, it's doubtful I would've gotten
that reference without at least a reminder about who said it. Or at
least quotes around it to say it's a quote.
Your dismissal of Cell may
be correct, but I don't think there's enough evidence anywhere for
anybody to draw any conclusions of any kind. I made reference to the
Watson and Olsen opinions as a reminder of just how wrong people can
be. Olsen didn't think the home computer was meeting anybody's needs,
either.

I think it's safe to assume it's going to fail to live up to its hype.
The hype being that it'll sweep the world in every field including PCs.

And likely the comments that Olsen and Watson made about the lack of
demand for home computers were completely right for the times they were
uttered. The first PC was still likely decades away at those points in
time. Even Bill Gates' infamous, "640K oughta be enough", was probably
right on the money for that point in time.

However, the Cell is almost present-day technology now, and it's pretty
easy to see where it's going to go because it's not so far away.
But it's processor state, not instruction sets, that's the problem.

What do you mean?

Yousuf Khan
 
Well, actually AMD has taken care of the systems engineering problem
completely for Dell. It created an ecosystem straight away for Opteron,
not just motherboards but complete barebones systems from Newisys. It
was so easy to make an Opteron system that people like IBM couldn't find
any excuse not to go with Opteron this time. That's not to say that IBM
is thrilled to have to sell Opterons; it would much rather
concentrate on Power and possibly Xeon, but it simply has no excuse not
to. So IBM is doing its most minimal job at selling Opterons.

Kinda like OS/2? IBM isn't about doing what others easily can. It can be
described as a one-stop supermarket. "If you *really* want it, we have it!"
So Dell has no excuse from a systems engineering point of view either.
But it does still have the marketing funds issue which I gather is much
more important to it.

Dell - systems engineering? Is that like "military intelligence"?
AMD has been fine so far without it. AMD should really start asserting
itself and say that it is not expecting to sell anything to Dell. Even
when Dell says nice things about AMD, AMD should immediately put the
kibosh on the rumours. That'll really drive Dell nuts; it'll ruin their
negotiations with Intel. And it should continue doing that quarter after
quarter, that way Dell will only get regular discounts from Intel. When
Dell gets only regular discounts, then that puts all of Dell's
competitors on a level playing field against them.

I agree with the first few sentences. AMD should flat out tell the world
that they're not going after Dell, never! The second half I don't so much
agree with. Neither Intel nor Dell particularly cares.
 
Robert said:
You don't think IBM's involvement with the process technology has
something to do with it selling Opteron? They're in bed with AMD. I
look at it the other way around. When Intel looks at them fiercely,
they can just shrug their shoulders and say, "What can we do? We
gotta pay our process guys, you know."

The only thing that IBM's chip division tells its server division to
sell is Power, nothing else. Actually that's probably coming down from
the executive board of IBM, rather than one division to another.

IBM's chip division is in bed with AMD. IBM's server division is in bed
with Intel for Xeon.
That gets to a level of speculation about how the big boys play the
game that I wouldn't want to get to. I'll buy the China thing. If
AMD can crack that market and if (say) Lenovo can make decent inroads
in the server space, then maybe it would be something significant for
AMD. It works in China just like it works anywhere else, maybe worse,
because it's probably a little more tolerant of the business practices
of Intel, which is building plants in China.

Well, so is AMD. Neither is building anything like a full-fledged chip
plant in China, just packaging plants. It's likely that AMD will be the
first to build a full chip plant in China though, as the subsidies in
Europe are drying up. Ireland just had to withdraw a promise of
subsidies to Intel for its Irish plant, because the EU overruled it.
I'm sure you think I'm out to sell diminished prospects for AMD. I'm
not. I just don't see a path for AMD to turn technical superiority
into significantly greater sales.

It's a matter of them playing dirty like Intel. It's the only way to do it.

Yousuf Khan
 
Yousuf Khan said:
The only thing that IBM's chip division tells its server division to
sell is Power, nothing else. Actually that's probably coming down from
the executive board of IBM, rather than one division to another.

You smoking that BC bud again, up there in canuckistan?
IBM's chip division is in bed with AMD. IBM's server division is in bed
with Intel for Xeon.
I don't even know what this sentence is supposed to mean. Maybe you
didn't notice IBM's last reorganization?
This is almost as funny as the stuff from "the sun never sets on ibm"
about how ibm deliberately made S/3 not be 360 compatible....

snipitee doo dah.

del
 
Yousuf Khan said:
They do need home electronics though. The sooner they can bring PC
technology into the realm of home electronics the better. I'm surprised
they can't get the cost of these things down any further. They were
making huge strides in reducing prices until now.
I snipped it all, although I can't believe that someone educated in
computers would be ignorant of both watson's and olsen's remarks along
with Gary Kildall flying and gates' 640k.

I was just out at sam's club the other day, and they were selling, for
550 bucks retail or the cost of a nice middle of the road TV, a Compaq
AMD system with a 17 inch flat CRT monitor (not lcd), 512MB, 180 GB
disk (might have been 250, don't remember for sure), XP, about 8 USB
ports, sound, etc etc. Even a little reader for the memory cards out of
cameras right on the front.

Computers are already in the realm of home electronics.

del cecchi
 
Intel does only as well as it has to, I'm sure. That's what infuriates
many techies, but to a business type looking at how Intel plays its
cards, they'll do just as well as they have to to stay at the
table... that's the Intel guarantee.

Possibly true but they do drop the ball often enough and, as in this case
of server chipsets, there are holes in their pool of expertise. They seem
determined, once again, to take over the supply of server chipsets - we'll
see how it works.

I'm not sure if you can stand to read this:
http://www.anandtech.com/mb/showdoc.aspx?i=2264 -- the self-indulgent
writing style is heavy going -- but apparently the message is that Intel *is*
worried about AMD's offerings.
I'll guess there are too many people watching Dell in a way that Crazy
Eddie was never watched and even Enron was never watched.

I didn't say they won't get away with it for a while - even Crazy Eddie
did. Nothing's forever and they are playing with the sharp end of the
stick and, relatively speaking for their size, they are not long on assets.
You may be right about Lenovo, but that deal is surely structured so
that Lenovo can't touch the server space.

It doesn't get IBM's server business but I'm pretty sure it is not going to
drop its existing server stuff.
I'm skeptical that it actually works that way above four processors.
Take a look at tpmC sorted by raw performance

http://www.tpc.org/tpcc/results/tpcc_results.asp?print=false&orderby=tpm&sortby=desc

I think the first Opteron entry is a RackSaver QuatreX-64 Server 4P,
with a score of 82,226, with Power and Itanium up in the millions.
It's true, the $/tpmC is very attractive at $2.72, but the claim you
are making is about scalability. I think AMD has designed a sizzling
chip for the 4P space.
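(At $2.72/tpmC, by the way, that 82,226-tpmC box prices out at roughly
$224K for the whole system, if my arithmetic is right.)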

There are Opterons higher up than that in the table, e.g. 130,623 from
HP, still with 4P, and though that does cover a significant part of the
server market, we have no idea how Opteron will perform with more
processors in a commodity clustering mechanism, *with* industrial strength
software, i.e. not M$ SQL Server. Price for price, it sure beats the hell
out of Xeon anyway and is on a par with Itanium 2s at the 4P size.
HP or Sun is going to save itself by becoming the king of low-priced
4P Opteron servers, the space that IBM and Dell have left open for
them? Just writing the sentence down would make me want to sell the
stock of either. I'm sure Lenovo is a cause for concern on Dell's
part.

Well that's not what I said but I was being a little facetious with HP and
Sun... the point being that there is lots of competition which has more
control of its future - i.e. they are not blinkered with a single source
for CPUs, chipsets, mbrds etc. Dell *used* to use Serverworks in their
servers but that option was nixed by Intel.
HP's future is itanium. Sun doesn't have a future. If something
kills Dell, it won't be Dell's failure to adopt AMD that does it.

Itanium is dead!... it just won't lie down.:-) Sun is currently claiming
to be doing rather well with their Opteron offerings. Dell will
destroy itself - it's the margins; IMO they are incapable of weathering the
inevitable storm.
 
Possibly true but they do drop the ball often enough and, as in this case
of server chipsets, there are holes in their pool of expertise. They seem
determined, once again, to take over the supply of server chipsets - we'll
see how it works.
Someone compared Intel to GM in one of these forums. Think of what a
lame designer and manufacturer of cars GM is. Still number 1.
Doesn't please anybody to hear things like that in a techie forum, but
when anybody tells me that Intel or IBM is going to get knocked out of
a space in which they are a significant player, my question is, "Tell
me how." As to expertise, I have a hard time imagining that it can't
be bought.
I'm not sure if you can stand to read this:
http://www.anandtech.com/mb/showdoc.aspx?i=2264 -- the self-indulgent
writing style is heavy going -- but apparently the message is that Intel *is*
worried about AMD's offerings.
Erg. Some speculation I didn't understand about how they think the
market split is 70/30 rather than 80/20. I'll have to read it again
when I'm _really_ bored. If the point is whether Intel is worried
about AMD, if they weren't, I'd be convinced that they'd really
completely totally lost it, but I don't think things have slid quite
that far. :-).

There are Opterons higher up than that in the table, e.g. 130,623 from
HP, still with 4P, and though that does cover a significant part of the
server market, we have no idea how Opteron will perform with more
processors in a commodity clustering mechanism, *with* industrial strength
software, i.e. not M$ SQL Server. Price for price, it sure beats the hell
out of Xeon anyway and is on a par with Itanium 2s at the 4P size.

We'll see. My point was that we don't yet know how Opteron scales
beyond 4P. A skeptic might guess that there's a reason, but you'll
just think I'm AMD-bashing. We just have to wait and see.

As to clusters, I'm assuming that the cluster and smp markets for
business don't overlap much, but now you're arguing connectivity
beyond hypertransport.

The counter to your whole line of argument about scalability is that
the advantage of the onboard controller has to diminish with
increasing processor count. The hundred nanoseconds you save becomes
less significant if the average latency is more like half a
microsecond.
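Back-of-the-envelope: 100 ns is half of a 200 ns access, but only a
fifth of a 500 ns one, and the fraction keeps shrinking as the machine
gets bigger.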
Well that's not what I said but I was being a little facetious with HP and
Sun... the point being that there is lots of competition which has more
control of its future - i.e. they are not blinkered with a single source
for CPUs, chipsets, mbrds etc. Dell *used* to use Serverworks in their
servers but that option was nixed by Intel.
Well, no, I twisted your remark. ;-). As I said, it remains to be
seen how much of an advantage Opteron is beyond 4P.
Itanium is dead!... it just won't lie down.:-)

I'll be fascinated to see. HP will abandon Itanium and the big SMP
space? They have to have some kind of entry into that market for
their consulting business. The chip to beat in that space is Power
and the chosen candidate to beat it (complete with RAS that intel is
never going to give to x86) is Itanium. I'll agree, Itanium is an
awkward deal for just about everybody (except for those of us who do
SpecFP type computing), but I don't see that any of the invested
players have an option to abandon it. Bull, Fujitsu,... everybody
needs a chip that's not a PC chip.
Sun is currently claiming
to be doing rather well with their Opteron offerings. Dell will
destroy itself - it's the margins; IMO they are incapable of weathering the
inevitable storm.

Even from my vantage point of profound ignorance, Sun's survival
depends on lots of things other than Opteron, like the SCO-Linux suit, for
example. Who knows.

RM
 
IBM never had a "huge investment" in VLIW. It was a research project, at
best. OTOH, Intel has a *huge* investment in VLIW, and it's a bus
that isn't going anywhere. It's too easy for us hardware folks to toss off
the hard problems to the compiler folk. History shows that this isn't a
good plan. Even if Intel *could* have pulled it off, where was the
incentive for the customers? They have a business to run and
processor technology isn't generally part of it.
You mean the work required to tune? People will optimize the hell out
of compute intensive code--to a point. The work required to get the
world-beating SpecFP numbers is probably beyond that point.
That was one, perhaps a big one. Intel's real problem, as I see it, is
that they didn't understand their customers. I've told the FS stories
here before. FS was doomed because the customers had no use for it and
they spoke *loudly*. Itanic is no different, except that Intel didn't
listen to their customers. They had a different agenda than their
customers; not a good position to be in.
If alpha and pa-risc hadn't been killed off, I might agree with you
about Itanium. No one is going to abandon the high-end to an IBM
monopoly. Never happen (again).

I gather that Future Systems eventually became AS/400. We'll never
know what might have become of Itanium if it hadn't been such a
committee enterprise. The 8080, after all, was not a particularly
superior processor design, and nobody needed *it*, either.

You keep saying that, but so far you're alone in the woods. Maybe for the
codes you're interested in, you're right. ...but for most of us there are
surprises in life. We don't live it linearly.
I'm definitely not alone in the woods on this one, Keith. Go look at
Dally's papers on Brook and Stream. Take a minute and visit
gpgpu.org. I could dump you dozens of papers of people doing stuff
other than graphics on stream processors, and they are doing a helluva
lot of graphics, easily found with google, gpgpu, or by checking out
siggraph conferences. Network processors are just another version of
the same story. Network processors are right at the soul of
mainstream computing, and they're going to move right onto the die.
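If a toy example helps make the paradigm concrete, here's the shape of
it in plain C -- the names (saxpy_kernel, stream_map) are mine, and this
is nothing like Brook's actual syntax. The point is that each output
element depends only on its own inputs, so the hardware is free to
pipeline the memory traffic and run the kernel across as many lanes as
it has:

    /* Toy model of the stream idea; not real Brook or Imagine code. */
    #include <stdio.h>
    #include <stddef.h>

    /* The kernel: a pure function of its own elements, no side effects. */
    static float saxpy_kernel(float a, float x, float y) {
        return a * x + y;
    }

    /* The stream executor: in C it's just a loop, but since no
       iteration depends on another, a stream processor can spread it
       across hundreds of ALUs and hide the memory latency. */
    static void stream_map(float a, const float *x, const float *y,
                           float *out, size_t n) {
        for (size_t i = 0; i < n; i++)
            out[i] = saxpy_kernel(a, x[i], y[i]);
    }

    int main(void) {
        float x[4] = {1, 2, 3, 4}, y[4] = {4, 3, 2, 1}, out[4];
        stream_map(2.0f, x, y, out, 4);
        printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
        return 0;
    }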

With everything having turned into point-to-point links, computers
have turned into packet processors already. Current processing is the
equivalent of loading a container ship by hand-loading everything into
containers, loading them onto the container ship, and hand-unloading
at the other end. Only a matter of time before people figure out how
to leave things in the container for more of the trip, as the world
already does with physical cargo.

Power consumption matters. That's one point about BlueGene I've
conceded repeatedly and loudly.

Stream processors have the disadvantage that it's a wildly different
computing paradigm. I'd be worried if *I* had to propose and work
through the new ways of coding. Fortunately, I don't. It's
happening.

The harder question is *why* any of this is going to happen. A lower
power data center would be a very big deal, but nobody's going to do a
project like that from scratch. PC's are already plenty powerful
enough, or so the truism goes. I don't believe it, but somebody has
to come up with the killer app, and Sony apparently thinks they have
it. We'll see.
I worked (tangentially) on the original TMTA product. The "proof of
concept" was on MS Word. Let's call it "VC irrational exuberance". Yes
there was lots learned there, some of it interesting, but it came at a
time when Dr. Moore was still quite alive. Brute force won.
On the face of it, MS Word doesn't seem like it should work because of
a huge number of unpredictable code paths. Turns out that even a word
processing program is fairly repetitive. Do you know if they included
exception and recovery in the analysis?
As usual, theory says that it and reality are the same. Reality has a
different opinion.

It's still worth understanding why. The only way to make things go
faster, beyond a certain point, is to make them predictable.

RM
 
Robert Myers wrote:


I think it's safe to assume it's going to fail to live up to its hype.
The hype being that it'll sweep the world in every field including PCs.
That's called knocking down a straw man. Sure, there are some game
players getting a little carried away. There is simply no way of
knowing, until it plays itself out, how big a deal this is going to
be. I hope somebody at Intel is paying attention.
And likely the comments that Olsen and Watson made about the lack of
demand for home computers were completely right for the times they were
uttered.

Watson was closer to right than Olsen, and Olsen was completely wrong,
even for his time. The evidence was on the table, although he was two
years ahead of the release of VisiCalc (1977 vs. 1979).
The first PC was still likely decades away at those points in
time. Even Bill Gates' infamous, "640K oughta be enough", was probably
right on the money for that point in time.
Several candidates for the "First PC" had been out for several years
by the time Olsen stuck his foot in his mouth. The Apple I was
released the year before. Gates was an idiot, if he ever said such a
thing, and I don't think he actually did. Think of a 1000x1000 color
bitmap.
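(Do the arithmetic: a million pixels at even one byte apiece is already
about a megabyte, half again the whole 640K space, before the program
stores anything else.)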
However, the Cell is almost present-day technology now, and it's pretty
easy to see where it's going to go because it's not so far away.
Why don't you be a little more specific in your predictions, since
they're so easy to make?
What do you mean?
For itanium, the actual effect of an instruction depends on a great
many past events that have to be kept track of (state). The op-code
appears to act on a few registers. The actual instruction operates on
a space of much larger dimensionality. x86 also has state that is
sufficiently scrambled that it's amazing that vmware can do what it
does. The problem is *much* harder than translating instructions,
especially if you want to take advantage of all of itanium's widgetry
to optimize performance. And for every interrupt, all that state has
to be kept track of and acted upon appropriately, perhaps involving
elaborate unwinding of provisional actions.
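If a toy example helps, here's the flavor in C -- a made-up model of
mine, nothing like the real microarchitecture: even one predicated add
can't be translated as "just an add", because its effect depends on
predicate state the emulator has to model and preserve across every
interrupt.

    /* Made-up model of predicated execution; just the shape of the
       problem, not real Itanium. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    struct cpu_state {
        int64_t gr[128];  /* general registers                       */
        bool    pr[64];   /* predicate registers: hidden input state */
    };

    /* (p) add r1 = r2, r3 -- executes only if predicate p is set, so
       the instruction's effect depends on prior processor state.    */
    static void predicated_add(struct cpu_state *s, int p,
                               int r1, int r2, int r3) {
        if (s->pr[p])
            s->gr[r1] = s->gr[r2] + s->gr[r3];
        /* An interrupt landing here has to save and restore pr[] as
           faithfully as gr[], or the emulated machine silently goes
           wrong. */
    }

    int main(void) {
        struct cpu_state s = {0};
        s.pr[6] = true;  s.gr[2] = 40;  s.gr[3] = 2;
        predicated_add(&s, 6, 1, 2, 3);
        printf("gr1 = %lld\n", (long long)s.gr[1]);  /* 42 */
        return 0;
    }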

RM
 
Robert Myers said:
Even from my vantage point of profound ignorance, Sun's survival
depends on lots of things other than Opteron, like the SCO-Linux suit, for
example. Who knows.

Can anybody here provide a list of Sun's annual sales and profit for
the past 10 years? I suspect that might cast some light on Sun's
ability to survive.

Robert, you seem to be a lot friendlier to Dell than you used to be.
Is it OK now that Dell kicks R&D upstream, away from ~1M white-box
screwdriver shops? ;-)

For many, many years now I've known that *announced* simulators (such
as running IBM PC code on 68000s) are always really fast, but shipping
and debugged simulators are always dog-slow. I never knew why. Now,
with your explanation of processor state, I think I understand.
Thanks.

Give Keith heck. Keith needs a good taking down, and I haven't been
able to do it lately. ;-)

Felger Carbon
who still thinks Dell has the best business plan in the PC industry,
no matter what Geo McD thinks ;-)
 
Can anybody here provide a list of Sun's annual sales and profit for
the past 10 years? I suspect that might cast some light on Sun's
ability to survive.

I don't have ten years handy, but their annual report has five (2000-
2004).
              2004      2003     2002      2001     2000
Net revenue:  $11.2B    $11.4B   $12.5B    $18.2B   $15.7B
Net income:  -$0.388B  -$3.43B  -$0.587B   $0.927B  $1.85B

Give Keith heck. Keith needs a good taking down, and I haven't been
able to do it lately. ;-)

Ah, come on Felg! You can sleep later.
Felger Carbon
who still thinks Dell has the best business plan in the PC industry,
no matter what Geo McD thinks ;-)

Damning with faint praise, eh? Either way, Mike has a great retirement
plan.
 
Robert said:
That's called knocking down a straw man. Sure, there are some game
players getting a little carried away. There is simply no way of
knowing, until it plays itself out, how big a deal this is going to
be. I hope somebody at Intel is paying attention.

The only thing it's guaranteed to be used in is the Playstation, as a CPU.
One of the hypes is that it's going to replace the GPUs in graphics cards,
and the entire x86 processor in PCs. It's not likely going to replace
any of the existing GPUs out there, nor any of the CPUs. Playstation
will in fact continue to have a GPU from Nvidia. Another hype is that
it's going to be used inside Apple Macs soon, again not at all likely.
Watson was closer to right than Olsen, and Olsen was completely wrong,
even for his time. The evidence was on the table, although he was two
years ahead of the release of VisiCalc (1977 vs. 1979).

Several candidates for the "First PC" had been out for several years
by the time Olsen stuck his foot in his mouth. The Apple I was
released the year before. Gates was an idiot, if he ever said such a
thing, and I don't think he actually did. Think of a 1000x1000 color
bitmap.

Well, then maybe Olsen was wrong.

As for Gates, I'm pretty sure his comments were restricted specifically
to the early DOS 1.0 days of the IBM PC with an 8088 processor, when
they usually came with 64K of RAM and not the whole 640K available to
them. I myself got into PCs a little later when 512K and 640K were more
the standard than the optional, and DOS was into the 3.x versions, and
even then 640K was mostly pretty luxurious -- but of course you could
see the day coming when more would be needed and quickly.
Why don't you be a little more specific in your predictions, since
they're so easy to make?

I thought I already was? Just to recap, the predictions are: no Cell in
PCs, no Cell in Macs, and no Cell will replace Nvidia or ATI GPUs.

And I'll add a couple more here. Cell might show up in a couple of IBM
supercomputers. It might even show up in an occasional IBM device, like
a NAS box.
For itanium, the actual effect of an instruction depends on a great
many past events that have to be kept track of (state). The op-code
appears to act on a few registers. The actual instruction operates on
a space of much larger dimensionality. x86 also has state that is
sufficiently scrambled that it's amazing that vmware can do what it
does. The problem is *much* harder than translating instructions,
especially if you want to take advantage of all of itanium's widgetry
to optimize performance. And for every interrupt, all that state has
to be kept track of and acted upon appropriately, perhaps involving
elaborate unwinding of provisional actions.

What, are you talking about saving registers during an interrupt?
That's all done on the stack in an x86 processor.

I'm not sure how that relates to what VMWare has to do. VMWare has to
give itself OS privileges in the CPU, thus kicking the real OS down into
an emulated CPU environment where it thinks it's still the primary
supervisor. The emulation only kicks in when privileged instructions
are executed; otherwise, they are passed straight through to the processor.
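In outline it's the classic trap-and-emulate loop, sketched below in C
with names I've invented (run_guest, emulate_priv_insn,
reflect_interrupt are hypothetical primitives, not VMware's actual
design); the real thing is much hairier:

    /* Sketch of trap-and-emulate, with invented names. The guest OS
       runs deprivileged; when it issues a privileged instruction,
       the CPU faults into the monitor, which emulates it against a
       software copy of the guest's privileged state and resumes. */

    struct guest_state {
        unsigned long regs[16];  /* guest general registers         */
        unsigned long cr3;       /* guest's idea of its page tables */
        int           cpl;       /* guest's idea of its privilege   */
    };

    enum exit_reason { EXIT_PRIV_FAULT, EXIT_INTERRUPT };

    /* Hypothetical primitives a monitor would need. */
    extern enum exit_reason run_guest(struct guest_state *g);
    extern void emulate_priv_insn(struct guest_state *g);
    extern void reflect_interrupt(struct guest_state *g);

    void monitor_loop(struct guest_state *g) {
        for (;;) {
            switch (run_guest(g)) {   /* normal code runs natively  */
            case EXIT_PRIV_FAULT:     /* e.g. guest touched CR3     */
                emulate_priv_insn(g); /* update the software copy   */
                break;
            case EXIT_INTERRUPT:      /* real interrupt arrived     */
                reflect_interrupt(g); /* deliver it to the guest    */
                break;
            }
        }
    }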

Yousuf Khan
 
keith said:
I agree with the first few sentences. AMD should flat out tell the world
that they're not going after Dell, never! The second half I don't so much
agree with. Neither Intel nor Dell particularly cares.

No, they don't particularly care outwardly. However, Intel will do what
Intel always does when it knows it's in control of a situation: it
starts squeezing. At that point it will know Dell has no choice but to
go with Intel, and it will start holding back discounts.

Yousuf Khan
 
Robert Myers wrote:


I thought I already was? Just to recap, the predictions are: no Cell in
PCs, no Cell in Macs, and no Cell will replace Nvidia or ATI GPUs.

Well, let's see. A PC, almost by definition, uses x86 (yeah, I know,
PowerPC. Right). So, no Cell in PC's. Agreed. OTOH, you've already
got Linux on Playstation. You'll have Linux on Playstation with Cell.
No fundamental reason why you need a playstation, a TV, and a PC. In
fact, you probably only need one of the three. I cannot imagine that
Sony _isn't_ thinking that way. Why does anybody need windows when
they can web surf, do their email and maybe some wordprocessing on
their TV?
And I'll add a couple more here. Cell might show up in a couple of IBM
supercomputers.

Somebody will jury-rig some damn fool thing. Probably not IBM, but
how would I know? It won't be serious in this generation, but that
doesn't mean it won't ever be.
It might even show up in an occasional IBM device, like
a NAS box.

What's the payoff for IBM? The hobbyists are the pioneers and
innovators here. Let's wait and see what the crazies do first.
What, are you talking about saving registers during an interrupt?
That's all done on the stack in an x86 processor.

I'm not sure how that relates to what VMWare has to do. VMWare has to
give itself OS privileges in the CPU thus kicking the real OS down into
an emulated CPU environment, where it thinks it's still the primary
supervisor. The emulation only kicks in whenever privileged
instructions are ever executed, otherwise, they are passed straight
through normally to the processor.
For a fact, I haven't a clue as to how vmware does it, because x86
doesn't trap all privileged instructions (nor does itanium, for that
matter).

Instructions act on the state of the processor, not just on registers,
and that's what you have to emulate. You can call that instruction
translation if you like, but it's not what you would naively imagine.
In the case of itanium, that state is incredibly complex because of
predicated instructions (among other things).

Instruction translation not being necessarily atomic, you have the
added problem of what to do when both the virtual and the real
processor are interrupted. It makes my head hurt just to think about
it. It would be fascinating to get a look at the interrupt code for
(say) dynamorio. I'll bet it's a bear, because, of course, dynamorio is
doing real-time optimization. That's not necessarily what you
proposed, but the original idea of code morphing was to get some of
the overhead back through optimization.

RM
 
Can anybody here provide a list of Sun's annual sales and profit for
the past 10 years? I suspect that might cast some light on Sun's
ability to survive.

Robert, you seem to be a lot friendlier to Dell than you used to be.
Is it OK now that Dell kicks R&D upstream, away from ~1M white-box
screwdriver shops? ;-)
Dell customer service is horrible. Period. Maybe corporations who
buy in big quantities get good service. I didn't. Lots of others
haven't. Short of suing them, there is no recourse, and they
absolutely do not care. My local screwdriver shop, which is
admittedly a cut above average, has never let me down.

As to my being "friendlier," I'd like to think that eventually I
adjust to whatever the reality is. The reality is that Dell has
figured out how to make money on practically no margin.

Dorothy Bradbury (I think I've got her name right) suggested that Dell
runs specials the way your local supermarket runs specials: they get a
deal on a railroad car full of canned tomatoes, that week canned
tomatoes are on sale. If they can move them right away, they don't
need to make nearly as much as if they have to finance and warehouse
the inventory. Sounds plausible to me. I actually think Dell's
pricing is sneakier than that, maybe even to the extent, suggested
elsewhere, of being illegal.
For many, many years now I've known that *announced* simulators (such
as running IBM PC code on 68000s) are always really fast, but shipping
and debugged simulators are always dog-slow. I never knew why. Now,
with your explanation of processor state, I think I understand.
Thanks.
I can't tell if you're being serious. I'm amazed that emulators work
at all, but then, I'm amazed that microprocessors work at all.
Give Keith heck. Keith needs a good taking down, and I haven't been
able to do it lately. ;-)

Take Keith down? Wouldn't dream of it. Rather take my old coon dog
and go out hunting bear.

RM
 
On Mon, 07 Mar 2005 02:23:30 -0500, George Macdonald wrote:

Someone compared Intel to GM in one of these forums. Think of what a
lame designer and manufacturer of cars GM is. Still number 1.
Doesn't please anybody to hear things like that in a techie forum, but
when anybody tells me that Intel or IBM is going to get knocked out of
a space in which they are a significant player, my question is, "Tell
me how." As to expertise, I have a hard time imagining that it can't
be bought.

Oh no doubt there's talent... even GM has a tremendous pool of engineers
and scientists in the U.S. and abroad. Note the new models from Australia
- you think these designs were imported because the U.S.
engineers/designers are incompetent?... not at all. With Intel, and
probably G.M., the corporate culture needs to be redefined - remember
"disagree and commit" and "constructive confrontation"... it's like a
conspiracy among HR and PR to "own" the company.:-) Corporations do this
all the time: hire experts and then don't listen to them; when they're
right the marketroids still claim the credit.
Erg. Some speculation I didn't understand about how they think the
market split is 70/30 rather than 80/20. I'll have to read it again
when I'm _really_ bored. If the point is whether Intel is worried
about AMD, if they weren't, I'd be convinced that they'd really
completely totally lost it, but I don't think things have slid quite
that far. :-).

Speculation? Nope, it was straight from the mfrs in Taiwan. I think the
blitz of ".... Technology" bulletins at IDF last week was a sign of how
worried Intel is. This is Intel's way of attracting attention to
non-events in their repertoire: remember CSA... and the Dynamic Addressing
in their memory controller which was part of "Acceleration Technology"?
Where are they now? AMD had the same damned thing as "Dynamic Addressing"
in the Opteron long before.... without a song & dance.
We'll see. My point was that we don't yet know how Opteron scales
beyond 4P. A skeptic might guess that there's a reason, but you'll
just think I'm AMD-bashing. We just have to wait and see.

As to clusters, I'm assuming that the cluster and smp markets for
business don't overlap much, but now you're arguing connectivity
beyond hypertransport.

Why no overlap? Tight grid computing is certainly something that business
could/should get interested in and if there are commodity switches for the
job.....
The counter to your whole line of argument about scalability is that
the advantage of the onboard controller has to diminish with
increasing processor count. The hundred nanoseconds you save becomes
less significant if the average latency is more like half a
microsecond.

Yes but it's no worse than the degradation for other MP systems and if you
have 4 or 8 working in close proximity you still have a gain. I'm not sure
where SUMO/NUMA is on that count but 4/8 CPUs hits a *BIG* piece of Intel's
current server market. Opteron also has more going for it than low latency
local memory.
I'll be fascinated to see. HP will abandon Itanium and the big SMP
space? They have to have some kind of entry into that market for
their consulting business. The chip to beat in that space is Power
and the chosen candidate to beat it (complete with RAS that intel is
never going to give to x86) is Itanium. I'll agree, Itanium is an
awkward deal for just about everybody (except for those of us who do
SpecFP type computing), but I don't see that any of the invested
players have an option to abandon it. Bull, Fujitsu,... everybody
needs a chip that's not a PC chip.

RAS? Reliability Availability Serviceability? Intel was adamant just
18 months ago that they would never give 64-bit to x86. Sooner or later
Intel has to stem the Itanium bleeding.
 
Oh no doubt there's talent... even GM has a tremendous pool of engineers
and scientists in the U.S. and abroad. Note the new models from Australia
- you think these designs were imported because the U.S.
engineers/designers are incompetent?... not at all. With Intel, and
probably G.M., the corporate culture needs to be redefined - remember
"disagree and commit" and "constructive confrontation"... it's like a
conspiracy among HR and PR to "own" the company.:-) Corporations do this
all the time: hire experts and then don't listen to them; when they're
right the marketroids still claim the credit.
The comparison to the auto industry and GM works in a number of
different ways, and it doesn't bode well either for the industry or
for Intel. If I thought AMD were any kind of answer, I'd be cheering
for AMD, too, but I don't think it is. Industries and companies both
mature, and Intel is a mature company in a mature industry.
Speculation? Nope, it was straight from the mfrs in Taiwan.

<quote>

As far as percentages go, the motherboard manufacturers unanimously
agree that the number of AMD motherboard shipments today are higher
than the overall 80/20 market split between AMD and Intel.

The advantage is still in Intel's corner, with the highest percentage
we were quoted being that only 30% of all motherboard shipments were
for AMD platforms.

</quote>

The overall split is 80/20, but some m'board mfr. is talking 30% AMD.
What am I supposed to conclude?
I think the
blitz of ".... Technology" bulletins at IDF last week was a sign of how
worried Intel is. This is Intel's way of attracting attention to
non-events in their repertoire: remember CSA... and the Dynamic Addressing
in their memory controller which was part of "Acceleration Technology"?
Where are they now? AMD had the same damned thing as "Dynamic Addressing"
in the Opteron long before.... without a song & dance.

Intel has been doing this kind of stuff since forever. What's
different now?
Why no overlap? Tight grid computing is certainly something that business
could/should get interested in and if there are commodity switches for the
job.....
People with applications big enough to require more than four
processors easily go back and forth between cluster and SMP?
Yes but it's no worse than the degradation for other MP systems and if you
have 4 or 8 working in close proximity you still have a gain. I'm not sure
where SUMO/NUMA is on that count but 4/8 CPUs hits a *BIG* piece of Intel's
current server market. Opteron also has more going for it than low latency
local memory.
All the advantages I can think of work best 1P, except for
hypertransport, which has limited scalability, but rather than
speculating, let's wait and see what anybody actually comes up with as
a benchmark.
RAS? Reliability Availability Serviceability? Intel was adamant just
18 months ago that they would never give 64-bit to x86. Sooner or later
Intel has to stem the Itanium bleeding.

Intel would never be the same after abandoning Itanium.

RM
 
<quote>

As far as percentages go, the motherboard manufacturers unanimously
agree that the number of AMD motherboard shipments today are higher
than the overall 80/20 market split between AMD and Intel.

The advantage is still in Intel's corner, with the highest percentage
we were quoted being that only 30% of all motherboard shipments were
for AMD platforms.

</quote>

The overall split is 80/20, but some m'board mfr. is talking 30% AMD.
What am I supposed to conclude?

Different mbrd makers supply different markets and target different price
slots. If Dell, and some other large OEMs, are still using all Intel
brand, then the brands of mbrds we know are probably going to be above the
20% on AMD. Note also the low demand for i915s - probably because the
i865/875 were a significant step forward for Intel and they got a goodly
portion of the potential market with them.

There's no doubt that Athlon64 s939s and their mbrds are in short supply
just now - NewEgg, e.g., can't seem to keep the supply of nForce4s and even
nForce3s in stock. On top of that the mbrd mfrs seem to be going after the
top end nForce4 SLI market, approaching $200/mbrd, so regular single video
card systems are shortest in supply.
Intel has been doing this kind of stuff since forever. What's
different now?

I thought last week's flurry was particularly notable - almost desperate.
People with applications big enough to require more than four
processors easily go back and forth between cluster and SMP?

Not necessarily but if they can get some extra bang from either, why rule
them out. It's way outside my scope of expertise but apparently
distributed database works.
All the advantages I can think of work best 1P, except for
hypertransport, which has limited scalability, but rather than
speculating, let's wait and see what anybody actually comes up with as
a benchmark.

Yeah well I'm dying to see a comparison between the two on 64-bit. I guess
we have to wait till WinXP-64 is officially available before we see that
but I'd have thought we'd have more Linux comparisons by now. I heard that
c't had done some and published on paper... but then silence.

I think Hypertransport does OK up to maybe 8 CPUs but I dunno if that can
be arranged w/o a backplane. As for >8 you have to do some pretty fancy
footwork even with Intel CPUs *and* you don't have a standard ASIC cell
like Hypertransport to attach with.
 