RISC vs. CISC : Thread on netscape.public.mozilla.general

  • Thread starter: Will Dormann

Will Dormann

This guy was talking about his Mac. Mentioned how it's got a RISC
processor. Here's the conversation:

********



The key to the ability to run so many different OSes. And the faster
operations of systems like UNIX and Linux.
(OSX uses unix with the Mac interface on top)

********


So Phillip is making two points:
1) RISC chips are inherently faster than "CISC" chips found in a PC
2) RISC chips allow you to have your choice of OS

Comments?


-WD
 
| The key to the ability to run so many different OSes. And the faster
| operations of systems like UNIX and Linux.
| (OSX uses unix with the Mac interface on top)


| So Phillip is making two points:
| 1) RISC chips are inherently faster than "CISC" chips found in a PC

This is not exactly backed by any data. x86 chips are some of the
fastest raw number crunchers in the world, often beating out MUCH more
expensive RISC processors. SPEC CPU2000 is probably the most widely
used cross-platform CPU benchmark (though arguably it's as much a
compiler test as a CPU + memory subsystem test). In CINT, x86 chips
take the #1 and #2 spots (P4 EE and Athlon64 FX respectively),
followed closely by the Itanium2 (VLIW) and POWER4 (RISC). In CFP,
the Itanium2 comes out on top, followed by the POWER4 and then the
two x86 chips. Overall, the CISC x86 chips, the RISC POWER4 chips
and the VLIW Itanium2 chips end up being reasonably close in most
cases. The only real difference is that the two x86 chips cost
between $700 and $1000, while the Itanium2 costs $5000 and the
POWER4 is likely more expensive still.
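With roughly comparable scores, the price gap dominates
price/performance. A quick sketch of that arithmetic (the prices are
the ones quoted above; the SPEC scores are made-up placeholder
numbers for illustration, NOT real CPU2000 results):

```python
# Price/performance sketch. Prices come from the discussion above;
# the benchmark scores are illustrative placeholders only.
chips = {
    # name: (price in USD, hypothetical CINT2000-style score)
    "P4 EE":       (1000, 1600),
    "Athlon64 FX": (700,  1550),
    "Itanium2":    (5000, 1400),
}

for name, (price, score) in chips.items():
    print(f"{name:12s} score per $1000: {score / price * 1000:7.1f}")
```

Even if the Itanium2 matched the x86 chips outright, it would need
roughly a 5x score advantage to break even on a per-dollar basis.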

Of course, when you get right down to it, there really isn't much
difference between CISC and RISC these days, certainly not like the
theoretical explanation that some people like to claim. Both CISC and
RISC chips decode the actual instructions into different types of
operations internally, both use multiple pipelines, out-of-order
execution, branch prediction, caches, register renaming, etc. etc.
Internally they work pretty much the same, and even externally they
aren't as different as they used to be. Where RISC chips used to have
very simple instruction sets (hence the name "Reduced Instruction Set
Computer"), they've now mostly grown quite a bit. CISC instruction
sets may not have shrunk any, but the rarely used instructions are
mostly avoided by compilers and implemented without wasting many
hardware resources (i.e., they run slowly).
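The decode-into-internal-operations point can be sketched with a toy
model: a CISC-style instruction with a memory operand gets cracked
into RISC-like micro-ops (a load, then an ALU op). Everything here is
invented for illustration; real x86 decoders are enormously more
complex.

```python
# Toy sketch of how a CISC-style instruction with a memory operand
# might be cracked into RISC-like micro-ops inside a modern x86 core.
# The instruction format and micro-op names are purely illustrative.

def decode(instr):
    """Crack one simplified instruction string into micro-ops."""
    op, dst, src = instr.replace(",", "").split()
    uops = []
    if src.startswith("["):                 # memory source operand
        uops.append(("LOAD", "tmp0", src))  # load from memory first
        src = "tmp0"
    uops.append((op.upper(), dst, src))     # then the ALU operation
    return uops

print(decode("add eax, [mem0]"))   # two micro-ops: load, then add
print(decode("add eax, ebx"))      # register-only: no load needed
```

The point is that once instructions hit the execution core, both
"CISC" and "RISC" chips are scheduling similar simple operations.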

When you get right down to it, the whole RISC vs. CISC debate doesn't
really matter anymore.
| 2) RISC chips allow you to have your choice of OS

This argument is total bullshit. x86 runs WAY more operating systems
than any other instruction set. Hell, it probably runs more OSes than
all other instruction sets combined! There's hardly an operating
system in the world that doesn't run on x86, even Darwin, the base
system of Apple's OS X, runs on x86 (it's only the GUI that is
Mac-only). I've heard of people running 30+ different operating
systems on a single PC. This obviously was just for shits and
giggles, there was no practical reason to have that many OSes
installed, but it CAN be done on a PC.
 
| So Phillip is making two points:
| 1) RISC chips are inherently faster than "CISC" chips found in a PC
| 2) RISC chips allow you to have your choice of OS
|
| Comments?

Plain shit,

Nowadays, CISC chips have a RISC heart with n-stage pipelines, branch
prediction and so on, and there is SIMD on x86 chips. Because of
their affordable prices and high sales volume, x86 chips evolve
faster than plain RISC chips. As a consequence, RISC chips are not
inherently faster anymore, because CISC chips use RISC paradigms
internally; the CISC chip simply hides its internal workings behind a
façade pattern.
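The SIMD point in a nutshell: one instruction operates on several
data lanes at once. A pure-Python sketch of a 4-lane packed add,
loosely modeled on what an SSE-class x86 unit does in hardware (the
lane count and function name are illustrative, not a real API):

```python
# Illustrative model of a SIMD packed add: one 'instruction' does
# the work of four scalar additions, one per lane.
LANES = 4

def packed_add(a, b):
    """Add two 4-lane 'registers' element-wise."""
    assert len(a) == len(b) == LANES
    return [x + y for x, y in zip(a, b)]

print(packed_add([1, 2, 3, 4], [10, 20, 30, 40]))
```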

Furthermore, the more affordable a technology is, the more business
development gets built upon it. As a consequence, x86 chips have the
most extended choice of OSes, applications and libraries. I prefer
to get a cheap cluster of x86 computers rather than an expensive RISC
solution. Google's success is due to this low-cost architecture.

Most companies get rid of their plain RISC stations and take PCs,
because price/performance/needs is the key criterion.

Some links:
http://www.arstechnica.com/cpu/4q99/risc-cisc/rvc-1.html
http://www.arstechnica.com/cpu/1q00/simd/simd-1.html
http://www.arstechnica.com/cpu/1q00/g4vsk7/g4vsk7-1.html
 
| So Phillip is making two points:
| 1) RISC chips are inherently faster than "CISC" chips found in a PC
| 2) RISC chips allow you to have your choice of OS
|
| Comments?

| Plain shit,

Sums it up pretty well.

| Nowadays, CISC chips have a RISC heart with n-pipeline, branch
| prediction ... there is SIMD on x86 chips. Because of their
| affordable prices and high sales volume, x86 chips evolve faster
| than plain RISC chips. As a consequence, RISC chips are not
| inherently faster anymore, because CISC chips use RISC paradigms,
| but CISC chips hide their internal work with a façade pattern.

And you correctly point out that there is less to this whole subject
than meets the eye, once you scratch away a pretty thin layer of ISA
paint.
| Furthermore, the more affordable is a technology, the more business
| development there is upon it. As a consequence, x86 chips have the
| more extended choice of OS, applications, libraries. I prefer to
| get a cheap cluster of x86 computers rather than an expensive risc
| solution. The google success is due to this low cost arch.

Probably a _little_ more to google's success than commodity servers,
maybe?

Itanium, of course, is neither CISC nor RISC, but it could be made
extremely affordable if Intel chose to make it so. Intel would dearly
love to place Itanium servers with google and would probably
practically give them away to get the placement (IBM on the other hand
is so busy worrying about Eclipse and WebSphere that a secretary
probably has to come in periodically with a memo to the head honchos
to remind them they're still in the hardware business). Whatever it
is that's keeping Itanium out of google, it's not money.

To all intents and purposes, the price/performance ratio of Itanium
vs. x86 is anything Intel wants it to be. Intel wants to keep that
number as high as it can, but there are circumstances (like google)
where Intel can use the flexibility it has to make the capital costs
of an Itanium server either not an issue or work in Itanium's favor.

Intel can even absorb the initial capital costs of rewriting software,
and I suspect they'd be willing to do even _that_ to be able to say
that google is using Itanium.

What they can't do is make Itanium easy to program, create a
widespread infrastructure of Itanium programmers that can be hired for
a reasonable price, or make Itanium easily interchangeable with x86.
Once a company moves to Itanium, it's there for keeps, just like a
company that moves to IBM hardware.

Intel with Itanium is just like the guy on the corner with the little
plastic packages of white powder. Special deal the first time.
Probably a taste for free. google just not dumb enough to accept the
offer.
| Most of the companies get rid of their plain risc stations and take
| pc, because price/performance/needs is the key choice.

TCO (total cost of ownership) and ROI (return on investment) are the
numbers that the people with the green eye-shades want to know. When
IBM wants to sell one of its pricey boxes, it knows how to talk this
kind of language, and it wins the argument often enough to keep
placing boxes.

You only have to have been present once when hundreds of workers are
brought to a standstill or have seen an entire day's worth of
transactions thrown into chaos by operator error to understand that
there is more to this argument than MIPS, FLOPS, or Transactions per
second per dollar.

Power can be fully virtualized, and IBM can offer to sweep away a
whole roomful of boxes, cables, and other geek gear with one very
presentable looking (albeit very expensive) box that is backed up with
decades of learning how not to make mistakes. IBM can make a box full
of identical processors look all different kinds of ways at once, and
it can do it without having to rely on someone with an impressive
certification (say an RHCE) that didn't even exist a decade ago,
although I understand that people with credible mainframe credentials
are currently in short supply.

Too much to credit the cited poster with? Probably. But as with many
urban myths, there might be just an atom or two of truth somewhere
that suggests the line of thinking.

RM
 
| Probably a _little_ more to google success than commodity servers,
| maybe?

Certainly, but it is perhaps important to note that Google has what is
probably the largest cluster of computers in the world (rumor has it
that they have over 50,000 servers), and they seem to be making a
pretty good business of it, all using x86 chips.
| Itanium, of course, is neither CISC nor RISC, but it could be made
| extremely affordable if Intel chose to make it so. Intel would
| dearly love to place Itanium servers with google and would probably
| practically give them away to get the placement (IBM on the other
| hand is so busy worrying about Eclipse and WebSphere that a
| secretary probably has to come in periodically with a memo to the
| head honchos to remind them they're still in the hardware business).
| Whatever it is that's keeping Itanium out of google, it's not money.

Google has said quite specifically what is keeping the Itanium out:
power consumption. They are the first and most adamant to claim that
performance/watt is more important to them than up front cost. This
is also keeping IBM's POWER4 systems out, though the new PPC 970FX
blades could easily find a nice home at Google, I would guess.
Excellent performance with a TDP of 25W and two processors in a single
blade.

Of course, if Intel followed my advice and made the Pentium-M "Dothan"
chips dual-processor capable and bumped up the bus speed then this
would be a real good solution as well, but I don't think Intel bothers
with my advice very often :>
| To all intents and purposes, the price/performance ratio of Itanium
| vs. x86 is anything Intel wants it to be.

What I find interesting is that it seems that it's the motherboards
that are REALLY pushing the Itanium costs through the roof. Even the
top-end Itanium2 chips only cost about $5,000, which is a lot but not
TOO much more than the ~$3,000 of a XeonMP or Opteron 8xx chip.
However if you compare the cost of a similarly equipped 4P Opteron or
XeonMP server to that of a 4P Itanium2 system you end up with a MUCH
higher base price for the latter.

Since presumably Intel doesn't make too many of the Itanium
motherboards it seems like someone other than Intel is getting a fair
chunk of the Itanium pie.
| Intel wants to keep that number as high as it can, but there are
| circumstances (like google) where Intel can use the flexibility it
| has to make the capital costs of an Itanium server either not an
| issue or work in Itanium's favor.

Unfortunately there isn't much that they can do about power
consumption here. Even the "low power" Itanium2 processors have a TDP
of 66W, and they manage that with noticeably lower performance than
the high-end I2.

Besides which, I'm not even convinced that the I2 would be the best
performing solution for what Google is after. It hasn't done very
well to prove itself as a web server platform, and while its database
scores are sometimes impressive, they usually aren't all that much
higher than those of comparable high-end PC servers.
 
| Google has said quite specifically what is keeping the Itanium out:
| power consumption. They are the first and most adamant to claim
| that performance/watt is more important to them than up front cost.

Didn't know that. For a chip that was supposed to leave all the
complexity to the compiler, Itanium sure has turned out to be a power
hog.

The argument you can make for a behemoth like Itanium doesn't work for
google, whose application is as embarrassingly parallel as you can
get. Itanium isn't the right processor for google, but Intel would
badly like to be able to say that google uses Itanium.
| This is also keeping IBM's Power4 systems out, though the new PPC
| 970FX blades could easily find a nice home at Google I would guess.
| Excellent performance with a TDP of 25W and two processors in a
| single blade.
IBM could do even better on low power if they wanted to, and maybe
they've already made such a proposal to google, for all I know. Such
a low-power solution wouldn't be likely to be built around the stock
PPC 970FX, since they've already done the homework for a low-power
chip built around stock PowerPC 440 cores.
| Of course, if Intel followed my advice and made the Pentium-M
| "Dothan" chips dual-processor capable and bumped up the bus speed
| then this would be a real good solution as well, but I don't think
| Intel bothers with my advice very often :>
You know perfectly well why Intel isn't anxious to do that. They've
got some really nice low-power low-voltage Pentium-M blades that HP
sells and gets a very nice price for, but every license that Intel has
let to anyone else to get near Pentium-M has always been for "mobile"
and I'm quite sure that means "non-server" applications.
| What I find interesting is that it seems that it's the motherboards
| that are REALLY pushing the Itanium costs through the roof. Even
| the top-end Itanium2 chips only cost about $5,000, which is a lot
| but not TOO much more than the ~$3,000 of a XeonMP or Opteron 8xx
| chip. However if you compare the cost of a similarly equipped 4P
| Opteron or XeonMP server to that of a 4P Itanium2 system you end up
| with a MUCH higher base price for the latter.
|
| Since presumably Intel doesn't make too many of the Itanium
| motherboards it seems like someone other than Intel is getting a
| fair chunk of the Itanium pie.
I'm sure Intel will do whatever it has to do, but it's already put its
OEM partners through the wringer on Itanium.

If I were on comp.arch, I'd be more careful about saying this, because
it would get me into the middle of a flame war, but the attractiveness
of Itanium is for whomping big non-trivially parallel architectures
where you'd like to have maximum connectivity.

Up until very recently, low latency interconnects have been like $1000
each minimum for the 12x (10G) Infiniband adapter and switch port, or
$2000 total per compute node. If the connectivity per compute node is
expensive, you'd like to have each compute node doing as much work as
possible--thus the presumed market for Itanium.

Those economics are changing rapidly. Mellanox

http://www.mellanox.com/news/press/pr_111003b.html

has announced a 12x infiniband switch at under $300/port. I can't see
any reason why an HCA couldn't be priced similarly, bringing the
connectivity costs down to about $600/port per compute node using
commodity hardware. A 2-P Itanium board with 12x Infiniband via PCI
X-press would be nice as the building block for a high-connectivity
cluster, but the fact that the connectivity hardware has come down so
far in price almost inevitably means that the Itanium hardware has to
follow.

Those economics blow lots of things out of the water: Itania with
price and power consumption out of whack, SGI boxes, and even Cray Red
Storm type boxes. Too easy to build your own. If 10G per compute
node doesn't sound like enough, 30G is on the horizon, meaning a 4
processor board (Itanium, Xeon, or Opteron) starts to sound like a
plausible building block, assuming the price per bit per second is
roughly the same. And all with a circuit-switched network. ;-).

RM
 
| Didn't know that. For a chip that was supposed to leave all the
| complexity to the compiler, Itanium sure has turned out to be a
| power hog.

Yup, not to mention a transistor hog, 410M transistors and a 374mm^2
die size on a 130nm fab process. Certainly a fair chunk of those
transistors are cache (6MB of L3 is over 300M transistors in itself),
but it does seem to have close to 100M logic transistors. For a
design that was supposed to be so simple, it's anything but.
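The cache figure is easy to sanity-check, assuming a standard
6-transistor (6T) SRAM cell and ignoring tag, ECC and control
overhead:

```python
# Sanity check of the "6MB of L3 is over 300M transistors" claim,
# assuming a classic 6T SRAM cell; tag/ECC overhead ignored.
cache_bytes = 6 * 1024 * 1024          # 6 MB of L3
transistors_per_bit = 6                # 6T SRAM cell
cache_transistors = cache_bytes * 8 * transistors_per_bit

print(f"~{cache_transistors / 1e6:.0f}M transistors for the L3 data array")
```

That's about 302M of the quoted 410M, leaving roughly 100M for logic,
which matches the estimate above.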
| The argument you can make for a behemoth like Itanium doesn't work
| for google, whose application is as embarrassingly parallel as you
| can get. Itanium isn't the right processor for google, but Intel
| would badly like to be able to say that google uses Itanium.

I'm sure they would! Not to mention the fact that if the 50,000
server number that is tossed around is accurate, that would be
roughly equal to half of ALL the Itanium servers sold since its
introduction! (ok, not quite, they shipped their 100,000th Itanium
server a couple months ago).
| IBM could do even better on low power if they wanted to, and maybe
| they've already made such a proposal to google, for all I know.
| Such a low-power solution wouldn't be likely to be built around
| stock PPC 970FX, since they've already done the homework for a low
| power chip built around stock Power 440 cores.

The PowerPC 440 and 450 cores are targeting embedded applications for
the most part, so it's not at all surprising that they don't eat up
too much power. The PPC 970FX turns in some pretty impressive power
consumption numbers for the performance it offers, but I wouldn't
hold my breath for any big reduction in power. Maybe a slightly
lower-clocked chip would consume less power, but clock for clock they
probably aren't going to improve too much.
| You know perfectly well why Intel isn't anxious to do that. They've
| got some really nice low-power low-voltage Pentium-M blades that HP
| sells and gets a very nice price for, but every license that Intel
| has let to anyone else to get near Pentium-M has always been for
| "mobile" and I'm quite sure that means "non-server" applications.

As usual, it all comes back to the artificial segmentation and
marketing issues.
| If I were on comp.arch, I'd be more careful about saying this,
| because it would get me into the middle of a flame war,

Ohh, I'm sure we can start a flame war here as well if you like? :>
| but the attractiveness of Itanium is for whomping big non-trivially
| parallel architectures where you'd like to have maximum
| connectivity.
|
| Up until very recently, low latency interconnects have been like
| $1000 each minimum for the 12x (10G) Infiniband adapter and switch
| port, or $2000 total per compute node. If the connectivity per
| compute node is expensive, you'd like to have each compute node
| doing as much work as possible--thus the presumed market for
| Itanium.

Even at $2000/compute node you aren't really doing that well with
Itanium. Most HPC nodes are fairly trimmed down other than CPUs,
memory and interconnect. A trimmed-down (before interconnect) I2 will
cost you about $15,000 for a 2P node, while a similar Opteron node
will cost you $3000. Add in the interconnect and you're talking
$17,000 vs. $5000 and you can still get 3 Opteron servers for the
price of one Itanium2 system. Now obviously there are lots of other
considerations here, but the point is that you need to be talking
about VERY expensive extras before the Itanium becomes cost effective
for most apps.
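Spelled out, the arithmetic looks like this (all figures are the
rough ones quoted in this thread, not authoritative prices):

```python
# Back-of-envelope cluster node costs, using the rough per-node
# figures from the discussion above (not authoritative prices).
itanium2_node = 15_000   # trimmed-down 2P Itanium2 node
opteron_node = 3_000     # comparable 2P Opteron node
interconnect = 2_000     # 12x InfiniBand HCA + switch port, per node

it_total = itanium2_node + interconnect
op_total = opteron_node + interconnect
print(f"${it_total} vs ${op_total}: about {it_total // op_total} "
      f"Opteron nodes per Itanium2 node")
```

Note that the expensive interconnect actually narrows the gap a
little; as interconnect prices fall, the ratio moves back toward the
raw 5:1 difference in node prices.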
| Those economics are changing rapidly. Mellanox
|
| http://www.mellanox.com/news/press/pr_111003b.html
|
| has announced a 12x infiniband switch at under $300/port. I can't
| see any reason why an HCA couldn't be priced similarly, bringing
| the connectivity costs down to about $600/port per compute node
| using commodity hardware. A 2-P Itanium board with 12x Infiniband
| via PCI X-press would be nice as the building block for a
| high-connectivity cluster, but the fact that the connectivity
| hardware has come down so far in price almost inevitably means that
| the Itanium hardware has to follow.

On the upside for Intel, Itanium hardware HAS come down in price a
lot. It used to be quite difficult to find an Itanium2 system for
under $20,000 while now you can get a dual-processor I2 system for not
much over $10,000. The downside is that this is STILL a LOT higher
than x86 systems, where 2P Opteron and Xeon systems sell for
$2500-$5000 for the most part.
| Those economics blow lots of things out of the water: Itania with
| price and power consumption out of whack, SGI boxes, and even Cray
| Red Storm type boxes. Too easy to build your own. If 10G per
| compute node doesn't sound like enough, 30G is on the horizon,
| meaning a 4 processor board (Itanium, Xeon, or Opteron) starts to
| sound like a plausible building block, assuming the price per bit
| per second is roughly the same. And all with a circuit-switched
| network. ;-).

I guess it's no surprise that superclusters seem to be the way that
everything is going these days rather than MPP type supercomputers.
 
Tony Hill said:
Ohh, I'm sure we can start a flame war here as well if you like? :>

Zzzz. Snarf. Unh... what? A flame war? Is somebody talking nasty
about my hero Michael Dell again? Hey, did youse guys notice Dell's
skyrocketing sales and profits in 2003 Q4? While cutting costs?

I'm surprised, Tony. I thought you woulda lernt better by now. ;-)
 
| Zzzz. Snarf. Unh... what? A flame war? Is somebody talking nasty
| about my hero Michael Dell again? Hey, did youse guys notice Dell's
| skyrocketing sales and profits in 2003 Q4? While cutting costs?

As PT Barnum may or may not have said about the birth rate of
suckers... In any case, I prefer Barnum's alleged last words as better
than anything Orson Welles could have dreamed up for a dying mogul:

"How were the receipts today at Madison Square Garden?"

If Michael Dell is in your pantheon of heroes, I hope you have PT
Barnum right there along with him as more than a worthy prototype.

Dell's involvement with Itanium so far is as tepid as you can get. To
every appearance, as little as they can get away with without annoying
its most important vendor.

When Dell really _wants_ to sell Itania, it will bring some meaningful
price discipline to that market. I will applaud your hero's
contribution to affordable computing and then go find a reputable
vendor from which I can purchase hardware of known provenance.

I was actually kind of hoping you'd lunge at the remark about how
affordable a circuit-switched network will be. I put that out there
like shark chum and got not even a nibble.

As for Infiniband, Dell talked up Infiniband when Intel talked up
Infiniband. Intel has lost its enthusiasm for Infiniband, and--I am
shocked, shocked to have to say this--so has Dell! Just another
example of how effective great market strategists are at reading one
anothers' minds, I guess.

Intel's coyness about infiniband doesn't seem to bother anybody else,
but it bothers me. After Microsoft and Intel did a job on infiniband,
it's making a very nice comeback. Intel has ethernet on its mind, and
recently they've been talking up photons:

http://www.intel.com/pressroom/archive/releases/20040212tech.htm

and

http://www.intel.com/labs/sp/

A quick scan through their article in Nature

ftp://download.intel.com/labs/sp/download/natureintc.pdf

indicates that they have gracelessly failed to mention that one of
John Bardeen's former students has recently demonstrated a transistor
that can modulate and emit photons in useful quantities, although at a
mere 1 MHz, whereas the Intel discovery has to get its photons from
elsewhere, but has been demonstrated at a modulation frequency of 1
GHz.

Anybody for a copper vs. fiber flame war? I'm afraid that most
everybody would agree with me that, as far as Intel goes, if they
can't have the jackpot, they'll gladly do whatever they can to keep
everybody else off balance until they find a world-dominating strategy
they can call their own. Fighting about whether a two-bit computer
salesman from Texas will or will not go along with whatever Intel
wants to do just isn't worth the effort.

RM
 
| Zzzz. Snarf. Unh... what? A flame war? Is somebody talking nasty
| about my hero Michael Dell again? Hey, did youse guys notice Dell's
| skyrocketing sales and profits in 2003 Q4? While cutting costs?
|
| I'm surprised, Tony. I thought you woulda lernt better by now. ;-)

No no no! That flame war is supposed to go on in a different thread!
This thread is for the pro-Itanium/anti-Itanium flame war! Also,
don't get it confused with the Rambus flame war that is also raging :>
 
Robert Myers said:
You know perfectly well why Intel isn't anxioux to do that.
They've got some really nice low-power low-voltage Pentium-M
blades that HP sells and gets a very nice price for, but every
license that Intel has let to anyone else to get near Pentium-M
has always been for "mobile" and I'm quite sure that means
"non-server" applications.

Have you heard the term "mobile server solution"? Well... :)


Regards.
 
| Have you heard the term "mobile server solution"? Well... :)

Nobody that I can think of right off hand plays games like that with
Intel. Historically-speaking, AMD's recent successes in the server
business are an aberration. Up until very recently, if you couldn't
get access to Intel chips and you didn't have your own proprietary
chip, you were just plain not in the server business.

The constant saber-rattling from SCO is an annoyance for everybody
using non-Microsoft products, but everybody is still doing business.
A cease and desist order from Intel would put you out of business in a
heartbeat.

On the other hand... ;-) ...

Shuttle has a Pentium-M license, and I can think of lots of things you
could do with a pile of Shuttle boxes or mainboards if you could route
them through some country where the US courts would have a hard time
reaching, and where Shuttle could claim that it had no idea what that
high-volume off-shore purchaser had in mind.

Even better, you could find out who's really making the Shuttle
mainboards and get them at a price that would allow you to make a
profit. That would probably require a visit to Taiwan and the help of
a middleman.

Ever thought of going into the server business?

RM
 
| Even better, you could find out who's really making the Shuttle
| mainboards and get them at a price that would allow you to make a
| profit. That would probably require a visit to Taiwan and the help
| of a middleman.

A visit to China is more likely to yield results methinks :PPpP

--
L.Angel: I'm looking for web design work.
If you need basic to med complexity webpages at affordable rates, email me :)
Standard HTML, SHTML, MySQL + PHP or ASP, Javascript.
If you really want, FrontPage & DreamWeaver too.
But keep in mind you pay extra bandwidth for their bloated code
 