so Jobs gets screwed by IBM over game consoles, thus Apple-Intel?

  • Thread starter: Guest

Guest

It seems that Steve Jobs got screwed over by IBM: IBM didn't care that
much about making the CPUs Jobs wanted for Macs, because it decided
that game consoles were more important. 'IBM Inside' every
next-generation game console:

  • Sony PlayStation 3 (IBM 'Cell' CPU)
  • Microsoft Xbox 360 (IBM 'Waternoose' CPU)
  • Nintendo Revolution (IBM 'Broadway' CPU)

That will ensure tens of millions of IBM CPUs sold every year, compared
to maybe 5 million Mac CPUs a year; even if it were more than 5
million, it would still almost certainly be less than 10 million. So it
seems that is one of the major reasons why Apple has hooked up with
Intel for CPUs.



related article
http://arstechnica.com/columns/mac/mac-20050607.ars
 

This is in tune with what most people have been saying. Your point is?
 
Bjorn Olsson d.ä said:
This is in tune with what most people have been saying. Your point is?

I doubt IBM gets as much per CPU in game consoles as they do for
regular PCs. In PCs you can sell them for up to $1,000 apiece in
thousand-unit lots; it's probably $100 tops for game consoles, and
likely much less than that. Yeah, it's a great deal for them, but the
likely revenue stream isn't as great as the previous poster made it
seem. If Intel sold 10 million Mac CPUs, it may very well offset any
loss from the Xbox, where they practically gave away the P-III.
 

Oh, my! :-0
 
keith said:
Oh, my! :-0

Ain't it fun to watch the tea-leaf and chicken-entrail diviners make up
stuff? I love it.

I bet you believe Hillary ain't running in 2008 too. (Not you, Keith.)

del cecchi
 
Judd said:
I doubt IBM gets as much per CPU in game consoles as they do for
regular PCs. In PCs you can sell them for up to $1,000 apiece in
thousand-unit lots; it's probably $100 tops for game consoles, and
likely much less than that. Yeah, it's a great deal for them, but the
likely revenue stream isn't as great as the previous poster made it
seem.

Since you obviously have $$ figures to back up your argument, please
feel free to share them...
If Intel sold 10 million Mac CPUs, it may very well offset any loss
from the Xbox, where they practically gave away the P-III.

FWIW, neither IBM nor Freescale really gives a rat's ass about losing
Apple's business. The level of R&D $$$ that both would have to put in
to deliver what Apple was looking for just wasn't worth it for them.
 
Since you obviously have $$ figures to back up your argument, please
feel free to share them...

It's well known that Intel gave a low bid for the Xbox to kneecap AMD,
and walked away from the Xbox 2 since IBM lowballed them in turn.

IBM has done some baaaad things to the PPC core to get it fabbable at
3.2GHz, and Sony is willing to accept a 12.5% defect rate (1 SPE out of
8 per die being nonfunctional).
FWIW, neither IBM nor Freescale really gives a rat's ass about losing
Apple's business. The level of R&D $$$ that both would have to put in
to deliver what Apple was looking for just wasn't worth it for them.

What's more, TMK either company would only undertake this R&D if Apple
paid for it. Freescale makes router chips and embedded stuff now,
nothing suitable for portables. IBM doesn't feel confident about taking
on Intel any more, I guess, while Intel is already making exactly what
Apple needs, so Apple gets a free ride, or better, with them.
 
It's well known that Intel gave a low bid for the Xbox to kneecap AMD,
and walked away from the Xbox 2 since IBM lowballed them in turn.

I assure you, that's not how it happened. AMD was able to match the
price, but Microsoft gave Intel a much larger margin, because MSFT
knew that Intel had the manufacturing capacity to guarantee delivery of
CPUs for the Xbox, while AMD (unless they dedicated their fab capacity
almost totally to the Xbox) did not. In fact, MSFT never had any real
intention of going with AMD -- they just leaked the rumours that they
were, to scare Intel into giving them a lower price. Same thing with
GigaPixel versus nVidia. Notice that *both* Intel and nVidia have
walked away from the Xbox 360? MSFT didn't exactly make a lot of
friends on that transaction.
IBM has done some baaaad things to the PPC core to get it fabbable at
3.2GHz,

Yeah, more pipelining, adding more dispatch slots, rename registers and
an industry leading out-of-order window is really baaaaad ... NOT! The
G5's biggest problem is not performance (it was only about 10-20%
slower at the high end versus x86s at introduction, which is actually
quite an accomplishment versus the previous generation Motorola crap)
but rather its lack of clock scalability and ridiculous power
consumption.

See, the clock-scaling business is clearly something AMD learned with
the K6, and Intel learned with the original Pentium. Just look at the
history of these two companies and you can see that both know how to
scale a processor's clock throughout its lifetime to match up with
Moore's Law. The K6 went from 233MHz to 550MHz, the Pentium from 60MHz
to 200MHz, the Athlon from 600MHz to 2.0GHz, the Pentium II/III from
300MHz to 1.26GHz, the Pentium 4 from 1.5GHz to 3.8GHz, and the
Athlon-64 got started at around 2.0GHz. Look at the anemic PPC 970
processor: introduced at 2.0GHz, now a measly 2.5GHz, and only by using
*liquid cooling*.

And let's talk about power consumption. Because of the threat from
Transmeta (with their amazing low-power cores), both AMD and Intel have
reacted by making lower- and lower-power cores. Intel will eventually
switch to them in their mainstream processors (via the Pentium-M path),
and AMD has put their low-power technology (gated clocks and other
transistor techniques) into their entire product line. In the
meantime, IBM, not feeling any competitive pressure from anyone, just
decided to crank the power consumption through the roof.
[...] and Sony is willing to accept a 12.5% defect rate (1 SPE out of
8 per die being nonfunctional).

Interesting. I am not aware of any other CPU manufacturer willing to
share their yield rates publicly, so I don't really know what to
compare that to.

Possibly true. Not exactly good news for Apple -- if their suppliers
didn't care about them, how could they expect high quality parts?
What's more, TMK either company would only undertake this R&D if Apple
paid for it. Freescale makes router chips and embedded stuff now,
nothing suitable for portables. IBM doesn't feel confident about taking
on Intel any more, I guess, while Intel is already making exactly what
Apple needs, so Apple gets a free ride, or better, with them.

That's right: IBM cannot hope to accomplish what even the lowly,
humble AMD has done. IBM simply doesn't have the competence, drive, or
will to compete with either Intel or AMD. Confidence would not have
helped them. And obviously Freescale is more than happy to stick to the
confines of the embedded market.
 
I assure you, that's not how it happened. AMD was able to match the
price, but Microsoft gave Intel a much larger margin, because MSFT
knew that Intel had the manufacturing capacity to guarantee delivery of
CPUs for the Xbox, while AMD (unless they dedicated their fab capacity
almost totally to the Xbox) did not. In fact, MSFT never had any real
intention of going with AMD -- they just leaked the rumours that they
were, to scare Intel into giving them a lower price. Same thing with
GigaPixel versus nVidia. Notice that *both* Intel and nVidia have
walked away from the Xbox 360? MSFT didn't exactly make a lot of
friends on that transaction.

Right. I don't think Intel found that the Xbox was worth its while,
though seeing it stay x86 probably was.
Yeah, more pipelining, adding more dispatch slots, rename registers and
an industry leading out-of-order window is really baaaaad ... NOT!

Eh? TMK the Cell has none of those things; the PPC core has been cut
down to the bare minimum, a 604e level of basicality.
The
G5's biggest problem is not performance (it was only about 10-20%
slower at the high end versus x86s at introduction, which is actually
quite an accomplishment versus the previous generation Motorola crap)

The G5 has nothing to do with the Motorola crap, other than Apple
getting Motorola to license VMX to IBM behind the scenes (this is lore,
but AFAIK it's true).
but rather its lack of clock scalability and ridiculous power
consumption.

? The G5 seems identical to Opteron/Hammer to me. Hell, AMD and IBM
closely collaborate now on process and fabbing.
See, the clock-scaling business is clearly something AMD learned with
the K6, and Intel learned with the original Pentium. Just look at the
history of these two companies and you can see that both know how to
scale a processor's clock throughout its lifetime to match up with
Moore's Law. The K6 went from 233MHz to 550MHz, the Pentium from 60MHz
to 200MHz, the Athlon from 600MHz to 2.0GHz, the Pentium II/III from
300MHz to 1.26GHz, the Pentium 4 from 1.5GHz to 3.8GHz, and the
Athlon-64 got started at around 2.0GHz. Look at the anemic PPC 970
processor: introduced at 2.0GHz, now a measly 2.5GHz, and only by using
*liquid cooling*.

2.7GHz with liquid cooling. And the G5 came out at 1.6-2.0GHz and the
Opteron at 1.4-1.8GHz.

A pair of 2.6GHz Opteron 252s will set you back over $1,700 at Newegg,
so at least for this generation Apple is better off on the G5 than
AMD.
And let's talk about power consumption. Because of the threat from
Transmeta (with their amazing low-power cores), both AMD and Intel have
reacted by making lower- and lower-power cores. Intel will eventually
switch to them in their mainstream processors (via the Pentium-M path),
and AMD has put their low-power technology (gated clocks and other
transistor techniques) into their entire product line. In the
meantime, IBM, not feeling any competitive pressure from anyone, just
decided to crank the power consumption through the roof.

? Part of the simplification of the Cell was to keep heat down. Plus
IBM is planning to ship a triple-core 3.2GHz part for MSFT later this
year; Intel's dual-core 3.2GHz offering costs more than an entire
Xbox 2 will.
[...] and Sony is willing to accept a 12.5% defect rate (1 SPE out of
8 per die being nonfunctional).

Interesting. I am not aware of any other CPU manufacturer willing to
share their yield rates publicly, so I don't really know what to
compare that to.

This is also common with GPUs: the manufacturer sells the dodgy parts
(those with point failures) at mid-range price points with the bad
modules disabled (8 pipes instead of 16, or what have you). Sony is
doing the same thing to save some money.
Possibly true. Not exactly good news for Apple -- if their suppliers
didn't care about them, how could they expect high quality parts?

Indeed. Apple has had this problem for 5 or more years. More lore is
that Moto was none too happy to lose its Mac OEM license, but I'm not
sure how true that is, since I don't think they were really making any
money on that.
That's right: IBM cannot hope to accomplish what even the lowly,
humble AMD has done. IBM simply doesn't have the competence, drive, or
will to compete with either Intel or AMD. Confidence would not have
helped them. And obviously Freescale is more than happy to stick to the
confines of the embedded market.

I think making IBM the bad guy out of this is a mistake, and I don't
know why you bad-mouth them so much, since AMD is also relying on IBM's
technology for their products (I don't think it's any accident that
Opteron and G5 clock speeds have been so closely matched these past 2
years).

My somewhat uninformed perspective is that IBM lacked the resources to
match Intel's roadmap of P-M parts, which are coming out next year.
Remember that Apple relies on IBM to design the memory controllers too,
so Apple is really on the hook for 2x the R&D with the G5.

That the Mac mini had to come out with 2-year-old technology instead
of a nice & tight low-wattage Sonoma architecture was plain sad.

I just think after 4 revs of OS X, it was time for the platform to
break the PPC ball and chain that has dogged it for the past 5 years.
Moving to x86 is not a big deal technically, and it's going to be
interesting to see Apple partnered with a very strong hardware company
for a change.
 
The G5 has nothing to do with the Motorola crap, other than Apple
getting Motorola to license VMX to IBM behind the scenes (this is lore,
but AFAIK it's true).

IIRC, VMX was not a Motorola design, but actually an IBM design.
Remember that Mot and IBM had originally formed a group that worked
together to specify the PPC; this meant that specs like VMX were
shared. Motorola just happened to be the first to implement the
specification.
? The G5 seems identical to Opteron/Hammer to me. Hell, AMD and IBM
closely collaborate now on process and fabbing.

Opteron does not use liquid cooling. AMD basically bought fab capacity
from IBM, since they can't fund the purchase of a second fab by
themselves.
2.7GHz with liquid cooling. And the G5 came out at 1.6-2.0GHz and the
Opteron at 1.4-1.8GHz.

The low end clock rates are just for product placement differentiation.
I am just counting from the top of the clock rate range, since that
represents what the manufacturer can really do. Either way, my point
is that the EOL clock rate ends up roughly 2x-3x the original clock
rate from the two major x86 vendors. Intel tends to have longer clock
lives, because they generally redo the same architecture twice in its
lifetime (e.g., the P5 and P55C cores, the P-II and the Deschutes core,
Willamette and Prescott, etc.). Both IBM and Mot have typically had
great trouble ramping the clock rate of their CPUs by comparison.
A pair of 2.6GHz Opteron 252s will set you back over $1,700 at Newegg,
so at least for this generation Apple is better off on the G5 than
AMD.

Opterons are server CPUs, and thus are more comparable to Power
processors (both have lower clock rates, but are better for SMP), not
PowerPC processors. You should compare the Athlon-64 line to PowerPCs.
? Part of the simplification of the Cell was to keep heat down.

Heat/MIPS, possibly. The Cell has more active parts -- and assuming
that each Cell processing unit has a floating-point multiplier, there
is no way in hell that the max power draw is *low* in a 3.0GHz
processor.
[...] Plus IBM is planning to ship a triple-core 3.2GHz part for MSFT
later this year; Intel's dual-core 3.2GHz offering costs more than an
entire Xbox 2 will.

Cost is just a question of marketing and positioning. (And only very
rarely correlated with yield, like the Pentium 4 EE.) I know this is
a common thing with Mac advocates -- i.e., trying to justify false
comparisons by claiming that you need to normalize them by either
price or clock rate.
[...] and Sony is willing to accept a 12.5% defect rate (1 SPE out of
8 per die being nonfunctional).

Interesting. I am not aware of any other CPU manufacturer willing to
share their yield rates publicly, so I don't really know what to
compare that to.

This is also common with GPUs: the manufacturer sells the dodgy parts
(those with point failures) at mid-range price points with the bad
modules disabled (8 pipes instead of 16, or what have you). Sony is
doing the same thing to save some money.

GPUs are different. There is a complete driver software layer that
insulates bugs. I.e., people don't write "binary" to graphics cards --
you write to DirectX or OpenGL (or Quartz or GDI).
Indeed. Apple has had this problem for 5 or more years. More lore is
that Moto was none too happy to lose its Mac OEM license, but I'm not
sure how true that is, since I don't think they were really making any
money on that.

Mot must have known they were going to lose Apple as a customer. They
simply weren't making an effort to improve their processors in any
reasonable way. Moore's law works on *all* processors where the vendor
is making an effort. And serious CPU vendors *know* this. Mot allowed
their processors to languish for *years* -- they must have known that
they would eventually lose the Apple contract the next time IBM felt
like making a new CPU.
I think making IBM the bad guy out of this is a mistake, and I don't
know why you bad-mouth them so much, since AMD is also relying on IBM's
technology for their products (I don't think it's any accident that
Opteron and G5 clock speeds have been so closely matched these past 2
years).

AMD is using IBM's *fabrication process* (which *is* state of the art),
not their CPU design people.

The clock rate similarity is just a question of ALU design limitations
wrt the fabrication process. It's a common CPU design idiom -- work out
the fastest you can make your generic ALUs in a given process, then
design the rest of your pipeline with stages that run at the same
speed. The reason Intel's clock rate is so different is that their
ALUs are kind of these staggered half-width things (which they call
Netburst) which are a lot faster than regular, ordinary ALUs.
My somewhat uninformed perspective is that IBM lacked the resources to
match Intel's roadmap of P-M parts, which are coming out next year.
Remember that Apple relies on IBM to design the memory controllers too,
so Apple is really on the hook for 2x the R&D with the G5.

I think IBM lacked the interest in doing it. Like me, they assumed
that Apple simply didn't have the cojones to move to x86. And
Mot/Freescale are a joke, so I think IBM just assumed that they had the
contract no matter what they did.
That the Mac mini had to come out with 2-year-old technology instead
of a nice & tight low-wattage Sonoma architecture was plain sad.

I just think after 4 revs of OS X, it was time for the platform to
break the PPC ball and chain that has dogged it for the past 5 years.
Moving to x86 is not a big deal technically, and it's going to be
interesting to see Apple partnered with a very strong hardware company
for a change.

Well actually it *is* a big deal technically. Steve Jobs gave a great
demo of a quick recompile, but Photoshop is clearly a very
assembly-language- and endian-sensitive application. And I suspect a
lot of the media apps on the Mac are in a similar situation. The joke
applications will just be a recompile, but not the serious stuff.
 
Well actually it *is* a big deal technically. Steve Jobs gave a great
demo of a quick recompile, but Photoshop is clearly a very
assembly-language- and endian-sensitive application. And I suspect a
lot of the media apps on the Mac are in a similar situation. The joke
applications will just be a recompile, but not the serious stuff.

Yep. Joke applications like Mathematica, right?
 
IIRC, VMX was not a Motorola design, but actually an IBM design.

VMX's origin is rather unclear.
IBM's PPC designs were the G3, with no VMX. IBM made not a single G4
for Apple, AFAIK.
Apparently IBM was not a big fan of the additional die space that the
VMX stuff took up, preferring to go for lean & mean instead of making
the CPU fatter.
Remember that Mot and IBM had originally formed a group that worked
together to specify the PPC; this meant that specs like VMX were
shared. Motorola just happened to be the first to implement the
specification.

Doing some research I see this is more or less accurate.
Opteron does not use liquid cooling.
Right.

AMD basically bought fab capacity
from IBM, since they can't fund the purchase of a second fab by
themselves.
http://news.zdnet.com/2100-9584_22-979718.html


The low end clock rates are just for product placement differentiation.
Agreed.

I am just counting from the top of the clock rate range, since that
represents what the manufacturer can really do. Either way, my point
is that the EOL clock rate ends up roughly 2x-3x the original clock
rate from the two major x86 vendors. Intel tends to have longer clock
lives, because they generally redo the same architecture twice in its
lifetime (e.g., the P5 and P55C cores, the P-II and the Deschutes core,
Willamette and Prescott, etc.). Both IBM and Mot have typically had
great trouble ramping the clock rate of their CPUs by comparison.

This is largely proportional to pipeline length. The P4's direction of
a longer pipeline has proved to be a mistake, and Intel has already
cancelled all such designs and moved to a P-M future.
Opterons are server CPUs, and thus are more comparable to Power
processors (both have lower clock rates, but are better for SMP), not
PowerPC processors. You should compare the Athlon-64 line to PowerPCs.

Athlons can't do MP, so a dualie G5 would murder an Athlon box for
CPU-intensive stuff.
[...] Plus IBM is planning to ship a triple-core 3.2GHz part for MSFT
later this year; Intel's dual-core 3.2GHz offering costs more than an
entire Xbox 2 will.

Cost is just a question of marketing and positioning. (And only very
rarely correlated with yield, like the Pentium 4 EE.) I know this is
a common thing with Mac advocates -- i.e., trying to justify false
comparisons

yeah yeah
by claiming that you need to normalize them by either
price or clock rate.

That made no sense; I made no such claims.

You claimed:
In the
meantime, IBM, not feeling any competitive pressure from anyone, just
decided to crank the power consumption through the roof.

while I pointed out that IBM is giving a 3.2GHz triple-core part to
Microsoft that enables Microsoft to build its entire Xbox 2 at a price
point less than the current 3.2GHz dual-core Intel part.
[...] and Sony is willing to accept a 12.5% defect rate (1 SPE out of
8 per die being nonfunctional).

Interesting. I am not aware of any other CPU manufacturer willing to
share their yield rates publicly, so I don't really know what to
compare that to.

This is also common with GPUs: the manufacturer sells the dodgy parts
(those with point failures) at mid-range price points with the bad
modules disabled (8 pipes instead of 16, or what have you). Sony is
doing the same thing to save some money.

GPUs are different. There is a complete driver software layer that
insulates bugs. I.e., people don't write "binary" to graphics cards --
you write to DirectX or OpenGL (or Quartz or GDI).

Another nonresponse. Whatever.
AMD is using IBM's *fabrication process* (which *is* state of the art),
not their CPU design people.

True I guess.
I think IBM lacked the interest in doing it. Like me, they assumed
that Apple simply didn't have the cojones to move to x86. And
Mot/Freescale are a joke, so I think IBM just assumed that they had the
contract no matter what they did.

I think IBM wanted money to move forward. Intel will *give* Apple money
to move (backward).
Well actually it *is* a big deal technically.

Not really. Endianness was solved, more or less, 10+ years ago when
NEXTSTEP became OPENSTEP.
Hardware-wise, Apple is already dealing with bass-ackward endianness
with the PCI and AGP subsystems.
Steve Jobs gave a great demo of a quick recompile, but Photoshop is
clearly a very assembly-language- and endian-sensitive application.

It's a vector-unit-sensitive application, and Apple has been working
on abstracting the vector units from the application with the
Accelerate.framework, which combines vector routines and
image-processing routines in one architecture-neutral package.
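
For what it's worth, here is a minimal sketch of what coding against
that layer looks like (this assumes the vDSP half of Accelerate; the
call is a real vDSP routine, but treat the snippet as an illustration
rather than Apple's recommended pattern). The point is that the
application never touches AltiVec or SSE directly:

    /* build on OS X with: gcc vadd.c -framework Accelerate */
    #include <Accelerate/Accelerate.h>
    #include <stdio.h>

    int main(void)
    {
        float a[4] = {1, 2, 3, 4};
        float b[4] = {10, 20, 30, 40};
        float c[4];

        /* c[i] = a[i] + b[i]; stride 1, 4 elements.  The framework
           dispatches to whatever vector unit the CPU actually has. */
        vDSP_vadd(a, 1, b, 1, c, 1, 4);

        printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);
        return 0;
    }
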
And I suspect a lot of the
media apps on the Mac are in a similar situation. The joke
applications will just be a recompile, but not the serious stuff.

Nah. Real apps use OpenGL, and swizzling in OpenGL is pretty easy to
set up.
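
As a rough illustration of that (a sketch only -- it assumes a current
GL context and 8-bit BGRA pixel data, and the function name here is
made up): you describe the byte order of your pixels and let the
driver reorder them on upload, instead of byte-swapping by hand.

    #include <OpenGL/gl.h>

    /* upload BGRA pixels; GL_BGRA + GL_UNSIGNED_INT_8_8_8_8_REV tell
       the driver how the components are laid out, so the same source
       works on big- and little-endian hosts. */
    void upload_texture(const void *pixels, int w, int h)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
    }
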
 
Travelinman said:
Yep. Joke applications like Mathematica, right?

Oh yeah, Mathematica was shown to run flawlessly after the recompile.
It took, what, 30 seconds to demonstrate that it...launched on a Wipple
machine. Kinda like those -probably- rigged demos of how PS for PPC was
faster than PS for PCs.
It certainly doesn't mean that it can't be done, but to suggest that
it's easy is only a notion a fanboi would entertain.

Now you've done it! No more Kool-Aid for the others!

Nicolas
 
Travelinman said:
Yep. Joke applications like Mathematica, right?

Mathematica is written mostly in the Mathematica language (so it's
like 'porting' a Java program). The core stuff does not require
low-level details like endianness or AltiVec or anything like that (I
know, I've written a small symbolic math program myself -- you just
don't obsess over low-level details like this). And given that there
already exist Mac and PC versions of Mathematica, I am actually
surprised that it took them anything more than a recompile.

Apple's headline application has always been Photoshop. You'll notice
they made no mention of how that port is going.
 
VMX's origin is rather unclear.

Not to me it's not. If you read comp.arch at all, you'd know this. IBM
did all the instruction set design, since PPC is really derivative of
the Power/RT processor designs, which predate PowerPC. Mot was just
along for the ride.
IBM's PPC designs were the G3, with no VMX. IBM made not a single G4
for Apple, AFAIK.

Yes, writing stuff down on paper is different from fabricating it. Mot
just picked up the ball faster than IBM on fabrication.
Apparently IBM was not a big fan of the additional die space that the
VMX stuff took up, preferring to go for lean & mean instead of making
the CPU fatter.

No, they just hadn't gotten around to it. Note that IBM actually added
VMX to Power processors as well -- i.e., they always wanted to do it;
they just hadn't put together a design until the Power4/PPC 970.

Uhh ... this is a side effect of AMD buying fab capacity from IBM.
They have to be familiar with the IBM process for this to happen,
meaning they need to learn IBM's techniques for fabrication, which
obviously they could incorporate into their own fabrication process.
So they make a big deal about the sharing because it plays better than
"AMD buys fab capacity from IBM".

(The real point of the story is to convince investors that AMD will be
able to compete with Intel's latest fabrication capacity and
technology. I.e., it's *possible* for AMD to win Dell's business.)
This is largely proportional to pipeline length. The P4's direction of
a longer pipeline has proved to be a mistake, and Intel has already
cancelled all such designs and moved to a P-M future.

It was not a mistake in the main requirement of being able to scale
the clock rates (which they clearly succeeded at). It was just a
*failure* in the sense that AMD's design was superior.

Actually, both the K8 and PPC 970 are deeply pipelined cores as well.
But the K8 is very clever in that certain stages which appear in
Intel's pipelines don't appear in the K8 at all: no drive stages and
no rename/issue stages. (Nevertheless, the K8 definitely has rename
registers.) So the K8 may look less pipelined just because it has fewer
stages; but that's a bit artificial -- they just don't need the
additional stages, due to their ingenious design.

Clock rate scaling is *not* just a question of pipelining. It also has
to do with controlling clock skew. You can learn more here:


http://stanford-online.stanford.edu/courses/ee380/050330-ee380-100.asx
Opterons are server CPUs, and thus are more comparable to Power
processors (both have lower clock rates, but are better for SMP), not
PowerPC processors. You should compare the Athlon-64 line to PowerPCs.

Athlons can't do MP, so a dualie G5 would murder an Athlon box for
CPU-intensive stuff.
Benchmarks?
[...] Plus IBM is planning to ship a triple-core 3.2GHz part for MSFT
later this year; Intel's dual-core 3.2GHz offering costs more than an
entire Xbox 2 will.

Cost is just a question of marketing and positioning. (And only very
rarely correlated with yield, like the Pentium 4 EE.) I know this is
a common thing with Mac advocates -- i.e., trying to justify false
comparisons

yeah yeah
by claiming that you need to normalize them by either
price or clock rate.

That made no sense; I made no such claims.

You brought in the issue of price.
You claimed:


while I pointed out that IBM is giving a 3.2GHz triple-core part to
Microsoft that enables Microsoft to build its entire Xbox 2 at a price
point less than the current 3.2GHz dual-core Intel part.

I don't know any details about the IBM 3.2GHz triple-core part. If
this part isn't somehow crippled in other ways, then why isn't Apple
shipping with one? Doesn't it at all seem fishy to you?
[...] and Sony is willing to accept a 12.5% defect rate (1 SPE out of
8 per die being nonfunctional).

Interesting. I am not aware of any other CPU manufacturer willing to
share their yield rates publicly, so I don't really know what to
compare that to.

This is also common with GPUs: the manufacturer sells the dodgy parts
(those with point failures) at mid-range price points with the bad
modules disabled (8 pipes instead of 16, or what have you). Sony is
doing the same thing to save some money.

GPUs are different. There is a complete driver software layer that
insulates bugs. I.e., people don't write "binary" to graphics cards --
you write to DirectX or OpenGL (or Quartz or GDI).

Another nonresponse. Whatever.

Sorry you don't understand these things. CPUs cannot ship with any
"non-functional parts" unless it's half of an L2 cache or something
like that (but there are superior techniques used in modern CPUs). The
reason is that all software gets compiled to machine language -- so
everything just has to work. GPUs can ship with major
non-functionalities, so long as they can be covered up in the drivers;
I know this from first-hand experience.
I think IBM wanted money to move forward. Intel will *give* Apple money
to move (backward).

Intel is not giving money to Apple, I assure you. They don't care
*that* much about gaining Apple's business.
Not really. Endianness was solved, more or less, 10+ years ago when
NEXTSTEP became OPENSTEP.

This has nothing to do with anything. On x86 it's very common to use
a trick for converting from double to integer by adding a magic
constant to the double and then reading the integer result straight
out of the double's bits (casting the pointer to the same memory
location from (double *) to (int *)). Tricks like this require certain
endian assumptions that have nothing to do with the operating system.
So long as memory is still accessible directly by software, there will
always be endianness consistency issues.
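
For concreteness, a minimal sketch of that trick (the helper name is
made up, and it assumes IEEE-754 doubles, round-to-nearest, and a
little-endian x86 host; on a big-endian PPC you would have to read the
*other* 32-bit half, which is exactly the endian assumption at issue):

    #include <stdio.h>

    /* "magic constant" double-to-int: adding 1.5 * 2^52 forces the
       integer value into the low bits of the double's mantissa, which
       we then read back through a pointer cast.  Little-endian only
       as written; strict-aliasing caveats apply. */
    static int fast_dtoi(double x)
    {
        x += 6755399441055744.0;   /* 1.5 * 2^52 */
        return *(int *)&x;         /* low 32 bits come first on x86 */
    }

    int main(void)
    {
        /* prints "124 -41" on a little-endian IEEE-754 machine */
        printf("%d %d\n", fast_dtoi(123.7), fast_dtoi(-41.2));
        return 0;
    }
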
Hardware-wise, Apple is already dealing with bass-ackward endianness
with the PCI and AGP subsystems.

The devices on the other side of PCI and AGP are abstracted by
drivers. General software does not talk to them directly -- it's always
through an OS-level API which is itself shielded by drivers.
It's a vector-unit-sensitive application, and Apple has been working
on abstracting the vector units from the application with the
Accelerate.framework, which combines vector routines and
image-processing routines in one architecture-neutral package.

And if Apple is still working on it, do you think Adobe, with its
*shipping* requirements, is paying any attention to it?
Nah. Real apps use OpenGL, and swizzling in OpenGL is pretty easy to
set up.

Media includes audio, dude. OpenGL doesn't do anything for that.
 
Mathematica is written mostly in the Mathematica language (so it's
like 'porting' a Java program). The core stuff does not require
low-level details like endianness or AltiVec or anything like that (I
know, I've written a small symbolic math program myself -- you just
don't obsess over low-level details like this). And given that there
already exist Mac and PC versions of Mathematica, I am actually
surprised that it took them anything more than a recompile.

I have no way of checking whether that's true or not, but it sure
negates your comment (above) where you claim that anything other than a
'joke application' would be hard to port.
Apple's headline application has always been Photoshop. You'll notice
they made no mention of how that port is going.

Other than Adobe saying that they were committed to porting all their
apps. IIRC, they claimed that they'd be one of the first.
 
Not to me it's not. If you read comp.arch at all, you'd know this.

Deja has 14 articles mentioning VMX and IBM.

This one:

http://groups-beta.google.com/group/comp.arch/msg/8c55bd0149aa57a7?dmode=source&hl=en

is how I remember it.

Though this one:

http://groups-beta.google.com/group/comp.arch/msg/0cf5708fb80ac5da?dmode=source&hl=en

supports your claim that IBM was the prime mover initially.
IBM did all the instruction set design, since PPC is really derivative
of the Power/RT processor designs, which predate PowerPC. Mot was just
along for the ride.

Well, they apparently took over the ride ca. 1998. IBM was NOT using
VMX in its chips, which was, TMK, one of the major reasons they got to
1GHz way before Moto.
No, they just hadn't gotten around to it. Note that IBM actually added
VMX to Power processors as well -- i.e., they always wanted to do it;
they just hadn't put together a design until the Power4/PPC 970.

I doubt this. VMX is/was only single-precision for one thing. Not a
good match for Power.
It was not a mistake in the main requirement of being able to scale
the clock rates (which they clearly succeeded at). It was just a
*failure* in the sense that AMD's design was superior.

Actually, both the K8 and PPC 970 are deeply pipelined cores as well.
But the K8 is very clever in that certain stages which appear in
Intel's pipelines don't appear in the K8 at all: no drive stages and
no rename/issue stages. (Nevertheless, the K8 definitely has rename
registers.) So the K8 may look less pipelined just because it has fewer
stages; but that's a bit artificial -- they just don't need the
additional stages, due to their ingenious design.

Well this is too technically wonky for me so instead of pipelining I'll
just stick to sacrificing IPC for frequency...

Just the point that IBM was making a triple-core 3.2GHz part for
Microsoft that will be much cheaper than Intel's dual-core 3.2GHz
part.

Seems like IBM was willing to compete well with Intel in this
particular arena.
I don't know any details about the IBM 3.2GHz triple-core part. If
this part isn't somehow crippled in other ways, then why isn't Apple
shipping with one? Doesn't it at all seem fishy to you?

Microsoft owns that IP since they paid for it. TMK, they're taking it
to TSMC or whoever for fabbing.

IBM would be perfectly willing to do such an exercise for Apple,
should they too agree to pay some hundred(s) of megabucks to get the
ball rolling.
[...] and Sony is willing to accept a 12.5% defect rate (1 SPE out of
8 per die being nonfunctional).

Interesting. I am not aware of any other CPU manufacturer willing to
share their yield rates publicly, so I don't really know what to
compare that to.

This is also common with GPUs: the manufacturer sells the dodgy parts
(those with point failures) at mid-range price points with the bad
modules disabled (8 pipes instead of 16, or what have you). Sony is
doing the same thing to save some money.

GPUs are different. There is a complete driver software layer that
insulates bugs. I.e., people don't write "binary" to graphics cards --
you write to DirectX or OpenGL (or Quartz or GDI).

Another nonresponse. Whatever.

Sorry you don't understand these things. CPUs cannot ship with any
"non-functional parts" unless it's half of an L2 cache or something
like that (but there are superior techniques used in modern CPUs). The
reason is that all software gets compiled to machine language -- so
everything just has to work. GPUs can ship with major
non-functionalities, so long as they can be covered up in the drivers;
I know this from first-hand experience.

That makes more sense, but I don't think it's that common for drivers
to work around a variable number of missing features.

In this case Sony knocking out 1 core is pretty much identical to
NVIDIA knocking out half their pipes, and also to Intel shipping chips
with half their cache physically deactivated.
Intel is not giving money to Apple, I assure you. They don't care
*that* much about gaining Apple's business.

They give everyone money (that's why all Wintel OEM commercials end in
the Intel tones). They're a regular ATM.
This has nothing to do with anything. On x86 it's very common to use
a trick for converting from double to integer by adding a magic
constant to the double and then reading the integer result straight
out of the double's bits (casting the pointer to the same memory
location from (double *) to (int *)). Tricks like this require certain
endian assumptions that have nothing to do with the operating system.
So long as memory is still accessible directly by software, there will
always be endianness consistency issues.

Right. These silly things were obviated by NeXT taking the time to
create an endian-neutral API to abstract this away. NSFloat, NSNumber,
unichar, etc.
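
(For anyone who hasn't used that style of API, the idea is that the
program always goes through a byte-order-aware helper rather than
trusting the host layout. A small sketch using CoreFoundation's
byte-order functions, which carry the same idea forward; the on-disk
value here is made up for illustration:)

    /* build with: gcc swap.c -framework CoreFoundation */
    #include <CoreFoundation/CFByteOrder.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* four bytes as they sit in a big-endian file: the value 16 */
        unsigned char disk_bytes[4] = { 0x00, 0x00, 0x00, 0x10 };
        uint32_t raw, count;

        memcpy(&raw, disk_bytes, 4);        /* raw is in *file* order */
        count = CFSwapInt32BigToHost(raw);  /* 16 on PPC *and* on x86 */

        printf("%u\n", (unsigned)count);
        return 0;
    }
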
The devices on the other side of PCI and AGP are abstracted by
drivers. General software does not talk to them directly -- it's always
through an OS-level API which is itself shielded by drivers.

True enough, but much of Apple's existing driver codebase is already
dealing with swizzling thanks to PCI.
And if Apple is still working on it, do you think Adobe, with its
*shipping* requirements, is paying any attention to it?

It shipped with 10.3, IIRC. Adobe doesn't use it since they have/had
5+ years invested in AltiVec.
10.4 features CoreImage additions, which Adobe won't use either, most
likely.
Media includes audio, dude. OpenGL doesn't do anything for that.

True enough; I expect audio apps will be in for an especially tough
time.
 
Travelinman said:
Other than Adobe saying that they were committed to porting all their
apps. IIRC, they claimed that they'd be one of the first.

Adobe should have a not-horrible time, just a lot of copy/paste from
the existing Windows codebase for the Intel binaries.

A lot of drudgery, but the hard stuff has already been done.
 
Well, they apparently took over the ride ca. 1998. IBM was NOT using
VMX in its chips, which was, TMK, one of the major reasons they got to
1GHz way before Moto.

Well, that reasoning only applies if you have limited design teams
that can only work on one thing at a time. MMX/SSE, etc. never had any
clock rate impact on any x86 processor. OTOH, I was left with the
distinct impression that the AltiVec "byte permutation" instruction was
difficult to implement in hardware, and may have caused clock rate
pressure. If so, that would be a huge price to pay for 1 instruction.
I doubt this. VMX is/was only single-precision for one thing. Not a
good match for Power.

Uh ... excuse me, but VMX *IS* in the current Power CPUs.
Well this is too technically wonky for me so instead of pipelining I'll
just stick to sacrificing IPC for frequency...

Right. You gotta stay in the reality distortion field; must not
disturb it with annoying things like technical details.

Ahahahaah! No, I mean *real* benchmarks by independent outsiders. You
know, like Anand's, Tom's, Tech-fest, HardOCP, Sharky, FiringSquad,
3DMark, SPEC CPU, SPEC GL. Oh, I forgot, nobody ever benchmarks an
Apple, do they? ... Oh wait! Here's one:

http://www.barefeats.com/macvpc.html
Just the point that IBM was making a triple-core 3.2GHz part for
Microsoft that will be much cheaper than Intel's dual-core 3.2GHz
part.

Seems like IBM was willing to compete well with Intel in this
particular arena.

You clearly don't know the Xbox history. Intel was not a player in the
Xbox 360 by their own choice. Otherwise MSFT would gladly have gone
with Intel again, to have a simple "backwards compatibility story",
like Sony did with the PS2.
Microsoft owns that IP since they paid for it. TMK, they're taking it
to TSMC or whoever for fabbing.

Ah! TSMC is a second-tier fabrication facility. I.e., they don't
support things like copper interconnect, SOI or shallow trench
isolation. So this design has to be completely stripped down, probably
comparable to a G3 or something, but designed for clock rate and a
generic fab process. If this thing has an IPC even as high as a P4's I
would be greatly surprised.
IBM would be perfectly willing to do such an exercise for Apple,
should they too agree to pay some hundred(s) of megabucks to get the
ball rolling.

No, licensing the design is cheap, and Apple can clearly get the same
deal that MSFT did at the drop of a hat. Apple rejected it, and with
good reason. Apple needs the clock rate to scale, but MSFT doesn't.
[...] and Sony is willing to accept a 12.5% defect rate (1 SPE out of
8 per die being nonfunctional).

Interesting. I am not aware of any other CPU manufacturer willing to
share their yield rates publicly, so I don't really know what to
compare that to.

This is also common with GPUs: the manufacturer sells the dodgy parts
(those with point failures) at mid-range price points with the bad
modules disabled (8 pipes instead of 16, or what have you). Sony is
doing the same thing to save some money.

GPUs are different. There is a complete driver software layer that
insulates bugs. I.e., people don't write "binary" to graphics cards --
you write to DirectX or OpenGL (or Quartz or GDI).

Another nonresponse. Whatever.

Sorry you don't understand these things. CPUs cannot ship with any
"non-functional parts" unless it's half of an L2 cache or something
like that (but there are superior techniques used in modern CPUs). The
reason is that all software gets compiled to machine language -- so
everything just has to work. GPUs can ship with major
non-functionalities, so long as they can be covered up in the drivers;
I know this from first-hand experience.

That makes more sense, but I don't think it's that common for drivers
to work around a variable number of missing features.

No, just the most common defects. You take care of all your
low-hanging fruit and all of a sudden your yield looks a heck of a lot
better.
In this case Sony knocking out 1 core is pretty much identical to
NVIDIA knocking out half their pipes, and also to Intel shipping chips
with half their cache physically deactivated.

Like I said, Intel (and AMD) doesn't do that anymore. You make the L2
caches slightly redundant, with built-in testing and spare cache lines.
They survive fab defects by remapping the defective lines during
initial chip testing. The Cell having an additional processing unit,
with the requirement of disabling routing to exactly one of them (with
a special pin in the packaging, say), makes sense though.
They give everyone money (that's why all Wintel OEM commercials end in
the Intel tones). They're a regular ATM.

It's called co-marketing. They don't need to pay for the whole ad;
they just need to pay enough of it to convince the OEM to pay the
balance themselves. It's actually a more effective way for Intel to
advertise, because people tend to buy complete systems more than they
buy individual CPUs -- but Intel doesn't want to play favorites with
system vendors. Intel is doing this as a means of competing with AMD,
not Apple.

This only seems out of place to you because Mot and IBM never truly
competed with each other for Apple's business, and therefore they never
had a reason to pull the same trick.
Right. These silly things were obviated by NeXT taking the time to
create an endian-neutral API to abstract this away. NSFloat, NSNumber,
unichar, etc.

I see -- and they included IEEE-754 representational tricks in those
APIs? For an example of when IEEE-754 representation tricks are
useful:

http://www.pobox.com/~qed/sqroot.html
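
(If you haven't seen this class of trick, the classic inverse
square-root estimate is a reasonable illustration -- not necessarily
the exact routine on that page -- and it only works because the code
knows exactly how the float's bits are laid out:)

    #include <stdint.h>
    #include <string.h>

    /* bit-level 1/sqrt(x) estimate: reinterpret the float's IEEE-754
       bits as an integer, fudge exponent and mantissa with a magic
       constant, then polish with one Newton-Raphson step. */
    static float rsqrt_est(float x)
    {
        float half = 0.5f * x;
        int32_t i;
        memcpy(&i, &x, sizeof i);     /* read the raw bits */
        i = 0x5f3759df - (i >> 1);    /* magic first guess */
        memcpy(&x, &i, sizeof x);
        return x * (1.5f - half * x * x);
    }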

Look. Any performance-sensitive, standard binary format (like PKZIP,
for example) is going to require low-level endian swizzling in
Mac-based software. Furthermore, a lot of software will just assume
endianness when it can, precisely because it associates the endianness
with the operating system. The fact that NeXT happens to have made
endian-neutral APIs doesn't mean anything to anyone who isn't writing
code which is cross-platform to begin with.
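
For a concrete picture of the swizzling in question (a sketch, using
PKZIP only as an example format and a helper name of my own): ZIP
headers store multi-byte fields little-endian, so a portable reader
assembles them from bytes instead of pointer-casting into the buffer
the way x86-only code tends to.

    #include <stdint.h>

    /* read a little-endian 32-bit field from a byte buffer; works on
       either endianness.  (The ZIP local-file-header signature, for
       instance, is 0x04034b50.) */
    static uint32_t read_le32(const unsigned char *p)
    {
        return (uint32_t)p[0]
             | ((uint32_t)p[1] << 8)
             | ((uint32_t)p[2] << 16)
             | ((uint32_t)p[3] << 24);
    }
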
It shipped with 10.3, IIRC. Adobe doesn't use it since they have/had
5+ years invested in AltiVec.

And similar time invested in SSE. That's not the issue. They need to
mate an OS X front end with an x86 back end. That's just going to be a
hell of a lot of work.
10.4 features CoreImage additions, which Adobe won't use either, most
likely.

*Can't* use is more like it. To do the level of image manipulation it
does, it's all assembly language.
 
On Sun, 12 Jun 2005 15:48:40 -0700, imouttahere wrote:

I doubt this. VMX is/was only single-precision for one thing. Not a
good match for Power.

The Power4 certainly does *not* have VMX, while the 970 does. ...just
wanted to make this clear. ...carry on! ;-)
 