Well, they apparently took over the ride ca. 1998. IBM was NOT using VMX
in its chips, which was, TMK, one of the major reasons they got to 1 GHz
way before Moto.
Well, that reasoning only applies if you have limited design teams that
can only work on one thing at a time. MMX/SSE, etc. never had any clock
rate impact on any x86 processor. OTOH, I was left with the distinct
impression that the AltiVec "byte permutation" instruction was
difficult to implement in hardware, and may have caused clock rate
pressure. If so, that would be a huge price to pay for one instruction.
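For reference, here's a rough sketch of what that instruction (vperm,
exposed in C as vec_perm) does: pick any 16 bytes, in any order, out of
the 32 bytes formed by two source vectors, driven by a third control
vector. This assumes a PowerPC compiler with AltiVec enabled (e.g.
gcc -maltivec), and the vector-literal syntax varies a bit between
compilers. The full byte crossbar this implies is presumably what makes
it expensive in hardware.

#include <altivec.h>
#include <stdio.h>

int main(void)
{
    vector unsigned char a   = {  0,  1,  2,  3,  4,  5,  6,  7,
                                  8,  9, 10, 11, 12, 13, 14, 15 };
    vector unsigned char b   = { 16, 17, 18, 19, 20, 21, 22, 23,
                                 24, 25, 26, 27, 28, 29, 30, 31 };
    /* Control vector: each byte is an index into the 32-byte pair (a,b);
       this one simply reverses the 16 bytes of 'a'. */
    vector unsigned char ctl = { 15, 14, 13, 12, 11, 10,  9,  8,
                                  7,  6,  5,  4,  3,  2,  1,  0 };

    vector unsigned char r = vec_perm(a, b, ctl);

    const unsigned char *p = (const unsigned char *)&r;
    for (int i = 0; i < 16; i++)
        printf("%d ", p[i]);        /* prints 15 14 13 ... 1 0 */
    printf("\n");
    return 0;
}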
I doubt this. VMX is/was single-precision only, for one thing. Not a
good match for Power.
Uh ... excuse me, but VMX *IS* in the current Power CPUs.
Well, this is too technically wonky for me, so instead of pipelining I'll
just stick to sacrificing IPC for frequency...
Right. You gotta stay in the reality distortion field; must not
disturb it with annoying things like technical details.
Ahahahaah! No, I mean *real* benchmarks by independent outsiders. You
know, like Anand's, Tom's, Tech-fest, HARD-OCP, Sharky, FiringSquad,
3D-Mark, SPEC-CPU, SPEC-GL. Oh I forgot, nobody ever benchmarks an
Apple do they? ... Oh wait! Here's one:
http://www.barefeats.com/macvpc.html
Just the point that IBM was making a triple-core 3.2 GHz part for
Microsoft that will be much cheaper than Intel's dual-core 3.2 GHz part.
Seems like IBM was willing to compete quite well with Intel in this
particular arena.
You clearly don't know the Xbox history. Intel was not a player in the
Xbox 360 by their own choice. Otherwise MSFT would gladly have gone with
Intel again, to have a simple "backwards compatibility story", like
Sony did with the PS2.
Microsoft owns that IP since they paid for it. TMK, they're taking it
to TSMC or whoever for fabbing.
Ah! TSMC is a second-tier fabrication facility. I.e., they don't
support things like copper interconnect, SOI or shallow trench
isolation. So this design has to be completely stripped down,
probably comparable to a G3 or something, but designed for clock rate
and a generic fab process. If this thing has an IPC even as high as a
P4's I would be greatly surprised.
IBM would be perfectly willing to do such an exercise for Apple, should
they also agree to pay some hundred(s) of megabucks to get the ball
rolling.
No, licensing the design is cheap, and Apple can clearly get the same
deal that MSFT did at the drop of a hat. Apple rejected it, and with
good reason. Apple needs the clock rate to scale, but MSFT doesn't.
[...] and Sony is willing to accept a 12.5% defect rate (1 SPE out of 8
per die being nonfunctional).
Interesting. I am not aware of any other CPU manufacturer willing to
share their yield rates publicly, so I don't really know what to
compare that to.
This is also common with GPUs: the manufacturer sells the dodgy parts
(the ones with point failures) at mid-range price points with the bad
modules disabled (8 pipes instead of 16, or what have you). Sony is
doing the same thing to save some money.
GPUs are different. There is a complete driver software layer that
insulates against bugs. I.e., people don't write "binary" to graphics
cards -- you write to DirectX or OpenGL (or Quartz or GDI).
Another nonresponse. Whatever.
Sorry you don't understand these things. CPUs cannot ship with any
"non-functional parts" unless it's half of an L2 cache or something like
that (but there are superior techniques used in modern CPUs). The
reason is that all software gets compiled to machine language -- so
everything just has to work. GPUs can ship with major
non-functionalities, so long as they can be covered up in the drivers;
I know this from first-hand experience.
That makes more sense, but I don't think it's so common for drivers to
work around a variable number of missing features.
No, just the most common defects. You take care of all your low
hanging fruit and all of a sudden your yield looks a heck of a lot
better.
In this case Sony knocking out 1 core is pretty much identical to
NVIDIA knocking out half their pipes, and also to Intel shipping chips
with half their cache physically deactivated.
Like I said, Intel (and AMD) doesn't do that anymore. You make the L2
caches slightly redundant with built-in testing and with spare cache
lines. They survive fab defects by remapping the defective lines during
initial chip testing. The CELL having an additional processing unit,
with the requirement of disabling routing to exactly one of them (with
a special pin in the packaging, say), makes sense though.
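Purely as a conceptual sketch (the names and sizes here are invented,
and this is not any vendor's actual built-in-test logic), the spare-line
idea amounts to a small remap table populated once at initial test:

#include <stdio.h>

#define NUM_LINES  1024   /* nominal L2 lines (made-up number)  */
#define NUM_SPARES    8   /* spare lines held in reserve        */

static int remap[NUM_LINES];   /* -1 = line is good, use it as-is */
static int spares_used = 0;

/* Called at initial chip test for each line that fails built-in test. */
static int retire_line(int line)
{
    if (spares_used >= NUM_SPARES)
        return 0;                  /* too many defects: bin or scrap the part */
    remap[line] = spares_used++;   /* route this index to a spare line        */
    return 1;
}

/* Every later access consults the table transparently. */
static int effective_line(int line)
{
    return remap[line] >= 0 ? NUM_LINES + remap[line] : line;
}

int main(void)
{
    for (int i = 0; i < NUM_LINES; i++)
        remap[i] = -1;
    retire_line(42);               /* pretend test found one bad line */
    printf("line 42 -> physical line %d\n", effective_line(42));
    return 0;
}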
They give everyone money (that's why all Wintel OEM commercials end in
the Intel tones). They're a regular ATM.
It's called co-marketing. They don't need to pay for the whole ad, they
just need to pay enough of it to convince the OEM to pay the balance
themselves. It's actually a more effective way for Intel themselves to
advertise, because people tend to buy complete systems more than they
buy individual CPUs -- but Intel doesn't want to play favorites with
system vendors. Intel is doing this as a means of competing with AMD,
not Apple.
This only seems out of place to you because Mot and IBM never truly
competed with each other for Apple's business, and therefore they never
had a reason to pull the same trick.
Right. These silly things were obviated by NeXT taking the time to
create an endian-neutral API to abstract this away. NSFloat, NSNumber,
unichar, etc.
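To make that concrete, here is a minimal sketch of the kind of call that
API boils down to, assuming Foundation's NSByteOrder.h helpers
(NSSwapBigIntToHost and friends) -- the caller never has to test the
host byte order itself:

#include <Foundation/NSByteOrder.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Four bytes as they might arrive in a big-endian file format. */
    unsigned char bytes[4] = { 0x12, 0x34, 0x56, 0x78 };
    unsigned int raw;
    memcpy(&raw, bytes, sizeof raw);

    /* Foundation hides whether the host is big- or little-endian. */
    unsigned int value = NSSwapBigIntToHost(raw);
    printf("value = 0x%08x\n", value);   /* 0x12345678 on any host */
    return 0;
}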
I see -- and they included IEEE-754 representational tricks in those
APIs? For an example of when IEEE-754 representation tricks are
useful:
http://www.pobox.com/~qed/sqroot.html
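The classic example of that category of trick -- not necessarily the
exact method on that page -- is seeding 1/sqrt(x) by manipulating the
float's bit pattern directly, then refining with a Newton step:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Only meaningful for normal, positive floats. */
static float approx_rsqrt(float x)
{
    uint32_t i;
    memcpy(&i, &x, sizeof i);               /* reinterpret the IEEE-754 bits        */
    i = 0x5f3759df - (i >> 1);              /* exploit the exponent/mantissa layout */
    float y;
    memcpy(&y, &i, sizeof y);
    return y * (1.5f - 0.5f * x * y * y);   /* one Newton-Raphson refinement        */
}

int main(void)
{
    printf("approx 1/sqrt(2) = %f (exact ~0.707107)\n", approx_rsqrt(2.0f));
    return 0;
}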
Look. Any performance-sensitive, standard binary format (like PKZIP,
for example) is going to require low-level endian swizzling in
Mac-based software. Furthermore, a lot of software will just assume
endianness when it can, precisely because it associates the
endianness with the operating system. The fact that NeXT happens to
have made endian-neutral APIs doesn't mean anything to anyone who isn't
writing code which is cross-platform to begin with.
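For instance, here's a sketch of the kind of swizzling a ZIP reader ends
up doing on a big-endian PowerPC host -- the format fixes the byte
order, so the code has to deal with it regardless of what the OS APIs
look like:

#include <stdio.h>
#include <stdint.h>

/* Assemble a little-endian 32-bit value from raw bytes; works on any host. */
static uint32_t read_le32(const unsigned char *p)
{
    return (uint32_t)p[0]
         | (uint32_t)p[1] << 8
         | (uint32_t)p[2] << 16
         | (uint32_t)p[3] << 24;
}

int main(void)
{
    /* First four bytes of a ZIP local file header: "PK\x03\x04". */
    const unsigned char hdr[4] = { 0x50, 0x4b, 0x03, 0x04 };
    if (read_le32(hdr) == 0x04034b50u)
        printf("looks like a PKZIP local file header\n");
    return 0;
}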
It shipped with 10.3, IIRC. Adobe doesn't use it since they have/had 5+
years invested in AltiVec.
And similar time invested in SSE. That's not the issue. They need to
mate an OS X front end with an x86 back end. That's just going to be
a hell of a lot of work.
10.4 features CoreImage additions, which Adobe most likely won't use
either.
*Can't* use, is more like it. To do the level of image manipulation it
does, it's all assembly language.