so Jobs gets screwed by IBM over game consoles, thus Apple-Intel?

  • Thread starter: Guest
(e-mail address removed) wrote:


Uh ... excuse me, but VMX *IS* in the current Power CPUs.

1) Uh, "Power" is a marketing term, not a current architecture (the
architecture is PowerPC, or more precisely for this class, PowerPC/AS).

2) VMX is certainly *not* in the Power4 or Power5 processors. It was
added to the 970, for the obvious reason.

<snip>
 
Well, that reasoning only applies if you have limited design teams that
can only work on one thing at a time. MMX/SSE, etc. never had any clock
rate impact on any x86 processor. OTOH, I was left with the distinct
impression that the AltiVec "byte permutation" instruction (vperm) was
difficult to implement in hardware, and may have caused clock rate
pressure. If so, that would be a huge price to pay for one instruction.
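
(For the curious, here's a rough plain-C model of what that permute
instruction computes -- a sketch for illustration, not real AltiVec
intrinsics. Each output byte is an independent 32-to-1 byte select
across two source registers, which is why it's expensive to wire up
at high clock rates.)

    #include <stdint.h>

    /* Model of AltiVec's vperm: each of the 16 output bytes is picked
       from the 32-byte concatenation of a and b by the low 5 bits of
       the corresponding control byte -- a full 32:1 byte mux per lane. */
    void vperm_model(uint8_t out[16], const uint8_t a[16],
                     const uint8_t b[16], const uint8_t ctl[16])
    {
        for (int i = 0; i < 16; i++) {
            unsigned sel = ctl[i] & 0x1f;   /* 0..31 selects a source byte */
            out[i] = (sel < 16) ? a[sel] : b[sel - 16];
        }
    }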

Well, IBM had PPC750 parts running at over 1.0GHz when Moto was stuck at
450MHz.

Why? Dunno.
Uh ... excuse me, but VMX *IS* in the current Power CPUs.

Only thanks to Apple getting it in there for the PPC970.
Right. You gotta stay in the reality distortion field; must not
disturb it with annoying things like technical details.

I had made the argument that Intel's idea of deeper pipelining for
greater clock speed has been abandoned.

You post 3 paragraphs of trivia about AMD crap that I couldn't care
less about, so in the interests of avoiding a totally pointless
argument (too late!) I helpfully modified my assertion to just be that
Intel went for frequency over IPC with the P4, something I should hope
you would not find controversial or otherwise RDF-influenced.
Ahahahaah! No, I mean *real* benchmarks by independent outsiders.

Wheel the goalposts as far back as you like, but if you can't find
anything wrong or inconsistent with those tests they're credible to me.
You know, like Anand's, Tom's, Tech-fest, HARD-OCP, Sharky, FiringSquad,
3D-Mark, SPEC-CPU, SPEC-GL. Oh I forgot, nobody ever benchmarks an
Apple do they? ... Oh wait! Here's one:

yeah, one that does back my claim that a dual 2.7 murders a single
Athlon in MP-enabled, computation-intensive code. Thanks.
You clearly don't know the Xbox history. Intel was not a player in the
Xbox 360 by their own choice.

No, IIRC Otellini publicly confirmed that disinclination recently.
Otherwise MSFT would gladly go with
Intel again, to have a simple "backwards compatibility story", like
SONY did with PS2.

Sure.

Still, IBM has designed a triple-core 3.2GHz CPU for Microsoft's $500
box, compared to Intel's dual-core 3.2GHz Pentium D (which, as you
surely know, is nothing more than two Prescotts duct-taped together)
that they are selling for over $530 in qty 1,000:

http://www.intel.com/intel/finance/pricelist/

which sorta puts paid to your assertion:

"IBM, not feeling any competitive pressure from anyone, just decided to
crank the power consumption through the roof"

Seems to this RDF-challenged observer that IBM is ahead of Intel here.
Ah! TSMC is a second-tier fabrication facility. I.e., they don't
support things like copper interconnect, SOI or shallow trench
isolation. So this design has to be completely stripped down, probably
comparable to a G3 or something, but designed for clock rate and a
generic fab process.

http://www.tsmc.com/english/technology/t0113.htm

is what they'll be using. Doesn't seem that generic to me.
If this thing has an IPC even as high as a P4 I would be greatly surprised.

Well, the triple SMT cores will probably mitigate TSMC's 'woeful'
fabbing capabilities.
No, licensing the design is cheap,

from Microsoft? huh?
and Apple can clearly get the same
deal that MSFT did at the drop of a hat.

yeah, pay IBM hundred(s) of millions of dollars.
Apple rejected it, and with
good reason. Apple needs the clock rate to scale, but MSFT doesn't.

True. I think Apple was getting tired of worrying about the 3-5 year
horizon, not just the immediate problems they were having with
Freescale. Enough to make anyone say, "**** it!", and that's not even
considering the immense cost and time-to-market advantages Apple is
getting from not having to design its own chipsets any more.

(agreeable stuff snipped)
It's called comarketing.

Right. That's what I was referring to above.
They don't need to pay for the whole ad, they just need to pay enough
of it to convince the OEM to pay the balance themselves. It's actually
a more effective way for Intel to advertise, because people tend to buy
complete systems more than they buy individual CPUs -- but Intel
doesn't want to play favorites with system vendors. Intel is doing this
as a means of competing with AMD, not Apple.

This only seems out of place to you

I wasn't alleging Intel was giving kickbacks, though of course that's
what comarketing dollars really are in the final analysis.
because Mot and IBM never truly
competed with each other for Apple's business, and therefore they never
had a reason to pull the same trick.

"please pay us more so we can give you co-marketing money???"

co-marketing money is a sign of market strength, not weakness.
I see -- and they included IEEE-754 representational tricks in those
APIs? For an example of when IEEE-754 representation tricks are
useful:

http://www.pobox.com/~qed/sqroot.html
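
(To give a flavor of the genre -- this is my own minimal sketch, not
code from that page: a crude square-root estimate computed entirely
with integer operations on the IEEE-754 bit pattern, normally refined
afterwards with a Newton step or two.)

    #include <stdint.h>
    #include <string.h>

    /* Crude sqrt estimate: halve the biased exponent by shifting the
       raw bits right one and re-adding half the exponent bias
       (127 << 22). Valid only for normal, positive inputs. */
    float approx_sqrt(float x)
    {
        uint32_t i;
        memcpy(&i, &x, sizeof i);       /* reinterpret bits without UB */
        i = (i >> 1) + (127u << 22);    /* exponent/2, bias restored */
        float y;
        memcpy(&y, &i, sizeof y);
        return y;                       /* e.g. approx_sqrt(4.0f) == 2.0f */
    }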

Apple only found need to mention IEEE fp in the briefest of notes in
their 100+ page transition document.
Look. Any performance-sensitive, standard binary format (like PKZIP,
for example) is going to require low-level endian swizzling in
Mac-based software. Furthermore, a lot of software will just assume
endianness when it can, precisely because it associates the endianness
with the operating system. The fact that NeXT happens to have made
endian-neutral APIs doesn't mean anything to anyone who isn't writing
code which is cross-platform to begin with.
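
(A concrete sketch of the kind of helper I mean -- the helper itself is
hypothetical, but the layout is real: ZIP headers store multi-byte
fields little-endian, so a big-endian PowerPC host has to reassemble
them byte by byte.)

    #include <stdint.h>

    /* Read a little-endian 32-bit field from a byte stream; correct on
       hosts of either endianness, since it never reinterprets memory. */
    static uint32_t read_le32(const unsigned char *p)
    {
        return (uint32_t)p[0]
             | ((uint32_t)p[1] << 8)
             | ((uint32_t)p[2] << 16)
             | ((uint32_t)p[3] << 24);
    }

    /* Usage: the ZIP local-file-header signature is the little-endian
       value 0x04034b50 ("PK\3\4" as bytes on disk). */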

Right. Apple's frameworks were cross-platform to begin with. That
people weren't smart enough to take advantage of them isn't Apple's
fault.
And similar time invested in SSE. That's not the issue. They need to
mate an OS X front end with an x86 back end. That's just going to be
a hell of a lot of work.

Nah. Dubious assertion to me. Adobe people are pros; they know how to
architect code.

Assuming they're still on CodeWarrior... moving to Xcode/gcc, now THAT
will be an immense amount of work.
*Can't* use, is more like it. To do the level of image manipulation it
does, it's all assembly language.

True, CoreImage doesn't do Adobe any good in maintaining pixel-perfect
compatibility with the x86 code.
 
Intel is not giving money to Apple, I assure you. They don't care
*that* much about gaining Apple's business.

An article I read on this gave what sounded like a good reason that
Intel does care quite a bit about getting Apple's business. The claim
is that Intel has long wanted to push x86 PC designs past all the legacy
crap they are still stuck with, but the PC makers don't want to go
along. The PC makers just want to make the cheapest boxes they can, and
that means sticking with cheap legacy stuff.

Apple gives Intel a customer that can be used to showcase what a PC
should be like.
 
Opteron does not use liquid cooling. AMD basically bought fab capacity
from IBM, since they can't fund the purchase of a second fab by
themselves.

Err, they can't?!?

http://www.amd.com/us-en/0,,3715_10023,00.html

Sure looks like a second fab to me!

FWIW, first wafer starts at AMD's second fab were actually April 1 of
this year, so finished wafers should be coming out any day now.
Admittedly these are only test wafers and are not ready for real
production, but it does look like AMD is on track to have this fab up
and running for early 2006.
 
Yeah, more pipelining, adding more dispatch slots, rename registers and
an industry-leading out-of-order window is really baaaaad ... NOT!

I think that's the point: most of those features were stripped out of
both the PPC core of the Cell processor for the PS3 and the PPC cores
for the Xbox 360. It remains to be seen if the G5 itself can clock to
3.2GHz on a 90nm process, or perhaps even on a 65nm process.
See, the clock scaling business is clearly something AMD learned with
the K6, and Intel learned with the original Pentium.

The basic concept of clock scaling has been something that has been
fairly well known for some time. For a good example of this, look all
the way back to '92 and DEC's Alpha processor. It's always been a
trade-off between clock scaling, instructions per clock cycle, power
consumption and the all-important factor of cost.
Just look at the history of these two companies and you can see that
both know how to scale a processor's clock throughout its lifetime to
match up with Moore's Law. The K6 went from 233MHz to 550MHz, the
Pentium 33MHz to 200MHz, the Athlon from 600MHz to 2.0GHz, the Pentium
II/III from 300MHz to 1.266GHz, the Pentium IV from 1.5GHz to 3.8GHz,
and the Athlon-64 got started at around 2.0GHz. Look at the anemic PPC
970 processor; introduced at 2.0GHz, now a measly 2.5GHz but only by
using *liquid cooling*.

Well, to be fair to IBM, their original design goal for the PPC 970/G5
was 1.6 and 1.8GHz. They ended up getting an extra speed grade of
2.0GHz to sell to Apple. Now they are up to 2.7GHz with the latest
and greatest PowerMac chips. It may not seem like great scaling, but
pretty much everyone else has had similar (if not worse) woes in the
recent past. Intel hit 3.2GHz a full two years ago, yet their
fastest-clocked chip now is still only 3.8GHz.

As for the liquid cooling thing, Apple definitely *COULD* cool the
chips using air if they so desired, but it would require rather noisy
fans to do so, especially with two processors in the box.
And lets talk about power consumption. Because of the threat from
Transmeta (with their amazing low power cores) both AMD and Intel have
reacted by making lower and lower power cores.

I don't think Transmeta had much to do with this; it's more that the
market WANTED lower-power cores. Intel has been producing extremely
low-powered cores for some time now, certainly dating back to before
Transmeta showed up on the scene; they just started getting more
popular and Intel saw it as a business opportunity.

It's also important to note that a lot of these "low power" cores
actually consume quite a lot of power relative to what chips used to
consume. For example, the Pentium-M consumes about 25W. An old
Pentium MMX 300MHz mobile processor consumed only 8.4W. Even the
Pentium MMX 233MHz DESKTOP chip only consumed 17.0W of power.

For comparison, Transmeta's "amazing low power cores" consume a
maximum of about 7 or 8W (exact numbers are very poorly documented by
Transmeta).
Intel will eventually switch to them in their mainstream processors
(via the Pentium-M path) and AMD has put their low-power technology
(gated clocks, and other transistor techniques) into their entire
product line. In the meantime, IBM, not feeling any competitive
pressure from anyone, just decided to crank the power consumption
through the roof.

IBM is another company that does a very poor job of (publicly)
documenting power consumption, however my understanding is that the
PowerPC 970FX tops out at less than 100W. This puts it well into the
same ballpark as the Athlon64 or P4.
[...] and Sony is willing to accept a 12.5% defect rate (1 SPE out of 8
per die being nonfunctional).

Interesting. I am not aware of any other CPU manufacturer willing to
share their yield rates publicly, so I don't really know what to
compare that to.

I don't think that this is exactly the same thing as yield rate; it's
more that allowing a certain amount of redundancy allows for increased
yields. AMD and Intel (and all others) do this with cache, where a
processor with 1MB of cache is actually built with more than 1MB and
any defective blocks are disabled. The design of the Cell just allows
IBM to extend this from cache to whole processing units.

FWIW though, typically anything over 80% is considered a good yield for
a high-end processor.
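
(A toy calculation of why one spare SPE matters so much -- the per-SPE
yield number here is my own illustrative assumption, not anything IBM
or Sony published:)

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double p = 0.90;                  /* assumed yield of one SPE */
        double all8 = pow(p, 8);          /* all 8 SPEs must work */
        /* 7-of-8 tolerated: C(8,7)*p^7*(1-p) plus the all-good case */
        double ge7 = all8 + 8 * pow(p, 7) * (1 - p);
        /* prints roughly: all 8 good: 43.0%, 7-of-8 good: 81.3% */
        printf("all 8 good: %.1f%%, 7-of-8 good: %.1f%%\n",
               100 * all8, 100 * ge7);
        return 0;
    }
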
That's right, IBM cannot hope to accomplish what even the lowly, humble
AMD has done. IBM simply doesn't have the competence, drive, or will
to compete with either Intel or AMD.

I think it's as much a question of return on investment as anything
else. Even selling ~35M processors a year, AMD is only just breaking
even. With IBM selling only about 5M processors to Apple there's no
way that they would have been making much money, if any at all.
Certainly their R&D costs would have been partially covered by other
processor sales (e.g. the core of the PPC 970 is partly based on the
Power4 processor), but I can't see it as being a particularly good
business venture for them.

With the console businesses IBM should be generating at least 50-75M
unit sales per year. Even with much lower profit margins it is likely
that this business will be a lot more profitable.
 
Adobe should have a not-horrible time, just a lot of copy/paste from
the existing Windows codebase for the Intel binaries.

Lot of drudgery, but the hard stuff has already been done.

The "hard stuff" is NOT porting the application in the sense of
getting it to compile, but rather testing, debugging and validating
the application for the new platform. With any decently large
application this is ALWAYS the most difficult part. Even re-writing
the bits of assembly code is a lot easier than this.
 
Tony said:
The "hard stuff" is NOT porting the application in the sense of
getting it to compile, but rather testing, debugging and validating
the application for the new platform. With any decently large
application this is ALWAYS the most difficult part. Even re-writing
the bits of assembly code is a lot easier than this.

"heavy-lifting" then?
 
Tim said:
An article I read on this gave what sounded like a good reason that
Intel does care quite a bit about getting Apple's business. The claim
is that Intel has long wanted to push x86 PC designs past all the legacy
crap they are still stuck with, but the PC makers don't want to go
along. The PC makers just want to make the cheapest boxes they can, and
that means sticking with cheap legacy stuff.

Apple gives Intel a customer that can be used to showcase what a PC
should be like.

HP was supposed to do this. Remember Athena?

The best instance of this is, of course, USB. Here Intel had a great
innovation (ignoring ADB as an antecedent for the moment) and couldn't
get anybody to support it. Microsoft? Intel first publicized USB in
early 1995, I guess too late for Win95. Half-assed support was in OSR2,
but required digging through a .cab or something to get working. It
took Microsoft until Win98, *three* years later, to get USB support,
even though it was on a lot of MBs at the time.

So it's not just the OEMs; it's also Microsoft that is gating Intel's
greatness.
 
HP was supposed to do this. Remember Athena?

The best instance of this is, of course, USB. Here Intel had a great
innovation (ignoring ADB as an antecedent for the moment) and couldn't
get anybody to support it. Microsoft? Intel first publicized USB in
early 1995, I guess too late for Win95. Half-assed support was in OSR2,
but required digging through a .cab or something to get working. It
took Microsoft until Win98, *three* years later, to get USB support,
even though it was on a lot of MBs at the time.

...and it *still* sucks. USB is an abortion.
So it's not just the OEMs; it's also Microsoft that is gating Intel's
greatness.

"Greatness"? ...you mean like DRDRAM? =:-\
 
keith said:
...and it *still* sucks. USB is an abortion.

better than ADB, and FireWire cables are too thick/stiff.

Granted, much of the software stuff associated with USB was clearly
devised by summer interns on crack.
"Greatness"? ...you mean like DRDRAM? =:-\

Ah, but we must conform to the new RDF now.
 
The K6 went from 233MHz to 550MHz,

To be pedantic, that's actually 3 models: the K6 (166-300MHz), the K6-2
(266-500MHz) and the K6-III (333-550MHz); the K6-2+ was really a
rebadged K6-III with a half-sized cache.
the Pentium 33MHz to 200MHz,

60MHz-166MHz for the Pentium Classic; I'm not sure if the fastest ones
ever made it into mobile chips.

133-233MHz for desktop Pentium MMX chips; there was also a 266MHz
mobile P-MMX, and there may have been a 300MHz one as well.
Athlon from 600MHz to 2.0GHz

Athlon from 500MHz to 2.2GHz, although the different cores are quite
different if you go from the original Athlon to the final Athlon XP or
current Sempron cores.
Pentium II/III from 300MHz to 1.266GHz

Speaking generally, the P6 core scaled from 150MHz (or was there a
133MHz PPro?) with the PPro, up to 200MHz, IIRC.

P-II: 233-450MHz (I think; ISTR someone telling me there was a 500MHz
part, though that may have been the P2-based Xeon)

P-III: 450MHz-1.4GHz.
Pentium IV from 1.5GHz to 3.8GHz, and the

1st-generation P-IV started at 1.3GHz
Athlon-64 got started at around 2.0GHz

1.8GHz, actually, and the 1st-generation AMD 64-bit core went down to
1.4GHz in the Opteron.
 
better than ADB, and FireWire cables are too thick/stiff.

At least FireWire works. USB is a PITA.
Granted, much of the software stuff associated with USB was clearly
devised by summer interns on crack.
s/was/is/all


Ah, but we must conform to the new RDF now.

No, we must cut our way through it. Nothing's changed.
 
Travelinman said:
I have no way of checking whether that's true or not, but it sure
negates your comment (above) where you claim that anything other than
a 'joke application' would be hard to port.

as paul said, mathematica is largely written in the mathematica
language, but it's more complicated than porting a java app- it's more
like porting emacs; once you have lisp ported to a platform, it's
pretty simple. and as paul said, the heavy lifting for it has already
been done.

<snip>
 
60mhz - 166mhz for the Pentium Classic; I'm not sure if the fastest ones
ever made it into mobile chips.

There was a 200MHz Classic as well.

While we're being pedantic :)
 
ed said:

as paul said, mathematica is largely written in the mathematica
language, but it's more complicated than porting a java app- it's more
like porting emacs; once you have lisp ported to a platform, it's
pretty simple. and as paul said, the heavy lifting for it has already
been done.

<snip>

Again, that may very well be true. But it STILL negates his statement
that only joke applications could be easily ported.
 
Travelinman said:
Again, that may very well be true. But it STILL negates his statement
that only joke applications could be easily ported.

no, it doesn't negate that statement, since it shows that mathematica
was ported easily because all the heavy lifting had *already* been
done. if you're going to show an example of an easy port, you're going
to have to find another application.
 
ed said:
no, it doesn't negate that statement, since it shows that mathematica
was ported easily because all the heavy lifting had *already* been
done.
if you're going to show an example of an easy port, you're going to
have to find another application.

Of course it negates his statement. He said that only 'joke
applications' would be ported easily. The fact that Mathematica was
ported easily and that Mathematica is not a joke application proves that
he was wrong.

IF I had said that all applications would be easy to port, you might
have a point. But since I never claimed that, it's not an issue.
 
To be pedantic, that's actually 3 models: the K6 (166-300MHz), the K6-2
(266-500MHz) and the K6-III (333-550MHz); the K6-2+ was really a
rebadged K6-III with a half-sized cache.

As long as we're being pedantic, I don't believe the K6-III
(Chomper-XT?) ever went to 550MHz. IIRC it topped out at 450MHz (mine's
a 400). The K6-2 topped out at 500MHz or perhaps 533MHz, IIRC.

Damn! That's a long time ago! ;-)
 
TravelinMan said:
Of course it negates his statement. He said that only 'joke
applications' would be ported easily. The fact that Mathematica was
ported easily and that Mathematica is not a joke application proves
that he was wrong.

no, it doesn't, as you don't know how easy it was to port the portions
that were 'pre-ported'.
 