TravelinMan
YKhan said:It has a research component. No one is going to do experiments on a
working production line.
Sorry to disappoint you, but it happens all the time in the real world.
Keith said:Hasn't the ASTC been closed down?
YEEHAWW Howie wasn't invited to the auction. No water. No
electricity. No land. No workers. Lotsa taxes buys lotsa bureaucracy
though. It's taken forty years to build a two-lane road, and it'll
likely be another forty before it's finished.
I been to vermont. Most of the people were pretty white anyway.
If people only knew... You shoulda seen people turn white when there
was a b*mb scare on the main site some ten years back (long before
9/11).
News to me, but that doesn't mean much. Easily could have been and they
forgot to notify me.
No Water? You can damn near throw a rock into Lake Champlain from
there. Or is Champy covered by the Endangered Species Act?
I been to vermont. Most of the people were pretty white anyway.
It has a research component. No one is going to do experiments on a
working production line.
But each manufacturer has its own history to draw upon when it creates
its own automated process management scheme. AMD's history (including
all of its failures) is particularly relevant to the manufacture of x86
microprocessors.
AMD has much more experience at producing high-speed processors at good
yields than IBM. This is borne out by IBM's severe yield problems at
Fishkill compared to AMD's relative lack of them using nearly identical
equipment in Dresden.
AMD needed some help from IBM about integrating new materials into its
processes. But once that knowledge was gained, it would be AMD that
would have the better chance at getting it going on a big scale.
keith said:Wrong. One doesn't duplicate $billion lines. Experiments are run on
production lines all the time.
...and IBM has never made an x86 processor? Also, perhaps you can
clarify why you think x86 is so special. Processor technologies are
processor technologies. IBM has different goals, sure.
? Most of this is learned experience, not theoretical.
You're on drugs, yousuf!
Geez! What a load of horse-hockey!
Yousuf said:When did Tom's become a "real" benchmarks site?
Well IBM had PPC750 parts running at over 1.0Ghz when Moto was stuck at
450.
Why? Dunno.
I had made the argument that Intel's idea of pipelining for greater
clock has been abandoned.
You post 3 paragraphs of trivia about AMD crap that I couldn't care
less about,
[...] so in the interests of avoiding a totally pointless
argument (too late!) I helpfully modified my assertion to just be that
Intel went for frequency over IPC with the P4, something I should hope
you would not find controversial or otherwise RDF-influenced.
Wheel the goalposts as far back as you like, but if you can't find
anything wrong or inconsistent with those tests they're credible to me.
yeah, one that does back my claim that a dual 2.7 murders a single
athlon in MP-enabled, computation-intensive code. Thanks.
No, IIRC Otellini publicly confirmed that disinclination recently.
Sure.
Still, IBM has designed a triple-core 3.2Ghz CPU for Microsoft's $500
box, compared to Intel's dual-core 3.2Ghz P-D (which, as you surely
know is nothing more than two Prescotts duct taped together) that they
are selling for over $530 in qty 1000
http://www.intel.com/intel/finance/pricelist/
which sorta puts paid to your assertion:
"IBM, not feeling any competitive pressure from anyone, just decided to
crank the power consumption through the roof"
Seems to this RDF-challenged observer that IBM is ahead of Intel here.
http://www.tsmc.com/english/technology/t0113.htm
is what they'll be using. Doesn't seem that generic to me.
Well, the triple SMT cores will probably mitigate TSMC's 'woeful'
fabbing capabilities.
from Microsoft? huh?
yeah, pay IBM hundred(s) of millions of dollars.
True. I think Apple was getting tired worrying about the 3-5 year
horizon, not just the immediate problems they were having with
Freescale. Enough to make anyone say, "**** it!", and that's not even
considering the immense cost and time-to-market advantages Apple is
getting from not having to design its own chipsets any more.
Apple only found need to mention IEEE fp in the briefest of notes in
their 100+ page transition document.
Right. Apple's frameworks were cross-platform to begin with. That
people weren't smart enough to take advantage of them isn't Apple's
fault.
Nah. Dubious assertion to me. Adobe people are pros, they know how to
architect code.
Assuming they're still on Codewarrior... moving to Xcode/gcc, now THAT
will be an immense amount of work.
True, CoreImage doesn't do Adobe any good in maintaining pixel-perfect
compatibility with the x86 code.
Trust me, the next Intel processor (after the Pentium-M) will be very
deeply pipelined. Just going by their previous generation design
times, I would say that in about 12-24 months they should be
introducing a completely new core design.
Apple does not disclose sufficient details to reproduce the benchmark
numbers. Furthermore, there are no third parties auditing their
results. The few benchmarks where they did give sufficient details, I
have personally debunked here:
http://www.pobox.com/~qed/apple.html
The reason why you see Dell openly calling Apple to license OS X to
them, is because Dell realizes that ultimately Apple will come out of
this much stronger (something they could have done long time ago, BTW.)
ASTC was shut down YE 2004, according to my source. Must have moved the
experiments to the other lines.
keith said:I'm not a process type, but I haven't seen anything out of there for at
least two years, likely three. Politics AIUI, but...
The enviro folks wouldn't want anyone to suck the water out of the lake
Nope. No water or sewer. Just because there is a (not-so) great lake
10mi away doesn't mean there is (a *LOT* of) water to be had for a fab.
EF has *unlimited* water. IBM sunk a kabillion wells in the '80s a
few miles away and then topped the wells with grass. They then gave
the surface to the town for soccer fields, keeping the mineral rights.
Snow white, in fact. ;-) However, you've not *seen* white. Shutting
the place down for a few days wasn't taken lightly! Hell, it's never
been shut down for weather. ...unlike those wussies in NC. ;-)
Trust me, the next Intel processor (after the Pentium-M) will be very
deeply pipelined. Just going by their previous generation design
times, I would say that in about 12-24 months they should be
introducing a completely new core design. (On the other hand, the tech
bubble burst and the time they wasted on Itanium may have truly thrown
things off kilter over at Intel. Who knows, we'll see.)
When they disclosed the software, hardware, specifications of their
testing, and when people on the outside have been able to reproduce
their results.
If you have BIAS problems (*cough* ads from nVidia *cough*) with Tom's
that's fine, but that's ok, you have all the other sites I listed which
keep him in check.
Compare this to Apple which hired Veritest (formerly ZD Labs, notorious
for bias towards their sponsors and just generally bad benchmarking in
the past) who told Apple that they *LOST* on SPEC CPU Int (even after
rigging the whole test scenario in Apple's favor, and ignoring the
faster Opteron/Athlon based CPUs), and then Apple interpreting this to
mean that their computer was the fastest on the planet.
There are two completely different standards here.
ASTC was shut down YE 2004, according to my source. Must have moved the
experiments to the other lines.
The enviro folks wouldn't want anyone to suck the water out of the lake
anyway.
And Pataki came up with the $$$$$$.
Ice storms are nothing to mess with. We've had a couple and they were a
pain. But we don't shut down for weather either. "It is up to the
individual and their manager" etc.
Yet, no production silicon came out of there.
No, IBM hasn't made an x86 processor in the longest time. But it's not
really the fact that it's an x86 processor, that makes it an issue.
It's the fact that IBM doesn't have enough experience making massive
amounts of processors in a long long time, so it doesn't have the
background anymore. Sure it can feed its own needs with Power
processors, but that's not a large quantity of processors.
Really? By who?
Intel may have given up on the P4, but that, I think, is just a
reflection that the P4 didn't have as much life in it as Intel
originally thought,
and they didn't have a next generation technology
ready to go. But they did have a re-implemented P-Pro architecture
that seems to have fit their current product needs.
[...] so in the interests of avoiding a totally pointless
argument (too late!) I helpfully modified my assertion to just be that
Intel went for frequency over IPC with the P4, something I should hope
you would not find controversial or otherwise RDF-influenced.
They did, but their intention was not to have low IPC. They just
couldn't design around this. Their real goal was to deliver high
overall performance. A goal they did achieve. Just not quite as well
as AMD achieved it. If Intel could have figured out how to truly
deliver higher IPC with their half-width double pumped integer ALU
architecture, they would have ruled the CPU universe and have
completely rewritten the rules on CPU design.
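The tradeoff being argued here reduces to a crude first-order model: delivered performance scales with IPC times clock. A quick Python sketch, using made-up numbers purely for illustration (these are not real chip specs):

```python
# Crude first-order performance model: instructions retired per second
# scales with IPC x clock frequency. All figures below are invented
# for illustration only.

def relative_performance(ipc, freq_ghz):
    """First-order estimate: billions of instructions per second."""
    return ipc * freq_ghz

# Hypothetical high-IPC, moderate-clock design (Athlon-style approach)
wide_core = relative_performance(ipc=1.8, freq_ghz=2.2)

# Hypothetical low-IPC, deeply pipelined, high-clock design (P4-style)
deep_pipe = relative_performance(ipc=1.1, freq_ghz=3.4)

print(wide_core)  # 3.96...
print(deep_pipe)  # 3.74...
```

The point is that neither knob wins by itself; a deep pipeline only pays off if the clock gains outrun the IPC losses.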
It's called marketing and positioning. I thought I posted this already.
I don't know what "RDF" stands for (I'm not a regular in this group)
but I'll just read that as "challenged". There is no credible sense in
which IBM can be considered ahead of Intel (let alone AMD). Except
with the Power architecture on Spec FP CPU which is really a memory
bandwidth test, but that's a different matter altogether.
Ok, so they have copper. Look, there is a reason why AMD made a big
deal about announcing a fab technology sharing agreement with IBM --
the multiple billions IBM spends on research yield techniques and
technologies that lesser fabricators like TSMC cannot match.
If TSMC is making a highly clocked Microprocessor with 3 cores, it
means they are using a low gate count core design and tweaking the hell
out of it at the circuit level for higher clock rates.
I don't even know what that means. You mean by having a steady
customer, they will be able to fund future process technology? That's
great, but their problem is what they can deliver with their *current*
technology. And they have other steady customers, like ATI and nVidia
who will pay just as well as Microsoft for fab capacity.
The reason why you see Dell openly calling Apple to license OS X to
them, is because Dell realizes that ultimately Apple will come out of
this much stronger (something they could have done long time ago, BTW.)
Tony said:Err.. IBM is currently fabbing VIA's x86 processors... do those not
count?
They do manage to turn out a fair share of power processors for the
higher end of the embedded market, and then they have things like the
PPC970 and the Power4/Power5 for the high-end. They do have an
interesting hole in the middle where Intel and AMD tend to live, but I
don't think it's THAT much of a stretch for them.
Still, I DO believe that IBM has a fair bit to learn from AMD, the
information flow between the two companies is definitely not a one-way
street.
Tony said:ATI and nVidia are hardly their only customers. TSMC is a BIG
manufacturing company. For 2004 they had a total capacity of 5
million 8-inch equivalent wafers, split between their 9 fabs. For
comparison, once AMD gets their new 12-inch, 300mm Fab36 up and
running at full steam, combined with their current 8-inch, 200mm
Fab30, they will have a total capacity of about 650,000 8-inch
equivalent wafers per year.
Intel is the only company in the world with capacity to match TSMC.
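To put those capacity figures in perspective, here is the arithmetic on the numbers quoted above, both in 8-inch-equivalent wafers per year:

```python
# Capacity figures as quoted above: TSMC's 2004 total across 9 fabs
# vs. AMD's projected combined Fab30 + Fab36 capacity, both in
# 8-inch-equivalent wafers per year.
tsmc_wafers = 5_000_000
amd_wafers = 650_000

ratio = tsmc_wafers / amd_wafers
print(round(ratio, 1))  # -> 7.7
```

So by these numbers TSMC's capacity is roughly 7-8x what AMD will have even after Fab36 ramps.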
keith said:You think this is novel? Processes are tweaked on the fly to produce
what's needed all the time.
"Intel likes to save up all its changes and do them all at once," he said. "Since AMD has to do a lot with less, it makes incremental changes. If you took someone from an Intel factory and put him in an AMD factory, after a few days he'd run out of there thinking, 'These people are crazy.' "
Yousuf Khan said:This is what I was talking about:
AMD Lifts Its Veil - Forbes.com
http://www.forbes.com/home/intelligentinfrastructure/2005/06/14/amd-semiconductor-ruiz_cx_ah_0613amd.html
Yousuf Khan
Yousuf said:AMD Lifts Its Veil - Forbes.com
Robert Myers said:http://www.forbes.com/home/intelligentinfrastructure/2005/06/14/amd-semiconductor-ruiz_cx_ah_0613amd.html
A fascinating article.
<quote>
Nor can AMD afford to make mistakes. Intel's PC chip unit turned in
gross margins above 49% in 2004, while AMD's gross margins for its PC
chips were just shy of 12% in the same period.
</quote>
You don't have to go any further to understand why big vendors are
reluctant to depend on AMD. A manufacturing gross margin of
_Twelve_percent_?
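For concreteness, gross margin is (revenue minus cost of goods sold) divided by revenue; here is what the quoted 2004 figures leave per dollar of PC-chip revenue (illustrative arithmetic only):

```python
# Gross margin = (revenue - cost_of_goods_sold) / revenue.
# Margins as quoted above for 2004 PC-chip units.
intel_margin = 0.49
amd_margin = 0.12

def gross_profit(revenue, margin):
    """Gross profit remaining after manufacturing costs."""
    return revenue * margin

# Per dollar of revenue: Intel keeps ~49 cents, AMD ~12 cents,
# before R&D, SG&A, and everything else is paid for.
print(gross_profit(1.00, intel_margin))  # 0.49
print(gross_profit(1.00, amd_margin))    # 0.12
```

With only 12 cents of gross profit per revenue dollar, one bad quarter of yields or pricing wipes out the cushion entirely, which is the vendors' worry in a nutshell.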