Yousuf said:
> And I still am, where's the back-tracking? I'm counterpointing your
> claim that GPUs *aren't* more complicated than CPUs.
Not effectively.
> If they were so much simpler than CPUs, then we wouldn't see these
> CPU-like production problems crop up.
Having to talk to a process tech is hardly an unusual problem. I
suspect the vast majority of ASIC designs end up having to deal with
process issues.
> CPUs tend to be hard to produce because of their complicated
> circuitry, and so are GPUs.
GPUs are semicustom devices, most CPUs are full custom. There's a hint
for you.
> Whatever, I've watched the industry long enough to know that there is
> a direct correlation between circuit complexity and manufacturing
> problems.
Correlation isn't causation. Manufacturing problems are also caused by
inaccurate SPICE decks, bad circuit design, and smaller process nodes.
GPUs might be more complex than hardware RAID controllers, but that's
not saying much. They are most assuredly less complex than CPUs by
almost any measure.
> You have almost no difficulties producing SRAM chips, but tons of
> problems with CPUs and GPUs.
Do you have any proof that the number of problems encountered in GPUs
and CPUs is anywhere close to equivalent? Maybe you counted errata?
> That's why CPU manufacturers usually turn out to be IDMs (integrated
> device manufacturers, meaning they own their own fabs).
Yes, like those guys at Sun, Transmeta, MIPS, or ARM. You got this one
backwards again...
> They need control over their own manufacturing process, and they
> usually can't contract it out (Chartered being the lone exception so
> far).
This is a performance issue, not a QC one. People contract out CPU
foundry work to IBM...
> I see GPUs are still running at sub-1GHz these days; if they have to
> go above 1GHz, I think it makes perfect sense that they join up with
> IDMs.
Last time I checked, Sun was shipping well above 1GHz and they are
fabless. The fact of the matter is that Nvidia doesn't want a fab. If
they need cutting-edge process tech, they can use IBM...which they
tried, and gave up on because of high defect rates.
> Their speed is being limited by their lack of control over their own
> production process.
No, their speed is being limited by their design goals. When you have
data parallelism, there is no reason to try to hit high clock speeds;
that just makes your life difficult. Heat ~ Frequency^3...
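For reference, that cubic isn't pulled out of thin air; it falls out of
the standard first-order CMOS dynamic-power relation, sketched below.
The assumption that supply voltage has to scale roughly linearly with
frequency to make timing is a common rule of thumb, not something
claimed elsewhere in this thread:

\[
P_{\mathrm{dyn}} = C\,V^{2}\,f, \qquad V \propto f
\;\Longrightarrow\; P_{\mathrm{dyn}} \propto f^{3}
\]

where C is the switched capacitance. Drop the voltage scaling and
power is only linear in f, which is why a low-clock, wide design is so
much easier to cool than a high-clock, narrow one.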
> Obviously AMD can't claim it's going to keep its CPU and GPU groups
> separate, when everybody suspects that they're buying the company to
> integrate a CPU with a GPU.
Then how are they going to work with Nvidia again? NV's executives
have a duty to ensure that competitors don't have their roadmaps,
errata, etc. etc.
> However, as far as chipsets go, what's there left inside them
> anymore? The memory controller is integrated into the CPU now. We're
> left with wireline and wireless networking, USB/FireWire, PCIe, and
> not much else. Nvidia, as well as VIA, SiS, and Broadcom, can all
> design those things just as well as ATI and hook them up via
> HyperTransport too. Can't see how ATI will have any advantage here
> other than the marketing advantage of being able to say that it's an
> AMD chipset for an AMD processor, in those markets where it might
> possibly matter, like corporate laptops.
Q: What would happen if the VP of marketing at Nvidia gave NV roadmaps
to someone at ATI?
A: He'd be fired and there would be a shareholder lawsuit.
Why don't you go ask someone who designs stuff for a living whether
GPUs or CPUs are more complex? I'm sure they'll have some enlightening
commentary, such as:
1. GPUs use simpler processes (TSMC bulk, versus AMD's SOI or Intel's
bulk)
2. GPUs don't have a fixed ISA with nasty legacy baggage
3. GPUs don't use full custom design
4. GPUs aren't OOO (out-of-order)
5. GPUs are only beginning to show aggressive circuit design...and
still nothing on the scale of what Intel or DEC does/did
6. CPUs take 5 years to design, GPUs take 3 or less
7. GPU problems can often be 'fixed' in the driver; BIOS fixes have to
be prioritized due to limited space, whereas a driver has effectively
unlimited space
Do you have any counterarguments or reasons why GPUs are more complex?
I'm waiting to hear.
DK