http://blogs.mercurynews.com/aei/2006/08/the_coming_comb.html#more
The Coming Combo Of The CPU And GPU, Ray Tracing Versus Rasterization,
And Why Billions Of Dollars Is At Stake
Dean Takahashi, 12:01 AM in Dean Takahashi, Gaming
Not everybody may care about how they get their eye-popping graphics.
But how it gets delivered to you will be determined by the results of a
multi-billion dollar chess game between the chip industry's giants. Can
you imagine, for instance, a future where Nvidia doesn't exist? Where
there's no Intel? The survivor in the PC chip business may be the
company that combines a graphics chip and a
microprocessor on a single chip.
In the graphics chip industry, everyone remembers how Intel came into
the market and landed with a thud. After acquiring Lockheed's Real3D
division and building up its graphics engineering team, Intel launched
the i740 graphics chip in 1998 and it crashed and burned. The company
went on to use the i740 as the core of its integrated graphics chip
sets, which combined the graphics chip with the chip set, which
controls input-output functions in the PC. Intel took the dominant
share of graphics as the industry moved to integrated, low-cost chip
sets, according to Jon Peddie Associates. But the company never gave up
on its ambition of breaking into graphics. Intel has a big team of
graphics engineers in Folsom, Calif., to work on its integrated
graphics chip sets. And it recently acquired graphics engineers from
3Dlabs. The Inquirer.net has been writing about rumors that Intel has a
stand-alone graphics chip cooking. That may have been one of the
factors that pushed Advanced Micro Devices into its $5.4 billion
acquisition of graphics chip maker ATI Technologies. Because of that
deal, the PC landscape has changed forever. Now there is an imbalance
as Intel, Nvidia, and AMD-ATI try to find the center of the future of
computing. (pictured: Intel's Jerry Bautista)
If AMD and ATI combine the central processing unit with the graphics
processing unit, it could collapse the barrier between
multibillion-dollar industries, leaving both Nvidia and Intel to
scramble. What will happen? Will Nvidia stick a CPU on the corner of
its graphics chip and take a lot of dollars away from Intel in the PC?
Intel is betting that something else will happen. In interviews with
its researchers, they are confident that graphics processing will
naturally shift to the CPU from the GPU. That's because they believe
that the decades-old technique dubbed "ray tracing" will replace
the technique of rasterization, or texture-mapping, that modern
graphics chips have grown up with. Check out more about this in a paper
by Intel researcher Jim Hurley at
http://www.intel.com/technology/itj/2005/volume09issue02/art01_ray_tracing/p10_authors.htm.
Ray tracing involves rendering an image by shooting a ray from a point
of view and seeing what it hits. You can see what is in a picture and
what is hidden from view. There is no need to render everything in the
whole scene. Only what is visible. By contrast, rasterization is
getting more and more complicated. Programmers have to make numerous
passes, adding layer upon layer of shadows and lighting to a scene
until it looks just right. Hurley says it is a more accurate depiction
of reality, while rasterization can only approximate reality. Ray
tracing has been expensive, but the animation houses such as Dreamworks
and Pixar have used it in their latest movies. Perhaps the latest
efforts in video games will not be far behind, says Hurley. If you look
at the Cell chip for the PlayStation 3, it was clear that Sony thought
about putting graphics and the CPU on one chip. But it changed its mind
and brought Nvidia into the picture with the RSX graphics chip.
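To make the "shoot a ray from a point of view and see what it hits" idea concrete, here is a minimal sketch in Python of a single primary ray tested against one sphere. The scene, camera, and shading decision are invented for illustration; this is not drawn from Intel's research code or any production renderer.

# Illustrative sketch only: one primary ray tested against a sphere, the core
# visibility question ray tracing asks per pixel. The scene below is made up.
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None on a miss."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c          # direction is assumed to be unit length, so a = 1
    if disc < 0.0:
        return None                 # the ray never touches the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None   # hits behind the eye do not count

# Shoot one ray from the eye straight down the z axis at a sphere 5 units away.
hit = intersect_sphere((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0)
print("visible" if hit is not None else "hidden")   # only visible hits would get shaded

A full ray tracer repeats this question for every pixel, and for secondary rays bounced off whatever it finds, but nothing is spent on surfaces the rays never reach.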
I've interviewed a number of folks about what they think about the
possibility of combining the graphics chip and the CPU, as well as the
notion that ray tracing on the CPU might replace rasterization on
graphics chips. Some of these interviews took place before the AMD-ATI
merger, and some after. Here are some of the quotes.
Patrick Moorhead, vice president of advanced marketing at Advanced
Micro Devices, said, "The idea that the microprocessor and the
graphics chip might combine was an element in our merger. We could have
licensed that. We see that as a mid-term reality. We are announcing we
will do a combined CPU and GPU development in 2007. Initially it is
focused on emerging markets. That's where the right solution is optimized for emerging markets. The cost of the emerging platform is governing a lot of things. The CPU controls the costs of peripherals in the system. The more integrated it is, the less power it requires. You
can get by with a smaller power supply and have other benefits."
Pat Gelsinger, executive vice president of the Digital Enterprise Group at Intel, said: "This idea is interesting for our tera-scale computing
initiative. We have had a certain architectural model for graphics. ATI
and Nvidia have evolved it very effectively. We do a lot of business in integrated graphics. Now as people have moved to multipass,
more sophisticated rendering, and we have introduced ray tracing and
other models, the nature of the graphics pipelines is changing. You
shove polygons through. That is not right for the work load. Small
general-purpose algorithms look more like what we do on a CPU than a
GPU. Ray tracing is being done today. People ship it today running in a blade configuration, with two-unit rack-mounted servers. They use it for high-resolution rendering on things like Shrek 2.
They produce superior graphics results with it. It will start to
displace traditional rendering architectures. We're not there yet. We
are still a few years away from that point of saying that.
What can the graphics chip do instead? Well, can you tool the physics to run on the GPU or the CPU? Look at Havok versus Ageia. The result of that is we see the next-generation visualization work loads.
We are taking that into account in our planning. I wouldn't call it a
collision of the graphics chip and the CPU. It's next-generation work
loads. I'm not expecting GPUs to go away, and Jen-Hsun and Dave Orton aren't expecting them to go away."
Dave Kirk, Nvidia's chief scientist, said, "If ray tracing was
universally superior to rasterization, wouldn't digital film studios
use ray tracing exclusively? They do not. In fact, the first film to
extensively use ray tracing is Pixar/Disney's Cars, which has lots of
shiny reflective objects and scenes rather than soft, flexible
characters and natural environments. Most digital animated films use a
combination of many techniques including both ray tracing and
rasterization, to create the widest possible variety of effects as
efficiently as possible. It is likely that games and interactive
graphics applications will progress in the same way over time. It is
naïve to think that CPUs with limited parallelism will be competitive with massively parallel devices such as graphics chips."
On ray tracing versus rasterization, Greg Brandeau, the chief
technology officer at Pixar Animation, said, "This is a complicated
question. My simple summary would be that more studios would use ray
tracing if the computer power was cheap enough. There are workarounds
to get nice images but ray tracing is computationally very expensive.
Also, Cars was not the first movie to use ray tracing. Shrek 2 used ray
tracing to approximate global illumination. As to which of these
technologies will win out, only time will tell. These two technologies
are so different that it is hard to predict which will ultimately win
out. At Pixar, we don't know the answer but we are constantly
evaluating the state of the latest hardware to figure out what is going
to give us the most pretty pixels per dollar."
Dave Orton, CEO of ATI Technologies, said: "I think it is extremely realistic that the CPU and the GPU will be combined on one chip. If you think
about the market and how it has moved from the GPU to the integrated
graphics chip sets, there are new opportunities. Like the ultramobile
PC or MIT's "One Laptop Per Child" project. Power continues to be
an issue. There is a huge opportunity in the low end of the market to
create a third platform stack in the overall PC platform, alongside integrated chip sets and discrete GPUs. It's a question of when.
The question is if the CPU architecture can do graphics processing. I
would agree there is a class of graphics processing that you would want
a graphics processor to do. But the question is if there is a
system-on-a-chip that could also do that. It will not happen in
performance laptops. But think of how many chip sets still use DirectX 7 or DirectX 8 graphics. Those are still shipping. Think of the "One Laptop Per Child" applications. I think there will be a class of problems you can solve with current technology. It might be one generation behind state-of-the-art graphics."
On ray tracing versus rasterization, Orton said: "Ray tracing is just
one form of how you render a pixel. It's not the only form. At a
scientific level, you can say that it is growing to fit more problems.
But the reality is there is a broad range of how you want to render a
pixel. Ray tracing is one form of how you do it. Other applications
will want to render it in different ways. I don't see the processor
doing it as much as I see extensions of the processor doing it."
Henri Richard, chief sales and marketing officer, Advanced Micro
Devices: "It's more of a question of when than if. We will have a
transistor budget at some point in time to combine the CPU and the GPU
on one piece of silicon. In a multicore environment, one core will be
the GPU."
Justin Rattner, chief technology officer of Intel, said, "Intel builds
raster-based graphics. We have for some time. It's mature as a
technology and has reached its highest evolutionary form. What you see
now is to achieve the desired look for a scene, you have to make many
passes over the data per scene. Fifteen or twenty-five times on a GPU
pipeline. That's not raster versus ray trace. It's about how GPUs have fixed pipelines: anything that is longitudinal, you render it,
then take another pass. A more flexible architecture lets you render in
one pass. We're interested in that from a pure architectural point of
view. In the next five years, these two architectures will meet
somewhere in the middle. GPUs will become more flexible and CPUs will
do more things. Ray tracing has more to do with the fact that you get
the desired result with very little effort. Right now it's tedious to
get the desired look. If you can do ray tracing in real time, it's
the obvious choice for the solution. Right now we get six or so frames
per second. It can deliver an arbitrary degree of photorealism. The
idea has generated a lot of discussion with the merger of ATI and AMD
coming. We have been much more focused on working with the graphics and
rendering software communities to create the architecture and software
for a new generation of rendering. It's marked now by functions that
don't have much to do with image quality. If you want to add physics
and behavioral AI, you have to design the software in a different way.
Not piecemeal. That's where we come in. You have to do this in a
general-purpose environment. Our view is we have to beat them on
performance. You have to do something they can't do today. Otherwise,
you can't generate momentum."
Jim Hurley, researcher, Intel: "We think that ray-tracing is going to
take off. This is a technique where you render only what you see.
It's different from rasterization, which is what graphics chips do.
With rasterization, you feed triangles into a rasterizer and it
processes them in order. But it doesn't take into account the
relationship of the triangles. It can only do multiple passes and do
things over and over again. Ray-tracing lets you shoot a ray into a
model of a world and it will find the object it is aimed at. It is a
simulation of the physics of light. Rasterized graphics is an
approximation. It can achieve plausible images but it is using brute
force. Ray tracing can run efficiently on a CPU because of the large
caches. GPUs rely on brute-force bandwidth. Traditional raster graphics
doesn't do that well with ray-tracing. You can't do ray-tracing on
a GPU because there isn't that much memory. You get movie quality for
games and can do it in real time. Rasterization is trying to mimic what
ray tracing does in photorealism. Pixar's rendering was based on
raster graphics for a long time, but all the movie houses are moving to ray tracing." (CPU magazine).
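Hurley's contrast largely comes down to which loop drives the work. The toy, self-contained Python sketch below renders a one-dimensional "scene" of two flat-colored spans (stand-ins for triangles) both ways; it illustrates only the structural difference he describes, not how any real GPU or Intel prototype is built.

# A toy, fully self-contained contrast of the two loop structures described
# above, in one dimension: "triangles" are flat-colored spans with a depth.
# Everything here (Span, the 8-pixel framebuffer) is a made-up illustration.
from dataclasses import dataclass

@dataclass
class Span:
    x0: int       # first pixel covered
    x1: int       # last pixel covered
    depth: float  # distance from the eye
    color: str

WIDTH = 8
scene = [Span(0, 5, depth=4.0, color="A"), Span(3, 7, depth=2.0, color="B")]

def rasterize(spans):
    # Rasterization: stream primitives through in order; none knows about the
    # others, so a z-buffer resolves which fragment survives at each pixel.
    color = ["."] * WIDTH
    zbuf = [float("inf")] * WIDTH
    for s in spans:
        for x in range(s.x0, s.x1 + 1):
            if s.depth < zbuf[x]:
                zbuf[x] = s.depth
                color[x] = s.color
    return "".join(color)

def ray_trace(spans):
    # Ray tracing: walk the pixels and ask the whole scene which primitive is
    # nearest along each "ray"; only that visible surface is shaded.
    color = []
    for x in range(WIDTH):
        hits = [s for s in spans if s.x0 <= x <= s.x1]
        nearest = min(hits, key=lambda s: s.depth, default=None)
        color.append(nearest.color if nearest else ".")
    return "".join(color)

print(rasterize(scene))  # AAABBBBB
print(ray_trace(scene))  # AAABBBBB -- same image, opposite loop order

Both functions produce the same eight-pixel image; the difference is that the rasterizer streams primitives through and lets a z-buffer sort out visibility after the fact, while the ray tracer asks the whole scene a visibility question per pixel.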
Jerry Bautista, director of Intel's Microcomputer Research Lab:
Regarding the graphics chip companies, he said, "If I were them, I
would be nervous. We see a trend. We watch the FLOPS, or floating point
operations, the watts, and dollars that go into the graphics cards and
the computational physics on GPUs. They have been a growing part of the
PC budget. We are aware of that. Some graphics computation is handled
well on a graphics processor. We can pull the graphics back onto the CPU.
In the future, the load of rendering an image falls in favor of the
computer side, the microprocessor, and the pixelization task becomes
minor. Our horizon is three to five years."
He added, "Instead of saying that we will win over the graphics chip
makers, I'd talk with them about the applications themselves. In
today's systems, they are largely concerned with rendering. About 90
percent of resources are spent drawing pictures. There is not much left for physics and artificial intelligence. What happens when real physics and real AI kick in? In ray tracing, we see that if we had 10
or 100 cores, we would see a 10X speed-up. With 1,000 cores, we would
see 100X speed-up. It just keeps going. Ray-tracing can swallow up
whatever compute we build. At what point do you get diminishing
returns?" (CPU magazine)"
David Wu, game programmer and president of Pseudo Interactive in
Toronto, Canada, said, "Many concepts from ray tracing and
rasterization are converging; eventually they will meet. With current
architectures (CPU or GPU) and memory bottlenecks, rasterization has an
inherent advantage in performance. That will be enough to keep it as
the technique of choice for high performance applications for many
years. The main advantage of ray tracing is the fact that you can create nice abstract images with little programming effort. However, when you get down to all the details that are required to render real scenes, there is not much savings in programming complexity. Ray tracing might find its niche amongst hobbyists (who want to build their own renderers from scratch), dogmatic programmer evangelists who like the term "Ray Tracing", and existing, legacy systems."
Wu added, "There is no question about the GPU/CPU separation. They
will both be on one chip using pretty much the same sort of hardware by
the next console generation. Something like CELL, but without all of
the flaws and easier to program for. Physics will be done using the
same hardware. Relatively simple, massively parallel processors with a
lot of hardware dedicated to the issue of memory latency and
bandwidth."
Tim Sweeney, CEO of Epic Games and graphics expert, said, "I'm a very
strong believer in the coming convergence of CPU and GPU hardware and
programming models, enabling CPUs to once again implement great
software rendering, or alternatively for GPUs to be applied naturally
to general computing problems using mainstream programming languages.
This is a separate topic from the question of whether ray tracing is
the future of graphics. Many vast benefits would come from a CPU-GPU convergence, and they would apply to all means of generating scenes:
rasterization, ray tracing, radiosity, voxels, volumetric rendering,
and other paradigms. Such a convergence means that real-time ray
tracing will become possible, but by no means does it imply that ray
tracing will become the de facto solution for 3D drawing. For example,
ray tracing is poorer for anti-aliasing (looking towards multisampling and analytic anti-aliasing techniques), and typically imposes a 20-40X computational penalty compared to rasterization. Ray
tracing is superior for handling bounced light, reflection, and
refraction. So, there are some places where you will definitely want
to ray trace, and some cases where it would be a very inefficient
choice. Certainly, future rendering algorithms will incorporate a mix
of techniques from different areas to exploit their strengths in
various cases without being universally penalized by one technique's
weakness."
Bob Drebin, chief technology officer of the PC business unit at ATI
Technologies, said, "They do pseudo ray tracing in the movies now.
They rasterize. If there is a polygon that needs complex reflections,
they start a ray trace for that. In both Shrek 2 and Cars, they use it
depending on the effect. With our Toy Shop demo, we did limited ray tracing with the cobblestones. It's limited, for the bricks. It's a tool that they use in a shader program for certain situations where you determine your color: what objects occlude you, what you can see. It is a technique. The notion of casting rays to determine visibility or color is something we use today. It's just one of the things. To me the more
interesting thing is the dynamics of the scene. Top tier developers
feel they are getting good. Now they want to make the scenes more
compelling from an interactive view. It's more about how I make it
more dynamic, more interactive, than to make the lighting more precise.
The physics, the interaction of objects. Character animation getting
muscle based. That is where the energy is going in game computing. In
terms of realism, I see ray tracing as a technique that will be used
selectively. Even if it goes that way, ray tracing is a highly parallel
operation. I don't see them talking of a time with thousands of processors. I don't see the advantage of a CPU doing it. If the question is who can do a single ray fastest, then the CPU will win. Then the goal is to determine each reflection as soon as possible and move to the next one. But if you have to complete a million of them, the question is not how long it takes to do the first one. You can do many of the rays in parallel together. The throughput would be much higher. Thousands or millions of ray intersections would be going on at once. With thousands, then the
GPU is the clear winner. I think that in a lot of ways with the new
compute coming to the GPU, there are things that are not possible to do
on a CPU. In the past, the only place you could express it was the CPU.
The GPU is now becoming programmable. People aren't saying give me a
smaller CPU. They are saying now I can finally do more things. In a
game like Half-Life 2, you would be able to throw around all the
objects. Not just one or two. Multi-core is great for lots of
sequential computation. It may become less clear. With richer programming languages, the GPU needs less interaction with the CPU. I suspect CPUs will become more parallel. We aren't running out of things we wish we could do."
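The hybrid Drebin describes, rasterizing first and casting rays only where a surface calls for them, can be sketched per pixel as below. The materials, colors, and the stand-in "traced reflection" are all hypothetical; this only illustrates the branching between the cheap raster path and the selectively traced one, not ATI's actual Toy Shop shaders.

# A schematic, self-contained sketch of selective ray tracing inside a shader:
# resolve primary visibility cheaply (as a rasterizer would) and spawn a
# traced ray only where the surface is flagged reflective. All values invented.
def mix(a, b, t):
    """Blend two RGB colors; t is the weight of the second color."""
    return tuple(round((1 - t) * x + t * y) for x, y in zip(a, b))

def shade_pixel(surface, trace_secondary_ray):
    if surface["reflective"]:
        # Only these pixels pay for a ray query into the rest of the scene.
        return mix(surface["base_color"], trace_secondary_ray(surface), 0.5)
    return surface["base_color"]

# Two pixels from an imagined frame: cobblestone (diffuse) and a puddle (reflective).
cobblestone = {"reflective": False, "base_color": (90, 80, 70)}
puddle = {"reflective": True, "base_color": (40, 40, 60)}
sky = lambda surface: (120, 160, 220)          # stand-in for the traced reflection

print(shade_pixel(cobblestone, sky))   # (90, 80, 70)   -- pure raster path
print(shade_pixel(puddle, sky))        # (80, 100, 140) -- raster plus one traced ray

Most pixels never pay for a ray; only the surfaces flagged as reflective spend the extra work, which is why he calls ray casting just one tool used inside a shader.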
Jen-Hsun Huang, CEO of Nvidia, before the ATI-AMD merger announcement,
said, "Programmability has different types. There are scalar
programs. That uses a scalar microprocessor with a flow of instructions
and it fetches instructions out of a cache. It processes data in a
data-dependent way. That sort of programming is what microprocessors
are really wonderful at. We are not very good at that kind of
processing. Our processors are adept at processing large amounts of
data that have less dependency. Our processor is more akin to a stream
processor. The types of architectures are radically different. Just as
the CPU can run DSP programs, a DSP is much better at running DSP
programs. There are different types of programming models, whether it
is signal processing for baseband, or voice. There are scalar
processors. There are image processors for enormously large data sets, which is what a GPU does.
There is integration at two levels. One is the unification of processing models: the CPU and the GPU combined together in a unified processor model. I think that is very unlikely. Although
on balance, transistors are free, we are challenged because most of the
opportunities require low power. So you have to have efficient
programming. It is far more efficient to run a program written for CPUs
on a CPU, and it's far more efficient to run a program for GPUs on a
GPU. There is the issue of power efficiency and cost efficiency. Brute
force is not a very good option. There is the second approach of
combining two processors onto one chip. In some markets, that would
happen. For example, integrated graphics combines two chips into one
where the technology is not very demanding. The market requirements are
much slower in commercial, corporate desktops and others that require
very little graphics. But if the graphics technology is a defining part
of that system, whether it is a game console or high-end PC or
workstations, the two devices innovate at different rhythms. There is
no reason the two devices want to merge into one in that case. In fact,
combining them into one makes it very difficult to combine two modern
cores into the same substrate on the same schedule. There, what causes
the two to move apart is not difference in programming models but
differences in market requirements and rhythms. By putting it in one
chip, you end up getting the worst of both worlds."
Nelson Gonzalez, CEO of Alienware, said about the merger of ATI and
AMD, "It may be a good thing. The reason I say that is I see ray
tracing is part of the way to go in the future. I don't think it's
going to be handled always at the CPU level. Maybe some FPGA chip.
That's the way to go. We are getting to the point where you have to
run eight geometric processors to process all these polygons. At some
point it doesn't make sense anymore. It makes sense to do ray
tracing.
I would think at this point it makes sense to keep the graphics chip
and the CPU separate. Unless you have many, many cores, we're still a ways from that. The writing is on the wall. Pixar rendered the Cars
movie with ray tracing. You're going to get a level of realism you
can't get with what we have. The combo of ray tracing and
rasterization makes sense at the beginning. Eventually, the future is
really just pure ray tracing. It's easier to model than to draw these things out."