joe smith wrote:
> What you have been describing so far seems to assume a pretty complete
> GPU hardware implementation of a still-evolving DX9+ standard. Neither
> NVidia nor Ati have complete hardware implementations. That situation
Everything else is still "today" except the 3.0 specification shaders, and
yup, it -is- required to leverage different generations of graphics chips. 2.0
is the entry level to 'real' shader programming; 1.0 was just the 'practice
round' where it didn't offer THAT MUCH functionality over the fixed pipe
(except for the vertex programs, and even there in a somewhat limited manner..
marginally useful for special effects and that sort of thing.. special cases.
With 2.0 and upwards we could begin writing fully unified rendering pipes
through on-demand shader compilation, which is pretty easy with the HLSL
runtime compiler in D3DX...)
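The on-demand compilation idea above can be sketched roughly like this. It's a minimal C++ sketch of a shader-variant cache: compile each variant the first time a material combination asks for it, then reuse the blob. Note that compileShader here is a hypothetical stand-in for the real runtime compile call (D3DXCompileShader in D3DX), and the feature-bits keying scheme is my own illustration, not anything prescribed by D3DX:

```cpp
#include <map>
#include <string>

// Hypothetical stand-in for the real compile call (D3DXCompileShader in
// D3DX); here it just tags the source so the caching logic can be shown.
static std::string compileShader(const std::string& source) {
    return "compiled:" + source;
}

// On-demand shader cache: build each shader variant the first time a
// material/feature combination requests it, then reuse the compiled blob.
class ShaderCache {
public:
    const std::string& get(unsigned featureBits) {
        auto it = cache_.find(featureBits);
        if (it == cache_.end()) {
            // Assemble HLSL source for this feature combination and
            // compile it exactly once.
            std::string src = "ps_variant_" + std::to_string(featureBits);
            it = cache_.emplace(featureBits, compileShader(src)).first;
            ++compiles_;
        }
        return it->second;
    }
    int compiles() const { return compiles_; }
private:
    std::map<unsigned, std::string> cache_;
    int compiles_ = 0;
};
```

The point is that the CPU only decides *which* variant to bind; the expensive compile happens once per combination, so the renderer can scale across shader generations without a hand-written pipe per chip.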
There is still a lot to be desired even with the 3.0 specs, but it's a nice
foundation / entry level for a few years from now. All the same, a properly
leveraged *GPU* doesn't need much spoon-feeding at all from the CPU. The CPU
just has to tell it when to do what, and the GPU will handle the rest.. so to
speak. The case you mention, that older hardware must be catered for, is a
source of much frustration, but no pain no gain... like I ranted earlier, it's
easier to go with the lowest common denominator and screw the scalability; in
a commercial environment Good Enough is often the most economical and
therefore feasible choice. If I were doing stuff for my own amusement, I could
afford all kinds of crap I cannot do in day-to-day work. Etc.. but to cut the
story short: the CPU isn't a bottleneck for the GPU by default.. it is a
bottleneck due to conscious (or non-conscious) design choices.