HL2 and 9800 pro.

  • Thread starter: Lee
Doug said:
Uh, wrong. AGP bus is 32-bits wide or 4 bytes,

Show me where it says 32 bits. The standard PCI bus is 32 bits wide, but I
don't think AGP is.
furthermore, your memory has a LOT more latency than the AGP bus does so
you'll NEVER realize those kind of xfer rates. I'd like to see any
benchmarks where dual-channel doubles the effective bandwidth.

Fair enough. All those calcs before were just theoretical maximums. More
important than any other factor, though: system memory still needs to feed
the CPU; so the video card can never get the full system memory bandwidth
for itself.
The GeForce 2/GTS also has a slower GPU that could NEVER utilize AGP 8x
because you can't clock-in data faster than the clock speed of the GPU.

AGP8x doesn't actually run at 533 MHz. It's either quad- or eight-pumped,
much like how the Pentium4 FSB is quad-pumped. The standard Geforce2 GTS
runs at 200 MHz core clock.
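The arithmetic behind that distinction can be sketched as follows. This is a back-of-envelope check, not from the thread itself: it assumes the ~66.67 MHz AGP base clock and the 32-bit (4-byte) AD bus, with AGP 8x moving eight data phases per clock.

```python
# Back-of-envelope: AGP 8x bandwidth from the base clock and "pumping".
# Assumptions: ~66.67 MHz AGP base clock, 32 AD lines (4 bytes),
# 8 transfers per clock for AGP 8x.
base_clock_hz = 66_666_667      # AGP base clock, ~66.67 MHz
transfers_per_clock = 8         # AGP 8x: eight data phases per clock
bus_width_bytes = 4             # 32 AD lines = 4 bytes

effective_rate = base_clock_hz * transfers_per_clock   # transfers/sec
bandwidth = effective_rate * bus_width_bytes           # bytes/sec

print(f"{effective_rate / 1e6:.0f} MT/s, {bandwidth / 1e9:.2f} GB/s")
# -> 533 MT/s, 2.13 GB/s
```

So the bus still clocks at ~66 MHz; the "533" figure is the effective transfer rate, not a clock speed.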
 
Look for this .pdf file agp30SpecUpdate06-21.pdf. Look at the AD pins in the
AGP spec. These are the pins/lines that carry data bidirectionally, and there
are only 32 of them. AGP has more in common w/PCI than you might think (TRDY
and IRDY are both PCI bus signals). This document SPECIFICALLY mentions
32-bit transfers, so I'm afraid you are, in fact, WRONG. The "theoretical
bandwidth" figure mentioned in this highly technical document is 2.1 GB/sec,
which I guess the fastest GPUs made could keep up with.

Um, the GeForce 2/GTS runs at 200 MHz; assuming it has a 32-bit interface to
the AGP bus, it can only clock in 800 MB/s of data MAX. This assumes the GPU
has NOTHING else to do, but since it only has to support the AGP 4x spec
maybe that's enough, but it could never keep up w/AGP 8x.
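A quick sanity check of that 800 MB/s figure. The assumption (not stated in the spec, just the argument above) is that the GPU latches at most one 4-byte word from the bus per core clock:

```python
# Claim above: a 200 MHz GPU with a 32-bit path to AGP can latch at
# most one 4-byte word per core clock cycle.
gpu_clock_hz = 200_000_000      # GeForce2 GTS core clock
bytes_per_clock = 4             # assumed 32-bit AGP-side interface

gpu_max_in = gpu_clock_hz * bytes_per_clock   # bytes/sec the GPU can accept
agp8x_bw = 2_133_000_000                      # ~2.1 GB/s theoretical AGP 8x

print(gpu_max_in / 1e6)   # 800.0 (MB/s) -- well short of AGP 8x
```

Under that assumption, the GPU core clock, not the bus, is the bottleneck at AGP 8x rates.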
 
Doug said:
Look for this .pdf file agp30SpecUpdate06-21.pdf. Look at the AD pins in
the AGP spec. these are the pins/lines that carry data bidirectionally and
there are only 32 of them.

Fair enough. Looks like my sources were incorrect. The bidirectional
full-duplex operation may have been twisted into "64-bit" by some uninformed
author/marketing guy years ago.
Um, the GeForce 2/GTS runs at 200 MHz; assuming it has a 32-bit interface to
the AGP bus, it can only clock in 800 MB/s of data MAX. This assumes the GPU
has NOTHING else to do, but since it only has to support the AGP 4x spec
maybe that's enough, but it could never keep up w/AGP 8x.

Only if 1 bit is transferred per pin per clock cycle. This was true in the
days before DDR, quad-pumped FSB, etc. As you recall, 3dfx cards always had
synchronous core and local memory clock speeds by default. Nowadays, this is
clearly not true, since the AGP 3.0 bus itself runs at 66 MHz (see the PDF),
but transfers 8 bits per pin per clock cycle.
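Laying out the per-pin transfer counts across the AGP generations makes the same point. This sketch assumes the common published figures (the same ~66.67 MHz base clock and 32 AD lines for every mode):

```python
# Transfers per pin per clock across AGP modes; every mode runs the
# same ~66.67 MHz base clock over the same 32 AD lines (4 bytes).
BASE_CLOCK_HZ = 66_666_667
BUS_BYTES = 4
PUMPING = {"1x": 1, "2x": 2, "4x": 4, "8x": 8}

for mode, transfers in PUMPING.items():
    bw_mb = BASE_CLOCK_HZ * transfers * BUS_BYTES // 1_000_000
    print(f"AGP {mode}: ~{bw_mb} MB/s")
```

Only AGP 1x actually moves one bit per pin per clock; the later modes multiply transfers per clock while the clock itself stays put.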
 
It also depends on whether or not the GPU can read from the AGP bus and
read/write to its local memory simultaneously. If not, then one would
preclude the other. I wonder if the local video memory is demultiplexed.
 