Triple buffering option in OpenGL compatibility

  • Thread starter: Art
I just tried it, and with it on I can't even finish the timedemo anymore.


I tried it with vsync both on and off.

I've heard that with it on you get 5 to 10 fps more if you have vsync on
(if it works, that is).
 
Art said:
In general, is it best to have this enabled or disabled?

Well... I get 41 fps in the timedemo without v-sync and with triple
buffering disabled.

The frame rate is reduced to 30 with v-sync enabled... so off I go to try
this triple buffer thing...

The timedemo now reports 40.1 fps with v-sync on (and the triple buffer
enabled, of course).


So, sounds good to me!
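
As a rough back-of-the-envelope sketch of why those numbers line up (this
assumes a 60 Hz refresh rate, which isn't stated above): with double
buffering and vsync, a finished frame has to wait for the next vertical
blank, so the frame rate snaps down to refresh/2, refresh/3 and so on,
while triple buffering gives the card a spare back buffer to keep
rendering into.

  import math

  REFRESH = 60.0  # assumed refresh rate in Hz

  def vsync_double_buffered_fps(raw_fps):
      # Each frame ends up occupying a whole number of refresh intervals,
      # because the buffer swap has to wait for the next vertical blank.
      intervals = math.ceil(REFRESH / raw_fps)
      return REFRESH / intervals

  def vsync_triple_buffered_fps(raw_fps):
      # A spare back buffer lets the card keep rendering while it waits,
      # so on average you only lose the cap at the refresh rate itself.
      return min(raw_fps, REFRESH)

  print(vsync_double_buffered_fps(41))  # 30.0 -- matches the drop above
  print(vsync_triple_buffered_fps(41))  # 41   -- close to the 40.1 reported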
 
I can't see what the answer was about the buffering. Enabled or disabled?
And where do you set it?
 
Display>Settings>Advanced>3D>OpenGL>Compatibility

I don't really know what it does, but it let me get near enough the same
frame rate with v-sync on as off. Does anyone know the disadvantages of
this? There must be some negative effect...
 
The Berzerker said:
Display>Settings>Advanced>3D>OpenGL>Compatibility

I don't really know what it does, but it let me get near enough the same
frame rate with v-sync on as off. Does anyone know the disadvantages of
this? There must be some negative effect...

AFAIK it uses up more memory.
 
Glzmo said:
AFAIK it uses up more memory.

Are we talking the system's RAM, or memory on the video card?

Either way, glad it's a legit performance boost, if you get me, rather than
making something look worse just to improve fps. I hope it's the system's
RAM; I've got enough of that to spare.
 
It uses more frame buffer memory (it's a static increase, though: regardless
of the application, triple buffering will always use the same amount of
additional RAM for a given resolution and colour depth). So... YES, we are
talking about RAM on the vid card.
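
For a rough idea of the size of that static increase (the resolution and
colour depth below are just example values, not anything from this thread):
one extra 32-bit back buffer at 1024x768 is about 3 MB, whatever the game
itself is doing.

  WIDTH, HEIGHT = 1024, 768  # example resolution
  BYTES_PER_PIXEL = 4        # 32-bit colour

  buffer_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / (1024 * 1024)

  double_mb = 2 * buffer_mb  # front buffer + one back buffer
  triple_mb = 3 * buffer_mb  # front buffer + two back buffers

  print(f"one colour buffer: {buffer_mb:.1f} MB")  # 3.0 MB
  print(f"double buffering:  {double_mb:.1f} MB")  # 6.0 MB
  print(f"triple buffering:  {triple_mb:.1f} MB")  # 9.0 MB, i.e. +3.0 MB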
 
Well, it smooths the timedemo out and makes it less jerky, or could that
just be the vsync being on?
 
Triple-buffering uses 50% more framebuffer memory on the video card. It
lets you keep vsync on (so no tearing) without the frame rate getting
locked to a fraction of your refresh rate whenever the card can't keep up.

On a lot of modern games triple-buffering can slow the game down
considerably because the textures get pushed into system memory through AGP,
which is dog-slow compared to video memory. This happens more often if you
have FSAA enabled and less often if your card has a lot of memory (say 256
MB).

I remember trying this option on my 16 MB Voodoo3 with 3DMark 2000. With
triple-buffering, I got a slightly higher score, because the score is
calculated only from the game tests, which used small textures. The 64-MB
texture "tunnel" test, however, slowed down by 80% with it enabled.
 
First of One said:
Triple-buffering uses 50% more framebuffer memory on the video card. It
lets you keep vsync on (so no tearing) without the frame rate getting
locked to a fraction of your refresh rate whenever the card can't keep up.

On a lot of modern games triple-buffering can slow the game down
considerably because the textures get pushed into system memory through AGP,
which is dog-slow compared to video memory. This happens more often if you
have FSAA enabled and less often if your card has a lot of memory (say 256
MB).

Has anyone found any specific examples of games that have problems when
triple buffering is enabled?
 
First of One said:
Triple-buffering uses 50% more framebuffer memory on the video card. It
lets you keep vsync on (so no tearing) without the frame rate getting
locked to a fraction of your refresh rate whenever the card can't keep up.

On a lot of modern games triple-buffering can slow the game down
considerably because the textures get pushed into system memory through AGP,
which is dog-slow compared to video memory. This happens more often if you
have FSAA enabled and less often if your card has a lot of memory (say 256
MB).

So in this case (textures pushed into system memory), would PCI-E (16x)
produce a benefit? I.e. if you use full AA with vsync and triple-buffering?
 
"Have problems" as utterly refusing to run to quitting randomly?
Triple-buffering may require a larger AGP aperture setting. Otherwise it
should just run slowly, but still run.
 
You'd see some improvement with PCIe in theory, but not much. AGP 8x peaks
at about 2.1 GB/s and a 16-lane PCIe slot at roughly 4 GB/s each way, while
dual-channel DDR400 tops out at 6.4 GB/s shared with the CPU. So PCIe x16
gives you very roughly double the bandwidth for texture traffic at best.

This is all just peanuts compared to the 35 GB/s offered by a GeForce
6800U's onboard memory, though. So if textures do get pushed into system
memory, the game will be ass-slow no matter what kind of interface you've got.
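
To put those figures in perspective, a quick sketch using the theoretical
peak numbers above (the 64 MB texture working set is just an example, and
latency and bus overhead are ignored): even one pass over that much spilled
data eats a big chunk of a frame's time budget on AGP or PCIe, but barely
registers when it comes from onboard memory.

  BANDWIDTH_GBPS = {
      "AGP 8x":         2.1,
      "PCIe x16":       4.0,
      "6800U onboard": 35.0,
  }

  TEXTURES_MB = 64  # example working set fetched once per frame

  for bus, gbps in BANDWIDTH_GBPS.items():
      # MB divided by GB/s is roughly milliseconds, taking 1 GB ~ 1000 MB
      ms = TEXTURES_MB / gbps
      print(f"{bus:>14}: {ms:5.1f} ms to move {TEXTURES_MB} MB")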
 