DRS said:
The true problem with any sub-10ms average response time LCD monitor isn't
the response time. It's human visual perception. Human image persistence
(about 10 milliseconds) works for CRTs because it compensates for the
phosphor's excitation decay whilst the electron beam continues to scan the
rest of the screen. It works against LCDs because they hold the image
between frames, and the significantly slower image "decay" collides with
the "imprinted image" on the retina, hence the perception of motion blur.
It's a bit more complicated than even that. Human vision DOES have
a persistence phenomenon as described (although its behavior is more
complex), but it also has an "expectation" (built into the eye-brain
system) that things which APPEAR to be moving across the visual field
really ARE moving smoothly across that field. The eye is also never
perfectly stationary; it is constantly in motion, in a combination of very
small and relatively sizable rapid movements (microsaccades and saccades),
and it "assumes" that the object in motion will appear in the "expected"
relative position within the visual field following those movements. The
LCD behaves more like film - a series of truly static images, each shown and
"held" in a fixed position - than does the CRT (in which there really never
IS a single, whole image presented at once), and it suffers from very
similar motion artifacts. (This isn't really in conflict at all with what
you're saying; take it more as a bit of elaboration on the phenomenon.)
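
Just to put a rough number on that sample-and-hold effect, here's a
back-of-the-envelope sketch in Python. It's purely my own illustration
(the object speed and hold times are assumed values, not anything DRS or
I measured): when the eye tracks a moving object smoothly, any image that
is held stationary on screen slides across the retina for as long as it
is held.

    # Rough estimate of perceived smear on a smoothly tracking eye.
    # Assumption: the eye follows the moving object, so a frame held
    # stationary on screen slides across the retina for the hold time.

    def blur_width_px(speed_px_per_s: float, hold_time_s: float) -> float:
        """Approximate smear width in pixels: object speed multiplied by
        how long each static image is held on screen."""
        return speed_px_per_s * hold_time_s

    speed = 960.0            # assumed object speed: 960 px/s across the screen
    lcd_hold = 1.0 / 60.0    # full-persistence LCD holds the frame ~16.7 ms
    crt_hold = 0.001         # assumed CRT phosphor glow: roughly 1 ms per refresh

    print(f"LCD-like hold: ~{blur_width_px(speed, lcd_hold):.1f} px of smear")
    print(f"CRT-like hold: ~{blur_width_px(speed, crt_hold):.1f} px of smear")

With those assumed numbers the held frame smears across roughly 16 pixels
while the brief CRT-style flash smears about one pixel, which is why the
"held" image reads as blur even when the panel's transition time is tiny.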
I have serious doubts about the ability of a camera to accurately mimic
this phenomenon.
So far, the results seem to correlate fairly well with perceived blur,
although the camera probably shouldn't be thought of as actually
"mimicking" the behavior of the visual system in this area. And as long
as we can come up with a number SOMEHOW which is shown to be a good
predictor of how the display will be perceived - well, that SHOULD be
the goal, right?
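
For what it's worth, one way to turn a captured luminance profile into a
single number is to measure the width of the transition across a moving
edge. The sketch below is only an illustration under my own assumptions
(the profile values and the 10%/90% thresholds are made up for the
example; this is not a description of our actual setup or data):

    # Hypothetical blur-width metric from a captured luminance profile
    # across a moving edge: the distance over which the normalized
    # luminance rises from 10% to 90%.

    def blur_edge_width(profile, low=0.10, high=0.90):
        """Return the number of samples between the first crossing of
        `low` and the first crossing of `high` in a 0..1 profile that
        rises monotonically across the edge."""
        lo_idx = next(i for i, v in enumerate(profile) if v >= low)
        hi_idx = next(i for i, v in enumerate(profile) if v >= high)
        return hi_idx - lo_idx

    # Illustrative profile: luminance ramping up across 20 pixel positions.
    profile = [0.0, 0.02, 0.05, 0.12, 0.22, 0.35, 0.48, 0.60, 0.71, 0.80,
               0.87, 0.92, 0.95, 0.97, 0.98, 0.99, 1.0, 1.0, 1.0, 1.0]

    print(f"Blur edge width: {blur_edge_width(profile)} pixel positions")

Whether that particular number is the right one is open to argument; the
point is just that any metric along these lines can be checked against
how viewers actually rate the display, which is the "good predictor"
test above.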
Bob M.