Mike Engles
Kennedy said: That is merely part of the effect, and only the obvious part which
completely misses the subtlety of the encode-decode effect. Linearity
is *NOT* the only metric of image quality.
To understand this, consider what would happen if the CRT had the inverse
gamma (i.e. 0.45 instead of 2.2) - then you would have to apply a gamma
compensation of 2.2 to the image. This would have the effect of
darkening the image, which would then be brightened by the CRT. You
would *still* "see the image correctly" in terms of its brightness
(because you have perfectly compensated the CRT non-linearity) but it
would look very poor in terms of shadow posterisation.
This is trivial to demonstrate. Take a 16-bit linear gradient from
black to white. Apply a gamma of 2.2 which will darken the image. Then
reduce the image to 8-bits, which would be the state it would appear in
prior to being sent to the CRT. Then apply a gamma of 0.45 to simulate
how such a CRT would display the image. It is still apparently the
correct brightness and is perfectly linear. However, it is now severely
posterised in the shadows and a visibly poor gradient.
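A rough sketch of that exercise (assuming Python with numpy, and reading
"apply a gamma of g" as raising the normalised value to the power g, as
above):

import numpy as np

# 16-bit linear gradient from black to white, held as values in 0..1.
linear = np.linspace(0.0, 1.0, 65536)

# "Apply a gamma of 2.2": raise to the power 2.2, which darkens the values.
darkened = linear ** 2.2

# Reduce to 8 bits - the state the data would be in when sent to the
# hypothetical 0.45-gamma CRT.
eight_bit = np.round(darkened * 255.0) / 255.0

# "Apply a gamma of 0.45" to simulate that CRT's response, which restores
# the apparent brightness and overall linearity.
displayed = eight_bit ** 0.45

# Count the distinct displayed levels left in the darkest tenth of the
# original range - only a handful, hence the shadow posterisation.
shadows = displayed[linear < 0.1]
print(len(np.unique(shadows)))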
This exercise should demonstrate clearly that simply precompensating for
the non-linearity of the display is not enough. It is important that
the display non-linearity itself is the opposite of the perceptual
non-linearity, otherwise you need far more bits to achieve tonal
continuity and inevitably waste most of the available levels.
On the contrary, since the gamma compensated image is in a perceptually
evenly quantised state, you have equalised the probability of losing
data by making the lighter parts lighter with that of losing data by
making the darker parts darker, whatever processing you wish to apply.
In the linear state
there are insufficient levels to adequately describe the shadows with
8-bit data, and consequently processing in *that* state results in lost
information - in the shadows.
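To put rough numbers on that (a small sketch, assuming Python with
numpy): count how many of the 256 codes of an 8-bit image fall in the
darkest tenth of the luminance range in each representation.

import numpy as np

codes = np.arange(256) / 255.0

# Linear 8-bit: the code value is proportional to luminance, so the
# darkest tenth of the luminance range gets only the codes below 0.1.
print(np.sum(codes <= 0.1))            # about 26 codes

# Gamma compensated 8-bit: luminance is roughly code ** 2.2, so the same
# darkest tenth reaches up to a code of 0.1 ** (1 / 2.2).
print(np.sum(codes ** 2.2 <= 0.1))     # about 90 codes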
Editing any image will cause image degradation irrespective of the
number of bits. The issue is whether that degradation, or loss of
information, is perceptible. Editing 8-bit images in the linear state
will produce much more perceptible degradations, particularly in the
shadows, than editing in 8-bit gamma compensated data.
And hence your edits are applied with a perceptual weighting to the
available levels.
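As a rough illustration (again a sketch assuming Python with numpy), a
decode-to-linear-and-back round trip can stand in for an edit performed
on 8-bit linear data:

import numpy as np

codes = np.arange(256)

# Take 8-bit gamma compensated data into 8-bit linear form and back,
# which is the least destructive thing an edit in 8-bit linear space
# could possibly do to it.
linear8  = np.round(((codes / 255.0) ** 2.2) * 255.0)
restored = np.round(((linear8 / 255.0) ** (1.0 / 2.2)) * 255.0).astype(int)

# Count how many of the original 256 gamma-space codes survive, and how
# many survive in the darkest quarter, where the casualties concentrate.
print(len(np.unique(restored)))        # noticeably fewer than 256
print(len(np.unique(restored[:64])))   # only a dozen or so distinct levels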
With 16-bits it is much less of an issue, but the same rules apply - you
have a higher probability of your processing causing loss of detail in
the shadows than in the highlights, and processing in "perceptual
space" (i.e. gamma compensated data) equalises the probability of data
loss throughout the image range, so that the same process damages the
shadows no more than the highlights or the mid-tones.
Seeing the image in a linear state is only part of the solution, and
whilst you continue to focus on linearity at the expense of the other
issues you will never understand the reason why gamma is necessary.
A binary (1-bit) image is perfectly linear, but isn't a very good
representation of the image, and neither is a 2, 3 or 4-bit image, and
so on. 6-bits is adequate (and 8-bits conveniently gives additional
headroom for necessary colour management functions) *if* the available
levels produced by those 8-bits are distributed optimally throughout the
luminance range, which means the discrete levels are equally spaced
throughout the perceptual response range. As soon as you depart from
*that* criterion you increase the risk of discernible
degradation in those regions of the perceptual response range which have
fewest levels. This is irrespective of how many bits you have in your
image although, obviously, the more bits you have the less likely the
problem is to become visible. Less likely doesn't mean never though!
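To see roughly what "distributed optimally" buys you (a sketch, assuming
Python with numpy): count the codes that land in each one-stop band of
luminance for a linear and for a gamma compensated 8-bit encoding.

import numpy as np

def codes_per_stop(bits, decode_exponent):
    # Luminance of each code under the given decoding exponent
    # (1.0 = linear data, 2.2 = gamma compensated data).
    lum = (np.arange(2 ** bits) / (2 ** bits - 1.0)) ** decode_exponent
    # Band each code by whole stops below white; everything below
    # 1/256 of white is lumped into the bottom band.
    stops = np.floor(np.log2(np.clip(lum, 2.0 ** -8, 1.0)))
    return np.unique(stops, return_counts=True)

# Linear 8-bit: about half of all codes describe the brightest stop,
# while each of the deep-shadow stops gets only a handful.
print(codes_per_stop(8, 1.0))

# Gamma compensated 8-bit: the same 256 codes are spread far more evenly
# across the stops, so the shadows keep usable tonal steps.
print(codes_per_stop(8, 2.2))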
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
Hello
I was not quite arguing that we should not apply a gamma to 16-bit
linear data, but that we should decode not in the display but in the
image itself, and then treat it linearly rather than in a gamma space.
Presumably this encode-decode process meets the demand of gamma to make
the most of the available bits.
Mike Engles