Mike Engles said:
> Applying a gamma to an image brightens a linear image. That image is
> fed to a CRT, which dulls the image. Now this is convenient. We see the
> image correctly because the CRT has the opposite non-linearity from
> that applied as the gamma.
That is merely part of the effect, and only the obvious part; it
completely misses the subtlety of the encode-decode process. Linearity
is *NOT* the only metric of image quality.
To understand this, consider what would happen if the CRT had the inverse
gamma (i.e. 0.45 instead of 2.2): you would then have to apply a gamma
compensation of 2.2 to the image. This would have the effect of
darkening the image, which would then be brightened by the CRT. You
would *still* "see the image correctly" in terms of its brightness
(because you have perfectly compensated the CRT non-linearity), but it
would look very poor in terms of shadow posterisation.
This is trivial to demonstrate. Take a 16-bit linear gradient from
black to white. Apply a gamma of 2.2, which will darken the image. Then
reduce the image to 8 bits, which is the state in which it would be
sent to such a CRT. Then apply a gamma of 0.45 to simulate how that CRT
would display the image. The result is still apparently the correct
brightness and is perfectly linear. However, it is now severely
posterised in the shadows and a visibly poor gradient.
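For anyone who wants to try it, here is a minimal Python/numpy sketch of
that exercise. It assumes a plain 2.2 power law rather than an exact CRT
or sRGB curve, uses a synthetic ramp instead of a real image, and the
function name is my own. It simply counts how many distinct tones
survive below 10% brightness in each case.

import numpy as np

# 16-bit-style linear gradient from black to white.
ramp = np.linspace(0.0, 1.0, 65536)

def pipeline(encode_gamma, display_gamma):
    encoded = ramp ** encode_gamma                    # gamma applied to the image
    codes = np.round(encoded * 255).astype(np.uint8)  # reduced to 8 bits
    return (codes / 255.0) ** display_gamma           # what the display shows

# The hypothetical case above: darken with 2.2, display on a 0.45 CRT.
bad = pipeline(2.2, 1 / 2.2)
# The real arrangement: brighten with 0.45, display on a 2.2 CRT.
good = pipeline(1 / 2.2, 2.2)

print("distinct shadow tones, 2.2-encoded: ", np.unique(bad[bad < 0.1]).size)
print("distinct shadow tones, 0.45-encoded:", np.unique(good[good < 0.1]).size)

Both pipelines come back essentially linear overall, but the first leaves
only a couple of distinct tones in the bottom 10% of the range, while the
second leaves around ninety.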
This exercise should demonstrate clearly that simply precompensating for
the non-linearity of the display is not enough. It is important that
the display non-linearity itself is the opposite of the perceptual
non-linearity, otherwise you need far more bits to achieve tonal
continuity and inevitably waste most of the available levels.
> This would seem to be fine if we did nothing else to the image. If we
> edit this in 8 bits, with the image in a brightened state, there is a
> danger of making the already brighter bits brighter and losing
> information.
On the contrary, since the gamma-compensated image is in a perceptually
evenly quantised state, the probability of losing data by making the
lighter parts lighter is equal to that of losing data by making the
darker parts darker, whatever processing you wish to apply. In the
linear state there are insufficient levels to adequately describe the
shadows with 8-bit data, and consequently processing in *that* state
results in lost information - in the shadows.
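To put a rough number on that (again assuming a plain 2.2 power law, and
treating 5% of full luminance as "the shadows" purely for illustration),
you can count how many of the 256 codes actually land in the darkest
part of the range:

import numpy as np

codes = np.arange(256) / 255.0

linear_luminance = codes          # luminance represented by linear-coded values
gamma_luminance = codes ** 2.2    # luminance represented by gamma-compensated values

print("linear codes below 5% luminance:", np.count_nonzero(linear_luminance < 0.05))
print("gamma codes below 5% luminance: ", np.count_nonzero(gamma_luminance < 0.05))

Linear coding spends roughly a dozen of its 256 levels on that region;
gamma-compensated coding spends sixty-odd.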
> Editing any image in 8 bits will cause image degradation.
Editing any image will cause image degradation irrespective of the
number of bits. The issue is whether that degradation, or loss of
information, is perceptible. Editing 8-bit images in the linear state
will produce much more perceptible degradation, particularly in the
shadows, than editing 8-bit gamma-compensated data.
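As a crude illustration of that difference (assuming a plain 2.2 power
law, and using a darken-one-stop-then-undo adjustment as a stand-in for
an arbitrary edit), you can round to 8 bits after every step and see how
much shadow detail survives in each representation. The scene, the edit
and the 5% threshold are all my own choices:

import numpy as np

GAMMA = 2.2
scene = np.linspace(0.0, 1.0, 100000)             # "true" linear luminances

def edit_linear(scene):
    q = np.round(scene * 255)                     # linear 8-bit original
    q = np.round(q * 0.5)                         # one stop down
    q = np.round(q * 2.0)                         # and back up
    return np.clip(q, 0, 255) / 255.0             # decoded linear luminance

def edit_gamma(scene):
    k = 0.5 ** (1 / GAMMA)                        # one stop down, in encoded units
    q = np.round(scene ** (1 / GAMMA) * 255)      # gamma-compensated 8-bit original
    q = np.round(q * k)
    q = np.round(q / k)
    return (np.clip(q, 0, 255) / 255.0) ** GAMMA  # decoded linear luminance

for name, out in [("linear 8-bit", edit_linear(scene)),
                  ("gamma 8-bit ", edit_gamma(scene))]:
    print(name, "distinct tones left below 5% luminance:",
          np.unique(out[out < 0.05]).size)

The linear-coded version comes back with only a handful of distinct
shadow tones; the gamma-coded version keeps several dozen.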
> If we were using 16 bits and applied the gammaed image to a linear
> display, we would have to apply the effect of a CRT to the display, but
> we are still editing in a gamma state.
And hence your edits are applied with a perceptual weighting to the
available levels.
> I still cannot see why 16-bit images cannot be edited in a linear
> state, with the gamma correction applied to the image rather than to
> the display, even if you do say that gamma is a necessity,
With 16 bits it is much less of an issue, but the same rules apply - you
have a higher probability of your processing causing loss of detail in
the shadows than in the highlights, and processing in "perceptual space"
(i.e. gamma-compensated data) equalises the probability of data loss
throughout the image range, so that the same process damages the shadows
no more than the highlights or the mid-tones.
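A back-of-the-envelope way to see both points (that 16 bits makes this
much less of an issue, and that gamma compensation still spreads the
precision more evenly) is to look at the size of one quantisation step
relative to the luminance it sits at, deep in the shadows. The sketch
below assumes a plain 2.2 power law and picks L = 0.01 of full scale as
an arbitrary shadow tone:

GAMMA = 2.2
L = 0.01                                      # arbitrary deep-shadow luminance

def step_ratio(bits, gamma):
    levels = 2 ** bits - 1
    q = round(L ** (1 / gamma) * levels)      # nearest code for luminance L
    step = ((q + 1) / levels) ** gamma - (q / levels) ** gamma
    return step / L                           # one code step, as a fraction of L

for bits, gamma, label in [(8, 1.0, " 8-bit linear"),
                           (8, GAMMA, " 8-bit gamma "),
                           (16, 1.0, "16-bit linear"),
                           (16, GAMMA, "16-bit gamma ")]:
    print(label, f"{100 * step_ratio(bits, gamma):.2f}% jump per code at L=0.01")

With 8 bits the linear coding jumps by tens of percent per code in that
region while the gamma coding jumps by several percent; with 16 bits
both steps are small, but the gamma coding is still the more evenly
distributed of the two.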
> because either way we are seeing the image in a
> linear state.
Seeing the image in a linear state is only part of the solution, and
while you continue to focus on linearity at the expense of the other
issues you will never understand the reason why gamma is necessary.
A binary (1-bit) image is perfectly linear, but it isn't a very good
representation of the image; neither is 2, 3 or 4 bits, and so on. 6 bits
is adequate (and 8 bits conveniently gives additional headroom for
necessary colour management functions) *if* the available levels
produced by those bits are distributed optimally throughout the
luminance range, which is to say that the discrete levels are equally
distributed throughout the perceptual response range. As soon as you
depart from *that* criterion you increase the risk of discernible
degradation in those regions of the perceptual response range which have
fewest levels. This is irrespective of how many bits you have in your
image, although, obviously, the more bits you have the less likely the
problem is to become visible. Less likely doesn't mean never, though!
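One way to see what "equally distributed throughout the perceptual
response range" buys you is to count how many of the 256 codes fall into
each of the top few stops (each stop being a halving of luminance, which
is close to an equal perceptual interval). The sketch below again
assumes a plain 2.2 power law; code 0 is skipped only to avoid taking
the log of zero:

import numpy as np

codes = np.arange(1, 256) / 255.0

for name, L in [("linear coded", codes), ("gamma coded ", codes ** 2.2)]:
    stop = np.floor(-np.log2(L)).astype(int)      # 0 = brightest stop, 1 = next, ...
    counts = [int(np.count_nonzero(stop == s)) for s in range(6)]
    print(name, "codes per stop, bright to dark:", counts)

Linear coding spends half of its codes on the single brightest stop and
leaves a handful for the deep shadows; gamma-compensated coding spreads
them far more evenly, which is exactly the "distributed optimally"
condition described above.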