Mike Engles said:
Hello
Thanks for an amazingly detailed reply.
I do feel that my analogy has some validity.
A linear grey scale signal of 8 bits has, as you say, a step of 0.4%
between levels, and each step is even. This signal is put into a CRT.
The CRT, when measured, does not replicate this. We need to apply a
gamma of 0.45 or thereabouts to make the CRT show a step of 0.4% per
level. When we do this the grey scale is measurable and will show the
same range as the original.
Yes it will and, assuming the gamma compensation is implemented in
analogue, no resolution will be lost and the 0.4% steps will be equally
spaced when viewed by a linear response sensor. Your eye/brain is *not*
a linear response sensor, so if you display an 8-bit linear signal on a
CRT the luminance range looks right, matching the original, but you
*will* see posterisation - most of it in the shadows.
That is why the gamma correction applied in Photoshop and other
applications uses a limited-slope gamma curve - a true gamma, as should
be implemented on high-bit data, would result in shadow posterisation
after applying the gamma compensation to 8-bit data.
Try this, if you like, using an 8-bit linear ramp and the true gamma
correction curve that you have created. Perhaps seeing the result for
yourself will make the effect more believable and give you more insight
into its cause. A 0.4% step should be totally imperceptible, since the
eye cannot discern better than around 1-2% steps in perceptual space,
so seeing any steps at all is a consequence of the eye's gamma
response. Fewer bits in the original grey scale make the effect even
more obvious - even though the eye cannot discern steps on a
perceptually graded 6-bit scale, if you have 6 bits on a linear scale
the posterisation is *obvious* in the shadows - even on an 8-bit output.
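If you would rather see the numbers than squint at a ramp, here is a
minimal Python sketch of that test; the 2.2 gamma and the 8-bit
rounding are my assumptions for illustration:

import numpy as np

# 8-bit linear ramp: 256 equally spaced input levels
linear = np.arange(256)

# Apply a true gamma compensation of 1/2.2 (about 0.45) in 8-bit arithmetic
corrected = np.round(255 * (linear / 255.0) ** (1 / 2.2)).astype(np.uint8)

print("unique output levels:", len(np.unique(corrected)))  # far fewer than 256
print("shadow codes:", corrected[:8])      # large jumps between adjacent levels
print("highlight codes:", corrected[-8:])  # duplicated levels, i.e. lost data

The missing levels in the shadows are the posterisation you see; the
duplicated levels in the highlights are the data that nothing will ever
recover.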
Now IF the eye has a gamma of 0.33, it is more than capable of showing
the brightness steps correctly.
I can't see how you reach that conclusion - you have lost data in
implementing the CRT gamma compensation of 0.45 digitally with only
8-bit precision and nothing will ever recover that.
It would be an over-compensation if I applied gamma 0.33 to the CRT.
Yes, because you are linearising the output of the CRT with 0.45 gamma.
But linearising only produces equal steps in one scale - in this case
for a linear sensor. You are not a linear sensor, so the scale of one
axis of the response curve has changed to a power law - with a power of
0.33.
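To put a rough number on what that power law does to equally spaced
linear steps, here is a small sketch; the 0.33 exponent is the
approximation used throughout this thread:

import numpy as np

# Ten equal steps in linear luminance, i.e. a perfectly linearised display
luminance = np.linspace(0.0, 1.0, 11)

# Approximate the eye's response as a 0.33 power law
perceived = luminance ** 0.33

# Step sizes as the eye sees them: the first (shadow) step is more than
# ten times the size of the last (highlight) step
print(np.diff(perceived))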
So HAS the eye really a gamma of 0.33?
Yes.
The difference between linearity space and perceptual space is analogous
to linear and logarithmic graph paper - if you remember that from your
school days or later. On linear graph paper the steps 1, 2, 3, 4, 5
etc. are equally spaced, but if you change that axis to a log scale, as
on log graph paper, then the step from 1 to 2 is 1.71 times as large as
the step from 2 to 3 and 2.41 times as large as the step from 3 to 4 and
so on. The step from 0.1 to 0.2 is the same size as the step from 1 to
2 on the log axis graph paper, while on the linear graph paper it is 10
times smaller.
Those steps that are perfectly even in linear space are compressed in
the highlights and stretched out massively in the shadows in perceptual
space.
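The graph paper numbers are easy to check for yourself, if you want to:

import math

# Step sizes on a base-10 log axis
s12 = math.log10(2) - math.log10(1)  # 0.301
s23 = math.log10(3) - math.log10(2)  # 0.176
s34 = math.log10(4) - math.log10(3)  # 0.125

print(s12 / s23)  # about 1.71
print(s12 / s34)  # about 2.41
print(math.log10(0.2) - math.log10(0.1))  # 0.301 again - same as the 1-to-2 step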
In a linear display we should have no need to apply gamma 0.45, because
if we measured the steps they would already be 0.4% per level, as that
is what we were trying to achieve when applying the gamma to the CRT -
measurable linearity.
That is correct and, viewed by a linear sensor such as the raw output
of a digital camera, there will be equally spaced 0.4% steps. However,
your eyes are not linear: they have a gamma of about 0.33, and the
effect is that what you perceive is visible quantisation in the shadows
and extremely smooth highlights.
In this instance 8 bits will describe the scale exactly, with no
quantisation effects.
Only for a linear sensor - not for you or any other primate viewing the
image.
Now if, in a supposedly linear system, we do not have measurable
linearity - say the slide is underexposed - we will have to apply a
gamma to correct this. It is now that 8 bits will not describe the
image and quantisation will become apparent. We will need more bits,
depending on how much correction is needed.
Why? The slide is just underexposed. Using your argument, if you can
see it on the slide then you do not need any gamma correction to
reproduce the output accurately on the display - just increase the
illumination or exposure, as you do to view an underexposed slide in
the first place.
In all this you seem to be saying that we need gamma to correct a CRT
to measurable linearity, which will look correct to the world. Is this
perceptual linearity?
No.
You also seem to be saying that, having made the image look correct to
the world, we need an extra gamma to achieve perceptual linearity. How
much extra gamma?
I am not saying, and never have done, that you need *any* additional
gamma. What I am saying, like Poynton, is that if the CRT had never
been invented and only linear displays existed, we would need to invent
gamma to make best use of the available bits in digital signals,
mapping them to perceptual gamma.
For example, 6 bits are adequate for high quality video systems if the
ADC samples an already gamma-corrected signal. This level of digital
data was used for many years in both professional and domestic digital
systems. Chances are that if your VCR or TV has a picture-in-picture
facility, it is only 6 bits (or less!) unless it is fairly recent. The
frame store that enabled the first live switch between studio and an
un-synchronised outside broadcast unit without a frame roll (a
helicopter-mounted camera at the 1980 Moscow Olympics) was only 6 bits
deep in the luminance channel.
However, when sampling linear signals that have not been gamma
compensated, 6 bits are inadequate. As you will notice if you try the
suggested test above, 8 bits are barely adequate and, if you want the
same quality as the earlier 6-bit system, you need to sample with at
least 13 bits (assuming a gamma of 2.2 in each case).
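The 13-bit figure follows from the smallest step the 6-bit
gamma-corrected system can represent; a quick sketch of that
arithmetic, with the 2.2 gamma assumed as above:

import math

# First code above black on a 6-bit gamma-corrected scale
smallest_encoded = 1 / 63
# Its value in linear light, assuming a display gamma of 2.2
smallest_linear = smallest_encoded ** 2.2

# Linear levels (and bits) needed to resolve that same step directly
levels_needed = 1 / smallest_linear
print(levels_needed, math.log2(levels_needed))  # about 9100 levels, just over 13 bits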
As you correctly say, if when correcting a CRT the gamma was applied in
the analogue domain and then digitised, then 8 bits would describe the
signal. This is because an infinite number of levels are mapped through
the gamma, leaving an infinite number of CRT-corrected levels in steps
of 0.4% when digitised. This will be a gamma-corrected 8-bit signal
with no quantisation effects. Measurable linearity, looks correct to
the world, presumably perceptually linear.
Yes - *this* will be approximately perceptually linear. Note the
difference between this and the case you cited above, where the
response to the same question was "no". Here you have digitised a
gamma-compensated signal - but there are still quantisation effects; it
is just that you cannot perceive them because they are well below the
1-2% perceptual threshold.
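You can estimate the size of those residual steps in the same way as
before; in this sketch the eye is again approximated as a 0.33 power
law and the encoding as 2.2, both assumptions carried over from the
discussion above:

import numpy as np

codes = np.arange(256) / 255.0

# Perceived lightness of each 8-bit code, eye approximated as a 0.33 power law
perceived_gamma = (codes ** 2.2) ** 0.33  # codes gamma-compensated before digitising
perceived_linear = codes ** 0.33          # codes on a straight linear scale

# Largest step between adjacent codes, as a fraction of the perceptual range
print(np.diff(perceived_gamma).max())   # about 0.018, and only at the very first code
print(np.diff(perceived_linear).max())  # about 0.16 - a grossly visible jump at black

All but the first few steps of the gamma-compensated scale fall below
1%, whereas the linear scale opens with a step the eye cannot miss.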
I have to say I remain sceptical.
Nothing wrong with being sceptical of something you don't understand -
that is good. Hopefully some of the suggested tests you can do for
yourself will demonstrate that the scepticism is misplaced.
We do need gamma to correct a CRT, that is a fact. It looks like, if
you are correct, that we will always need gamma, because manufacturers
are making linear displays non-linear to emulate a CRT.
They are not making them that way to emulate a CRT. What is the point
of making a linear LCD display that only has a *digital* input emulate
a CRT? It can never be fed from a source designed for an analogue-input
CRT. Even for "dual input" displays, if it were only required for CRT
compatibility then it would only need to be applied to the analogue
feed. No, they introduce gamma on inherently linear displays because if
you feed them with a straight 8-bit video signal you will not have
sufficient bits to prevent posterisation in the shadows.