scanner gamma

  • Thread starter: Bob Whatsima
Kennedy said:
Looks about right to me - and this time you have labelled it as "Eye at
1/3 power", which is more correct than last time. It was the label
"inverse" that I was objecting to last time, rather than the precise
data itself, which indicated that this was a precompensation that you
were intending to apply. However, this is really a perceptual response,
rather than just the eye - I don't know that anyone has, in this case,
actually separated out the response of the eye from the brain's
interpretation of the information it produces. It has been done in other
instances though, such as Campbell's work in determining how much
resolution is produced by the eye and how much is interpreted by the
brain - but that is a different subject. ;-)

One area does look to be in error though - the blacks of the eye curve
appear to cross over the inverse gamma of the CRTs (both 2.2 and 1.8),
which they should not. I guess this might just be a misalignment of your
screen grabs used to produce this composite image though.

Anyway, as you can see from your curves, the perceptual response is
almost the inverse of the gamma response of the CRT, so multiplying them
together gives almost a linear perceptual response. Now, obviously,
that would leave the image uncompensated for the CRT, so what your
curves are demonstrating is that by compensating for the gamma of the
CRT you are actually making best use of the available bits in the data.
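
To put rough numbers on that, here is a minimal Python sketch of the
arithmetic, assuming the 2.5 "real" CRT gamma and the roughly 1/3-power
eye figures used in this thread:

# Sketch only: cascade an assumed 2.5-gamma CRT with an assumed 1/3-power eye.
crt_gamma = 2.5        # "real" CRT gamma figure from this discussion
eye_gamma = 1.0 / 3.0  # approximate perceptual (lightness) exponent

for v in (0.05, 0.25, 0.5, 0.75, 1.0):        # normalised input signal
    luminance = v ** crt_gamma                # what the CRT actually emits
    perceived = luminance ** eye_gamma        # what the eye/brain reports
    # combined exponent is 2.5 * (1/3) = 0.83, i.e. not far from 1.0
    print(f"signal {v:.2f} -> luminance {luminance:.3f} -> perceived {perceived:.3f}")

The luminance column is nothing like the input, which is exactly the
point: the cascade is perceptually near-linear, but the image itself is
still uncompensated for the CRT.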

You can't actually use this "eye gamma" to compensate for anything -
just in case that's where you are heading with this - the CRT pretty
much already achieves that.

Yes, gamma can vary between CRTs - and indeed between colour channels in
the same CRT, though usually by less. However the apparent gamma
changes depending on the light level of the surroundings in which the
CRT is viewed. Although the real CRT gamma is probably around 2.5, when
viewed in dimly lit
surroundings, it is preferable for images to have a higher residual
gamma. That is why the "nominal" gamma of a CRT is usually stated as
2.2 - so that it looks right when viewed in a dim room.
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)


Hello

Thanks for the reply.


Actually, my premise is that if the eye is gamma 0.33 (a steep curve)
then it has a built-in correction for a CRT; the inverse of 2.2 is gamma
0.45 (a less steep curve). One should not need to apply any inverse
gamma to a CRT - the eye should be doing it, with gamma to spare. The
net result would be a gamma curve of 0.33/0.45 = 0.73.

I am equating the eye to a black box with a positive gain X, and the CRT
to another black box with a negative gain Y. With X being greater than Y,
the output will be a net positive. Now by adding inverse 2.2 to correct
for a CRT we are adding even more gain.

Is this logic correct?

I have made a new plot.

http://www.btinternet.com/~mike.engles/mike/Curves3.jpg

Mike Engles

 
Mike Engles said:
Actually, my premise is that if the eye is gamma 0.33 (a steep curve)
then it has a built-in correction for a CRT; the inverse of 2.2 is gamma
0.45 (a less steep curve). One should not need to apply any inverse
gamma to a CRT - the eye should be doing it, with gamma to spare. The
net result would be a gamma curve of 0.33/0.45 = 0.73.

I am equating the eye to a black box with a positive gain X, and the CRT
to another black box with a negative gain Y. With X being greater than Y,
the output will be a net positive. Now by adding inverse 2.2 to correct
for a CRT we are adding even more gain.

Is this logic correct?
No. There are two discrete things going on here and you must not
confuse them.

As you said right back at the beginning of this thread, you have to
normalise the output of the CRT to the real world. That requires a
gamma correction if you feed the CRT with information from a linear
sensor. That way the output of the CRT looks right - with correct
intensity shadows, mid-tones and highlights.

That is the primary reason for gamma correction. When that correction
is applied the output of the CRT looks as similar as possible to the
real world.

The second thing that is happening is noise shaping. If you had a
linear display and fed it with a linear digital signal, then the
perceived highlights would be represented by many more codes than the
perceived shadows, with the mid-tones, hopefully, getting a reasonable
share. That
means that you would be wasting data counts in the highlights, because
you cannot differentiate similar codes, while having insufficient codes
to adequately display the shadows, resulting in posterisation. If you
are lucky, you might have enough bits to display the mid-tones without
such effects, but that depends on the number of bits you use. The same
thing happens with analogue noise. The analogue noise in the linear
sensor would be exaggerated in the shadows and compressed in the
highlights.
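
To put rough numbers on that, here is a small Python sketch, assuming
the 1/3-power eye figure used earlier in the thread, that counts how
many of the 255 non-zero codes of a linear 8-bit signal fall in each
quarter of the perceived brightness range:

eye_gamma = 1.0 / 3.0            # assumed perceptual exponent, as above
codes_per_band = [0, 0, 0, 0]    # darkest quarter ... brightest quarter

for code in range(1, 256):
    luminance = code / 255.0              # linear data on a linear display
    perceived = luminance ** eye_gamma    # crude perceptual scale
    band = min(int(perceived * 4), 3)     # which quarter of the perceived range
    codes_per_band[band] += 1

print(codes_per_band)    # [3, 28, 76, 148]: shadows starved, highlights flooded

Only three codes cover the darkest quarter of what you perceive, which
is the posterisation and noise problem described above.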

This would clearly be unacceptable, and there are only two solutions:
1. Use more bits in the digital data being fed to the linear display
2. Deliberately implement a gamma response in the analogue input of the
display so that it was no longer linear and required a similar gamma
compensation to the CRT. *If* this were implemented then the ideal gamma
would be 3, as opposed to the actual CRT gamma of 2.5, with a gamma
correction of 0.333...

This is what Poynton means when he talks about the coincidence. It has
*nothing* at all to do with the image intensity range - it refers
specifically to noise and quantisation within that intensity range.

You cannot achieve an equivalent effect just by manipulating digital
data, because ultimately it is a case of making the best use of the data
you have throughout the entire luminance range.
Yes. What that shows is that the combination of the perceptual response
and the CRT response distributes noise and digital posterisation almost
evenly throughout the luminance range - however you still need to apply
CRT gamma correction to the linear sensor to produce the correct
luminance levels relative to the real world.

In short, with an 8-bit display you need a linear sensor with many more
than 8-bits to prevent posterisation in the shadows. This is fairly
obvious from the data. 8-bits produces an image on the CRT which has
approximately 0.4% perceived brightness steps between adjacent codes. As
your curves show, this is a little higher in the shadows and a little
lower in the highlights because the eye more than compensates for the
CRT response. However, ignoring that minor residual effect on noise and
quantisation, to get these uniformly spaced perceived brightness steps
you need to compensate the output of the linear sensor for the CRT
gamma.
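
The 0.4% figure is easy to reproduce - a quick sketch, assuming the 2.2
CRT gamma and 1/3-power eye discussed above:

crt_gamma, eye_gamma = 2.2, 1.0 / 3.0    # assumed figures from this thread

def perceived(code):
    # perceived brightness of an 8-bit code after the CRT and the eye
    return ((code / 255.0) ** crt_gamma) ** eye_gamma

steps = [perceived(n + 1) - perceived(n) for n in range(255)]
print(sum(steps) / len(steps))    # ~0.0039, i.e. roughly 0.4% per code
print(steps[10], steps[250])      # shadows a little larger, highlights a little smaller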

In the extreme case, if you want the full 8-bit range (which,
fortunately, you don't need, because you can barely perceive 1% steps)
with a gamma of 2.2, then you need to have sufficient bits in the linear
sensor to be able to map onto a count of 1 accurately, which turns out
to be around 18-bits. Of course, that means that you then have far
more bits than you need for everything other than the extreme shadows,
so you would end up throwing a lot of information away, but you need
18-bits of linear sensor just to achieve uniform perceived quantisation.
That is why Photoshop limits the gamma slope in the shadows - so that
you *don't* need that many bits - at the expense of some loss of
perceived luminance linearity in the shadows: they get darker quicker
than they really should.
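
The 18-bit figure follows directly: the darkest non-zero 8-bit code
corresponds, through a 2.2 gamma, to a tiny fraction of the full-scale
linear signal. A sketch of the arithmetic in Python:

import math

gamma = 2.2
darkest_linear = (1 / 255) ** gamma               # ~5.1e-6 of full scale
bits_needed = math.ceil(math.log2(1 / darkest_linear))
print(darkest_linear, bits_needed)                # ~5.1e-06 and 18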

It is also why you need a linear response scanner with far more bits
than just the 8 (probably only 5 or 6 after monitor profiling software
has had its way with your system) you send to the display. If you had a
non-linear scanner with a similar response to the eye, you could get
away with far fewer bits. In a sense this is how old photomultiplier
based drum scanners worked and where Nikon were heading with their
original linear CCD scanner designs where the gamma correction was
implemented in the scanner itself and only 8-bits was ever output to the
computer. Unfortunately both of these approaches have succumbed to
commercial pressure - the former to commercial obsolescence and the
latter to marketing hype that more bits meant a better scanner. Of
course more bits does generally mean better, but you don't need all of
those bits output - you just need to turn that linear quantisation into
perceptually linear quantisation by the application of an internal gamma
compensation.
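
As a rough sketch of that idea (a hypothetical pipeline, not any
particular scanner's firmware), the following Python applies the gamma
correction to a high-bit linear reading inside the "scanner" and only
outputs 8 perceptually spaced bits:

def scanner_output(linear_value, adc_bits=14, gamma=2.2, out_bits=8):
    """linear_value is the sensor reading, normalised to the range 0..1.
    The 14-bit ADC depth and 2.2 gamma are illustrative assumptions."""
    adc_max = (1 << adc_bits) - 1
    out_max = (1 << out_bits) - 1
    linear_code = round(linear_value * adc_max)           # high-bit linear ADC
    corrected = (linear_code / adc_max) ** (1.0 / gamma)  # gamma applied internally
    return round(corrected * out_max)                     # only 8 bits leave the scanner

print([scanner_output(v) for v in (0.0005, 0.005, 0.05, 0.5, 1.0)])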
 
Kennedy said:
[snip]


Hello

Thanks for an amazingly detailed reply.

I do feel that my analogy has some validity.

A linear grey scale signal of 8 bits has, as you say, a step of 0.4%
between levels; each step is even. This signal is put into a CRT. The
CRT, when measured, does not replicate this. We need to apply a gamma
of 0.45 or thereabouts to make the CRT show a step of 0.4% per level.
When we do this the grey scale is measurable and will show the same
range as the original.

Now IF the eye has a gamma of 0.33, this is more than capable of showing
the brightness steps correctly. It would be an overcompensation if I
applied gamma 0.33 to the CRT. So HAS the eye really a gamma of 0.33?

With a linear display, we should have no need to apply gamma 0.45,
because if we measured the steps they would already be 0.4% per level,
as that is what we were trying to achieve when applying the gamma
correction to make the CRT measurably linear.

In this instance 8 bits will describe the scale exactly, with no
quantisation effects. Now if in a supposedly linear system we do not
have measurable linearity - say the slide is underexposed - we will have
to apply a gamma to correct this. It is now that 8 bits will not
describe the image and quantisation will become apparent. We will need
more bits depending on how much correction is needed.

In all this you seem to be saying that we need gamma to correct a CRT to
measurable linearity, which will look correct to the world; is this
perceptual linearity? You also seem to be saying that, having made the
image look correct to the world, we need an extra gamma to achieve
perceptual linearity. How much extra gamma?

As you correctly say, if, when correcting a CRT, the gamma were applied
in the analogue domain and then digitised, then 8 bits would describe
the signal. This is because an infinite number of levels are mapped
through the gamma, leaving an infinite number of CRT-corrected levels
with steps of 0.4% when digitised. This will be a gamma-corrected 8-bit
signal with no quantisation effects. Measurably linear, looking correct
to the world, and presumably perceptually linear.

I have to say I remain sceptical. We do need gamma to correct a CRT;
that is a fact. It looks like, if you are correct, we will always need
gamma, because manufacturers are making linear displays non-linear to
emulate a CRT.

Thanks for a valuable discussion.
Much food for thought.


Mike Engles
 
Mike Engles said:
Hello

Thanks for an amazingly detailed reply.

I do feel that my analogy has some validity.

A linear grey scale signal of 8 bits has, as you say, a step of 0.4%
between levels; each step is even. This signal is put into a CRT. The
CRT, when measured, does not replicate this. We need to apply a gamma
of 0.45 or thereabouts to make the CRT show a step of 0.4% per level.
When we do this the grey scale is measurable and will show the same
range as the original.
Yes it will and, assuming the gamma compensation is implemented in
analogue, no resolution will be lost and the 0.4% steps will be equally
spaced when viewed by a linear response sensor. Your eye/brain is *not*
a linear response sensor, so if you display an 8-bit linear signal on a
CRT the luminance range looks right, matching the original, but you
*will* see posterisation - most of it in the shadows.

That is why the gamma correction applied in Photoshop and other
applications uses a limited-slope gamma curve - a true gamma curve, such
as should be applied to high-bit data, would result in shadow
posterisation when the gamma compensation is applied to 8-bit data.

Try this, if you like, using an 8-bit linear ramp and the true gamma
correction curve that you have created. Perhaps seeing the result for
yourself will make the effect more believable and give you more insight
into its cause. A 0.4% step should be totally imperceptible, since the
eye cannot discern better than around 1-2% steps in perceptual space, so
seeing any steps at all is a consequence of the eye's gamma response.
Fewer bits in the original grey scale make the effect even more obvious
- even though the eye cannot discern steps on a perceptually graded
6-bit scale, if you have 6-bits on a linear scale the posterisation is
*obvious* in the shadows - even on an 8-bit output.
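
Here is the same test done numerically in Python rather than in
Photoshop - a sketch only: apply a true 1/2.2 gamma correction to an
8-bit linear ramp and look at the resulting codes.

gamma = 2.2
corrected = [round(255 * (n / 255) ** (1 / gamma)) for n in range(256)]

print(corrected[:6])        # [0, 21, 28, 34, 39, 43] - huge jumps in the shadows
print(corrected[250:])      # [253, 253, 254, 254, 255, 255] - duplicate codes up top
print(len(set(corrected)))  # only ~184 distinct output codes survive

The first few input codes explode into jumps of twenty-plus output
levels - that is the shadow posterisation - while at the top end several
input codes collapse onto the same output code.
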
Now IF the eye has a gamma of 0.33, this is more than capable of showing
the brightness steps correctly.

I can't see how you reach that conclusion - you have lost data in
implementing the CRT gamma compensation of 0.45 digitally with only
8-bit precision and nothing will ever recover that.
It would be an overcompensation if I applied gamma 0.33 to the CRT.

Yes, because you are linearising the output of the CRT with 0.45 gamma.
But linearising only produces equal steps in one scale - in this case
for a linear sensor. You are not a linear sensor, so the scale of one
axis of the response curve has changed to a power law - with a power of
0.33.
So HAS the eye really a gamma of 0.33?
Yes.

The difference between linearity space and perceptual space is analogous
to linear and logarithmic graph paper - if you remember that from your
school days or later. On linear graph paper the steps 1, 2, 3, 4, 5
etc. are equally spaced, but if you change that axis to a log scale, as
on log graph paper, then the step from 1 to 2 is 1.71 times as large as
the step from 2 to 3 and 2.41 times as large as the step from 3 to 4 and
so on. The step from 0.1 to 0.2 is the same size as the step from 1 to
2 on the log axis graph paper, while on the linear graph paper it is 10
times smaller.
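
Those graph-paper figures are easy to verify, for example:

import math

step = lambda a, b: math.log10(b) - math.log10(a)   # step size on a log axis

print(step(1, 2) / step(2, 3))      # ~1.71
print(step(1, 2) / step(3, 4))      # ~2.41
print(step(0.1, 0.2), step(1, 2))   # both ~0.301: same size on the log axis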

Those steps that are perfectly even in linear space are compressed in
the highlights and stretched out massively in the shadows in perceptual
space.
With a linear display, we should have no need to apply gamma 0.45,
because if we measured the steps they would already be 0.4% per level,
as that is what we were trying to achieve when applying the gamma
correction to make the CRT measurably linear.
That is correct and, viewed by a linear sensor such as the raw output of
a digital camera, there will be equally spaced 0.4% steps. However, your
eyes are not linear; they have a gamma of about 0.33, and the effect is
that
what you perceive is visible quantisation in the shadows and extremely
smooth highlights.
In this instance 8 bits will describe the scale exactly, with no
quantisation effects.

Only for a linear sensor - not for you or any other primate viewing the
image.
Now if in a supposedly linear system we do not have measurable linearity
- say the slide is underexposed - we will have to apply a gamma to
correct this. It is now that 8 bits will not describe the image and
quantisation will become apparent. We will need more bits depending on
how much correction is needed.
Why? The slide is just underexposed. Using your argument, if you can
see it on the slide then you do not need any gamma correction to reproduce
the output accurately on the display - just increase the illumination or
exposure, as you do to view an underexposed slide in the first place.
In all this you seem to be saying that we need gamma to correct a CRT to
measurable linearity, which will look correct to the world; is this
perceptual linearity?
No.

You also seem to be saying that, having made the image look correct to
the world, we need an extra gamma to achieve perceptual linearity. How
much extra gamma?
I am not saying, and never have done, that you need *any* additional
gamma. What I am saying, like Poynton, is that if the CRT had never
been invented and only linear displays existed, we would need to
invent gamma to make best use of the available bits in digital signals,
mapping them to perceptual gamma.
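
To see why, compare the perceived size of the very first code step under
the two encodings - a rough sketch, again assuming the 1/3-power eye and
a 2.2 gamma:

eye = lambda lum: lum ** (1.0 / 3.0)    # assumed perceptual lightness response
gamma = 2.2

# Linear 8-bit coding on a linear display: the first code step is a huge perceived jump.
print(eye(1 / 255) - eye(0))            # ~0.16, a 16% lightness jump

# Gamma-encoded 8-bit coding (decoded by the display): the same step is tiny.
print(eye((1 / 255) ** gamma) - eye(0)) # ~0.017, below the 1-2% visibility threshold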

For example, 6-bits is adequate for high quality video systems, if the
ADC samples an already gamma corrected signal. This level of digital
data was used for many years in both professional and domestic digital
systems. Chances are that if your VCR or TV has a picture-in-picture
facility, it is only 6-bits (or less!) unless it is fairly recent.
The frame store that enabled the first live switch between studio and
un-synchronised outside broadcast unit without a frame roll (a
helicopter mounted camera at the 1980 Moscow Olympics) was only 6-bits
deep in the luminance channel.

However, when sampling linear signals that have not been gamma
compensated, 6-bits is inadequate. As you will notice if you try the
suggested test above, 8-bits is barely adequate and, if you want the
same quality as the earlier 6-bit system, you need to sample with at
least 13-bits (assuming a gamma of 2.2 in each case).
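
Roughly where those numbers come from - a sketch, assuming a gamma of
2.2 as stated:

import math

gamma = 2.2
perceived_step = 1 / 63                            # 6-bit gamma-coded video: ~1.6% per code
linear_bits = math.log2(1 / ((1 / 63) ** gamma))   # linear bits needed to resolve the darkest step
print(perceived_step, linear_bits)                 # ~0.0159 and ~13.1
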
As you correctly say, if, when correcting a CRT, the gamma were applied
in the analogue domain and then digitised, then 8 bits would describe
the signal. This is because an infinite number of levels are mapped
through the gamma, leaving an infinite number of CRT-corrected levels
with steps of 0.4% when digitised. This will be a gamma-corrected 8-bit
signal with no quantisation effects. Measurably linear, looking correct
to the world, and presumably perceptually linear.
Yes - *this* will be approximately perceptually linear. Note the
difference between this and the case you cited above where the response
to the same question was "no". Here you have digitised a gamma
compensated signal - but there are still quantisation effects, it is
just that you cannot perceive them because they are well below the 1-2%
perceptual threshold.
I have to say I remain sceptical.

Nothing wrong with being sceptical of something you don't understand -
that is good. Hopefully some of the suggested tests you can do for
yourself will demonstrate that the scepticism is misplaced.
We do need gamma to correct a CRT; that is a fact. It looks like, if you
are correct, we will always need gamma, because manufacturers are making
linear displays non-linear to emulate a CRT.
They are not making them that way to emulate a CRT. What is the point
of making a linear LCD display that only has a *digital* input emulate a
CRT? They can never be fed from a source designed for an analogue input
CRT. Even for "dual input" displays, if it was only required for CRT
compatibility then it would only need to be applied to the analogue
feed. No, they introduce gamma on inherently linear displays because if
you feed them with a straight 8-bit video signal you will not have
sufficient bits to prevent posterisation in the shadows.
 