Gamma correction question

  • Thread starter: Jack Frillman
Tim, the basic idea is that the RGB values 0..255 are not device
dependent. Period. Pixels do not have intensity values in candelas.
The pixel is simply three numeric RGB values between 0..255, without
units, without even a known purpose. One pixel's value describes only its
place in that 0..255 range, whatever that means.

0 means mighty dark, the darkest possible value, such as it is.
255 means mighty bright, the brightest possible value, such as it is.
But nothing defines what that means exactly.
Different devices have different capabilities to show this image, and
different devices will have different responses to the same values
0..255 in the same image file.

There is no constant K which improves the situation.

We can only work to achieve "pleasing results" instead of "exact
results". And this is not hard, but there is no concept of exact.
 
Much confusion and gnashing of teeth in this thread. Can somebody tell
me if I've got the following right?

--- Brightness of a pixel on an ideal computer screen
using 24-bit colour:

Lpixel = Lmax * ( x / 255 ) ^ gamma

where
# Lmax is the brightness when the screen is displaying pure white
# x is an 8-bit number 0-255 and Red = Green = Blue = x
# gamma = 2.2 (fixed by the hardware in the screen)

--- Output of an ideal film scanner in 16-bit-linear mode:

y = 65535 * ( 1 - D )

where
# D is the slide density (not the log density; e.g. D = 0.9 corresponds
to a transmission of 10%)
# Red = Green = Blue = y, assuming the slide is gray

--- Conversion from 16-bit-linear to 8-bit data:

x = 255 * ( y / 65535 ) ^ (1/gamma)
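
If it helps to see these as code, here is a minimal Python sketch of the
three "ideal device" formulas above, assuming gamma = 2.2 throughout;
the function names are illustrative only, not any real scanner or
display API:

GAMMA = 2.2

def pixel_brightness(x, l_max=100.0):
    # Brightness of a gray pixel (R = G = B = x) on the ideal screen;
    # l_max is the brightness of pure white, in arbitrary units.
    return l_max * (x / 255.0) ** GAMMA

def scanner_output(transmission):
    # 16-bit linear output of the ideal scanner for a gray slide;
    # transmission is 0..1, so D = 1 - transmission.
    return round(65535 * transmission)

def to_8bit(y):
    # Convert 16-bit linear data to 8-bit gamma-encoded data.
    return round(255 * (y / 65535.0) ** (1.0 / GAMMA))

# Example: a slide area transmitting 10% of the light (D = 0.9)
y = scanner_output(0.10)    # 6554 in 16-bit linear
x = to_8bit(y)              # about 90 in 8-bit gamma space
print(pixel_brightness(x))  # roughly 10% of Lmax
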
Well, having correctly qualified your equations with the term "ideal
computer screen" and "ideal film scanner" I would say you have hit the
nail right on the head!

Apart from relatively minor corrections for the stray light output from
the display and the effect of any colour management and monitor profile
you are using (which changes the actual data in the video card LUT just
prior to conversion to the analogue output) that pretty much sums it up
to a first order for real practical systems too. ;-)
 
Wayne Fulton said:
Lmax is not the brightness of the CRT screen, exactly.

On the "ideal computer screen" it would be exactly as Tim defined it,
though. I think Tim was very careful in framing his post. ;-)
The gamma curve is always scaled to be in the intensity range 0..1
The simple value^gamma refers to a 0..1 range of data.

So x/255 scales the original 8 bit value 0..255 to be in the range 0..1.

Which is what Tim said, AIUI.
For example, the maximum value 255 divided by 255 is this 1, precisely.
Divide the value by 255 if the data is currently 8 bits, or 1024 if it is
10 bits, or 4096 if 12 bits, or 65535 if it is 16 bits.

Then raise to the gamma exponent.
So far, so good - just as Tim said. ;-)
Then multiply the result by 255 or 1024 or 4096 or 65535 to bring it from
the 0..1 scale back to the desired output range, which may not be the same
scale; for example, it may simultaneously do 12-bit to 8-bit conversion.
Bzzzzzt!!! Danger Will Robinson, Danger!!

Tim is referring here to how the CRT represents the 8-bit data sent to
it. It originates in 8-bit form but is converted to analogue form by
the time it gets to this stage. Hence, your further quantisation
*after* the application of gamma does *NOT* occur. You have just
introduced an extra and erroneous quantisation error at this point!

Since the signal is in analogue at this stage, the scaling factor is
Lmax, just as Tim defined it. ;-)
 
Perhaps it is as you say, as I don't know Tim's intention, but he did speak of
the brightness of a pixel, so I immediately thought digital. If I
misunderstood Tim, then my apologies go to Tim.

Regardless, I was referring to 12 to 8 bit digital data conversion with gamma
encoding. The assumed destination was a file, or at least a video board,
which is digital and so could never go directly to a CRT. I did then and
do now think it was both clear and correct, but you are welcome to correct it
if needed. Yes, I do wish I had typed 4095 instead of 4096, so that
4095/4095 = 1.
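
In code form, a minimal sketch of that 12-bit to 8-bit gamma encoding,
using 4095 as the 12-bit maximum as intended (illustrative Python, not
any particular scanner's code):

def encode_12_to_8(value12, gamma=2.2):
    # Scale a 12-bit linear value to 0..1, gamma-encode it,
    # then rescale to the 8-bit output range.
    return round(255 * (value12 / 4095) ** (1 / gamma))

print(encode_12_to_8(4095))  # 255: full scale maps to full scale
print(encode_12_to_8(409))   # 89: 10% linear lands at ~35% of 8-bit range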

You refer to the CRT doing the decoding, which is analog. Being analog, I
suspect the CRT has its own ways, instead of using the digital formula and a
calculator. We do have models of it, but they do not influence the CRT. I
think the range 0..1 would be appropriate there, which may or may not match
the supplied data range, but that seems insignificant here today.
 
Wayne Fulton said:
Perhaps it is as you say, as I don't know Tim's intention,

Me neither - we only have our personal interpretations of what he wrote
from which to assess his intention. So perhaps it is not as I say, but
I see no reason to specifically caveat a question unless you
deliberately intend the caveat to apply. It makes sense to start by
understanding the ideal situation in any case, then corrections for
practical implementations can be made on top of that.
but he did speak of
the brightness of a pixel, so I immediately thought digital. If I
misunderstood Tim, then my apologies go to Tim.
I thought something must have sent you off down the wrong track, and now
I see what it was. It happens - and it won't be the first time I have
done the same thing myself! ;-)
No harm done.
Regardless, I was referring to 12 to 8 bit digital data conversion with gamma
encoding.

Yes - it wasn't the veracity of your post that I queried, Wayne, rather
whether it answered the correct question. ;-)
 
Wayne Fulton said:
Then specifically what is the difference Chris? What are the specific
details of how this 8 bit gamma encoding followed by decoding can possibly
leave a perceptual advantage? (other than to correct the response of the
CRT of course, that part is a given).

http://chriscox.org/gamma/

With a linear encoding, too many codes are devoted to highlights that
you can't tell the difference between, and too few codes are devoted to
shadow values that you can tell the difference between. With a linear
encoding, you see obvious banding in the shadows.

With a gamma encoding (assuming a gamma near 2.0), the codes are evenly
distributed to match human vision. Thus you don't see any banding.
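
A small Python sketch of that code-allocation point (an illustration
assuming gamma 2.2, not code from the page above): count how many of the
256 codes land in the darkest tenth of linear luminance under each
encoding.

GAMMA = 2.2

def codes_in_shadows(decode, fraction=0.10):
    # Count 8-bit codes whose decoded linear luminance falls in the
    # darkest `fraction` of the 0..1 range.
    return sum(1 for code in range(256) if decode(code / 255.0) <= fraction)

linear = lambda v: v            # code value is luminance directly
gamma = lambda v: v ** GAMMA    # code value is gamma-encoded luminance

print(codes_in_shadows(linear))  # 26 codes for the darkest 10%
print(codes_in_shadows(gamma))   # 90 codes for the same range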

Poynton's explanation applies to the state while gamma encoded. We don't
view that.

Correct - but you view the result of that.

Then after it is decoded back to linear for presentation and
viewing by humans, then specifically what perceptual effect remains?

The steps between values: quantization.

And how is that accomplished? If this is real, just say the words to make me
into a believer.

It's "accomplished" because you have a limited number of values (256 in
the case of 8 bit images) to encode the image.

Chris
 
Timo said:
Hi Wayne, high end scanners have always been linear and high end
cameras have always provided the linear acquire mode.

Gamma 2.2 (or more accurately gamma 2.5) space is an obvious real
world requirement *only* at the publishing phase (when publishing to
the Web or for viewing with a normal office or home PC). It is
perfectly and easily possible to work in linear and just publish from
there.

Timo - how many times do we have to ask you to stop lying to users?
I don't know why you like to mislead people about image quality and
human vision, but you really need to stop it.

Chris
 
Mike Engles said:
Hello
You would agree that a gamma of 0.45 would brighten a normal looking
image by redistributing the levels. When displaying this on a CRT with a
gamma of 2.2, the CRT darkens the image that had gamma 0.45 applied.

So if the display did NOT darken the image that had gamma 0.45 applied,
the result would look very bright. So what do you do to make the image
look normal?

Why are you trying to confuse the issue? It's very simple.

The image encoding has nothing to do with the display gamma or transfer
function.

You either remove/correct the image encoding for the display in
software or in the video display card. The display transfer function
has zero effect on how you encode the image.

One more time: the image encoding has nothing whatsoever to do with the
display gamma or transfer function.

Chris
 
Wayne Fulton said:
I did understand most of what you said, but I still fail to see any reason
for a perceptual advantage of gamma encoding 8 bit data.

Do you like seeing banding in half of your image?
Or do you want usable images?

If you enjoy banding, then use linear encoding.
If you want usable images, use a gamma encoding.


On this subject, I always get lost during the part about "then magic
happens". <g>

There is no magic here.
Just simple math and some basic knowledge of human visual sensitivity.


But you are saying gamma has a perceptual
advantage for 8 bit data, as others say too, and I always miss the part
about specifically how this can happen?


Because without the gamma encoding you are giving too few bits/values
to the shadows (where the human visual system is very sensitive) and
too many bits/values to the highlights (where the human visual system
is not very sensitive).
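
To put rough numbers on that (a back-of-the-envelope illustration,
assuming gamma 2.2): the darkest 1% of linear luminance gets about

  255 * 0.01 = 2.55, i.e. only 2 or 3 codes

in a linear 8-bit encoding, but about

  255 * 0.01^(1/2.2) = 255 * 0.123 = 31 codes

with gamma 2.2 encoding - roughly ten times as many levels exactly
where the eye is most sensitive.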

Chris
 
Wayne Fulton said:
I am seeing that now Timo, thanks for the very clear explanation. I sure wish
I'd read this one first. <g>

Sure - reading the misinformation first makes it SOOOO much easier.
(note the sarcasm)

Timo is trying to mislead you.
That's what he does.
Why? We don't know.
But he's been at it for a while, and he never goes away for very long.


Chris
 
Timo - PLEASE stop lying to Wayne.
I don't know why you have this need to mislead and destroy other
people's images, but it needs to stop.

Chris
 
Hello Wayne.

There is one problem. The CRT has decoded the data, but the encoding is
still there. This is fine if we do not edit anything, but we need to decode
before editing. Editing in the gamma space can make the image go into
clipping much more easily.

No, that is just more of Timo's misinformation.

Chris
 
Timo,

I don't know if you don't understand English very well, or are just
spreading your usual baseless lies.

But your characterization of Mr. Poynton's statement is going beyond
your usual misinformation and into the realm of conspiracy theories
(which you have tread into before).

Please Timo, stop spreading your lies.

Chris
 
Kennedy said:
On the "ideal computer screen" it would be exactly as Tim defined it,
though. I think Tim was very careful in framing his post. ;-)

That's right - for the sake of simplicity, I was ignoring the finite
contrast range of the screen & the Dmax of the scanner.

Thanks to all who replied to my post. I'm glad to learn that "gamma" is
simple and understandable! The web is full of windy and confusing
articles, but I didn't find a simple explanation of the relation between
digital data and traditional darkroom units like density and exposure.
Maybe I should try to write a short article for non-professionals like
me. What do people think?

-Tim
 
Tim said:
That's right - for the sake of simplicity, I was ignoring the finite
contrast range of the screen & the Dmax of the scanner.

Thanks to all who replied to my post. I'm glad to learn that "gamma" is
simple and understandable! The web is full of windy and confusing
articles, but I didn't find a simple explanation of the relation between
digital data and traditional darkroom units like density and exposure.
Maybe I should try to write a short article for non-professionals like
me. What do people think?

-Tim

Oh yes: that would be a good idea for people like me who get easily
confused by the writings of the experts who clearly do not agree with
each other on several subjects!
In the Photoshop User's Guide there is virtually no talk about gamma,
and judging from this group it is very important to know what it is,
what it does and when to change it into what...

I'm looking forward to your tutorial! Especially in relation to
scanning negatives and positives and archiving the lot.

Regards,
Alex
 
Thanks to all who replied to my post. I'm glad to learn that "gamma" is
simple and understandable! The web is full of windy and confusing
articles, but I didn't find a simple explanation of the relation between
digital data and traditional darkroom units like density and exposure.
Maybe I should try to write a short article for non-professionals like
me. What do people think?

Yes please. I would like to read it. I'm still trying to understand
Kennedy's explanations of gamma encoding.

Geo
 
Sure, it would be beneficial. Just do not ignore the relevant facts;
errors on the order of 1000 times would not do any article good.

Timo Autiokari
 
I'm still trying to understand
Kennedy's explanations of gamma encoding.
In contrast (no pun intended) I am having some difficulty understanding
why people have trouble understanding gamma encoding. ;-)

In fairness, and this comes back to the initial questions raised in this
thread, I don't believe many people have any problem with the concept of
gamma encoding. After all, it is just another encode-decode scheme: you
get out from the decode what you put into the encode.

However, people do seem to have a problem understanding *why* gamma
encoding is better than simple linear encoding, and that is simple: it
uses fewer bits than linear encoding would to achieve the same range of
signals without introducing visual artefacts. Once you have grasped
that, the rest of the discussion is irrelevant, only serving to explain
*how* it manages to achieve this. Most people don't need, or want, to
know how something is achieved - it is only a tool and successful use of
that tool does not require dissection of its entrails - indeed, such
knowledge can often be an obstacle to optimum use.
 
Chris said:
No, that is just more of Timo's misinformation.

Chris


Hello

Applying a gamma to an image brightens a linear image. That image is fed
to a CRT which dulls the image. Now this is convenient.

We see the image correctly because the CRT has the opposite
non-linearity from that applied as the gamma.

This would seem to be fine if we did nothing else to the image. If we
edit this in 8 bits, with the image in a brightened state, there is a
danger of making the already brighter bits brighter and losing
information. Editing any image in 8 bits will cause image degradation.

If we were using 16 bits and applied the gamma-encoded image to a linear
display, we would have to apply the effect of a CRT to the display, but
we are still editing in a gamma state. I still cannot see why 16-bit
images should not be edited in a linear state, with the gamma
correction applied to the image rather than the display, even if you do
say that gamma is a necessity, because either way we are seeing the
image in a linear state.

Mike Engles
 
Mike Engles said:
Applying a gamma to an image brightens a linear image. That image is fed
to a CRT which dulls the image. Now this is convenient.

We see the image correctly because the CRT has the opposite
non-linearity from that applied as the gamma.
That is merely part of the effect, and only the obvious part which
completely misses the subtlety of the encode-decode effect. Linearity
is *NOT* the only metric of image quality.

To understand this consider what would happen if the CRT had the inverse
gamma (ie. 0.45 instead of 2.2) - then you would have to apply a gamma
compensation of 2.2 to the image. This would have the effect of
darkening the image, which would then be brightened by the CRT. You
would *still* "see the image correctly" in terms of its brightness
(because you have perfectly compensated the CRT non-linearity) but it
would look very poor in terms of shadow posterisation.

This is trivial to demonstrate. Take a 16-bit linear gradient from
black to white. Apply a gamma of 2.2 which will darken the image. Then
reduce the image to 8-bits, which would be the state it would appear in
prior to being sent to the CRT. Then apply a gamma of 0.45 to simulate
how such a CRT would display the image. It is still apparently the
correct brightness and is perfectly linear. However, it is now severely
posterised in the shadows and a visibly poor gradient.
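
Here is that exercise as a short Python sketch (an illustration assuming
NumPy is available, not Kennedy's own code):

import numpy as np

GAMMA = 2.2

# 16-bit linear gradient from black to white
grad = np.linspace(0.0, 1.0, 65536)

# Apply gamma 2.2 (darkens), then quantise to 8 bits
encoded = np.round((grad ** GAMMA) * 255) / 255

# Apply gamma 0.45 to simulate the hypothetical inverse-gamma CRT
displayed = encoded ** (1.0 / GAMMA)

# Linear overall, but the darkest 10% of the input collapses to a
# handful of output levels: severe shadow posterisation.
print(len(np.unique(displayed[grad <= 0.1])))  # 3 distinct levels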

This exercise should demonstrate clearly that simply precompensating for
the non-linearity of the display is not enough. It is important that
the display non-linearity itself is the opposite of the perceptual
non-linearity, otherwise you need far more bits to achieve tonal
continuity and inevitably waste most of the available levels.
This would seem to be fine if we did nothing else to the image. If we
edit this in 8 bits, with the image in a brightened state, there is a
danger of making the already brighter bits brighter and losing
information.

On the contrary, since the gamma-compensated image is in a perceptually
evenly quantised state, you have equalised the probability of losing
data by making the lighter parts lighter with the probability of losing
data by making the darker parts darker, whatever processing you wish to
apply. In the linear state
there are insufficient levels to adequately describe the shadows with
8-bit data, and consequently processing in *that* state results in lost
information - in the shadows.
Editing any image in 8 bits will cause image degradation.
Editing any image will cause image degradation irrespective of the
number of bits. The issue is whether that degradation, or loss of
information, is perceptible. Editing 8-bit images in the linear state
will produce much more perceptible degradations, particularly in the
shadows, than editing in 8-bit gamma compensated data.
If we were using 16 bits and applied the gamma-encoded image to a linear
display, we would have to apply the effect of a CRT to the display, but
we are still editing in a gamma state.

And hence your edits are applied with a perceptual weighting to the
available levels.
I still cannot see why 16-bit
images should not be edited in a linear state, with the gamma
correction applied to the image rather than the display, even if you do
say that gamma is a necessity,

With 16 bits it is much less of an issue, but the same rules apply - you
have a higher probability of your processing causing loss of detail in
the shadows than in the highlights, and processing in
"perceptual space" (ie. gamma compensated data) equalises the
probability of data loss throughout the image range, so that the same
process damages the shadows no more than the highlights or the
mid-tones.
because either way we are seeing the image in a
linear state.
Seeing the image in a linear state is only part of the solution, and
whilst you continue to focus on linearity at the expense of the other
issues you will never understand why gamma is necessary.

A binary (1-bit) image is perfectly linear, but isn't a very good
representation of the image; neither is 2, 3 or 4 bits, and so on. 6 bits
is adequate (and 8 bits conveniently gives additional headroom for
necessary colour management functions) *if* the available levels
produced by those bits are distributed optimally throughout the
luminance range, which is to say that the discrete levels are equally
distributed throughout the perceptual response range. As soon as you
depart from *that* criterion you increase the risk of discernible
degradation in those regions of the perceptual response range which have
fewest levels. This is irrespective of how many bits you have in your
image although, obviously, the more bits you have the less likely the
problem is to become visible. Less likely doesn't mean never though!
 