Gamma correction question

  • Thread starter: Jack Frillman
Mike Engles said:
Hello

Yes send me the image or give me a link.
(e-mail address removed).
OK - it's on its way. Let me know if it explains what is going on any
better for you.
 
Kennedy said:
OK - it's on its way. Let me know if it explains what is going on any
better for you.
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)


Hello

Have you sent the image? If yes, I have an awful feeling that my spam
filtering has done it in. I would be obliged if you could send it to my
wife's email

(e-mail address removed). She doesn't get spam.

I get a lot of spam; I must change my email name.

Mike Engles
 
Mike Engles said:
Have you sent the image? If yes, I have an awful feeling that my spam
filtering has done it in. I would be obliged if you could send it to my
wife's email

(e-mail address removed). She doesn't get spam.

I get a lot of spam; I must change my email name.
Yes, I sent it off yesterday, but I will try this address now. Let me
know if you get it.
 
Kennedy said:
Yes, I sent it off yesterday, but I will try this address now. Let me
know if you get it.
--
Kennedy


Hello

It has arrived, thank you.
I will study it.

Mike Engles
 
Mike Engles said:
Hello

Yes send me the image or give me a link.
(e-mail address removed).

I did as you suggested and applied an inverse gamma to an 8-bit linear
gradient I made in XARAX. Photoshop makes one with a curious shallow
U-shaped histogram.

Only if you fail to read the manual and leave the gradient smoothness
at 100% (spline interpolated)....

It is idle speculation, but what kind of work practice would we have
now if in the beginning inverse gamma had been applied to the TV, rather
than to the TV signal? All perceptual corrections would have been done
in the TV. We would then be working in linear and applying a transform
when exchanging data for printing.

The same - because the encoding of the signal has little to nothing to
do with the CRT physics. You would still gamma encode the signal to
get the best use of the bandwidth (and best signal quality for a given
bandwidth).

Chris
 
Timo said:
Yes, inverse gamma is applied because the CRT monitor will apply gamma
2.5. Human vision requires linear light (as it is in real life), so
the overall tonal reproduction curve (from scene luminances to the
luminances on the media) must be linear.


Timo,

You know that's a lie.
Please stop spreading misinformation.



Editing non-linear image data severely damages the image quality;
please see some demonstrations here:

And you know that's a lie as well (since you went out of your way to
cook up flawed examples).

Take your B.S. elsewhere.

Chris
 
Kennedy McEwen said:
Human lightness sensitivity has a gamma of around 0.3-0.4

That is not correct. The lightness perception of human vision is
(about) logarithmic. Lightness however does not relate to the task of
viewing the grayrange (be it on the CRT or on a print).

1) When you illuminate a room with one 25W lamp, your perception might
be that the room is dim, the lightness is low.
2) Then when you double that illumination (add another 25W lamp) you
perceive a distinct increase in lightness.
3) When you double this illumination again (add two 25W lamps) you
perceive that the increase in lightness was about the same as it was
in step 2)
4) and so on, for many, many doublings of the lamp power. This is a
logarithmic function.

While you do the above experiment the lightness adaptation of the
vision is in effect. This property of the vision allows us to see well
in vastly different lighting situations.

When we view the grayrange the illumination level is fixed, lightness
is fixed, so lightness adaptation of the vision is fixed. Therefore
the logarithmic behavior of the lightness adaptation of the vision is
not in effect when we view the grayrange under stable illumination.

Note that when you show on the CRT e.g. a test chart and then zoom it
up strongly so that only the darkest part of that chart is shown, the
vision will adapt to that dark screen, but this is not a normal
viewing situation for any image. The image coding should not be
chosen in such a way that it would optimize such an abnormal viewing
situation.

Image coding in 8-bit/c should be chosen so that it will
optimize the use of the codes in the normal viewing situation
of photographic images.

Such coding is not any gamma function, and certainly not a steep gamma
space like 2.2, nor logarithmic coding. The contrast sensitivity in
this situation (normal viewing of photographic images) is a little
non-linear, but how much and in which way it is non-linear depends
quite a lot on the image content and image size. Generally we only
have the gamma function for the purpose of non-linear coding; a steep
gamma space will waste a lot of codes, so if one is used at all a very
moderate gamma space has to be used, like 1.25. Linear coding is a
good choice in 8-bit/c too, and the only rational choice when working
with 16-bit/c images.
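
Whatever side of this dispute one takes, the code allocation itself is
easy to compute. A small Python sketch (gamma 2.2 is used only as an
example exponent here, not an endorsement of either position):

```python
# Count how many of the 256 codes in an 8-bit channel represent
# linear values below mid-gray (0.5) under a given encoding.
def codes_below(threshold, gamma):
    # Under gamma encoding, code k stands for the linear value
    # (k / 255) ** gamma; gamma = 1.0 is plain linear coding.
    return sum(1 for k in range(256) if (k / 255) ** gamma < threshold)

print(codes_below(0.5, 1.0))  # linear coding: 128 codes below mid-gray
print(codes_below(0.5, 2.2))  # gamma 2.2 coding: 187 codes below mid-gray
```

The point of contention in the thread is whether that shift of codes
toward the dark end matches the eye or wastes codes; the count itself
is just arithmetic.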

Please experiment with the contrast sensitivity at:
http://www.aim-dtp.net/aim/evaluation/perception/perception.htm
(Perceptual Gamma Space Evaluation); do not miss the small
demonstration at the end of the page.
The concept is, as you describe in your document, that the product of
the eye and display gammas should approximate unity so that linear grey
ramp signals sent to the display *appear* as linear ramps with evenly
spaced steps.

That is incorrect. The vision is bullet-straight linear for luminance.
In real life the surfaces have luminances and the "eye" sees them as
they are, linear light. Contrast sensitivity of the vision is an
entirely separate issue.

Linear image capture from a camera maps the RGB values according to
the surface luminances, linearly. When these RGB values are sent to
the CRT, the transfer function of the CRT tube will apply a gamma of
2.5 over them (the grid voltage versus beam current is about a gamma
2.5 function), so the image looks very dark. Therefore the image data
has to be taken into the gamma 2.5 space by applying a power of 1/2.5
over the data, so that when the tube applies the power 2.5, the
luminances on the CRT will have the correct linear relation to the
luminances in the original real-life scene, and the image looks good,
as if you were looking at the original scene itself.

Timo Autiokari http://www.aim-dtp.net/
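
The pipeline described above (linear capture, power 1/2.5
pre-correction, power 2.5 in the tube) can be sketched in a few lines
of Python; the 2.5 exponent is simply the figure quoted in the post,
real tubes vary:

```python
CRT_GAMMA = 2.5  # figure quoted in the post; real tubes vary

def precorrect(linear):
    # Inverse-gamma ("gamma encode") applied before the CRT.
    return linear ** (1.0 / CRT_GAMMA)

def crt(signal):
    # Model of the tube: beam current ~ grid voltage ** 2.5.
    return signal ** CRT_GAMMA

# Round trip: screen luminance tracks scene luminance linearly.
for scene in (0.05, 0.18, 0.50, 1.00):
    print(f"scene {scene:.2f} -> screen {crt(precorrect(scene)):.2f}")
```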
 
Chris said:
Only if you fail to read the manual and leave the gradient smoothness
at 100% (spline interpolated)....


The same - because the encoding of the signal has little to nothing to
do with the CRT physics. You would still gamma encode the signal to
get the best use of the bandwidth (and best signal quality for a given
bandwidth).

Chris


Hello

So for a 16-bit linear sensor and an 8-bit linear display we still
need to gamma encode; what level of gamma or inverse gamma would we
need to apply?

Mike Engles
 
So for a 16-bit linear sensor and an 8-bit linear display we still
need to gamma encode; what level of gamma or inverse gamma would we
need to apply?

Mike, sorry, but it isn't clear if you are really getting gamma yet.
Have you seen Poynton's other document on gamma?

He has his Gamma FAQ at http://www.poynton.com/GammaFAQ.html
sort of a quick shorthand review, but it also has a link to his paper
named The rehabilitation of gamma, at
http://www.poynton.com/papers/IST_SPIE_9801/index.html

which is possibly better for comprehending the difference and
significance of these factors: CRT, the eye, 8 bits, linear response,
etc.

Poynton's writing is very precisely stated, almost to a fault, because
every word is important to the meaning. It took me a long while before
I actually understood the significance of it, but once the terms
finally all became precisely clear, his writing became extremely clear
and precise, like magic.

Poynton clearly states that the CRT has a gamma of around 2.5 and that
the CRT thus absolutely requires images with reverse encoding (so the
eye avoids seeing the very dark images the CRT would otherwise produce
due to its losses). Clearly, then, the CRT requires images with gamma
encoding. He does not quibble about this point. It does not matter
what the source of the image is, nor whether it is 8 bits or 16 bits,
or even digital or analog. The CRT simply needs images with gamma
encoding, because the CRT response is going to do the opposite.

We do tend to use the CRT quite a lot, but even the earliest
television (not digital) required it, and they decided to put the CRT
gamma encoding in the camera instead of in every receiver. And still
today, all images everywhere have gamma encoding, period. This is done
automatically and silently by anything that creates images (cameras,
scanners, photo editors, etc). It's a fairly recent thing that
programs even mention it, but it has always been there. Otherwise
images would look really bad.

Printers also have a gamma issue, not 2.5, but more generally in the
1.8 range, due not to CRT gun losses but to dot gain as the ink
spreads on the paper, becoming darker than planned. This is why Apple
early on adopted images with gamma 1.8 (for their early Apple
LaserWriter printer), and added the remaining gamma needed by the CRT
in the computer video system hardware.

Microsoft and others later put it all in the image, to match what the TV
industry does, and then the printer expects and adapts to this.

In addition to saying the CRT requires gamma, Poynton also says that
the primary purpose of gamma is so 8-bit data can store the data
successfully, meaning to be perceived correctly by the human eye after
the 8-bit losses. This is NOT saying gamma is for the eye, not at all,
no way; the eye wants the linear result on the CRT face, which looks
like the original scene (same as if we look out the window at the
original scene; that original scene is linear too, which is what the
eye wants). In this respect, Poynton is only saying gamma encoding is
so that the 8-bit losses won't be perceived by the eye as losses; that
is, gamma encoding coincidentally approximates discarding what the eye
discards, and keeps what the eye wants. It wasn't planned or designed
by anyone, it just fortunately happened to work out that way, which is
a great thing, because gamma is absolutely required by the CRT too.
Poynton says it both ways: necessary for the CRT, and for 8-bit
digital.

Saying the "primary purpose" of gamma is the 8-bit data still seems
very misleading to me, one because the CRT absolutely requires it, and
two because we could simply use 16 bits and then there would be no
8-bit losses, no 8-bit issue for the digital medium, and obviously
gamma would then not be any primary purpose for the digital data. It
is just that 8 bits need it. And the CRT needs it.

I am just saying 16-bit images without gamma would do that too; 16
bits would simply keep everything, much more than the eye can possibly
use or perceive, so the eye doesn't perceive any problem with 8-bit
losses; there simply are no such losses with 16 bits. However the CRT
would still have a problem without gamma: the image becomes very dark
on the CRT screen, non-linear to the eye then.

Poynton carefully explains this: how the eye sees 1% intensity steps,
that is, we might see 100 step values at any one adaptation of the
iris (while viewing a stationary image). However those 100 perceived
values are 1% ratio steps, more nearly logarithmic, not linear at all,
and he says it would take 9900 linear binary steps (he says 11-bit
data) to contain the numeric range of those 100 1% steps. 8 bits
cannot store 9900 steps, only 256 linear steps, so 8 bits is simply
quite insufficient for the human eye to perceive the digital data
correctly. 16 bits of course can store 65,536 values, far more than
the 9900 linear steps needed to include the 100 1% steps the eye can
perceive, so 16 bits should be no problem for the eye, as is. 16 bits
is more a problem for the hardware, but not for the eye.
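
Those figures follow from simple arithmetic, assuming the rough model
of a 100:1 contrast range seen in 1% ratio steps (the 9900/11-bit
figure is the one Wayne corrects later in the thread):

```python
import math

contrast_range = 100.0  # rough usable range assumed in the post
step_ratio = 1.01       # each visible step is 1% above the last

# Number of distinguishable ratio steps across the whole range.
steps = math.log(contrast_range) / math.log(step_ratio)
print(round(steps))  # 463, the Poynton figure quoted later in the thread

# A linear quantizer must resolve the smallest step, which is 1% of
# the darkest level: 100 / 0.01 = 10000 codes, i.e. about 14 bits.
linear_codes = contrast_range / 0.01
print(linear_codes, math.ceil(math.log2(linear_codes)))
```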

However, if that 8-bit data is gamma encoded, which is a power
function, close in effect to logarithmic, with nearly the same
exponent as the eye perceives intensity anyway, then coincidentally
the data that is lost in the 8-bit conversion is incidental to those
100 1% steps, not needed to still include those 1% perceptual steps;
the losses simply don't matter now because of how those losses were
selected. Poynton explains this.

So after the CRT display response (the CRT losses) restores that
gamma-encoded data to linear (due to the unavoidable non-linear way
the CRT maps voltage to intensity), and shows the linear data on the
screen (the eye wants to see linear on the screen), that linear data
still has the 8-bit losses. The previously gamma-encoded data (now
restored to linear by the CRT, which we cannot control) does become
linear again (for the eye to see). The gamma encoding successfully
compensated for the CRT losses.

However most of the data is missing in 8 bits; only 256 steps are
present. So the losses of the 8-bit data are present, because 8-bit
data cannot show the range of 9900 steps needed to show the 100 1%
steps which the eye can and does see. However, the gamma encoding
selectively saves data in a near-logarithmic fashion, so what it did
save more nearly matches the 1% step response of the eye, which is all
the eye can see anyway, because these two effects have very similar
exponents and response. It was not done for the eye, but the method is
perfect for matching the 8-bit losses to what the eye doesn't see
anyway. The important 100 1% steps are retained by the gamma encoding,
which is otherwise required by the CRT anyway.
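
A quick way to see this effect: quantize two shadow tones one 1% step
apart to 8 bits, once linearly and once through a gamma 2.2 encoding
(2.2 is just an illustrative exponent here):

```python
GAMMA = 2.2  # illustrative encoding exponent

def quant_linear(v, bits=8):
    # Straight linear quantization of v in [0, 1] to an integer code.
    n = (1 << bits) - 1
    return round(v * n)

def quant_gamma(v, bits=8):
    # Gamma-encode first, then quantize to an integer code.
    n = (1 << bits) - 1
    return round((v ** (1.0 / GAMMA)) * n)

# Two shadow tones one "1% step" apart.
a, b = 0.0100, 0.0101
print(quant_linear(a), quant_linear(b))  # same code: the step is lost
print(quant_gamma(a), quant_gamma(b))    # different codes: step kept
```

Linear 8-bit coding collapses both tones to code 3, while the gamma
encoding assigns them codes 31 and 32, keeping the 1% step.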

16 bits could do that part too, simply retaining everything, much more
than the eye could use. So I don't see 8 bits as essential, other than
extremely handy for minimizing hardware memory requirements.

Repeating, clearly gamma is NOT done for the eye.

We can say it is done for the CRT; the CRT absolutely requires it
because it is going to respond in the opposite way, which is
unacceptable, so we must fix that so we see the linear scene data on
the CRT screen.

Or we can say gamma is done for the 8-bit encoding, to cut the losses
in 8-bit data, so those losses happen in a way invisible to the eye.
Not for the eye, or to match the eye, not at all, but only for the
8-bit data losses, so the eye won't see those losses. That seems
essential too, assuming we are going to use 8-bit data. However we
could simply have used 16 bits instead and forgotten that part. That
still leaves the CRT requiring gamma. And printers also need much of
it.

So the words "primary purpose" of gamma are not very clear to me. Both
the CRT and 8-bit data require it. Maybe like saying humans require
air and water, I suppose. Technically perhaps we could drink beer or
orange juice or Mountain Dew, so perhaps we can forget about water and
say that only air is THE primary need of humans... it seems rather
philosophical to me <g>

Not sure where we get those drinks without water, but it is also not
clear where we get the 8 bits without 16 bits. <g> Scanners and
cameras have more than 8 bits internally, because all such devices do
gamma encoding internally, because the CRT absolutely requires it, and
it is our standard to do it that way. True whether digital or not. But
if digital, more than 8 bits is needed to perform gamma and have good
8-bit data... those 9900 linear steps containing the 100 1% perceptual
steps again. Poynton calls this 11 bits. So cameras and scanners are
more than 8 bits internally, at least 12 bits today, but they
routinely output 8-bit data that is gamma encoded. It is simply our
standard. The CRT requires it, and 8-bit digital data requires it.
Which of the two is the more important is less clear to me. <g>
 
Wayne Fulton said:
Have you seen Poynton's other document on gamma?
He has his Gamma FAQ at http://www.poynton.com/GammaFAQ.html

Wayne,

I agree with most of what you wrote, however it looks like Mr. Poynton
has had another victim. The contrast sensitivity of the vision is
*not* anywhere near Weber's law (ratio scaling) as Mr. Poynton claims,
*when* the task is viewing photographic images on the CRT or on the
print, or the real-life scene.

You could very easily test this; not much math is needed for that. I
have a page about this very issue at
http://www.aim-dtp.net/aim/calibration/poynton/chapter_15.htm (Mr.
Poynton's false FAQ) where I show e.g. ratio scaling at 1.75:1; in the
dark end you can detect only an extremely small difference, whereas in
the highlights the same ratio produces a huge visible difference.
According to Mr. Poynton the visible difference would be
similar/equal. There are many other test charts on that page too, plus
an Excel workbook with the calculations.

Mr. Poynton incorrectly takes some of the results from psychophysical
perception research and applies them directly to a totally different
context, digital image coding and viewing photographic images.
Perception research is interested in the overall performance of the
vision, from less than starlight to more than bright daylight; this is
something like a range of 4,000,000:1 at minimum, whereas the range
that we can see when the adaptation of the vision is fixed (when the
illumination level does not change) is only something like 200:1 or
less.

Coding the images to gamma 2.2 space (as well as to gamma 1.8 space)
strongly deteriorates the image data; the contrast sensitivity of the
vision is nowhere near such an extremely non-linear function *when*
the task is viewing photographic images on the CRT or on the print.

Timo Autiokari
 
Wayne Fulton said:
So the words "primary purpose" of gamma are not very clear

One can find many "primary purposes" for gamma from the sadRGB
specification at: http://www.w3.org/Graphics/Color/sRGB where they
list the following:

--> We feel that there are many reasons to support a 2.2 CRT,
including;
-->
--> compatibility with a large legacy of images
--> Photo CD,
--> many Unix workstations,
--> PC's with 256+ colors and their desktop color schemes and icons,
--> several ultra-large image collections,
--> analog television,
--> Apple Macintosh video imagery,
--> CCIR 601 images,
--> a better fit with Weber's fraction,
--> compatibility with numerous standards,
--> TIFF/EP,
--> EXIF,
--> digital TV,
--> HDTV,
--> analog video,
--> PhotoCD,
--> it is closer to native CRTs gamma,
--> and consistency with a larger market of displays.

Notably they did not have the courage to mention perception or the
"eye"; doing so would have been a lie. They just refer to "Weber's
fraction" (usually called Weber's law) and do not mention what it is
supposed to be better than in this regard. It looks to me that the
list is in order of significance (this "standard" was published in
1996).

Also note that in effect they say that gamma 2.2 is not the gamma of
the native CRT. It follows that the sadRGB profile is not optimal for
publishing to the Web, since the very large majority of computers used
for Web surfing are not calibrated to gamma 2.2 space; they are in the
gamma space of the native CRT.

Timo Autiokari
 


Hi Timo,

Sorry, but I am not able to debate Weber with you. It seems fine and
preferable to me to just go with the establishment; that's why we pay
them. The eye is an extremely complex device, and there have been
decades of study of it and of all the other factors of TV video. I
doubt we actually know it all yet, but the results seem very workable.
Regardless, imaging is done as it is done, and I think my images
should fit into that existing system for images, whatever it is.

I am aware of your site and the years of your past debates on Usenet
on these various matters, mainly gamma. I don't understand all of it,
but regardless of my own view of the details, I must say I appreciate
your always being civil to the others, who generally behaved so
abysmally in those debates. You always want to discuss only pertinent
facts and details instead of personalities and abuse. I wish we could
all manage that. Thanks for showing it is possible to be civil on
Usenet.

Incidental here, but to clear up a point: I misquoted Poynton before,
when I said "those 9900 linear steps containing the 100 1% perceptual
steps again. Poynton calls this 11 bits." Poynton of course instead
calls this "about 14 bits" on page 6. 2^13 is 8192, short of 9900
values. But he says gamma maps this to 463 values, requiring 9 bits.
We use 8 bits for obvious reasons, which he says is adequate for
broadcast standards of contrast. The top of page 2 says "11 bits or
more", I assume in that same context.
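
The corrected figures check out; this is just the arithmetic from the
paragraph above:

```python
import math

# "about 14 bits" for roughly 9900 linear steps:
# 2^13 = 8192 falls short, 2^14 = 16384 suffices.
print(2 ** 13 < 9900 <= 2 ** 14)  # True

# 463 gamma-mapped values need 9 bits: 256 < 463 <= 512.
print(math.ceil(math.log2(463)))  # 9
```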

I think Poynton uses the term "primary purpose" only in regard to the
title of his paper, "The rehabilitation of gamma". As I see it, the
point is that gamma is an extreme adjustment, but required by the CRT,
and therefore has been done forever on all analog image data, which
was obviously the only primary purpose of gamma historically, until
fairly recently.

But gamma is an extremely severe adjustment, and mismatches are
possible (some cases even use different standards), so gamma was
considered an unfortunate requirement for digital data in that sense.
So the point of his title appears to be that gamma is not only for the
CRT, but that gamma actually helps 8-bit data, to the extent of making
it possible to use 8-bit data at all. 8 bits is totally unacceptable
without something similar to gamma encoding, and gamma is extremely
practical since it is required anyway for the CRT. So for 8-bit binary
data, gamma is a primary purpose; it makes it possible to use 8-bit
data at all, thus the rehabilitation of gamma. For all other cases,
such as 16-bit data or analog data, the CRT is the only purpose, which
seems primary too.

But gamma is required regardless, because it is our standard if for no
other reason.
 
Timo Autiokari said:
That is not correct. The lightness perception of human vision is
(about) logarithmic.

Timo, I refer you to your bête noire on this matter. Charles Poynton
states quite clearly, referencing several independent studies, that:
"Experiments have shown that this assumption (that the log function is
an accurate model of lightness sensation) does not hold very well, and
coding according to a power law is found to be a better approximation
to lightness response than a logarithmic function."

This conforms with the experience of many of my professional
colleagues who have undertaken such assessments throughout the
development of the video industry, as well as with some of my own work
in the development of thermal imaging display systems. The log
response issue is a matter of units and it often causes confusion, but
you have taken it to a whole new level.
Lightness however does not relate to the task of
viewing the grayrange (be it on the CRT or on a print).
In this context, lightness is the perceptual sensation produced by the
brain on exposure of the eye to light under a fixed adaptation. As
such, it is clear that you have misunderstood the original statement and
further comment is unnecessary.
Please experiment with the contrast sensitivity at:
http://www.aim-dtp.net/aim/evaluation/perception/perception.htm

Timo, I know your work. I know your site well. I even recommend people
to specific areas of it, such as setting their monitors up correctly
before attempting to make any profile measurement. However, I do not
subscribe to your theory, which is completely at odds with everything
that I know from my own experience and from the experimental results of
better authorities on the topic than either you or I.
 
Wayne said:
Mike, sorry, but it isn't clear if you are really getting gamma yet.
Have you seen Poynton's other document on gamma?

He has his Gamma FAQ at http://www.poynton.com/GammaFAQ.html
sort of a quick shorthand review, but it also has a link to his paper named
The rehabilitation of gamma, at
http://www.poynton.com/papers/IST_SPIE_9801/index.html

which is possibly better to comprehend the difference and significance of
these factors, of CRT, the eye, 8 bits, linear response, etc.

Poynton's writing is very precisely stated, perfect almost to a fault, because
the meaning of every word is very important to the meaning. It took me a long
while to grow before I actually understood the significance of it, but when
the terms finally all become precisely clear, then his writing becomes
extremely clear and precise, like magic.

Poynton clearly states the CRT has gamma around 2.5 and that the CRT thus
absolutely requires images with reverse encoding (so the eye avoids seeing
very dark images on the CRT face which the CRT produces due to its losses).
Therefore clearly and obviously, the CRT requires images with gamma encoding.
He does not quibble about his point. It does not matter the source of that
image, nor if 8 bits or 16 bits, or even if digital or analog. The CRT simply
needs images with gamma encoding, simply because the CRT response is going to
do the opposite.

We do tend to use CRT quite a lot, but even the earliest television (not
digital) required it, and they decided to put the CRT gamma encoding in the
camera instead of into every receiver. And still today, all images everywhere
have gamma encoding, period. This is done automatically and silently by
anything that creates images (cameras, scanners, photo editors, etc). Its a
fairly recent thing that programs are even mentioning it but it has always
been there. Otherwise it would look really bad.

Printers also have a gamma issue, not 2.5, but more generally the 1.8 range,
not due to CRT gun losses, but more due to dot gain as the ink spreads on the
paper, becoming darker than planned. This is why Apple early on adopted
images with gamma 1.8 (for their early Apple laser writer printer), and added
the remainder gamma needed by the CRT in the computer video system hardware.

Microsoft and others later put it all in the image, to match what the TV
industry does, and then the printer expects and adapts to this.

In addition to saying the CRT requires gamma, Poynton also does say the words
that the primary purpose of gamma is so 8 bit data can store the data
sucessfully, meaning to be perceived correctly by the human eye after 8 bit
losses. This is NOT saying gamma is for the eye, not at all, no way, the eye
wants the linear result on the CRT face which looks like the original image
scene (same as if we look out the window at the original scene, that original
scene is linear too, which is what the eye wants). In this respect, Poynton is
only saying gamma encoding is so that the 8 bit losses wont be percieved by
the eye as losses, that is, gamma encoding coincidentally approximates
discarding what the eye discards, and keeps what the eye wants. Wasnt planned
or designed by anyone, it just fortunately happened to work out that way,
which is a great thing, because it is absolutely required by the CRT too.
Poynton says it both ways, necessary for the CRT, and for 8 bit digital.

Saying the "primary purpose" of gamma is the 8 bit data still seems very
misleading to me, one because the CRT absolutely requires it, and two because
we could simply use 16 bits and then there would be no 8 bit losses, and no 8
bit issue for the digital medium, and obviously then gamma would not be any
primary purpose for the digital data. It is just that 8 bits need it. And
the CRT needs it.

I am just saying 16 bit images without gamma would do that too, 16 bits would
simply keep everything, much more than the eye can possibly use or perceive,
so the eye doesnt perceive any problem with any 8 bit losses, there simply are
no losses if 16 bits. However the CRT would still perceive a problem without
gamma, that image becomes very dark on the CRT screen, non linear to the eye
then.

Poynton carefully explains this, how the eye sees 1% intensity steps, that is,
we might see 100 step values at any one adaptation of it iris (while viewing
this stationary image). However those 100 perceived values are these 1%
steps, more nearly logarithmic, not linear at all, and he says it would take
9900 linear binary steps (he says 11 bit data) to contain the numeric range of
those 100 1% steps. 8 bits cannot not store 9900 steps, only 256 linear
steps, so 8 bits simply is quite insufficient for the human eye to perceive
the digital data correctly. 16 bits of course can store 65535 values, far
more than the 9900 linear steps to include the 100 1% steps the eye can
perceive, so 16 bits should be no problem for the eye, as is. 16 bits is more
a problem for the hardware then but not the eye.

However, if that 8 bit data were gamma encoded, which is exponential which is
the same as logarithmic, with nearly the same exponent as the eye perceives
intensity anyway, then coincidentally, the data that is lost by the 8 bit
conversion is incidental to those 100 1% steps, not needed to still include
those 1% perceptial steps, the losses simply dont matter now because of how
those losses were selected. Poynton explains this.

So after the CRT display response (the CRT losses) restores that gamma encoded
data to linear (due to the unavoidable non-linear way the CRT maps voltage to
intensity), and shows the linear data on the screen (the eye wants to see
linear on the screen), that linear data still has the 8 bit losses. The
previously gamma encoded data (now restored to linear by the CRT, which we
cannot control) does become linear again for the eye to see. The gamma
encoding successfully compensated for the CRT losses.

However most of the data is missing in 8 bits; only 256 steps are present, so
the losses of the 8 bit data remain, because 8 bit data cannot cover the
range of 9900 steps needed to show the 100 1% steps which the eye can and does
see. However, the gamma encoding selectively saves data near-logarithmically,
so what it did save closely matches the 1% step response of the eye, which is
all the eye can see anyway, because these two effects have very similar
exponents and response. Not done for the eye, but the method is perfect for
matching the 8 bit losses to what the eye doesn't see anyway. The important
100 1% steps are retained by the power-law gamma encoding, which is otherwise
required by the CRT anyway.

16 bits could do that part too, simply retaining everything, much more than
the eye could use. So I don't see 8 bits as essential, other than being
extremely handy for minimizing hardware memory requirements.

Repeating, clearly gamma is NOT done for the eye.

We can say it is done for the CRT, the CRT absolutely requires it because it
is going to respond in an opposite way, which is unacceptable, so we must fix
that so we see the linear scene data on the CRT screen.

Or we can say gamma is done for the 8 bit encoding, to cut the losses in 8 bit
data, so those losses fall where the eye cannot see them. Not for the
eye, or to match the eye, not at all, but only for the 8 bit data losses, so
the eye won't see those losses. That seems essential too, assuming we
are going to use 8 bit data. However, we could simply have used 16 bits
instead and forgotten that part. That still leaves the CRT requiring gamma.
And printers also need much of it.

So the words "primary purpose" of gamma are not very clear to me. Both the CRT
and 8 bit data require it. Maybe like saying humans require air and water, I
suppose. Technically perhaps we could drink beer or orange juice or Mountain
Dew, so perhaps we can forget about water and say that only air is
THE primary need of humans... it seems rather philosophical to me <g>

Not sure where we get those drinks without water, but it is also not clear
where we get the 8 bits without 16 bits. <g> Scanners and cameras have more
than 8 bits internally, because all such devices do gamma encoding internally,
because the CRT absolutely requires it, and it is our standard to do it that
way. That is true whether digital or not. But if digital, more than 8 bits is
needed to perform gamma and still have good 8 bit data... those 9900 linear
steps containing the 100 1% perceptual steps again. Poynton calls this 11
bits. So cameras and scanners are more than 8 bits internally, at least 12
bits today, but they routinely output 8 bit data that is gamma encoded. It is
simply our standard. The CRT requires it, and 8 bit digital data requires it.
Which of the two is the more important is less clear to me. <g>


Hello

It seems to be a bone of contention as to whether gamma encoding is
primarily for linearising a CRT or for maximising the use of 8 bits
to describe an image.

Chris Cox seems to imply that even if we did not have to use a CRT, we
would still have to gamma encode the image.

I was asking what degree of encoding we would need to apply.

Theoretically we would not need to apply any gamma to a linear 8 bit
image when viewing it on a linear display, even though perceptually the
shadows might look banded.

In the case of viewing on a CRT, we are actually killing two birds with
one stone. We are correcting for the CRT as well as making the 8
bit image look perceptually linear. This would only be successful if the
gamma was applied in the analogue domain before digitising, OR by applying
the gamma to a 12 or more bit linear image and then converting to 8 bit.
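To sketch that last route (my own illustration, not from the thread; the 1/2.2 exponent and the helper name are assumptions for the example):

```python
# Hypothetical sketch: gamma-encode a 12 bit linear value down to
# 8 bits, assuming a simple 1/2.2 power with no linear segment.
def encode_12bit_to_8bit(linear12, gamma=2.2):
    """Map a 0..4095 linear value to a 0..255 gamma-encoded value."""
    normalized = linear12 / 4095.0          # 0.0 .. 1.0 linear light
    encoded = normalized ** (1.0 / gamma)   # apply the inverse gamma
    return round(encoded * 255)             # quantize to 8 bits

# The dark end receives many more output codes than a plain linear
# 8 bit conversion would give it, and mid-scale linear light lands
# well above code 128.
shadow = encode_12bit_to_8bit(16)
midtone = encode_12bit_to_8bit(2048)
```

Done this way, the rounding to 8 bits discards detail on a perceptual scale rather than a linear one, which is the point of starting from 12 or more linear bits.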

Kennedy has sent me a very good demonstration of how the eye would see
the quantisation and why we need to gamma encode when using 8 bits.

I am still digesting the rest of his correspondence.


Mike Engles
 
It seems to be a bone of contention as to whether gamma encoding is
primarily for linearising a CRT or for maximising the use of 8 bits
to describe an image.

Gamma is clearly required and necessary for both purposes, so any contention
only appears to be which of the two requirements is the primary purpose, which
is too philosophical for me. However my own bias is that gamma was necessary
for television many years before we ever digitized image data, and still is,
and it seems difficult to ignore this prior claim. It can be argued that
there are some linear video displays today, perhaps eliminating the need for
one claim (if enough bits), but it can also be argued that there is also 16
bit data today, eliminating the need for the other (if a linear display).

I would imagine the day will come when we routinely show 16 bit data on linear
displays. The 64 bit operating systems seem a step closer. Meanwhile, it is
an extremely lucky coincidence that the one solution solves both problems.
 
Wayne said:
Gamma is clearly required and necessary for both purposes, so any contention
only appears to be which of the two requirements is the primary purpose, which
is too philosophical for me. However my own bias is that gamma was necessary
for television many years before we ever digitized image data, and still is,
and it seems difficult to ignore this prior claim. It can be argued that
there are some linear video displays today, perhaps eliminating the need for
one claim (if enough bits), but it can also be argued that there is also 16
bit data today, eliminating the need for the other (if a linear display).

I would imagine the day will come when we routinely show 16 bit data on linear
displays. The 64 bit operating systems seem a step closer. Meanwhile, it is
an extremely lucky coincidence that the one solution solves both problems.


Hello

That is why I asked Chris Cox the question.
With a linear display and a 16 bit linear image what gamma encoding
would we have to use for reasons of perception?

He has said that we would need gamma encoding in such a scenario, at
least that is my understanding (or lack of it); that such encoding is needed
for reasons of perception only and not for a CRT, or for any kind of
display.

Mike Engles
 
Wayne said:
Mike, sorry, but it isn't clear if you are really getting gamma yet.
Have you seen Poynton's other document on gamma?

He has his Gamma FAQ at http://www.poynton.com/GammaFAQ.html
sort of a quick shorthand review, but it also has a link to his paper named
The rehabilitation of gamma, at
http://www.poynton.com/papers/IST_SPIE_9801/index.html

which is possibly better to comprehend the difference and significance of
these factors, of CRT, the eye, 8 bits, linear response, etc.

Poynton's writing is very precisely stated, almost to a fault, because
every word is important to the meaning. It took me a long
while before I actually understood the significance of it, but once
the terms finally all became precisely clear, his writing became
extremely clear and precise, like magic.

Poynton clearly states the CRT has gamma around 2.5 and that the CRT thus
absolutely requires images with reverse encoding (so the eye avoids seeing
very dark images on the CRT face which the CRT produces due to its losses).
Therefore clearly and obviously, the CRT requires images with gamma encoding.
He does not quibble about his point. It does not matter the source of that
image, nor if 8 bits or 16 bits, or even if digital or analog. The CRT simply
needs images with gamma encoding, simply because the CRT response is going to
do the opposite.
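The compensation described here can be sketched in a few lines (a toy model using Poynton's 2.5 figure; real CRTs and encoders include black-level offsets this ignores):

```python
# Toy model: the CRT's power-law response, and the inverse encoding
# that must be applied beforehand so the displayed result is linear.
CRT_GAMMA = 2.5

def crt_response(v):
    """What the tube does to its input drive, beyond our control."""
    return v ** CRT_GAMMA

def gamma_encode(linear):
    """What the camera or encoder does in advance to compensate."""
    return linear ** (1.0 / CRT_GAMMA)

# Without encoding, a 50% gray displays far too dark; with it, the
# round trip lands back on the original linear value.
dark = crt_response(0.5)                    # about 0.18, much too dark
restored = crt_response(gamma_encode(0.5))  # back to 0.5
```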

We do tend to use CRTs quite a lot, but even the earliest television (not
digital) required it, and they decided to put the CRT gamma encoding in the
camera instead of into every receiver. And still today, all images everywhere
have gamma encoding, period. This is done automatically and silently by
anything that creates images (cameras, scanners, photo editors, etc). It's a
fairly recent thing for programs even to mention it, but it has always
been there. Otherwise images would look really bad.

Printers also have a gamma issue, not 2.5, but more generally the 1.8 range,
not due to CRT gun losses, but more due to dot gain as the ink spreads on the
paper, becoming darker than planned. This is why Apple early on adopted
images with gamma 1.8 (for their early Apple LaserWriter printer), and added
the remaining gamma needed by the CRT in the computer video system hardware.

Microsoft and others later put it all in the image, to match what the TV
industry does, and then the printer expects and adapts to this.

In addition to saying the CRT requires gamma, Poynton also does say the words
that the primary purpose of gamma is so 8 bit data can store the data
successfully, meaning to be perceived correctly by the human eye after 8 bit
losses. This is NOT saying gamma is for the eye, not at all, no way; the eye
wants the linear result on the CRT face which looks like the original
scene (same as if we look out the window at the original scene, which
is linear too, which is what the eye wants). In this respect, Poynton is
only saying gamma encoding is so that the 8 bit losses won't be perceived by
the eye as losses; that is, gamma encoding coincidentally approximates
discarding what the eye discards, and keeps what the eye wants. It wasn't
planned or designed by anyone, it just fortunately happened to work out that
way, which is a great thing, because it is absolutely required by the CRT too.
Poynton says it both ways: necessary for the CRT, and for 8 bit digital.

Saying the "primary purpose" of gamma is the 8 bit data still seems very
misleading to me, one because the CRT absolutely requires it, and two because
we could simply use 16 bits and then there would be no 8 bit losses, and no 8
bit issue for the digital medium, and obviously then gamma would not be any
primary purpose for the digital data. It is just that 8 bits need it. And
the CRT needs it.

I am just saying 16 bit images without gamma would do that too; 16 bits would
simply keep everything, much more than the eye can possibly use or perceive,
so the eye doesn't perceive any problem from 8 bit losses; there simply are
no losses with 16 bits. However the CRT would still have a problem without
gamma: that image becomes very dark on the CRT screen, non-linear to the eye
then.

Poynton carefully explains this, how the eye sees 1% intensity steps; that is,
we might see 100 step values at any one adaptation of the iris (while viewing
this stationary image). However those 100 perceived values are these 1%
steps, more nearly logarithmic, not linear at all, and he says it would take
9900 linear binary steps (he says 11 bit data) to contain the numeric range of
those 100 1% steps. 8 bits cannot store 9900 steps, only 256 linear
steps, so 8 bits is simply insufficient for the human eye to perceive
the digital data correctly. 16 bits of course can store 65536 values, far
more than the 9900 linear steps needed to include the 100 1% steps the eye can
perceive, so 16 bits should be no problem for the eye, as is. 16 bits is more
of a problem for the hardware, but not for the eye.
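My reading of where the 9900 figure comes from, as arithmetic (an interpretation on my part, so treat it as a sketch): across a 100:1 intensity range, keeping 1% precision at the darkest level forces a linear step of 0.01 of that darkest level.

```python
# Where the 9900 linear steps figure plausibly comes from: a 100:1
# range coded linearly, with steps no coarser than 1% of the darkest
# level, so the eye's just-noticeable difference is never exceeded.
darkest, brightest = 1.0, 100.0
step = 0.01 * darkest                  # 1% of the darkest level
linear_codes = (brightest - darkest) / step
# linear_codes comes out at 9900, versus only 256 codes in 8 bits
```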

However, if that 8 bit data were gamma encoded, a power-law encoding that is
nearly logarithmic, with almost the same exponent as the eye's own perception
of intensity, then coincidentally the data lost in the 8 bit conversion falls
between those 100 1% perceptual steps rather than on them; the losses simply
don't matter now because of how they were selected. Poynton explains this.

So after the CRT display response (the CRT losses) restores that gamma encoded
data to linear (due to the unavoidable non-linear way the CRT maps voltage to
intensity), and shows the linear data on the screen (the eye wants to see
linear on the screen), that linear data still has the 8 bit losses. The
previously gamma encoded data (now restored to linear by the CRT, which we
cannot control) does become linear again for the eye to see. The gamma
encoding successfully compensated for the CRT losses.

However most of the data is missing in 8 bits; only 256 steps are present, so
the losses of the 8 bit data remain, because 8 bit data cannot cover the
range of 9900 steps needed to show the 100 1% steps which the eye can and does
see. However, the gamma encoding selectively saves data near-logarithmically,
so what it did save closely matches the 1% step response of the eye, which is
all the eye can see anyway, because these two effects have very similar
exponents and response. Not done for the eye, but the method is perfect for
matching the 8 bit losses to what the eye doesn't see anyway. The important
100 1% steps are retained by the power-law gamma encoding, which is otherwise
required by the CRT anyway.
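One way to see the reallocation (my own numbers, assuming a simple 2.2 power with no linear toe): count how many 8 bit codes each coding spends below 1% of full linear intensity, which is where the eye's 1% steps are densest.

```python
# How many 8 bit codes fall below 1% linear gray under each coding.
# Plain linear coding spends almost no codes there; a 1/2.2 gamma
# encoding spends roughly an eighth of all its codes there.
GAMMA = 2.2
linear_codes_below = round(0.01 * 255)                  # two or three codes
gamma_codes_below = round((0.01 ** (1 / GAMMA)) * 255)  # about 31 codes
```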

16 bits could do that part too, simply retaining everything, much more than
the eye could use. So I don't see 8 bits as essential, other than being
extremely handy for minimizing hardware memory requirements.

Repeating, clearly gamma is NOT done for the eye.

We can say it is done for the CRT, the CRT absolutely requires it because it
is going to respond in an opposite way, which is unacceptable, so we must fix
that so we see the linear scene data on the CRT screen.

Or we can say gamma is done for the 8 bit encoding, to cut the losses in 8 bit
data, so those losses fall where the eye cannot see them. Not for the
eye, or to match the eye, not at all, but only for the 8 bit data losses, so
the eye won't see those losses. That seems essential too, assuming we
are going to use 8 bit data. However, we could simply have used 16 bits
instead and forgotten that part. That still leaves the CRT requiring gamma.
And printers also need much of it.

So the words "primary purpose" of gamma are not very clear to me. Both the CRT
and 8 bit data require it. Maybe like saying humans require air and water, I
suppose. Technically perhaps we could drink beer or orange juice or Mountain
Dew, so perhaps we can forget about water and say that only air is
THE primary need of humans... it seems rather philosophical to me <g>

Not sure where we get those drinks without water, but it is also not clear
where we get the 8 bits without 16 bits. <g> Scanners and cameras have more
than 8 bits internally, because all such devices do gamma encoding internally,
because the CRT absolutely requires it, and it is our standard to do it that
way. That is true whether digital or not. But if digital, more than 8 bits is
needed to perform gamma and still have good 8 bit data... those 9900 linear
steps containing the 100 1% perceptual steps again. Poynton calls this 11
bits. So cameras and scanners are more than 8 bits internally, at least 12
bits today, but they routinely output 8 bit data that is gamma encoded. It is
simply our standard. The CRT requires it, and 8 bit digital data requires it.
Which of the two is the more important is less clear to me. <g>


Hello Wayne

I have read Charles Poynton's Gamma FAQ quite often and decided to try
and work out how he got his numbers when discussing the spacing of codes
in a perceptual space.

Now a perceptual space should have a gamma of .33, or the inverse of gamma
3.0; at least that is what I understand from the FAQ.

I calculated the actual values for codes 25 and 26 raised to a power of .33.

So code 25 is (25/255)^.33 x 255 = 118.49
code 26 is (26/255)^.33 x 255 = 120.04

Mr Poynton says (page 5 of the gamma PDF) that the luminance ratio
between codes 25 and 26, presumably when raised to a power of .33, is 4%.

My calculation makes it 120.04/118.49 = 1.0131, or 1.31%.
That is very close to the magic 1%.

The luminance ratio of codes 5 and 6 (69.85 and 74.18 resp.) is 1.062 or
6.2%.
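The arithmetic above can be checked directly (reproducing Mike's own formula; the tiny difference from his 1.31% is just rounding of intermediates):

```python
# Mike's calculation, spelled out: raise codes to the .33 power
# ("perceptual space") and compare adjacent levels.
def perceptual(code):
    return (code / 255) ** 0.33 * 255

c25, c26 = perceptual(25), perceptual(26)   # about 118.50 and 120.04
step_percent = (c26 / c25 - 1) * 100        # about 1.30 percent
```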

Perhaps I am not making the correct calculations.

Mike Engles
 
Mr Poynton says (page 5 of the gamma PDF) that the luminance ratio
between codes 25 and 26, presumably when raised to a power of .33, is 4%.

My calculation makes it 120.04/118.49 = 1.0131, or 1.31%.
That is very close to the magic 1%.


I assumed Poynton was speaking of the linear image which we see displayed on
the CRT screen, after the CRT non-linearity has necessarily decoded the gamma
encoded data back to be linear again. We never see anything else.

Thus, the difference between 25 and 26 is 1 in 25, or 1/25, which is 4%.

The difference in 100 and 101 is 1 in 100, or 1%, which is the threshold he
mentions at 100 in regard to 1%. And in the bright half, the difference in
200 and 201 is only 0.5%, so we truly cannot distinguish many of those bright
values as unique values (meaning it is unimportant if 8 bits doesn't retain
them all as unique values either; this being the meaning of perceptual).
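In the linear (displayed) domain those percentages are just 1/n, which is easy to tabulate (a trivial check of my own, not Poynton's figures):

```python
# The relative step from display code n to n+1 in linear light is 1/n,
# which gives the 4%, 1%, and 0.5% figures directly.
steps = {n: 100.0 / n for n in (25, 100, 200)}
# steps == {25: 4.0, 100: 1.0, 200: 0.5}
```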

I can't say much about Weber's Law, so I trust Poynton. Weber's law is well
accepted, it is what is taught, but it covers much more than vision. It is
about detecting "just noticeable differences", against a background (the 1%
delta is detectable). For example, the stars are as bright in the day as at
night, but the background differs. The way to test this with vision is to put
one area of intensity inside a larger intensity area, to see if the center
value is distinguishable against that background.

I am thinking this 1% value is not actually a constant, because I've seen
charts of how it varies slightly with intensity (but tiny with respect to the
overall intensity range of one adaptation of the iris), and to me, it does
seem to require a little more than 1% in the darker areas (some of which seems
possibly attributed to my monitor and adjustments). I think the original
paper (150 years ago) stated this factor as 1% to 2%. Regarding the eye, a
few sources say 2%. Poynton says 1%.

I think the exact details are less important than the concept of there being
perceptual steps, so that gamma for the CRT is also important to allow use of
8 bit data due to these perceptual increments.
 
Wayne said:
I assumed Poynton was speaking of the linear image which we see displayed on
the CRT screen, after the CRT non-linearity has necessarily decoded the gamma
encoded data back to be linear again. We never see anything else.

Thus, the difference between 25 and 26 is 1 in 25, or 1/25, which is 4%.

The difference in 100 and 101 is 1 in 100, or 1%, which is the threshold he
mentions at 100 in regard to 1%. And in the bright half, the difference in
200 and 201 is only 0.5%, so we truly cannot distinguish many of those bright
values as unique values (meaning it is unimportant if 8 bits doesn't retain
them all as unique values either; this being the meaning of perceptual).

I can't say much about Weber's Law, so I trust Poynton. Weber's law is well
accepted, it is what is taught, but it covers much more than vision. It is
about detecting "just noticeable differences", against a background (the 1%
delta is detectable). For example, the stars are as bright in the day as at
night, but the background differs. The way to test this with vision is to put
one area of intensity inside a larger intensity area, to see if the center
value is distinguishable against that background.

I am thinking this 1% value is not actually a constant, because I've seen
charts of how it varies slightly with intensity (but tiny with respect to the
overall intensity range of one adaptation of the iris), and to me, it does
seem to require a little more than 1% in the darker areas (some of which seems
possibly attributed to my monitor and adjustments). I think the original
paper (150 years ago) stated this factor as 1% to 2%. Regarding the eye, a
few sources say 2%. Poynton says 1%.

I think the exact details are less important than the concept of there being
perceptual steps, so that gamma for the CRT is also important to allow use of
8 bit data due to these perceptual increments.


Hello

In a system of numbers that would be correct, but we are talking about
levels and the luminance difference between them.

Surely in a linear image the luminance differences between levels
are the same; 256 levels correspond to .4% per level (well within the 1%
margin), but we would see them in a perceptual space.

In a perceptual space, which I suppose is what he was talking about, those
levels are raised to a power of .33.

Mike Engles
 
Mike Engles said:
The luminance ratio of codes 5 and 6 (69.85 and 74.18 resp.)
is 1.062 or 6.2%. Perhaps I am not making the correct calculations.

The CRT has a max luminance (typically something around
100 cd/m2). Prints also have a max luminance; with them it depends on
the illumination: put something like 314/0.8 lux on the photo and the
white of the paper is at about 100 cd/m2.

The CRT as well as a photo on paper also has a blackpoint; the
digital level 0 does not mean that the area of the screen that is
driven by level 0 would emit zero photons. It is the same with the
print too.

For photographic prints as well as for better CRTs it is possible to
achieve about a 250:1 dynamic range. From that, if the max luminance is
100 cd/m2 then digital level 0 is equal to 0.4 cd/m2. The digital
codespace then divides that range (from 0.4 cd/m2 to 100 cd/m2) according
to how the codespace was selected.
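The black-point figure follows directly (just restating the arithmetic above):

```python
# A 250:1 dynamic range with white at 100 cd/m2 puts digital level 0
# at 100 / 250 = 0.4 cd/m2.
white_luminance = 100.0    # cd/m2
dynamic_range = 250.0
black_luminance = white_luminance / dynamic_range   # 0.4 cd/m2
```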

Timo Autiokari http://www.aim-dtp.net/
 