Kennedy McEwen
Mike Engles said:Hello
Yes, send me the image or give me a link.
(e-mail address removed).
Kennedy said:OK - it's on its way. Let me know if it explains what is going on any better for you.
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
Mike Engles said:Have you sent the image? If yes, I have an awful feeling that my spam filtering has done it in. I would be obliged if you could send it to my wife's email (e-mail address removed). She doesn't get spam.
I get a lot of spam; I must change my email name.
Kennedy said:Yes, I sent it off yesterday, but I will try this address now. Let me know if you get it.
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
Mike Engles said:Hello
Yes, send me the image or give me a link.
(e-mail address removed).
I did as you suggested and applied an inverse gamma to an 8-bit linear gradient I made in XARAX. Photoshop makes one with a curious shallow U-shaped histogram.
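A little Python makes it easy to see why the histogram stops being flat. This is only a sketch of a plain power-law encode with an assumed exponent of 1/2.2; it will not reproduce Photoshop's exact curve or any dithering it applies:

    # Gamma-encode an 8-bit linear ramp and inspect the histogram.
    # Assumes a plain power-law encode with exponent 1/2.2 (hypothetical;
    # Photoshop's own processing may differ).
    gamma = 1 / 2.2

    ramp = list(range(256))      # linear gradient, one pixel per code
    encoded = [round(255 * (v / 255) ** gamma) for v in ramp]

    hist = [encoded.count(c) for c in range(256)]
    gaps = sum(1 for n in hist if n == 0)   # shadow codes left empty (stretched)
    piles = sum(1 for n in hist if n > 1)   # highlight codes hit more than once
    print(gaps, piles)

The dark end of the ramp is stretched (empty bins) and the bright end compressed (bins hit more than once), so the histogram of the result cannot stay flat.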
It is idle speculation, but what kind of working practice would we have now if, in the beginning, the inverse gamma had been applied in the TV set rather than to the TV signal? All perceptual corrections would have been done in the TV. We would then be working in linear and applying a transform when exchanging data for printing.
Timo said:Yes, inverse gamma is applied because the CRT monitor will apply gamma 2.5. Human vision requires linear light (as it is in real life), so the overall tonal reproduction curve (from scene luminances to the luminances on the media) must be linear.
Editing non-linear image data severely damages the image quality; please see some demonstrations here:
Kennedy McEwen said:Human lightness sensitivity has a gamma of around 0.3-0.4.
The concept is, as you describe in your document, that the product of the eye and display gammas should approximate unity, so that linear grey-ramp signals sent to the display *appear* as linear ramps with evenly spaced steps.
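Taking the top of that range as a worked example: an eye gamma of 0.4 against a display gamma of 2.5 gives 0.4 x 2.5 = 1.0, so a numerically linear ramp ends up perceptually evenly stepped.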
Chris said:Only if you fail to read the manual and leave the gradient smoothness
at 100% (spline interpolated)....
The same - because the encoding of the signal has little to nothing to do with the CRT physics. You would still gamma-encode the signal to get the best use of the bandwidth (and the best signal quality for a given bandwidth).
Chris
So for a 16-bit linear sensor and an 8-bit linear display we still need to gamma-encode; what level of gamma or inverse gamma would we need to apply?
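As a rough sketch of what that encoding step looks like, assuming a plain power-law with exponent 1/2.2 (a real standard such as sRGB adds a linear toe near black):

    # Map a 16-bit linear sample (0..65535) to an 8-bit gamma-encoded code.
    # The 1/2.2 exponent is an assumption, not a fixed standard.
    def encode_16to8(v16, gamma=1 / 2.2):
        linear = v16 / 65535.0
        return round(255 * linear ** gamma)

    print(encode_16to8(32768))   # linear mid-grey lands near code 186, not 128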
Wayne Fulton said:Have you seen Poynton's other document on gamma?
He has his Gamma FAQ at http://www.poynton.com/GammaFAQ.html
Wayne Fulton said:So the words "primary purpose" of gamma are not very clear
One can find many "primary purposes" for gamma from the sadRGB
specification at: http://www.w3.org/Graphics/Color/sRGB where they
list the following:
--> We feel that there are many reasons to support a 2.2 CRT,
including;
-->
--> compatibility with a large legacy of images
--> Photo CD,
--> many Unix workstations,
--> PC's with 256+ colors and their desktop color schemes and icons,
--> several ultra-large image collections,
--> analog television,
--> Apple Macintosh video imagery,
--> CCIR 601 images,
--> a better fit with Weber's fraction,
--> compatibility with numerous standards,
--> TIFF/EP,
--> EXIF,
--> digital TV,
--> HDTV,
--> analog video,
--> PhotoCD,
--> it is closer to native CRTs gamma,
--> and consistency with a larger market of displays.
Notably, they did not have the courage to mention perception or the "eye"; doing so would have been a lie. They just refer to the "Weber's fraction" (usually called Weber's law) and do not say compared to what it is supposed to be better in this regard. It looks to me as if the list is in order of significance (this "standard" was published in 1996).
Also note that, in effect, they say that gamma 2.2 is not the gamma of the native CRT. From this it follows that the sadRGB profile is not the optimum for publishing to the Web, since the very large majority of computers used for Web surfing are not calibrated to the gamma 2.2 space; they are in the gamma space of the native CRT.
Timo Autiokari
Timo Autiokari said:That is not correct. The lightness perception of human vision is
(about) logarithmic.
In this context, lightness is the perceptual sensation produced by the stimulus. Lightness, however, does not relate to the task of viewing the gray range (be it on the CRT or on a print).
Please experiment with the contrast sensitivity at:
http://www.aim-dtp.net/aim/evaluation/perception/perception.htm
Wayne said:Mike, sorry, but it isn't clear if you are really getting gamma yet.
Have you seen Poynton's other document on gamma?
He has his Gamma FAQ at http://www.poynton.com/GammaFAQ.html
It is sort of a quick shorthand review, but it also has a link to his paper, The rehabilitation of gamma, at
http://www.poynton.com/papers/IST_SPIE_9801/index.html
which is possibly better for comprehending the difference and significance of these factors: the CRT, the eye, 8 bits, linear response, etc.
Poynton's writing is very precisely stated, almost to a fault, because the meaning of every word matters. It took me a long while before I actually understood the significance of it, but once the terms finally all became precisely clear, his writing became extremely clear and precise, like magic.
Poynton clearly states the CRT has a gamma of around 2.5 and that the CRT thus absolutely requires images with reverse encoding (so the eye avoids seeing the very dark images the CRT would otherwise produce due to its losses). Therefore, clearly and obviously, the CRT requires images with gamma encoding. He does not quibble about this point. It does not matter what the source of the image is, nor whether it is 8 bits or 16 bits, or even digital or analog. The CRT simply needs images with gamma encoding, because the CRT response is going to do the opposite.
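A minimal sketch of that round trip, assuming the CRT behaves as a pure 2.5 power law:

    crt_gamma = 2.5                      # assumed CRT exponent

    def crt(v):                          # what the tube does to its input (0..1)
        return v ** crt_gamma

    def encode(v):                       # the inverse applied before the signal
        return v ** (1 / crt_gamma)

    print(crt(0.5))                      # raw linear mid-grey displays at ~0.18: far too dark
    print(crt(encode(0.5)))              # pre-encoded mid-grey displays at 0.5: linear again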
We do tend to use CRTs quite a lot, but even the earliest television (not digital) required it, and they decided to put the CRT gamma encoding in the camera instead of in every receiver. And still today, all images everywhere have gamma encoding, period. This is done automatically and silently by anything that creates images (cameras, scanners, photo editors, etc). It's a fairly recent thing that programs even mention it, but it has always been there. Otherwise images would look really bad.
Printers also have a gamma issue, not 2.5, but more generally in the 1.8 range, due not to CRT gun losses but to dot gain as the ink spreads on the paper, becoming darker than planned. This is why Apple early on adopted images with gamma 1.8 (for their early Apple LaserWriter printer), and added the remainder of the gamma needed by the CRT in the computer's video hardware. Microsoft and others later put it all in the image, to match what the TV industry does, and the printer now expects and adapts to this.
In addition to saying the CRT requires gamma, Poynton also does say the words that the primary purpose of gamma is so 8-bit data can store the data successfully, meaning to be perceived correctly by the human eye after the 8-bit losses. This is NOT saying gamma is for the eye, not at all, no way; the eye wants the linear result on the CRT face, which looks like the original scene (same as if we look out the window at the original scene; that original scene is linear too, which is what the eye wants). In this respect, Poynton is only saying gamma encoding is so that the 8-bit losses won't be perceived by the eye as losses; that is, gamma encoding coincidentally approximates discarding what the eye discards, and keeps what the eye wants. It wasn't planned or designed by anyone; it just fortunately happened to work out that way, which is a great thing, because it is absolutely required by the CRT too. Poynton says it both ways: necessary for the CRT, and for 8-bit digital.
Saying the "primary purpose" of gamma is the 8-bit data still seems very misleading to me: one, because the CRT absolutely requires it, and two, because we could simply use 16 bits and then there would be no 8-bit losses, no 8-bit issue for the digital medium, and obviously then gamma would not be any primary purpose for the digital data. It is just that 8 bits need it. And the CRT needs it.
I am just saying 16-bit images without gamma would do that too: 16 bits would simply keep everything, much more than the eye can possibly use or perceive, so the eye doesn't perceive any problem with 8-bit losses; there simply are no such losses at 16 bits. However, the CRT would still be a problem without gamma: the image becomes very dark on the CRT screen, non-linear to the eye.
Poynton carefully explains this: how the eye sees 1% intensity steps, that is, we might see 100 step values at any one adaptation of the iris (while viewing a stationary image). However, those 100 perceived values are these 1% steps, more nearly logarithmic, not linear at all, and he says it would take 9900 linear binary steps (he says 11-bit data) to contain the numeric range of those 100 1% steps. 8 bits cannot store 9900 steps, only 256 linear steps, so 8 bits simply is quite insufficient for the human eye to perceive the digital data correctly. 16 bits of course can store 65536 values, far more than the 9900 linear steps needed to include the 100 1% steps the eye can perceive, so 16 bits should be no problem for the eye, as is. 16 bits is more a problem for the hardware then, but not for the eye.
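The arithmetic behind those figures, taking the paragraph's own numbers (a 100:1 range whose smallest useful step is 1% of the darkest level):

    darkest, brightest = 1.0, 100.0
    step = 0.01 * darkest                        # 1% of the darkest level
    print(round((brightest - darkest) / step))   # 9900 linear steps needed
    print(2 ** 8, 2 ** 16)                       # 256 (too few), 65536 (plenty)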
However, if that 8-bit data were gamma encoded — a power function whose spacing is close to the eye's near-logarithmic response to intensity — then, coincidentally, the data lost in the 8-bit conversion is incidental to those 100 1% steps; it is not needed to still include those 1% perceptual steps. The losses simply don't matter now, because of how those losses were selected. Poynton explains this.
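A small sketch of how well those losses are selected, again assuming a plain 2.2 power-law decode: count the 8-bit codes whose linear value falls in the darkest 1% of the range.

    decode = 2.2   # assumed decoding (display) exponent
    linear_dark = sum(1 for c in range(256) if c / 255 <= 0.01)
    gamma_dark = sum(1 for c in range(256) if (c / 255) ** decode <= 0.01)
    print(linear_dark, gamma_dark)   # 3 vs 32

Under these assumptions, gamma coding spends roughly ten times as many codes on the deep shadows, which is where the eye's 1% steps are packed most densely in linear terms.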
So after the CRT display response (the CRT losses) restores that gamma-encoded data to linear (due to the unavoidable non-linear way the CRT maps voltage to intensity), and shows the linear data on the screen (the eye wants to see linear on the screen), that linear data still has the 8-bit losses. The previously gamma-encoded data (now restored to linear by the CRT, which we cannot control) does become linear again (for the eye to see). The gamma encoding successfully compensated for the CRT losses.
However, most of the data is missing in 8 bits; only 256 steps are present. So the losses of the 8-bit data are present, because 8-bit data cannot cover the range of 9900 steps needed to show the 100 1% steps which the eye can see and does see. However, the gamma encoding spaces the saved codes non-linearly, in near-logarithmic fashion, so what it did save more nearly matches the 1% step response of the eye, which is all the eye can see anyway, because these two effects have very similar exponents and response. It was not done for the eye, but the method is perfect for matching the 8-bit losses to what the eye doesn't see anyway. The important 100 1% steps are retained by the gamma encoding, which is otherwise required by the CRT anyway.
16 bits could do that part too, simply retaining everything, much more than the eye could use. So I don't see 8 bits as essential, other than being extremely handy for minimizing hardware memory requirements.
Repeating, clearly gamma is NOT done for the eye.
We can say it is done for the CRT: the CRT absolutely requires it because it is going to respond in an opposite way, which is unacceptable, so we must fix that so that we see the linear scene data on the CRT screen.
Or we can say gamma is done for the 8-bit encoding, to cut the losses in 8-bit data, so those losses are taken in a way invisible to the eye. Not for the eye, or to match the eye, not at all, but only for the 8-bit data losses, so the eye won't see those losses. That seems essential too, assuming we are going to use 8-bit data. However, we could have simply used 16 bits instead and forgotten that part. That still leaves the CRT requiring gamma. And printers also need much of it.
So the words "primary purpose" of gamma are not very clear to me. Both the CRT and 8-bit data require it. Maybe it is like saying humans require air and water: technically perhaps we could drink beer or orange juice or Mountain Dew, so perhaps we can forget about water and say that only air is THE primary need of humans.. it seems rather philosophical to me <g>
Not sure where we get those drinks without water, but it is also not clear where we get the 8 bits without 16 bits. <g> Scanners and cameras have more than 8 bits internally, because all such devices do gamma encoding internally, because the CRT absolutely requires it, and it is our standard to do it that way. True whether digital or not. But if digital, more than 8 bits is needed to perform gamma and still have good 8-bit data... those 9900 linear steps containing the 100 1% perceptual steps again. Poynton calls this 11 bits. So cameras and scanners are more than 8 bits internally, at least 12 bits today, but they routinely output 8-bit data that is gamma encoded. It is simply our standard. The CRT requires it, and 8-bit digital data requires it. Which of the two is the more important is less clear to me. <g>
It seems to be a bone of contention whether gamma encoding is primarily for linearising a CRT or for maximising the use of 8 bits to describe an image.
Wayne said:Gamma is clearly required and necessary for both purposes, so any contention only appears to be over which of the two requirements is the primary purpose, which is too philosophical for me. However, my own bias is that gamma was necessary for television many years before we ever digitized image data, and still is, and it seems difficult to ignore this prior claim. It can be argued that there are some linear video displays today, perhaps eliminating the need for one claim (if enough bits), but it can also be argued that there is also 16-bit data today, eliminating the need for the other (if a linear display).
I would imagine the day will come when we routinely show 16-bit data on linear displays. The 64-bit operating systems seem a step closer. Meanwhile, it is an extremely lucky coincidence that the one solution solves both problems.
Mr Poynton says (page 5 of the gamma PDF) that the luminance ratio between codes 25 and 26, presumably when raised to a power of 0.33, is 4%.
My calculation makes it 120.4/118.49 = 1.31%.
That is very close to the magic 1%.
Wayne said:I assumed Poynton was speaking of the linear image which we see displayed on
the CRT screen, after the CRT non-linearity has necessarily decoded the gamma
encoded data back to be linear again. We never see anything else.
Thus, the difference between 25 and 26 is 1 in 25, or 1/25, which is 4%.
The difference between 100 and 101 is 1 in 100, or 1%, which is the threshold he mentions at 100 in regard to 1%. And in the bright half, the difference between 200 and 201 is only 0.5%, so we truly cannot distinguish many of those bright values as unique values (meaning it is unimportant if 8 bits doesn't retain them all as unique values either; this being the meaning of perceptual).
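Those step sizes are easy to tabulate; in linear light the percentage is simply 1/code:

    for code in (25, 100, 200):
        print(f"{code}: {100 / code:.1f}%")   # 25: 4.0%, 100: 1.0%, 200: 0.5%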
I can't say much about Weber's Law, so I trust Poynton. Weber's law is well accepted, it is what is taught, but it covers much more than vision. It is about detecting "just noticeable differences" against a background (the 1% delta is detectable). For example, the stars are as bright in the day as at night, but the background differs. The way to test this with vision is to put one area of intensity inside a larger intensity area, to see if the center value is distinguishable against that background.
I am thinking this 1% value is not actually a constant, because I've seen charts of how it varies slightly with intensity (but only slightly with respect to the overall intensity range of one adaptation of the iris), and to me it does seem to require a little more than 1% in the darker areas (some of which is possibly attributable to my monitor and its adjustments). I think the original paper (150 years ago) stated this factor as 1% to 2%. Regarding the eye, a few sources say 2%. Poynton says 1%.
I think the exact details are less important than the concept of there being perceptual steps, so that gamma for the CRT is also important in allowing the use of 8-bit data given these perceptual increments.
Mike Engles said:The luminance ratio of codes 5 and 6 (69.85 and 74.18 respectively) is 1.062, or 6.2%. Perhaps I am not making the correct calculations.
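Mike's figures can be reproduced, to within small rounding differences, by raising the codes to the 0.33 power and rescaling to 0..255; this appears to be the calculation he made (the exponent and scaling here are a reconstruction, not anything quoted from Poynton):

    def f(code, p=0.33):
        return 255 * (code / 255) ** p

    for a, b in ((25, 26), (5, 6)):
        print(a, b, round(f(a), 2), round(f(b), 2), round(f(b) / f(a), 4))
    # 25, 26 -> 118.49, 120.03, ratio 1.013 (about 1.3%)
    # 5, 6   -> 69.67, 73.99, ratio 1.062 (about 6.2%)

So the calculation is internally consistent; the 4% Poynton quotes is the ratio in linear light (1/25), as Wayne says above, not the ratio after the 0.33 power is applied.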