In a system of numbers that would be correct, but we are talking about
levels and the luminance difference between them.
Surely in a linear image the luminance difference between levels is the
same; 256 levels correspond to about 0.4% of full scale per level (well
within the 1% margin), but we would see them in a perceptual space.
In a perceptual space, which I suppose is what he was talking about, those
levels are raised to a power of 0.33.
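To make that concrete, here is a quick Python sketch (my own arithmetic,
nothing from Poynton's text) comparing the two kinds of step:

levels = 256
print(f"{1 / (levels - 1):.2%} of full scale per level")   # ~0.39%, the fixed linear step

# But Weber perception responds to the ratio between adjacent levels,
# and the relative step from level v to v+1 is 1/v:
for v in (5, 25, 100, 200):
    print(v, f"relative step {1 / v:.1%}")   # 20.0%, 4.0%, 1.0%, 0.5%

The same fixed linear step is a negligible ratio at the bright end but a
huge one at the dark end, which is the whole point.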
I didn't follow everything Mike, but we don't ever look at those encoded values.
I think there is no concept of perceiving encoded values; when would that be
useful? We do encode the data, but the CRT always decodes it (which is why
we must encode it in the first place), so we always see decoded linear data.
Ideally, if a data pixel was originally intensity value 26, then the
encoded/decoded 26 comes back out as the relative linear value 26 again. This
is of course often not exactly true of most values, but since what we can
perceive is in exponential steps of 1% anyway, the hope is that it comes back
out as nearly the same perceptually; close enough is good enough. This hope
comes from the similarity of the 1/2.2 and 1/3 power curve exponents for the
CRT and the perceptual eye. Gamma is not done for that hope; gamma is done
for the CRT, but this hope is a major secondary advantage.
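For example, here is a rough round trip of that value 26 in Python, assuming
a simple power-law gamma of 2.2 (the real sRGB curve differs slightly,
especially near black):

def encode(linear):    # linear value 0..255 -> 8-bit gamma code
    return round(255 * (linear / 255) ** (1 / 2.2))

def decode(code):      # 8-bit gamma code -> linear value
    return 255 * (code / 255) ** 2.2

v = 26
code = encode(v)       # 26 encodes to code 90
back = decode(code)    # and decodes back to about 25.8
print(code, back, f"shift {abs(back - v) / v:.1%}")   # shift under 1%

So 26 does not come back as exactly 26, but the shift is below the 1%
perceptual threshold.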
The data values like 25 or 26 are linear intensity values. Everything we can
see and touch is a linear intensity value. There is no concept of absolute
intensity in the RGB system, but it is still relative intensity, and it is
linear (meaning not gamma encoded). Poynton says the Weber eye can
differentiate 1% steps (a perceptual, non-linear measure), but 25 to 26 is a
linear step of 1/25 = 4%, not the 1% step Poynton refers to.
What we look at is linear intensity. What we perceive is not. We can measure
intensity externally with various instruments, but the only way we can measure
what the brain perceives is with methods like Weber's background test. I can't
vouch for the precise numerical accuracy of 1%; I do think the 1% varies a
little, and to me, 2% seems better at the dark end. Even more sometimes,
though that could to some extent also be my imperfect monitor/settings. I
suspect Poynton simply picked one midpoint value for convenience. That's not
at all a critical issue for me. It is the concept that is beautiful, not the
precise numbers.
256 steps are more steps than Poynton's 100 steps, so 256 steps superficially
would seem adequate to hold that data. However, the 8-bit RGB we see is 256
linear steps, whereas Poynton's perceptual values are the 1% steps, which are
logarithmic/exponential so to speak. Not at all the same concept.
We do see others making the serious mistake of saying 256 is greater than 100,
which is conceptually very wrong in this case. The page Timo linked says
256 is almost 3 times greater than Poynton's 100 (immediately below his Q60
target image - he says 276, but JPG can only hold 256). But of course,
Poynton instead says these 100 1% perceptual steps require 9900 linear values
and 14 bits to contain them. Poynton says gamma encoding for the CRT reduces
those 9900 values and 14 bits of data to 463 values and 9 bits. He says
that 8 bits has adequate contrast/quality for broadcast standards, and
certainly 8 bits and 256 values is convenient for us.
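For what it's worth, the arithmetic behind those numbers is simple. A sketch
of the reasoning as I understand it, assuming Poynton's 100:1 contrast range
and 1% Weber steps:

import math

contrast = 100   # 100:1 usable contrast range
weber = 0.01     # 1% just-noticeable ratio

# Linear coding: every step must be no larger than 1% of the darkest
# level (1 unit), so steps of 0.01 spanning a range of 99 units:
linear_codes = (contrast - 1) / (weber * 1)               # 9900 values
print(linear_codes, math.ceil(math.log2(linear_codes)))   # 9900.0, 14 bits

# Ratio coding (what a gamma-like curve approximates): each code 1% above the last:
ratio_codes = math.log(contrast) / math.log(1 + weber)        # ~463 values
print(round(ratio_codes), math.ceil(math.log2(ratio_codes)))  # 463, 9 bits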
One might think this non-linear gamma encoding method would be a reprehensible
way to treat linear data. This is lossy compression. We cannot reverse it and
recover the original data exactly. Many values are shifted slightly.
The issue is whether the decoded linear values are shifted enough to be
perceived as shifted, meaning: are the differences much more than 1% after
decoding? Human eye perception establishes an acceptable 1% error range within
which errors are not detectable. Gamma does create errors in this range, but
the match is pretty good. I think this is a major point, and is perhaps the
concept you are overlooking.
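To put a number on "pretty good": sweeping every linear value through the same
power-law encode/decode sketch as above shows how large the round-trip shifts
actually are (again my own sketch, not Poynton's figures):

enc = lambda v: round(255 * (v / 255) ** (1 / 2.2))   # linear -> 8-bit gamma code
dec = lambda c: 255 * (c / 255) ** 2.2                # gamma code -> linear
shifts = [(abs(dec(enc(v)) - v) / v, v) for v in range(1, 256)]
print(max(shifts))   # worst relative shift is a few percent, at the very dark end

Most values come back shifted well under 1%; the exceptions are the first few
values at the extreme dark end, which fits the "more at the dark end"
observation above.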
However, there are other major overriding factors:
1. The CRT requires gamma of 1/2.2 anyway. It is not a choice. We simply
must do it, regardless. It is a given; there are no options. We have always
done it for all images. We can imagine future options with 16-bit linear
displays, but that is still in the undetermined future.
2. The human eye does recognize intensity via these 1% perception levels,
which roughly compares to a gamma of 1/3, so this gamma 1/2.2 encoding, which
is absolutely required for the CRT anyway, works out pretty well to hide the
8-bit losses (a quick numeric comparison of the two curves follows below this
list). Human perception establishes an acceptable 1% error range, and gamma
requires a certain error range, especially so for 8-bit data. No one says
this is exactly correct; it is not designed to be correct that way. It is not
even designed, and gamma is not done for the eye. It just happened to work
out. But regardless, the 1/2.2 gamma is absolutely required anyway for the
CRT, and it does in fact work out quite well, good enough to allow use of
8-bit data, with results roughly near this 1% perceptual tolerance.
If it were somehow otherwise true that gamma 1/2.2 for the CRT was not
sufficient compression to allow 8-bit data to be used, then of course we
simply would never have standardized on using 8-bit data. We would have used
16-bit data, or whatever it took to work. But 8 bits is sufficient with
gamma, and we must do gamma anyway, so for this reason, we can and do use 8
bits. I'm just not of the opinion that 8-bit data is the "primary purpose of
gamma" <g>