Kennedy said:
Because the shadows are heavily compressed.
The onset of haloes occurs at much heavier degrees of filtration on
highlights, because the data there is compressed when gamma is finally
applied.
That is the problem - the filter isn't being applied perceptually, so
the results depend heavily on the image content.
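Kennedy's point can be sketched numerically (my own illustration, not his graphic; the 2.2 gamma and the overshoot figure are assumptions):

```python
def encode(linear):
    """Gamma-encode a 0..1 linear-light value to an 8-bit code (2.2 gamma assumed)."""
    return round(255 * linear ** (1 / 2.2))

def code_shift(base, overshoot=0.02):
    """How many 8-bit codes a fixed linear-light filter overshoot moves the value."""
    return encode(base + overshoot) - encode(base)

# The same overshoot moves many more codes in the shadows than in the
# highlights, so haloes show up there at much lighter filtration.
for base in (0.02, 0.5, 0.9):  # shadow, midtone, highlight
    print(f"linear {base:.2f}: code shift {code_shift(base)}")
```

The shift in the shadows is several times the shift in the highlights for the same linear-light disturbance, which is exactly why the onset of haloes depends on where in the tonal range the filter bites.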
Incidentally, after our previous discussion on gamma I was a little
concerned that you seemed to remain unconvinced of the quantisation
issue and that I was not explaining what was happening very well. I
have thought about this a bit more and, I think, I have a graphic which
explains it fairly succinctly. Basically it is an extension of your own
curves diagrams, with the key addition of quantisation points on each of
the gamma curves as they are applied to linear data. I can send you a
copy if you are interested. The effect is quite clear, at least to me,
but it would be interesting to know if it makes it any clearer for you.
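For what it's worth, the quantisation points Kennedy describes can be tabulated as well as drawn; a minimal sketch, assuming a 2.2 gamma curve applied to 8-bit linear codes:

```python
def gamma_encode(code):
    """Apply a 2.2 gamma curve to an 8-bit linear code, rounding back to 8 bits."""
    return round(255 * (code / 255) ** (1 / 2.2))

# The very first linear step already jumps roughly 20 output codes:
# 8-bit linear data has extremely coarse shadow steps once the gamma
# curve is applied for display.
for code in range(4):
    print(code, "->", gamma_encode(code))
```

The spacing of those first few points on the output axis is the quantisation effect in question: the shadow end of an 8-bit linear capture simply has no codes where the gamma curve needs them.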
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
Hello
Yes, send me the image or give me a link.
(e-mail address removed).
I did as you suggested and applied an inverse gamma to an 8-bit linear
gradient I made in XARAX. Photoshop makes one with a curious shallow
U-shaped histogram.
Of course it shows that there are no longer 256 codes, only about 160.
I saw the same effect when I applied inverse gamma by writing the gamma
into my Matrox video card's registry. That is why we both agreed that
for 8 bits we need to apply inverse gamma before the A/D, and I think we
agreed that almost no devices did that. Inverse gamma, or any kind of
processing, applied to an 8-bit image just damages it, but we still need
to apply the gamma to 8-bit linear data, even if the image is damaged,
in order to see it.
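Mike's gradient experiment is easy to reproduce in a few lines; a sketch assuming a pure 2.2 power law (the exact count of ~160 will depend on the gamma value and rounding his tools used):

```python
def apply_gamma(code, g=2.2):
    """Apply a power-law curve to an 8-bit value (g=2.2 is an assumption)."""
    return round(255 * (code / 255) ** (1 / g))

# Count how many distinct output codes survive out of the original 256.
# Where the curve's slope is below 1, neighbouring inputs collapse onto
# the same output code, so the total comes out well under 256.
surviving = len({apply_gamma(c) for c in range(256)})
print(surviving)
```

Whatever the precise figure, the point stands: one pass of gamma on 8-bit data permanently discards a large fraction of the available codes.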
It did not prove to me that we need, in every case, to apply an extra
inverse gamma for perceptual reasons.
That is why I looked up the link for gamma from the PNG group. In very
few words it makes the case for extra inverse gamma beyond measurable
linearity, depending on the viewing conditions. That really made sense.
Of course, even if we did work in linear, we would still have to apply
at least an inverse 1.8 gamma in order to print the image. We now just
do the whole thing at the scan.
It is idle speculation, but what kind of work practice would we have
now if, in the beginning, inverse gamma had been applied in the TV
rather than to the TV signal? All perceptual corrections would have
been done in the TV. We would then be working in linear, applying a
transform when exchanging data for printing.
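That imagined linear workflow would reduce to something like the following in outline (entirely my sketch of Mike's speculation; the inverse 1.8 gamma is his figure for print, everything else is assumed):

```python
def linear_to_print(linear):
    """Export step only: apply the inverse 1.8 gamma when handing linear
    working data over for printing (1.8 per Mike; the rest is hypothetical)."""
    return round(255 * linear ** (1 / 1.8))

# In this imagined practice the display's own correction handles viewing,
# the working data stays linear throughout, and only the print exchange
# applies a transform.
working_pixel = 0.25  # a linear working value, chosen for illustration
print(linear_to_print(working_pixel))
```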
Mike Engles