Timo said:
That has no relevance in digital imaging whatsoever. It does have
relevance with analog TV broadcast, where the analog transmission path
(the transmitter, the antenna circuits at both ends and the receiver)
adds noise to the information, just as analog audio tape and analog
video tape add noise to the information. We do not have such a noise
source in digital imaging (nor with digital TV, nor digital audio).
Timo, whilst I am well aware that you have virtually made a career out
of spreading misinformation on the entire topic of gamma and its
application, your statement above moves your achievement in that field
onto a new plane entirely. I would have thought that nobody, not even
you, could deny the existence of quantisation noise in *any* digital
medium. However, your demonstrable ignorance of this basic fact of
digital life entirely typifies the erroneous arguments that you promote.
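Since the point is apparently in dispute, quantisation noise is easy
enough to demonstrate. Here is a minimal Python sketch (my own
illustration, nothing more) that quantises a perfectly smooth,
noiseless ramp to 8 bits and measures the error the quantisation adds
- which behaves exactly like a noise floor:

  import numpy as np

  # A smooth, noiseless linear ramp of intensities in [0, 1].
  signal = np.linspace(0.0, 1.0, 100_000)

  # Quantise to 8 bits, as any 8-bit digital medium must.
  codes = np.round(signal * 255.0)
  reconstructed = codes / 255.0

  # The quantisation error is the noise added by the digital medium.
  error = reconstructed - signal
  rms_noise = np.sqrt(np.mean(error ** 2))

  # For a uniform quantiser the RMS error is q / sqrt(12), with q = 1/255.
  print(f"measured RMS quantisation noise: {rms_noise:.6f}")
  print(f"theoretical q / sqrt(12):        {1 / 255 / np.sqrt(12):.6f}")
  print(f"peak signal to noise ratio:      {20 * np.log10(1.0 / rms_noise):.1f} dB")

The measured figure comes out at essentially the textbook q/sqrt(12)
value, and the same error is present in any quantised representation,
8-bit video included.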
Vision *always* adapts; there is no such thing as "non-adapted
dynamic range". At a given adaptation level (when looking at a scene
where the illumination level does not change) the vision can detect
about a 200:1 dynamic range. Light(ness) adaptation is the very
property that makes it possible for vision to be functional over a
huge range of illumination levels, from less than starlight to more
than a bright sunny summer day; that range is something like
100,000,000:1 or more. But at any given adaptation level we only
detect a tiny 200:1 range.
More bullshit from Timo I'm afraid. :-(
With a reasonable match of gamma to the perceptual response of the eye,
200 discrete levels are certainly adequate. However, in sheer linear
terms, the *unadapting* sensitivity range vastly exceeds this -
otherwise there would be no need for gamma at all. Whilst we all
recognise that this is what you argue, almost everyone by now knows that
you are completely wrong!
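To put numbers on that: at a fixed adaptation level the eye resolves
luminance steps of very roughly one percent. A quick Python sketch (a
rough illustration only, assuming a pure power-law 2.2 curve and
ignoring viewing flare) shows how much coarser the steps of linear
8-bit coding are in the shadows than those of gamma-encoded 8-bit:

  import numpy as np

  gamma = 2.2
  shadow = 0.01                 # a deep shadow at 1% of full white
  codes = np.arange(256)

  # Decoded luminance of each 8-bit code under the two codings.
  lum_linear = codes / 255.0
  lum_gamma = (codes / 255.0) ** gamma

  def step_at(lum_table, level):
      """Relative luminance jump between adjacent codes straddling `level`."""
      i = np.searchsorted(lum_table, level)
      return (lum_table[i + 1] - lum_table[i]) / max(lum_table[i], 1e-12)

  print(f"linear 8-bit, step near 1% grey:    {step_at(lum_linear, shadow) * 100:.0f}%")
  print(f"gamma 2.2 8-bit, step near 1% grey: {step_at(lum_gamma, shadow) * 100:.0f}%")

Down in the shadows the linear coding jumps by roughly a third of the
luminance between adjacent codes, the gamma coding by only a few
percent - which is precisely why 8 bits are adequate with gamma and
nowhere near adequate without it.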
Just browsing around a local video or computer store examining the
specifications of LCD and plasma displays indicates that even the worst
of them have contrast specifications vastly superior to your 200:1 level
for the eye (many reaching 500:1 and 800:1) but they are *still* vastly
inferior - even under store illumination levels - to the contrast of
even a moderate CRT. Brighter they may be, but with *much* poorer
contrast. By some magic, without taking these displays outside or into
different environments to induce adaptation changes in the eye, that
limitation is extremely obvious to anyone who cares to look. The
alternative to magic is simply that Timo is completely wrong - and,
fortunately, that has been established for a considerable number of
years now.
No, the level 0 represents the Dmax of the device in question and it
is the very same Dmax no matter whether the codespace is linear or
non-linear.
Correct - is this a first? However, as usual, Timo misleads.
8 bits linear can only represent a density range of about 2.4 (the
log10 of 255) - if the Dmax of the device is less than this then
certainly that will limit the density available on the display. In
most cases, however, even a moderately decent display will have a Dmax
which exceeds the range which can be represented linearly by 8 bits -
and the choice is then to settle for a limited brightness, increase
the brightness to achieve a decent white at the expense of reduced
Dmax, or to increase the contrast on the display and make
posterisation very visible in the blacks. Whichever you choose, 8-bit
linear fails to meet the density range available from
bottom-of-the-range displays. Even LCD monitors these days are
regularly achieving contrasts of 500:1 or 800:1 - well beyond the
range of linearly encoded 8-bit data. The only way to overcome this
limitation with 8 bits *is* gamma - and gamma *does* function as a
compander, extending the dynamic range of the digital signal.
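The arithmetic is easy enough to check in a few lines of Python (a
sketch assuming a pure power-law 2.2 curve): the contrast between the
darkest nonzero code and full scale is 255:1 for linear coding, but
255^2.2 for gamma 2.2 coding - essentially the 198668:1 figure Timo
quotes below, which is 256^2.2:

  import math

  # Contrast between the darkest nonzero code (1) and full scale (255)
  # for each coding - the widest range representable above code zero.
  linear_contrast = 255.0                # luminance proportional to the code value
  gamma_contrast = 255.0 ** 2.2          # luminance = (code / 255) ** 2.2

  for name, c in [("8-bit linear", linear_contrast),
                  ("8-bit gamma 2.2", gamma_contrast)]:
      print(f"{name:16s} {c:12,.0f}:1  = {math.log2(c):4.1f} stops,"
            f" density {math.log10(c):.2f}")

That is the companding gain in a nutshell: the same 8 bits span about
2.4 density units linearly and well over 5 density units with a 2.2
gamma.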
There are no devices that provide such an enormously large dynamic
range, 198668:1 or 17.6 stops, so such coding is highly lossy. The
very best devices give you just a 10-stop range, so 7.6 stops are
useless in the gamma 2.2 codespace.
You must be looking at some very poor displays, Timo! 1000:1 contrast
(a 10-stop range) is achieved by top-end LCDs now - and LCD pixels
aren't particularly opaque when switched off, so the backlight is
still quite visible. As for the need for such density-handling ability
in the data: that range doesn't *just* have to support the image on
the display - there must also be sufficient headroom to accommodate
the system colour management and monitor profile without visible loss
of tonality, as the sketch below illustrates. Of course the 8-bit
video in 2.2 gamma space has more contrast range than even the best
displays - it *has* to, otherwise the system of colour management
employed simply would not work at all!
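To make the headroom point concrete, here is a toy Python sketch (the
1.15 exponent is purely hypothetical - real monitor profiles are
measured per display) showing why corrections eat into the encoding:
any 8-bit to 8-bit tone correction collapses some levels, so the data
needs more range and finer tonality than the display alone would
demand:

  import numpy as np

  # A stand-in for a monitor-profile tone correction: a modest extra
  # power-law tweak applied to already gamma-encoded 8-bit video.
  codes = np.arange(256)
  corrected = np.round(255.0 * (codes / 255.0) ** 1.15).astype(int)

  surviving = len(np.unique(corrected))
  print(f"{surviving} of 256 levels survive the correction")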
As for the image content itself, we have justified 16-bit scanners
with a linear signal-handling range in excess of 64000:1 because that
is what it takes to scan the full range we can see on the film. Some
feel the need for a little more, arguing on this forum and elsewhere
(with the aid of demonstrated examples) for 17- or 18-bit linear
sampling, even resorting to composite scan methodologies to achieve
that with the systems currently available. 17.6 bits of linear data
can be adequately displayed on 8-bit video with the appropriate gamma.
The object of defining standards is to ensure that they cope not only
with the displays of today but with those of tomorrow - and there are
some photo-emissive display technologies just emerging which already
knock the contrast limits of CRTs into touch. Fortunately, we already
have a video standard (8 bits with gamma encode/decode) which will
cope admirably with them.
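Finally, to illustrate the compander point end to end, here is a small
Python sketch (assuming an idealised scanner spanning roughly 16 stops
and a pure power-law 2.2 curve rather than any particular video
standard's exact transfer function) comparing how much deep-shadow
tonality survives an 8-bit linear encoding against an 8-bit gamma 2.2
encoding:

  import numpy as np

  # An idealised 16-bit linear scan spanning roughly 16 stops (65535:1).
  scan = np.geomspace(1.0 / 65535.0, 1.0, 2048)

  def roundtrip(linear, gamma):
      """Encode linear data to 8 bits with the given gamma, then decode it."""
      codes = np.round(255.0 * linear ** (1.0 / gamma))
      return (codes / 255.0) ** gamma

  for gamma, label in [(1.0, "8-bit linear"), (2.2, "8-bit gamma 2.2")]:
      out = roundtrip(scan, gamma)
      # Everything more than five stops below full white - the deep shadows.
      shadows = out[scan < 1.0 / 32.0]
      print(f"{label:16s} distinct deep-shadow levels: {len(np.unique(shadows))}")

The gamma-encoded channel retains several times as many distinct
shadow levels as the linear one - exactly the companding behaviour
that lets 8-bit video carry the range the scanners deliver.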