Gamma correction question

Please do the same analysis in the highlights also where the 8-bit
gamma 2.2 space will remove more than 4 bits from that original 10-bit
image information, so divide by 16 (or 18).


I didn't understand the part about divide by 16. But yes, at the top end, eight
10-bit linear values 960-969 get mapped to 8-bit 248 with gamma. We do also
get adjacent values 247 and 249, which we cannot differentiate from 248
anyway, so more can't help much, but yes, it is of course a compression of
tonal detail. Linear conversion from 10 bit would expect 4 values to 1, but
all things have a cost.

Timo said:
So the 8-bit/c gamma 2.2 space will eat a lot of editing
headroom; you only have less than 6-bit (linear) data there in the
highlights.

You are arguing that editors should, but don't, convert gamma files to linear
data for editing. No doubt that has some effect, but that decode operation
makes those 8 bit values 11, 15, 18 disappear, and it also leaves them in the
awkward position of needing to save that 8 bit data result with gamma encoding
again. I suppose they could decode to 16 bits to edit, or they could just
edit the gamma data.

No, gamma 2.5 is not. The native gamma space of CRT tubes is 2.5,
and that is horribly steep. E.g. the banding that can be seen in many
images that have large areas of bright blue sky is due to the extreme
compression that gamma space 2.5 causes in the high portion of the
range.

Steep, but required in the real world, required by the CRT, much of it
required by the printer, and also required by our standards.
 
Timo said:
That has no relevance in digital imaging whatsoever. It does have
relevance with analog TV broadcast, where that analog transmission path
(the transmitter, the antenna circuits at both ends and the receiver)
adds noise to the information, much as the analog audio tape and the
analog video tape add noise to the information. We do not have such a
noise source in digital imaging (nor with digital TV nor digital audio).
Timo, whilst I am well aware that you have virtually made a career out
of spreading misinformation on the entire topic of gamma and its
application, your statement above moves your achievement in that field
onto a new plane entirely. I would have thought that nobody, not even
you, could deny the existence of quantisation noise in *any* digital
medium. However your demonstrable ignorance of this basic fact of
digital life entirely typifies the erroneous arguments that you promote.
Vision *always* adapts; there is no such thing as "non-adapted
dynamic range". At a given adaptation level (when looking at a scene
where the illumination level does not change) the vision can detect
about a 200:1 dynamic range. Light(ness) adaptation is the very
property that makes it possible for the vision to be functional over a
huge range of illumination levels, from less than starlight to more
than a bright sunny summer day; that range is something like
100000000:1 or more. But at any given adaptation level we only detect
a tiny 200:1 range.
More bullshit from Timo I'm afraid. :-(
With a reasonable match of gamma to the perceptual response of the eye,
200 discrete levels is certainly adequate. However in sheer linear
terms, the *unadapting* sensitivity range vastly exceeds this -
otherwise there would be no need for gamma at all. Whilst we all
recognise that this is what you argue, almost everyone by now knows that
you are completely wrong!

Just browsing around a local video or computer store examining the
specifications of LCD and plasma displays indicates that even the worst
of them have contrast specifications vastly superior to your 200:1 level
for the eye (many reaching 500 and 800:1) but they are *still* vastly
inferior - even under store illumination levels - to the contrast of
even a moderate CRT. Brighter they may be, but with *much* poorer
contrast. By some magic, without taking these displays outside or into
different environments to induce adaptation changes in the eye, that
limitation is extremely obvious to anyone who cares to look. The
alternative to magic is simply that Timo is completely wrong - and,
fortunately, that has been established for a considerable number of
years now.
No, the level 0 represents the Dmax of the device in concern and it is
the very same Dmax no matter if the codespace is linear or non-linear.
Correct - is this a first? However as usual, Timo misleads.

8-bits linearly *can* only represent a density range of 2.4 - if the
Dmax of the device is less than this then certainly that will limit the
density available on the display. In most cases however even a
moderately decent display will have a Dmax which exceeds the range which
can be represented linearly by 8-bits - and the choice is then to settle
for a limited brightness, increase the brightness to achieve a decent
white at the expense of reduced Dmax, or to increase the contrast on the
display and make posterisation very visible in the blacks. Either way -
8-bits linear fails to meet the density range available from bottom of
the range displays. Even LCD monitors these days are regularly
achieving contrasts of 500:1 or 800:1 - well beyond the range of
linearly encoded 8-bit data. The only way to overcome this limitation
with 8-bits *is* gamma - and gamma *does* function as a compander,
extending the dynamic range of the digital signal.
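The density arithmetic in this argument can be checked in a few lines; this is a sketch using the 256-step and 2.2-gamma figures quoted in the thread (the script itself is mine):

```python
import math

# 8-bit linear: the smallest nonzero level is 1/256 of full scale,
# so the representable contrast is 256:1 -- a density of about 2.4.
linear_density = math.log10(256)
print(round(linear_density, 2))          # 2.41

# With 2.2 gamma encoding, code value 1 decodes to (1/256)**2.2 of
# full scale, stretching the contrast to roughly 198,668:1.
gamma = 2.2
gamma_ratio = 256 ** gamma
print(round(math.log10(gamma_ratio), 2)) # 5.3 density, vs 2.4 linear
```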
There are no devices that provide such an enormously large dynamic
range, 198668:1 or 17.6 stops, so such coding is highly lossy. The very
best devices give you just a 10 stop range, so 7.6 stops are useless in
the gamma 2.2 codespace.
You must be looking at some very poor displays Timo! 1000:1 contrast
(10 stops range) is achieved by top end LCDs now - and LCDs aren't
particularly opaque pixels when switched off, so the backlight is still
quite visible. As for the need for such density handling ability in the
data: that range doesn't *just* have to support the image on the display
- there must also be sufficient headroom to accommodate the system
colour management and monitor profile without visible loss of tonality.
Of course the 8-bit video in 2.2 gamma space has more contrast range
than even the best displays - it *has* to otherwise the system of colour
management employed simply would not work at all!

As for the image content itself, we have justified 16-bit scanners with
a linear signal handling range in excess of 64000:1 because that is what
it takes to scan the full range we can see on the film. Some
feel the need for a little more, arguing on this forum and elsewhere
(with the aid of demonstrated examples) for 17 or 18-bit linear
sampling, even resorting to composite scan methodologies to achieve that
with the current systems available. 17.6 bits linear data can be
adequately displayed on 8-bit video with the appropriate gamma. The
object of defining standards is to ensure that they not only cope with
the displays of today, but those of tomorrow - and there are some
photo-emissive display technologies just emerging which already knock
the contrast limits of CRTs into touch. Fortunately, we already have a
video standard (8-bits with gamma encode/decode) which will cope
admirably with them.
 
Wayne said:
I am seeing that now Timo, thanks for the very clear explanation. I sure wish
I'd read this one first. <g> But yes, I can see the 8 bit gamma perceptual
advantage now. I just couldn't get started right, I suppose; I was approaching
it from the wrong end. All that mumbo-jumbo about the eye's response was a
false trail too, I knew it couldn't be that, the eye never sees it. It is
only about these simple numbers you point out here.

Given 10 bit linear data containing each value from 0..30, then:

10 bit linear data converted to 8 bits with 0.45 gamma encoding,
gives the same 8 bit result values as the
corresponding 10 bit gamma data truncated to 8 bits,
and both have 8 bit gamma values of 0, 11, 15, 18, 21, etc.

2.2 gamma decoding does convert these unique 8 bit values back to
values 0.0, 0.3, 0.5, 0.7, 1.0, 1.3, etc,
but these still remain unique values, because they are analog on the CRT
screen, not binary values then. In this way, the 8 bit data could be the
carrier between two sets of analog data, but I wasn't seeing this before.
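Wayne's numbers can be reproduced with a short sketch (2.2 gamma assumed; the function names are mine):

```python
GAMMA = 2.2

def encode(v10):
    """10-bit linear value (0..1023) -> 8-bit gamma code value."""
    return round(255 * (v10 / 1023) ** (1 / GAMMA))

def decode(v8):
    """8-bit gamma code value -> linear value on an 8-bit (0..255) scale."""
    return 255 * (v8 / 255) ** GAMMA

codes = [encode(v) for v in range(5)]
print(codes)                                  # [0, 11, 15, 18, 21]
print([round(decode(c), 1) for c in codes])   # [0.0, 0.3, 0.5, 0.7, 1.0]
```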

I don't know how an LCD video driver might treat these values like 0.3. I don't
know its internal mechanism, but at minimum, its driver could round off the
gamma computation, which would be some advantage.

But 10 bit linear data truncated to 8 bits (divide by 4) has every four values
repeated instead of unique (mod 4), so the first 4 values are 0, the next 4 are 1,
the next four are 2, etc, not unique, which is a less acceptable plan.
Gamma encoding this result then creates first four are 0, next four are 21,
etc, also not so acceptable. The fifth value is the same 21 value, but there
are fewer unique values. So truncating linear files is not a good plan; gamma
really is better for the 8 bit data.
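The truncation comparison can be checked the same way (a sketch, again assuming 2.2 gamma):

```python
# 10-bit linear values truncated to 8 bits (divide by 4) repeat every
# four values, and gamma-encoding the truncated result keeps the repeats.
truncated = [v // 4 for v in range(12)]
encoded = [round(255 * (t / 255) ** (1 / 2.2)) for t in truncated]
print(truncated)   # [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
print(encoded)     # [0, 0, 0, 0, 21, 21, 21, 21, 28, 28, 28, 28]
```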

8 bit linear data containing the same 0, 1, 2, 3, etc, would be a much better
start than the truncated linear data, and gamma encoding would then also
allow each value to contain a unique gamma value. However it isn't so clear
what the source of this 8 bit linear data might be. A graphic editor
perhaps.

Do I have a technical out here, maybe it is just about truncating the linear
file instead of about the 8 bit file? <g>

Just kidding, yes, I do get it now, thanks much Timo. The gamma encoded 8
bit data does contain more unique values in this way, which is a perceptual
advantage, even if the human eye response curve is not a factor. It is
indeed extremely convenient that the CRT does need this gamma correction
anyway.


Hello Wayne.

There is one problem. The CRT has decoded the data, but the encoding is
still there. This is fine if we do not edit anything, but we need to decode
before editing. Editing in the gamma space can make the image go into
clipping much more easily.


Take the Dolby analogy. The high frequencies below a certain threshold
are boosted, the signal is recorded and then played back through a
decoder. The mixing channel receives from the tape machine a
signal whose frequency content is the same as the original, but the
noise floor is reduced. If we sent the undecoded signal into the mixing
channel and applied the decoding in the loudspeakers, we would be mixing
in a perceptually correct way, but any further tone correction can send
the mixing channel into clipping. This can easily be proved. We can set
our channel output to a particular level, then take out the Dolby
decoder; the output level will jump and be much toppier.

At the moment we do our image editing with the CRT as the decoder, which
is fine for TV but not for editing our images.

Mike Engles
 
Yes it does, but it is no longer linearly encoded - so the ratio of the
lowest level possible to the highest is no longer 1/256th, but
(1/256)^gamma. In this case gamma is 2.2, so that gives a ratio of
1/198668, or a dynamic range of 20log((256)^2.2), which is 105.96dB.
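The figures work out as follows (a quick sketch of the arithmetic in this post):

```python
import math

# Smallest nonzero code is 1/256 of full scale; decoded through gamma 2.2
# it becomes (1/256)**2.2, so the ratio of highest to lowest level is:
gamma = 2.2
ratio = 256 ** gamma
db = 20 * math.log10(ratio)
stops = math.log2(ratio)
print(round(db, 2), round(stops, 1))   # 105.96 dB and 17.6 stops
```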

If you say so, but the largest number actually in the file is only 255.
If any ambient noise crept in somehow, it might be like Dolby, but coming
out, the largest number decoded from the file still only represents 255.
My video screen doesn't exactly dazzle me.

But it won't do you much good to argue with me Kennedy, because I've changed
my mind now.

Kennedy said:
Wayne, try doing this on an Excel spreadsheet with *all* the 16-bit
numbers - that will show you exactly how much data is lost when you
implement the coding wrongly.

Yes, Timo convinced me and I have looked closer at both ends. I already knew
that, but something about reading Poynton's argument again about matching the
eye's perception threw me off badly. The same spreadsheet facts are true
regardless of whether any human eye exists or not, or for any other sensing
device too. So an important part is about preserving these lower unique values.

But yes, while done anyway for the CRT, this is done at the expense of
reducing the number of brighter tones, which the eye doesn't differentiate so
well anyway. In that sense, the bright end cost is related to the eye, what
we can afford to pay. Dark end benefit too, but the retained unique values
there are also a plus for any sensing device.
 
Kennedy McEwen said:
8-bits linearly *can* only represent a density range of 2.4

No. 8-bit digitalization represents whatever measurement range
you choose to quantify into 8 bits. You can change the 8 to whatever
integer number that is greater than 0.

Please do your homework.

Timo Autiokari
 
Mike Engles said:
Take the Dolby analogy. The high frequencies below a certain threshold
are boosted, the signal is recorded and then played back through a
decoder. The mixing channel receives from the tape machine a
signal whose frequency content is the same as the original, but the
noise floor is reduced. If we sent the undecoded signal into the mixing
channel and applied the decoding in the loudspeakers, we would be mixing
in a perceptually correct way, but any further tone correction can send
the mixing channel into clipping. This can easily be proved. We can set
our channel output to a particular level, then take out the Dolby
decoder; the output level will jump and be much toppier.
Mike,
you are confusing the fact that Dolby is more than just a
compander - it has a frequency response as well. It is the frequency
response that causes the effect that you hear in your analogy, and this
would not occur with a flat response compander, such as the original DBX
system. Neither can it possibly occur in gamma encoding-decoding cycle.
The mean brightness level certainly increases, but the gamma function
certainly does not permit the output to clip - all encode-decode
functions are implemented with reference to full scale (ie. peak white
on images and 0dBv in audio).
 
Wayne Fulton said:
If you say so, but the largest number actually in the file is only 255.
If any ambient noise crept in somehow, it might be like Dolby, but coming
out, the largest number decoded from the file still only represents 255.
Sorry to labour this, Wayne, but it is exactly like Dolby (well, apart
from the frequency response - your images don't change colour with
gamma!). ;-)
My video screen doesnt exactly dazzle me.

Nor should it! ;-)

The peak level on your display is still 255 and the maximum quantisation
noise in the highlights is still 1 part in 256. In Dolby, the peak
output signal is still 0dBv, just as it was without Dolby, and the noise
on those high level signals is still the tape noise at perhaps 40dB
below that. In both cases however, the low level signals have been
reduced below what would otherwise have been possible to achieve,
because the noise in the low amplitudes (tape in Dolby and quantisation
in digital images) has been reduced by the expanding process.
 
Timo Autiokari said:
No. 8-bit digitalization represents whatever measurement range
you choose to quantify into 8 bits. You can change the 8 to whatever
integer number that is greater than 0.

Please do your homework.
Take your own advice Timo - linearly encoded it can only represent a density
range of 2.4! The only way to change that is to introduce dynamic range
compression first. Perhaps if you had done some homework yourself some years
ago you wouldn't now be labouring under delusions and generally
spreading misinformation!
 
Wayne Fulton said:
I didnt understand the part about divide by 16. But yes, at the top end, eight
10-bit linear values 960-969 get mapped to 8-bit 248 with gamma.

Oh yes, my mistake, I had a wrong seed value on my spreadsheet model.
So divide by 8 or 9 (reduce 3 bits).
You are arguing that editors should, but dont, convert gamma
files to linear data for editing.

Actually I'm not, I just say that with current SW the highest quality
results can only be had when the image data is linear. But since you
brought this into the discussion, in my opinion it should be so that
editing operations give the very same high quality result no matter
what the coding space of the image is (no matter what ICC profile
the image is in). The editing algorithms can very easily be written so
that they take the non-linearity of the working-space properly into
account and then return the result into that RGB working-space.
I suppose they could decode to 16 bits to edit, or they could just
edit the gamma data.

Most of the operations already are internally done in 16-bit (or
15-bit in Photoshop) even if the data is in 8-bit/c. This is very
often absolutely necessary in order to avoid major round off errors.
Steep, but required in the real world, required by the CRT, much of it
required by the printer, and also required by our standards.

This requirement is *only* there at publishing time. You can very
easily acquire linear, edit in linear and then just publish to whatever
space the output requires. This is how high end professionals work
and have been working from the very beginning of digital imaging.

Timo Autiokari
 
Kennedy McEwen said:
Take your own advice Timo - linearly encoded it
can only represent a density range of 2.4!

I try once more in another way:

1-bit digitalization represents whatever measurement range that you
choose to quantify into 1 bit. And it does it perfectly and linearly.

And 100000000-bit digitalization represents whatever measurement
range that you choose to quantify into 100000000 bits. And it does it
perfectly and linearly.

It looks to me that you need some serious education in digital
instrumentation.

Timo Autiokari
 
Timo Autiokari said:
You can very
easily acquire linear, edit in linear and then just publish to whatever
space the output requires. This is how high end professionals work
and have been working from the very beginning of digital imaging.
No it isn't - and never has been!

It is only very recently that high end drum scanners even captured the
image in a linear manner and encoded the gamma function digitally.
Photomultiplier based drum scans always applied gamma well before the
ADC, which is why they got such excellent results with the same number
of bits as conventional scanners.

Similarly, digital video imaging is, and always has been, implemented in
gamma space - the gamma is applied IN THE CAMERA, prior to ever being
digitally encoded.
 
Kennedy said:
Mike,
you are confusing the fact that Dolby is more than just a
compander - it has a frequency response as well. It is the frequency
response that causes the effect that you hear in your analogy, and this
would not occur with a flat response compander, such as the original DBX
system. Neither can it possibly occur in gamma encoding-decoding cycle.
The mean brightness level certainly increases, but the gamma function
certainly does not permit the output to clip - all encode-decode
functions are implemented with reference to full scale (ie. peak white
on images and 0dBv in audio).
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)


Hello

I do speak from experience that an undecoded signal like a snare drum,
recorded to peak, can overload the line input of a channel. If that
decoding was done in a virtual space like the speakers and not in the
tape return path, and I was not aware of it and added further eq, I
would certainly have input channel distortion. I would be hearing the
correct frequencies, but would be adjusting an undecoded signal.

Actually Bono from U2 always recorded his vocals like that, even in live
concerts, because he liked the effect. It made the extra high frequency
boost to the spill from the stage monitors really difficult to control.

That is my analogy of editing images in a gamma space while seeing a
linear one. That is why something like unsharp mask seems to work
disproportionately on the brighter ranges of the image. That is why
people try all sorts of masking to avoid halo effects. Doing sharpening
in a linear space does not have the same haloing for a given sharpness
setting, but does block the shadows, because the contrast is increased
evenly.

Philosophically I feel that should not be done, but curiously I usually
prefer it. It might just be conditioning. Our ears and eyes can adjust
to anything and regard that as definitive, digital TV for example.


Mike Engles
 
Timo Autiokari said:
I try once more in another way:

1-bit digitalization represents whatever measurement range that you
choose to quantify into 1 bit. And it does it perfectly and linearly.
Not if it is coded linearly without offset. 0 represents zero. 1 can
then represent whatever signal amplitude you like. The dynamic range
represented by the data is still the same.
It looks to me that you need some serious education in digital
instrumentation.
For someone who regularly misleads the general public on the use, or in
your parlance misuse, of gamma, it is surprising that you do not
recognise the effect of such a non-zero baseline on a logarithmic
measure such as dynamic range. Sad as it seems, I have probably
forgotten more about digital instrumentation, and its quantification,
than you ever knew.

Now please stop misleading the public and go back under your lonely
rock. We have tired of your latest attempt to gain credence for your
misplaced theories.
 
Mike Engles said:
Hello

I do speak from experience that an undecoded signal like a snare drum,
recorded to peak, can overload the line input of a channel.

Yes, it can happen with Dolby - usually as a consequence of drift and
ageing of the analogue circuits, or their misalignment in the first
place, which was a major problem with practical Dolby implementations.
There is also the question of how the peak signal is assessed in the
first place, because the companding is not flat across the frequency
range. Consequently there is a question of how the output level is
matched - are the high frequencies amplified or the low frequencies
attenuated? Either does the compression/expansion job, but one leads to
a volume lift when the encoder or decoder is switched out. However that
isn't what happens in gamma (or in a correctly adjusted audio compander)
- low level signals are always attenuated by the expander process,
rather than high level signals being further amplified. This is the
same issue Wayne was concerned about with the comparison when he
suggested his monitor would dazzle him if the effect was as in Dolby - but
it always works by extending the range downwards in the direction of the
low levels, not up - which would lead to the saturation problems you are
concerned about.

You have actually seen this yourself in the gamma curves you have
calculated - how many of those resulted in data ever exceeding the
maximum of the particular range used? None - your concerns about
headroom working in gamma space simply don't exist.
 
Wayne Fulton said:
So an important part is about preserving these lower unique values.

But in reality, when we look at a natural scene, it is not as
important as you seem to think/believe.

As a conclusion, I'd like to refer to Mr. Poynton's document at
http://www.poynton.com/Poynton-color.html where he writes:

"Permanent, easy solutions to many of the problems in tone and color
reproduction in computing require assistance - even leadership - from
the developers and manufacturers of hardware and software. Solving
<i>that</i> problem is the primary goal of the Gamma FAQ and Color FAQ
documents."

Please read that intently.

In other words Mr. Poynton says there that when your primary goal is
high quality results you should not read nor believe his so-called
"faq". And he confesses that he is an advocate of the industry. Easy
solutions very very rarely go hand in hand with high quality.

Timo Autiokari
 
Much confusion and gnashing of teeth in this thread. Can somebody tell
me if I've got the following right?

--- Brightness of a pixel on an ideal computer screen
using 24-bit colour:

Lpixel = Lmax * ( x / 255 ) ^ gamma

where
# Lmax is the brightness when the screen is displaying pure white
# x is an 8-bit number 0-255 and Red = Green = Blue = x
# gamma = 2.2 (fixed by the hardware in the screen)

--- Output of an ideal film scanner in 16-bit-linear mode:

y = 65535 * ( 1 - D )

where
# D is the slide density (not the log-density --eg. D=0.9 corresponds
to a transmission of 10%)
# Red = Green = Blue = y, assuming the slide is gray

--- Conversion from 16-bit-linear to 8-bit data:

x = 255 * ( y / 65535 ) ^ (1/gamma)
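Those three relations, written out as a sketch (the function names and the D = 0.9 example are illustrative, not from any particular scanner):

```python
GAMMA = 2.2

def screen_luminance(x, l_max):
    """Pixel brightness for 8-bit code value x on the ideal screen."""
    return l_max * (x / 255) ** GAMMA

def scanner_output(d):
    """16-bit linear value; d is the absorbed fraction, 1 - d the transmission."""
    return 65535 * (1 - d)

def to_8bit(y):
    """16-bit linear -> 8-bit gamma-encoded."""
    return round(255 * (y / 65535) ** (1 / GAMMA))

# A slide spot with 10% transmission (d = 0.9):
print(to_8bit(scanner_output(0.9)))   # 90
```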


thanks,
Tim
 
Much confusion and gnashing of teeth in this thread. Can somebody tell
me if I've got the following right?

--- Brightness of a pixel on an ideal computer screen
using 24-bit colour:

Lpixel = Lmax * ( x / 255 ) ^ gamma

where
# Lmax is the brightness when the screen is displaying pure white
# x is an 8-bit number 0-255 and Red = Green = Blue = x
# gamma = 2.2 (fixed by the hardware in the screen)


Lmax is not the brightness of the CRT screen, exactly.

The gamma curve is always scaled to be in the intensity range 0..1
The simple value^gamma refers to a 0..1 range of data.

So x/255 scales the original 8 bit value 0..255 to be in the range 0..1.
For example, the maximum value 255 divided by 255 is this 1, precisely.
Divide the value by 255 if the data is currently 8 bits, or 1024 if it is
10 bits, or 4096 if 12 bits, or 65535 if it is 16 bits.

Then raise to the gamma exponent.

Then multiply result by 255 or 1024 or 4096 or 65535 to bring it from
0..1 scale back to be the desired output range, which may not be the same
scale, for example it may simultaneously do 12 bit to 8 bit conversion.

result = 255 x (value/4096)^gamma for 12 to 8 bit examples.

The two numbers 255 and 4096 are due to the 8 bit and 12 bit ranges of
the data. The maximum result value (of 1) does drive the screen to
maximum, there is that correspondence, but it is not quite the same idea.
It is about the range of data instead of the screen. The screen has
adjustments for brightness to accommodate the range of the data.

This gamma processing is the encoding to prepare the data for the CRT
losses. The CRT hardware response is reciprocally opposite, 2.2 instead
of 1/2.2, which is just how life is, and the CRT does this curve itself,
at the electron gun, but before the screen phosphors. So the result we
see on the phosphors is again the linear original. The point of doing
this is to compensate for the nonlinear CRT response.

Then years later, when we invented digital data, this enhancement of
8 bit data (preservation of the lowest unique values) was discovered to
be a second reason to do it. Or rather, the existing required-anyway
scheme worked quite well as is; otherwise something else would have been
necessary for the 8 bit retention.

The value x does not require equal red, green, blue. Instead each
component is done individually, maintained individually; we are speaking
of one such value. When we just say consider some value, this is one
value. This one value is just for convenience of the discussion. You
could assume equal or grayscale luminance, but it is just one value.
 
Sorry, I'm getting too sloppy sometimes, going too fast without thinking
enough first. For nitpickers, some of the errors were that 4096 should be
4095, the 4096 range of 0..4095. Same for 1023.

And it is of course the video board that scales the analog output voltage
to drive the CRT, from the corresponding digital input range.
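With those corrected full-scale values, the 12-bit-to-8-bit encode can be sketched like this (2.2 gamma assumed; the function name is mine):

```python
GAMMA = 2.2

def encode_12_to_8(value):
    """12-bit linear (0..4095) -> 8-bit gamma-encoded (0..255)."""
    normalized = value / 4095              # scale into the 0..1 range
    corrected = normalized ** (1 / GAMMA)  # apply the 1/2.2 gamma encode
    return round(255 * corrected)          # scale to the 8-bit output range

print(encode_12_to_8(0), encode_12_to_8(4095))   # 0 255
```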
 
Wayne said:
result = 255 x (value/4096)^gamma for 12 to 8 bit examples.
[lots snipped]

Thanks for the reply. Sorry to be persistent, but I really want to
understand the relation between the physical quantities and the digital
data.

Suppose I take a slide (black and white, for simplicity) and scan it.
It seems clear that the brightness of the image on an ideal computer
screen should be proportional to the transparency of the slide. ie:

L = k*T

where L is the brightness of a particular pixel, in candelas,
T is the transparency of the corresponding point on the slide
and k is a constant

The log-density of the slide is: -log(T)

When I load this image into an editing program, it shows me the digital
data for each pixel. For a grey-scale JPEG, the editing program shows
me values Red = Green = Blue = x, where x is a number between 0 and 255.

Now what is the relation between the number x and the physical
quantities L & T? From what I have read I *think* it is this:

x = 255 * T^(1/gammascanner)
and
L = k * (x/255)^gammascreen

where gammascanner = gammascreen = 2.2

(If dealing with a 16-bit image, then 255 becomes 65535 in the above
equations)

Is that right?
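One quick check of the two relations: if gammascanner equals gammascreen, the round trip gives back L proportional to T. A sketch with k = 1 and 8-bit rounding ignored:

```python
gamma = 2.2

for t in (0.1, 0.5, 0.9):
    x = 255 * t ** (1 / gamma)        # file value for transparency t
    luminance = (x / 255) ** gamma    # ideal screen response with k = 1
    assert abs(luminance - t) < 1e-9  # the encode and decode cancel out
print("round trip recovers T")
```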

By the way, Wayne, your scanning-tips pages are great. Very clear.
Thanks for writing them.

-Tim
 
Tim said:
--- Brightness of a pixel on an ideal computer screen
using 24-bit colour:
Lpixel = Lmax * ( x / 255 ) ^ gamma

The luminance of the pixel:

L = (Lmax-Lblackpoint)*(x/255)^2.5+Lblackpoint

where:
Lmax is the luminance when the screen is displaying pure white
x is an 8-bit number 0-255 and Red = Green = Blue = x
gamma = 2.5 (fixed by the CRT tube)

Extremely large errors are created if the Lblackpoint is omitted. The
area on the display that is driven by R=G=B=0 always has some
luminance; part of that is due to the interior lighting reflecting off
the display surface, part comes from internal reflections inside
the CRT tube (or leakage transmittance in the case of flat panels).

There are no ideal displays that could show an area in a picture
that does not emit or reflect any photons at all.
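Timo's formula can be put straight into code; the example figures (Lmax = 100, Lblackpoint = 0.5 cd/m^2) are mine, purely illustrative:

```python
def crt_luminance(x, l_max=100.0, l_black=0.5, gamma=2.5):
    """Luminance for 8-bit code value x, including the black-point term."""
    return (l_max - l_black) * (x / 255) ** gamma + l_black

print(crt_luminance(0))     # 0.5 -- never truly zero
print(crt_luminance(255))   # 100.0
```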
--- Output of an ideal film scanner in 16-bit-linear mode:

y = 65535 * ( 1 - D )

The CCD sensors of scanners are linear with respect to transmittance (or
reflectance in the case of a flatbed scanner). A film scanner could then
manipulate the tonal reproduction in one way or another; there is no
single ideal way to do that.
--- Conversion from 16-bit-linear to 8-bit data:

x = 255 * ( y / 65535 ) ^ (1/gamma)

Yes, x rounded to integer.

Timo Autiokari
 