How to calculate an 8-bit version of a 16-bit scan value

  • Thread starter: SJS

SJS

Hi,

I would like to convert 16-bit scan values to 8-bit values but am unsure
of the correct method. I am assuming a gamma of 2.2.

My current formula is :

output = ((input / 65536) ^ (1 / 2.2)) * 256

Is this correct ?

Some of my scans have 8-bit values of 1 or 2. This surprises me as the
minimum non-zero 16-bit scan value my scanner can produce is 4.
Applying the above formula gives me an 8-bit value of 3 so I should
never get an 8-bit value of 1 or 2 in a raw file. These are colour
files not greyscale.

-- Steven
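A rough Python sketch of the formula as posted (the function name is illustrative, and the ^ in the post is read as exponentiation):

    # Sketch of: output = ((input / 65536) ^ (1 / 2.2)) * 256
    # Assumes the 16-bit input is linear and the 8-bit output is gamma 2.2.
    def to_8bit_gamma(value16):
        normalized = value16 / 65536.0        # scale 0..65535 into roughly 0..1
        encoded = normalized ** (1.0 / 2.2)   # apply gamma 2.2 encoding
        return min(255, int(encoded * 256))   # scale to 0..255 and truncate

    print(to_8bit_gamma(4))   # 3, matching the value quoted for the smallest non-zero scan value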
 
SJS said:
SNIP
My current formula is :

output = ((input / 65536) ^ (1 / 2.2)) * 256

Is this correct ?
SNIP

To convert 16 bit to 8 bit drop the upper 8 bits or divide by 256.
Do an integer divide to drop the fractions.

There are three 16 bit values in color, 16 bits of red, 16 bits of green and
16 bits of blue. For a total of 48 bits.

So, you would divide the Red value by 256, the Green Value by 256 and the
Blue value by 256, leaving 24 bit color.
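Taken literally, the integer-divide part of that suggestion looks like this per channel (hypothetical example values; whether 256 is the right divisor, and which byte to keep, is argued over below):

    # Integer-divide each 16-bit channel by 256, giving three 8-bit channels (24-bit colour).
    r16, g16, b16 = 61217, 32768, 4
    r8, g8, b8 = r16 // 256, g16 // 256, b16 // 256
    print(r8, g8, b8)   # 239 128 0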
 
To convert 16 bit to 8 bit drop the upper 8 bits or divide by 256.
Do an integer divide to drop the fractions.

Oops, in my post I should have stated that the 16-bit values are gamma
1.0 and the 8-bit values are gamma 2.2. Are you sure that I simply drop
the lower 8 bits ? Wouldn't this just reduce my density range to 256 ?

-- Steven
 
SJS said:
Oops, in my post I should have stated that the 16-bit values are gamma
1.0 and the 8-bit values are gamma 2.2. Are you sure that I simply drop
the lower 8 bits ? Wouldn't this just reduce my density range to 256 ?

-- Steven
Do not drop the lower 8 bits, drop the upper 8 bits.
In a 16 bit word, you want to drop the high byte and keep the low byte.
If you know how to shift bits the operation is a << 8 or a left shift of 8
bits.

16 bit values are 0-65535 and 8 bit values are 0-255

By dropping the high byte you reduce the number of colors that can be
represented from 65536 to 256 in each Red, Green or Blue color.

I do not know if it will affect gamma.

All I know is that the output of the scanner is nothing but data. How much
data depends on what the software has told the scanner to do.

Check out www.scantips.com.
Wayne Fulton is really good with scanners. Also check http://hamrick.com/
He writes software for scanners (VueScan).
 
CSM1 said:
Do not drop the lower 8 bits, drop the upper 8 bits.
In a 16 bit word, you want to drop the high byte and keep the low byte.
If you know how to shift bits the operation is a << 8 or a left shift of 8
bits.
I think you need to revise both of those statements.

Left hand... meet right hand!
16 bit values are 0-65535 and 8 bit values are 0-255
So in 16 bit format, we get red=61217 for example, or EF21 in
hexadecimal.

Dropping the upper 8 bits results in 21 in hexadecimal, or 33 decimal.
Now, 61217 is quite a high luminance red in 16-bit colour, but 21 is a
low luminance red in 8-bit colour - so quite clearly, dropping the upper
8 bits is complete rubbish. Indeed, doing so means that very small
changes in luminance produce quite large changes in the converted data.
For example, 61182 in 16-bit colour is only 0.06% darker than the
original colour but, since it is EEFE in hexadecimal, using your
conversion results in an 8-bit colour level of FE, or 254, which is
nearly saturated!

Now look at your other suggestion, left shifting the data by 8 bits.
61217 decimal, EF21 hexadecimal, becomes EF2100 hexadecimal, which is a
24-bit number of value 15671552 in decimal. More rubbish! That is
because left shifting is equivalent to *multiplying* the data by 2 for
each shift! So what do we do with this meaningless huge number? Drop
the upper bits as in your other suggestion and end up with zero for
everything?
I do not know if it will affect gamma.
Or, apparently, what you are talking about! If you do find that your
operation works then you have a more fundamental problem with your data,
such as reading big-endian data as little-endian or vice versa.

SJS's original suggestion gives the correct solution, although dropping
the lower 8 bits is irrelevant if the division is integer.
Alternatively, shift *right* by 8 bits!

As for gamma, I don't think he need make any adjustment at all. Gamma
is applied to the 16-bit and 8-bit data scaled to the peak white and
black levels, so it automatically works. Having said that, SJS's later
post indicated that the gamma of the two data sets was different, so he
will have to apply gamma to the 16-bit data prior to converting - doing
so after will result in missing codes.
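A quick check of the worked example above, contrasting the keep-the-low-byte suggestion with the right shift that keeps the high byte (values taken from the post):

    x = 0xEF21                  # 61217: a high-luminance red in 16-bit terms
    print(x & 0xFF)             # 33  -- keeping only the low byte gives a very dark value
    print(x >> 8)               # 239 -- shifting right by 8 keeps the high byte, still bright

    y = 0xEEFE                  # 61182: only about 0.06% darker than x
    print(y & 0xFF, y >> 8)     # 254 238 -- the low byte swings wildly, the high byte barely moves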
 
As for gamma, I don't think he need make any adjustment at all. Gamma
is applied to the 16-bit and 8-bit data scaled to the peak white and
black levels, so it automatically works. Having said that, SJS's later
post indicated that the gamma of the two data sets was different, so he
will have to apply gamma to the 16-bit data prior to converting - doing
so after will result in missing codes.

Hi Kennedy,

I think the formula does gamma conversion ( ^(1/2.2) ). I have since
read that the formula changes at low luminance values so that the slope
is limited to 4.5 (apologies if my terminology is incorrect here). This
slope limiting explains how I could get an 8-bit sample of 1 or 2 from a
16-bit sample with a granularity of 4 (a 14-bit value << 2).

This slope limiting seems a bit excessive for a scanner as it limits the
density range to just over 1000. Testing with my Canon scanner
indicates that the slope is limited to 9 instead of 4.5.

Regards,

Steven
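The slope-limited encoding being described sounds like the Rec. 709 transfer curve, which uses a linear segment of slope 4.5 near black and a 0.45-power law above it. A sketch of that form follows; the 0.018 threshold and the 1.099/0.099 constants are the Rec. 709 ones, and whether any given scanner uses exactly these numbers is an assumption:

    # Rec. 709-style encoding: linear toe of slope 4.5, power curve above it.
    def encode_709(linear):                       # linear light in 0.0 .. 1.0
        if linear < 0.018:
            return 4.5 * linear                   # slope-limited segment near black
        return 1.099 * linear ** 0.45 - 0.099     # power-law segment

    # With a granularity of 4 in the 16-bit data, codes such as 56 and 116 are reachable
    # and land on the linear toe, giving 8-bit values of 1 and 2:
    print(round(encode_709(56 / 65536.0) * 255))    # 1
    print(round(encode_709(116 / 65536.0) * 255))   # 2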
 
SNIP
SJS's original suggestion gives the correct solution, although dropping
the lower 8 bits is irrelevant if the division is integer.
Alternatively, shift *right* by 8 bits!

Although binary correct, the proper thing to do is divide by 257.
If we assume 255 to be the brightest value one can represent in 8 bits and
65535 the brightest in 16 bits, then dividing by 257 will preserve that
relation. Doing so will preserve slightly more accuracy.
This of course assumes equal gamma, or prior gamma adjustment.

Bart
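A small numeric check of the two divisors at the top of the range:

    print(65535 / 257)                  # 255.0         -- the range maxima line up exactly
    print(65535 / 256)                  # 255.99609375  -- nearly a full code past 255
    print(65535 // 256, 65535 // 257)   # 255 255       -- truncating division hides the difference at the very top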
 
Bart van der Wolf said:
SNIP

Although binary correct, the proper thing to do is divide by 257.
If we assume 255 to be the brightest value one can represent in 8 bits and
65535 the brightest in 16 bits, then dividing by 257 will preserve that
relation. Doing so will preserve slightly more accuracy.
This of course assumes equal gamma, or prior gamma adjustment.

Bart
Thanks, Bart. You are so correct.
 
Bart van der Wolf said:
SNIP

Although binary correct, the proper thing to do is divide by 257.
If we assume 255 to be the brightest value one can represent in 8 bits and
65535 the brightest in 16 bits, then dividing by 257 will preserve that
relation. Doing so will preserve slightly more accuracy.
This of course assumes equal gamma, or prior gamma adjustment.
Bart, I am surprised at you. Divide by 256 and truncate, not round,
dear boy! Or just right shift by 8 bits.

Division by 257 is, though in a minor manner, simply erroneous -
particularly in shadows.
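For non-negative integers, the two operations named here are the same thing, as a one-line check shows:

    x = 61217
    print(x // 256, x >> 8)   # 239 239 -- truncating division by 256 equals a right shift by 8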
 
Kennedy McEwen said:
Bart, I am surprised at you. Divide by 256 and truncate, not round,
dear boy! Or just right shift by 8 bits.

Division by 257 is, though in a minor manner, simply erroneous -
particularly in shadows.


floor((65535 + 128) / 257) = 255

That _is_ the correct way to convert a 0..65535 value to a 0..255 value.
Dividing by 256 is wrong.

Chris
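Generalising that line to an arbitrary 16-bit value x (the generalisation is my reading of the post; the +128 makes the floor division round to the nearest 8-bit level rather than truncate):

    # Rounded division by 257: maps 0..65535 onto 0..255, rounding to the nearest level.
    def to8_rounded(x):          # illustrative name
        return (x + 128) // 257

    print(to8_rounded(0))        # 0
    print(to8_rounded(65535))    # 255
    print(to8_rounded(129))      # 1 -- the case disputed further down the thread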
 
floor((65535 + 128) / 257) = 255

That _is_ the correct way to convert a 0..65535 value to a 0..255 value.
Dividing by 256 is wrong.

Oh please no. What next ? Bankers' rounding ?

I'm sure that the correct way to convert a 16-bit value to 8-bit is to
ignore the last 8 bits (or shift right 8 bits). The range is split into
65536 (not 65535) steps and the shift or drop operation changes the
split into 256 (not 255) equal steps. Any rounding (e.g. + 128) will
reduce the size of the 0th step. 'floor((65535 + 128) / 257)' will
result in the 0th step being half the width of the remaining steps.

-- Steven
 
SJS said:
Oh please no. What next ? Bankers' rounding ?
Indeed, this thread is becoming educational - as to how poor the average
knowledge of elementary mathematics, even simple arithmetic, is today!
 
Chris Cox said:
floor((65535 + 128) / 257) = 255

That _is_ the correct way to convert a 0..65535 value to a 0..255 value.
Dividing by 256 is wrong.
I do hope they keep you well away from coding these days, Chris!

For data= 129, your method gives the WRONG answer!
(129+128)/257 = 1.000, and floor(1.000) = 1, but the correct answer is 0!

However,
129 div 256 = 0, the correct answer.
255 div 256 = 0, the correct answer.
256 div 256 = 1, the correct answer.
65535 div 256 = 255, the correct answer.
32768 div 256 = 128, the correct answer.
32767 div 256 = 127, the correct answer!

In the programming language you used, this is simply floor(x / 256).

For reference, all methods of converting 16-bit data to 8-bit should
result in *exactly* 256 sequential 16-bit numbers resulting in the same
8-bit result for *every* 8-bit value. Your proposed formula results in
257 16-bit numbers mapping to every 8-bit value except for 0 and 255,
which get only 129 each. Why do you want to discriminate against these levels?

Don't they teach basic arithmetic in schools any more?
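A quick tally of how many 16-bit codes land on each 8-bit level under the two formulas (a check of the counts quoted above):

    from collections import Counter

    trunc = Counter(x // 256 for x in range(65536))          # divide by 256 and truncate
    rnd   = Counter((x + 128) // 257 for x in range(65536))  # rounded division by 257

    print(trunc[0], trunc[128], trunc[255])   # 256 256 256 -- every level gets exactly 256 codes
    print(rnd[0], rnd[128], rnd[255])         # 129 257 129 -- 0 and 255 get roughly half as many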
 
For reference, all methods of converting 16-bit data to 8-bit should
result in *exactly* 256 sequential 16-bit numbers resulting in the same
8-bit result for *every* 8-bit value. Your proposed formula results in
257 16-bit numbers mapping to every 8-bit value except for 0 and 255,
which get only 129 each. Why do you want to discriminate against these levels?

Don't they teach basic arithmetic in schools any more?

It depends on the kind of math you use. For example, f(x) is a linear
function that maps [0 .. 65535] to [0.0 .. 1.0]. In other words, f(x)
can be written as f(x) = x / 65535.

Likewise, g(x) maps [0 .. 255] to [0.0 .. 1.0] and can be written as
g(x) = x / 255.

Now we need a function h(x) from [0 .. 65535] to [0 .. 255] such that
f(x) = g(h(x)). Now x/65535 = h(x)/255, therefore h(x) = 255x/65535 =
x/257.
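The identity doing the work here is that 65535 factors as 255 × 257, so the exact mapping h(x) = 255x/65535 reduces to x/257:

    print(255 * 257)                   # 65535
    h = lambda x: x * 255 / 65535      # the exact range mapping
    print(h(65535), 65535 / 257)       # 255.0 255.0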
 
Kennedy McEwen said:
Bart, I am surprised at you. Divide by 256 and truncate, not round,
dear boy! Or just right shift by 8 bits.

As I said, in binary integer calculation, shifting bits and multiplying or
dividing by a power of 2 are correct; truncation is implicit in most
programming languages. But in this case we are trying to preserve as much
accuracy as possible when mapping(!) one range onto another.

Using simple binary math in multiplication would otherwise result in a
maximum value of 255*256=65280, thus wasting potential accuracy of 255
values.
Division by 257 is, though in a minor manner, simply erroneous -
particularly in shadows.

Range mapping is the name of the game. It is not erroneous, but intended
behavior!!!

Dividing 65535 by 256 is 255.996... (that is closer to 256 than 255, so you
prefer an error of almost 1 bit). Dividing 65535 by 257 is 255, a perfect
mapping of range maxima.

Dividing 0..255 by 256 or 257 is 0 in all cases. Dividing 256 by 256 is 1,
while 256 divided by 257 is still zero, such is the price of mapping. The
behavior is intended, in order to preserve 255 more potential values in the
highlights.

Bart
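The arithmetic in the post, spelled out (the multiplication lines show the reverse, 8-bit-to-16-bit, direction Bart is alluding to):

    print(255 * 256)   # 65280 -- scaling 8-bit values back up by 256 never reaches 65535
    print(255 * 257)   # 65535 -- scaling back up by 257 maps 255 onto the 16-bit maximum
    print(256 // 257)  # 0     -- the cost in the deep shadows that the mapping accepts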
 
Philip Homburg said:
Don't they teach basic arithmetic in schools any more?

It depends on the kind of math you use. For example, f(x) is a linear
function that maps [0 .. 65535] to [0.0 .. 1.0].

Correct. Range mapping doesn't have to follow simple power-of-2 boundaries.

Bart
 
Now we need a function h(x) from [0 .. 65535] to [0 .. 255] such that
f(x) = g(h(x)). Now x/65535 = h(x)/255, therefore h(x) = 255x/65535 =
x/257.

Hmmm, I fear I am arguing with people much smarter than myself but here
goes.

Perhaps the confusion exists because the light measurement (for density
range) starts at 1 (white) and tends towards 0 (black) whereas our
numbering starts at 0 and tends towards 1 (0/256 to 255/256).

The light level represented by 255 is really the area greater than 255
and includes 256. Sensors can't really measure pure black and all they
can do is say that 0 (black) is darker than the minimum detectable light
(which would cause a reading of 1).

So, when talking about readings from sensors (8-bit or 16-bit) and
considering that our light range goes from 1 downwards, we should add 1
to any reading. This also means we can compare readings and determine
ratios correctly. For example, 255 becomes 256/256 and 0 becomes 1/256.
From this we can see that 255 means white and 0 means at most 1/256 of
the intensity of white.

None of this changes my belief that the correct way to convert 16-bit to
8-bit is to drop the rightmost bits (assuming no gamma conversion).

-- Steven
 
Dividing 65535 by 256 is 255.996... (that is closer to 256 than 255, so you
prefer an error of almost 1 bit). Dividing 65535 by 257 is 255, a perfect
mapping of range maxima.

Dividing 0..255 by 256 or 257 is 0 in all cases. Dividing 256 by 256 is 1,
while 256 divided by 257 is still zero, such is the price of mapping. The
behavior is intended, in order to preserve 255 more potential values in the
highlights.
Hi Bart,

I still don't think the 257 concept is correct. Since we are mapping a
range of 1 towards 0 perhaps we should use the one's complement of the
binary numbers when changing the precision.

e.g. complement 65535 = 0
shift right 8 = 0
8-bit complement = 255

We are using numbers that map a range from 0 upwards when the range is
really defined from 1 downwards.

-- Steven
 