Yes, it is truncation, but it is intentional; its purpose is to group
the data evenly. There is no meaningful way to "round" those results
differently: 0 and 255 are the only possible values these two end
groups can take, and those results are full range, linear and precisely
accurate as they stand. Every point between these end points is also
equally good, perfectly distributed. It is all beautifully, optimally
and exactly perfect, in every possible respect, and by definition
(because this is simply how the numbers work).
The starting point is a function that maps numbers in the range [0.0 ... 1.0]
onto a set of integers. There are at least three methods:
1) divide by 255 and round. 0.0 maps to 0, 1.0 maps to 255 and everything
else has an average error of 0.25/255 or 1/1020.
2) divide by 256 and truncate. 0.0 maps to 0, 255/256 maps to 255.
Average error is 0.5/256 or 1/512
3) subtract 0.5/256, divide by 256 and round, 1/512 maps to 0, 511/512
maps to 255, and the average error is 0.25/256 or 1/1024.
Now you are getting ridiculous, Philip. It is impossible to map
numbers, even real numbers, from the range [0.0 .. 1.0] to a set of
integers by any of those methods!
1) divide by 255 results in real numbers in the range [0.0 ..
0.0039215686...] (the upper bound being 1/255), and no amount of
rounding will shift this range to [0 .. 255]. Perhaps you mean multiply
rather than divide.
2) divide by 256 and truncate has similar results, so again perhaps you
mean multiply.
3) subtract 0.5/256 (= 0.001953125) and divide by 256 results in the
range becoming [-0.00000762939453125 .. 0.00389862060546875] and, again,
no amount of rounding will map this to the range to which you refer.
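The ranges quoted above are easy to check numerically. A quick sketch, taking "divide" at face value as written (the function names are mine, for illustration only):

```python
# Output ranges of the disputed methods as literally stated, i.e. with
# "divide" rather than "multiply", evaluated before any rounding.

def method1(x):
    # "divide by 255 and round" -- the division step only
    return x / 255

def method3(x):
    # "subtract 0.5/256, divide by 256 and round" -- before rounding
    return (x - 0.5 / 256) / 256

# End points of the input range [0.0 .. 1.0]:
print(method1(0.0), method1(1.0))  # 0.0 .. ~0.0039215686 (= 1/255)
print(method3(0.0), method3(1.0))  # ~-0.0000076294 .. ~0.0038986206
```

No rounding of values this close to zero can reach the range [0 .. 255], which is the point being made above.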
Since none of the three methods that you describe actually performs the
operation you claim for it, the errors you compute are irrelevant.
Furthermore, even if you had specified methods which performed the
claimed operations, the error you compute would still be irrelevant,
since at no point in your argument do you define what that error is
measured against - i.e. what constitutes ZERO error or, in simple
terms, what you are claiming to be a perfect computation. Before you
can begin to convince anyone that your definition of perfect really is
perfect, you must explain why that is the case.
For the situation in the subject thread, I contend that integer division
by 256 (or a right shift by 8 bits) is perfect because it scales the
original number of states into the target number of states with equal
distribution and weighting throughout the full range. No other method
suggested so far achieves this property, and the only arguments put
forward for their alleged superiority are references to some undefined -
and apparently undefinable - error magnitude.
Using the comparative conversion suggested by Jens-Michael of an 8-bit
image to 4-bit for simplicity, the perfection of the integer division is
immediately apparent. Simply create a ramp from peak black to peak
white across an image 256 pixels wide, then convert the image to 4-bit
data using either integer division by 16 or the equivalent of the
alternative method you argue for. Ignoring your obvious computational
errors above, I suspect that this reduces to the function int((source +
8)/17), as suggested by Jens-Michael.
The two images are significantly different. Using simple integer
division by 16 (with truncation), the full range from black to white is
produced with an even population - as would be expected from a linear
ramp original, which also has an even population: each of the 16
colours from 0 to 15 occupies a band exactly 16 pixels wide. The "add 8
and divide by 17" method also produces the full range from black to
white but, contrary to what Jens-Michael suggested, looks much less
natural, because each colour is now represented by a band 17 pixels
wide except for peak black and peak white, which are only 9 pixels wide
each. In short, a linear ramp has been transformed into an "S" curve!
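The experiment described above is easy to reproduce. A minimal sketch in Python (the variable names are mine):

```python
from collections import Counter

# An 8-bit linear ramp, one pixel per source value, 256 pixels wide.
ramp = list(range(256))

# Method 1: simple integer division by 16 (equivalent to a 4-bit right shift).
div16 = [s // 16 for s in ramp]

# Method 2: the "add 8 and divide by 17" rounding variant.
add8_div17 = [(s + 8) // 17 for s in ramp]

# Count how many pixels land in each of the 16 output colours.
print(sorted(Counter(div16).items()))
print(sorted(Counter(add8_div17).items()))
# div16:      every colour 0..15 holds exactly 16 pixels.
# add8_div17: colours 1..14 hold 17 pixels each; 0 and 15 hold only 9.
```

The counts confirm the band widths described above: the division method populates all 16 colours evenly, while the rounding variant compresses the two end bands.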
By examining the resulting data from exactly this test it is very clear
that the reduced-error argument for the alternative to simple integer
division is false, because it ignores one basic fact: as the number of
colours is reduced, the value which represents peak white also falls as
a proportion of the number of states available in the range. In other
words, for 16-bit data, peak white is 65535 of 65536 available states;
for 8-bit data it is only 255 of 256 states; whilst for 4-bit data it
is only 15 of 16 states. In short, reducing the number of available
colours ALSO reduces the threshold required to achieve peak white.
Consequently, the "error" that you estimate from the difference between
the integer result and the real-number division is completely erroneous
in imaging terms. Quite simply, your "constant" or average error
estimates, given above and in previous posts, are complete bunkum -
they may be accurate in numerical terms as the minimum difference
between integers and real numbers, but in terms of image luminance they
are completely false.
In fact, if you compute the luminance error correctly (taking account
of the change in thresholds across the range), both methods have
*exactly* the same average error across the entire range. I therefore
invoke Einstein's universal rule - nothing should be more complex than
it needs to be. Simply right shift the data by the required number of
bits. On an x86 processor this is around 2-5 clock cycles per
instruction, depending on whether the data is in cache or not, compared
to 3-5 clock cycles for an add and 26-28 cycles for an integer
division, again depending on whether the data is a cache hit or not.
In short, shifting the data is around 6 to 15 times faster, has exactly
the same mean luminance error as the alternative, and retains histogram
integrity.
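The shift itself is trivial to verify. A quick check (shown in Python for convenience, though the clock-cycle argument above concerns native x86 instructions) that a right shift is identical to integer division by the corresponding power of two for unsigned data:

```python
# For non-negative integers, a right shift by n bits equals integer
# division by 2**n. So 16-bit -> 8-bit reduction is simply s >> 8,
# and the 8-bit -> 4-bit example above is s >> 4.
for s in range(65536):
    assert s >> 8 == s // 256
for s in range(256):
    assert s >> 4 == s // 16

print("shift/division equivalence holds over the full range")
```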
(I'm done with this thread. Unless I made a serious error, you can
keep counting your fractions; I don't care.)
You have made numerous serious errors in that, and previous, posts!