Which means you truncate (instead of round) at the expense of accuracy, causing
an error of almost 1 bit (0.FF).
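For concreteness, a quick Python check (a sketch only, assuming truncation means
v >> 8) shows the discarded residue is at most 255/256 of one 8-bit step, which
is the 0.FF figure above:

    # Largest amount truncation (v >> 8) discards, measured against one 8-bit step (256).
    max_residue = max(v & 0xFF for v in range(65536))
    print(max_residue, max_residue / 256)   # 255 0.99609375, i.e. 0.FF in hex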
Rounding is a fine tool for arithmetic computation, but this task is not
computing anything. We already have perfectly good data and the goal should be
to NOT change it. This data comes as 65536 possible 16-bit values, and our only
goal is to sort it into 256 possible 8-bit values, specifically into 256
equally spaced groups, each holding exactly 256 values.
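A minimal sketch of that mapping (assuming the usual shift; the 8-bit result is
just the high byte):

    def to_8bit_truncate(v16: int) -> int:
        """Map a 16-bit value (0..65535) to its 8-bit group: the high byte."""
        return v16 >> 8        # identical to v16 // 256

    # Example: the 16-bit values 0x1200..0x12FF all land in 8-bit group 0x12.
    assert to_8bit_truncate(0x1200) == to_8bit_truncate(0x12FF) == 0x12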
The data values are of course already correctly divided this way by high byte
value (each possible high byte represents 256 possible and equally spaced
16-bit values - the natural way). The truncation method (shift by 8, or divide
by 256) simply groups the data the way it is already grouped by high byte
value, without any additional false manipulation. There are precisely 256 equal
groups, equally spaced over the entire range (including perfectly at the
endpoints), each group containing precisely 256 values. It simply doesn't get
any better. There is no issue about rounding the values at all, nor is this
about computing NEW values.
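That is easy to verify exhaustively; a short Python tally (a sketch, nothing
more) confirms 256 groups of exactly 256 consecutive values each:

    from collections import Counter

    # Count how many of the 65536 possible 16-bit values land in each 8-bit group.
    groups = Counter(v >> 8 for v in range(65536))

    assert len(groups) == 256                        # 256 groups, 0..255
    assert all(n == 256 for n in groups.values())    # each holds exactly 256 values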
Accuracy is of course important, but I am more impressed by its preservation
than by any so-called half-bit accuracy of the +127/257 manipulation. You may
think the range is better centered on the 8-bit value, but it is already
arranged a different way, one which works extremely well without exception.
Adding 127 to all values should amount to nothing more than rounding, with no
actual effect on tonal linearity, except unfortunately at the endpoints. It
shortchanges the zero group to contain at most only 128 possible values,
0..127, instead of the 256 values 0..255. It also similarly places too many
values into the top group. These are not equal divisions, and this change sure
seems an error: modified data for no necessary reason.
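The endpoint behavior is easy to tally as well. A sketch only, and it assumes
the scheme literally means integer (v + 127) // 257; the exact counts at the
ends depend on the precise formula used, but either way the result is no longer
256 equal groups of 256 values:

    from collections import Counter

    # Tally bin occupancy under the (v + 127) // 257 reading of the scheme.
    bins = Counter((v + 127) // 257 for v in range(65536))

    print(bins[0], bins[255], bins[128])   # 130 128 257 under this reading
    # The interior bins hold 257 values each; neither end bin holds 256.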
We know that physical devices know no absolute values for the brightness of
RGB data; each device type reproduces any numeric value as best it can,
differently from other devices. Absolute accuracy is pretty much fictitious.
What is important instead is the linearity and the range of the data. The
truncation method respects these factors, which actually matter, without
changing the data at all, so I'd call that very accurate.
No, it produces exactly 256 possible results, while reducing the error in the
process (e.g., 65535/256 = 255 plus an error of 255/256, but 65535/257 = 255
exactly).
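Spelled out (Python's divmod, quotient and remainder):

    print(divmod(65535, 256))   # (255, 255): quotient 255, remainder 255
    print(divmod(65535, 257))   # (255, 0):   quotient 255, remainder 0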
Surely your calculator shows the same as mine, that 257 x 255 = 65535.
Therefore dividing 65535 values by 257 obviously sorts them into 255 groups of
257 values each. Granted, there are 65536 values, so the extra value could be
another group (not calculated by your method, however), but we have already,
incorrectly, used 255 of that group's values elsewhere. The +127 skew helps
hide this, but it is an approximation when none is needed, and I'd say a
serious error. The correct goal is obviously 256 equal groups of 256 values
each, and the data is already equally grouped this way by high byte. All we
need for the best result is what the data already says.
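Again easy to check exhaustively; a tally under plain division by 257 (setting
the +127 aside for the moment, a sketch only) shows exactly those 255 groups of
257, with only the single value 65535 left over for a 256th group:

    from collections import Counter

    # Tally plain division by 257 over all 65536 possible 16-bit values.
    bins = Counter(v // 257 for v in range(65536))

    full = [k for k, n in bins.items() if n == 257]
    print(len(full), bins[255])   # 255 groups of 257 values; bin 255 holds only the value 65535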
You stress that 65535 value, and the /257 scheme seems designed for it, but
it is only one of 65536 possible values. Either method gives 255 for it,
the absolute maximum value possible.
What about the 128 values from 128 to 255? Only truncation correctly gives zero
for this set.
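For example (again reading the scheme as (v + 127) // 257; a sketch only):

    # The 16-bit values 128..255 are all below one full 8-bit step (256).
    vals = range(128, 256)
    print({v >> 8 for v in vals})              # {0}: truncation keeps all of them in group 0
    print({(v + 127) // 257 for v in vals})    # {0, 1}: the +127/257 reading splits the set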
I doubt there is any actual visible harm from the +127/257 scheme, since any
other factor affecting scanned images is vastly larger. It seems wrong
nevertheless.