How to calc 8-bit version of 16-bit scan value

  • Thread starter: SJS
Kennedy said:
Unfortunately, Don, very little does seem to be getting through. Like
most kids, he still hasn't learned that he doesn't know it all.

Why am I not surprised to read this kind of statement from you? Well, it
must have been predictable - even for a 'kid' like me.

Grossibaer
 
Kennedy said:
Indeed I did, because that is exactly the case. The zero exists and
defines the base level. There are still 65536 states in the range,
which is the number that I used.

Indeed there are 65536 states, and I never denied that. But your
arguments contained the VALUE 65536 as part of the mathematical proof of
your concept, while using the zero as a VALUE in this 'proof' at the
same time.
Either your calculations may not be zero-based, or you may never use the
number 65536 in the same formula (but you were not the only one mixing
both value ranges in the same argument).
The words I used were "You ignore the zero state" - at no time did I
refer to the number 65536 except in reference to the total number of
states in the range.

If I always use the range 0..65535 then I do not ignore the zero state
and also do not use any different number than 65536 states, as the range
0..65535 includes the zero as well as the 65535 and all 65534 values in
between.

Clearly not!

Unlikely given the first statement.

You're even ripping apart logical statements combined with an AND
operator. Okay, it's completely fruitless to further answer any of your
posts.

The numbers are the same, but what they refer to requires language other
than mathematics.

Indeed. It not only requires language other than mathematics but also
requires understanding other than mathematics. And there you seem to
lack quite a bit.
I never denied that dividing a 16 bit value by 256 (shifting it right 8
bits) gives a mathematically valid mapping of 16->8 bit. But it does no
justice to the task of mapping a 16 bit brightness level to an 8 bit
brightness level.
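The two conversions argued over in this thread can be sketched in a few lines (a minimal Python sketch with names of my own choosing, not anyone's actual code):

```python
from collections import Counter

def by_truncation(v16: int) -> int:
    """Divide by 256, i.e. shift right by 8 bits."""
    return v16 >> 8

def by_peak_ratio(v16: int) -> int:
    """Divide by 257 (the ratio of peak values, 65535/255) and round."""
    return round(v16 / 257)

# Both map the extremes of the range identically:
assert by_truncation(0) == by_peak_ratio(0) == 0
assert by_truncation(65535) == by_peak_ratio(65535) == 255

# But the bucket widths differ: truncation sources every 8-bit level
# from exactly 256 input levels, while the /257 method gives the end
# levels 0 and 255 only 129 inputs each and the levels between 257 each.
trunc = Counter(by_truncation(v) for v in range(65536))
ratio = Counter(by_peak_ratio(v) for v in range(65536))
assert set(trunc.values()) == {256}
assert ratio[0] == ratio[255] == 129 and ratio[128] == 257
```

So the disagreement is not about which mapping is mathematically valid (both are), but about how the input levels should be distributed over the output levels.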
You have repeatedly accused me of using 65536 in
reference to a peak level, which I certainly have not done

No, as part of the mathematical proof of your concept - in a formula
that works zero-based. That's a difference and that's invalid.

Okay, I'm gone now. Do what you want and get what you desire. And enjoy
a happy smile on your face that you 'obviously must have defeated me'
since I'm not answering anymore.

My life extends above and beyond this newsgroup, and there are other and
far more profitable things to do than arguing with someone who wouldn't
accept reality even if it bit him in the face.

'don't come with facts - I have my prejudice' seems to be your concept.
Be happy with it and enjoy mathematics. I hope it will not fail you
someday when it really matters.

Grossibaer
 
Kennedy said:
No, it is true in the output level from your video card. 255 and 15
represent the same identical peak white output in their relevant scales
- there is no whiter level. Similarly there is nothing blacker than 0
and this is identical in both scales.


Perhaps you will enlighten us all with your interpretation of an output
level which is darker than 0, in either scale, together with outputs
which are lighter than 15 and 255 in their relevant scales!

It's not the output, it's the INPUT you're working with.
And here we have the cause of your problems understanding the concept.
We're not discussing mapping output levels of a video card, we were
discussing converting an INPUT value from a sensor to a different value
range. And that's a big difference. Maybe too big for you.


No, the results that you got prove that your video card and screen gamma
are incorrectly set!

Oh, no, surely I had to calibrate the gamma for the original image and
every converted image differently?
No ... wait ... sounds stupid ... IS stupid!
What difference does the gamma or any other calibration make if I present
the original image data and converted image data on the same device?
If I would compare the image on screen with the photographed object in
my hands, then it would of course make a difference, but when I just
compare the quality of one conversion and another conversion with the
original image on the same screen, it is completely unimportant.

Oh, well, with 16 bits per color channel, you have actually NEVER SEEN
the original 48 bit scan on your display. Or does your monitor do
48/64 bit color depth? And when you calibrated your monitor to give the
best results for your conversion, it's obvious that your conversion
gives the best results.

Never trust a statistic you didn't cheat yourself.
Precisely, which is why what matters is luminance output and *not* which
conversion produces minimum error against an arbitrary mathematical
rounding scheme. You are the correspondent who has been continually
referring to one conversion producing a lower error than the other, yet
you have only been able to quantify that error in mathematical terms
against some arbitrary numerical reference, rather than in luminance
terms.

Neither was I the one who threw the average or total error into the
discussion, nor did I calculate it in a mathematical way in any of my
posts (I was just using the - correct - values someone else posted
before). I rather explained more than once what happens with the
brightness levels one way and the other. But it seems this slipped your
attention (along with other things).
And a histogram is nothing other than a visible representation of a
mathematical result.
Liar! I quote (from your message of 27th June):
"You can obviously not even _count_ right."

Oh, you used your calculator to count to 9? Well, then... I'm speechless
about your mathematical skills.

I just wanted to do you justice and read (and answer) all of the posts
you wrote, even if I originally wanted to stop with my last answer to one
of your other posts, but as I wrote there: it's fruitless.
And I have better things to do than wasting hours with this discussion,
since I already know that my opinion has been seconded by all people I
asked to judge my results, and knowing that you will never change your
mind.

After all, the purpose of this whole thread was not to change your mind
(or mine) but to answer someone's question about which algorithm would be
best for a conversion. There have been enough answers, simple and
complicated, good and bad (not 'correct' and 'wrong'), so I think this is
really the point where no further reply is needed.

I bet you cannot resist replying anyway. But chances are I won't read
it.

Grossibaer
 
SJS said:
I can see the sense in your approach here (add 8, divide by 17). This
does allow you to record the intensity extremes (0 and 1) correctly at
the expense of slightly increasing the width of each step in between.

You captured exactly my thoughts when working on the problem _without_
any other input.
Whether the error goes from 0 to 15 or from -8 to +7 depends on how the
output is interpreted. If I had a display system (video card / CRT)
running in 4-bit mode I would expect a 0x0 pixel to be black. But which
value in 8-bit mode would give me the same display? Would it be 0x00
or 0x07 or 0x0F? Similarly, in 4-bit mode I would expect 0xF to be
white. But would the equivalent in 8-bit mode be 0xF0 or 0xF7 or 0xFF?

(besides, it is -8 to +8, since the ranges except black and white are 17
wide and not 16, with zero error in the middle, but that's unimportant to
the concept)

The question is how you think the 0x08 value should be shown: as pitch
black or dark gray. And should 0x0f be shown as pitch black or rather as
dark gray (which would be the result for 0x10 in both methods)?
And I found that the +8/17 method gives the better results.
Of course the difference is much less visible when it comes to 16 to 8
bit conversion.
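The +8/17 method discussed above can be sketched as follows (an assumed straightforward Python implementation, written for illustration only):

```python
def to_4bit(v8: int) -> int:
    """Add 8, then divide by 17: rounds to the nearest 4-bit level."""
    return (v8 + 8) // 17

def to_8bit(v4: int) -> int:
    """Multiply by 17: maps 0 -> 0 and 15 -> 255, so both extremes survive."""
    return v4 * 17

# The intensity extremes are recorded exactly:
assert to_4bit(0) == 0 and to_4bit(255) == 15
assert to_8bit(to_4bit(0)) == 0 and to_8bit(to_4bit(255)) == 255

# The round-trip error is centred on zero, spanning -8..+8:
errors = [v - to_8bit(to_4bit(v)) for v in range(256)]
assert min(errors) == -8 and max(errors) == 8
```

This is the trade-off described above: the extremes 0x00 and 0xFF map onto 0x0 and 0xF exactly, at the cost of slightly wider steps (17 instead of 16) in between.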
I have tried to see how Photoshop converts say from 4-bit to 8-bit but
my junior version doesn't seem to support such things.

Get a 4 bit grayscale image and save it as true color image. Then look
at the color values.
If Photoshop used *16 for this job, you would never see a 255 color
value in the resulting image. If you do, it's definitely not using /16
and *16 for the conversion (or it uses different algorithms for both
directions).
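The proposed check rests on a small piece of arithmetic which can be sketched like this (assuming 4-bit values 0..15):

```python
# Expand all sixteen possible 4-bit values with each candidate factor:
times_16 = {v4 * 16 for v4 in range(16)}
times_17 = {v4 * 17 for v4 in range(16)}

# With *16 the brightest output is 240, so 255 can never appear;
# with *17 the 4-bit white (15) becomes the 8-bit white (255).
assert max(times_16) == 240 and 255 not in times_16
assert max(times_17) == 255
```

So a single 255 anywhere in the converted image rules out the *16 expansion.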
I hope that
experts who write mainstream image software have developed a 'standard'
that covers this. I can see pros and cons to both approaches (divide by
256 versus divide by 257) but if one is in common use then I would also
use it as it is probably at least as good as the alternatives and also
it is usually better to follow standards rather than fight the entire
world. If there are lots of approaches in common use then I guess we
have a problem.

As I already stated several times: what gives the better result is the
better solution.
And after conversion you end up with an image that has certain data and
is shown in a certain way. And if this image pleases you, then it is
completely unimportant how you got it - as long as you end up with an
image that pleases you.

And if Photoshop and all other programs scrambled your image into
unusable crap, and no program provided any 'working' solution, well, you
would get the best you can get and you would have to be content with the
result. Even if there COULD be a better solution.

Both methods produce a valid 8 bit (per color) image which would show up
identically in any software capable of handling 8 bit depth per color
(within the limitations of the output device used). Only the content of
the image would differ a bit. So the important standard is the format of
the resulting image, not the conversion method.
Thanks for your input. I can finally see the sense in dividing by 257
as suggested by others in this thread (Bart was first I believe).

Maybe - I only remember that I wasn't, even if I could easily jump in
with the experience I gained just a few months ago when writing code for
exactly this problem.
Do you know how software (e.g. Photoshop) converts data from one format
to another ? Is there a popular standard ?

Aren't algorithms claimed as intellectual property? AFAIK the USA (driven
by the large companies) is trying to make algorithms patentable.
I guess nobody will tell the 'inner secrets' of their software and
whether they use a shift right by 8 bits or make a more complicated
division. ;)

Grossibaer
 
Jens-Michael Gross said:
Indeed there are 65536 states, and I never denied that. But your
arguments contained the VALUE 65536 as part of the mathematical proof of
your concept, while using the zero as a VALUE in this 'proof' at the
same time.

Where? You have stated these allegations on several occasions, but have
not once identified the statement in my argument where both *values* of
0 and 65536 are used. The divisor is determined by the ratio of the
number of states. There are 65536 states in the original format
(whether that ranges from 0 to 65535 positive binary, -32768 to +32767
twos complement or any other 16bit coding scheme is irrelevant).
Similarly there are 256 states in the output format (and again, whether
that 8 bit range is 0 to 255 positive binary, -128 to +127 two's
complement or any other format is equally irrelevant). Photoshop, for
example, uses integer maths for 16-bit computation, so the range is
-32768 to +32767, hence previous comments that PS only really supports a
15-bit range.

Using *only* the number of states in each range we can determine the
ratio between them, and thus the divisor of the conversion, to be
65536/256 = 256.

Please state *exactly* where in the above deduction I use both 0 and
65536 in the same range - otherwise apologise for your repetitive and
unfounded allegation.
Either your calculations may not be zero-based, or you may never use the
number 65536 in the same formula (but you were not the only one mixing
both value ranges in the same argument).

The argument is stated completely above - show where *both* 0 and 65536
are used in defining the number of states in the range. If both states
existed then there would be a total of 65537 states - I have not used
that number in any part of the discussion.
If I always use the range 0..65535 then I do not ignore the zero state
and also do not use any different number than 65536 states, as the range
0..65535 includes the zero as well as the 65535 and all 65534 values in
between.

However you have not used the number of states in the range in your
conversion, but the peak value - which you assume to be 65535 (even
though in 16bit integers that is +32767).

Actually, yes - see your statement which you repeated yet again in the
third paragraph quoted above. This is a clear, although totally
unjustified, statement that you have repeatedly made that I use *both* 0
and 65536 values in the same range, but not once have you specified
where that has occurred in my argument. In fact, I have been at pains
to point out that the ratio between the two ranges, and thus the divisor
in any conversion, is completely independent of the max and min values
in the range and depends merely on the number of states within it.
 
Don said:
No, it doesn't need its own lookup table and if you think it does, it
shows you don't really understand.

Like I said last time, you just can't seem to grasp the concept.

Indeed. If I have 65536 equidistant values and map them to 256 values by
any non-linear 'adaptive' algorithm, then the result isn't 256
equidistant values (or you could dump the 'adaptive' algorithm). And if
they are not equidistant, you'll need a lookup table for the brightness
distance between the values. Anything else is clearly beyond my grasp
(and beyond what any currently used reproduction device or imaging
software would handle).
Unless you really want the image to be altered in such a way that you
get a linear histogram. It wouldn't look much like the original, but if
it pleases you...
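A minimal sketch of that point (the gamma-style curve is an arbitrary assumption of mine, standing in for 'any non-linear adaptive algorithm'):

```python
GAMMA = 2.2  # arbitrary non-linear curve, for illustration only

def adaptive(v16: int) -> int:
    """Non-linear 16->8 bit mapping: output codes are not equidistant."""
    return round(255 * (v16 / 65535) ** (1 / GAMMA))

# Lookup table giving the 16-bit brightness each 8-bit code stands for:
lut = [round(65535 * (c / 255) ** GAMMA) for c in range(256)]

# The brightness spacing between successive codes is far from constant,
# which is exactly why a decoder needs the table:
steps = [lut[c + 1] - lut[c] for c in range(255)]
assert min(steps) <= 1 and max(steps) >= 500
```

Without `lut` (or the formula that generated it), nothing downstream can know what brightness an 8-bit code of, say, 128 was meant to represent.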
I explained it quite clearly already and there's no point in repeating
it, as it doesn't seem to be getting through.

See above.

Wait, I have an idea... are you using one of these experimental
computers where you can store more than one mathematical bit in every
physical bit by using different voltage levels when writing to ram? Then
perhaps it might work. I thought these devices have all been dumped when
ram started to get cheaper.
Unfortunately it would require some equally specialized storage medium to
let the result survive the next power-down. Or a specialized file format
with, say, 16 bits per 8 'bit' value.

I wonder what the advantage would be over sticking with the original
16 bit data.

Grossibaer
 
Jens-Michael Gross said:
It's not the output, it's the INPUT you're working with.
And here we have the cause of your problems understanding the concept.
We're not discussing mapping output levels of a video card, we were
discussing converting an INPUT value from a sensor to a different value
range. And that's a big difference. Maybe too big for you.
I suggest that you read the subject line of the thread!

These are OUTPUT levels from the scanner - the input is a photon flux
level which is generally measured in the range of many tens of billions
of photons per second per square metre per steradian, and cannot be
described in either 8 or 16-bit integers.
Oh, well, with 16 bits per color channel, you have actually NEVER SEEN
the original 48 bit scan on your display. Or does your monitor do
48/64 bit color depth? And when you calibrated your monitor to give the
best results for your conversion, it's obvious that your conversion
gives the best results.

Exactly! And this is the very reason that the discussion diverted to
comparing 8 and 4 bit ranges such that the difference is visible!
Neither was I the one who threw the average or total error into the
discussion, nor did I calculate it in a mathematical way in any of my
posts (I was just using the - correct - values someone else posted
before).

Irrelevant, you chose to use that calculation of error as part of your
argument. In doing so you implicitly supported it and all of its
derivation. Blaming the calculation on someone else merely undermines
your entire argument since you now have no evidence to support your
position.
I rather explained more than once what happens with the
brightness levels one way and the other. But it seems this slipped your
attention (along with other things).
And a histogram is nothing other than a visible representation of a
mathematical result.

No, it is a visible representation of the luminance output which is
*independent* of the data range used to produce those luminance levels.
Oh, you used your calculator to count to 9? Well, then... I'm speechless
about your mathematical skills.
You have already demonstrated that your brain is incapable of
calculating, consistently.
 
Unfortunately, Don, very little does seem to be getting through. Like
most kids, he still hasn't learned that he doesn't know it all.

Yeah. Just ignore him. I mean, ignore "it"... ;o)

Don.
 
Jens-Michael Gross said:
Get a 4 bit grayscale image and save it as true color image. Then look
at the color values.
If Photoshop used *16 for this job, you would never see a 255 color
value in the resulting image. If you do, it's definitely not using /16
and *16 for the conversion (or it uses different algorithms for both
directions).
Photoshop (v7.1) doesn't actually support 4-bit images internally (the
first operation it makes on opening a 4-bit image is to convert it to
8-bit indexed colour mode), so the data it assigns to the nominal levels
after that conversion is irrelevant. Furthermore, as Wayne pointed out
in an earlier post, you are now converting from a 4-bit image luminance
descriptor to an 8-bit descriptor - completely the opposite function of
that being addressed in the subject line of the thread!

Photoshop will, however, convert an 8-bit full range image to any number
of levels you choose, using the Posterise function, but retain the data
in 8-bit format. So, create an 8-bit greyscale image 256 pixels wide,
linearly ramping from 0 to 255 across the image. Do not rely on
the Photoshop gradient tool for this, which introduces a dither to the
data, but, if necessary, create a file directly with a hex editor to be
sure that the numbers are a precise linear ramp with each level
represented by a single pixel width. Select the
Image|Adjustment|Posterise and input 16 as the number of discrete levels
required (ie. a 4-bit range). Please explain why Photoshop then
produces a 16 level ramp with equal widths for all of the 16 levels.
From the original 256 equispaced levels, the lower 16 (neither 8 nor 9!)
are converted to black, the upper 16 (neither 8 nor 9!) are converted to
white, and every level in between is sourced from 16 levels (not 17!) in
the original 8-bit range. In short, in determining which levels of the
0..255 range are being mapped to the 16 level range Photoshop *is*
simply truncating the data.
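The band widths described above can be simulated directly (a sketch of the arithmetic, not Photoshop's actual code):

```python
from collections import Counter

# Truncation to 16 levels: every output level is sourced from exactly
# 16 of the 256 input levels - matching the equal-width bands observed.
bands = Counter(v >> 4 for v in range(256))
assert set(bands.values()) == {16}

# The +8/17 rounding method would instead give the end levels only
# 9 source values each, and 17 each for the levels in between:
rounded = Counter((v + 8) // 17 for v in range(256))
assert rounded[0] == rounded[15] == 9 and rounded[7] == 17
```

Observing 16-wide bands at both ends is therefore the fingerprint of truncation; 9-wide end bands would have pointed at the +8/17 method.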

The same thing happens if you take a 16-bit greyscale image that is 256
pixels wide and create a linear ramp from 0 to 65535, then convert that
image to 8-bit greyscale. The resulting linear ramp is again equispaced,
indicating that Photoshop has implemented the conversion from 16-bit
data to 8-bit by truncation. (In spite of Chris Cox's comments earlier
in this thread!)

Similarly, Paintshop Pro, which does support 4-bit graphic images
directly, will reduce a 256 colour, 8-bit image to 16 colours (4 bits)
directly. Guess what - it does the same! (Select Colors|Decrease Color
Depth|16 colors (4-bit)|Optimised Octree - Nearest Color).

Now, just because Photoshop and Paintshop Pro do it that way does not
make it right - there have been too many bugs and errors in both of
those applications over the years of their development to rely on either
using a "correct" algorithm - however it does put the lie to your
argument that all programs implement the conversion using your preferred
method. Clearly the two most popular image processing packages on the
PC and Mac platforms use a shift right to reduce the number of bits
representing images.
 
On Wed, 30 Jun 2004 01:55:03 +0100, Kennedy McEwen wrote:

[snip]
Now, just because Photoshop and Paintshop Pro do it that way does not
make it right - there have been too many bugs and errors in both of
those applications over the years of their development to rely on either
using a "correct" algorithm - however it does put the lie to your
argument that all programs implement the conversion using your preferred
method. Clearly the two most popular image processing packages on the
PC and Mac platforms use a shift right to reduce the number of bits
representing images.

Hi Kennedy,

Thanks for answering my question and all your input and well-researched
info in this thread.

-- Steven
 