Don said:
On Fri, 30 Sep 2005 18:36:39 +0200, "Lorenzo J. Lucchini"
[snip]
Wouldn't you think an image would get to have a much higher bit depth
than 8 bit if you multi-scan it 200 times, even if each of the scans is
made at 8-bit?
Not really, especially not for flatbeds because of the stepper motor
inaccuracies. You will just blur the image more. Not to mention it
will take "forever" to acquire all those 100s of samples. And all
along you have a ready and (compared to 200 8-bit scans) quick
solution, i.e. 16-bit!
Ok, you're probably right here for "real" images.
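For what it's worth, the usual back-of-envelope behind this trade-off:
if the noise is independent between passes and actually dithers the
quantization, averaging N samples cuts the noise by sqrt(N), which is
commonly quoted as roughly 0.5*log2(N) extra bits of effective depth.
A rough sketch, not a rigorous claim:

import math

def extra_bits(n_passes: int) -> float:
    # Rule of thumb only: assumes independent, quantization-dithering
    # noise and perfect registration between passes -- which, as you
    # say, a flatbed's stepper motor won't give you.
    return 0.5 * math.log2(n_passes)

print(extra_bits(200))  # ~3.8 bits, i.e. 8-bit scans -> ~11.8 effective bits

So even in the ideal case 200 passes wouldn't reach true 16-bit, quite
apart from the blur.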
But this doesn't apply to the slanted edge: you aren't *really* taking
200 scans, it's just that every scan line "counts as a sampling pass" in
reconstructing the ESF (edge spread function).
The "16-bit quick solution" doesn't change much for scanning a slanted
edge, as you have to do the oversampling anyway.
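The binning idea, roughly, with hypothetical names -- not my actual
code:

import numpy as np

def supersampled_esf(img, slope, intercept, oversample=4):
    # Each scan line crosses the slanted edge at a slightly different
    # sub-pixel phase, so binning pixels by their distance to the edge
    # line x = slope*y + intercept yields an oversampled ESF "for free".
    h, w = img.shape
    nbins = w * oversample
    acc = np.zeros(nbins)
    cnt = np.zeros(nbins)
    for y in range(h):
        edge_x = slope * y + intercept        # edge position on this row
        for x in range(w):
            b = int(round((x - edge_x) * oversample)) + nbins // 2
            if 0 <= b < nbins:
                acc[b] += img[y, x]
                cnt[b] += 1
    return acc / np.maximum(cnt, 1)

That's where the extra precision comes from -- the geometry, not the
pixel depth.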
It might be that scanning the edge at 16 bit still gives better results
than scanning it at 8 bit. Let's find out...
No, let's not find out: SFRWin doesn't accept 16 bit edges.
(Which might be a clue that they're not necessary, anyway)
Don't know about Imatest.
By the way, I see that the levels (or perhaps the gamma) are different
if I scan at 16-bit and if I scan at 8-bit, with otherwise the same
settings. Actually, the 16-bit scan clips. Wonderful, another bug in my
fine scanner driver!
The point I'm making is why try to "fix" 8-bit when there is 16-bit
readily available? Now, if your scanner did *not* have 16-bit then,
yes, trying to get 8-bit as accurate as possible makes sense.
But having said that, in my life I've done even sillier things (much,
*much*, sillier things!) simply because they were fun. And if that's
the goal, then just ignore everything I say and have fun! :-)
But, no, one goal is to make a program alternative to Imatest (its SFR
function, at least) and SFRWin, and the other goal is to reconstruct a
PSF to sharpen images.
The goal is not to get 16-bit from 8-bit... aren't you just getting
confused with other threads or parts of this thread?
Yes, currently I'm scanning things at 8-bit. Yes, I'm scanning my
slanted edges at 8-bit, too.
But my program works in floating point, it can load both 8-bit or 16-bit
edge images (though the code for loading 16-bit PPM isn't tested right
now); it's just that I'm using it with 8-bit images right now.
It's not functional enough to make a difference at the moment, in any case!
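For the record, the part that's easy to get wrong in that untested
path: binary P6 with maxval above 255 stores big-endian 16-bit samples
per the Netpbm spec. A minimal reader sketch, not my actual code:

import numpy as np

def read_ppm(path):
    # Minimal binary PPM (P6) reader.  maxval > 255 means each sample
    # is a big-endian 16-bit word per the Netpbm spec.
    with open(path, "rb") as f:
        data = f.read()
    assert data[:2] == b"P6"
    tokens, i = [], 2
    while len(tokens) < 3:                    # width, height, maxval
        while data[i:i+1].isspace():
            i += 1
        if data[i:i+1] == b"#":               # skip comment lines
            i = data.index(b"\n", i) + 1
            continue
        j = i
        while not data[j:j+1].isspace():
            j += 1
        tokens.append(int(data[i:j]))
        i = j
    i += 1                                    # single whitespace before raster
    w, h, maxval = tokens
    dtype = ">u2" if maxval > 255 else "u1"
    return np.frombuffer(data, dtype=dtype, offset=i,
                         count=w * h * 3).reshape(h, w, 3)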
Yes, an 8-bit image is inadequate, which results in inadequate metrics.
Let's agree on terms. I took the "metrics" as meaning the slanted edge
test results, and the "image" is just the image, that is the picture to
be sharpened (or whatever).
It will not improve guessing because we only have 8-bit eyes (some say
even only 6-bit) so you will not even be able to perceive or see the
extra color gradation available in 16-bit. But *mathematics* will!
No, I was saying that *with an 8-bit image* your "guess" couldn't be
better than my measurements -- the best you can achieve is to make it
*as good as* the measurements. Visible or not to the eye, there are
only 8 bpc in the image.
And then, if our eyes really have the equivalent of 6 bits, you wouldn't
be able to guess as well as you measure even with a poor 8-bit image!
[snip]
Haloes are precisely one of the things I wish to avoid with the PSF
method, which should be able to compute optimal sharpening that
*doesn't* cause haloes.
By definition (because of increased contrast) it will cause haloes.
Whether you see them or not is another matter. If you zoom in and
compare to the original you will see them.
No wait -- my understanding is that, by definition, "optimal sharpening"
is the highest amount you can apply *without* causing haloes.
Perhaps unsharp mask in particular always causes them, I don't know, but
there isn't only unsharp mask around.
Haloes show quite clearly on the ESF graph, and I assure you that I
*can* apply some amount of sharpening that doesn't cause "hills" in the
ESF graph.
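Concretely, by "hills" I mean overshoot past the edge's two plateaus.
A trivial check on the ESF -- a sketch only, with assumed names:

import numpy as np

def has_halo(esf, tol=0.01):
    # Overshoot above the bright plateau, or undershoot below the dark
    # one, is exactly the "hill" you see on the graph -- a halo in the
    # image.  Assumes a dark-to-light edge with flat tails.
    dark = esf[:len(esf) // 8].mean()
    light = esf[-len(esf) // 8:].mean()
    span = light - dark
    return esf.max() > light + tol * span or esf.min() < dark - tol * span

Sharpening can be increased right up to the point where a check like
this starts firing.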
Yes, that's exactly what it boils down to! You have to balance one
against the other. Which means, back to "guessing" what looks better.
As far as noise is concerned, yes, this is mostly true.
But noise and haloes are two separate issues!
Anyway, what would seem a reasonable "balance" to me is this: apply the
best sharpening you can that does not cause noise to go higher than the
noise a non-staggered-CCD-array would have.
This is what I'd call the "right" sharpening for Epson scanners: make it
as sharp as the linear CCD scanners, making noise go no higher than a
linear CCD scanner's (since you know a linear CCD of the same size as my
staggered CCD has more noise in general, as the sensors are smaller).
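One standard way to formalize that balance -- for illustration only,
I'm not saying my program will do exactly this -- is Wiener-style
deconvolution, where a single constant caps how much the inverse
filter may amplify noise:

import numpy as np

def wiener_sharpen(img, psf, k=0.01):
    # Deconvolve by H* / (|H|^2 + k) instead of 1/H: frequencies the
    # PSF crushed are boosted only up to a limit set by k, trading
    # sharpness against noise amplification.
    pad = np.zeros_like(img, dtype=float)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # center at origin
    H = np.fft.fft2(pad)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * W))

Raising k until the output noise matches a linear-CCD scan would be
one way to pin down the "right" amount.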
I'll try, just for laughs (or cries).
But even after measuring what's going on with a ruler, I'm still afraid
there is very little that can be done.
Possibly, one could scan a ruler next to the film, and use some program
to "reshape" every color channel based on the positions of the ruler
ticks... but, I dunno, I have a feeling this is only going to work in
theory.
It will work in practice too if you have guides along both axes. The
trouble is that's very clumsy and time consuming.
If you do decide to do that I would create a movable frame so you
have guides on all 4 sides. That's because the whole assembly wiggles
as it travels so the distortion may not be the same at opposite edges.
Also, the misalignment is not uniform but changes because the stepper
motor sometimes goes faster and sometimes slower! So you will not be
able to just change the height/width of the image and have perfect
reproduction. You'll actually have to transform the image. Which means
superimposing a grid... Which means figuring out the size of that grid
i.e. determine the variance of stepper motor speed change... Argh!!
Yes, this is precisely what I meant with "is only going to work in
theory". Remember also that I was talking in the context of color
aberrations, which means the process would have to be repeated *three
times* separately for each channel!
It'd take ages of processing time... and, also, what kind of
super-ruler should we get? Any common ruler just isn't going to be good
enough: the ticks will be too thick, non-uniformly spaced and unsharp;
and the transparent plastic the ruler is made of will, itself, cause
color aberrations.
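If someone were mad enough to try anyway, the per-channel "reshaping"
would amount to a 1-D resampling along the motor axis -- a hypothetical
sketch, with made-up names:

import numpy as np

def remap_axis(channel, measured_ticks, nominal_ticks):
    # measured_ticks: row positions where the ruler ticks landed in
    # the scan; nominal_ticks: where they should have landed.
    # Interpolate a warp for every row and resample each column
    # accordingly.  Repeat per color channel for the color aberrations.
    rows = np.arange(channel.shape[0], dtype=float)
    src = np.interp(rows, nominal_ticks, measured_ticks)
    out = np.empty_like(channel, dtype=float)
    for c in range(channel.shape[1]):
        out[:, c] = np.interp(src, rows, channel[:, c].astype(float))
    return out

...and you can see why I suspect it only works in theory.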
Of course, the key question is, is it worth it? In my case, in the
end, I decided it wasn't. But it still bugs me! ;o)
I know. By the way, changing slightly the topic, what about two-pass
scanning and rotating the slide/film 90 degrees between the two passes?
I mean, we know the stepper motor axis has worse resolution than the CCD
axis. So, perhaps multi-pass scanning would work best if we let the CCD
axis get a horizontal *and* a vertical view of the image.
Of course, you'd still need to sub-pixel align and all that hassle, but
perhaps the results could be better than the "usual" multi-pass scanning.
Clearly, there is a disadvantage in that you'd have to physically rotate
your slides or film between passes...
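The merge itself would be the easy part once the passes are registered
-- a sketch, with sub-pixel alignment assumed already done elsewhere:

import numpy as np

def combine_rotated(scan_a, scan_b_rotated):
    # scan_b was scanned with the film turned 90 degrees, so the sharp
    # CCD axis has covered each image direction once; rotate it back
    # and average.  Registration must happen before this step.
    scan_b = np.rot90(scan_b_rotated, k=-1)
    return (scan_a.astype(float) + scan_b.astype(float)) / 2.0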
I would *not* align them because that would change the values!!! And
those changes are bound to be much more than what you're measuring!
Hm? I don't follow you. When you have got the ESF, you just *have* your
values. You can then move them around at your heart's will, and you
won't lose anything. Which implies that you can easily move the three
ESFs so that they're all aligned (i.e. the "edge center" is found in the
same place), before taking any kind of average.
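To be explicit, with hypothetical names: shift each channel's ESF so
its 50% crossing sits at x = 0, then average --

import numpy as np

def align_and_average(esfs, x):
    # Each ESF is a continuous function sampled on grid x, so shifting
    # it just means re-evaluating it at offset positions -- nothing
    # from the scanner data is altered.  Assumes dark-to-light,
    # roughly monotonic edges.
    aligned = []
    for esf in esfs:
        norm = (esf - esf.min()) / (esf.max() - esf.min())
        center = np.interp(0.5, norm, x)      # x of the 50% crossing
        aligned.append(np.interp(x, x - center, esf))
    return np.mean(aligned, axis=0)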
In principle, you should never do anything to the data coming from the
scanner if the goal is to perform measurements. That's why even gamma
is not applied but only linear data is used for calculations.
Yes, and I'm not doing anything to the data *coming from the scanner*;
just to the ESF, which is a high-precision, floating point function that
I've calculated *from* the scanner data.
It's not made of pixels: it's made of x's and y's, in double precision
floating point. I assure you that I'm already doing so much more
(necessary) evil to these functions, that shifting them around a bit
isn't going to lose anything.
I really think the best way is to simply do each channel separately
and then see what the results are. In theory, they should be pretty
equal. If you want a single number I would then just average those
three results.
Yes, in theory. In practice, my red channel has a visibly worse MTF than
the green channel, for one.
[snip]
No, the answer is not based on any one individual person. The answer
is quite clear if you read out the gray values. Whether one person can
see those grays and the other can't doesn't really matter in the
context of objective measurements.
But then why were 50% and (especially) 10% chosen as standard?
Because of some physical reason? No, because they make perceptual sense:
10% is the boundary where the average human eye stops seeing contrast.
No, there is a physical reason why those luminance percentages were
chosen. It's to do with how our eyes are built and the sensors for
individual colors.
Did you just say "with how our eyes are built"? Now that's perceptual!
Ok, not necessarily perceptual in the sense that it has to do with our
brain, but it has to do with the observer.
MTF10 is chosen *because the average observer can't see less than 10%
contrast* (because of how his eyes are built, or whatever; it's still
the observer, not the data).
I mean, if you're measuring how fast a car is going, are you going to
change the scale because of how you perceive speed or are you going to
ignore your subjective perception and just measure the speed?
Hmm, we definitely base units and scales of measurements on our
perception. We don't use light-years, we use kilometers; some weird
peoples even use "standardized" parts of their bodies (like my "average
observer", you see), such as feet, inches and funny stuff like that ;-P
Sure, we do use light-years now when we measure things that are
*outside* our normal perception.
This is the same thing. You measure the amounts of gray the scanner
can't resolve. How you perceive this gray is totally irrelevant to
the measurement.
But the problem is that there is *no* amount of gray the scanner can't
resolve! It can resolve everything up to half Nyquist. I mean *my*
scanner. It's just that it resolves frequencies near Nyquist with such a
low contrast that they're hardly distinguishable.
Where do you draw the line? Just how uniformly gray must your test
pattern be before you say "ok, this is the point after which my
scanner has no useful resolution"?
I don't see a choice other than the perceptual choice. Which also has
the advantage of being a fairly standard choice.
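Reading the number off the curve is mechanical either way; only the
choice of the 10% level is perceptual. A sketch, assumed names:

import numpy as np

def mtf_at(freqs, mtf, level=0.10):
    # Frequency where contrast first drops to `level` (MTF10 for 0.10,
    # MTF50 for 0.50), linearly interpolated at the first crossing.
    below = np.nonzero(mtf <= level)[0]
    if len(below) == 0:
        return float(freqs[-1])               # never drops that low
    i = below[0]
    if i == 0:
        return float(freqs[0])
    return float(np.interp(level, [mtf[i], mtf[i - 1]],
                           [freqs[i], freqs[i - 1]]))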
[snip]
They may be testing a different thing, though. BTW, why don't you just
use Imatest?
Because they're asking money for it. I've had my trial runs, finished
them up, and I'm now left with SFRWin and no intention to buy Imatest
(not that it's a bad program, it's just that I don't buy much of
anything in general).
I'm not sure I would call myself a "free software advocate", but I
definitely do like free software. And certainly the fact that my program
might be useful to other people gives me more motivation to write it,
than if it were only useful to myself.
Not necessarily altruism, mind you, just seeing a lot of downloads of a
program I've written would probably make me feel a star... hey, we're
human.
Anyway, as I said at the outset, I'm just kibitzing here and threw in
that luminance note because it seemed contradictory to the task.
But instead of wasting time on replying, carry on with programming! :-)
I can't keep programming all day anyway! -- well, I did that
yesterday, and the result was one of those headaches you don't quickly
forget.
Anyway, have you tried out ALE yet? I don't think it can re-align
*single* rows or columns in an image, but it does perform a lot of
geometry transformations while trying to align images. And it works with
16 bit images and all, which you were looking for, weren't you? It's
just so terribly slow.
by LjL
(e-mail address removed)