Don said:
I have a scanner without multiscanning, so I have to do it manually.
The problem is how do I eliminate extreme values when averaging?
Let's say 4 scans result in the following values for a point: 123, 125,
127 and 250. Obviously, 250 is a fluke and this spike should be
eliminated before averaging out the first three values.
Question: How do I do this in Photoshop 6? Layering the 4 images with
opacity of 50%, 33% & 25% would include the extreme value.
Don,
I am sure you won't be surprised by this but, "been there, done that,
doesn't work" - at least not well enough with the scanner you are using.
;-)
Multiscanning was the main reason I upgraded from the LS-20 (very
similar to your scanner) to the LS-2000. When I bought the LS-20 I
mistakenly believed that this would be a doddle to implement myself and
that it was just a marketing ploy by Nikon to encourage folk with
fewer programming skills to buy the more expensive scanners - a bit
like your
current train of thought - I was wrong then, just as you are now. :-(
Really, the only way that you can remove outliers is to write a
programme yourself - after all, only you know what your criterion for
an outlier is. Yes, you could analyse the scanner data but, let's face it,
life is just too short to consider multiscanning by more than 16x, and
that is just too small a sample to get an accurate measure of the
standard deviation of each element in the CCD of your particular
scanner. Added to which, this will change as a function of temperature
in any case, so the criteria you set now will have little relevance come
August.
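For the single-pixel case you describe, the simplest workable
criterion is a fixed tolerance around the median of the samples. A
minimal sketch in Python (numpy assumed; the 10-count tolerance is
purely illustrative, not a recommendation):

    import numpy as np

    def average_without_outliers(samples, tolerance=10):
        # Average repeated samples of one pixel, discarding anything
        # further than 'tolerance' counts from the median.  Only you
        # can decide what the tolerance should actually be.
        samples = np.asarray(samples, dtype=float)
        keep = np.abs(samples - np.median(samples)) <= tolerance
        return samples[keep].mean()

    print(average_without_outliers([123, 125, 127, 250]))   # -> 125.0

That reproduces exactly what you did by eye: 250 is thrown away and
the remaining three values are averaged.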
The weightings that you suggest are completely the wrong way to go about
it, since the noise present in the highest weighting will dominate your
results - recall that noise adds in quadrature, so the 33% image will
only contribute 43% of the first term in the noise integral, with
subsequent layers adding even less. The noise reduction of an IIR
filter (which is effectively what you are setting up in the time
domain) is given by 1/sqrt(2K-1), where K is the scaling factor of
subsequent terms in the IIR filter. Your suggested parameters of 50%,
33% and 25% approximate (very coarsely) to a K factor of sqrt(2) and,
even if continued to infinity (well, 16x integration, which reaches
the 8bpc limit) would only result in a noise reduction of some 26% -
and that is the theoretical figure, which already assumes the outliers
have been removed! Do you really think this is worth the effort of
doing it this way?
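If you want to check that 26% figure yourself, one way is to treat it
as a first-order recursive (exponential) average with input weight 1/K
and add the equivalent weights in quadrature - a quick numerical
sketch, assuming independent, equal-variance noise in each scan:

    import numpy as np

    K = np.sqrt(2.0)            # scaling factor of subsequent terms
    a = 1.0 / K                 # input weight of the recursive average

    # Expand the IIR filter into its equivalent weights: a, a(1-a), ...
    n = np.arange(200)          # 200 terms is effectively infinity here
    w = a * (1.0 - a) ** n

    # Independent noise adds in quadrature, so the residual noise is
    # sqrt(sum of squared weights) relative to a single scan.
    print(np.sqrt(np.sum(w ** 2)) / np.sum(w))   # ~0.74
    print(1.0 / np.sqrt(2.0 * K - 1.0))          # same figure, ~0.74

    # Compare with equal weighting of 16 scans: 1/sqrt(16) = 0.25.
    print(1.0 / np.sqrt(16.0))

In other words, with that K the residual noise never falls below about
74% of a single scan's, against 25% for sixteen equally weighted
scans.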
Added to which, Photoshop will only layer with 8bpc data - so you are
introducing more quantisation noise than you will ever recover through
multiscanning!
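To put a rough number on that: the quantisation noise of an ideal
converter is q/sqrt(12), where q is one step. A quick comparison,
relative to full scale (the 10 and 12 bit rows are just for
comparison, whatever your scanner actually digitises to internally):

    import numpy as np

    for bits in (8, 10, 12):
        q = 1.0 / 2 ** bits               # step size, full scale = 1.0
        print(bits, q / np.sqrt(12.0))    # RMS quantisation noise

    # Each extra bit halves the step size, so 8bpc carries 4x the
    # quantisation noise of 10-bit data and 16x that of 12-bit data.

Whether that swamps the gain depends on how the scanner's own noise
compares with a quantisation step, but it is the wrong direction to be
moving in.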
You really need to write a specific programme to implement this,
weighting each scan equally and removing any outliers against a very
coarse criterion - or use an application which has already done it
properly, such as Vuescan.
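For what it's worth, the per-pixel core of such a programme is only a
few lines. A rough numpy sketch, assuming the scans are already
perfectly registered and using an arbitrary "coarse" threshold:

    import numpy as np

    def multiscan_average(scans, threshold=20):
        # 'scans' is a list of equally-sized arrays, one per pass.
        # Samples further than 'threshold' from the per-pixel median
        # are treated as outliers and dropped; the rest are averaged
        # with equal weight.
        stack = np.stack([s.astype(np.float64) for s in scans])
        median = np.median(stack, axis=0)
        keep = np.abs(stack - median) <= threshold
        total = np.where(keep, stack, 0.0).sum(axis=0)
        count = keep.sum(axis=0)
        return total / np.maximum(count, 1)

The hard part, as below, is not this arithmetic - it is getting the
scans registered well enough for the arithmetic to be meaningful.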
After all of that, you will find that multipass multiscanning simply
does not work as well as single pass multiscanning in any case. When I
tried to implement it with a scanner such as yours, I found that the
main obstacle was scanner precision - it was not possible to align the
entire frame perfectly on each pass such that multiscanning could be
achieved without significant resolution loss. The tradeoff is to
weight the terms of the integral, as you have, thus retaining the
resolution of the first scan - but that limits the SNR improvement much
more dramatically than equal weighting's loss in spatial resolution
does! An adaptive approach is optimal, rejecting local scan
information which differs from the primary scan above a set level,
whether that is due to noise or image content - but the end result
cannot be greater than the theory predicts, and once rejections are
included, the performance gains of the theory plummet.
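A sketch of that adaptive idea, assuming (generously) that the first
pass can serve as the reference and that a single rejection level is
good enough - the names and numbers here are illustrative only:

    import numpy as np

    def adaptive_combine(primary, others, reject_level=15):
        # Add later passes into the primary scan pixel by pixel, but
        # only where they agree with it to within 'reject_level' -
        # anything further off is assumed to be misregistration or a
        # gross noise spike and is left out of the average.
        total = primary.astype(np.float64).copy()
        count = np.ones_like(total)
        for scan in others:
            scan = scan.astype(np.float64)
            ok = np.abs(scan - primary) <= reject_level
            total[ok] += scan[ok]
            count[ok] += 1
        return total / count

Every rejected pixel is one where the extra pass contributes nothing,
which is exactly where the theoretical gain goes.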
The only solution I could think of - and I discussed this with Ed
Hamrick at the time, though he had other priorities - was to implement
some form of feature extraction such that each frame could be resampled
to correlate well with subsequent frames, permitting the full noise
reduction benefit of equal weighting to be obtained. However, resampling
means a loss of resolution as well, so that has to be taken into account
too. Having gone into the mathematics of all this, it isn't too
surprising that the limits of what can be achieved, in a general type of
SNR-Resolution product assessment, are not much more than the overall
25% advantage that comes from the weighting approach. Of course the
value you get heavily depends on the stability of the scanner between
subsequent passes over the same image - in the case of the LS-20 it was
barely within 2 pixels across the entire field, although I would hope
the mechanism change on the LS-30 improved that a little. With perfect
alignment you could get close to the single pass multiscanning
performance, but I don't know of any scanners that even approach that -
even a second pass for the IR capture causes problems with ICE on some
otherwise high performance film scanners that require it!
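In case it is useful, the crudest form of that registration idea -
whole-pixel alignment by cross-correlation, with no sub-pixel work and
no rotation, which is exactly where the resolution penalty comes from -
looks something like this:

    import numpy as np

    def estimate_shift(reference, scan):
        # Estimate the (dy, dx) offset of 'scan' relative to
        # 'reference' from the peak of their circular
        # cross-correlation, computed via the FFT.
        f_ref = np.fft.fft2(reference)
        f_scan = np.fft.fft2(scan)
        corr = np.fft.ifft2(np.conj(f_ref) * f_scan)
        peak = np.array(np.unravel_index(np.argmax(np.abs(corr)),
                                         corr.shape))
        size = np.array(corr.shape)
        # Offsets beyond half the frame wrap around to negative values.
        peak[peak > size // 2] -= size[peak > size // 2]
        return peak

The offending pass can then be shifted back with np.roll(scan,
tuple(-peak), axis=(0, 1)) before averaging; anything finer than a
whole pixel means interpolation, and interpolation is where the
resolution starts to go.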
If you want to see what multiscanning can actually offer you, download a
copy of Vuescan and try the demo version. Unlike most shareware, it is
not limited in capability - it just puts a watermark on the resulting
image so you cannot use it for real, which means you can evaluate all of
its capabilities before buying it. Despite the fact that I don't use
this package myself, it is not without significance that this is not the
first time I have recommended it to you!