Bart van der Wolf said:
That's what I meant by 'better'.
A somewhat misleading term - akin to claiming better service by reducing
the completion time for jobs by not doing them as well. ;-)
If you follow that logic then you can downsample the image to 1x1 pixels
and get a perfect result because it is impossible to get closer to the
limit for that resolution!
Deconvolving at the original size will only emphasize the lack of
alignment.
Not at all. As you stated, when a sufficient number of randomly
misaligned images are averaged, the effect is the same as a gaussian
blur. Deconvolution with the appropriate gaussian will, in theory,
perfectly recover the original image free from any blur. In practice,
unless a huge number of frames is averaged in this way, the noise
introduced by the deconvolution process will exceed the noise
reduction gained from the averaging in the first place, so you don't
actually win anything - which is what I understood by your comment
"up to a point". However, deconvolution at full resolution certainly
doesn't emphasise the lack of alignment; it eliminates it. In fact,
there is some argument for applying the process after up-sampling, so
that the deconvolution filter for the sub-pixel misalignment can be
more accurately estimated.
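To make that concrete, here is a minimal sketch (Python with
numpy/scipy; the test target, jitter sigma and noise level are all
invented for the example) of randomly misaligned frames averaging out
to a roughly gaussian blur, followed by a crudely regularised
gaussian deconvolution:

# Toy demonstration: averaging randomly misaligned frames behaves like
# a gaussian blur, and deconvolving with that gaussian trades blur for
# amplified noise. Everything here is an assumed example.
import numpy as np
from scipy.ndimage import shift, gaussian_filter

rng = np.random.default_rng(0)
truth = np.zeros((128, 128))
truth[60:68, 60:68] = 1.0                        # simple test target

sigma_reg = 0.8                                  # registration jitter, pixels
frames = []
for _ in range(16):                              # 16 "passes"
    dy, dx = rng.normal(0.0, sigma_reg, size=2)
    frame = shift(truth, (dy, dx), order=3)      # sub-pixel misalignment
    frame += rng.normal(0.0, 0.05, truth.shape)  # per-pass noise
    frames.append(frame)
average = np.mean(frames, axis=0)                # blur roughly gaussian(sigma_reg)

# Naive frequency-domain division by the matching gaussian. The small
# floor 'eps' is a crude guard against the near-zero values in the
# gaussian's transfer function, which are what amplify the noise.
psf = np.zeros_like(truth)
psf[0, 0] = 1.0
psf = gaussian_filter(psf, sigma_reg, mode='wrap')
H = np.fft.fft2(psf)
eps = 1e-3
restored = np.real(np.fft.ifft2(np.fft.fft2(average) / (H + eps)))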
With the additional loss of accuracy introduced by down-sampling (note
the "up to a point"), there is a larger chance of restoring a
'meaningful' signal (with less registration error).
Only because you have reduced the sampling density to the point where
the error is insignificant in the first place!
No, that is not what I meant, not in absolute terms anyway, because
resolution is sacrificed. The improvement is in a better-looking
image - better in the sense of less obvious registration errors.
The improvement *is* that the registration errors are less significant
at the lower resolution because the information that describes that
misregistration is thrown away in the downsampling.
As a side note, there are solutions where several dithered samples can
deliver higher resolution than the individual undersampled images can
provide. You may be familiar with a process called "drizzling", used in
astronomy (http://xxx.lanl.gov/abs/astro-ph/9808087).
After a brief overview of it, I am surprised it managed to get past
peer review! The technique described appears to be no more than a
variation of a very common technique, known as microscan, which has
been practised in the imaging industry for many years and was
actually implemented on the HST that they reference.
In many systems small but controlled misregistrations are deliberately
introduced into the subsampled image so that a supersampled image can be
composed from a sequence of frames. In other systems, the
misregistration is permitted to occur randomly but is measured, and thus
the supersampled composite can be produced. Production of the
supersampled image in such cases has used the variable reconstruction
pixel in every case I have ever seen where the image displacement is
uncontrolled.
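For concreteness, a minimal sketch (Python/numpy, with an invented
scene and a deliberate 2x2 pattern of half-pixel offsets) of the
controlled-misregistration case, interleaving four undersampled
frames into a supersampled composite:

# Toy microscan: four frames, each undersampled 2x and offset by half
# a detector pixel, are interleaved onto a grid of twice the density.
# The scene and offsets are assumptions made up for this sketch.
import numpy as np

def simulate_frame(scene_hi, dy, dx):
    # Sample the high-res scene with a detector offset by (dy, dx)
    # high-res pixels; each detector pixel averages a 2x2 block.
    shifted = np.roll(scene_hi, (-dy, -dx), axis=(0, 1))
    h, w = shifted.shape
    return shifted.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(1)
scene = rng.random((64, 64))                # stand-in for the real scene

offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]  # known half-pixel displacements
frames = [simulate_frame(scene, dy, dx) for dy, dx in offsets]

# Because the displacements are known (or measured), each frame can be
# slotted into its own phase of the finer output grid.
composite = np.zeros_like(scene)
for (dy, dx), frame in zip(offsets, frames):
    composite[dy::2, dx::2] = frame

# 'composite' now has twice the sampling density of any single frame -
# the supersampled image referred to above.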
Either way, neither that paper nor the process it describes is
relevant to what is going on here with misregistered multisampling,
because in the case of multipass multisampling scans there is no
method of determining the misregistration *prior* to the averaging
being computed.
Yes, but again after losing absolute resolution in the downsampling
process, of which I'm fully aware.
Not necessarily. You don't *need* to downsample - all that is doing is
throwing resolution information away for the benefit of SNR. In fact,
if deconvolution worked, and it would if you had enough passes, then you
would actually benefit from upsampling for the reasons provided above.
There are also more advanced methods than USM (which tends to enhance
noise).
There are lots, and they would all work just as well as, and in some
cases better than, deconvolving with a gaussian. Deconvolution is an
exceptionally noisy process, primarily due to the inevitable presence
of zeros, or near zeros, in the filter matrix.
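To put a number on that: once the blur is taken into the frequency
domain for the deconvolution, those near-zeros turn into enormous
gains, and that is where the noise explodes; a regularised
(Wiener-style) divisor keeps the gain bounded. A small numerical
sketch (Python/numpy; the kernel width and regularisation constant
are arbitrary choices for the example):

import numpy as np

n, sigma = 256, 2.0
x = np.arange(n) - n // 2
kernel = np.exp(-0.5 * (x / sigma) ** 2)    # 1-D gaussian blur kernel
kernel /= kernel.sum()

H = np.fft.fft(np.fft.ifftshift(kernel))    # transfer function of the blur
print(np.abs(H).min())                      # tiny - of order 1e-9 here

naive_gain = 1.0 / np.maximum(np.abs(H), 1e-300)
print(naive_gain.max())                     # huge - noise there is multiplied by this

K = 1e-3                                    # assumed regularisation constant
wiener_gain = np.abs(H) / (np.abs(H) ** 2 + K)
print(wiener_gain.max())                    # bounded at about 1/(2*sqrt(K))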
Lower spatial frequency limit, of course.
Time for a little experiment.
As with sharpening filters, there are many downsampling algorithms,
each producing a different MTF WITHIN THE PASS BAND OF THE
DOWNSAMPLED OUTPUT! Of course, some will have the effect of boosting
MTF within that pass band compared to the original - even simple
filters can do that. NONE, and certainly none of the examples you
have provided, can retain, let alone enhance, data which exceeds the
pass band of the output. Consequently, if useful information lies in
that region - and it should, because otherwise why bother
multisampling a high-resolution image in the first place? - it is
lost. Nothing can recover that. Yes, you will be closer to the limit
of what can practically be contained in the downsampled resolution,
but so what? The whole point of multisampling is to increase the
signal-to-noise ratio without losing resolution. If you are prepared
to downsample, then use a low-pass filtering algorithm on a single
pass and get the benefit of noise reduction from that. Halve the
image size and double the SNR with the appropriate algorithm.
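As a quick sanity check on that last sentence, assuming uncorrelated
noise and a plain 2x2 box average (a better low-pass kernel trades
resolution and noise a little differently):

import numpy as np

rng = np.random.default_rng(2)
noise = rng.normal(0.0, 1.0, (512, 512))   # unit-sigma noise, no signal

# 2x2 box average: each output pixel is the mean of four uncorrelated
# samples, so the noise standard deviation falls by sqrt(4) = 2.
half = noise.reshape(256, 2, 256, 2).mean(axis=(1, 3))
print(noise.std(), half.std())             # approximately 1.0 and 0.5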
This is all under the assumption that we don't want to enlarge the
image, and a 50% reduction will still allow a decent output size.
But what if it won't? I don't think we have stated anywhere that we
are working to such assumptions. If we were, then there are far
easier ways of getting the required results without multisampling.
For example, downsampling a single-pass 4000ppi scan to 800x600
produces an image, perfectly suitable for web display, with lower
noise than 16x multisampling can achieve if the full resolution is
retained. In fact, you would need about 40x multisampling to achieve
the same SNR in the full-resolution image.
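The arithmetic behind those figures, assuming a nominal 24x36 mm
frame and a plain averaging downsample - the frame size and filter
are my assumptions, not something stated above:

# Rough numbers only; a 4000ppi scan of a 24x36 mm frame averaged
# down to 800x600 is assumed for the sake of the calculation.
ppi = 4000
frame_h_in, frame_w_in = 24 / 25.4, 36 / 25.4             # ~0.94 x 1.42 inch
samples_h, samples_w = frame_h_in * ppi, frame_w_in * ppi # ~3780 x 5670

pixels_per_output = (samples_h / 600) * (samples_w / 800) # ~45 samples averaged
snr_gain_downsample = pixels_per_output ** 0.5            # ~6.7x
snr_gain_16x = 16 ** 0.5                                  # 4x for 16 passes
passes_to_match = snr_gain_downsample ** 2                # ~45, i.e. "about 40x"
print(round(pixels_per_output), round(snr_gain_downsample, 1),
      snr_gain_16x, round(passes_to_match))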
Scanning at 5400 ppi and downsampling to 50% will still allow 8.5x12.8
inch (21.6x32.4 cm) output at 300ppi without interpolation.
Bart, have you read the title of the thread? Perhaps you have
information about a soon-to-be-re-released LS-4000 that samples at
5400ppi? You may have overstepped the terms of your NDA with Nikon if
you have such information. ;-)