SNIP
Let's see:
Since deriving the actual PSF from the measured LSFs has
proven difficult, you somehow estimate a hypothetical PSF,
then derive the resulting LSFs and compare them to the
actual measured LSFs, by means of least squares.
Correct.
But how do you exploit the differences and correct the
presumed PSF?
The presumed PSF is generated on almost the same grid as the
Imatest edge profile CSV output (I use -5.75 to +9.75 pixels in 0.25
steps due to differencing), and is based on 3 different standard
deviations. So for all cells in a 63x63 quarter-pixel square I
calculate the combined amplitudes of (in my case) 3 Gaussian PSF
distributions.
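That construction can be sketched roughly like this (Python/NumPy illustration, not the actual spreadsheet; the weights and sigmas shown are made-up placeholders, since the real values come out of the fit):

```python
import numpy as np

# Quarter-pixel grid matching the Imatest edge-profile range:
# -5.75 .. +9.75 pixels in 0.25-pixel steps gives 63 samples.
x = np.arange(-5.75, 9.75 + 0.125, 0.25)
assert x.size == 63
X, Y = np.meshgrid(x, x)

def gaussian_mix_psf(weights, sigmas):
    """Sum of isotropic 2-D Gaussians, one per (weight, sigma) pair.
    The weights and sigmas are the free parameters the Solver tunes."""
    psf = np.zeros_like(X)
    for w, s in zip(weights, sigmas):
        psf += w * np.exp(-(X**2 + Y**2) / (2.0 * s**2))
    return psf / psf.sum()          # normalise to unit volume

# Placeholder starting values; the fitted ones come from the Solver.
psf = gaussian_mix_psf([0.6, 0.3, 0.1], [0.5, 1.0, 2.0])
```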
I then use Excel's Solver add-in to minimize the squared error between
the LSF (the 1-dimensional integral of the presumed PSF) and the 1st
difference of the edge profile. That doesn't take very long to
calculate, on the order of just a couple of seconds depending on
processing power.
In fact, I've been fiddling with the spreadsheet and have come up with
an even more accurate variation on the same theme: I take the integral
of the LSF and minimize the error against the original edge profile
(that accommodates irregular/noisy edge profiles better).
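As a rough sketch of that forward model (PSF -> LSF -> edge profile) and the error being minimized, with NumPy standing in for the spreadsheet and assuming an edge normalised to 0..1:

```python
import numpy as np

def model_edge(psf):
    """Forward model mirroring the fit: integrate the 2-D PSF across
    one axis to get the LSF, then cumulate the LSF to get the edge
    profile (ESF), normalised to end at 1."""
    lsf = psf.sum(axis=0)        # 1-D integral of the 2-D PSF
    esf = np.cumsum(lsf)
    return esf / esf[-1]

def sse(psf, measured_esf):
    """Squared error that the Solver minimises; comparing in the
    integrated (ESF) domain is less noise-sensitive than comparing
    LSF against the differentiated edge profile."""
    return np.sum((model_edge(psf) - measured_esf) ** 2)
```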
Now I need to tidy up the spreadsheet a bit, to avoid whatever little
human intervention is still needed after copy/paste of the Imatest
data.
Currently the only human intervention needed, besides starting the
Solver, is setting the High-Pass filter kernel size, because that
is also calculated (an experimental feature that may need some more
work), but once set it can be saved and forgotten.
<http://www.reindeergraphics.com/free.shtml#customfilter> allows
setting a 7x7-pixel convolution kernel, which is better than
Photoshop's 5x5, and it allows floating-point input (copy from the
spreadsheet, paste in e.g. a Notepad .txt file).
I'm looking at an iterative approach.
A spatial-domain iterative convolution with 2 different kernels is
performed on the source image. At each pass, an error
indicator is evaluated. When the difference between 2
consecutive passes is low enough, the algorithm stops.
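A minimal 1-D sketch of that loop (with hypothetical details filled in, since the post doesn't specify them: zero-padded `numpy.convolve` and a max-difference error indicator):

```python
import numpy as np

def iterative_filter(signal, k1, k2, tol=1e-6, max_iter=100):
    """Convolve each pass with two kernels; stop when two consecutive
    passes differ by less than tol (the 'error indicator')."""
    prev = signal.astype(float)
    for i in range(max_iter):
        cur = np.convolve(np.convolve(prev, k1, mode='same'),
                          k2, mode='same')
        if np.abs(cur - prev).max() < tol:   # compare consecutive passes
            return cur, i + 1
        prev = cur
    return prev, max_iter                    # did not converge in time
```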
Sounds a bit like <http://www.ra-dec.de/bv/filter-la.html>, where an
intermediate image is convolved (? the author is not clear) with a
convolution kernel and compared to the original. From the difference
between them a new intermediate image is calculated, and so on for
several iterations.
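That description resembles a Van Cittert-style iteration: blur the current estimate, compare with the observed image, and feed the difference back in. A 1-D sketch (my reading of the scheme, not necessarily what the linked page implements):

```python
import numpy as np

def van_cittert(g, h, beta=1.0, iters=20):
    """Van Cittert-style deconvolution: f <- f + beta*(g - h*f),
    starting from the observed signal g. 1-D for brevity."""
    f = g.astype(float).copy()
    for _ in range(iters):
        residual = g - np.convolve(f, h, mode='same')
        f = f + beta * residual      # correct by the remaining mismatch
    return f
```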
But this way, you won't exploit the benefits of non-Gaussian
PSFs, while with true deconvolution you may use non-symmetrical
kernels, multiple kernels, and so on. It should also help (by
using non-symmetrical kernels) to reduce the effects of motion blur.
Right? Or am I missing something?
You are right, but it depends on the goal one sets. My first goal was
to reduce/compensate for the effect of the lens+AA-filter in DSLRs,
and lens+film+scanner MTF losses (AKA capture sharpening). A benefit
of Gaussians is that they are separable, i.e. the rows and columns can
be convolved in two passes with a much smaller FIR filter. Also, the
combination of several blur sources is often well approximated by a
Gaussian.
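The separability point is easy to verify: a 2-D Gaussian kernel is the outer product of two 1-D Gaussians, so an NxN convolution reduces to two length-N passes (2N multiplies per pixel instead of N*N):

```python
import numpy as np

# 7-tap example: build a 1-D Gaussian and show that its outer product
# reproduces the full 7x7 2-D Gaussian kernel.
x = np.arange(-3, 4)
g1 = np.exp(-x**2 / (2 * 1.2**2))
g1 /= g1.sum()                       # normalised 1-D pass
g2d = np.outer(g1, g1)               # equivalent 7x7 kernel
assert np.allclose(g2d, g1[:, None] * g1[None, :])
```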
There is no fundamental problem in generating non-Gaussian PSFs; it
only complicates the calculations, and applying them requires very
large kernels to accommodate their shape. The same goes for
HP filters: they can be any shape.
I'm also looking into adding de-focus correction, whose PSF
resembles a cylinder more than a Gaussian.
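Such a cylinder ("pillbox") PSF could be generated along these lines (a sketch only; real defocus PSFs also show diffraction ringing, which this ignores):

```python
import numpy as np

def pillbox_psf(radius, size=15):
    """Uniform disc of the given radius (in pixels), zero outside,
    normalised to unit volume -- the idealised defocus PSF."""
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    psf = (X**2 + Y**2 <= radius**2).astype(float)
    return psf / psf.sum()
```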
SNIP
The email address I use here is suffocated by an enormous
amount of spam, unfortunately.
But if you swap the domain with "libero dot it" (same
username), the resulting address is valid.
Done. Let's discuss the spreadsheet details by email until we have
something worth sharing with a larger group or with Norman Koren.
Bart