HvdV said:
Hi All,
This post is getting really off topic, sorry about that, but I felt it
was still relevant to the film scanner topic, hope you agree!
It is not necessary to do the integration by resampling, the
integration follows from the known shape of the measurement object and
other constraints which might be imposed, like the known bandwidth of
the optics.
The sloping edge isn't doing integration! You are measuring the PSF
with finer precision than the sampling density permits on its own by
moving the stimulus relative to the samples. Knowing the shape of the
stimulus, whether an edge or a line, makes no difference to this.
Without oversampling all you can achieve is a measurement of the PSF
with a precision of no better than a single pixel pitch - and the
unknown position of the stimulus within that pitch means that you have
no knowledge of where the aliased components lie or how they affect your
single datapoint result.
What you do with the slanted edge also fits in this framework: without
knowing it can be represented by a step function at a certain angle you
cannot decode the aliased frequencies in the data.
I am beginning to wonder if you have understood what the slanted edge
actually achieves. Firstly, your statement is completely wrong: the
aliased frequency content is very simple to decode, precisely because
the slanted edge permits oversampling of the data. So let me explain
the process. Let's assume that the edge in question has a gradient of
1/10th - that is, for every 10 pixels down the frame, the edge moves one
pixel to the left.
Now, if you select the data from left to right, ie. across the edge, on
any line then all you get is an edge spread function sampled at the
pixel sampling rate. Because of this sampling limit, you cannot
determine the MTF of the system at frequencies above half of the
sampling frequency - the Nyquist limit. Furthermore, since you do not know the
phase of the edge relative to the samples, you cannot determine where
the aliased components of the MTF will lie, or how many times that
spectrum is "folded" into the pass band. If you are lucky, the MTF will
have reduced to zero by the sampling frequency itself, but if the fill
factor of the CCD is less than 100% that is unlikely. Recall that the
MTF of a perfectly flat response sensor is sinc(pi.a.f), where a is the
pixel width, which does not fall to its first zero until f=1/a. At 100%
fill factor, a is also the pixel pitch, and so the MTF does not reach
zero until the sampling frequency itself. At the Nyquist limit, this is
63.7%, but for practical CCDs where the pixel width is less than the
pitch, this is likely to be much higher. So, measuring the ESF directly
across an edge (or a line) is completely useless, because the result is
corrupted by uncontrolled alias components.
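The numbers above are easy to check. Here is a quick sketch (Python, my own
illustration, not part of any standard tool) of the pixel-aperture MTF quoted
above, sinc(pi.a.f):

```python
import math

def pixel_mtf(f, a):
    """MTF of an ideal pixel aperture of width a at spatial frequency f:
    |sinc(pi.a.f)| = |sin(pi*a*f) / (pi*a*f)|, as quoted above."""
    x = math.pi * a * f
    return 1.0 if x == 0 else abs(math.sin(x) / x)

a = 1.0          # 100% fill factor: pixel width equals the pixel pitch
print(round(pixel_mtf(0.5, a), 3))   # at Nyquist (half the sampling rate): 0.637
print(round(pixel_mtf(1.0, a), 3))   # first zero at the sampling frequency: 0.0
```

At 100% fill factor the response at Nyquist is 2/pi, the 63.7% figure above;
a smaller pixel width a pushes the first zero beyond the sampling frequency.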
However... turn the source data through 90 degrees!
Instead of looking at the samples across the edge, take the samples in
the axis that is almost parallel to the edge. Now, instead of a data
set which abruptly transitions from black to white, you have data which
gradually transitions - because at each new line the phase of the edge
relative to the samples has progressed by 1/10th of a pixel - the line
gradient. Consequently, the data series *down* the edge is almost the
same as would be produced if the edge were perfectly vertical and a
single pixel had sampled it while it was stepped past in increments of
one tenth of a pixel. In short, the sampling frequency at
which the ESF is measured has been increased by a factor of 10. By
taking data from well before the edge reaches a particular column to
well after it has passed, the full extent of the ESF can be determined
with 10x the resolution of the raw sampling system. Now, when you
compute the MTF from the ESF measured at this resolution (10x the
original sampling frequency of the sensor) it will have no alias
components up to 5x the original sampling frequency (ie. the Nyquist
limit of the oversampled rate). The whole system has been "geared up"
by the gradient of the edge.
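The bookkeeping behind this "gearing up" can be shown in a few lines. This is
my own minimal sketch, not anyone's production code: it uses the known 1/10
gradient to place every sample on a common axis across a synthetic hard edge,
which is equivalent to reading the data down the near-parallel axis as
described above.

```python
import numpy as np

# Synthetic target: an ideal hard edge with gradient 1/10 (it moves one
# pixel left for every 10 rows down the frame), sampled on the sensor grid.
rows, cols, grad = 100, 32, 0.1
y, x = np.mgrid[0:rows, 0:cols]
edge_pos = 16.0 - grad * y                    # sub-pixel edge position per row
image = ((x + 0.5) > edge_pos).astype(float)  # pixel centres at x + 0.5

# Use the known gradient to project every sample onto a common axis
# across the edge, binned at 1/10th of the pixel pitch:
dist = (x + 0.5) - edge_pos                   # signed distance from the edge
bins = np.round(dist * 10).astype(int)        # 10x supersampled ESF bins
esf = np.array([image[bins == b].mean()
                for b in np.unique(bins)])    # the supersampled ESF
```

Differentiating this ESF and Fourier transforming it then gives the MTF with
no alias components up to 5x the original sampling frequency, exactly as
described above.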
Unless you are really unlucky, the optical system, or optical and
electrical crosstalk in the CCD itself, will have run out of all
resolution by 5x the sampling frequency, so you no longer have to worry
about the aliased components - oversampling due to the slant has
resulted in an adequate precision of the PSF and consequently the MTF.
If it hasn't, then the MTF you end up with will show a finite and
significant level at the oversampled Nyquist limit, and re-testing with
a steeper edge - a smaller gradient, and hence a higher oversampling
factor - will resolve the problem.
As you should now see, the edge isn't the most valuable part of this
process, the slope is. That is what produces the oversampling, and that
is what eliminates the aliasing from the measurement. The edge merely
provides a flat frequency test spectrum.
This leads to my initial remark: is a step function the ideal test
object? Due to the rapidly decaying frequency content of a step
function I think it isn't.
Rapidly decaying frequency content? Look at what the slanted edge
method actually transforms: the ESF is differentiated to give the line
spread function, and the FT of an ideal step's derivative is a flat
spectrum - all frequencies are present with equal amplitude! (Limited
only by the width of the FT domain you choose to restrict your
measurement to!)
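To make the flat-spectrum point concrete (my own two-line illustration): the
derivative of an ideal step is a single impulse, and an impulse contains every
frequency at equal amplitude.

```python
import numpy as np

# The ESF of an ideal step, differentiated, is a single impulse; its DFT
# magnitude is the same at every frequency - a flat spectrum.
n = 256
step = np.zeros(n)
step[n // 2:] = 1.0                     # ideal edge
lsf = np.diff(step)                     # derivative: one impulse
spectrum = np.abs(np.fft.rfft(lsf))
print(round(spectrum.min(), 6), round(spectrum.max(), 6))  # both ~1.0: flat
```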
A thin bar would be better, but still you'd measure only a line in the
2D OTF, or a plane if you go for the full 3D OTF.
No, a thin bar has a much more rapidly decaying spectrum - the frequency
spectrum of the line is sinc(pi.a.f) where a is the thickness of the
line. An infinitely thin line has the same frequency spectrum as an
edge, but infinitely small power, making it useless as a test tool. A
line, or spot, which is similar in size to the actual PSF of the unit
under test is a reasonable compromise because that will have adequate
power to stimulate response *and* have a known frequency response which
can then be compensated for by deconvolution. This is a compromise,
since the deconvolution adds noise, and without changing the position of
the test relative to the samples, the aliased components corrupt the end
result by an unknown amount.
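The deconvolution-noise compromise is easy to quantify. A rough sketch
(assuming, for illustration, a line of thickness equal to one pixel pitch):
dividing the measured spectrum by the line's known sinc spectrum amplifies
noise wherever that spectrum is small.

```python
import numpy as np

# Compensating a finite-width line by deconvolution: divide the measured
# spectrum by the line's known sinc(pi.a.f) spectrum. The gain applied to
# noise grows as sinc falls - the compromise described above.
a = 1.0                                  # line thickness, in pixel units
f = np.linspace(0.1, 0.9, 9)             # stay below the first sinc zero at f = 1/a
line_spectrum = np.sinc(a * f)           # numpy's sinc(x) = sin(pi x)/(pi x)
noise_gain = 1.0 / line_spectrum
print(round(noise_gain[-1], 2))          # ~9.15x noise amplification at f = 0.9/a
```

So even before aliasing is considered, the thicker the line, the more the
high-frequency end of the compensated measurement is noise rather than data.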
I suspect the sole reason edges are used is that they are easy to make
at this scale. Nothing wrong with that, but it is a good idea to keep
looking for better test objects.
Well, hopefully you will now understand that this is NOT the reason for
their use. The edge, contrary to your assertion, provides a flat test
spectrum once differentiated, extending to infinity - the same as an
infinitely thin line - and, unlike the line, it carries significant
power, up to 50% of the total possible. In addition, being tilted
relative to the samples, it enables significant oversampling without
the need to move the test object relative to the samples.
As I mentioned previously, the use of edges as a reliable test method
has only recently been adopted by ISO (following extensive assessment).
Up till then they were using lines - or spots: it doesn't actually
matter which, the spot just gives all axes simultaneously. As far as I
am aware, the edge was first used in the early 1950s on TV systems as a
visual assessment of frequency response (ie. MTF) but the sloping edge
quantitative computational MTF method was invented in the mid 1980s - at
Malvern!
Sufficiently small circular or spherical objects with known diameter
will do the trick, position can be random.
Position cannot be random *UNLESS* you are only interested in the MTF
below the frequency at which the alias components fall to zero.
Unfortunately, you don't know the MTF of the system under test until you
have measured it, so you cannot tell where this criterion is met or,
more importantly, how significant the aliasing is once you have.
As I said earlier, if you are measuring a system where the MTF is
significantly less than the Nyquist limit of your sampling system then
your proposed solution works fine. Not many scanners fall into that
category and virtually no film scanners do.
Please reread the quote, it states the Nyquist rate is related to the
properties of the input function or signal.
No it doesn't - try reading it yourself! ;-)
Your quote doesn't define a Nyquist frequency or limit nor even refer to
it at all!
What it defines is the minimum sampling frequency required to
unambiguously sample a signal.
By consequence, a given sampling frequency can unambiguously sample a
signal up to a critical bandwidth - and that is the Nyquist limit of
that sampling frequency. Nothing in your quote contradicts that or is
at odds with the use of the term "Nyquist" (though it should perhaps
have a small "n") in this or any other field.
After checking some literature, it seems optics related literature
uses this form. In one reference, Numerical Recipes, the usage was
mixed between yours and this one.
It's been common usage in digital imaging since long before I started
working in the field, which is more than 25 years ago. The first usage of
the term "Nyquist Limit" appears to be in Claude Shannon's 1948 paper, so I
suspect that takes precedence over any subsequent quote you might find.
Which is how it *should* be done.
So you propose throwing out established test methodology until the
impossible is invented! Get real!
For an incoherent system it's delta_x = lambda/(2 n sin(alpha)),
alpha the half aperture angle, n the refractive index. For 500nm light,
.45 radians aperture you get ~280 nm. In ppi that is ~90,000. For a
coherent system it is half of that, close to your value. But in systems
like this there is always partial coherency. BTW a jam-jar-bottom lens
of that aperture has the same bandwidth.
You did read what I wrote in that example, I assume?
So, pray tell those reading so far, what the half angle of an f/2 cone
is?
Just for reference, f/# = 1/(2 sin(alpha)) in your parlance. On my
calculator, a 0.45 radian semi-angle is actually 1/(2 x 0.435), or
f/1.15!
Got a clue yet as to why your result is almost double what I stated?
;-)
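For anyone following along, the arithmetic is a one-liner either way (my own
throwaway check, using the f/# = 1/(2 sin(alpha)) convention just quoted):

```python
import math

def f_number(alpha):
    """f/# from the half aperture angle alpha, via f/# = 1/(2 sin(alpha))."""
    return 1.0 / (2.0 * math.sin(alpha))

print(round(f_number(0.45), 2))        # 0.45 rad half angle -> 1.15, i.e. f/1.15
print(round(math.asin(1.0 / 4.0), 3))  # half angle of an f/2 cone: 0.253 rad
```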
ok, that makes it easier to meet the bandwidth of the lens.
No it doesn't!!! The bandwidth of the lens is potentially much greater.
What it does is introduce a more significant bandwidth limit - the large
pixel size - relative to the sampling rate! (or, looking at it the other
way, increases the sampling rate relative to the pixel size that is
limiting the bandwidth).
One advantage of using a reversed iterative deconvolution technique is
that you separate the measurement space from the object (in this case
the PSF) space. That allows you to estimate the object outside the
measured region, very useful to remove blur, but also to handle
'missing' data. In order to do that in noisy conditions you need to put
in as much a priori information as possible, for example the known
geometry of the test object, but also the statistical properties of the
noise, the aperture of the lens, and so on.
But it *DOESN'T* overcome the aliasing issue!!! All you are doing by
deconvolution is compensating for the limited spectral characteristics
of the test pattern! The edge (or an infinitely thin line) does not
have a limited spectral content. In other words, what your method does
is to compensate for a poorly chosen test function!
There is a little more involved...
It always does since its Fourier transform is finite.
Having a finite FT does not mean that the PSF extends to adjacent
samples with sufficient power to measure it! If you can't measure it
then it might as well have an unlimited FT! We have already seen that
the diffraction limit of an f/2 optic requires sampling at 50,000ppi, so
a 4000ppi sensor isn't actually going to measure anything significant
beyond the noise on adjacent pixels until the f/# gets pretty high!
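The 50,000 ppi figure follows directly from the incoherent diffraction cutoff.
A quick check (my own sketch, assuming 500 nm light and cutoff = 1/(lambda x f#)):

```python
# Sampling rate needed to capture the full diffraction-limited bandwidth of
# an f/2 lens at 500 nm, assuming incoherent imaging.
wavelength_mm = 500e-6                     # 500 nm expressed in mm
f_num = 2.0
cutoff = 1.0 / (wavelength_mm * f_num)     # incoherent cutoff: 1000 cycles/mm
ppi = 2.0 * cutoff * 25.4                  # Nyquist-sample the cutoff, per inch
print(round(ppi))                          # 50800, i.e. the ~50,000 ppi figure
```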
Yes, you can do the averaging, the constraints in the deconvolution do
this for you automatically!
Only if the line is straight, not sloped, and then without any
oversampling - so you are stuffed either way.
A common misunderstanding. You *can* retrieve 'lost' spatial frequency
components *and* lost spatial structures at the same time.
Try reading Shannon's original paper - you will find your statement
inconsistent with his sampling theorem. You can *estimate* what the lost or
corrupted data may have been but you cannot recover it - other than by
an oversampling process, such as the sloping edge produces. Since we
are discussing measurements, not approximations, that is what is
necessary.
There is quite an amount of literature on the topic. For example
'Introduction to inverse problems in imaging' by Bertero & Boccacci is
quite accessible.
But I'm afraid nothing short of me producing a piece of software which
does the job will convince you.
That is probably the first thing you have said I agree with.
When you have done it, submit it to ISO as a replacement for their
clearly inadequate test methodology!
It defines the ground rules of the test you intend us to adopt in
preference to the existing ones, so it is extremely pertinent to the
argument!
BTW, it would be interesting to see film scanner MTFs in more than one
direction.
The edge approach gives horizontal and vertical MTF. There is a
possibility that the MTF along other axes is completely different from
either of these, but that probability approaches zero for any
commercial scanner or digital camera.