Plustek OptikFilm 7200


rafe bustin

Has anyone heard of or used this little beastie?

It's a 35 mm film scanner advertised as 7200 dpi
optical.

There are scan samples on the web, here:

<http://www.plustek.com/film/tornado.htm>

Most amazing is the price: under $200.
($189 at TigerDirect.)

From the scan samples, I'm ready to believe
it's comparable to most "4000 dpi" scans
I've seen.

Pretty freaking amazing, at least at this
price point. Apparently one German website
measured it at 2900 dpi using USAF test
targets.


rafe b.
http://www.terrapinphoto.com
 


OK, so the thing measures out at 2900 dpi.
No surprise. That just happens to be the
same number that Phil Lippincott assigned
to the LS-8000 on his "scannerforum" years
ago.

A real, measured 2900 dpi for $200 is
a major achievement.

I wonder how the Minolta 5400 would test out
with these same USAF 1951 test targets.
Has anyone tried that? Is there publicly
posted info on how to interpret these targets?


rafe b.
http://www.terrapinphoto.com
 
rafe bustin said:
OK, so the thing measures out at 2900 dpi.
No surprise. That just happens to be the
same number that Phil Lippincott assigned
to the LS-8000 on his "scannerforum" years
ago.

The 'test' I have seen from Phil Lippincott was seriously flawed, in
favor of his own brand. His judgements since then have, for me, been
devalued to misleading and of little value :-(
It is quite simple to test the true performance by scanning a slanted
edge test target, or an image of such a target.
A real, measured 2900 dpi for $200 is
a major achievement.

Yes, although I'm allergic to lies about specifications.
I wonder how the Minolta 5400 would test out
with these same USAF 1951 test targets.
Has anyone tried that? Is there publicly
posted info on how to interpret these targets?

The USAF 1951 targets are not suited for testing discrete sampling
systems like scanners or digicams. Slight (mis-)alignment with the
sensor array can easily double or halve the results. The ISO has
proposed a method based on a slanted edge, which allows one to derive
the MTF curve of the system.

http://www.xs4all.nl/~bvdwolf/main/foto/Imatest/SFR_DSE5400_GD.png
shows the response curve of the (my) scanner alone. The combined
response of camera lens+film+scanner compared with (too) many other
capture devices is given in
http://www.xs4all.nl/~bvdwolf/main/foto/Imatest/SFR_Graphs.png ,
although the scan results do assume the use of decent lenses, tripod,
mirror lock-up and, (in this case) Provia film. Different film or
technique obviously gives somewhat different results for all capture
devices mentioned.

The slanted edge target (SET) and how to make one has been discussed
in this group and has been mentioned in combination with the program
one can use to evaluate it, Imatest (www.imatest.com).

Bart
 
rafe said:
OK, so the thing measures out at 2900 dpi.
No surprise. That just happens to be the
same number that Phil Lippincott assigned
to the LS-8000 on his "scannerforum" years
ago.

A real, measured 2900 dpi for $200 is
a major achievement.

I wonder how the Minolta 5400 would test out
with these same USAF 1951 test targets.
Has anyone tried that? Is there publicly
posted info on how to interpret these targets?
What I would like to see is a proper measurement of the resolution of
this beast - in fact, a decent assessment from someone who is fairly
capable and independent.

For a scanner that promises so much, it has a remarkably quiet user base
- whether complaining that it fails to deliver or shouting the praises
that it does at such a low price. ;-)

Has anyone actually seen one of these units?
 
What I would like to see is a proper measurement of the resolution of
this beast - in fact, a decent assessment from someone who is fairly
capable and independent.

For a scanner that promises so much, it has a remarkably quiet user base
- whether complaining that it fails to deliver or shouting the praises
that it does at such a low price. ;-)

Has anyone actually seen one of these units?


I haven't seen one. I only learned of its
existence a few days ago.

There are a couple of scan samples here:

<http://www.plustek.com/film/tornado.htm>

and I have to admit these are surprisingly
good, in terms of the detail I can see.

I can't believe the 7200 dpi claim, at
least not from what little I've seen.
But I believe it's an amazing little
beastie at $189.

Considering I'd spent over $2K (combined)
for my first two film scanners, and only
got 2700 dpi from the better one... I'm
a bit peeved, in fact.

Grab tornado_1.psd and tornado_2.psd from
the URL above. Tell me if you think your
Nikon might have made a sharper scan.



rafe b.
http://www.terrapinphoto.com
 
rafe bustin said:
There are a couple of scan samples here:

<http://www.plustek.com/film/tornado.htm>

and I have to admit these are surprisingly
good, in terms of the detail I can see.

I can't believe the 7200 dpi claim, at
least not from what little I've seen.
But I believe it's an amazing little
beastie at $189.

Considering I'd spent over $2K (combined)
for my first two film scanners, and only
got 2700 dpi from the better one... I'm
a bit peeved, in fact.

Grab tornado_1.psd and tornado_2.psd from
the URL above.
Yes, I had a look at these a couple of days ago when you first posted
the link and they do look good for a scanner of that price. However,
being scanned at 3600ppi it is difficult to tell whether the limitations
are a consequence of the scanner or the downsampling used to get
3600ppi. For example, on the scan of the tailplane with the insignia
there is clear evidence of colour fringing on the lettering, with cyan
bleeding from the top of the horizontal lines and magenta bleeding from
the bottom, yellow bleeding to the right and blue to the left.

I am fairly certain these colours are not on the aircraft lettering,
which should have a clear black outline, but they may be on the original
slide. Other sharp transition lines in the image show similar colour
distortion. It is unlikely that the camera lens would have such
perfectly orthogonal distortion characteristics, but this crop is from
the far left of the frame so it may just be coincidental alignment. The
crop from the cockpit area also shows a similar colour fringing,
although it is less obvious, perhaps because of the lack of such high
contrast edges, but it is still there, for example on the seam between
the cockpit and fuselage.

It is difficult to say whether this colour distortion is actually a
limitation of the scanner below 3600ppi or the consequence of a
limitation below the claimed 7200ppi being downsampled.
Tell me if you think your
Nikon might have made a sharper scan.

Difficult to say if the Nikon would be sharper. In the current
condition of the scan, complete with colour fringing, I am sure that the
Nikon would be sharper and certainly cleaner in that there would be much
less fringing introduced by the scanner. If this fringing disappeared,
or was much reduced, by scanning at the native 7200ppi then I doubt that
the Nikon would do any better at all. It all depends on what is causing
that distortion.
 
Hi Bart,
http://www.xs4all.nl/~bvdwolf/main/foto/Imatest/SFR_DSE5400_GD.png shows
the response curve of the (my) scanner alone. The combined response of
I see your response goes beyond the Nyquist frequency of the digitized image;
how do you measure that? I can imagine you're using the diagonals in the
spectrum, or do you combine results from different scans at different positions?
Also I noted that your 50% MTF is at 60 cy/mm whereas in the results you
published on
http://www.jamesphotography.ca/bakeoff2004/scanner_test_results.html
a 50% MTF of 23.8 is given. Can you explain?
What was the test object in your 'scanner alone'? razor edge?
camera lens+film+scanner compared with (too) many other capture devices
is given in
http://www.xs4all.nl/~bvdwolf/main/foto/Imatest/SFR_Graphs.png ,
That plot represents quite some work!

-- hans
 
HvdV said:
I see your response goes beyond the Nyquist frequency of the
digitized image; how do you measure that?

By oversampling the sensor element's response. A slanted edge target
of adequate size allows one to take (a statistically useful multiple
of) about 10 samples at different overlap positions per pixel. When
you realize that the Nyquist frequency limits reliable resolution to
(pixel dimension)/2 cycles per image dimension, it (hopefully)
explains why oversampling allows measurement beyond Nyquist (which
helps to quantify the aliasing tendency).
I can imagine you're using the diagonals in the spectrum, or do you
combine results from different scans at different positions?

In fact, a slanted edge target allows one to produce an accurate
(oversampled) Line Spread Function. The MTF is just the Fourier Power
Spectrum of the first derivative of that LSF. So, basically all that
is needed is an accurate Edge Spread Function for the image dimension
of interest, and some math.
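For anyone who wants to try that math, the recipe above can be sketched in a few lines of numpy. This is only a toy illustration with a synthetic Gaussian-blurred edge, not Imatest's actual algorithm: differentiate the oversampled ESF to get the LSF, then take the normalized Fourier magnitude as the MTF.

```python
import numpy as np

def mtf_from_esf(esf, dx):
    """Estimate the MTF from an oversampled Edge Spread Function.

    esf: 1-D array of edge intensities (already oversampled, e.g. 10x/pixel)
    dx:  sample spacing in pixel units
    Returns (spatial frequencies in cycles/pixel, MTF normalized to 1 at DC).
    """
    lsf = np.gradient(esf, dx)            # Line Spread Function = d(ESF)/dx
    spectrum = np.abs(np.fft.rfft(lsf))   # magnitude of the Fourier transform
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    return freqs, spectrum / spectrum[0]  # normalize so MTF(0) = 1

# Synthetic check: an edge blurred by a Gaussian LSF of sigma = 0.5 pixel,
# sampled at 10x the pixel pitch (as the slanted-edge projection provides).
dx = 0.1
x = np.arange(-8, 8, dx)
esf = np.cumsum(np.exp(-x**2 / (2 * 0.5**2)))
esf /= esf[-1]                            # ideal 0 -> 1 edge response
freqs, mtf = mtf_from_esf(esf, dx)
# For this blur the analytic MTF at the Nyquist frequency (0.5 cy/px)
# is exp(-2 * pi^2 * 0.5^2 * 0.5^2), about 0.29.
```

In practice one would also window the LSF and correct for noise, but the derivative-then-FFT core is as shown.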
Also I noted that your 50% MTF is at 60 cy/mm whereas in the results
you published on
http://www.jamesphotography.ca/bakeoff2004/scanner_test_results.html
a 50% MTF of 23.8 is given. Can you explain?

That was due to:
1. A less than perfect target capture/slide, like caused by less than
perfect focus for the lens used to capture the image (or the lens I
used just performs better than Jim Hutchinson's).
2. In the Bakeoff 2004 you get a combined lens+film+scanner imaging
chain result, and in my "scanner alone" the non-scanner image is as
close to perfect (neither camera lens aberration nor focus error or
camera shake) as can be realistically expected.
What was the test object in your 'scanner alone'? razor edge?

Yes, a razor like edge mounted at a slight slant in a slide frame. So
camera lens+film MTF was eliminated, and only the edge quality was
limiting the scanners' MTF.
That plot represents quite some work!

It's not too bad, although using a program like "Imatest", and MS
Excel for the summary graphs, really helps ;-). I used to use an ISO
provided example program for the SFR evaluations, but Imatest
(www.imatest.com) is much more convenient, and it provides several
other useful image quality evaluations.

Bart
 
Bart van der Wolf said:
In fact, a slanted edge target allows to produce an accurate
(oversampled) Line Spread Function. The MTF is just the Fourier Power
Spectrum of the first derivative of that LSF.

Not quite. The slanted edge produces an oversampled *edge* spread
function. The first derivative of that edge *approximates* the line
spread function. The MTF is the fourier transform of the line spread
function.

The significance of the difference between your description and the
correction is that the LSF is only approximated by differentiating the
ESF. In particular, if there is any hysteresis, lag or asymmetry in the
actual LSF then the approximation can be significantly inaccurate. Such
asymmetries are almost inevitable on a sub-sample scale (as produced by
the oversampling) and can be significant at larger scales simply due to
the bandwidth of the readout electronics.

Ideally, two edges (black to white and white to black) are required to
correct for this effect.
 
Bart van der Wolf said:
SNIP

Thanks for the correction, and additions.
Bart, since you are using this test application, it might be worth
running the same MTF assessment on a negative image of the standard
pattern, to check that the asymmetric response your scanner has is not
driving the results you achieve.

I don't know if Imatest would work with that, although I suspect it
would. However, it would be easy enough to invert the data after
capture to ensure that it would be compatible.
 
SNIP
Bart, since you are using this test application, it might be worth
running the same MTF assessment on a negative image of the standard
pattern, to check that the asymmetric response your scanner has is
not driving the results you achieve.

I don't know if Imatest would work with that, although I suspect it
would. However, it would be easy enough to invert the data after
capture to ensure that it would be compatible.

In my case it happens to give identical results (ignore red lines):
http://www.xs4all.nl/~bvdwolf/temp/scan0017_Y_cpp.png
shows the edge profile, and SFR, and after inverting it in Photoshop
http://www.xs4all.nl/~bvdwolf/temp/scan0017PSinvert_Y_cpp.png
the evaluation gives identical results in these linear gamma
evaluations.

In some other tests I have seen, the Edge profile may show a slightly
more asymmetric edge but that might be caused by slope limited gamma
conversion (one of the reasons I prefer Gamma = 1.0 tests right from
the start).

Bart
 
Hi Bart,
By oversampling the sensor element's response. A slanted edge target of
adequate size allows to take (a statistically useful multiple of) about
10 samples at different overlap positions per pixel. When you realize
I see, so Imatest takes it as multiple 1D edge responses to decode the
aliased frequencies from the image.
that the Nyquist frequency limits reliable resolution to (pixel
dimension)/2 cycles per image dimension, it (hopefully) explains why
The sensor element size will indeed reduce the higher frequencies and cause
zeros in the band.
oversampling allows to measure beyond Nyquist (which helps to
value/quantify the aliasing tendency).


In fact, a slanted edge target allows to produce an accurate
(oversampled) Line Spread Function. The MTF is just the Fourier Power
Spectrum of the first derivative of that LSF. So, basically all that is
needed is an accurate Edge Spread Function for the image dimension of
interest, and some math.
It would be a good idea to have a line object; one could do away with the
ugly differentiation then. The smaller size gives rise to a weaker signal,
but that can be countered by integrating along the line. I'm not
speculating; I did something like this in the microscopy field.
Still better is measuring Point Spread Functions, but it takes some equipment
to manufacture test samples.
That was due to:
1. A less than perfect target capture/slide, like caused by less than
perfect focus for the lens used to capture the image (or the lens I used
just performs better than Jim Hutchinson's).
2. In the Bakeoff 2004 you get a combined lens+film+scanner imaging
chain result, and in my "scanner alone" the non-scanner image is as
close to perfect (neither camera lens aberration nor focus error or
camera shake) as can be realistically expected.
So the other parts in the chain largely dominate the results, which explains
why everything is so close together. This suggests that the 'bakeoff' is
fairly useless for judging scanner quality. I'm glad you published your
measurements!
It's not too bad, although using a program like "Imatest", and MS Excel
for the summary graphs, really helps ;-). I used to use an ISO provided
example program for the SFR evaluations, but Imatest (www.imatest.com)
is much more convenient, and it provides several other useful image
quality evaluations.
I looked at the Imatest results; I get the impression the accuracy of the
OTFs is fairly limited at higher frequencies. I wonder how well they
reproduce if you repeat an experiment.

-- Hans
 
Would be a good idea to have a line object, one could do away with the
ugly differentiation then. The smaller size gives rise to a weaker
signal, but that can be countered by integrating along the line. I'm
not speculating, I did something like this in the microscopy field.

Unfortunately it doesn't work with sampled imaging sensors, although it
is standard practice for continuous systems.

You need to have a line object which is much finer than the pixel pitch
to stimulate response up to and beyond Nyquist. The actual position of
the line, relative to the samples then becomes a variable, indeed THE
relevant variable, and the output must be measured as a function of this
- in short, the line must be scanned across the samples. One solution
to this is to tilt the line in the same way as the tilted edge, so that
the pixels can be oversampled - but that then prevents you from
integrating along the line to compensate for the reduced signal.

I have been using the sloping edge method for almost 2 decades now,
since it was pioneered by a colleague of mine at what was then the Royal
Signals and Radar Establishment, in Malvern, England - the same
establishment that developed radar, LCDs and a host of other
technologies we take for granted. It is fairly recent that it has been
standardised by ISO for the measurement of digital cameras and scanners.
Still better is measuring Point Spread Functions, but it takes some
equipment to manufacture test samples.

Again, you need to scan the point across a sample, in 2 dimensions, to
measure the PSF. Without scanning you cannot get information beyond
Nyquist, and even the PSF you get below Nyquist (which means the spot is
larger than the sample pitch) will vary significantly due to the
phase of the aliased components.
 
Hi Kennedy,
(cc to Kennedy, getting OT)
Unfortunately it doesn't work with sampled imaging sensors, although it
is standard practice for continuous systems.
Why? Even in the pseudo-1-D approach I can imagine averaging over multiple
similarly shifted pixels. Which can be extended to multiple lines.
You need to have a line object which is much finer than the pixel pitch
to stimulate response up to and beyond Nyquist. The actual position of
First of all, as an aside, 'Nyquist' is used a bit strangely in this
newsgroup; in signal and image processing the following is used:
The Nyquist-Shannon sampling theorem establishes that "when sampling a signal
(e.g., converting from an analog signal to digital), the sampling frequency
must be greater than twice the bandwidth of the input signal in order to be
able to reconstruct the original perfectly from the sampled version"
(http://en.wikipedia.org/wiki/Nyquist-Shannon_sampling_theorem)

In short, the critical sampling frequency (Nyquist rate) follows from the
signal, not from the sensor. If you violate the theorem, as Bart's
measurements show some scanners do, the perfect reconstruction is not always
possible. Except for simple objects. The role of the anti aliasing filters is
to throw away bandwidth in order to reduce the Nyquist rate so it matches the
sensor pitch.

Regarding the size of the test object, line or sphere: it does not need to
be extremely small, but you have to know its geometry. Roughly, the test
object can be as wide as the PSF; beyond that, accuracy suffers. You recover
the true PSF from the image with an inverse deconvolution procedure, which
also gets you around the holes in the object spectrum.
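That recovery step can be sketched as a regularized spectral division. This is a toy 1-D example; the Gaussian PSF, the bar width, and the regularization constant are all arbitrary illustrative choices, not a claim about the actual procedure used.

```python
import numpy as np

n = 256
x = np.arange(n) - n // 2

# "True" PSF we pretend not to know: a Gaussian, sigma = 3 samples
psf = np.exp(-x**2 / (2 * 3.0**2))
psf /= psf.sum()

# Known test object: a bar 5 samples wide (comparable to the PSF width)
obj = (np.abs(x) <= 2).astype(float)

# The measurement is the convolution of the known object with the PSF
measured = np.real(np.fft.ifft(np.fft.fft(obj)
                               * np.fft.fft(np.fft.ifftshift(psf))))

# Recover the PSF by regularized spectral division: dividing by the
# object spectrum alone would blow up at its nulls, so a small constant
# keeps the division stable around those holes in the object spectrum.
O = np.fft.fft(obj)
eps = 1e-3
psf_rec = np.real(np.fft.fftshift(np.fft.ifft(
    np.fft.fft(measured) * np.conj(O) / (np.abs(O)**2 + eps))))
```

With real, noisy data the regularization constant has to be tuned against the noise level, which is exactly where the spectral nulls start to hurt.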
I have been using the sloping edge method for almost 2 decades now,
since it was pioneered by a colleague of mine at what was then the Royal
Signals and Radar Establishment, in Malvern, England - the same
establishment that developed radar, LCDs and a host of other
technologies we take for granted. It is fairly recent that it has been
standardised by ISO for the measurement of digital cameras and scanners.
Incidentally, the technique I mentioned above was developed >15 years ago in
the context of a superresolution project with a former colleague of yours,
Roy Pike; he worked for or with the RSRE around that time.
Again, you need to scan the point across a sample, in 2 dimensions, to
measure the PSF. Without scanning you cannot get information beyond
Nyquist, and even the PSF you get below Nyquist (which means the spot is
larger than the sample pitch) will vary significantly due to the
phase of the aliased components.
I'm not so sure this can't be done. Using the a priori knowledge of the test
object geometry and positivity of the signal you can probably reconstruct the
position of the test objects with sub-pixel accuracy, which would be as good
as scanning them.
IMO the really nasty problem here is that all this assumes that we are
dealing with a linear system, which is not entirely true.

Sorry for the rather OT character, cheers, Hans
 
Hi Kennedy,
(cc to Kennedy, getting OT)
Why? Even in the pseudo-1-D approach I can imagine averaging over
multiple similarly shifted pixels.

Because it isn't a matter of averaging, but of measuring the output as a
function of the phase of the input stimuli relative to the samples. You
can average as many similarly shifted pixels relative to the input
stimulus as you like, but that will only give you one value - and that
is just a single point on the LSF. You must then *change* the phase of
the stimulus and measure another point on the LSF.
Which can be extended to multiple lines.

Yes, you can use multiple lines, but these must be arranged in an exact
relation to the samples so that each line is at an incremental phase
relative to the sample. That is almost impossible to achieve with a
scanner since the mechanical steps and the optical magnification are not
likely to be accurate or stable enough for such a pattern of multiple
lines to be produced.
First of all, to the side, 'Nyquist' is used a bit strangely in this
newsgroup, in signal and image processing the following is used:
The Nyquist-Shannon sampling theorem establishes that "when sampling a
signal (e.g., converting from an analog signal to digital), the
sampling frequency must be greater than twice the bandwidth of the
input signal in order to be able to reconstruct the original perfectly
from the sampled version"
(http://en.wikipedia.org/wiki/Nyquist-Shannon_sampling_theorem)
And the consequence is that the maximum input frequency that the
sampling system can unambiguously represent, the Nyquist limit
frequency, is half the sampling frequency.
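A quick numerical check of that consequence (toy numbers, Python/numpy): a sine above the Nyquist limit of a fixed-rate sampler produces exactly the same samples as its alias folded back below Nyquist, so the sampler cannot distinguish the two.

```python
import numpy as np

fs = 10.0                      # fixed sampling rate, samples per unit length
nyquist = fs / 2.0             # 5 cycles/unit: highest unambiguous frequency
t = np.arange(0, 2, 1 / fs)    # the fixed sample positions

f_high = 7.0                   # input frequency above the Nyquist limit...
f_alias = fs - f_high          # ...folds back to 3 cycles/unit

sampled_high = np.sin(2 * np.pi * f_high * t)
sampled_alias = np.sin(-2 * np.pi * f_alias * t)   # folded alias (sign flips)

# The two sampled sequences are identical sample for sample: once the
# sampling rate is fixed, frequencies above fs/2 are unrecoverable.
```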
In short, the critical sampling frequency (Nyquist rate) follows from
the signal, not from the sensor.

But with scanners, your sampling frequency is fixed and hence THAT
determines the Nyquist limit.
If you violate the theorem, as Bart's measurements show some scanners
do, the perfect reconstruction is not always possible.

Indeed - actually *most* film scanners undersample the input, while most
modern flatbed scanners oversample the input. Thus you cannot determine
the LSF of a film scanner without using some method to oversample the
data.
Except for simple objects. The role of the anti aliasing filters is to
throw away bandwidth in order to reduce the Nyquist rate so it matches
the sensor pitch.
When you invent a filter that will throw away the unwanted bandwidth
above the Nyquist limit of your scanner without loss of contrast, ie
poor MTF, below the Nyquist limit then you will be a rich man! ;-)
There are techniques to do this with monochromatic sources, but
currently none that work broadband. The best you can do currently is to
design the sensor so that the sampling exceeds the optical cut-off of
the system, which requires very high resolution. (eg. an f/2 diffraction
limited lens will resolve out to 25,400cy/in which, as you note above,
requires 50,000ppi sampling!)

A lesser, but more practical, alternative is to oversample the CCD
elements themselves, by half-stepping and/or staggering the lines. In
this technique, the MTF of the CCD pixel itself, nominally a sinc
function, multiplied by the MTF of the practical optic, effectively cuts
the response around the first zero of the sinc function. Since that
occurs at a spatial frequency of 1/(cell width), and the CCD may have
close to 100% fill factor, all that is required to meet the Nyquist
criterion is to sample at half the cell pitch. Hence two CCD rows are
used with a half pixel stagger between them. Epson were the first to
introduce this about a decade ago with their HyperCCD, although Canon
preceded them with a mechanically dithered system using a single CCD
line for each colour instead of two. These days you would be hard
pressed to find a high resolution flatbed that doesn't have this
feature.
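The arithmetic behind that design choice is easy to verify: the MTF of a box-shaped pixel aperture is a sinc whose first zero falls at one cycle per cell width, and sampling at half the cell pitch places the Nyquist limit exactly on that zero. A small idealized sketch, ignoring the optics:

```python
import numpy as np

w = 1.0                          # CCD cell width, assuming 100% fill factor
f = np.linspace(0.0, 2.0, 201)   # spatial frequency, cycles per cell width

# MTF of a box-shaped pixel aperture: |sinc(f*w)| (np.sinc includes the pi)
mtf_pixel = np.abs(np.sinc(f * w))

# The first zero of this aperture MTF sits at f = 1/w. Sampling at half
# the cell pitch gives a sampling rate of 2/w, i.e. a Nyquist limit of
# 1/w -- exactly on that zero, so nothing above Nyquist survives the
# aperture (no aliasing), at the cost of a visibly soft image.
nyquist_half_pitch = (2.0 / w) / 2.0
```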

However, note that you are now sampling at a frequency where the MTF of
the system at the Nyquist limit, and for some range of frequencies below
that, is zero. The consequence is that whilst aliasing is completely
avoided by oversampling flatbed designs, the image itself looks soft.
You only need to look in the archives of this group to find that one of
the most common questions or complaints is about flatbeds not
delivering the resolution that they claim - well, they generally do, but
by the time they reach the Nyquist frequency for that sampling
resolution, the MTF has fallen to zero because that is specifically what
they are intended to achieve.
Regarding the size of the test object, line or sphere, it does not need
to be extremely small, but you have to know its geometry. Roughly, the
test object can be as wide as the PSF, beyond that accuracy suffers.

Yes, and since the PSF of a film scanner (ie. an undersampled imaging
sensor) is smaller than the pixel pitch, that means that not only does
the stimulus have to be extremely small, but its exact phase relative to
the CCD cell is critical.
You recover the true PSF from the image with an inverted deconvolution
procedure, which also gets you around the holes in the object spectrum.

Again, this only recovers the PSF and is a consequence of the fact that
the signal you measure is the convolution of the PSF and the spot, or
line, shape. However it doesn't overcome the fact that you have
insufficient resolution to determine the PSF in the first place unless
you come up with some oversampling scheme. Furthermore, the larger the
spot or line is, the lower the zeros in its MTF appear and consequently
the noisier that deconvolution becomes - the noise is a consequence of
the spectral nulls - even though inversion prevents an unspecified
result occurring, it doesn't eliminate the noise it causes. That is why
the differentiation of an edge is better, completely avoiding the issue
of spectral nulls.

Let's try explaining this another way. Consider a single pixel in the
CCD - that has a spatial frequency response, the CCD MTF. Let's ignore
the optics for the moment, which only degrade the MTF of the overall
system. The MTF is just the modulus of the fourier transform of the
PSF. Now, since you only have one pixel, you cannot measure the PSF
directly in a single measurement - you need to scan the point across the
field and measure the output of the single pixel at each and every
position. At some positions the spot will be off the pixel, giving no
signal, at others it will be perfectly aligned with the pixel, giving
maximum signal, and at other positions in between, the pixel and the
spot will partially overlap giving an intermediate signal. The shape of
the signal plotted against the scan position is a convolution of the
spot and the pixel PSF. Since the spot is finite, you can get the pixel
PSF by deconvolving the overall PSF with the shape of the spot. That is
just standard stuff.

The critical issue though, is that you must have sufficient data of the
PSF shape to be able to determine the MTF. As a consequence of
Nyquist-Shannon, you actually need more than 2 non-zero points on each
side of the PSF to get a minimum approximation of the MTF. All of this
is fine if the PSF is large and extends across several CCD cells,
because then you can eliminate the scanning step and get adequate detail
on the PSF shape simply by examining the data from adjacent CCD cells -
they will produce the same output as the original cell would when the
spot was scanned into that position. But that is not the case on high
resolution scanners which have been specifically designed to reproduce
as much resolution as possible. In these devices, the PSF is small -
usually less than, or at least comparable with, a pixel pitch - and that
means you need to have data on the PSF which is finer than the pixel
pitch: your scan steps must be less than a pixel pitch. In short, the
fine scanning step oversamples the single cell.

Now, instead of scanning the spot across the cell in very fine steps,
you can achieve the same effect by tilting a line slightly off of the
scan axis. Then each pitch sized step produces a slight shift of the
line relative to the CCD cell and hence a measure of the output from the
cell at a fine scan step - the ratio of the pixel step and the shift
simply being the gradient of the line. Technically this gives a
convolution of the line width and the system LSF, which is the PSF in
the axis across the line, but critically, it is at finer steps than the
coarse samples - the LSF has been oversampled. Thus the shape of the
LSF can be determined with sufficient accuracy by deconvolving it with
the known line width, and then Fourier transforming to determine the MTF
in that axis.
Similarly the line can be replaced with an edge and the ESF
differentiated to obtain the LSF.
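The tilt-to-oversample trick described above can be simulated directly. A minimal sketch with a synthetic image (the slope and intercept are arbitrary illustrative values): each row shifts the edge by a sub-pixel amount, so pooling all pixels by their distance to the edge yields an ESF sampled ten times more finely than the pixel pitch.

```python
import numpy as np

rows, cols = 100, 20
slope = 0.1        # edge tilt: 1 pixel of horizontal shift per 10 rows
x_edge0 = 9.5      # edge x-intercept at y = 0 (illustrative geometry)

i, j = np.mgrid[0:rows, 0:cols]
# Signed horizontal distance from each pixel centre to the tilted edge
d = (j + 0.5) - (slope * (i + 0.5) + x_edge0)

# Pixel response to the step edge through a 1-pixel-wide box aperture:
# the fraction of the aperture lying on the bright side of the edge.
img = np.clip(d + 0.5, 0.0, 1.0)

# Pool every pixel by its distance to the edge. Each row shifts the edge
# by 0.1 pixel, so the pooled samples land at many sub-pixel phases.
order = np.argsort(d.ravel())
esf_x = d.ravel()[order]
esf_y = img.ravel()[order]

# Bin into 0.1-pixel bins around the edge: a 10x-oversampled ESF, ready
# to be differentiated into the LSF as described above.
edges = np.arange(-2.0, 2.0 + 1e-9, 0.1)
idx = np.digitize(esf_x, edges)
esf_binned = np.array([esf_y[idx == k].mean()
                       for k in range(1, len(edges))])
```

The binned ESF rises monotonically from 0 to 1 across the one-pixel aperture width, at steps far finer than the pixel pitch, which is exactly what a single scan line could never provide.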

The critical step is oversampling - whether using a line or an edge is
irrelevant. However, you have no means of oversampling AND averaging
along a line, which is necessary to compensate for the loss of signal,
while the edge technique requires no averaging because it uses maximum
contrast throughout, and gives a good SNR at each position.
Incidentally, the technique I mentioned above was developed >15 years ago
in the context of a superresolution project with a former colleague of
yours, Roy Pike; he worked for or with the RSRE around that time.
Not a name I recognise - but it sounds like he worked with Lettington
who discussed this type of process with me a lot. Even so, it doesn't
address the oversampling - you need to have adequate information on the
shape of that PSF before you can get the MTF. Without oversampling
there just isn't enough detail and deconvolution, however it is
performed, isn't going to add in what isn't there to begin with.
I'm not so sure this can't be done. Using the a priori knowledge of the
test object geometry and positivity of the signal you can probably
reconstruct the position of the test objects with sub-pixel accuracy,
which would be as good as scanning them.

You haven't used many commercial scanners, have you? ;-)
When a manufacturer quotes 4000ppi, it could be anything from less than
3900 to 4100 or more - and the same device will vary throughout a
significant part of that range as a function of temperature and focus
position. So you don't have any "a-priori" knowledge of the test
geometry at all. Even worse, you might think you do and consequently
produce meaningless data.
IMO the really nasty problem here is that all this assumes that we are
dealing with a linear system, which is not entirely true.
Actually, the linearity of most CCDs is pretty impressive - more than
good enough for these purposes, although I would not base a measurement
on an edge that came close to saturating the CCD range, for this very
reason. IIRC, Imatest actually warns specifically about that.
 
Hi All,

This post is getting really off topic, sorry about that, but I felt it was
still relevant to the film scanner topic, hope you agree!
Because it isn't a matter of averaging, but of measuring the output as a
function of the phase of the input stimuli relative to the samples. You
can average as many similarly shifted pixels relative to the input
stimulus as you like, but that will only give you one value - and that
is just a single point on the LSF. You must then *change* the phase of
the stimulus and measure another point on the LSF.
It is not necessary to do the integration by resampling; the integration
follows from the known shape of the measurement object and other constraints
which might be imposed, like the known bandwidth of the optics.
What you do with the slanted edge fits also in this framework: without
knowing it can be represented by a step function at a certain angle you
cannot decode the aliased frequencies in the data. This leads to my initial
remark: is a step function the ideal test object? Due to the rapidly decaying
frequency content of a step function I think it isn't. A thin bar would be
better, but still you'd measure only a line in the 2D OTF, or a plane if you
go for the full 3D OTF.
I suspect the sole reason edges are used is that they are easy to make at
this scale. Nothing wrong with that, but it is a good idea to keep looking
for better test objects.
Yes, you can use multiple lines, but these must be arranged in an exact
relation to the samples so that each line is at an incremental phase
relative to the sample. That is almost impossible to achieve with a
scanner since the mechanical steps and the optical magnification are not
likely to be accurate or stable enough for such a pattern of multiple
lines to be produced.
Sufficiently small circular or spherical objects with known diameter will do
the trick, position can be random.
And the consequence is that the maximum input frequency that the
sampling system can unambiguously represent, the Nyquist limit
frequency, is half the sampling frequency.
Please reread the quote: it states the Nyquist rate is related to the
properties of the input function or signal. After checking some literature, it seems
optics related literature uses this form. In one reference, Numerical
Recipes, the usage was mixed between yours and this one.
When you invent a filter that throws away the unwanted bandwidth above
the Nyquist limit of your scanner without losing contrast (ie. degrading
the MTF) below the Nyquist limit, then you will be a rich man! ;-)
There are techniques to do this with monochromatic sources, but
currently none that work broadband. The best you can do currently is to
design the sensor so that the sampling exceeds the optical cut-off of
the system, which requires very high resolution (eg. an f/2 diffraction
limited lens will resolve out to 25,400cy/in which, as you note above,
requires 50,000ppi sampling!).
Which is how it *should* be done.
For an incoherent system the required sample spacing is delta_x =
lambda/(4 n sin(alpha)), with alpha the half aperture angle and n the
refractive index (the incoherent cut-off is 2 n sin(alpha)/lambda, so
Nyquist demands sampling at twice that rate). For 500nm light and a .45
radian aperture you get ~280 nm. In ppi that is ~90,000. For a coherent
system the cut-off halves and the spacing doubles, close to your value.
But in systems like this there is always partial coherency. BTW a
jam-jar-bottom lens of that aperture has the same bandwidth.
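The numbers quoted here work out if delta_x is read as the Nyquist sample spacing for the incoherent cut-off of 2 n sin(alpha)/lambda, i.e. delta_x = lambda/(4 n sin(alpha)). A quick Python check (hypothetical sketch, all values from the post above):

```python
import math

lam = 500e-9            # wavelength, m
n_idx = 1.0             # refractive index
alpha = 0.45            # half aperture angle, rad

# Incoherent cut-off frequency is 2*n*sin(alpha)/lambda; sampling it
# at the Nyquist rate means a spacing of lambda / (4*n*sin(alpha)).
dx = lam / (4 * n_idx * math.sin(alpha))
ppi = 0.0254 / dx       # samples per inch

print(round(dx * 1e9), "nm,", round(ppi), "ppi")
```

This lands at roughly 280 nm and 90,000 ppi, matching the figures in the text.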
A lesser, but more practical, alternative is to oversample the CCD
elements themselves, by half-stepping and/or staggering the lines. In
this technique, the MTF of the CCD pixel itself, nominally a sinc
function, multiplied by the MTF of the practical optic, effectively cuts
the response around the first zero of the sinc function. Since that
occurs at a spatial frequency of 1/(cell width), and the CCD may be
close to 100% fill factored, all that is required to meet the Nyquist
criteria is to sample at half the cell pitch. Hence two CCD rows are
used with a half pixel stagger between them. Epson were the first to
introduce this about a decade ago with their HyperCCD, although Canon
preceded them with a mechanically dithered system using a single CCD
line for each colour instead of two. These days you would be hard
pressed to find a high resolution flatbed that doesn't have this feature.
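The half-pixel stagger is easy to sketch numerically (hypothetical Python, with an arbitrary smooth function standing in for the scene): interleaving two rows offset by half a pitch yields one sequence sampled at twice the rate, without shrinking the cells.

```python
import numpy as np

def scene(x):
    """Arbitrary smooth stand-in for the image falling on the CCD."""
    return np.sin(0.8 * x) + 0.3 * np.cos(2.1 * x)

pitch = 1.0                           # CCD cell pitch, arbitrary units
n = np.arange(16)

row_a = scene(n * pitch)              # first CCD line
row_b = scene(n * pitch + pitch / 2)  # second line, half-pixel stagger

# Interleaving the two rows yields one sequence on a pitch/2 grid:
# the sampling frequency is doubled.
combined = np.empty(2 * n.size)
combined[0::2] = row_a
combined[1::2] = row_b

direct = scene(np.arange(2 * n.size) * pitch / 2)
assert np.allclose(combined, direct)
```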
ok, that makes it easier to meet the bandwidth of the lens.
Again, this only recovers the PSF and is a consequence of the fact that
the signal you measure is the convolution of the PSF and the spot, or
line, shape. However it doesn't overcome the fact that you have
insufficient resolution to determine the PSF in the first place unless
you come up with some oversampling scheme. Furthermore, the larger the
spot or line is, the lower the zeros in its MTF appear and consequently
the noisier that deconvolution becomes - the noise is a consequence of
the spectral nulls - even though inversion prevents an unspecified
result occurring, it doesn't eliminate the noise it causes. That is why
the differentiation of an edge is better, completely avoiding the issue
of spectral nulls.
One advantage of using a reversed iterative deconvolution technique is that
you separate the measurement space from the object (in this case the PSF)
space. That allows you to estimate the object outside the measured region,
very useful to remove blur, but also to handle 'missing' data. In order to do
that in noisy conditions you need to put in as much a priori information as
possible, for example the known geometry of the test object, but also the
statistical properties of the noise, aperture of the lens, and so on.
Let's try explaining this another way. Consider a single pixel in the
CCD - that has a spatial frequency response, the CCD MTF. Lets ignore
the optics for the moment, which only degrade the MTF of the overall
system. The MTF is just the modulus of the fourier transform of the
PSF. Now, since you only have one pixel, you cannot measure the PSF
directly in a single measurement - you need to scan the point across the
field and measure the output of the single pixel at each and every
position. At some positions the spot will be off the pixel, giving no
signal, at others it will be perfectly aligned with the pixel, giving
maximum signal, and at other positions in between, the pixel and the
spot will partially overlap giving an intermediate signal. The shape of
the signal plotted against the scan position is a convolution of the
spot and the pixel PSF. Since the spot is finite, you can get the pixel
PSF by deconvolving the overall PSF with the shape of the spot. That is
just standard stuff.
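That "standard stuff" can be verified in a few lines (hypothetical discrete sketch in Python): stepping a finite spot past a single pixel's response and recording the overlap at each shift reproduces exactly what np.convolve gives.

```python
import numpy as np

# Hypothetical discrete stand-ins: a 3-sample-wide spot and a pixel
# responsivity profile, both on a fine sub-pixel grid.
spot = np.array([0.25, 0.5, 0.25])
pixel = np.array([0.1, 0.8, 1.0, 0.8, 0.1])

# Scan the spot past the pixel: at each shift the output is the sum of
# the spot/pixel overlap, which is exactly a discrete convolution.
signal = []
for shift in range(len(pixel) + len(spot) - 1):
    out = 0.0
    for i, s in enumerate(spot):
        j = shift - i
        if 0 <= j < len(pixel):
            out += s * pixel[j]
    signal.append(out)

assert np.allclose(signal, np.convolve(spot, pixel))
```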
There is a little more involved...
The critical issue though, is that you must have sufficient data of the
PSF shape to be able to determine the MTF. As a consequence of
Nyquist-Shannon, you actually need more than 2 non-zero points on each
side of the PSF to get a minimum approximation of the MTF. All of this
is fine if the PSF is large and extends across several CCD cells,
It always does, since its Fourier transform has finite support (it is
band-limited).

Now, instead of scanning the spot across the cell in very fine steps,
you can achieve the same effect by tilting a line slightly off of the
scan axis. Then each pitch sized step produces a slight shift of the
line relative to the CCD cell and hence a measure of the output from the
cell at a fine scan step - the ratio of the pixel step and the shift
simply being the gradient of the line. Technically this gives a
convolution of the line width and the system LSF, which is the PSF in
the axis across the line, but critically, it is at finer steps than the
coarse samples - the LSF has been oversampled. Thus the shape of the
LSF can be determined with sufficient accuracy by deconvolving it with
the known line width, then FTing it to determine the MTF in that axis.
Similarly the line can be replaced with an edge and the ESF
differentiated to obtain the LSF.

The critical step is oversampling - whether using a line or an edge is
irrelevant. However, you have no means of oversampling AND averaging
along a line, which is necessary to compensate for the loss of signal,
while the edge technique requires no averaging because it uses maximum
contrast throughout, and gives a good SNR at each position.
Yes, you can do the averaging, the constraints in the deconvolution do this
for you automatically! These can also be extended over multiple objects, even
multiple images.
Not a name I recognise - but it sounds like he worked with Lettington
who discussed this type of process with me a lot. Even so, it doesn't
address the oversampling - you need to have adequate information on the
shape of that PSF before you can get the MTF. Without oversampling
there just isn't enough detail and deconvolution, however it is
performed, isn't going to add in what isn't there to begin with.
A common misunderstanding. You *can* retrieve 'lost' spatial frequency
components *and* lost spatial structures at the same time.
There is quite an amount of literature on the topic. For example
'Introduction to inverse problems in imaging' by Bertero & Boccacci is quite
accessible.
But I'm afraid nothing short of me producing a piece of software which does
the job will convince you.
You haven't used many commercial scanners, have you? ;-)
That's not an argument.
When a manufacturer quotes 4000ppi, it could be anything from less than
3900 to 4100 or more - and the same device will vary throughout a
significant part of that range as a function of temperature and focus
position. So you don't have any "a-priori" knowledge of the test
geometry at all. Even worse, you might think you do and consequently
produce meaningless data.
Calibration errors are always a serious concern, sure. BTW, it would be
interesting to see film scanner MTFs in more than one direction.
Actually, the linearity of most CCDs is pretty impressive - more than
good enough for these purposes, although I would not base a measurement
on an edge that came close to saturating the CCD range for this very
purpose. IIRC, Imatest actually warns specifically about that.
Sorry, I was not thinking of CCD linearity but rather of the coherency
problem, but found it too off topic to mention.

-- Hans
 
HvdV said:
Hi All,

This post is getting really off topic, sorry about that, but I felt it
was still relevant to the film scanner topic, hope you agree!

It is not necessary to do the integration by resampling, the
integration follows from the known shape of the measurement object and
other constraints which might be imposed, like the known bandwidth of
the optics.

The sloping edge isn't doing integration! You are measuring the PSF
with finer precision than the sampling density permits on its own by
moving the stimulus relative to the samples. Knowing the shape of the
stimulus, whether an edge or a line, makes no difference to this.
Without oversampling all you can achieve is a measurement of the PSF
with a precision of no better than a single pixel pitch - and the
unknown position of the stimulus within that pitch means that you have
no knowledge of where the aliased components lie or how they affect your
single datapoint result.
What you do with the slanted edge fits also in this framework: without
knowing it can be represented by a step function at a certain angle you
cannot decode the aliased frequencies in the data.

I am beginning to wonder if you have understood what the slanted edge
actually achieves. Firstly, your statement is completely wrong, because
it is very simple to decode the aliased frequency content simply because
the slanted edge permits oversampling of the data. So let me explain
the process. Lets assume that the edge in question has a gradient of
1/10th - that is, for every 10 pixels down the frame, the edge moves one
pixel to the left.

Now, if you select the data from left to right, ie. across the edge, on
any line then all you get is an edge spread function sampled at the
pixel sampling rate. Because of this sampling limit, you cannot
determine the MTF of the system any higher than half of the sampling
frequency - the Nyquist limit. Furthermore, since you do not know the
phase of the edge relative to the samples, you cannot determine where
the aliased components of the MTF will lie, or how many times that
spectrum is "folded" into the pass band. If you are lucky, the MTF will
have reduced to zero by the sampling frequency itself, but if the fill
factor of the CCD is less than 100% that is unlikely. Recall that the
MTF of a perfectly flat response sensor is sinc(pi.a.f), where a is the
pixel width, which does not fall to its first zero until f=1/a. At 100%
fill factor, a is also the pixel pitch, and so the MTF does not reach
zero until the sampling frequency itself. At the Nyquist limit, this is
63.7%, but for practical CCDs where the pixel width is less than the
pitch, this is likely to be much higher. So, measuring the PSF across
an edge (or a line) directly is completely useless because the result
is dominated by uncontrolled alias components.

However... turn the source data through 90 degrees!

Instead of looking at the samples across the edge, take the samples in
the axis that is almost parallel to the edge. Now, instead of a data
set which abruptly transitions from black to white, you have data which
gradually transitions - because at each new line the phase of the edge
relative to the samples has progressed by 1/10th of a pixel - the line
gradient. Consequently, the data series *down* the edge is almost the
same data as would be produced if the edge were perfectly vertical and
samples from a single pixel position had been taken while it was moved
by one tenth of a pixel past it. In short, the sampling frequency by
which the PSF is measured has been increased by a factor of 10. By
taking data from well before the edge reaches a particular column to
well after it has passed, the full extent of the ESF can be determined
with 10x the resolution of the raw sampling system. Now, when you
compute the MTF from the PSF measured at this resolution (10x the
original sampling frequency of the sensor) it will have no alias
components up to 5x the original sampling frequency (ie. the Nyquist
limit of the oversampled rate). The whole system has been "geared up"
by the gradient of the edge.

Unless you are really unlucky, the optical system, or optical and
electrical crosstalk in the CCD itself, will have run out of all
resolution by 5x the sampling frequency, so you no longer have to worry
about the aliased components - oversampling due to the slant has
resulted in an adequate precision of the PSF and consequently the MTF.
If it hasn't, then the MTF you end up with will show a finite and
significant level at the oversampled Nyquist limit, and re-testing with
a steeper gradient will resolve the problem.

As you should now see, the edge isn't the most valuable part of this
process, the slope is. That is what produces the oversampling, and that
is what eliminates the aliasing from the measurement. The edge merely
provides a flat frequency test spectrum.
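The "gearing up" can be simulated end to end (a hypothetical Python sketch, assuming a Gaussian system blur for illustration): a synthetic edge with the 1/10 gradient from the example above is sampled on an integer pixel grid, and reading a single column down the rows recovers the ESF on a 0.1-pixel grid.

```python
import numpy as np
from math import erf, sqrt

SIGMA = 0.8   # assumed Gaussian blur of the system under test, in pixels
SLOPE = 0.1   # edge moves 1/10 pixel per row, as in the example above

def esf(x):
    """Edge spread function of a Gaussian-blurred ideal edge."""
    return 0.5 * (1.0 + erf(x / (SIGMA * sqrt(2.0))))

rows, cols = 100, 32
image = np.empty((rows, cols))
for r in range(rows):
    edge_pos = cols / 2 + 5 - SLOPE * r   # sub-pixel edge position
    for c in range(cols):
        image[r, c] = esf(c - edge_pos)

# Down a single column, each row advances the edge phase by 0.1 pixel:
# the column is the ESF sampled ten times more finely than the pixel grid.
fine = image[:, cols // 2]

# Differentiate the oversampled ESF -> LSF, then FT -> MTF.
lsf = np.gradient(fine, SLOPE)
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
```

The resulting MTF is alias-free out to five times the sensor's own sampling frequency, exactly as described.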
This leads to my initial remark: is a step function the ideal test
object? Due to the rapidly decaying frequency content of a step
function I think it isn't.

Rapidly decaying frequency content? Differentiate the edge function and
take the FT of the result! It is a flat spectrum - all frequencies are
present with equal amplitude! (Limited only by the width of the FT
domain you choose to restrict your measurement to!)
A thin bar would be better, but still you'd measure only a line in the
2D OTF, plane if you go for the full 3D OTF.

No, a thin bar has a much more rapidly decaying spectrum - the frequency
spectrum of the line is sinc(pi.a.f) where a is the thickness of the
line. An infinitely thin line has the same flat spectrum as the
differentiated edge, but infinitely small power, making it useless as a
test tool. A
line, or spot, which is similar in size to the actual PSF of the unit
under test is a reasonable compromise because that will have adequate
power to stimulate response *and* have a known frequency response which
can then be compensated for by deconvolution. This is a compromise,
since the deconvolution adds noise, and without changing the position of
the test relative to the samples, the aliased components corrupt the end
result by an unknown amount.
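The contrast between the two spectra is easy to see discretely (hypothetical numpy sketch): a unit impulse has a perfectly flat FFT magnitude, while a finite-width bar's spectrum decays as a sinc with its first zero at f = 1/width.

```python
import numpy as np

N = 256
impulse = np.zeros(N)
impulse[0] = 1.0
bar = np.zeros(N)
bar[:8] = 1.0 / 8          # width-8 bar, normalised to unit area

imp_spec = np.abs(np.fft.rfft(impulse))
bar_spec = np.abs(np.fft.rfft(bar))

# The impulse stimulates every frequency equally...
assert np.allclose(imp_spec, 1.0)
# ...while the bar's response has collapsed by the top of the band.
assert bar_spec[-1] < 0.5 * bar_spec[0]
```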
I suspect the sole reason edges are used is that they are easy to make
at this scale. Nothing wrong with that, but it is a good idea to keep
looking for better test objects.

Well, hopefully you will now understand that this is NOT the reason for
their use. The edge, contrary to your assertion, probes a flat frequency
spectrum once differentiated, extending to infinity - the same as an
infinitely thin line - and also carries significant power, up to 50% of
the total possible. In
addition, being tilted relative to the samples, it enables significant
oversampling without the need to move the test object relative to the
samples.

As I mentioned previously, the use of edges as a reliable test method
has only recently been adopted by ISO (following extensive assessment).
Up till then they were using lines - or spots: it doesn't actually
matter which, the spot just gives all axes simultaneously. As far as I
am aware, the edge was first used in the early 1950s on TV systems as a
visual assessment of frequency response (ie. MTF) but the sloping edge
quantitative computational MTF method was invented in the mid 1980s - at
Malvern!
Sufficiently small circular or spherical objects with known diameter
will do the trick, position can be random.

Position cannot be random *UNLESS* you are only interested in the MTF
below the frequency at which the alias components fall to zero.
Unfortunately, you don't know the MTF of the system under test until you
have measured it, so you cannot tell where this criterion is met and,
more importantly, how significant it is once you have.

As I said earlier, if you are measuring a system where the MTF is
significantly less than the Nyquist limit of your sampling system then
your proposed solution works fine. Not many scanners fall into that
category and virtually no film scanners do.
Please reread the quote, it states the Nyquist rate is related to the
properties input function or signal.

No it doesn't - try reading it yourself! ;-)

Your quote doesn't define a Nyquist frequency or limit nor even refer to
it at all!

What it defines is the minimum sampling frequency required to
unambiguously sample a signal.

By consequence, a given sampling frequency can unambiguously sample a
signal up to a critical bandwidth - and that is the Nyquist limit of
that sampling frequency. Nothing in your quote contradicts that or is
at odds with the use of the term "Nyquist" (though it should perhaps
have a small "n") in this or any other field.
After checking some literature, it seems optics related literature
uses this form. In one reference, Numerical Recipes, the usage was
mixed between yours and this one.

It's been common usage in digital imaging since long before I started
working in the field, which is more than 25 years ago. First usage of
the term "Nyquist Limit" appears to be Claude Shannon's 1948 paper, so I
suspect that is a precedent over any subsequent quote you might find.
Which is how it *should* be done.

So you propose throwing out established test methodology until the
impossible is invented! Get real!
For an incoherent system the required sample spacing is delta_x =
lambda/(4 n sin(alpha)), alpha the half aperture angle, n the
refractive index. For 500nm light, .45 radians aperture you get ~280
nm. In ppi that is ~90,000. For a coherent system the spacing doubles,
close to your value. But in systems like this there is always partial
coherency. BTW a jam-jar-bottom lens of that aperture has the same
bandwidth.

You did read what I wrote in that example, I assume?
So, pray tell those reading so far, what the half angle of an f/2 cone
is?
Just for reference, f/# = 1/(2 sin(alpha)) in your parlance. On my
calculator, a 0.45radian semi-angle is actually 1/(2 x 0.435), or
f/1.15!
Got a clue yet as to why your result is almost double what I stated?
;-)
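The arithmetic in question (a small Python check of f/# = 1/(2 sin(alpha)), hypothetical sketch):

```python
import math

# Half-angle of an f/2 cone: f/# = 1/(2 sin(alpha)) => sin(alpha) = 0.25
alpha_f2 = math.asin(1.0 / (2 * 2.0))
print(round(alpha_f2, 3), "rad")     # about 0.253 rad, not 0.45

# And the f-number of a 0.45 radian semi-angle:
f_num = 1.0 / (2 * math.sin(0.45))
print("f/", round(f_num, 2))         # about f/1.15
```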
ok, that makes it easier to meet the bandwidth of the lens.

No it doesn't!!! The bandwidth of the lens is potentially much greater.
What it does is introduce a more significant bandwidth limit - the large
pixel size - relative to the sampling rate! (or, looking at it the other
way, increases the sampling rate relative to the pixel size that is
limiting the bandwidth).
One advantage of using a reversed iterative deconvolution technique is
that you separate the measurement space from the object (in this case
the PSF) space. That allows you to estimate the object outside the
measured region, very useful to remove blur, but also to handle
'missing' data. In order to do that in noisy conditions you need to put
in as much a priori information as possible, for example the known
geometry of the test object, but also the statistical properties of the
noise, aperture of the lens, and so on.

But it *DOESN'T* overcome the aliasing issue!!! All you are doing by
deconvolution is compensating for the limited spectral characteristics
of the test pattern! The edge (or an infinitely thin line) does not
have a limited spectral content. In other words, what your method does
is to compensate for a poorly chosen test function!
There is a little more involved...
It always does since its Fourier transform is finite.

Having a finite FT does not mean that the PSF extends to adjacent
samples with sufficient power to measure it! If you can't measure it
then it might as well have an unlimited FT! We have already seen that
the diffraction limit of an f/2 optic requires sampling at 50,000ppi, so
a 4000ppi sensor isn't actually going to measure anything significant
beyond the noise on adjacent pixels until the f/# gets pretty high!
Yes, you can do the averaging, the constraints in the deconvolution do
this for you automatically!

Only if the line is straight, not sloped, and then without any
oversampling - so you are stuffed either way.
A common misunderstanding. You *can* retrieve 'lost' spatial frequency
components *and* lost spatial structures at the same time.

Try reading Shannon's original paper - you will find your statement
inconsistent with his law. You can *estimate* what the lost or
corrupted data may have been but you cannot recover it - other than by
an oversampling process, such as the sloping edge produces. Since we
are discussing measurements, not approximations, that is what is
necessary.
There is quite an amount of literature on the topic. For example
'Introduction to inverse problems in imaging' by Bertero & Boccacci is
quite accessible.
But I'm afraid nothing short of me producing a piece of software which
does the job will convince you.

That is probably the first thing you have said I agree with.
When you have done it, submit it to ISO as a replacement for their
clearly inadequate test methodology!
That's not an argument.

It defines the ground rules of the test you intend us to adopt in
preference to the existing ones, so it is extremely pertinent to the
argument!
BTW, it would be interesting to see film scanner MTFs in more than one
directions.

The edge approach gives horizontal and vertical MTF - there is a
possibility that it is completely different from either of these in
other axes, a probability that approaches zero for any commercial
scanner or digital camera.
 
Kennedy McEwen wrote:
Try reading Shannon's original paper - you will find your statement
inconsistent with his law. You can *estimate* what the lost or
corrupted data may have been but you cannot recover it - other than by
an oversampling process, such as the sloping edge produces. Since we
are discussing measurements, not approximations, that is what is necessary.
You think you can measure something without dealing with uncertainties?
That is probably the first thing you have said I agree with.
But you wouldn't trust it, right?
You might order that book though. One could say its central idea is that in
order to interpret an image you always need to estimate ('restore') the
object from the available data. You are welcome to object to that, saying
that one cannot measure anything that way.
When you have done it, submit it to ISO as a replacement for their
clearly inadequate test methodology!
I didn't say it was 'inadequate', merely that the choice to use an edge is
probably more driven by the need to have an easy to manufacture test object
than anything else. As soon as it is easy to obtain bright point or line
objects the standard tests can indeed be updated.

-- hans
 