ALE, the right way to do multi-pass multi-scanning?


ljlbox

Please have a look at ALE, at http://auricle.dyndns.org/ALE/ .

This program claims to improve resolution and/or noise levels by
merging data from multiple scans.

The sample images are certainly impressive, and if you look at the
captions (or at the program's documentation, if you dare!) you'll see
that it's not simply the usual upsample-align-mix-downsample stuff --
or at least, that's definitely not the only possibility the program
gives.

The main downside, at least on my computer, is that ALE is *incredibly
slow*, so I haven't been able to run extensive tests. But I strongly
recommend that people like Don ;-) have a look at it. Just keep in mind
that testing on two 1200dpi 35mm film scans takes some minutes on an
Athlon XP 1600 (or whatever processor I've got), and that I wouldn't
recommend trying with 2400dpi.


Besides, does anybody know of a good (but faster than ALE!) program to do
automatic sub-pixel alignment?

by LjL
 
Please have a look at ALE, at http://auricle.dyndns.org/ALE/ .
....

But I strongly
recommend that people like Don ;-) have a look at it.

Even without the prompting, just seeing "multi-pass multi-scan" perks
my ears up! ;o)
Just keep in mind
that testing on two 1200dpi 35mm film scans takes some minutes on an
Athlon XP 1600 (or whatever processor I've got), and that I wouldn't
recommend trying with 2400dpi.

Sub-pixel alignment is very computationally expensive and time
consuming. Indeed, most programs don't test the whole image but only a
few key points and then extrapolate from those, i.e. apply a transform.

As I discovered when I was writing my own program this actually has
some advantages for the cases where the image is not misaligned
uniformly. And no image I tried has been misaligned uniformly! To my
surprise, even on the CCD array axis there is some misalignment!
Besides, does anybody know of a good (but faster than ALE!) program to do
automatic sub-pixel alignment?

Don't know about faster but I take it you tried HDR Shop?

Anyway, I'm off to check ALE out! Thanks for the tip!

Don.

P.S. Looking at the site, I like the GNU aspect!!! Worried about this,
though, on Known Bugs page:

2D alignment with control points does not work properly.

Anyway, more later...
 
Don ha scritto:
[snip]
[snip]

Besides, does anybody know of a good (but faster than ALE!) program to do
automatic sub-pixel alignment?

Don't know about faster but I take it you tried HDR Shop?

I've tried version 1, but I had the impression that it didn't really
perform alignment, but just merged two already aligned images...?
[snip]

P.S. Looking at the site, I like the GNU aspect!!! Worried about this,
though, on Known Bugs page:

2D alignment with control points does not work properly.

I don't remember what exactly is meant by "control points", but
standard alignment does work, although, as I said, quite slowly.
You specify a percentage of (random?) pixels you want to be used for
alignment with the option "--mc" -- or "--no-mc" if you want all pixels
to be used. "--mc" is for "Monte Carlo alignment", whatever that is.


by LjL
 
Don ha scritto:

[snip]

Sub-pixel alignment is very computationally expensive and time
consuming. Indeed, most programs don't test the whole image but only a
few key points and then extrapolate from those, i.e. apply a transform.

As I discovered when I was writing my own program this actually has
some advantages for the cases where the image is not misaligned
uniformly. And no image I tried has been misaligned uniformly! To my
surprise, even on the CCD array axis there is some misalignment!

ALE can apparently perform a few transformations to align images that
are non-uniformly misaligned. The manual suggests "--euclidean" for
scanners, although it also says that "--euclidean" includes rotations,
and I can't see how a scanner could rotate images...

Anyway. I was also thinking about "rolling my own" aligner, even though
I haven't started yet. But since you have already tried... have you
tried adding artificial marks to the scan?
For example, you could print evenly-spaced marks (thin lines, for
example) on a strip of paper, and place that strip of paper next to the
film.
After you've got the scans, wouldn't alignment be easier and faster if
performed on the marks instead of on the image?
Perhaps even the film holes could be used as marks, if the scanner
doesn't allow scanning an area larger than the film.

Also, can you explain shortly the kind of algorithms you worked with?
I have no clue how "real" programs do it; to begin with, I thought of
just writing a shell script that uses the Netpbm tools.

In particular, these commands sound interesting:
pnmpsnr - measure the difference between two images
pgmedge - edge-detect a PGM image

I thought there was also a command to "measure an image's sharpness" or
something like that, but I can't find it anymore. No idea how it was
supposed to work, assuming I haven't just dreamed it existed.

I would basically just move one of the two images randomly and take a
measurement, either with pnmpsnr or by creating a third image with
pgmedge and seeing how bright it is.
Then I would iterate the process until pnmpsnr gives me a high enough
value, or the brightness of the pgmedge-generated image gets low
enough.

At every iteration, I would reduce the range for random motion, based
on pnmpsnr's or pgmedge's feedback. However, I'm afraid both programs
would give me steadily meaningless results while the images are badly
aligned, and only start working correctly when they are already
almost-well-aligned.
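To make the idea concrete, here's a toy C sketch of what I have in mind: a mean-squared-difference metric standing in for pnmpsnr, and an exhaustive search over small integer shifts standing in for the "move randomly and keep the best" loop. The image size and test pattern are invented, just so it's self-contained:

```c
#define W 16
#define H 16

/* Fill a test pattern, optionally shifted right by `shift` pixels.
   (Hypothetical pattern, purely so the sketch runs on its own.) */
void make_pattern(unsigned char img[H][W], int shift)
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            img[y][x] =
                (unsigned char)((7u * (unsigned)(x - shift) + 5u * (unsigned)y) & 255u);
}

/* Mean squared difference between a and b shifted by (dx, dy); this plays
   the role pnmpsnr would play in the shell-script version.  Shifted pixels
   that fall outside the frame are skipped. */
double msd(unsigned char a[H][W], unsigned char b[H][W], int dx, int dy)
{
    double sum = 0.0;
    int n = 0;
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int xs = x + dx, ys = y + dy;
            if (xs < 0 || xs >= W || ys < 0 || ys >= H) continue;
            double d = (double)a[y][x] - (double)b[ys][xs];
            sum += d * d;
            n++;
        }
    return n ? sum / n : 1e30;
}

/* Try every shift within `radius` and keep the one with the lowest
   difference -- the deterministic cousin of "move randomly, keep the move
   if the metric improves, shrink the range". */
void best_shift(unsigned char a[H][W], unsigned char b[H][W],
                int radius, int *bx, int *by)
{
    double best = 1e30;
    for (int dy = -radius; dy <= radius; dy++)
        for (int dx = -radius; dx <= radius; dx++) {
            double s = msd(a, b, dx, dy);
            if (s < best) { best = s; *bx = dx; *by = dy; }
        }
}
```

This only does whole-pixel shifts, of course; the sub-pixel part would need interpolation on top.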


by LjL
 
(e-mail address removed) ha scritto:
[snip]

Anyway. I was also thinking about "rolling my own" aligner, even though
I haven't started yet. But since you have already tried... have you
tried adding artificial marks to the scan?
For example, you could print evenly-spaced marks (thin lines, for
example) on a strip of paper, and place that strip of paper next to the
film.
After you've got the scans, wouldn't alignment be easier and faster if
performed on the marks instead of on the image?

[snip]

Oh, wait, I suppose I forgot that when scanning transparencies the
scanner only scans, well, transparencies :-)

So, make that using a strip of *film* and marking that. Bit more
complicated, though.

However, my film holder already does have tiny holes (though not evenly
spaced, but that's not essential) next to the film! I wonder what
they're there for, since the Epson software doesn't do multi-pass
scanning at all.

(Actually, my film holder has a lot of strange features I don't
understand...)

I suppose that lines -- or even better, crosshairs -- would still be
better, though.


by LjL
 
SNIP
I've tried version 1, but I had the impression that it didn't
really perform alignment, but just merged two already
aligned images...?

That's my experience as well.

SNIP
I don't remember what exactly is meant with "control
points", but standard alignment does work, although, as
I said, quite slowly.

Instead of using the program's automatic selection, a user-defined
selection of points to align (can be useful if there are large
out-of-focus areas).
You specify a percentage of (random?) pixels you want to
be used for alignment with the option "--mc" -- or "--no-mc"
if you want all pixels to be used. "--mc" is for "Monte Carlo
alignment", whatever that is.

Monte Carlo methods refer to a certain random selection of points (see
<http://en.wikipedia.org/wiki/Monte_Carlo_method> for some background,
and <http://mathworld.wolfram.com/MonteCarloMethod.html> for a more
formal explanation and useful literature references).
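In this context it presumably means estimating the difference metric from a random subset of pixels instead of all of them. A minimal C sketch of the idea (the LCG and the sample count are arbitrary choices of mine, not anything from ALE):

```c
#include <stddef.h>

/* Tiny deterministic LCG, so the "random" sample is reproducible. */
unsigned lcg_next(unsigned *state)
{
    *state = *state * 1664525u + 1013904223u;
    return *state;
}

/* Estimate the mean squared difference between two images from `nsamples`
   randomly chosen pixels instead of all `npix` of them.  Sampling, say,
   10% of the pixels usually tracks the full metric closely at a tenth of
   the cost -- which would be the point of "--mc". */
double msd_monte_carlo(const unsigned char *a, const unsigned char *b,
                       size_t npix, size_t nsamples, unsigned seed)
{
    double sum = 0.0;
    for (size_t i = 0; i < nsamples; i++) {
        size_t k = lcg_next(&seed) % npix;
        double d = (double)a[k] - (double)b[k];
        sum += d * d;
    }
    return sum / (double)nsamples;
}
```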

Bart
 
I've tried version 1, but I had the impression that it didn't really
perform alignment, but just merged two already aligned images...?

No, it does align because alignment is "automatic". What I mean by
this is that the original HDR Shop algorithm has the alignment
"built-in".

However, I suspect the problem may be 8-bit depth. Correlation
algorithms depend on very small differences between pixels (in
floating point math). So 8-bit may not offer enough elbow room. But
I'm only guessing here...
I don't remember what exactly is meant with "control points", but
standard alignment does work, although, as I said, quite slowly.
You specify a percentage of (random?) pixels you want to be used for
alignment with the option "--mc" -- or "--no-mc" if you want all pixels
to be used. "--mc" is for "Monte Carlo alignment", whatever that is.

I'll have to try it this weekend...

Don.
 
Anyway. I was also thinking about "rolling my own" aligner, even though
I haven't started yet. But since you have already tried... have you
tried adding artificial marks to the scan?

Yes, I started out by writing a "global" alignment program. However,
the problem was this takes a very long time to run! It's virtually
impossible to do the ~120 MB files I'm working with.

So, I limited the alignment to a 500x500 or 1000x1000 pixel square in
the middle of the image. To my surprise I then discovered that the
corners were out of alignment - and often in opposite directions!!!

So, I then put 4 anchor points in all corners with variable offset
from the edge to try out different settings. This improved things but
then the middle was often out of alignment! Aaaarrrggg!

So, the next step is to overlay a matrix of points across the whole
image and align based on that. The problems are many... For one, it
takes a very long time to correlate all those points! But first of all
I have to determine the minimum distance between individual points.

In other words, I need to establish the rate of misalignment. This is
not easy as misalignment is different with each scan! So I'll have to
average...

What I find most frustrating is that the firmware does *not* allow
scanning each line individually. If that were implemented there would
be no need for alignment!

I could then not only do multiple exposures "in situ" but also
multiple focus settings! And then advance the stepper motor to the
next line! That would be fantastic! Not only would the dynamic range
be perfect but I could have every point in focus (my cardboard mounted
Kodachromes are quite warped).
For example, you could print evenly-spaced marks (thin lines, for
example) on a strip of paper, and place that strip of paper next to the
film.
After you've got the scans, wouldn't alignment be easier and faster if
performed on the marks instead of on the image?
Perhaps even the film holes could be used as marks, if the scanner
doesn't allow scanning an area larger than the film.

The problem, as I hinted above, is that the misalignment is in all
directions. So you can't rely on edges alone. Well, at least that's
what happens on my Nikon LS-50.
Also, can you explain shortly the kind of algorithms you worked with?
I have no clue how "real" programs do it; to begin with, I thought of
just writing a shell script that uses the Netpbm tools.

In particular, these commands sound interesting:
pnmpsnr - measure the difference between two images
pgmedge - edge-detect a PGM image

I didn't know either how the real programs did this. First, I tried
figuring it out for myself and "invented" bilinear interpolation, only
to find out about it later.

Next I spent weeks on the Net and did a lot of reading/learning. It's
all quite complicated and I didn't want to spend too much time
digressing but essentially it involves two steps: correlation and
interpolation.

Correlation is a mathematical process for determining the
"similarity" of two items. It involves floating-point math, which is
why it takes such a long time.

Once the sub-pixel (mis)alignment is established, interpolation is used
to shift the images. There are different methods; the most common
are bilinear, biquadratic and bicubic. All of them cause blurring:
bilinear causes the most, but it's the fastest, and bicubic causes the
least, but it's very time consuming.
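For what it's worth, the bilinear case is only a few lines. A toy C version for a single-channel 8-bit image (my own sketch, not my actual program): to shift an image by, say, +0.25 pixel horizontally, you resample every output pixel at (x + 0.25, y):

```c
/* Bilinear sample of a w x h single-channel 8-bit image at fractional
   position (x, y).  The four neighbouring pixels are blended with
   weights given by the fractional parts of the coordinates. */
double bilinear_sample(const unsigned char *img, int w, int h,
                       double x, double y)
{
    int x0 = (int)x, y0 = (int)y;
    if (x0 > w - 2) x0 = w - 2;   /* clamp so x0+1 and y0+1 stay in range */
    if (y0 > h - 2) y0 = h - 2;
    if (x0 < 0) x0 = 0;
    if (y0 < 0) y0 = 0;
    double fx = x - x0, fy = y - y0;
    double p00 = img[y0 * w + x0],       p10 = img[y0 * w + x0 + 1];
    double p01 = img[(y0 + 1) * w + x0], p11 = img[(y0 + 1) * w + x0 + 1];
    return p00 * (1 - fx) * (1 - fy) + p10 * fx * (1 - fy)
         + p01 * (1 - fx) * fy      + p11 * fx * fy;
}
```

The blurring comes from exactly that weighted averaging: every output pixel is a mix of four input pixels.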
I thought there was also a command to "measure an image's sharpness" or
something like that, but I can't find it anymore. No idea how it was
supposed to work, assuming I haven't just dreamed it existed.

I'm not familiar with Netpbm tools but I'm sure there are many Linux
libs for this. I work on a Windows machine and since this is "labor of
love" I wanted to do everything myself rather than use existing
libraries. Of course, that's very time consuming...
I would basically just move one of the two images randomly and take a
measurement, either with pnmpsnr or by creating a third image with
pgmedge and seeing how bright it is.
Then I would iterate the process until pnmpsnr gives me a high enough
value, or the brightness of the pgmedge-generated image gets low
enough.

I did correlation in two steps: full pixel alignment, and then
sub-pixel alignment. In each case I specify a "radius" where to
search.

For example, I know from empirical data that my scans are never more
than 1 pixel apart, so I specify a radius of 2 pixels, just in case.
This means I have to compare each pixel to 5 x 5 = 25 positions in the
other image!

Once that's done, I do sub-pixel alignment. After a lot of empirical
testing I settled on 4 divisions per axis (quarter-pixel steps, giving 7
offsets per axis). And that means 7 x 7 = 49 sub-pixel positions to
compare each pixel to!
At every iteration, I would reduce the range for random motion, based
on pnmpsnr's or pgmedge's feedback. However, I'm afraid both programs
would give me steadily meaningless results while the images are badly
aligned, and only start working correctly when they are already
almost-well-aligned.

That's why I do the rough full-pixel alignment first.

In theory, one should compare each pixel in one image to *all* pixels
in the other image, and do that at every sub-pixel division!

Not enough time in this Universe to do that ;o) so I settled on the
above process.
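The two stages reduce to something like this in one dimension (a toy C sketch, not my actual program: signal b is a resampled at a quarter-pixel offset, and the 7-candidate search -- 7 x 7 = 49 in two dimensions -- recovers it):

```c
#define N 32

/* Linear resampling of a 1-D signal at fractional position x. */
double sample_linear(const double *s, double x)
{
    int i = (int)x;
    double f = x - i;
    return s[i] * (1.0 - f) + s[i + 1] * f;
}

/* The second stage in 1-D: after integer alignment, compare the reference
   resampled at each quarter-pixel offset from -0.75 to +0.75 (7 candidates
   per axis, hence 49 in 2-D) against the other signal, and keep the offset
   with the smallest squared error. */
double best_subpixel_shift(const double *a, const double *b)
{
    double best_d = 0.0, best_err = 1e30;
    for (int step = -3; step <= 3; step++) {
        double d = step * 0.25;
        double err = 0.0;
        for (int k = 1; k < N - 2; k++) {   /* stay inside the signal */
            double diff = sample_linear(a, k + d) - b[k];
            err += diff * diff;
        }
        if (err < best_err) { best_err = err; best_d = d; }
    }
    return best_d;
}
```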

Don.
 
Hi Don,
So, I then put 4 anchor points in all corners with variable offset
from the edge to try out different settings. This improved things but
then the middle was often out of alignment! Aaaarrrggg!
Was it XY alignment or also drift along Z of either the film or the axial
lens position?
So, the next step is to overlay a matrix of points across the whole
image and align based on that. The problems are many... For one, it
takes a very long time to correlate all those points! But first of all
I have to determine the minimum distance between individual points.

In other words, I need to establish the rate of misalignment. This is
not easy as misalignment is different with each scan! So I'll have to
average...
Sounds like you'll have to do a rubber sheet transformation to align...
What I find most frustrating is that the firmware does *not* allow
scanning each line individually. If that were implemented there would
be no need for alignment!
Well, one point here for Minolta + Silverfast: although I can't be sure it
does exactly that, from the sound, reduced scan speed and improved SNR I
gather that indeed the Minolta can do line averaging. Not with the Minolta sw
though.
I could then not only do multiple exposures "in situ" but also
multiple focus settings! And then advance the stepper motor to the
That would be nice indeed, but I think the current Z-stepper motors are way
too slow to do XZ-Y scans. Could be fixed with a construction like in a CD
player, or with a piezo driver.
Correlation is a mathematical process for determining the
"similarity" of two items. It involves floating-point math, which is
why it takes such a long time.
To speed up correlation type operations, have a look at http://www.fftw.org/

Good luck with your nice project, Hans
 
Hi Hans,
Was it XY alignment or also drift along Z of either the film or the axial
lens position?

It was both! That surprised me because, while I can understand that
the stepper motor is not very accurate at this level of precision, the
distance between the CCD cells remains constant!?

The only explanation I have is that the whole assembly "wiggles" as it
travels i.e. sometimes left side advances more than right and vice
versa. And that means the angle of the CCD array changes constantly in
relation to the object being scanned.
Sounds like you'll have to do a rubber sheet transformation to align...

Exactly! I could just put an arbitrary mesh over it but because the
correlation is so computationally demanding I want to keep this to a
minimum. However, determining the resolution of this mesh is not easy
as explained above.

Anyway, it just takes too long. I may get back to this, just for fun,
but for now I only want to finally finish scanning so the 4 anchor
points is what I'm going with.
Well, one point here for Minolta + Silverfast: although I can't be sure it
does exactly that, from the sound, reduced scan speed and improved SNR I
gather that indeed the Minolta can do line averaging. Not with the Minolta sw
though.

Every single-pass multi-scan scanner does that. The problem is all
they do is take the *same* exposure repeatedly and then average.

What I want to do is take (only two) *different* exposures and tone
map! That's far better than multi-scanning because you're actually
resolving detail directly. And considerably faster!

All multi-pass multi-scanning does is simply keep resampling in order
to reduce noise. It does not eliminate noise, just improves the signal
to noise ratio. Indeed you can get very similar results (and much
faster!) by simply applying a small amount of Gaussian Blur to the
noisy part of the image. The result of this blurring is virtually
indistinguishable from multi-scanning!

What I want to do actually eliminates noise. Well, by boosting
exposure the noise is drowned and becomes immeasurably small.
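A minimal sketch in C of how I imagine the merge step would go (the gain and the shadow threshold are invented numbers, and a real tone mapper would blend smoothly across the threshold rather than switch hard):

```c
#define GAIN 4.0            /* boosted scan exposed 4x longer (made up)   */
#define SHADOW_LIMIT 64.0   /* below this, trust the boosted scan (made up) */

/* Merge one pixel from a normal-exposure scan and a boosted scan:
   rescale the boosted value back to the normal scale, then take the
   shadows from it (where its signal-to-noise ratio is GAIN times
   better) and everything else from the normal scan. */
double merge_pixel(double normal, double boosted)
{
    double boosted_scaled = boosted / GAIN;
    return (normal < SHADOW_LIMIT) ? boosted_scaled : normal;
}
```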
That would be nice indeed, but I think the current Z-stepper motors are way
too slow to do XZ-Y scans. Could be fixed with a construction like in a CD
player, or with a piezo driver.

No, I'm not talking about real time focus adjustments during the scan.
I would do it "batch" by taking multiple scans with different focus.

What I would do first, is make a "focus map" (my term) by sampling
focus in a way similar to the above "rubber sheet", for example, at 12
points in the image (assuming a 3 x 4 image ratio). Once that is
established I would then start the actual scan by performing
several scans at each scan line to get each image point in focus.
After that those multiple images are merged and every single pixel
would be in focus regardless of how warped the film is!!!
To speed up correlation type operations, have a look at http://www.fftw.org/

Dialing as we speak! :o)
Good luck with your nice project, Hans

Thanks! It's a lot of fun but unfortunately I just wanted to finish
scanning because I'm strapped for time. But I do hope to go back to
this because it's really neat. I have the developer kit from Nikon but
it's very high level. What I really need to do is disassemble the
firmware and see if I can modify it. But that's a whole new level of
complexity: how to get the firmware out, is it encrypted, what
microcontroller is used, etc, etc...

Don.
 
Hi Don,
It was both! That surprised me because, while I can understand that
the stepper motor is not very accurate at this level of precision, the
distance between the CCD cells remains constant!?

The only explanation I have is that the whole assembly "wiggles" as it
travels i.e. sometimes left side advances more than right and vice
versa. And that means the angle of the CCD array changes constantly in
relation to the object being scanned.
I don't know about the Nikon, but if you see how flimsy the Minolta tray is
I'm surprised it does the job at all. Asking for a reproducibility of 1 pixel
over 5000 is probably too much, and for regular usage it is also not necessary.
Exactly! I could just put an arbitrary mesh over it but because the
correlation is so computationally demanding I want to keep this at
minimum. However, determining the resolution of this mesh is not easy
as explained above.
Just checked, with FFTW on a good PC you could do a 4096x6144 one-channel
cross correlation in about 7s.
Every single-pass multi-scan scanner does that. The problem is all
they do is take the *same* exposure repeatedly and then average.
ok. You'd say that with an LED light source it is very doable to modulate the
intensity.
What I want to do is take (only two) *different* exposures and tone
map! That's far better than multi-scanning because you're actually
resolving detail directly. And considerably faster!

All multi-pass multi-scanning does is simply keep resampling in order
to reduce noise. It does not eliminate noise, just improves the signal
to noise ratio. Indeed you can get very similar results (and much
Well, provided you start with a sufficient bit depth, that is quite effective
for getting more dynamic range.
faster!) by simply applying a small amount of Gaussian Blur to the
noisy part of the image. The result of this blurring is virtually
indistinguishable from multi-scanning!
If you were in effect oversampling, which quite a few people claim you are
doing with a 5400dpi scanner, yes, Gaussian blurring with a small sigma can
replace scan line averaging without losing detail. Because that works over
two dimensions, it can be even more effective than 2x averaging. Still, I'm
not so sure these people are right.
What I want to do actually eliminates noise. Well, by boosting
exposure the noise is drowned and becomes immeasurably small.




No, I'm not talking about real time focus adjustments during the scan.
I would do it "batch" by taking multiple scans with different focus.
Doing an XZ-Y scan (moving the focus first, then the Y) does exactly that,
without the need to reposition over Y. However, with the current slow
focusing motors this is IMO not feasible, so it's going to be the rubber
sheet technique.
What I would do first, is make a "focus map" (my term) by sampling
focus in a way similar to the above "rubber sheet", for example, at 12
points in the image (assuming a 3 x 4 image ratio). Once that is
established I would then start the actual scan by performing
several scans at each scan line to get each image point in focus.
After that those multiple images are merged and every single pixel
would be in focus regardless of how warped the film is!!!

Sounds doable...


Thanks! It's a lot of fun but unfortunately I just wanted to finish
scanning because I'm strapped for time. But I do hope to go back to
this because it's really neat. I have the developer kit from Nikon but
it's very high level. What I really need to do is disassemble the
firmware and see if I can modify it. But that's a whole new level of
complexity: how to get the firmware out, is it encrypted, what
microcontroller is used, etc, etc...
That's probably more work than doing the rubber sheet mapping, maybe
so much it can't be done in evening hours. In addition, while you work
new scanner models might come out, maybe one which skips scanning
and images straight onto a large CCD.


Cheers, Hans
 
HvdV said:
Hi Don,

[snip]

If you were in effect oversampling, which quite a few people claim you
are doing with a 5400dpi scanner, yes, Gaussian blurring with a small
sigma can replace scan line averaging without losing detail. Because
that works over two dimensions, it can be even more effective than 2x
averaging. Still, I'm not so sure these people are right.

I'm with you in not being so sure.
Even if the 5400dpi (or whatever) are oversampled, if you look at the
"slanted edge" thread I started some weeks ago, you'll find some clues
that, maybe, even an oversampled scan can offer more *real resolution*
than a perfectly sampled one.

The resolution is just hard to see by eye because of the shape the MTF
gets due to the oversampling, but this can be undone in software.
If noise is significant, however, it will be greatly amplified by the
process; from this follows, IMHO, that multi-sampling can still be a
very valid technique.

Gaussian blurring can probably be a solution if you aren't going to try
and get a better MTF ( = more eye-visible resolution), since you will
just blur frequencies that aren't really visible anyway... but those
frequencies *are* there, they *can* be restored (though not very easily
in my recent experience!), and they get *destroyed* by blurring.

by LjL
 
Don said:
[snip]
To speed up correlation type operations, have a look at http://www.fftw.org/

Dialing as we speak! :o)

It's the library I'm currently using for my slanted edge stuff.
From what I've gathered, it's the fastest general-purpose (i.e. not
machine specific) FFT library around (and certainly the fastest
open-source FFT library).

if you have a
double Data[SIZE];

containing your real (image) data, you just do (after #include <fftw3.h>)

fftw_complex* DFT=fftw_malloc((SIZE/2+1)*sizeof(fftw_complex));
fftw_plan Plan=fftw_plan_dft_r2c_1d(SIZE, Data, DFT, FFTW_ESTIMATE);
fftw_execute(Plan);

and you get your transform in DFT, with DFT[i][0] being the real part
and DFT[i][1] the imaginary part of the i-th bin; i=0 is the DC
frequency.

Gets somewhat more difficult with multidimensional transforms, due to
the way C deals with multidimensional arrays, but still not too hard --
it's all quite clearly put in the manual anyway.
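The reason an FFT library helps with alignment at all: cross-correlation becomes a simple product in the frequency domain, corr = IFFT(FFT(a) * conj(FFT(b))). Here's a brute-force C sketch that computes the same circular cross-correlation the FFT route would, just in O(N^2) instead of O(N log N) -- the position of the peak is the (circular) shift between the two signals. Signal length and test data are made up:

```c
#define M 16

/* Direct circular cross-correlation of two length-M signals; returns the
   shift at which the correlation peaks.  FFTW would produce the same
   numbers via forward transforms, a conjugate multiply, and an inverse
   transform, only much faster for large M. */
int circular_corr_peak(const double *a, const double *b)
{
    int best = 0;
    double best_c = -1e30;
    for (int shift = 0; shift < M; shift++) {
        double c = 0.0;
        for (int k = 0; k < M; k++)
            c += a[k] * b[(k + shift) % M];
        if (c > best_c) { best_c = c; best = shift; }
    }
    return best;
}
```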

by LjL
 
Hi Hans,
I don't know about the Nikon, but if you see how flimsy the Minolta tray is
I'm surprised it does the job at all. Asking for a reproducibility of 1 pixel
over 5000 is probably too much, and for regular usage it is also not necessary.

Oh, I realize that and I wasn't really complaining. Given that we are
talking about a consumer product the level of precision at 4000 dpi is
actually quite amazing! I mean, we're talking less than a pixel
misalignment (in most cases) and that's pretty good!
Just checked, with FFTW on a good PC you could do a 4096x6144 one-channel
cross correlation in about 7s.

My program is a bit more complicated and it's also a question of
precision. For example, just going down to a quarter of a pixel ends up
with 49 iterations (and that's without full-pixel correlation, which I do
separately for efficiency). So, things add up fairly quickly. One way I
speed it all up is by not doing the whole picture but only a window in
each corner. As a side effect that actually improves accuracy because
- as I mentioned earlier - different parts of the image move in
different directions. So if I were to do the full image I'd only end
up with an average value.
ok. You'd say that with an LED light source it is very doable to modulate the
intensity.

That's what I thought at first, but the exposure is done by simply
increasing time. The LED intensity stays constant.
If you were in effect oversampling, which quite a few people claim you are
doing with a 5400dpi scanner, yes, Gaussian blurring with a small sigma can
replace scan line averaging without losing detail. Because that works over
two dimensions, it can be even more effective than 2x averaging. Still, I'm
not so sure these people are right.

The conventional wisdom is that 4000 dpi is the limit for film in
normal usage. Going up to 5400 dpi may extract more data but only if
the image was taken with a tripod, high resolution film and perfect
exposure. On the other hand, talking purely in terms of numbers, I
read that to really resolve everything one needs about 10,000 dpi.
That's because grain size is not uniform.

Regarding Gaussian Blurring, I did not spend much time on
multi-scanning because Nikon disabled the functionality in firmware
for marketing reasons... :-/ However, I ran some tests by doing
multi-pass multi-scanning (with alignment) and even going up to 16x
sampling I could not detect any improvement over Gaussian Blur.

One important point, though. Gaussian Blur needs to be applied in
steps i.e. different dynamic range "bands" need different amounts of
Gaussian Blur, i.e. darker areas need more. I ran some tests by using 4
8-bin steps (on a 256-bin histogram) covering the range from 0-31. The
image, however, was 16-bit, of course. But I have to add that the
"evaluation" was based purely on subjective perception.
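In C, the band-to-blur mapping amounts to something like this (the band boundaries come from my test above; the sigma values are invented for illustration):

```c
/* Map a shadow pixel value (0-255 scale) to a Gaussian blur sigma:
   darker histogram bands get a stronger blur, four 8-bin bands
   covering 0-31, nothing above that. */
double band_sigma(int value)
{
    if (value < 8)  return 2.0;
    if (value < 16) return 1.5;
    if (value < 24) return 1.0;
    if (value < 32) return 0.5;
    return 0.0;   /* above the shadow range: no blur */
}
```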
That's probably more work than doing the rubber sheet mapping, maybe
so much it can't be done in evening hours. In addition, while you work
new scanner models might come out, maybe one which skips scanning
and images straight onto a large CCD.

Exactly! Together with my constant lack of time, I'll probably never
get around to it, but that's the sort of thing I love to do for fun.
:-)

Don.
 
It's the library I'm currently using for my slanted edge stuff.

That's good! So, if I have questions I know who to ask! ;o)
From what I've gathered, it's the fastest general-purpose (i.e. not
machine specific) FFT library around (and certainly the fastest
open-source FFT library).

if you have a
double Data[SIZE];

containing your real (image) data, you just do (after #include <fftw3.h>)

fftw_complex* DFT=fftw_malloc((SIZE/2+1)*sizeof(fftw_complex));
fftw_plan Plan=fftw_plan_dft_r2c_1d(SIZE, Data, DFT, FFTW_ESTIMATE);
fftw_execute(Plan);

and you get your transform in DFT, with DFT[i][0] being the real part
and DFT[i][1] the imaginary part of the i-th bin; i=0 is the DC
frequency.

Gets somewhat more difficult with multidimensional transforms, due to
the way C deals with multidimensional arrays, but still not too hard --
it's all quite clearly put in the manual anyway.


Thanks Lorenzo! Scheduled for this weekend. Time permitting, as
usual... :-(

BTW, I still haven't looked at ALE!!! The days are just *too short*!

Do you have a library to make the days, oh... 30 hours long? ;o)

Don.
 
Hi Lorenzo,
I'm with you in not being so sure.
Even if the 5400dpi (or whatever) are oversampled, if you look at the
"slanted edge" thread I started some weeks ago, you'll find some clues
that, maybe, even an oversampled scan can offer more *real resolution*
than a perfectly sampled one.
Sorry, didn't follow that thread -- though I think it is great that you put
the stuff on http://sourceforge.net/projects/slantededge. Maybe an idea to
post some results there?
The resolution is just hard to see by eye because of the shape the MTF
gets due to the oversampling, but this can be undone in software.
If noise is significant, however, it will be greatly amplified by the
process; from this follows, IMHO, that multi-sampling can still be a
very valid technique.
Agreed, reducing the sensor noise and knowing the 2D-OTF one could improve on
the scanner properties. The question however is whether this is worthwhile
when the largest culprits are the film properties (MTF and grain noise), and
that is ignoring the camera lens.
Gaussian blurring can probably be a solution if you aren't going to try
and get a better MTF ( = more eye-visible resolution), since you will
just blur frequencies that aren't really visible anyway... but those
That's the point people are making in the endless 8MP vs film threads: there
might be information in the 100 cycles/mm (about the limit of the scanner)
region, but it is so weak it doesn't contribute to the perceived quality.
frequencies *are* there, they *can* be restored (though not very easily
in my recent experience!), and they get *destroyed* by blurring.
Indeed, I've clearly seen details in the 50 cycles/mm range in my scans (seen
from the spectrum of a tiled rooftop), and I don't believe the overall MTF of
the total system drops so fast there isn't anything worthwhile above that.

-- Hans
 