Now that's a P.S. the way I like them ;-)
Thank you very much, Don, for having taken the time to answer my
question in detail.
You're most welcome. As you could see from that P.S. (and this loooong
message) it's the type of thing I also like to learn more about.
I use the scanners in my normal B&W workflow, where I still work with
film (and hope to do so for a long time). I am thus looking for every
way to squeeze the maximum out of those machines.
I'm just scanning my slide, negative and print collection, but I also
want to achieve maximum quality. That's because I archive the original
scans and then edit a copy to create processed results for viewing on
a monitor (I don't print).
Therefore, I scan "raw". If you check this group's archives you'll find
many messages on the subject. Basically it means scanning at the maximum
resolution and color depth the scanner is capable of, without using any
of the editing features of the scanner software. In other words, get
"uncontaminated" data directly from the scanner. The only exception is
hardware-based features such as ICE, because you can't do those later.
Such a "raw" scan is often called a "digital negative".
In theory a pure "raw" scan should also be in gamma 1.0 but, like most
people, I don't do that because I work in gamma 2.2 so the first step
would be a conversion to 2.2 anyway. But it certainly doesn't hurt to
scan in gamma 1.0.
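If you do scan in gamma 1.0, the conversion to 2.2 is simple enough to
do yourself later. Here's a minimal sketch in Python/NumPy (the function
name and the 16-bit assumption are mine, purely for illustration):

import numpy as np

def linear_to_gamma22(raw, bit_depth=16):
    # raw: integer array straight from a linear (gamma 1.0) scan
    max_val = 2 ** bit_depth - 1
    linear = raw.astype(np.float64) / max_val   # normalize to 0..1
    encoded = linear ** (1.0 / 2.2)             # apply the gamma 2.2 encoding
    return np.round(encoded * max_val).astype(raw.dtype)

# A linear mid-grey lands at roughly 73% of full scale after encoding
print(linear_to_gamma22(np.array([32768], dtype=np.uint16)))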
I've been preoccupied by the recently resurfaced subject of "grain
alias" (or whatever one wants to call it) for a long time now, and my
approach is clearly to try to capture the photographic grain as well as
possible. This means the highest possible resolution.
That was one of my first concerns too. The trouble is that grain is a
very "elastic" thing.
Nominally a 4000 dpi scanner should get everything on the film. The
only exception may be a perfectly exposed shot taken with a tripod
and high quality film. The key word here is "may". It also depends on
scanner optics, etc.
However, not all grain is the same size. Every film has some grain
which is very small. The consensus seems to be that to capture every
piece of grain, regardless of film rating, you need a scanner with
about 10,000 dpi, in other words a drum scanner.
But there's a catch. At this level of precision the scanner no longer
sees the film as flat. The film also has depth (i.e. thickness), and
the grain is distributed through it, which is why one often speaks of
"grain clouds".
Given all that, 4000 dpi is most likely enough resolution, but 5400
doesn't hurt, although scanner optics or the shot itself may make the
difference hard or even impossible to see.
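Just to put rough numbers on it (plain arithmetic, nothing
scanner-specific):

# Sample pitch on the film for a given scan resolution (1 inch = 25,400 um)
for dpi in (4000, 5400, 10000):
    print(dpi, "dpi ->", round(25400.0 / dpi, 2), "um per sample")

# 4000 dpi -> 6.35 um, 5400 dpi -> 4.7 um, 10000 dpi -> 2.54 um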
Finally, as I like to say, a scanned image is really "a picture of a
picture". When one scans film one is taking a picture of this film,
not of the original subject! (That's why I went digital, to avoid this
intermediate step, although lately I've been tempted to go back and try
these new low-grain films specifically made for scanning.)
At some point I was hoping that if they use different CCDs for the three
wavelengths, and if they don't compensate for the different locations
of those arrays, there might be a way to interleave the pixels of
the 3 channels so as to get a higher monochrome resolution...
But my feet must have lost contact with the ground by now, and I should
try to get down again.
I know! From time to time, I do a "reality check" and ask myself if
I'm going too far in this quest for perfection and maximum quality.
It's good to consider that and make reasonable compromises.
But speaking of B&W film, two things. I scan B&W film in color because
different LEDs (in my Nikon scanner) sample different wavelengths.
This makes it easier to make corrections later. Which leads me to the
second thing: I noticed that the 3 channels are *not* in sync i.e.
there's a slight sub-pixel misalignment! Anyway, see below another
interesting PS with more wisdom from Kennedy about this! ;o)
In brief, things working against perfect channel alignment are:
- residual scanner head motion
- finite misalignment of LED sources on the optic axis.
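For what it's worth, here is a toy sketch in Python/NumPy (not my actual
code, just the idea) of one way to measure that offset between two
channels using phase correlation; true sub-pixel precision would need
interpolation around the correlation peak, which I've left out:

import numpy as np

def channel_shift(ref, ch):
    # Estimate the (row, col) offset between two channels of the same scan
    # using phase correlation (whole-pixel precision only).
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(ch))
    cross /= np.abs(cross) + 1e-12            # keep only the phase
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # offsets past the halfway point wrap around to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))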
I am further interested in the geometry of the array because I would
like to do my own research/modeling of how the image of the photographic
grain is being distorted when it gets in the neighbourhood of the
scanner's resolution.
The CCD array is linear, which means the sensor elements sit in a
single row, one next to the other. The only exception is the twin-array
CCDs, as mentioned earlier.
You probably know this already, but that's not the same as what
happens in digital cameras where the array is two dimensional. There
are a number of different ways these chips work but the most common is
the so-called "Bayer pattern". This means that each "software" (image)
pixel is generated from 4 "hardware" pixels (samples) arranged in a
regular square pattern (1 red, 1 blue and 2 (!) green). There are
variations on this theme (hexagonal pattern, pixel in pixel, etc) but
that's the basic principle.
Anyway, the reason there are two green pixels is that humans are
more sensitive to green than to the other two colors. The human perception
ratio is (roughly) red 30%, green 59% and blue 11%. These 4 samples
are then mathematically combined to create a "pixel". But since they
don't occupy the same space (obviously) the result is "blurry". Our
eyes interpret such results as "smooth". Also, don't forget that there
is much more post-processing in a digicam (usually involving
interpolation!) than in a scanner.
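Just to illustrate that weighting with made-up numbers (real demosaicing
reconstructs full RGB for every pixel by also interpolating from
neighbouring blocks, so this only shows the green-averaging and the
perception ratio):

# One RGGB block: four "hardware" samples (made-up raw values)
r, g1, g2, b = 200, 150, 154, 90

g = (g1 + g2) / 2.0                     # the two green samples averaged
luma = 0.30 * r + 0.59 * g + 0.11 * b   # rough human perception weighting
print((r, g, b), luma)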
So scanners nominally produce a much better result, but (as I mentioned
above) the problem is that scanners sample the film's impression of the
real world while digicams sample the real world directly!
As for the details of the optical path, I'm just curious about the kind of
optics they use. Ordinary circular lenses, or maybe rather long
cylindrical ones (since the info to be reproduced is basically
one-dimensional)? What shapes are used for the diaphragm, if there is
one at all? And so on, all this having an impact on the way tiny signals
like grain are being handled.
This is something Kennedy would be perfect for!
Although I'm a software person too, I'm not too eager to write something
that goes that deep down the layers towards the hardware. But maybe some
kind of smart Photoshop plugin based on exact knowledge of what's in my
(as raw as possible) files?
I actually did try to go that deep and even got the software developer
kit (SDK) from Nikon. The trouble is the SDK is still far too high a
level for my taste.
Anyway, I ended up writing my own scanner program, but my concern was
more about the dynamic range, specifically as it relates to
Kodachromes. It's all finished now: I scanned each slide twice
(multi-pass) and then combined the two scans after sub-pixel alignment
and my own version of "tone mapping" (I actually extract the
characteristic curve of each of the two scans, which gives me maximum
flexibility).
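The curve extraction itself is easy to sketch, though; something along
these lines (a toy Python/NumPy version, nowhere near the real thing):

import numpy as np

def characteristic_curve(scan_a, scan_b, levels=256):
    # scan_a, scan_b: two aligned scans of the same frame (single channel).
    # For every intensity level in scan_a, average the co-located pixels of
    # scan_b; the result is a lookup table mapping a's tone scale onto b's.
    a = (scan_a.astype(np.float64) / scan_a.max() * (levels - 1)).astype(int)
    curve = np.full(levels, np.nan)
    for level in range(levels):
        mask = (a == level)
        if mask.any():
            curve[level] = scan_b[mask].mean()
    # levels that never occur stay NaN; a real version would interpolate them
    return curve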
But what I really wanted to do was perform this "twin scan" in a single
pass in firmware, to eliminate the sub-pixel alignment step, but that
just proved to be too time-consuming. Nikon doesn't provide any
information about the firmware, so it would have meant disassembling it,
and I just didn't have the time, although I love doing things like that.
The bottom line is I started this some 3 years ago thinking I would
finish in a month or two. Instead, I learned more than I ever wanted
to and ended up writing three programs!
Don.
P.S.