I've wondered about that. Seems it gives better separation, but it may
be at the cost of accuracy.
No, your concern is based on a misconception - see below.
Objects in the thing being scanned with colors that fall between the color 'frequencies' of relatively sharp LED color spectrums don't register properly in the results.
That would be the case if it were a general purpose flatbed scanner, and it would result in extreme metamerism - different scan colours depending on the exact spectrum of the object being scanned, just as the colour of some clothes appears different under different lighting conditions.
However these are DEDICATED film scanners, and the film dye spectra already overlap, so there is no possibility of an image that is visible on the film being "missed" by the narrow spectral bandwidth of the LED.
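To make that concrete, here is a minimal Python sketch - with invented Gaussian dye curves rather than measured data - showing that each dye registers strongly at its own LED line and appreciably at the neighbouring one, so there is no spectral gap for image content to hide in:

import numpy as np

def dye_density(wavelength_nm, peak_nm, width_nm):
    # Idealised Gaussian absorption curve for one film dye (unit peak density).
    return np.exp(-0.5 * ((wavelength_nm - peak_nm) / width_nm) ** 2)

# Rough, invented peak positions for the cyan, magenta and yellow dye layers.
dye_peaks = {"cyan": 650.0, "magenta": 545.0, "yellow": 440.0}
dye_width = 60.0  # broad enough that the absorption curves overlap substantially

# Narrow-band LED illumination lines (again, illustrative values only).
led_lines = {"red": 660.0, "green": 540.0, "blue": 460.0}

for led, lam in led_lines.items():
    seen = {dye: round(dye_density(lam, peak, dye_width), 3)
            for dye, peak in dye_peaks.items()}
    print(f"{led} LED at {lam:.0f} nm measures dye densities: {seen}")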
It would be the same as if it weren't there. Ideally each LED would have a "square" color profile with one butting against the next, but they don't. There are big gaps in between. Seems like filters that overlap a bit (like film profiles do) would provide an overall increased accuracy of result.
No - quite the contrary.
*If* the spectral response were flat across a wide band with all three colours abutting each other, then the actual response would be a 3-point convolution of the response profile of the detector and the spectral density of the dyes. In order to achieve pure colour separation these 3 outputs require deconvolution - and with only 3 data outputs this is relatively simple matrix manipulation. However, it is the deconvolution process which increases the noise.
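Here is a rough sketch of that deconvolution step and its noise penalty. The 3x3 cross-coupling matrix is invented purely for illustration, not taken from any real scanner:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cross-coupling matrix: rows = broad sensor channels (R, G, B),
# columns = dye densities (cyan, magenta, yellow). The off-diagonal terms are
# the bleed between channels caused by broad, overlapping responses.
M = np.array([
    [1.00, 0.35, 0.10],
    [0.30, 1.00, 0.30],
    [0.10, 0.35, 1.00],
])

true_densities = np.array([0.8, 0.4, 0.6])        # the image actually on the film
noise = 0.01 * rng.standard_normal((10000, 3))    # raw sensor noise, many trials

measured = true_densities @ M.T + noise           # what the broad-band sensor reports
recovered = measured @ np.linalg.inv(M).T         # matrix deconvolution of the colours

print("noise std before deconvolution:", measured.std(axis=0))
print("noise std after  deconvolution:", recovered.std(axis=0))
# The recovered channels are colour-pure, but their noise is amplified by
# roughly the norm of the corresponding row of the inverse matrix.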
Consider what would happen in a process with many colour scanning and reproduction steps if the scanner had the response you suggest. The film dye spreads some red into the green and blue, and vice versa for the other colours. This cross-coupling is then picked up by the broad scanner response in each colour and then printed, perhaps onto another film with 3 filtered lamps, each with its own broad response cross-coupling into the others, and responded to by the film layers, each with their colour responses spreading into each other. Then the second generation film is developed and the colours spread out according to the spectral characteristics of the dyes again. So the colour purity and saturation of the image is grossly reduced in the second generation reproduction. Then the second film is scanned and the process repeated, reducing colour purity further (i.e. loss of saturation). You don't have to go through many steps before the response of each sensor in the final scan has been convolved across the entire visible band - resulting in a completely monochrome image. If you have ever tried to photograph colour photographs you are probably familiar with the fact that the second generation copy never has the same saturation as the original - exactly this effect.
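The saturation collapse is easy to demonstrate numerically. The mixing matrix below is invented, but applying the same broad-band cross-coupling once per copy generation drives the channels together:

import numpy as np

# Invented per-generation mixing of R, G and B, normalised so brightness is preserved.
mix = np.array([
    [0.70, 0.20, 0.10],
    [0.15, 0.70, 0.15],
    [0.10, 0.20, 0.70],
])

colour = np.array([0.9, 0.3, 0.2])        # a fairly saturated starting colour
for generation in range(6):
    spread = colour.max() - colour.min()  # crude stand-in for saturation
    print(f"generation {generation}: {np.round(colour, 3)}  spread = {spread:.3f}")
    colour = mix @ colour                 # one more broad-band copy step
# The spread shrinks every generation: the channels converge and the image
# drifts towards monochrome, exactly the saturation loss described above.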
The only ways to avoid this are:
1. Ensure that the response and spectra of the sensors, the film and the
dyes do not bleed into each other at all and that they all match in
wavelength exactly or
2. Deconvolve the colours at each scanning and printing step, with a
consequential increase of noise at each process step, ultimately
resulting in loss of image detail.
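For contrast, here is option 2 sketched with the same invented matrix as the previous example: deconvolving at every generation keeps the colour pure, but the amplified sensor noise accumulates step by step:

import numpy as np

rng = np.random.default_rng(1)

# Same invented cross-coupling as above, plus its inverse for the deconvolution.
mix = np.array([
    [0.70, 0.20, 0.10],
    [0.15, 0.70, 0.15],
    [0.10, 0.20, 0.70],
])
unmix = np.linalg.inv(mix)

true_colour = np.array([0.9, 0.3, 0.2])
samples = np.tile(true_colour, (20000, 1))    # many noisy realisations of one pixel

for generation in range(1, 6):
    scanned = samples @ mix.T + 0.005 * rng.standard_normal(samples.shape)
    samples = scanned @ unmix.T               # restore colour purity at this step
    print(f"generation {generation}: mean = {np.round(samples.mean(axis=0), 3)}, "
          f"noise std = {np.round(samples.std(axis=0), 4)}")
# The mean stays at the original colour (no desaturation), but the noise grows
# with every deconvolved generation - the loss of image detail mentioned above.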
Measuring the response of each dye at a spot wavelength in the first
place eliminates this issue entirely. In a multistep process all you
need do then is use a matching set of LEDs to illuminate each generation
of film. The only loss of colour purity is then in the mismatch between
the film response and the dye reproduction, resulting in a colour
balance shift, but not a loss of saturation. In practical systems that
do exactly this, lasers are typically used these days to ensure that the
colours remain pure throughout each generation and there is minimum loss
of saturation.
The spectral response of your eye is quite broad, with significant cross-coupling between the colours, since the eye evolved in a real world with a full range of spectral inputs, not the fixed dye spectra of only 3 layers of emulsion. However, when you scan the film the last thing you want to do is introduce further mixing of the colours, with the consequential loss of purity and saturation. If you do, then you have to introduce some deconvolution steps, with a consequential loss of SNR.
So, I'm not entirely sure that strong separation really is something
good in the final result. Does sound good otherwise.
It is the ideal solution - for the reason I mentioned previously, the
CCD outputs are a true representation of the image, without the need for
matrix deconvolution of the colours from each other.
At the very least, it makes the CCD cheaper. It needs only one third as many sensors on it, and gets rid of the filters.
The CCD is marginally cheaper, but the downside is that it makes the
illumination system more complex and hence expensive. I would hazard a
guess that overall it is a more expensive solution than tricolour
scanning of a white source.
I think this is why Minolta went that way with their new model.
They didn't. They use a white LED with a tri-linear CCD, not a single linear CCD with a tricolour LED, which is what gives the benefit.
A white LED is usually 3 individual RGB LEDs, rather than a broad spectrum white - hence the colour produced doesn't look natural. So the new Minolta will get the same spectral purity as the Nikon, even though they are using broad band colour filters on the tri-linear CCD.
They've done things that clearly look like signs of cost-reduction.
Yes, but changing to a single line CCD wasn't one of them. ;-)