multi-sampling: does it do anything?

  • Thread starter: Robert Feinman

Robert Feinman

Multisampling has become popular with the new generation of scanners.
This is supposed to increase dynamic range and lower noise in the
densest part of the film. The mathematics of this, however, would
seem to indicate a very modest effect.

Let's assume we are scanning with a 16 bit output to maintain
"best" quality (another disputed point).
If we sample each image point twice we can effect the lowest bit
in the image. That is we might change it from a 0 to a 1 or the
reverse. If we oversample 4x we can effect the lowest 2 bits.

Let's assume that we have calibrated our scanner so the darkest
values are around 30 and the lightest around 250. This leaves us
a little room for alteration in the image editor without clipping.
So changing the darkest 1 or 2 bits can potentially alter the 30 value
in a range of 28-32 or so. I doubt anyone will notice this in a final
print or online presentation.

Oversampling with a digital camera probably works better since, with
a long exposure, it is possible to average hundreds of readings as the
data is being collected.

I'd be interested to see any results that show a noticeable improvement
from oversampling with a film scanner.
 
Robert Feinman said:
Multisampling has become popular with the new generation of scanners.
This is supposed to increase dynamic range and lower noise in the
densest part of the film. The mathematics of this, however, would
seem to indicate a very modest effect.

Let's assume we are scanning with a 16 bit output to maintain
"best" quality (another disputed point).
If we sample each image point twice we can effect the lowest bit
in the image. That is we might change it from a 0 to a 1 or the
reverse. If we oversample 4x we can effect the lowest 2 bits.
<pedant>
BTW - it is "affect" not "effect". The change you effect affects the
result. ;-)
</pedant>
Let's assume that we have calibrated our scanner so the darkest
values are around 30 and the lightest around 250. This leaves us
a little room for alteration in the image editor without clipping.
So changing the darkest 1 or 2 bits can potentially alter the 30 value
in a range of 28-32 or so. I doubt anyone will notice this in a final
print or online presentation.
If that was what multisampling did then you would be right, but it is
not what happens because your mathematics assume noise free scanning,
and noise is exactly what multisampling addresses.

There are many sources of noise on a scanner, including noise on the
supply voltages to the sensor and the analogue amplification stages,
noise on the reference voltages of the analogue to digital convertor,
noise on the supply driving the illumination source which causes the
brightness of the source to be noisy, noise on the timing circuit which
controls the exposure time of each CCD sample, and so on. So, when you
consider your scanner system scanning at 16-bit output you cannot assume
that the noise is only the quantisation noise, appearing in the least
significant bit, as you appear to have done above.

One of the dominant sources of noise, particularly for scanning negative
sources for reasons which will become obvious in a few moments, is
random noise in the arrival of photons from the light source, through
the emulsion and onto the CCD. Since the arrival of each photon is
totally unrelated to the arrival of any other (ie. the photons do not
affect each other) then it turns out that the noise on the number of
photons arriving in a given time interval is just the square root of the
total number of photons. This is just the same statistical phenomenon
that governs flipping a coin. On average if you flip a coin 10 times
then you will get 5 heads and 5 tails, but as everyone knows it rarely
turns out as perfect as that. Sometimes you get 10 heads and no tails,
sometimes 3 heads and 7 tails. If you run 100 experiments to flip a
coin ten times and record the number of heads you get in each experiment
then the average will be pretty close to 5. You will also be able to
work out the average deviation from that value over all 100 experiments.
The "standard" method of computing this variation is to square the
difference between the number of heads in each experiment and the
average (which gives equal weighting to positive and negative
differences), sum these values and divide by one less than the number of
experiments (to account for the mean itself) and then calculate the
square root of the result (which returns the number to the same scale as
the original heads and tails). What you will then find is that this
noise is pretty close to the square root of the number of coins being
flipped in each experiment, in this case 10 coins, giving an average
deviation from the mean of 3.16. Because this is the standard method of
computing the average deviation from the mean, it is known as the
"standard deviation". So, from 100 experiments flipping 10 coins, you
can reasonably expect an average of 5 heads +/- 3.16.

Of course, there is a variability in this noise, so that if you now run
100 experiments each of 100 tests of flipping 10 coins then you can
determine how much that standard deviation varies in practice. However,
the more experiments you run, the closer to an average of 5 you will
get, and the closer the standard deviation, or noise, will be to 3.16.
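
For anyone who wants to try this, here is a rough Python sketch of the
coin experiment just described (an illustration only; as noted later in
the thread, some of the figures above are out in detail - simulated coin
flips actually give a standard deviation nearer sqrt(10)/2, about 1.6,
than 3.16 - but the square-root-of-count scaling that matters here is
unaffected):

import random

n_experiments = 100
n_coins = 10

# Count heads in each experiment of 10 coin flips.
heads = [sum(random.randint(0, 1) for _ in range(n_coins))
         for _ in range(n_experiments)]

# The "standard" computation described above: square the differences
# from the mean, sum, divide by one less than the number of
# experiments, then take the square root.
mean = sum(heads) / n_experiments
variance = sum((h - mean) ** 2 for h in heads) / (n_experiments - 1)
std_dev = variance ** 0.5

print(f"mean heads: {mean:.2f}")             # close to 5
print(f"standard deviation: {std_dev:.2f}")  # close to 1.6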

So what has this all to do with scanners and multisampling?

Well, the photons which arrive at the CCD cell, which create the voltage
that the ADC turns into a data value for that pixel, have the same
statistics. Consequently, although the average number of photons
arriving at the CCD during each exposure period will be roughly the
same, it can vary by the square root of the total number of photons.
That means that if you have two adjacent pixels with exactly the same
density on the film, or you measure the same pixel twice using
multi-sampling, you can expect the number of photons detected by the CCD
to vary between those two measurements by, on average, the square root
of the total number of photons measured in each sample. In addition,
the signal to noise ratio is also just the square root of the number of
photons detected.
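
This square-root behaviour is easy to verify numerically. The sketch
below (an illustration, using numpy's Poisson generator to model
independent photon arrivals) shows that for any mean count the measured
noise is the square root of the count, and so is the signal to noise
ratio:

import numpy as np

rng = np.random.default_rng(0)

# Simulate many repeated exposures at several mean photon counts.
for mean_photons in (100, 10_000, 100_000):
    counts = rng.poisson(mean_photons, size=100_000)
    noise = counts.std()                # ~ sqrt(mean_photons)
    snr = counts.mean() / noise         # also ~ sqrt(mean_photons)
    print(f"mean={mean_photons:>7}  noise={noise:7.1f}  "
          f"sqrt(mean)={mean_photons**0.5:7.1f}  SNR={snr:6.1f}")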

So, if you can detect more photons in the exposure period then you can
increase the signal to noise. However, this is where you run into a
problem. The CCD converts each photon to an electron, which it stores
on a little capacitor in the cell (actually the capacitor *is* the
photodetector in a CCD) and the more electrons stored on the capacitor,
the larger the voltage that is produced - until that voltage reaches a
limiting bias voltage when each additional electron is just spilled out
onto the adjacent cell or onto a special "anti-blooming" track on the
device. You can think of this as the CCD cell simply being a bucket
which is being filled with water from a stream. You can measure the
flow rate of the stream by allowing it to flow into the bucket for a
period of time, but once the bucket is full, it just overflows and you
cannot measure any more.

Typically, a linear CCD used in a commercial grade scanner will have a
storage capacity at each cell of around 100,000 electrons before it
saturates. Ignoring the fact that the quantum efficiency (how many
electrons are produced by each photon) is typically much less than one,
this corresponds to around 100,000 photons which can be detected in the
exposure period before the cell saturates. This in turn means that the
*best* signal to noise ratio that the CCD can produce is roughly the
square root of 100,000, which is around 316:1. You will note
immediately that this is *much* less than the dynamic range of the
16-bit data range in your scanned image - in fact, the maximum signal to
noise is equivalent to about 8.5 bits. That is because this is the
noise on the maximum signal - the highlights in a slide or the shadows
on a negative image.
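
The arithmetic behind that figure, as a quick sketch (log2 of 316 comes
out at about 8.3 bits, which the text above rounds to 8.5):

import math

full_well = 100_000            # electrons at saturation
snr = math.sqrt(full_well)     # shot-noise-limited SNR, about 316:1
bits = math.log2(snr)          # equivalent bit depth, about 8.3

print(f"SNR = {snr:.0f}:1, equivalent to about {bits:.1f} bits")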

Of course the CCD will detect far fewer photons from denser parts of
the emulsion and the noise will again be the square root of the number
of detected photons. If you are using 16-bit data then the scanner can
respond to photon arrival rates which are 1/65536th of the saturation
level, producing around 2 electrons in the cell, with an average noise
of around 1.4. Don't think this is too strange, having 1.4 electrons of
noise - it is no different from having 3.16 heads of noise in the
experiment with the coins, even though each coin only has one head and
one tail. Of course, for such low levels of signal, other noise sources
such as some of those mentioned in the first paragraph above will
dominate this, and it is not unusual to have CCD readout noises of 10-20
electrons, which are added to the other sources. Again, this makes the
variation on a single scan pass much greater than the quantisation of
the 16-bit data range you are using, and this is what effectively
determines the Dmax of the scanner - not the number of bits in the ADC,
as the marketing department of some manufacturers would have you believe.
the marketing department of some manufacturers would have you believe.

Returning to the best signal to noise in a single sample though, this is
clearly limited by the storage capacity of the CCD. What you need are
bigger CCDs - and you will notice that professional digicameras do have
much bigger CCDs than consumer cameras, for just this same reason. As
already shown, for a single sample, the best signal to noise you can
expect is equivalent to around 8.5-bits. If, however, you take two
samples and add the data together then you have produced the equivalent
of a CCD with twice the storage capacity - and improved the signal to
noise ratio by around a factor of 1.4x. If you take four samples of the
same pixel then you have quadrupled the storage capacity and thus
effectively doubled the signal to noise - you now have the equivalent of
around 9.5-bits. 16 samples gives you the equivalent of around 2-bits
of noise reduction, or an SNR equivalent to 10.5-bits.
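
Again as a rough sketch of the arithmetic (summing N samples behaves
like an N-times-larger well, so SNR grows as sqrt(N) and each 4x
increase in samples buys one extra bit):

import math

full_well = 100_000  # electrons per sample, as above

for n_samples in (1, 2, 4, 16):
    snr = math.sqrt(n_samples * full_well)
    print(f"{n_samples:2d}x sampling: SNR {snr:5.0f}:1, "
          f"about {math.log2(snr):.1f} bits")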

Now, when you are scanning a slide the noise on the highlights isn't
really a problem - there is naturally more noise there, but because your
eyes are subject to exactly the same statistics as the scanner you
cannot perceive it. Adjacent, unsaturated, pixels from exactly the same
density of emulsion will vary by this noise, and it can be seen by
appropriate adjustment of the levels to pull detail out of the
highlights. More perceptible, however, is noise in the shadows because
your eyes expect a lower noise floor as the photon flux decreases.
However, as we have seen, in the shadows the signal to noise is worse
because of additional noise sources. Hopefully, these noise sources are
random and uncorrelated from sample to sample and, if they are, then
they will also be reduced in effect by the square root of the number of
multisamples used.

For scanning a negative, however, the shadows in the image are the
highlights in the negative. So the best SNR in the image is in the
shadows and is only 8.5-bits on a single scan - so multisampling really
makes a significant difference scanning negatives, both improving the
signal to noise and the dynamic range of the image by the square root of
the number of samples taken.

As you can see from the above, the effect is a lot more than just a few
bits in the entire range that your assessment suggested. Sampling the
same pixel twice does not limit the change to only the least significant
bit - that is just bad mathematics. Each sample is like flipping a coin
a number of times (about 100,000 times). On average, the difference
between samples is the square root of the number of flips, not just
+/-1, and that is essentially where your assessment went wrong.
 
Excellent, and worth quoting in full!

Thanks, once again, for a clear explanation on behalf of all who like
to read this sort of concise yet comprehensive digest!

Don.

P.S. As Hecate would say: Give the man a coconut! ;o)
 
Don said:
Excellent, and worth quoting in full!

Thanks, once again, for a clear explanation on behalf of all who like
to read this sort of concise yet comprehensive digest!

Don.

P.S. As Hecate would say: Give the man a coconut! ;o)
LOL!

Let me second that. I always read Kennedy's posts. I don't always
understand them, mind.....

But, if I want to ask a technical question I know who I'd want to
answer.
 
Excellent explanation. I do have another question about multi-pass
scanning on a film scanner. The multiple scans must be aligned perfectly
with each other for the software algorithm to work correctly. To achieve
this, the scanner's stepping motor must be able to reposition the film
carrier at precisely the same spot, and sensors must maintain the same
start and end points for each of the passes. When a film scanner claims
that it supports multi-pass scanning, does it imply that it has built-in
hardware support (e.g. memory of stepping motor and sensor positioning)
for this purpose? If a film scanner does not make such a claim, how well
will multi-pass scanning work on them?
 
Excellent explanation. I do have another question about multi-pass
scanning on a film scanner. The multiple scans must be aligned perfectly
with each other for the software algorithm to work correctly. To achieve
this, the scanner's stepping motor must be able to reposition the film
carrier at precisely the same spot, and sensors must maintain the same
start and end points for each of the passes. When a film scanner claims
that it supports multi-pass scanning, does it imply that it has built-in
hardware support (e.g. memory of stepping motor and sensor positioning)
for this purpose? If a film scanner does not make such a claim, how well
will multi-pass scanning work on them?
Yes, multisampling requires multiple samples of exactly the same piece
of film to be captured, otherwise there is a trade-off between noise
reduction and image sharpness, depending on the accuracy of the
position. For this reason, almost all scanners which intrinsically
support multisampling do so on a single pass, stopping the scanner head
whilst all the samples are taken for a line of pixels. Scanners which
rely on multipass multisampling do not (in any case that I am aware of)
have any additional support to aid the positioning, however it may be
that the accuracy with which they can reposition the scanner head on
subsequent passes is adequate for the scan resolution that they use.

For a good example of this type of issue, read some of Don's posts on
his experiences having upgraded from the LS-30 to the LS-50. Neither
scanner intrinsically supports multisampling, but can be used to produce
multi-pass multisampling either in Vuescan or by reassembling the images
in Photoshop or another package. The LS-30 scans at a maximum of
2700ppi, and repositioned accurately enough for subsequent scans to
be aligned with better than a pixel tolerance throughout the scan
(although this was considerably better than I ever achieved with similar
vintage Nikon scanners, so there may be some part to part variation).
The LS-50 scans at 4000ppi and does not have adequate repositioning
accuracy to align subsequent scans for multipass multisampling without a
significant loss of image sharpness across the film length.

If you are going to rely on multisampling regularly, then buy a scanner
which intrinsically supports it on a single pass.
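
To illustrate the alignment trade-off (a toy model only, not any
scanner's actual algorithm): averaging passes that land a pixel or two
apart reduces the random noise just as well as aligned passes, but
smears a sharp edge across several pixels.

import numpy as np

rng = np.random.default_rng(1)
edge = np.concatenate([np.zeros(20), np.ones(20)])  # an ideal sharp edge

def scan(offset_px):
    # One noisy pass, displaced along the film by a whole-pixel error.
    return np.roll(edge, offset_px) + rng.normal(0, 0.05, edge.size)

aligned = np.mean([scan(0) for _ in range(4)], axis=0)
misaligned = np.mean([scan(o) for o in (0, 1, 0, 2)], axis=0)

# The aligned average keeps a one-pixel transition; the misaligned
# average spreads it over three pixels.
print("aligned   :", np.round(aligned[18:24], 2))
print("misaligned:", np.round(misaligned[18:24], 2))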
 
A really great overview of noise sources
Just wanted to say that this is excellent, and I appreciate the time you
took to present this in such a complete yet concise manner! It's a saver.

Neil
 
Neil said:
Just wanted to say that this is excellent, and I appreciate the time you
took to present this in such a complete yet concise manner! It's a saver.
Thanks for the compliments folks, but it isn't a perfect explanation by
a long way. In particular, while reading it back after posting, I
spotted quite a few errors in detail (eg. the formula for standard
deviation is out by the number of samples, which should be n*(n-1), not
just n-1), however they don't affect the overall gist of the message
concerning what multisampling does or when and why it is effective.
 
Robert Feinman said:
Multisampling has become popular with the new generation of scanners.
This is supposed to increase dynamic range and lower noise in the
densest part of the film. The mathematics of this, however, would
seem to indicate a very modest effect.

Let's assume we are scanning with a 16 bit output to maintain
"best" quality (another disputed point).
If we sample each image point twice we can effect the lowest bit
in the image. That is we might change it from a 0 to a 1 or the
reverse. If we oversample 4x we can effect the lowest 2 bits.

Let's assume that we have calibrated our scanner so the darkest
values are around 30 and the lightest around 250. This leaves us
a little room for alteration in the image editor without clipping.
So changing the darkest 1 or 2 bits can potentially alter the 30 value
in a range of 28-32 or so. I doubt anyone will notice this in a final
print or online presentation.

Oversampling with a digital camera probably works better since, with
a long exposure, it is possible to average hundreds of readings as the
data is being collected.

I'd be interested to see any results that show a noticeable improvement
from oversampling with a film scanner.
I didn't get any offers of an example so I've created one myself.
It's a scan of an empty slide mount so the densities seen by the scanner
should exceed anything found with real film.
Rather than go over all the detail here just follow the tips link on my
website. It's the one about multiscanning (obviously).
For those not wishing to take the time, my conclusion is that with at
least the Minolta 5400 it doesn't do anything visible.
If you have a counter-example, please provide it.
 
Robert Feinman said:
I didn't get any offers of an example so I've created one myself.
It's a scan of an empty slide mount so the densities seen by the scanner
should exceed anything found with real film.
Rather than go over all the detail here just follow the tips link on my
website. It's the one about multiscanning (obviously).
For those not wishing to take the time, my conclusion is that with at
least the Minolta 5400 it doesn't do anything visible.
If you have a counter-example, please provide it.
Robert,
Two comments on your tests:
Gamma:
Your images appear to have been scanned with a gamma matched to the
screen, which enhances the gain in the shadows, and therefore has the
effect of increasing the noise in that region. Hence the dominant noise
you do see is in the shadows, not the highlights. There is nothing
wrong with this and it is how the image will correctly be viewed, but it
does mean that you are analysing pre-processed data, which can give
misleading results. Of course, when scanning negatives this is doubly
true, since the negative starts with more noise in the shadows for the
reasons I explained in my earlier post.

Data range:
You state that what variation there is in your image is minimal.
However, you quantify that variation in terms of an 8-bit range! Not
surprisingly, the noise you quote is no more than the quantisation noise
anticipated for an 8-bit range. You'll find even less variation if you
estimate the results from a binary image. ;-)

Certainly, if you are only interested in 8-bit scans then multisampling
is of little benefit, since the worst case noise is likely to be of the
order of 1/300th of the peak white level. But if you are only
interested in 8-bit scans then why did you waste your money buying a
16-bit scanner? Your analysis of the benefits of multi-sampling could
equally well be titled "Is a 10/12/14/16-bit scanner ADC worthwhile?". I
suspect that you already know the answer to that one, but the analysis
you have conducted is equally applicable to it and, not surprisingly,
comes to the same wrong conclusions.

I strongly suggest that you repeat the same experiment, as I just have
(to check that the results do show the effect ;-)), with 16-bit, unity
gamma scans, and use the Photoshop Histogram tool to calculate the mean
and standard deviation for a fixed area in the black and white portions
of each multisample image that you produce. I will point out that this
will indicate the effect quite well directly, but there is clearly a
precision limitation on the Photoshop calculations. This becomes
obvious if the levels tool is used to stretch the contrast in the black
and white regions. For example, I used 0-15 in the blacks and 240-255
in the whites to implement a gain of x15 on the data in those regions.
The resulting Photoshop calculations for the standard deviation
(estimated over an array of 70x70 (4900 total) pixels) show a clear
relationship which is approximately proportional to the square root of
the number of samples in each scan, as expected.

I can send you the source files for my tests taken with a Nikon LS-4000
if you like, however the results are:

Multiple    Shadows                  Highlights
Samples     R      G      B         R       G       B
    1       2.12   2.20   2.16     19.06   17.91   14.84
    2       1.54   1.71   1.63     13.89   13.14   11.13
    4       1.23   1.33   1.28     10.62    9.92    8.59
    8       1.04   1.08   1.05      8.54    7.73    6.70
   16       0.91   0.97   0.93      7.02    6.35    5.58

As you can see from these figures, the benefit of multisampling becomes
less as you increase the number of samples, in other words, the
advantage becomes less than the square root law would suggest. For
example, the numbers in the above chart show a roughly 1.35x ratio
between 2 samples and one sample for highlights, but something closer to
1.2x for the same highlights between 16 samples and 8 samples and only
about 1.1x for the shadows.
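
You can check this directly from the green-channel columns of the table
(a trivial Python sketch using the figures quoted above):

# Green-channel standard deviations from the table, 1/2/4/8/16 samples.
shadows_g = [2.20, 1.71, 1.33, 1.08, 0.97]
highlights_g = [17.91, 13.14, 9.92, 7.73, 6.35]

for name, col in (("shadows", shadows_g), ("highlights", highlights_g)):
    # Improvement ratio for each doubling of samples; the square root
    # law predicts sqrt(2), about 1.41, every time.
    ratios = [f"{a / b:.2f}" for a, b in zip(col, col[1:])]
    print(name, ratios)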

This is due to several factors conspiring against you. Firstly,
multisampling only reduces the random noise as I explained previously.
If there is a systematic noise in the scanner, such as clock
breakthrough or, more likely, fixed pattern offset and response
variations between the CCD channels, then multisampling will not have
any effect on that. Consequently, as the random noise effects are
reduced by more multisampling, the systematic noise becomes the dominant
residual noise, and thus the full benefit expected of a cleaner scan is
not obtained.

Another reason for this departure from expectations is due to the nature
of the noise sources themselves. In my previous explanation I
specifically concentrated on the effect of photon noise partly because
that is the dominant noise in shadows for negative scans, but also
because that demonstrates the effect best. Photon noise has a perfectly
white power spectrum - which means that the noise power density is the
same at all frequencies. However, some of the noise sources present on
CCDs do not produce white noise, but instead have a 1/f^n spectrum,
where n is usually a number around 1. This is usually just referred to
as 1/f noise or flicker noise.

The significance of this is that the frequency in question is the
inverse of the time taken for each multisample pixel - hence there is a
greater flicker noise component for 16x multisampling than there is for
1x multisampling. For the same reason, there is a much greater flicker
noise component for multipass multisampling than there is for single
pass multisampling since each pixel is built up over almost the entire
scan period, rather than just the time to sample the same pixel a number
of times. This means a much more rapid departure from the expected
benefit curve for multipass multisampling than for single pass
multisampling (oh well, I did tell Don he would have been better off
with a used LS-4000 than a new LS-50, and proper multisampling was the
main reason :-( ).
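
The difference between white and flicker noise under averaging is easy
to demonstrate (a toy model only: the "flicker" here is a normalised
random walk standing in for a 1/f-like source, an assumption for
illustration rather than a measured scanner spectrum):

import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_samples = 10_000, 16

# Uncorrelated (white) noise: 16 independent samples per pixel.
white = rng.normal(0, 1, (n_pixels, n_samples))

# Slowly-varying 1/f-like noise: consecutive samples are correlated.
drift = np.cumsum(rng.normal(0, 1, n_pixels * n_samples))
drift = (drift - drift.mean()) / drift.std()
flicker = drift.reshape(n_pixels, n_samples)

for name, noise in (("white", white), ("flicker", flicker)):
    single = noise[:, 0].std()
    averaged = noise.mean(axis=1).std()
    print(f"{name:8s} 1x: {single:.2f}  {n_samples}x average: "
          f"{averaged:.2f}  (sqrt(N) prediction: "
          f"{single / np.sqrt(n_samples):.2f})")

The white noise drops by the predicted factor of four; the correlated
noise barely drops at all, which is why the benefit curve flattens
sooner for multipass multisampling.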
 
Kennedy said:
As you can see from these figures, the benefit of multisampling becomes
less as you increase the number of samples, in other words, the
advantage becomes less than the square root law would suggest. For
example, the numbers in the above chart show a roughly 1.35x ratio
between 2 samples and one sample for highlights, but something closer to
1.2x for the same highlights between 16 samples and 8 samples and only
about 1.1x for the shadows.
That, presumably, is why in all but a few cases, I generally find that
2 passes is enough?
 
Hecate said:
That, presumably, is why in all but a few cases, I generally find that
2 passes is enough?
Whether two passes is enough just depends on your threshold of
satisfaction, just as the point at which the time to scan more isn't
worth the benefit gained is also a personal threshold. Usually both of
these thresholds are reached long before the point at which further
scanning will provide absolutely no benefit. I use x4 on negatives and
x2 on slides as a matter of course and occasionally increase these if
the original camera exposure was off - meaning the image on the film
needs some contrast enhancement.

I am not aware of any scanner driver that actually allows the absolute
limit of multiscanning to be reached but you can see that it probably
isn't much beyond 16x for the Nikon LS-4000. At a guess I would say
that if the driver permitted it, 64x multiscanning (ie. two more steps)
would be the absolute upper limit beyond which no advantage would be
achieved.
 
Kennedy said:
(oh well, I did tell Don he would have been better off
with a used LS-4000 than a new LS-50, and proper multisampling was the
main reason :-( ).

Indeed you did! But, as we all know, I just couldn't get over the
second-hand bit... :-(

Still, the "LD-50" is a pretty nice little scanner! ;o)

Don.
 
Kennedy said:
Thanks for the compliments folks, but it isn't a perfect explanation
by a long way. In particular, while reading it back after posting, I
spotted quite a few errors in detail (eg. the formula for standard
deviation is out by the number of samples, which should be n*(n-1),
not just n-1), however they don't affect the overall gist of the
message concerning what multisampling does or when and why it is
effective.
Yes, I noticed that as well, however, the point was not lost. As you say,
it doesn't change the overall gist of the message.

As important is the acknowledgement that there are *many* sources of
noise, that multi-sampling is intended to minimize the impact of all of
these sources, not just the photon error rate (even though their
combined contribution would best be represented as a complex matrix),
and that the LSB is not the quantity most affected by the process.

Regards,

Neil
 
Kennedy said:
Yes, multisampling requires multiple samples of exactly the same piece
of film to be captured, otherwise there is a trade-off between noise
reduction and image sharpness, depending on the accuracy of the
position. For this reason, almost all scanners which intrinsically
support multisampling do so on a single pass, stopping the scanner head
whilst all the samples are taken for a line of pixels. Scanners which
rely on multipass multisampling do not (in any case that I am aware of)
have any additional support to aid the positioning, however it may be
that the accuracy with which they can reposition the scanner head on
subsequent passes is adequate for the scan resolution that they use.

Oops. I made the wrong assumption that a scanner will have to reposition
for each multisampling scan pass. Keeping the head stationary and taking
multiple samples is a better alternative. But thanks for confirming that
some kind of hardware support is indeed needed for doing multisampling
right.
For a good example of this type of issue, read some of Don's posts on
his experiences having upgraded from the LS-30 to the LS-50. Neither
scanner intrinsically supports multisampling, but can be used to produce
multi-pass multisampling either in Vuescan or by reassembling the images
in Photoshop or another package. The LS-30 scans at a maximum of
2700ppi, and repositioned accurately enough for subsequent scans to
be aligned with better than a pixel tolerance throughout the scan
(although this was considerably better than I ever achieved with similar
vintage Nikon scanners, so there may be some part to part variation).
The LS-50 scans at 4000ppi and does not have adequate repositioning
accuracy to align subsequent scans for multipass multisampling without a
significant loss of image sharpness across the film length.

If you are going to rely on multisampling regularly, then buy a scanner
which intrinsically supports it on a single pass.

Agreed. And be skeptical about any scanning software such as VueScan
that claims support of multisampling for scanners which do not have any
intrinsic hardware support.
 
Robert Feinman said:
I didn't get any offers of an example so I've created one myself.
It's a scan of an empty slide mount so the densities seen by the scanner
should exceed anything found with real film.
Rather than go over all the detail here just follow the tips link on my
website. It's the one about multiscanning (obviously).
For those not wishing to take the time, my conclusion is that with at
least the Minolta 5400 it doesn't do anything visible.
If you have a counter-example, please provide it.
Because of some questions about my methodology I've revised my online
discussion. It now has some 16-bit data as well as a histogram.
My conclusions remain the same, however. I don't believe there will be
any visible difference in the final image from the effects of
multisampling.
With slides the image data is in the region of poor visible contrast
and with negatives the maximum density of the film is much less than the
opaque example I chose, so the noise as a percentage of the data will be
even smaller. So even in the highlights the effect will be minimal.
 
Robert Feinman said:
Because of some questions about my methodology I've revised my online
discussion. It now has some 16-bit data as well as a histogram.
My conclusions remain the same, however. I don't believe there will be
any visible difference in the final image from the effects of
multisampling.
With slides the image data is in the region of poor visible contrast
and with negatives the maximum density of the film is much less than the
opaque example I chose, so the noise as a percentage of the data will be
even smaller. So even in the highlights the effect will be minimal.
But, even though you have now included 16-bit data, you are still
assuming that because the noise is lower than 8-bit quantisation it can
never be visible! This simply is not true - if it were then there
would never be a need for a scanner with more than an 8-bit ADC.

How you can claim that the 0.3-2% noise to peak signal ratio you measure
is irrelevant *and* that a 16-bit scan is justified for "large-scale
tonality adjustments" is beyond logic! A 2% noise floor is on the
threshold of visibility without any tonal range adjustments - were this
distribution in the green channel it would be clearly visible - so even
a minor change would bring out the difference between multiscanned
images.

Incidentally, I have some problems with the consistency of your data. In
an 8-bit scale you quote rgb data of 251,252,253. I would expect this
to transform to something around values of 64256, 64512 & 64768 rather
than the levels you quote of 6970-6992, 456-467 & 1421-1451. Even your
dark data 68,21,17 in an 8-bit scale doesn't match these 16-bit values.
I even wondered whether perhaps you were switching between hexadecimal
and decimal notation, but still cannot see any correlation between the
quoted 8-bit numbers and the 16-bit ones. Since these are the data that
you derive your percentage distributions and conclusions from, it is
important to achieve consistency.
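
For reference, the expected correspondence is simple (a two-line sketch
of the 8-bit to 16-bit scaling used above):

# An 8-bit level maps into a 16-bit range by a factor of 256.
for level in (251, 252, 253):
    print(f"8-bit {level} -> 16-bit {level * 256}")
# 251 -> 64256, 252 -> 64512, 253 -> 64768, nothing like the
# quoted 16-bit values of 6970-6992, 456-467 and 1421-1451.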

Another problem that I have with your data is that the blacks are *so*
light, when this is an "empty slide mount". From that I assume that the
blacks are the slide mount itself - which should produce near zero data,
except for noise and dark current, which should be removed automatically
from the scanner output by the driver and will be quite low in any case.
Is this really a raw scan? If so, do you have an explanation for the
light source which has so effectively illuminated the slide mount? Do
all your slides scan blacks at this level of black before you edit them?

Unfortunately, your last paragraph is meaningless because the images
themselves are only 8-bit jpg data for the web, without links to the
original 16-bit files.
 
Kennedy McEwen wrote: SNIP

Agreed. And be skeptical about any scanning software such as VueScan
that claims support of multisampling for scanners which do not have any
intrinsic hardware support.

AFAIK VueScan only claims to offer multi-pass scanning if the scanner
doesn't allow multi-sampling. By the way, the manufacturer's driver
software usually offers no such multi-pass option, so I don't understand
your scepticism.

Bart
 