Minolta 5400: The classic (mostly green) lines

  • Thread starter: Markus Malmqvist
It might not seem much, but if you consider that this is linear data and
the error from the black level after calibration will then be scaled by
the gamma compensation, it is very significant.

For example, just looking at green (red would be a lot worse whilst blue
a little better) if the exposure 2 figure was used, but the exposure was
actually 12 (which is not unexpected for a long exposure) then the
residual average black would be 14 too high. Consequently, when
compensated for gamma 2.2, this would produce a result that was 1406 on
the 16 bit range, or 5 on Photoshop's 0-255 levels range - which are
very visible offsets from black. This is a perennial problem with CCDs
which require dark current correction in linear and are only converted
to gamma compensated output later - those small errors are very
significant.
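The arithmetic above can be checked with a short script. The residual offset of 14 and the gamma of 2.2 are the figures from the example; the 257 divisor is just the 16-bit to 8-bit scale factor.

```python
# Sketch of the argument above: a residual black offset of 14 codes
# (16-bit linear) becomes clearly visible after gamma 2.2 compensation.
# All values are taken from the example in the text.

def gamma_encode(linear_16bit, gamma=2.2, full_scale=65535):
    """Apply display gamma compensation to a linear 16-bit value."""
    return full_scale * (linear_16bit / full_scale) ** (1.0 / gamma)

residual_black = 14                      # linear error left after calibration
encoded = gamma_encode(residual_black)   # ~1406 on the 16-bit range
print(round(encoded))
print(round(encoded / 257))              # ~5 on Photoshop's 0-255 scale
```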

I see your logic here but you are talking about a difference of 14 when
one reading is 0 and the other is 14 and you are using gamma conversion
without slope limiting. If the readings are say 30 and 44 they become
1988 and 2366 in gamma 2.2 mode. The difference is then about 1.5 in
8-bit mode.

I aim for a DRange of 3 with the FS4000 (it is meant to be better but in
reality it is worse). If I could get a reliable black reading of 30
this would be great. However, if I do a 100 line scan I find the
average error for each sample point is about 40. By this I mean that if
the average of the 100 readings for a point is 80 I can expect most
readings to be from 40 to 120. With this sort of accuracy I don't think
a shift of 14 in the black level is very significant. Are other CCD
scanners much more accurate than I am seeing here ?
However, it isn't the average RGB levels in each exposure range that
matter, but the maximum change. Those are the pixels that have a high
dark current level and consequently will change most with exposure. Note
that dark current isn't the only thing that gives rise to a dark field
signal - there are threshold variations across the chip as well as
general offsets between the output and the ADC reference datum. You
would probably consider a change in red from 0 to 45 to be significant.
But it is no worse than you are getting for the average, and I bet that
a few lines are significantly worse than that - these are the lines that
will be most visible in the scan.

I take your point about average versus maximum. I will do some more
testing.
It appears, from the above data, that the increase in dark signal to the
longer exposure is being compensated by the margin. Many CCDs have
guard pixels on either side specifically for this purpose and the
average of those guard pixels is subtracted from the signal pixels
either on the CCD output itself or off-chip. However it has no effect
on the dark current distribution, or non-uniformity, and it is the
non-uniformity and its change that is significant here.

Thanks for the info about guard pixels and the mystery of the margin.
It might help in understanding this if you draw a distinction between
noise and signal. The dark current is just a signal, albeit an unwanted
one. Variation of that dark current, both across the CCD from pixel to
pixel, and with time, ie. from sample to sample of the same pixel, is
noise. So the margin cannot be a base noise level, but it can be an
average base signal.

Yes, apologies for my incorrect terminology.

-- Steven
 
Steven said:
I see your logic here but you are talking about a difference of 14 when
one reading is 0 and the other is 14 and you are using gamma conversion
without slope limiting.

Yes, I am assuming that you are setting a true black level based on the
reference, so the error will result in a true black offset, which is
then correctly gamma compensated.
If the readings are say 30 and 44 they become
1988 and 2366 in gamma 2.2 mode. The difference is then about 1.5 in
8-bit mode.
However, doing it that way round you are compensating for the gamma on
false data. Gamma is associated with the image display and perception,
not its capture, so dark offset should be implemented linearly to get a
true linear signal which is then gamma compensated.
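A minimal sketch of this ordering argument, using figures from the thread (an offset of 14 on a raw reading of 44): subtracting in linear and then gamma-encoding gives a quite different result from encoding first and subtracting afterwards.

```python
# A minimal sketch of the point above: the dark offset must be removed
# from the *linear* data before gamma compensation.  Doing it the other
# way round operates on false data.  The figures 44 and 14 come from
# the examples in the discussion.

def gamma_encode(v, gamma=2.2, full_scale=65535):
    return full_scale * (max(v, 0) / full_scale) ** (1.0 / gamma)

raw, offset = 44, 14

correct = gamma_encode(raw - offset)                # subtract in linear, then encode
wrong = gamma_encode(raw) - gamma_encode(offset)    # encode first, subtract after

print(round(correct), round(wrong))  # the two results differ substantially
```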
I aim for a DRange of 3 with the FS4000 (it is meant to be better but in
reality it is worse). If I could get a reliable black reading of 30
this would be great. However, if I do a 100 line scan I find the
average error for each sample point is about 40. By this I mean that if
the average of the 100 readings for a point is 80 I can expect most
readings to be from 40 to 120. With this sort of accuracy I don't think
a shift of 14 in the black level is very significant. Are other CCD
scanners much more accurate than I am seeing here ?

I don't think the raw data actually matters - there is no guarantee that
0 out of the CCD will digitise as 0 on the ADC, in fact there are good
technical reasons (to do with output linearity) why it shouldn't.
However the offset should be correctly removed after capture, which it
can't be if the calibration is using a different exposure.
 
My default settings on the 5400 are 16-bit max resolution at 4X and
ICE+GD enabled, and the scan time is very long. Yet I have never
encountered the problem you described. I use the Minolta sw.

Hmm, this means that my unit could be slightly faulty. I have done some
testing, and I have not yet solved the issue of sometimes having no visible
lines even after extremely steep Curves adjustment, noise is visible as
pixel specks, but no lines. Usually the lines are there, something like 20
units (8-bit...), difficult to see at least on my CRT monitor. The exposure
length obviously strengthens the lines, IF they are present.

What I did find out was that my exposure of the dark slide I have used for
testing was ridiculously long. I locked the Auto Exposure using that slide,
and then scanned a non-exposed developed black part of a film strip. The
used exposure cut straight through the film and was EASILY VISIBLE as a
magenta "glow". So I guess pumping up the exposure is not sensible even for
bigger signal to noise ratio, even if there were no lines. This leaves me
with no formula for the "best" exposure. But perhaps I should realize, that
it should be easy to find an exposure that is good enough.

I also found that using longer exposure causes more visible lines than using
shorter exposure and then Curves to compensate for the darkness.

One other thing... Is it difficult for every consumer scanner to properly
capture a very contrasty slide? I have noticed that if a slide has strong
highlights and significant shadow detail, not blowing the highlights means
that the shadow detail is barely visible. The noise partly scrambles the
shadow detail, but 4X multisampling clears most of that problem. So I have
yet to resort to high/low exposure scanning.
Be careful with the Auto Exposure option in the Preference. It must be
enabled for slides and to use the Exposure Lock button. Kind of screwy.

I don't think this is completely correct. Yes, negatives have different
control for automatic exposure in the GUI, but you can scan slides without
enabling Auto Exposure. The default exposure (0) is pretty low, so usually
you need to raise that. I guess the Exposure Lock is not needed, because the
HW control already defines exposure. Note that even in this mode the device
compensates for ICE/GD so that the prescan brightness will stay about
constant, if the HW exposure controls are not moved.

--markus
 
Yes, I am assuming that you are setting a true black level based on the
reference, so the error will result in a true black offset, which is
then correctly gamma compensated.

If my white reading is 65000 and the CCD has a DRange of 3 shouldn't I
regard anything less than 65 as black ? Surely I would be wrong to say
14 is brighter than 0. Isn't this all just noise ?
However, doing it that way round you are compensating for the gamma on
false data. Gamma is associated with the image display and perception,
not its capture, so dark offset should be implemented linearly to get a
true linear signal which is then gamma compensated.

I am not disputing the theory here but was criticising your example
comparing 0 to 14 which made the difference seem like a serious problem.
Can a difference of less than one quarter of the minimum perceptible
light level be regarded as serious ?

I will do some more tests and look for the maximum difference as well as
the average.

-- Steven
 
Markus said:
Hmm, this means that my unit could be slightly faulty. I have done some
testing, and I have not yet solved the issue of sometimes having no visible
lines even after extremely steep Curves adjustment, noise is visible as
pixel specks, but no lines. Usually the lines are there, something like 20
units (8-bit...), difficult to see at least on my CRT monitor. The exposure
length obviously strengthens the lines, IF they are present.

What I did find out was that my exposure of the dark slide I have used for
testing was ridiculously long. I locked the Auto Exposure using that slide,
and then scanned a non-exposed developed black part of a film strip. The
used exposure cut straight through the film and was EASILY VISIBLE as a
magenta "glow". So I guess pumping up the exposure is not sensible even for
bigger signal to noise ratio, even if there were no lines. This leaves me
with no formula for the "best" exposure. But perhaps I should realize, that
it should be easy to find an exposure that is good enough.

I also found that using longer exposure causes more visible lines than using
shorter exposure and then Curves to compensate for the darkness.

One other thing... Is it difficult for every consumer scanner to properly
capture a very contrasty slide? I have noticed that if a slide has strong
highlights and significant shadow detail, not blowing the highlights means
that the shadow detail is barely visible. The noise partly scrambles the
shadow detail, but 4X multisampling clears most of that problem. So I have
yet to resort to high/low exposure scanning.

My understanding is that getting both highlight and shadow details in a
slide (or negative) is always difficult for a scanner's hw to handle. On
the 5400, there is hw exposure control in the Exposure Control tab to
recover the highlight or the shadow clippings. There are scanners that
do not have such controls. After the scanner's hw captures a scan, its
sw or another imaging editor like PS can recover more of the highlight
or shadow details, provided that they are in the scans captured by hw to
begin with.
I don't think this is completely correct. Yes, negatives have different
control for automatic exposure in the GUI, but you can scan slides without
enabling Auto Exposure. The default exposure (0) is pretty low, so usually
you need to raise that. I guess the Exposure Lock is not needed, because the
HW control already defines exposure. Note that even in this mode the device
compensates for ICE/GD so that the prescan brightness will stay about
constant, if the HW exposure controls are not moved.

How the Auto Exposure option in the Preference works is described in
the manual, and is not made up by me. There are many such details
scattered through the manual.

It is important to have the correct settings in the Preference, or the
previews and scans could look very dark. That may lead one to believe
that that default exposure is low. With my settings in the Preference, a
preview looks close to the slide's original exposure but a little
duller. I only need to adjust the Exposure Control tab for slides that
have exposure problems to begin with, or if there are clippings in the
histograms.

Let us know what your Preference settings are and how you control your
exposure, and we can compare notes.
 
Steven said:
If my white reading is 65000 and the CCD has a DRange of 3 shouldn't I
regard anything less than 65 as black ?
Nay.

Surely I would be wrong to say
14 is brighter than 0.
Nay.

Isn't this all just noise ?
And thrice nay. ;-)

It is definitely not "just noise".

As a trivial example, if the CCD output 0 volts, you could still have an
offset between that and the 0 volt reference of the ADC, resulting in
non-zero data for the CCD output. In any case, the CCD output will not
be 0 volts at the black level anyway, since the output has a limited
linear range.

The CCD does not have a fixed Drange limit of 3 either. It may have a
minimum Drange of 3, a typical Drange of 3 or even, in a very poor
device, a maximum Drange of 3, but you can never say, without measuring
it that it will have a Drange of 3. You can measure the dark level
noise and assess what the Drange is from that, but even then you can
still measure signals which are less than the noise floor if you have
sufficient samples to average.
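A quick simulation of that last claim, with made-up numbers (a 14-unit signal buried under 35 units of gaussian noise):

```python
# Illustration: with enough samples to average, a signal smaller than
# the noise floor can still be measured.  Signal 14 and noise sigma 35
# are assumed figures for the demonstration.
import random

random.seed(1)
signal, sigma, n = 14.0, 35.0, 10000
samples = [signal + random.gauss(0, sigma) for _ in range(n)]
mean = sum(samples) / n

# The error of the mean shrinks as sigma/sqrt(n) -- about 0.35 here,
# so the 14-unit signal emerges clearly from the 35 units of noise.
print(round(mean, 1))
```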
I am not disputing the theory here but was criticising your your example
comparing 0 to 14 which made the difference seem like a serious problem.
Can a difference of less than one quarter of the minimum perceptible
light level be regarded as serious ?
Doesn't that depend on what you consider "serious" ? In the context I
was saying that it is serious in comparison to your original statement
that it was negligible. If it did represent a quarter of the minimum
perceptible difference in light level then it wouldn't be serious, but
it doesn't - it represents a 5 level change on a 0-255 range of steps
that are fairly evenly distributed in perceptual space (gamma 2.2).
Consequently it represents a change similar to that of 5-6bits of data,
which is pretty close to being visible on single random pixels and
readily visible as a posterised edge.
 
Markus said:
One other thing... Is it difficult for every consumer scanner to properly
capture a very contrasty slide? I have noticed that if a slide has strong
highlights and significant shadow detail, not blowing the highlights means
that the shadow detail is barely visible.

Absolutely true on all consumer scanners I tried.
This is where drum scanners really shine (other than actually capturing
more details from the original).
Either you pull the exposure so much that you blow the highlights and
maybe also cause charge bleeding and flare, or you sacrifice shadow
details, because the signal drops so much that is eaten by noise.

Moreover, I've seen that it is close to impossible to combine, say, a first
pass with low exposure to retain highlights and a second pass with long
exposure to see through shadows: you would not have perfect picture
registration, and you'd lose ultimate sharpness.
Of course this is a problem on DSLRs too, but they tend to have better
DR than a slide film / consumer scanner combination... :(

Anyway, a "nailed" CCD exposure, multisampling and dark frame
subtraction are helpful aids.

Fernando
 
My understanding is that getting both highlight and shadow details in a
slide (or negative) is always difficult for a scanner's hw to handle. On
the 5400, there is hw exposure control in the Exposure Control tab to
recover the highlight or the shadow clippings. There are scanners that
do not have such controls. After the scanner's hw captures a scan, its
sw or another imaging editor like PS can recover more of the highlight
or shadow details, provided that they are in the scans captured by hw to
begin with.

Yeah, with hw controls one can at least get all the detail using two scans
and combining them. As I said, I have always been able to save enough shadow
detail by using Auto Exposure to get highlights just without clipping and
multisampling to save the shadow detail. Of course, by combining two scans
the result might be even better.

I guess that negative films usually pack the same DR present when
shooting a scene into a narrower film DR than slide films do.
....
It is important to have the correct settings in the Preference, or the
previews and scans could look very dark. That may lead one to believe
that that default exposure is low. With my settings in the Preference, a
preview looks close to the slide's original exposure but a little
duller. I only need to adjust the Exposure Control tab for slides that
have exposure problems to begin with, or if there are clippings in the
histograms.

Let us know what you Preference settings are and how you control your
exposure, and we can compare notes.

Actually I have not had much problem with exposure. Or I have had one
problem: scanning dark slides with ridiculously long exposure due to blindly
using the Auto Exposure for slides option without compensation for the
darkness. This also escalated the "lines" fault.

To me it seems that the prescan and final scan are quite close to each
other, not depending whether I use Auto Exposure for slides or not. I think
that for dark slides finding sensible exposure is easier by starting from
the "base" exposure without automatics and increasing the exposure using hw
controls.

It is true that I have lately found that even though the Minolta software
might show no clipping, the PS histogram might show mild clipping. Perhaps
this happens, because I use normal 16-bit mode, not linear? I have not
learned all the benefits of using linear mode, but I did notice in one
instance, that using linear produced more pure dark colors. I guess I must
use manual gamma and black point settings to get the benefits of linear
mode? I do not expect that using linear mode when scanning and posi-linear
Minolta profile in PS would lead to much improvement? Well ok, I did say
that colors were better at least with one slide...

--markus
 
Nay.
Nay.
And thrice nay. ;-)

It is definitely not "just noise".

Oops, three strikes and I'm out.

If I multi-sample (100x) a black bar and check the results for each
sample point I find the mean deviation of each reading from the average
for its sample point is 35 or more.

Samples Avg Range Error
Value Avg/Max Avg/Max
R 4000 69 241/408 49/65
G 4000 56 176/284 35/47
B 4000 62 186/296 37/48

It is average errors such as these that make me think 14 is
insignificant.

For white readings the errors are larger but proportionally smaller.

Samples Avg Range Error
Value Avg/Max Avg/Max
R 4000 62204 1859/3008 296/384
G 4000 62269 1309/1992 207/264
B 4000 62366 1227/2140 193/250

Do these ranges/errors seem reasonable ? If your scanners provide much
tighter results then perhaps an offset of 14 is significant.
The CCD does not have a fixed Drange limit of 3 either. It may have a
minimum Drange of 3, a typical Drange of 3 or even, in a very poor
device, a maximum Drange of 3, but you can never say, without measuring
it that it will have a Drange of 3. You can measure the dark level
noise and assess what the Drange is from that, but even then you can
still measure signals which are less than the noise floor if you have
sufficient samples to average.

Isn't my black test above showing the noise level ? I thought the
ranges and errors are due to noise but maybe I'm wrong.
it doesn't - it represents a 5 level change on a 0-255 range of steps
that are fairly evenly distributed in perceptual space (gamma 2.2).

I agree that 0-14 in 16-bit becomes 0-5 in 8-bit gamma 2.2. If I could
get reliable readings at such low levels I would be a happy person.

-- Steven
 
Steven said:
Oops, three strikes and I'm out.

If I multi-sample (100x) a black bar and check the results for each
sample point I find the mean deviation of each reading from the average
for its sample point is 35 or more.

Samples Avg Range Error
Value Avg/Max Avg/Max
R 4000 69 241/408 49/65
G 4000 56 176/284 35/47
B 4000 62 186/296 37/48

It is average errors such as these that make me think 14 is
insignificant.
I wouldn't suggest that I know what makes you think. ;-)

Seriously though, you are talking about different things here, random
noise and systematic error, so your comparisons are not quite valid. A
systematic error of 14 on a background of 35 random noise would be very
visible as a line structure. In the first test, where we arrived at a
difference of 14 on the medium channel (and remember that one was a lot
worse than that) we were comparing the signal from different dark
exposures. That error signal would be present on all pixels in the
image, a systematic error, and would be compared to the noise on those
pixels. However, the noise on each pixel is essentially random, so it
is averaged by eye across a large number of pixels and consequently the
perceived level reduces significantly. The error signal, due to
mis-calibration, is not reduced by averaging because it is present and
equal on every sample output by the same CCD cell.
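This distinction is easy to demonstrate numerically. The sketch below uses the 14 (systematic) and 35 (random) figures from the discussion: averaging down a column washes out the random noise but leaves the per-cell offset intact.

```python
# A rough numeric sketch of the argument: per-pixel random noise
# averages down along a column of the image, while a systematic
# per-cell offset (a "line") does not.  14 and 35 are the example
# values from the thread.
import random

random.seed(0)
lines, sigma, offset = 400, 35.0, 14.0

# a column with only random noise vs one with noise plus a fixed offset
plain = [random.gauss(0, sigma) for _ in range(lines)]
lined = [offset + random.gauss(0, sigma) for _ in range(lines)]

avg_plain = sum(plain) / lines   # tends toward 0
avg_lined = sum(lined) / lines   # tends toward the systematic 14

print(round(avg_plain, 1), round(avg_lined, 1))
```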

Now, how much averaging occurs in an image really depends on a lot of
factors, including output resolution, viewing distance and the type of
structure that the systematic error introduces. Experiments that I
conducted about 15-20 years ago with a large number of military trained
observers actually examined the visibility of line errors in an image
and, to be imperceptible, the systematic error needed to be a *lot*
better than the noise of the image even on much lower resolution images
that you will get from a film scanner. I am not permitted to reveal the
actual threshold, due to the nature of that work, but it is a
surprisingly large factor lower than the noise. However it is easy
enough these days to run your own tests that will confirm this and
arrive at a factor that is acceptable to you.

Also, I am not quite sure what your measurements are above, because you
refer to multiple samples of 100 in the text and 4000 samples in the
table. If these are true 100x multisample results then the noise
(error?) seems ridiculously high. Similarly you have a range average
and max in the table which seem unrelated to the error average and max.
So I don't follow what this data is or how it relates to the noise in
your scanner.

This should be relatively simple to tabulate. The average signal is
just the arithmetic mean across the number of samples, ie. Sum(x)/n,
where x is the data for each sample and n is the number of samples. The
Noise on that signal, assuming a white noise spectrum ie. a gaussian
distribution, is just the standard deviation of the data, or
sqrt((n*sum(x^2) - sum(x)^2) / (n*(n-1)))
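Written out as code, the two formulas look like this (a direct transcription, checked against Python's statistics module; the data values are arbitrary):

```python
# The mean and noise formulas above, written out directly.  The noise
# expression is the sample standard deviation:
# sqrt((n*sum(x^2) - sum(x)^2) / (n*(n-1))).
import math
import statistics

def mean_and_noise(data):
    n = len(data)
    s, s2 = sum(data), sum(x * x for x in data)
    mean = s / n
    noise = math.sqrt((n * s2 - s * s) / (n * (n - 1)))
    return mean, noise

data = [64, 52, 57, 80, 40, 120, 69, 56]          # arbitrary sample values
m, sd = mean_and_noise(data)
assert abs(sd - statistics.stdev(data)) < 1e-9    # matches the library result
```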
For white readings the errors are larger but proportionally smaller.

Samples Avg Range Error
Value Avg/Max Avg/Max
R 4000 62204 1859/3008 296/384
G 4000 62269 1309/1992 207/264
B 4000 62366 1227/2140 193/250

Do these ranges/errors seem reasonable ? If your scanners provide much
tighter results then perhaps an offset of 14 is significant.
In terms of the noise on the black level, or rms weighted average
deviation from the mean, 35 seems pretty large. You can check this
against the results I posted in another thread recently which showed the
progressive degradation of the black level and its noise on an LS-4000
scanner, at http://www.kennedym.demon.co.uk/results/gamma10.jpg
where the noise on the black level is less than 5 in the 16-bit range.
That data is from a 16x multisample, which will reduce the noise from
the single sample reading, but it is from an extremely long exposure
which would more than balance that out.

In terms of the random noise on the white level, that will be dominated
by the random shot noise of the photons that are producing the input
signal on the CCD. This is just the square root of the total number of
photons captured during the exposure. With linear CCDs used in
scanners, this typically produces a noise which is comparable with the
quantisation noise of only an 8-bit system so, depending on what they
actually mean, your figures for average error don't look too out of
place.
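As a rough check of the shot-noise claim: the full-well figure below is an assumption for illustration, not a measured value for any particular scanner CCD.

```python
# Photon shot noise is sqrt(N) for N captured photons, so the best
# achievable SNR at full scale is also sqrt(N).  The 60,000-electron
# full well is an assumed figure for illustration only.
import math

full_well = 60_000
shot_noise = math.sqrt(full_well)        # ~245 electrons rms
snr = full_well / shot_noise             # = sqrt(N), ~245

# comparable to the 256 levels of an 8-bit system, as stated above
print(round(snr))
```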
Isn't my black test above showing the noise level ? I thought the
ranges and errors are due to noise but maybe I'm wrong.
It is showing some variation, I am not sure it is a measure related to
noise.
 
Moreover, I've seen that it is close to impossible to combine, say, a first
pass with low exposure to retain highlights and a second pass with long
exposure to see through shadows: you would not have perfect picture
registration, and you'd lose ultimate sharpness.

That's exactly what I have been doing for some time now!

The scans will not align perfectly, and some scanners are better than
others, but on my Nikon LS-50 after 100s of scans - no, make that
1000s of scans! - the misalignment has so far always been less than 1
pixel.

This means, the two images must be sub-pixel aligned before they are
merged.

The second problem is color "sync", as I call it. Namely, scanning at
different exposures (once for highlights and once for shadows) will
result in different color balance of the two scans due to the
non-linear response.

There are two solutions to this: theoretical and pragmatic.

The theoretical solution requires an exposure response curve for each
film being scanned. Practically, that is close to impossible to do
especially if your film is 20-year old Kodachromes, which is what I've
been wrestling with...

So, I solve this by using the pragmatic method. First I decide on a
point where the two images will merge (working with 2.2 gamma images I
make my life more difficult but the "sweet spot" is around 32 on the
histogram).

After that I create a band-pass filter (beware of luminance because
it's a "weighted" calculation!) to isolate - in my case - the
histogram bin at 32 (of course, working with 16-bit that's actually
256 bins, but never mind...).

I measure the color at this point on both scans, and then create a
curve for the shadow scan to bring it down to the nominal (highlights)
scan. After that I can merge the two images with a hard edge cut-off.

There are various "sloppy" methods to do this, commonly known as
"contrast masking" using the Gaussian Blur filter or Photoshop's
blending modes, but they all let image data from "undesired" portion
of the image "bleed" into the good part. Because of that those methods
don't work when the images are merged in the middle of a gradient, but
acceptable results can be achieved with images where the border
between shadows and highlights is well defined.

With the above method, however, there is no "mixing" of images.
Instead, the shadows scan only contributes the shadows, and the
highlights scan only contributes the highlights.

Effectively, this turns any scanner into a variable dynamic range
scanner. In my case, my 14-bit scanner acts as an 18-bit scanner. The
results are fantastic but the process is very time consuming.
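A much-simplified sketch of this merge, for a single grayscale channel, with a trivial linear scale standing in for the matching curve; `shadow_at_point` (the shadow scan's value at the merge point) is a hypothetical measurement, and real use would need the sub-pixel alignment described above.

```python
# Hard-edge merge of two exposures, as described in the text: shadows
# come only from the shadow scan, highlights only from the highlight
# scan.  The matching "curve" here is just a linear scale that makes
# the two scans agree at the merge point (32 on a 0-255 scale);
# shadow_at_point is a hypothetical measured value.
def merge_exposures(highlight_px, shadow_px, merge_point=32, shadow_at_point=64):
    scale = merge_point / shadow_at_point   # match the scans at the merge point
    out = []
    for h, s in zip(highlight_px, shadow_px):
        # below the cut-off take the (rescaled) shadow scan, above it
        # the highlight scan -- no mixing of the two images
        out.append(s * scale if h < merge_point else h)
    return out

# hypothetical pixel rows from the two scans of the same scene
highlights = [5, 20, 32, 100, 240]
shadows = [22, 48, 64, 200, 255]   # longer exposure
print(merge_exposures(highlights, shadows))
```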

Don.
 
Seriously though, you are talking about different things here, random
noise and systematic error, so your comparisons are not quite valid. A
systematic error of 14 on a background of 35 random noise would be very
visible as a line structure. In the first test, where we arrived at a
difference of 14 on the medium channel (and remember that one was a lot
worse than that) we were comparing the signal from different dark
exposures. That error signal would be present on all pixels in the
image, a systematic error, and would be compared to the noise on those
pixels. However, the noise on each pixel is essentially random, so it
is averaged by eye across a large number of pixels and consequently the
perceived level reduces significantly. The error signal, due to
mis-calibration, is not reduced by averaging because it is present and
equal on every sample output by the same CCD cell.

This is an excellent argument for the value of differences less than the
noise level.
This should be relatively simple to tabulate. The average signal is
just the arithmetic mean across the number of samples, ie. Sum(x)/n,
where x is the data for each sample and n is the number of samples. The
Noise on that signal, assuming a white noise spectrum ie. a gaussian
distribution, is just the standard deviation of the data, or
sqrt((n*sum(x^2) - sum(x)^2) / (n*(n-1)))

Doesn't StdDev = sqrt (sum(x^2) / n) ? This is different from your
formula so I added both to my reporting code.

Exposure = 2, Scanlines = 100
Sample Avg Range Error Std Dev RKM Dev
points Value Avg/Max Avg/Max Avg/Max Avg/Max
R 4000 64 235/384 48/62 58/73 32/45
G 4000 52 172/276 35/44 42/51 23/32
B 4000 57 181/288 36/46 44/55 24/34

This is the black scan data that I use to calculate my digital offsets
when calibrating the scanner so the apparently high average values
aren't a problem, they are just the best I can get by adjusting the
analogue offsets.

The ranges are high (depressingly so) but I think they are valid as I
would expect the range to be four times the error.

I did see your gamma10.jpg and your scanner seems miles better than
mine. I have only tested my code on two other FS4000's and the results
were similar to mine. Scans done with Vuescan on my scanner also show
similar noise.

Thanks for your effort and replies.

-- Steven
 
Markus said:
Yeah, with hw controls one can at least get all the detail using two scans
and combining them. As I said, I have always been able to save enough shadow
detail by using Auto Exposure to get highlights just without clipping and
multisampling to save the shadow detail. Of course, by combining two scans
the result might be even better.

I guess that negative films usually pack the same DR present when
shooting a scene into a narrower film DR than slide films do.


Actually I have not had much problem with exposure. Or I have had one
problem: scanning dark slides with ridiculously long exposure due to blindly
using the Auto Exposure for slides option without compensation for the
darkness. This also escalated the "lines" fault.

To me it seems that the prescan and final scan are quite close to each
other, not depending whether I use Auto Exposure for slides or not. I think
that for dark slides finding sensible exposure is easier by starting from
the "base" exposure without automatics and increasing the exposure using hw
controls.

It is true that I have lately found that even though the Minolta software
might show no clipping, the PS histogram might show mild clipping. Perhaps
this happens, because I use normal 16-bit mode, not linear? I have not
learned all the benefits of using linear mode, but I did notice in one
instance, that using linear produced more pure dark colors. I guess I must
use manual gamma and black point settings to get the benefits of linear
mode? I do not expect that using linear mode when scanning and posi-linear
Minolta profile in PS would lead to much improvement? Well ok, I did say
that colors were better at least with one slide...

If I scan in 16-bit linear mode and convert the scan to posi-linear
profile in PS, I do notice the PS histograms will shift from the Minolta
histograms. If I scan in 16-bit non-linear mode and set up the Color
Match preference properly, there is no need to convert to posi-linear in
PS and the PS histograms do not shift. In both approaches, the converted
scans will look the same in PS. It is important to set up the preference
correctly.
 
Steven said:
This is an excellent argument for the value of differences less than the
noise level.
Indeed - take a look at some of Fernando's posted results from his
Minolta. The worst of those lines are just about the same amplitude as
the noise.

It is also, in essence, the same fundamental principle that makes
half-toning work, so it isn't all bad - if it wasn't for this effect you
probably would have a hard time getting any hard copies of those scans
you are making! ;-)
Doesn't StdDev = sqrt (sum(x^2) / n) ? This is different from your
formula so I added both to my reporting code.

That depends what 'x' is. ;-)

Your formula is almost correct if x is the deviation of each data point
from the mean of all the data.

In my formula x is simply the data point, although it can also be the
deviation since a constant offset, such as the mean, is transparent to
it.

There is also a minor difference in the denominators, between n and
sqrt(n*(n-1)). This is Bessel's correction: the mean is estimated from
the samples themselves, so the sample (n-1) form is the correct one
here. However, for large values of n the two denominators converge,
and with 100 samples this should only represent about a 0.5%
difference.

So, with the correct 'x' used, each formula should give very similar
results. Since they don't, there is clearly an error in the calculation
somewhere.
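The two formulas can be reconciled with a short sketch (Python here,
purely illustrative - the function and variable names are my own, not
from either poster's code):

```python
import math
import random

def sd_deviations(data):
    # sqrt(sum(x^2)/n) with x taken as each reading's deviation from
    # the mean - the mean is needed first, so two runs through the data
    n = len(data)
    mean = sum(data) / n
    return math.sqrt(sum((v - mean) ** 2 for v in data) / n)

def sd_one_pass(data):
    # the same quantity from the raw readings in a single run, using
    # the sqrt(n*(n-1)) denominator, so no prior mean is required
    n = len(data)
    s = s2 = 0.0
    for v in data:
        s += v
        s2 += v * v
    return math.sqrt((n * s2 - s * s) / (n * (n - 1)))

random.seed(0)
readings = [random.gauss(58, 10) for _ in range(100)]
# with n = 100 the two results differ only by sqrt(100/99), about 0.5%
print(sd_deviations(readings), sd_one_pass(readings))
```

With the correct 'x' fed in, the two agree to within the ~0.5%
denominator difference discussed above.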
Exposure = 2, Scanlines = 100
Sample Avg Range Error Std Dev RKM Dev
points Value Avg/Max Avg/Max Avg/Max Avg/Max
R 4000 64 235/384 48/62 58/73 32/45
G 4000 52 172/276 35/44 42/51 23/32
B 4000 57 181/288 36/46 44/55 24/34

This is the black scan data that I use to calculate my digital offsets
when calibrating the scanner so the apparently high average values
aren't a problem, they are just the best I can get by adjusting the
analogue offsets.

The ranges are high (depressingly so) but I think they are valid as I
would expect the range to be four times the error.
I am still not sure what these values actually are - can you explain
what "range" and "error" are?

I assume that Avg is just the average output of all 100 lines of 4000
pixels.

I know this seems obvious to you, but I can't see where you have
actually defined these anywhere in your posts so far, and minor
differences in what you mean can be significant.
I did see your gamma10.jpg and your scanner seems miles better than
mine.

Well, I guess it depends on which figures you are comparing here.
However my pk-pk variation between all of the cells after calibration
appears to be around 5 or so, after some filtering along the scan
direction to eliminate random noise from the estimate - filtered.jpg. If
you are comparing the unfiltered data in gamma10.jpg then the pk-pk
non-uniformity value of 35 doesn't look too much better than the
62,44,46 you have as the maximum error above. Also, note that my
figures are residual non-uniformity on the calibrated scan. From what
you say above, your data refers to the calibration scan itself.

Are we comparing apples and oranges?
 
So, with the correct 'x' used, each formula should give very similar
results. Since they don't, there is clearly an error in the calculation
somewhere.

You're right. I was feeding your formula with the errors rather than
the values. Results now are consistent.

Exposure = 2, Scanlines = 100
Sample Avg Range Error Std Dev RKM Dev
points Value Avg/Max Avg/Max Avg/Max Avg/Max
R 4000 58 228/384 47/60 56/72 56/73
G 4000 52 171/268 35/45 42/54 42/55
B 4000 58 182/300 36/46 44/54 44/54
I am still not sure what these values actually are - can you explain
what "range" and "error" are?

I have 100 readings for each sample point. The range is the difference
between the lowest and highest readings. The max range is the highest
of the 4000 ranges for the colour. The average range is the mean of the
4000 ranges.

The error is the mean deviation of the 100 readings for a sample point
from the mean of the 100 readings. The max error is the worst mean
deviation I find for the colour, and the average error is the mean of
the 4000 per-cell errors for the colour. I will label the column 'Mean
dev'.
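Those definitions might be sketched like this (Python, illustrative
names of my own choosing):

```python
def cell_stats(readings):
    # readings: all scanline samples (here 100) taken from one CCD cell
    n = len(readings)
    mean = sum(readings) / n
    rng = max(readings) - min(readings)                  # "Range"
    mean_dev = sum(abs(v - mean) for v in readings) / n  # "Error" / "Mean dev"
    return mean, rng, mean_dev

# example: a cell whose readings average 80 and span 40..120
mean, rng, mean_dev = cell_stats([40, 70, 80, 90, 120])
# mean = 80, range = 80, mean deviation = 20
```

The reported Avg/Max columns then come from summarising the 4000
per-cell results: their mean and their worst case respectively.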
I assume that Avg is just the average output of all 100 lines of 4000
pixels.

Yes, Avg is just the mean of the 400,000 readings for the colour.
I know this seems obvious to you, but I can't see where you have
actually defined these anywhere in your posts so far, and minor
differences in what you mean can be significant.

My presentation in this thread has been woeful. Do the definitions
above help? I hope my stats are meaningful and I'm not reporting
irrelevant details.

I have now changed my code to precede each image scan with a black scan
so I do black level calibration using the exposure settings determined
for the image scan. This must be progress because it is slower and more
complicated.
Are we comparing apples and oranges?

Yes. I will try to repeat your test on my scanner.

-- Steven
 
If I scan in 16-bit linear mode and convert the scan to posi-linear
profile in PS, I do notice the PS histograms will shift from the Minolta
histograms. If I scan in 16-bit non-linear mode and set up the Color
Match preference properly, there is no need to convert to posi-linear in
PS and the PS histograms do not shift. In both approaches, the converted
scans will look the same in PS. It is important to set up the preference
correctly.

OK, I thought that all color correction work and other actual image
processing is better to do in PS? Perhaps I should check the Color Match
feature when...

There might be some differences in the terminology we use. As I understand
it, when scanning in linear mode, the scan will use Minolta posi-linear
profile. Thus in Photoshop I assign the posi-linear profile and convert to
working RGB (Adobe RGB). PS will open a dialog which lets me do it with one
mouseclick.

I have now started using linear mode for all scans, because I have seen
more cases where it provides better color accuracy.

--markus
 
Steven said:
You're right. I was feeding your formula with the errors rather than
the values. Results now are consistent.
OK - that makes more sense. The formula I gave allows you to work out
the SD without having to calculate the mean first, so you only have one
run through the data set. I am pretty sure it works out to be faster
too - at least for large arrays it does.
Sample Avg Range Error Std Dev RKM Dev
points Value Avg/Max Avg/Max Avg/Max Avg/Max


I have 100 readings for each sample point. The range is the difference
between the lowest and highest readings. The max range is the highest
of the 4000 ranges for the colour. The average range is the mean of the
4000 ranges.
So the range is the pk-pk variation from each cell of the CCD measured
over 100 samples. The average range is the average pk-pk noise across
all 4000 cells, and the maximum is the worst - ie. the highest pk-pk
noise cell.

So your range column is just a measure of the pk-pk noise on the
individual cells. Have I got that right?
The error is the mean deviation of the 100 readings for a sample point
from the mean of the 100 readings.
Right - I think I see where the confusion is coming from. Here you seem
to be again calculating a value related to the noise on each cell.
However in this case you are calculating an arithmetic average of the
noise in each cell, which would only be meaningful if the noise had a
flat distribution - with big errors equally as common as small errors,
such as you would get from a random number generator. However, I am
sure that if you plot the distribution of the errors, you will find
that it is much closer to a poisson distribution than a top hat, with
big errors much less common than small errors. So the standard
deviation is what you
should be using here, and this will give you the same value as a noise
power meter would if you connected it directly to each cell (with the
appropriate scaling, of course).
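For bell-shaped noise the two measures differ by a nearly constant
factor: the mean absolute deviation of gaussian noise is sqrt(2/pi),
about 0.8, of its standard deviation - which is roughly the ratio the
Error and Std Dev columns above show (e.g. 47/56 is about 0.84). A
quick check (Python, illustrative):

```python
import math
import random

random.seed(1)
noise = [random.gauss(0.0, 10.0) for _ in range(100000)]

mean = sum(noise) / len(noise)
sd = math.sqrt(sum((v - mean) ** 2 for v in noise) / len(noise))
mad = sum(abs(v - mean) for v in noise) / len(noise)

# for gaussian noise mad/sd tends to sqrt(2/pi), about 0.798
print(mad / sd)
```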
The max error is the worst mean
deviation I find for the colour and the average error is the mean of the
4000 'error's for the colour. I will label the column 'Mean dev'.

OK, now I am completely confused! (I had a quick look at this last
night but couldn't follow the logic then, so I waited until I was less
tired to have another read - but still fall over at the same point.)
Here you seem to be ignoring what you say above about calculating error
and are now calculating a quantity based on the deviation of the mean of
each cell from the mean of all 4000 mean cells. This would be a direct
comparison to the values that I am calculating, with Max Error being
comparable to my pk-pk non-uniformity. For the same reason as above
though, I don't think the Avg error is a statistically relevant value,
and this should be the SD.
My presentation in this thread has been woeful.

Not really, you had it clear in your own mind what you were trying to
measure, and now we are trying to match this with another set of data
calculated by someone else that has what they were trying to measure
clear in their mind. It's just a case of making sure that we have the
same measurements, especially when we are calling them the same.
Do the definitions
above help?

I think so, although I am still a bit unsure about what your Error
actually is due to the change of description I have explained above.
I hope my stats are meaningful and I'm not reporting
irrelevant details.

I think your average figures may well be, simply because of the nature
of noise.
I have now changed my code to precede each image scan with a black scan
so I do black level calibration using the exposure settings determined
for the image scan. This must be progress because it is slower and more
complicated.
Copernican astronomy was progress from Ptolemaic astronomy because it
got rid of the complications of orbital epicycles. More complicated
isn't always better. ;-)
 
On Sun, 13 Mar 2005 18:54:40 +0000, Kennedy McEwen

[ snip ]
So the range is the pk-pk variation from each cell of the CCD measured
over 100 samples. The average range is the average pk-pk noise across
all 4000 cells, and the maximum is the worst - ie. the highest pk-pk
noise cell.

So your range column is just a measure of the pk-pk noise on the
individual cells. Have I got that right?

Yes.

[ snip ]
OK, now I am completely confused! (I had a quick look at this last
night but couldn't follow the logic then, so I waited until I was less
tired to have another read - but still fall over at the same point.)

I have changed my headings here to hopefully make it understandable.
This is definitely my problem as any report is useless if unclear.

Exposure = 2, Scanlines = 100
Cells Avg Peak-pk Mean Dev Std Dev RKM Dev
Value Avg/Max Avg/Max Avg/Max Avg/Max
R 4000 58 228/384 47/60 56/72 56/73
G 4000 52 171/268 35/45 42/54 42/55
B 4000 58 182/300 36/46 44/54 44/54

My testing here is different from yours as I am processing on a cell by
cell basis rather than a scanline by scanline basis. If scanlines are
vertical the input table I have is 100 columns of 4000 rows for each
colour.

For each cell (row of 100 readings) I calculate peak-to-peak, mean
deviation, and standard deviation (twice). I then use these answers to
update the summaries that appear in the report. I also calculate the
mean for each cell and keep this as my digital offset for the cell.
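That cell-by-cell pass could be sketched as follows (Python; the table
layout and names are my assumptions about Steven's code, not his actual
implementation):

```python
def summarise(table):
    # table: one colour's data, 4000 rows (cells) x 100 readings each
    ranges, mean_devs, offsets = [], [], []
    for readings in table:
        n = len(readings)
        mean = sum(readings) / n
        ranges.append(max(readings) - min(readings))
        mean_devs.append(sum(abs(v - mean) for v in readings) / n)
        offsets.append(mean)  # kept as the cell's digital offset
    report = {
        "range_avg": sum(ranges) / len(ranges),
        "range_max": max(ranges),
        "meandev_avg": sum(mean_devs) / len(mean_devs),
        "meandev_max": max(mean_devs),
    }
    return report, offsets
```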
Copernican astronomy was progress from Ptolemaic astronomy because it
got rid of the complications of orbital epicycles. More complicated
isn't always better. ;-)

When I first read this thread I thought about black level shift when
changing exposure so I did black scans at the nine possible exposures on
the FS4000 and couldn't see much difference (certainly no visible
difference). Keeping nine black level tables is extra work and doesn't
really address the problem because my auto-exposure logic can select an
exposure between two of the allowed settings. I can achieve this by
selecting the slower setting and reducing the shutter duty-cycles
slightly. So now I re-calculate the black level table after I have
loaded all the exposure settings.
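The in-between exposure trick could look something like this (Python
sketch; the step values are invented for illustration, not the FS4000's
real exposure table):

```python
# hypothetical exposure steps - the real FS4000 firmware values differ
ALLOWED = [2, 3, 4, 6, 8, 12, 16, 24, 32]

def split_exposure(target):
    # pick the slowest allowed setting at or above the target, then
    # trim the shutter duty cycle to make up the difference
    for setting in ALLOWED:
        if setting >= target:
            return setting, target / setting  # (setting, duty-cycle fraction)
    return ALLOWED[-1], 1.0

setting, duty = split_exposure(10)  # setting 12, duty cycle 10/12
```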

The software overhead is minimal and the extra time per image scan is
acceptable but what I don't like is that I have to turn the lamp off for
the black scan and then turn it on for the image scan. I am hoping that
switching the lamp on and off won't cause any problems.

-- Steven
 
Markus said:
OK, I thought that all color correction work and other actual image
processing is better to do in PS? Perhaps I should check the Color Match
feature when...

I too belong to this school of raw scanning. That's why I limit the
5400's sw to controlling only the scanner's hw, and correct everything
else in PS.
There might be some differences in the terminology we use. As I understand
it, when scanning in linear mode, the scan will use Minolta posi-linear
profile. Thus in Photoshop I assign the posi-linear profile and convert to
working RGB (Adobe RGB). PS will open a dialog which lets me do it with one
mouseclick.

I started out with this workflow, but found it to have a couple of
problems. The scanner preview and the scans BEFORE assigning to the
posi-linear profile both look too dark. There is also a shift in the PS
histograms after assigning the profile. Using a slightly different
workflow eliminates both of these problems. Instead of scanning linear,
scan nonlinear which automatically assigns the posi-linear profile at
the 5400. The preview and scan look the same and are not dark, and there is
no histogram shift in PS. To do this, set up the Preference correctly
using its Color Match. I learned this here from Fernando, I think.

The confusion in our posts may be due to how we adjust exposure at the
5400. I only use the hw Exposure Control tab, and not the sw Image
Correction tab.
I have now started using linear mode for all scans, because I have seen more
cases there it provides better color accuracy.

I see no such difference in the scans using the two approaches described
above. It makes sense since both approaches turn a linear scan into a
non-linear one using the same posi-linear profile. The only difference
is where the profile assignment is being done.
 
Fernando said:
If you use the same exposure for the dark scan ,and of course if you
work in 16 bits/channel Linear, the dark scan should have ... "similar"
lines. :)

Well, I still have not tried it. I believe it would help, but I do not
like to go to two-pass scanning; scanning is already tedious enough. :-)

However, using your 16-bit/linear idea, I was able to confirm that the
error is completely linear, and that it also increases linearly from
left to right. This makes it possible to create a Photoshop CS action
that almost completely removes the error signals by using maximum-error
sampling and a difference-mode layer with a gradient layer mask.

--markus
 