Don said:
I don't recall mentioning that.

My mistake, then. Might have been Analog Gain.
Oh I just love it when numpties bring up that hoary old piece of urban legend...

Don said:
Theory is very nice - in theory... And as you know I always thirst to
learn more of it, but - as can be seen below - practice (i.e. context)
plays a significant part in real life situations. Which all reminds me
of...
In theory, a bumblebee was once declared insufficiently aerodynamic to
be able to fly. In practice, numerous bumblebees strongly disagreed...
No, in practice. You still haven't achieved the effect that multiscan...

And this bumblebee also strongly disagrees... ;o)
Do note that the stress should be on the "bee" part (as in "worker
bee"), not on the "bumble" part... ;o)
In theory...
According to you, due to Photoshop's 15-bit and integer math
"limitations", multi-pass multi-scanning only makes sense with up to 2
scans. And yet...
I have scanned 18 times and then threw away the 2 most extreme scans.
Next I used Photoshop to combine the remaining 16 scans in increments
of 4 to end up with: 4x, 8x, 12x and 16x multiscans. Comparing those
there is a clear and incremental reduction of noise at each junction
with 16x nearly eliminating all, perhaps with only about 1% of shadows
- if that - still having some very minor noise.
However, there was no increase in shadow detail.
What a coincidence - worked out the area under a 0.3 pixel radius...

As I already
mentioned, a very similar effect can be achieved by simply selecting
the shadows (threshold = 32) and applying 0.3 Gaussian Blur. The
multiscanned images are still slightly superior in that they appear a
tad sharper.
Please repeat this (it's a genuine question because I really want to
know) and report if you see any difference between the two. I
understand if you can't, because sub-pixel alignment of multi-pass
scans is excruciating and very time consuming. It took me about a
day per image!
I have no indication that Nikonscan uses floating point arithmetic, and...

BTW, do you have any references that NikonScan uses floating point
(when calculating multiscanned images) and not integer math like
Photoshop? I can't find anything in the manual.
I think it was. ;-)

Don said:
My mistake, then. Might have been Analog Gain.
The objective here is to make sure that the printer driver is not...

Howard said:
Kennedy and Don,
Thank you. I appreciate both perspectives.
Kennedy, one thing I must note is that your final message to ME (not
Don), was a surprise.
Based on your detailed report on the old "Epson-Inkjet" list a few
years ago, I've always scanned at the integer divisor of the scanner's
maximum optical resolution (e.g., on a 2400 ppi scanner: 300, 400,
600, 800, 1200 or 2400) that would give me the largest size photo I
wanted without causing the printer resolution to fall below the 240 -
300 range --- BUT LETTING THE FINAL DPI "FALL WHERE IT MAY" WITH NO
RESAMPLING IN PHOTOSHOP.
But based on your post to me above, I see that now (perhaps due to
newer and better printers/drivers) you suggest RESAMPLING THE FINAL
PHOTOSHOP RESOLUTION TO THE NEAREST OF 240, 360, or 720 DPI. Although
you did not say so, my guess is that you suggest that such resampling
in Photoshop be a DOWNsample.
If I interpret all this correctly, then this will be a significant
change in my workflow. But change for the better is a good thing!
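Howard's rule - scan at an integer divisor of the scanner's optical maximum - can be sketched numerically. A minimal illustration (the 2400 ppi figure comes from his post; the function name is mine):

```python
def integer_divisor_resolutions(optical_ppi):
    # every scan resolution that divides the scanner's optical
    # maximum evenly (each is optical_ppi // k for a divisor k)
    return [optical_ppi // k for k in range(1, optical_ppi + 1)
            if optical_ppi % k == 0]

# On a 2400 ppi scanner this includes the values Howard lists:
# 300, 400, 600, 800, 1200 and 2400.
divisors = integer_divisor_resolutions(2400)
```

Any of these resolutions maps whole sensor pixels onto output pixels, avoiding resampling in the scanner driver itself.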
....
Sorry Don, but theory is important to
understanding what is happening, what to expect and why.
As for your
allegorical urban legend: if you do some research into that you will
find it is quite appropriate to your current situation and problem - not
understanding the full story.
As there should be, but the following indicates that you have hit a
precision limit.
--- cut ---

Nevertheless, remember
that the first step in your methodology is to reduce the opacity of the
top layer by 50% - which means you have already exhausted the 15 bit
capacity of PS, so I am not surprised you cannot see much improvement.
The calculation I gave above did not take account of the methodology you
have previously suggested. The noise texture change you perceive
suggests that you are just looking at truncation limits.
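Kennedy's truncation argument is easy to demonstrate numerically. The sketch below uses illustrative values only and does not claim to reproduce Photoshop's actual internals: it compares a full-precision average of 16 noisy readings against repeated pairwise 50%-opacity merges that round down to an integer at every step, as an integer pipeline would.

```python
import random

random.seed(0)
TRUE_LEVEL = 100  # hypothetical shadow value, in raw counts
scans = [TRUE_LEVEL + random.randint(-8, 8) for _ in range(16)]

# Full-precision (floating point) average of all 16 frames
float_avg = sum(scans) / len(scans)

def truncating_pairwise(vals):
    # Layer-style merging: average frames two at a time, truncating
    # to an integer after each merge, as integer math would
    while len(vals) > 1:
        vals = [(a + b) // 2 for a, b in zip(vals[::2], vals[1::2])]
    return vals[0]

int_avg = truncating_pairwise(list(scans))
# int_avg can only be <= float_avg: each truncation discards the
# sub-LSB information that multiscanning is supposed to accumulate
```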
But shadow detail is limited by noise so, if you are achieving the
reduction you claim then you would be seeing increased shadow detail if
it is there - and it is, as your following test shows:
Multiscan does bring out the additional shadow detail that I would
expect - as I have already mentioned to you in another post. In fact,
to prove this I sandwiched a slide with a piece of unexposed and
developed film - it was the only way of getting controllably dense
material. However, by the time you get to 16 frame multiscan the
improvement is flattening off. Up to 8 frames works pretty close to
simple theory.
Don said:Absolutely! Which is exactly why I wrote above that I always thirst to
learn more.
Which made it an irrelevant point, since I am telling you the result of...

My point was that there is a difference between pure
theory and theory applied in practice.
Case in point: My brand new bouncing baby LS-50 is rated at 14-bits.
In theory, that should give me more than enough dynamic range. And yet
in practice (as you yourself very comprehensively explained recently)
photons tend to disagree and there is noise in dark areas - hence this
discussion about multiscanning...
So why are you ignoring the obvious errors in your theory that 16 frame...

That was *exactly* my point - so no wonder it's appropriate! ;-)
No, just not much more than 2. It would only be exactly 2 if...

Which according to you should be at 2 scans in my case, right?
--- cut ---
--- cut ---
And yet I see a clear and continuing improvement at each step: 4x, 8x,
12x and 16x.
Why don't I find that surprising?

What's more, this improvement is not flat (equal across
the whole shadow area) but each successive step distinctly clears up
more (i.e. goes deeper into) shadows.
Think again Don - that is *exactly* what would happen, right up to the...

Therefore, this seems to indicate that we can eliminate lack of
(Photoshop) precision causing a simple "blurring of noise" because
that would be equally distributed across the whole area and not
"selective" by incrementally clearing up deeper shadows - as the scan
count rises - without affecting areas already cleared up.
I mean, we don't really *know* that NikonScan works with 16-bits
internally? Or, do we?
After all, I (and I'm guessing you too)
presumed that Photoshop's 16-bit was true 16 bit - until we learned
better...
Unfortunately Don, your tests confirm quite the opposite - that you...

But even if there were 1 bit of difference I'm still not
convinced that it would have such a drastic effect as my test above
seems to confirm.
Check the word "much" in the quotation above, you missed it the first time.

In my current workflow, it was only after scanning at ~AG +2 and
layering the two images in order to compare them that I've located
where the detail was. Switching to multiscanned image and increasing
contrast almost to the point of distortion I could indeed observe (a
hint of) detail in the multi-pass multiscanned image.
I have been through this exercise in the past and *quantified* the data...

So, the detail is there but it's effectively masked by such low
contrast that it's of no practical use (well, to me, anyway).
If you have the time, please try this: Do a nominal multiscan. Then
scan again but boost AG until noise is gone to the same extent as in
the multiscan. After that "synchronize the histograms" so that the
shadows (where the noise is) in both images are of equal brightness
and contrast. Don't worry about the rest of the image because,
obviously, clipped areas can't be "synchronized".
I'm curious if you can spot as much difference between the two shadow
areas as I can? It's a roundabout, circumstantial way of trying to
determine if single-pass multiscan is equal to (properly performed)
multi-pass multiscan.
Numbers are not subjective, which is why I can be absolutely dogmatic...

On reflection, it may be a case of two subjective judgments, i.e. I
may be seeing insufficient contrast or detail but that very same image
may look satisfactory to you!?
Actually, that's probably it. I have, most likely, been "spoiled" by
+2 AG scans which reveal so much more detail in the shadows that I'm
"blind" to subtle detail in multiscan images.
Case in point: My brand new bouncing baby LS-50 is rated at 14-bits.
In theory, that should give me more than enough dynamic range. And yet
in practice (as you yourself very comprehensively explained recently)
photons tend to disagree and there is noise in dark areas - hence this
discussion about multiscanning...
Anoni said:
Hi,
I'm curious. Why would 14-bit'ness have anything to do
with noise in the dark areas, even in theory? Is it an assumption
that quantizing noise is the primary source of dark-area noise?
Keep in mind that I really don't understand what number of
bits has to do with dynamic range either, I'd think that the range
would be mostly optical & analog -- from clear film down to
the analog noise level in the black areas, or is the usable
darkest black always assumed to be where the lsb flips to '1'
regardless of how much analog-system noise is present?
If I'm too off topic, "never mind". Your example just
tickled my brain.
Kennedy McEwen said:
Your concerns are well placed because the dynamic range really depends
on which is the greatest noise that is present in the blacks -
quantisation or analogue noise. If, for example, the analogue noise was
significantly greater than the quantisation noise then the dynamic range
would indeed be much less than the 14-bits permits and the multiscan
implementation, even in 15-bit limited Photoshop, would provide more
dynamic range and deep shadow noise reduction.
However, it can be shown by measurement that in deep shadows the
analogue noise of the 14-bit Nikon scanners is very low indeed, such
that the total is barely greater than the quantisation noise itself.
This is, obviously, not the case as the light level increases and photon
noise (due just to the random emission and arrival of individual photons
at the CCD) becomes the dominant noise source.
Only your theory predicted that there would be no noise at all - and at
the lower ADC counts the noise is not photon driven at all, but pretty
close to being quantisation noise on the LS-4000 & LS-50. Photon noise
is only significant when sufficient photons arrive in the integration
period that their square root gives rise to a noise signal which exceeds
the quantisation and CCD readout noise. At low levels, such as deep in
the shadows of dense media, photon noise is insignificant.
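The crossover Kennedy describes can be put in numbers. A rough sketch follows, where the gain figure is an assumed, illustrative value rather than a measured Nikon spec: the rms quantisation noise of an ideal ADC is one step divided by sqrt(12), photon (shot) noise is the square root of the collected photoelectrons, so shot noise only dominates once the signal exceeds gain squared over 12 electrons.

```python
import math

GAIN_E_PER_DN = 12.0  # assumed electrons per ADC count (illustrative)

def quantisation_noise_e(gain_e_per_dn):
    # rms quantisation noise of an ideal ADC, expressed in electrons
    return gain_e_per_dn / math.sqrt(12.0)

def photon_noise_e(signal_e):
    # shot noise: square root of the collected photoelectrons
    return math.sqrt(signal_e)

# Photon noise overtakes quantisation noise at S = gain^2 / 12
crossover_e = GAIN_E_PER_DN ** 2 / 12.0
# Deep in the shadows (few electrons) quantisation dominates;
# well above the crossover, photon noise dominates.
```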
No, just not much more than 2.
We do know that it has at least 16-bit accuracy because of the test that
I suggested you repeat to convince yourself. ....
No, we don't know that NikonScan4 is working with 16-bits internally, it
might be working with 57 and three quarter bits for all we know, but it
has to round the result to the 16-bit output. We do know that it is *at
least* 16 bits internally though.
Only the difference between the two approaches would be subtle if you
did the basic arithmetic correctly.
I did suggest writing your own software to implement the arithmetic to
the necessary precision, and I still believe that is the only way you
will make any headway in this at all.
Don said:
On Tue, 3 Aug 2004 20:44:04 +0100, Kennedy McEwen
The point is, 14-bits covers the 3.4 dynamic range of Kodachromes -
and then some - 2.7 bits more, if memory and math serve.
Now then, the
rule-of-thumb is to allow 1.5 bits for noise so, in theory, 14-bits
should be more than enough to scan without noise. And yet it isn't.
I doubt it needs quite that much, but it certainly requires more than...

Judging by my empirical tests, I'd need an 18 or even a 20-bit scanner
to get all of the image data without noise.
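Don's arithmetic follows from the log relationship between density and bits: a density range D spans an intensity ratio of 10**D, which takes D * log2(10), roughly 3.32 * D, bits to cover. A quick check of the figures quoted (14 bits against a 3.4 density range):

```python
import math

def bits_for_density(density):
    # density D spans a 10**D intensity ratio, so the number of
    # bits needed to cover it is log2(10**D) = D * log2(10)
    return density * math.log2(10)

kodachrome_bits = bits_for_density(3.4)  # about 11.3 bits
headroom = 14 - kodachrome_bits          # about 2.7 bits, as quoted
```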
You once stated that you multiscan slides with 2x as standard (going
to more only when/if needed) so 2x multi-pass multiscanning and
blending with Photoshop afterwards is at least as good as your
standard workflow with a single-pass multiscan, correct?
If yes, we can eliminate all the discussions about > 2x multi-pass
multiscanning and focus only on 2x because you concur there is no
difference between 2x single-pass and 2x multi-pass scans, right?
Now then, when I compare a 2x multiscan of a not excessively dark
slide (i.e. one not requiring more than 2x) to a properly boosted
shadows scan of the same slide, the latter scan still reveals more
detail.
Indeed, but testing of the output indicates that they are not using any...

(Just because they output 16-bits says nothing about what they use
internally e.g. Photoshop also "outputs" 16-bits. But I'm just
nitpicking here...)
No, because Nikonscan only needs to handle the output of the hardware...

If we go back to what you said above then that's still 2 bits short:
"you need at least 18bits of accumulation for that to work".
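The "18 bits of accumulation" figure is just the exact-sum requirement: averaging N samples of b bits each without loss needs an accumulator of b + ceil(log2(N)) bits before the final divide. A minimal check (function name mine):

```python
import math

def accumulator_bits(sample_bits, n_scans):
    # summing n samples of b bits each, without losing any sub-LSB
    # information, needs b + ceil(log2(n)) bits before dividing
    return sample_bits + math.ceil(math.log2(n_scans))

bits_16x = accumulator_bits(14, 16)  # 18 bits for 16x of 14-bit data
bits_2x = accumulator_bits(14, 2)    # 15 bits: the most that fits
                                     # Photoshop's 15-bit space
```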
Not at all, in fact I have prepared a scan sequence that I can send to...

So, are you in effect saying that purported Nikon claims of 16x
multiscanning are actually misleading because NikonScan's (or
firmware's?) accumulation accuracy *may* be insufficient?
Nope, it doesn't wash - once you have run out of bits, you have run out...

This is *not* a confrontational question, but - as always - a genuine
one.
(Also - and off the top of my head, caveats apply - there is more than
one way to average out the scans. For example, averaging out in twos -
or whatever is below the accuracy threshold. True, it would propagate
rounding errors and not be as accurate as proper averaging but it may
improve on truncation. Again, I'm just nitpicking...)
The problem is I don't think it's really worth the effort since
performing a second scan at +2-3 AG - or whatever is necessary - gives
me superior results much faster. Once I have the time I'd love to get
back and write my own software, but that's not likely any time soon.
Not at that level, however it probably does at 3-4 AG. However, even...

One last question, and the key question, really: Does scanning a
second time to pull out the detail in shadows (i.e. at +2-3 AG)
reveal more detail than multiscanning?
Even *if* it did, you would never know for sure because Photoshop just...

(Of course, I'm *not* talking about conventional contrast masking or
contrast blending both of which have problems with gradients, but
comparing a multi-pass multiscan to an appropriately adjusted shadows
scan.)
Even judging by the 2x case above, I'm convinced it does.
Now that's what happens when you take rules of thumb to extremes. Where
did you get the idea that Kodachrome had a total dynamic range of only
3.4?
Here is the characteristic curve for one type of Kodachrome, the
others don't change much and the same principles hold true:
http://www.kodak.com/global/en/professional/support/techPubs/e55/f002_0486ac.gif
Examination will show that the oft misquoted dynamic range of 3.4 is
actually only approximately the linear range. This chart, for example,
shows densities reaching well up to 3.8. More importantly though, this
is not the entire density range of the film - the chart only extends to
an exposure of 1/100 lux-seconds. Really deep shadows could well be far
lower exposure than that and, whilst well beyond the film's reciprocity
failure knee, will still record an image, albeit exceedingly dense and
with reduced contrast compared to the original scene.
That is certainly the case, but I don't have many slides with the
densities that you appear to be trying to cope with - and when I do, I
can bump up the multiscan as stated.
Nope - you ignored the "if/when needed". This isn't my workflow and
material that we are having problems with Don, its yours. ;-)
Of course it does and it is basic physics why that is so - I will
leave it as a simple exercise for you to work out why. However that is
hardly the issue here, since we are not discussing whether one approach
resolves your particular problem better than another, but whether you
have actually managed to implement multiscanning correctly in Photoshop
as you claim. The evidence to date is that you haven't, which is hardly
surprising, because it is, in fact, impossible.
Superior only inasmuch as it partially resolves one specific problem you
have at the moment. It does nothing, for example, to reduce the noise
on the higher levels. Nor does it stretch the Dmax of the scanner into
the region that you claim you actually need.
Don said:
On Fri, 6 Aug 2004 16:30:32 +0100, Kennedy McEwen
Sorry for the delay in replying but it's just too hot... I can't even
scan because the scanner needs to recalibrate even between two scans
(highlights and shadows).
So, I went for a bike ride yesterday...
Right here, back when I was trying to figure out how much I need to
boost the shadow's scan on my LS-30 to cover the full dynamic range.
I guess you missed that thread, which is too bad because it would have
saved me some time.
That explains a lot! For one, my having to scan almost every slide
twice - at least for now. We'll see what happens later as I get into
different batches (I scan chronologically).
It also raises a few other questions, for example, it means that
pretty much no scanner on the market can truly cover the full dynamic
range to the extent a twin-scan can (once for highlights and once for
shadows)!?
I meant that since anything above 2x gets
us into Photoshop shortcomings, by sticking to 2x we are at least
working from the same (or at least comparable) baseline.
Actually, probably not exactly the same since sub-pixel shifting
introduces some blurring. We haven't really addressed this blurring
but it's been a nagging thought in the back of my mind since this
blurring is bound to have an effect. On the one hand, it may be
negative because it "diffuses" the pixel which needs to be corrected
by multisampling, perhaps requiring more scans to achieve the same
effect as a single-pass multiscan; but on the other hand, blurring has
a slight positive effect by masking noise - even though that's not
really a solution, but only masking of the problem.
I'm not concerned with noise on the higher levels because it is not
visible or, let's put it this way, it's not as objectionable as the
noise in dark areas.
That is another story completely - but at least black in a negative is...

I'm sure I'll change my tune when the time comes
to wrestle with the negatives... ;-)
But I don't understand why you say it doesn't extend the Dmax of the
scanner?
No probs - it's pretty hot here too. Far too hot to get on with the
garden landscaping project which has consumed most of my dry weekends
this summer. So I just sat in the garden and admired my previous
handiwork through a nice cool beer. ;-)
....

Not necessarily, as I said, I don't think the situation is quite as bad
as you paint it with your 18-20-bit requirement. There is a
fundamental limit for the Dmax that unexposed Kodachrome will produce.
Unfortunately Kodak do not publish it, and whilst it is more than the
LS-50 will achieve, I doubt it is that much beyond its capabilities -
perhaps 4.5-5. That should be achievable with multiscanning on a 16-bit
device like the LS-5000 or the Minolta 5400. Of course you then need to
apply some shadow lift to bring that into a region that can be
represented in a 16-bit file format,
let alone do what you want: present it
on a display with 8-bit graphics.
Well blurring doesn't actually mask the noise though.
However if the frames do not align perfectly then the signal (ie. the
actual image content) is not being added coherently, so the sum is
actually less than would be expected, particularly in the finest
details. Meanwhile the noise is unaffected and just adds in quadrature
as normal. Consequently, if blurring occurs it means that the
improvement in signal to noise must actually be less than what would be
anticipated in the ideal case.
Again, you might be able to see from this why you get a very similar
result - with the same 15-bit limitation - to your multiscanning efforts
as you do with a 0.3pixel gaussian blur.
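Kennedy's point about misregistration can be expressed as a one-liner: aligned image content adds coherently (it grows with the number of frames) while independent noise adds in quadrature (it grows with the square root of the frame count), so any alignment loss - modelled here as a simple attenuation factor on fine detail, which is my simplification - comes straight off the signal-to-noise gain.

```python
import math

def snr_gain(n_frames, alignment=1.0):
    # Coherent signal grows as n (times an alignment factor for
    # fine detail); uncorrelated noise grows as sqrt(n), so the
    # net SNR improvement is alignment * sqrt(n).
    return alignment * n_frames / math.sqrt(n_frames)

perfect = snr_gain(16)          # 4.0: the ideal sqrt(16) improvement
misaligned = snr_gain(16, 0.8)  # 3.2: blurring eats into the gain
```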
Scanning is like peeling onions, in many respects.
Not only can it
create a lot of tears, but as soon as you have removed one layer of
problems there is another, almost identical, one just underneath.
Once
you have resolved your shadow noise, the general noise level will be
just as evident as the shadow noise now appears to be.
I said it doesn't extend it as far as you claim you actually need. It
certainly does extend the Dmax, but 2-3EV only gives the equivalent of
taking the shadow depth down by 2-3 bits, which is 16-17 bits, or 1-4 bits
short of the 18-20 that you claim you need. As I said, I don't believe you
do actually need this, but if you do then you ain't going to get it that
way either.
Don said:
On Sat, 7 Aug 2004 18:47:02 +0100, Kennedy McEwen
That "histogram synchronization", as I call it, which I use when
combining the two scans (highlights and shadows) was a big revelation.
I wish this was more publicized because I had to "invent" it myself!
On reflection, I should've figured it out much sooner as it's so
blatantly obvious. Once exposure is boosted the two scans are no
longer "in sync" and - somewhat counterintuitively - the shadows scan
has to be darkened in order to bring it into the same region as the
highlights scan. It's more complicated than that as I first have to
identify the point on the histogram where I want the two scans to
"meet" and work from that. Anyway, I'm surprised more is not written
about this (or at least I couldn't find it). Instead, all I heard/read
was "contrast masking doesn't work for gradients".
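The darkening step Don describes has a simple basis: an Analog Gain boost of k EV multiplies linear exposure by 2**k, so in linear space the shadow scan must be divided by that same factor before the two histograms line up. A sketch under that assumption (gamma-encoded data would first need linearising, which this ignores; the function name is mine):

```python
def sync_shadow_scan(shadow_linear, ag_boost_ev):
    # An AG boost of k EV brightens the linear signal by 2**k, so
    # darken the shadow scan by the same factor to "synchronize"
    # its histogram with the normal (highlights) scan
    factor = 2.0 ** ag_boost_ev
    return [v / factor for v in shadow_linear]
```

For a +2 AG scan the factor is 4, so a linear value of 4.0 maps back to 1.0 in the normal scan's range.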
I meant "mask" in the sense that it spreads it around and makes it a
bit more difficult to spot.
Only by enough to ensure that what is detected does not have any more...

I suppose, conceptually, this is similar
to anti-aliasing, which I contemptuously call "pro-blurring" ;-) as it
"masks" the jaggies by making *everything* fuzzy... :-/
Blurring noise is not as objectionable as anti-aliasing (well, to me,
anyway) because there are other artifacts competing for distortion
e.g. grain or film curvature resulting in softer focus, etc.
Everything in life is relative, Howard, but you need to be careful what...

Howard said:
Although I've never before resampled any image in Photoshop, it seems
that all agree that, other things being equal, downsampling
(discarding pixels) is less degrading to the image than upsampling
(inventing pixels).
Upsample if your system can support the additional data produced. That...

If one's final image resolution in Photoshop -- before resampling --
were greater than 720 dpi, the choice would be easy: downsample to
720dpi.
But what if one's final image resolution in Photoshop -- before
resampling -- were, say, between 360 dpi and 720 dpi? Would it be
better to upsample in Photoshop to 720 dpi (thus no printer resampling
at all), or to downsample in Photoshop to 360 dpi (where printer
upsampling to 720 dpi results in relatively few artifacts and
anomalies since 360 divides evenly into 720).
Here you are more likely to see a difference between upsampling and...

Of course the same question should be asked where the final image
resolution (before resampling) falls between 240 dpi and 360 dpi --
whether, in Photoshop, to downsample to 240, or to upsample to 360?
You are welcome.

As always, thanks for your insight.
I would have thought that, after rescaling the shadow scan into the
correct range for the primary scan, you would be better to use a density
profiled mix to merge the two channels rather than a simple mask.
Obviously the mix would be 100% shadow scan for everything below 14-bits
and 100% primary scan for everything above the saturation limit of the
shadow scan, with some transition percentage in between. At least this
would simulate a non-linearity in the extended dynamic range, rather
than the discontinuity that a simple mask would produce.
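Kennedy's density-profiled mix can be sketched as a per-pixel weight: 100% shadow scan below the low threshold, 100% primary scan above the shadow scan's saturation point, and a ramp in between. The linear transition and the threshold values in this sketch are illustrative choices of mine, not his prescription:

```python
def blend_weight(primary_value, low, high):
    # Weight given to the shadow scan: 1.0 below `low`, 0.0 above
    # `high`, with a linear transition in between to avoid the
    # discontinuity a hard mask would produce
    if primary_value <= low:
        return 1.0
    if primary_value >= high:
        return 0.0
    return (high - primary_value) / (high - low)

def density_profiled_mix(primary, shadow, low, high):
    # Merge two already-rescaled scans pixel by pixel using the ramp
    return [blend_weight(p, low, high) * s
            + (1.0 - blend_weight(p, low, high)) * p
            for p, s in zip(primary, shadow)]
```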
But that is no different from anti-aliasing.