Don said:
On Fri, 20 Aug 2004 01:12:24 +0100, Kennedy McEwen wrote:
We're only talking about a couple of pixels in the *transition area*
between the two layers which is blurred.
Of course, as I indicate later, if I simply blur that transition area
(in 16-bit mode) after I combine the layers then even this negligible
amount of image data in the transition area will be unaffected.
I realise that it is the transition area that has been corrupted but,
unless you are prepared to work at it till hell freezes over, you won't
be able to blur all of those transitions manually. Since there is no
16-bit layer function in PS6/7, you have no alternative.
Depends on which channel but, in general, yes. Red reaches deeper
(fewer 0-counts) while blue is narrower (more 0-count bins) - as is to
be expected of Kodachromes. In the few difficult images I tried so far
I ended up clipping 10 to 15 levels (on a 256 scale).
Ok, I *think* I see where the disparity is coming from here. Since you
are determined to "scan raw" and make all of your adjustments in PS, it
appears that you haven't applied any black point compensation to the
scan before making this assessment. Errors in
black point compensation are exaggerated by the gamma correction curve,
due to the very high gains that occur in the shadow region - another
reason why Photoshop uses a slope limit.
Think about what the scanner is actually doing when it produces the
image. The first step is to calibrate the sensor - normalise the
responses of the CCD cells. This is achieved by viewing black and white
references inside the scanner. Ignore the white reference for the
moment. The data from the black reference is subtracted from all
subsequent scan data to produce the raw output, which is then gamma
compensated. However, if there is any difference between the black
reference and the darkest part of the slide, the black calibration will
produce a small black level offset in the raw data which is then amplified by the
gamma in the shadow region. Such a black offset is unavoidable, simply
due to stray light leakage paths within the hardware when actually
performing the scan. Since the black calibration is performed with the
LEDs off there is no light leakage to reference, so the black offset is
always positive in the raw data. The high gain of the gamma curve in
the shadows brings that up in level.
Unless you compensate for this black offset, by sampling the mask or,
better, the unexposed film border, and subtracting this from your image
(ideally before gamma is applied, though the difference is small if the
black offset itself is small), all of your subsequent processing will be
in error. That is why the black point control is there!
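To put rough numbers on it, here is a toy sketch in Python - the 14-bit
scale and gamma 2.2 are real enough, but the offset and signal values
are invented purely for illustration:

# Toy illustration: a stray-light offset left in the raw data is inflated
# by the gamma curve; subtracting it before gamma removes the problem.
# The offset and signal values below are assumptions, not measurements.
def gamma_encode(raw, full_scale=16383.0, gamma=2.2):
    """Map linear raw counts to 8-bit display levels (no slope limit)."""
    return 255.0 * (max(raw, 0) / full_scale) ** (1.0 / gamma)

offset = 100            # assumed stray-light black offset, raw counts
signal = 40             # assumed true shadow signal above black
raw = signal + offset   # what the scanner actually delivers

print(round(gamma_encode(raw), 1))           # ~29.3: offset left in, shadows lifted
print(round(gamma_encode(raw - offset), 1))  # ~16.6: offset subtracted before gamma
print(round(gamma_encode(raw) - gamma_encode(offset), 1))  # ~4.1: subtracted after gamma instead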
Now, looking at your information, if all the channels have empty bins
below 16 and you are working with 2.2 gamma, then it would appear that
your black offset is actually a count of around 140 on the 14-bit ADC.
Whilst that is rather large, it is not impossibly so. From what you
have said above though, your red channel, which is more dense on KC
emulsions, has fewer than 16 empty bins, indicating a black offset which
is actually much less. Given the operation of the Nikon scanners, with
a single broad band response CCD, I would expect black offset to be very
similar in all three channels, so the difference you see between the
channels is almost certainly real density variations on the film.
However, before you do *any* subsequent processing (*especially* the
scan mixing that you have been attempting) you need to correct for the
black point of the scanner. You might also want to consider
multiscanning to get an accurate assessment of what the black point
actually is for the exposure that you are working with.
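If it helps, something along these lines would give you the estimate -
Python with numpy and imageio standing in for whatever you actually use,
and the file names, patch coordinates and number of passes are all
placeholders:

# Sketch: estimate the per-channel black offset from a patch of unexposed
# film border, averaged over a few repeated scans to beat down the noise.
# File names and patch coordinates are placeholders.
import numpy as np
import imageio.v3 as iio

scans = [iio.imread(f"border_scan_{i}.tif") for i in range(4)]   # 16-bit linear scans
patch = np.stack([s[0:50, 0:50, :] for s in scans]).astype(np.float64)

black_offset = patch.mean(axis=(0, 1, 2))      # one estimate per R, G, B channel
print("estimated black offset per channel:", black_offset)

# Subtract it from the linear (gamma = 1) scan before anything else,
# clipping so no pixel goes negative.
image = iio.imread("primary_linear_scan.tif").astype(np.float64)
corrected = np.clip(image - black_offset, 0.0, None)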
AE scans do not suffer from this to the extent described, i.e. there
are pixels in the 0-16 range.
I would expect AE scans to have a lower black offset, but not
significantly so. Your "primary" scan has, presumably, been adjusted by
increasing the AG in each channel so that the highlights just fail to
saturate; however, I would not expect that to be by very much, so the
difference in black offset between an AE and AG optimised scan should be
relatively small. What level of AG adjustment are you typically
applying to each channel to produce your primary scan?
Shadow scans *adjusted* to be brought down to the nominal scan range
(using my primitive empirical "method") do have empty bins.
These will have more empty bins because you have increased the CCD
exposure to the stray light. Consequently the black offset is
increased. You need to apply a different black point adjustment for the
shadow scan, but you can estimate it in the same way as for the primary
scan.
*However*, (and it's a big however!) this is almost certainly due to
my method.
That would appear so! Scanner software wins again! Of course you can
apply black point correction in PS later, but with half the precision
that you can do it in NikonScan. ;-)
Finally, I'm concentrating on the most difficult images first (dense
images with little contrast) because once they are taken care of the
rest is easy. Therefore, these results should not be taken as
representative because I'm really dealing with extreme images.
Clearly with dense images, stray light is more significant
proportionally, so black point compensation is even more important.
I am trying to get the last vestige of shadow data. However, after I
have done that - depending on the image - I may or may not need to
boost (in post processing) to such an extent that this data becomes
(glaringly) visible.
But the key is, I have obtained this shadow data, it's still there,
although - as you yourself explained - due to the 8-bit nature of
displays (as well as the editing needs of a particular image) it may not be that
easy to see in the final product. However, the data has been obtained
and archived.
Once you have corrected for blacks, I will be very surprised if you see
any difference at all on the display after combining these scans. You
get an extra bit of raw precision for every EV, so 3EV gives you an
effective 17-bit scan if you combine it correctly with the primary. Even
with an unlimited slope gamma, 14 bits produces quantisation steps of
less than one 8-bit level for all but the lowest counts. But you will find that
out in the long run.
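You can check that quantisation claim with a few lines of Python - a
pure 2.2 curve with no slope limit, 14-bit input to an 8-bit display
scale, which is a simplification of what the scanner software really
does:

# Sketch: size of the 8-bit output step for each 14-bit input count
# through a pure gamma 2.2 curve with no slope limit.
def out8(count, full=16383.0, gamma=2.2):
    return 255.0 * (count / full) ** (1.0 / gamma)

for c in (1, 2, 3, 10, 100):
    print(c, round(out8(c) - out8(c - 1), 2))
# Steps of roughly 3.09, 1.15, 0.86, 0.41 and 0.12 output levels: only the
# very bottom counts exceed one 8-bit level, and a shadow scan exposed 3EV
# up samples exactly that region 8x more finely.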
OK, I've uploaded a couple of image segments. They are very small (50
x 50 pixels) but at full resolution and 16-bit depth. Note that the
shadows scan has *not* been sub-pixel shifted so the two areas are
only approximately the same but not identical. Also, no editing has
been done on these segments. They are as received from the scanner,
only cropped.
OK, got these. However I don't see any random bright pixels in the
normal scan even under extreme adjustments. On the contrary, examining
the data itself I can see random *dark* spikes in the red channel. In
particular, there are 5 cells which have *zero* data in the red channel,
making them appear cyan against a white background under very extreme
adjustments. More significantly though, your shadow scan looks *much*
softer - if you haven't applied any blur to this then I would be
concerned that the focus was different.
Yes, it's the banding. I created two images, one using PS gamma and the
other using AMP gamma curves from:
http://www.aim-dtp.net/aim/download/gamma_maps.zip
I hope then, from the previous explanation (and the issues discussed
above) that you can see why Timo's curve appears to produce less banding
than the PS one - it will if you don't give it real shadow detail.
You may already know this site, but it's been created by a
controversial Finnish guy who firmly believes all image editing should
be done in linear gamma.
Timo's ramblings are legendary and mainly wrong, particularly his
insistence on processing in linear space rather than perceptual space - nice
mathematically, but completely wrong in terms of how you see things, and
that is what matters. There are a few things he is quite correct on, but
he is so focussed on kicking the established methodology that they are
hard to distil from his output. This is made the more difficult because
some functions should be undertaken in a linear working space, such as
the scanner calibration itself etc.
Anyway, he knocks Photoshop at every turn so
you'll feel quite at home there... ;o)
Err, I don't knock Photoshop in general - just specific points, like its
claims for 16-bit precision and singular failure to deliver. ;-)
Anyway, I didn't actually tabulate the data or run these curves on a
gradient, but simply visually inspected a few test images. The PS
gamma images suddenly looked relatively "choppy" when compared to
images adjusted with the above curves. They (the curve-adjusted images)
also appeared slightly "lighter" (!?) but that might have been my
subjective impression.
I have just checked these and there is no visual banding on a
synthesised grey ramp on this machine using either gamma curve. I
suspect you are seeing an interaction with the curves and your monitor
profile in the colour management.
The amp file produces an unlimited-slope gamma curve defined with 8-bit
precision, which is applied to 16-bit data as a series of linear
segments. However, since it is
based on 8-bit data, both for the input and output levels, the precision
of the end points on the linear segments is limited, and this gives rise
to some clustering in the histogram of the converted data. This can,
and will as shown below, produce problems.
That's what I thought at first but then I serendipitously came across
these AMP curves, and since they do not show this banding, I concluded
it was the PS gamma which caused the banding and that the display was
fine.
They will show banding too if you bring the black level up after applying them. As
mentioned, unlimited gamma slope means you need to sacrifice more of the
black levels to avoid quantisation.
Does NikonScan gamma also implement "slope limitations"?
Yes, the slope is limited to a maximum of 21 by default rather than by
design. The NS gamma curve is similar in effect to using an 8-bit amp
curve on 16-bit data, in that the curve is approximated by a series of
256 linear segments. However, being calculated with 16-bit precision,
including the segment end points, the results are *much*
smoother. This is pretty obvious if you compare histograms of ramps
processed by the three versions. The Nikonscan curve produces the
smoothness of the Photoshop curve without the discontinuity due to the
transition to a linear shadow region, yet doesn't produce any of the
quantisation limits of the amp implementation.
Note that the lack of a deliberate slope limit in NikonScan is less noticeable
because of the black offset inherent in the scanner calibration process.
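For anyone who wants to see the difference, here is a rough
reconstruction of the two piecewise-linear approaches in Python with
numpy - my own guess at the construction, 256 segments either way, not
Nikon's or Adobe's actual code:

# Sketch: gamma 2.2 built from 256 linear segments, once with 16-bit
# precision vertices (NikonScan-like) and once with 8-bit vertices
# (amp-file-like). A reconstruction for illustration only.
import numpy as np

x = np.arange(65536, dtype=np.float64)
knots_in = np.linspace(0.0, 65535.0, 257)        # 256 segments
g = (knots_in / 65535.0) ** (1 / 2.2)

knots_16 = np.round(65535.0 * g)                 # vertices held to 16-bit precision
knots_8 = 257.0 * np.round(255.0 * g)            # vertices held to 8-bit precision, rescaled

lut_16 = np.round(np.interp(x, knots_in, knots_16)).astype(int)
lut_8 = np.round(np.interp(x, knots_in, knots_8)).astype(int)

# Largest spike in the histogram of a full 16-bit ramp after each curve:
print(np.bincount(lut_16).max())   # a handful of inputs per output code
print(np.bincount(lut_8).max())    # hundreds - whole segments collapse to one code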
And...
If yes, how do I compensate for that (if I need to), again using the
simple(istic...) example above, because the logic above is quite
clear to me?
As I said above, I don't think you will even need to bother after you
have corrected the black point, but if you still want to continue
torturing yourself, you need to consider the effect of the linear slope
on gamma up to PS levels of around 50 or so. However, using the
methodology you are proposing, the best accuracy you can achieve is
still pretty crude. The data for your amp file is just the calculation
I provided in the previous post for the continuous function - but at
each of the 256 vertices of the 255 linear segments. Photoshop will
apply a linear interpolation to the data that falls between those
points. That is all accurate, but the data defining the vertices is
only 8-bit accurate, and this will introduce posterisation as you will
see.
Rounding the data / 4^(1/2.2) function to 8-bit precision, the first few
terms of your amp file should be:
0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,9,9,10,10..
Note that only one 8 occurs, since 4^(1/2.2) is 1.88, and this is where it
rolls over in the sequence.
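For what it's worth, a couple of lines of Python reproduce that
sequence, assuming each amp entry really is just the level divided by
4^(1/2.2) and rounded:

# Sketch: first 20 entries of an 8-bit amp table for a 2EV shift applied
# in gamma 2.2 space, assuming each entry is round(level / 4**(1/2.2)).
scale = 4 ** (1 / 2.2)                       # ~1.878
print([round(i / scale) for i in range(20)])
# [0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 9, 9, 10, 10]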
For example, if you had data anywhere between level 16 and 17, that is
16-bit data between 4096 and 4351, this amp correction for the 2EV shift
would produce a 16-bit result of 256*9 = 2304 for all of those 256 input
values. This is because you have no slope between levels 9 and 10,
since there is insufficient resolution in an 8-bit value to quantify it.
The situation gets worse as you try to correct for higher EV adjustments
using this technique.
In fact, the required slope for 2EV can be seen directly from the
following real calculations at the ends of the relevant linear segment:
Level=16: Output = 8.520328 (16-bit value 2181)
Level=17: Output = 9.05285 (16-bit value 2318)
Hence slope = 0.532521
Thus, the data in the 16-bit range between 4096 and 4351 would be
expected to map to the range 2181 to 2318, rather than all be reproduced
at exactly 2304, as an 8-bit amp curve adjustment does. In short, the
amp methodology results in 8-bit quantisation effects. I suspect that
this quantisation will be worse than the improvement you are expecting
to produce - especially if you make the transition to the modified shadow
scan in a region where the primary scan is producing valid data. Whether
you like the look of its histogram or not, it is still accurate to
14-bits and the amp transform will reduce it to 8-bit precision - and
not just in the transition areas, but everywhere in the image that the
shadow scan is relevant.
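Here is the same arithmetic as a Python sketch; the interpolation scheme
is my assumption of how an 8-bit amp table gets applied to 16-bit data,
scaled by 256 as in the calculation above:

# Sketch: what an 8-bit amp table does to 16-bit data between input
# levels 16 and 17 (16-bit values 4096..4351), versus the continuous
# 2EV correction.
scale = 4 ** (1 / 2.2)                        # ~1.878
amp = [round(i / scale) for i in range(256)]  # ..., amp[16] = 9, amp[17] = 9, ...

def via_amp(v16):
    """8-bit table entries scaled by 256, linearly interpolated (assumed)."""
    i, frac = divmod(v16, 256)
    lo, hi = amp[i] * 256, amp[min(i + 1, 255)] * 256
    return round(lo + (hi - lo) * frac / 256)

def continuous(v16):
    return round(v16 / scale)

print(via_amp(4096), via_amp(4351))        # 2304 2304 - 256 inputs collapse to one output
print(continuous(4096), continuous(4351))  # 2181 2317 - roughly the 2181..2318 spread above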
As I suggested right at the beginning of this sub-thread, you can't do
what you want with Photoshop <=7 - and I don't even know if what you
want to do is possible with the required accuracy in CS.
The only way I can see of doing this is to scan in linear working space
with gamma=1, apply the 2EV correction there and *then* apply gamma to
get to flat perceptual space. Trouble is, if you use Timo's amp curves
you will encounter 8-bit quantisation again. The only transfer function
that does not introduce adjacent codes which are identical somewhere in
the range is the unity transfer. Unfortunately, going the AMP route,
the problem occurs just where you don't want it, in the shadows.
So, the proposed solution is...
you'll like this, not a lot, but... ;-)
Scan linear.
Scale and merge in linear space using Photoshop (a rough sketch of this step follows the list).
Save image as tif file.
Import into NikonScan.
Apply required gamma in NikonScan.
Save in glorious 16-bit colour (shame you had to go through that poxy
15-bit stage en-route) from NikonScan.
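Purely to illustrate the order of operations - numpy standing in for the
Photoshop and NikonScan steps, with placeholder file names, merge
threshold and feather width:

# Sketch of the linear route: undo the 2EV boost and merge while the data
# is still linear, then apply gamma 2.2 once at the end, at full precision.
# An illustration of the sequence, not of either application.
import numpy as np
import imageio.v3 as iio

primary = iio.imread("primary_linear.tif").astype(np.float64)   # gamma = 1 scans
shadow = iio.imread("shadow_plus2ev_linear.tif").astype(np.float64)

shadow_scaled = shadow / 4.0     # a 2EV exposure boost is a factor of 4 in linear space

# Naive merge: take the shadow scan where the primary is very dark, with a
# soft ramp so the transition is not a hard edge. Numbers are placeholders.
threshold, feather = 1024.0, 512.0
w = np.clip((threshold - primary) / feather, 0.0, 1.0)
merged = w * shadow_scaled + (1.0 - w) * primary

# Gamma 2.2 applied once, after the merge.
out = 65535.0 * (np.clip(merged, 0.0, 65535.0) / 65535.0) ** (1 / 2.2)
iio.imwrite("merged_gamma22.tif", np.round(out).astype(np.uint16))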
Fortunately, if you set the black point correctly in Nikonscan in the
first place, you won't need to worry about any of this ever again, since
everything that can be recorded in the 14-bit dynamic range of the
scanner you have will be perfectly reproduced in a perceptually flat
display space without having to go near that pesky 15-bit editing suite.
;-)