Firmware hacks for the LS-4000?

  • Thread starter: fotoobscura
Bart van der Wolf said:
SNIP

And even if we'd consider a direct comparison at equal size, the loss
is less than 50%, due to the better S/N ratio and sharpening
potential:
<http://www.xs4all.nl/~bvdwolf/temp/50pct_IPRL.png>
If I were to tweak the PSF a bit more, the result after restoration
could be slightly improved, and the S/N would be better still.
Bart, I think you are missing the point here and going off down a rabbit
hole.

Nobody is disputing that there exist downsampling filters which increase
the MTF in the pass band of the output. I have spent more time
designing such filters myself than I wish I had! In almost all cases
these can be synthesised as composites of downsampling and high pass
filters anyway. However the high pass filter can equally be applied to
the original data directly and achieve exactly the same effect - over
the full original bandwidth.
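To make that concrete, here is a minimal 1-D sketch in Python (my own
illustration, not from the thread; the kernels are arbitrary stand-ins):
because convolution commutes, a composite "MTF-boosting" downsample
prefilter gives the same result as applying the high-pass boost to the
full-resolution data first and then downsampling plainly.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)           # stand-in for a full-resolution scan line

box = np.array([0.25, 0.5, 0.25])       # plain 2:1 downsampling prefilter
hp  = np.array([-0.25, 1.5, -0.25])     # mild high-frequency boost (unity gain at DC)

composite = np.convolve(box, hp)        # one combined "sharpening downsampler"

a = np.convolve(x, composite, mode='same')[::2]                           # composite, then decimate
b = np.convolve(np.convolve(x, hp, mode='same'), box, mode='same')[::2]   # boost first, then plain downsample

print(np.allclose(a[2:-2], b[2:-2]))    # True away from the array edges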

Downsampling itself produces an SNR improvement, and your plots show
this too, but at the expense of restricting useful resolution to the
reduced bandwidth of the output - not what multisampling is intending to
achieve, and certainly not what deconvolution of the original data at
the full resolution would achieve, if sufficient noise reduction had
been produced by the multisampling in the first place.
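A quick synthetic check of that trade-off (my own sketch, not Bart's
data): averaging pixel pairs, a crude 2:1 downsample, cuts uncorrelated
noise by about 1/sqrt(2), but the output can of course only represent
half the original bandwidth.

import numpy as np

rng = np.random.default_rng(1)
noise = rng.standard_normal(100_000)       # pure noise, std = 1.0

down = noise.reshape(-1, 2).mean(axis=1)   # 2:1 average-and-decimate

print(noise.std(), down.std())             # ~1.00 vs ~0.71 (= 1/sqrt(2))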

In this case the reduced pass band of the downsampled output is
0.25 cy/pixel, with everything beyond that aliasing, whilst in the
original the pass band is 0.5 cy/pixel. Don't forget that the intention
of the sloped-edge MTF measurement process is to actually oversample the
original data, thus pulling the aliased content of the test image out
onto a continuous linear scale.

Your plots demonstrate quite clearly that the original image has content
in the region 0.3-0.4 cy/pixel and beyond which is lost.
 
Not always. There are some scanners which do not offer single-pass
multiscanning using the manufacturer's software but do in VueScan.
I'm absolutely sure my old Minolta Scan Speed was one of those scanners.
As far as I know, this is also true for the Scan Dual II and scanners
from some other brands.

It's semantics, but both our statements are correct.

In your example, single-pass multi-scanning was *available* in Scan
Speed although not *used* by native software.

BTW, are you really absolutely sure about that? I mean, it would be
very unusual for them to go to the trouble of implementing it but then
not use it, especially since it's such an important feature.

The only reason I can think of for them doing that is if they found a
problem and the feature failed their internal QA (quality assurance),
in which case VueScan may have just "liberated" bad data.

But Minolta has been known to do strange things ;o) like forcing users
to use ICE and Grain Dissolver together!?

There, VueScan does help. Unfortunately, it has other problems - but
we don't want to start that again... ;o)

Don.
 
Maybe I'm a little confused, but I recall that sampling and doing
"passes" are two different things. Am I wrong here? I seem to recall
having "passes" available in VueScan with my LS-40, while sampling was
available on my LS-4000 and 8000.

That's why I, for one, prefer the terms:
*single*-pass multi-scanning
and
*multi*-pass multi-scanning.

They indicate quite clearly what's going on: multi-scanning (multiple
sampling of the same area) is performed either in a single pass or in
multiple passes.
I do 8-pass, sometimes 16-pass sampling with VueScan and I never see
any blurring..? Even with Long Exposure on, which I'd presume would
exacerbate that effect.

Depends how you look at it. You sure can't see it from across the
room... ;o)

I'm kidding, of course, but it illustrates a serious point. You need
to examine the *component* images (before they were combined) at least
at 100% magnification (I prefer 300-400%). In Photoshop, after
superimposing them accurately (!), pressing Ctrl/Tab will toggle
between the two images and the misalignment just "jumps out at you".

Of course, for this test you would need to do two passes *manually*
because VueScan will only produce the result (a single image) and not
the two (or more) images it used to create the composite image.
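For anyone who would rather measure the shift than eyeball it, here is a
rough sketch using phase correlation (my own construction, nothing
VueScan or Nikon Scan provides); it finds whole-pixel shifts, so upsample
the crops first if you want sub-pixel estimates.

import numpy as np

def estimate_shift(img_a, img_b):
    """Whole-pixel shift of img_b relative to img_a (same-shape grayscale arrays)."""
    F = np.conj(np.fft.fft2(img_a)) * np.fft.fft2(img_b)
    F /= np.abs(F) + 1e-12                  # normalise -> phase correlation
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image back to negative values
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx

# e.g. with the two passes loaded as float arrays:
# print(estimate_shift(pass1, pass2))       # -> (dy, dx) in pixels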

Don.
 
SNIP
If anyone is interested in downloading a large file, I have a
demo .PSD that shows how good the LS30, 40 and 50 family
of scanners is in accuracy of scan.

Yes, if that is representative of that family of scanners, the
scan-to-scan registration is pretty good. That would mean that the
multiscan feature, despite lacking real multi-sampling, is a very
useful feature.

Bart
 
Bart said:
SNIP

Yes, if that is representative of that family of scanners, the
scan-to-scan registration is pretty good. That would mean that the
multiscan feature, despite lacking real multi-sampling, is a very
useful feature.

Bart


Hello

It has to be said that with my LS50, multiscanning is of almost no use
at all. It is very difficult to see the improvement compared to that
with an LS30 in my demo.

http://www.btinternet.com/~mike.engles/mike/LS50.psd

Mike Engles
 
Bart van der Wolf said:
SNIP

Yes, if that is representative of that family of scanners, the
scan-to-scan registration is pretty good. That would mean that the
multiscan feature, despite lacking real multi-sampling, is a very
useful feature.
Unfortunately it isn't representative - hence Don's issues with
sub-pixel alignment on his LS-50. I think Mike got away with it on his
LS-30 for a couple of reasons:
* Firstly, good luck in having a scanner with minimal backlash in the
geartrain. My LS-2000, which was basically the same scanner with the
top-end features, could never have achieved that - but it had
single-pass multi-scanning built in, so it didn't need tight
scan-to-scan registration for that, just for other things I wanted to
do.
* Lower resolution scanning capability of the LS-30. It was, after
all, a 2700ppi scanner with essentially the same internal mechanical
construction as the later 4000ppi scanners. So, needless to say, the
mechanical tolerances are the same, but the size of the steps has been
reduced by 33%. Consequently, the scan-to-scan misalignment is at
least 33% worse on a pixel basis.

PS. I have just noticed Mike's post confirming his experience with the
LS-50 is not as good. I'm not surprised - at 4000ppi, each pixel is a
little over 6um. By contrast, a human hair is about 60um - 120um thick,
so just a little dust trapped in the grease of the mechanism will give
pixels of misalignment.
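Kennedy's arithmetic, spelled out as a trivial worked example (the 10 um
error is an arbitrary figure, a small fraction of a hair's width):

MICRONS_PER_INCH = 25400

for ppi in (2700, 4000):
    pitch = MICRONS_PER_INCH / ppi
    print(f"{ppi} ppi: pixel pitch = {pitch:.2f} um; "
          f"a 10 um mechanical error = {10 / pitch:.1f} px")

# 2700 ppi: pixel pitch = 9.41 um; a 10 um mechanical error = 1.1 px
# 4000 ppi: pixel pitch = 6.35 um; a 10 um mechanical error = 1.6 px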
 
Don said:
It's semantics, but both our statements are correct.

Yes. It was 'available', but it was hidden.
BTW, are you really absolutely sure about that? I mean, it would be
very unusual for them to go through the trouble of implementing it but
then not use it? Especially, since it's such an important feature.

The successor of the F-2800 Scan Speed was the F-2900 Scan Elite. The
first Elite was technically identical to the Speed except that it had
ICE and one-pass multisampling. IIRC, Ed Hamrick found out that the
instruction set for both scanners was the same, except that ICE didn't
work on the Speed (it didn't have an IR light source) but one-pass
multisampling did.
I guess it was a marketing issue: Minolta had already planned how the
next model could be equipped with more features (i.e. those features
Nikon already had in the LS-2000) and it was probably cheaper to use the
same components and the same instruction set (one-pass multiscanning
does not require additional hardware). Perhaps inside the Speed,
components could be found that allow for mounting an IR LED. I never
checked it out.
 
Kennedy said:
Unfortunately it isn't representative - hence Don's issues with
sub-pixel alignment on his LS-50. I think Mike got away with it on his
LS-30 for a couple of reasons:
* Firstly, good luck in having a scanner with minimal backlash in the
geartrain. My LS-2000, which was basically the same scanner with the
top-end features, could never have achieved that - but it had
single-pass multi-scanning built in, so it didn't need tight
scan-to-scan registration for that, just for other things I wanted to
do.
* Lower resolution scanning capability of the LS-30. It was, after
all, a 2700ppi scanner with essentially the same internal mechanical
construction as the later 4000ppi scanners. So, needless to say, the
mechanical tolerances are the same, but the size of the steps has been
reduced by 33%. Consequently, the scan-to-scan misalignment is at
least 33% worse on a pixel basis.

PS. I have just noticed Mike's post confirming his experience with the
LS-50 is not as good. I'm not surprised - at 4000ppi, each pixel is a
little over 6um. By contrast, a human hair is about 60um - 120um thick,
so just a little dust trapped in the grease of the mechanism will give
pixels of misalignment.
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)


Hello

As far as I can see from the PSD I posted from my LS50, ALL the
images are perfectly aligned. I did NO layer shifting.

What I was saying is that the increase in noise performance was not as
noticeable. If you look at the second LS50 PSD, turning off the blending
and opacity and adding the samples one by one at 600%, you will see
that only the noise pattern changes. Look at the area of the person's
eye.

It seems that I must have been twice lucky with my scanners, which
suggests that this luck is available to others that buy Nikons.

Multi-sampling using an LS50 does not give a great advantage in terms
of S/N, but it is very good when used with an LS30.

My way of using multisampling uses a base scan which is as sharp as it
can be. The extra samples are added to the darker and noisier regions
by using blending and opacity. The very apparent increase in image
detail in those regions, as well as the reduction in noise, is obvious,
at least for the LS30. Zoom in to 300% and Alt-click on the base image
to toggle the samples on and off. Ctrl-click to see the samples without
the base. You will see the scan lines!
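A rough numpy analogue of that layered approach (my own reconstruction
from the description above, not Mike's actual PSD; the threshold and
softness values are invented):

import numpy as np

def shadow_blend(base, samples, threshold=0.25, softness=0.10):
    """base: float image in [0,1]; samples: stack of extra scans, same shape."""
    average = samples.mean(axis=0)          # noise-reduced combination of the extra samples
    # opacity mask: 1 in the deep shadows, fading to 0 above the threshold,
    # so the sharp base scan is kept untouched in the mids and highlights
    opacity = np.clip((threshold - base) / softness, 0.0, 1.0)
    return opacity * average + (1.0 - opacity) * base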

Mike Engles
 
* Lower resolution scanning capability of the LS-30. It was, after
all, a 2700ppi scanner with essentially the same internal mechanical
construction as the later 4000ppi scanners. So, needless to say, the
mechanical tolerances are the same, but the size of the steps has been
reduced by 33%. Consequently, the scan-to-scan misalignment is at
least 33% worse on a pixel basis.

Exactly! But even on the LS-30 the misalignment was clearly
noticeable. Of course, provided one really inspects the image at
*appropriate magnification*. Now, whether this objective fact is
important or even meaningful depends on one's requirements.

And that's the crux of the matter which seems to cause most conflicts
in these here parts i.e. people who look at *data objectively* and
draw factual conclusions from that vs. people who look at *images
subjectively* and base their evaluation on personal feelings.

Nominally, there's nothing wrong with either approach, of course.
However, any attempt to translate personal feelings into objective
fact is doomed to failure even though the proponents of the former
can't see it or, more accurately, exactly because of it.
PS. I have just noticed Mike's post confirming his experience with the
LS-50 is not as good. I'm not surprised - at 4000ppi, each pixel is a
little over 6um. By contrast, a human hair is about 60um - 120um thick,
so just a little dust trapped in the grease of the mechanism will give
pixels of misalignment.

Precisely! When I migrated from the LS-30 to the LS-50 I took it as a
given that the relative misalignment would increase.

Don.
 
The successor of the F-2800 Scan Speed was the F-2900 Scan Elite. The
first Elite was technically identical to the Speed except that it had
ICE and one-pass multisampling. IIRC, Ed Hamrick found out that the
instruction set for both scanners was the same, except that ICE didn't
work on the Speed (it didn't have an IR light source) but one-pass
multisampling did.

I guess it was a marketing issue: Minolta had already planned how the
next model could be equipped with more features (i.e. those features
Nikon already had in the LS-2000) and it was probably cheaper to use the
same components and the same instruction set (one-pass multiscanning
does not require additional hardware). Perhaps inside the Speed,
components could be found that allow for mounting an IR LED. I never
checked it out.

That's why I hate "marketroids". Nothing infuriates me more than when
perfectly good hardware, or indeed software, is intentionally crippled
for marketing "reasons". Grrr... ;o)

Don.
 
It seems that I must have been twice lucky with my scanners, which
suggests that this luck is available to others that buy Nikons.

Indeed, you seem to have been very lucky. Alas, I'm not one of the
lucky ones and I do notice a shift.

In all the months I've had the LS-50 I only had one twin scan where
the alignment was perfect. I was so surprised I did a double take in
case I loaded the same image twice by mistake. But it was two
different scans, perfectly aligned! Made my day! ;o)

One thing which may influence the alignment is that I have been doing
my tests with slides in thin cardboard Kodachrome mounts. And since
these mounts are very thin, I thought they may "float" more easily
because Nikons do rattle a lot, so I've been playing with various
ways of "fixing" them in place. For example, by using an empty mount
with an enlarged hole (to avoid it blocking the image) and inserting
both together. But I didn't notice any improvement in alignment.
Multi-sampling using an LS50 does not give a great advantage in terms
of S/N, but it is very good when used with an LS30.

That's to be expected because of the limited dynamic range of the LS30
(which was my first Nikon too).

However, back to those pesky Kodachromes, even the LS50 doesn't go far
enough and (at least in my experience) I see a lot of noise in very
dark areas.
My way of using multisampling uses a base scan which is as sharp as it
can be. The extra samples are added to the darker and noisier regions
by using blending and opacity.

I prefer to do a 100% replace rather than blend. That's why I may be
"hypersensitive" to even the slightest misalignment.

Don.
 
Don said:
Indeed, you seem to have been very lucky. Alas, I'm not one of the
lucky ones and I do notice a shift.

In all the months I've had the LS-50 I only had one twin scan where
the alignment was perfect. I was so surprised I did a double take in
case I loaded the same image twice by mistake. But it was two
different scans, perfectly aligned! Made my day! ;o)

One thing which may influence the alignment is that I have been doing
my tests with slides in thin cardboard Kodachrome mounts. And since
these mounts are very thin, I thought they may "float" more easily
because Nikons do rattle a lot, so I've been playing with various
ways of "fixing" them in place. For example, by using an empty mount
with an enlarged hole (to avoid it blocking the image) and inserting
both together. But I didn't notice any improvement in alignment.


That's to be expected because of the limited dynamic range of the LS30
(which was my first Nikon too).

However, back to those pesky Kodachromes, even the LS50 doesn't go far
enough and (at least in my experience) I see a lot of noise in very
dark areas.


I prefer to do a 100% replace rather than blend. That's why I may be
"hypersensitive" to even the slightest misalignment.

Don.


Hello

If you inspect my LS50 image at 600% you will see 9 perfect scans. The
film is Kodachrome 25 rated at 40, without ICE, in a thin plastic mount.
I do all scans one after another without closing the Twain. My scanner
is amazingly noisy, especially during preview, with much grinding of
gears.

As far as the LS30 is concerned, the total increase in the quality of
the image far outweighs the small shifts, which are only noticeable at
300%. I'd rather have a slightly blurred image that will print
reasonably well than one with so much detail obscured by noise. The
shifts were only a pixel in 2700. The blurring would be less than that
caused by scanning a bowed slide. It is a matter of opinion and taste,
and I guess I am not so demanding.

By the way what is 100% replace?

Mike Engles
 
By the way what is 100% replace?

Oh, I just mean I don't blend the images like conventional "contrast
masking" but only use relevant parts from the two images and then join
them.

Let's say we have two scans: a nominal scan (highlights touching the
right histogram edge, but not clipping) and a "shadows scan" (exposure
boosted until all of the shadow data is out of the noise range).

I first bring the shadows scan down (using curves) to the same level
as the nominal scan. This step is usually skipped in conventional
contrast masking but I consider it absolutely essential! In the same
step I also "color synchronize" the two scans because the two
exposures result in different color responses. This enables me to
combine images in the middle of a gradient - something not really
possible with conventional contrast masking, or at best difficult to
do, and then only with a lot of corner cutting.

Next, I create a mask so the shadows scan provides shadow data and
the nominal scan provides the rest.

Easiest explained with an example. Let's say, looking at the nominal
scan, you see noise up to about 32 on the histogram. Using the above
procedure I would have anything below 32 be 100% data from the shadows
scan, and anything above it be 100% data from the nominal scan.

Because of my "color synchronization" the combined image has *no edge*
between the two images and they flow seamlessly into each other.

In all other contrast masking procedures (whether using Gaussian Blur
or blending), shadows data is "polluted" by data from the nominal
scan, and vice versa. I was not happy about that. However, in many
instances conventional contrast masking can produce acceptable results,
but it does have problems with gradients. With my method the image
content is irrelevant, but the process is more time consuming.
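A numpy sketch of the procedure as I read it (assuming pre-aligned
8-bit grayscale scans; the cutoff of 32 is Don's example value, while
the exposure-boost figure and the plain division standing in for his
curves-based level matching and "color synchronization" are my own
simplifications):

import numpy as np

def twin_scan_composite(nominal, shadows, cutoff=32, boost_stops=2.0):
    """nominal, shadows: uint8 scans of the same frame; shadows was exposed
    boost_stops higher (an assumed figure; the division is only exact for
    linear data, whereas Don actually works in gamma 2.2 space)."""
    # 1. bring the boosted shadows scan down to the nominal scan's level
    matched = shadows.astype(np.float64) / (2.0 ** boost_stops)
    # 2. hard mask: 100% shadows data below the cutoff, 100% nominal above it
    out = np.where(nominal < cutoff, matched, nominal.astype(np.float64))
    return np.clip(out + 0.5, 0, 255).astype(np.uint8)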

Don.
 
Don said:
Oh, I just mean I don't blend the images like conventional "contrast
masking" but only use relevant parts from the two images and then join
them.

Let's say we have two scans: a nominal scan (highlights touching the
right histogram edge, but not clipping) and a "shadows scan" (exposure
boosted until all of the shadow data is out of the noise range).

I first bring the shadows scan down (using curves) to the same level
as the nominal scan. This step is usually skipped in conventional
contrast masking but I consider it absolutely essential! In the same
step I also "color synchronize" the two scans because the two
exposures result in different color responses. This enables me to
combine images in the middle of a gradient - something not really
possible with conventional contrast masking, or at best difficult to
do, and then only with a lot of corner cutting.

Next, I create a mask so the shadows scan provides shadow data and
the nominal scan provides the rest.

Easiest explained with an example. Let's say, looking at the nominal
scan, you see noise up to about 32 on the histogram. Using the above
procedure I would have anything below 32 be 100% data from the shadows
scan, and anything above it be 100% data from the nominal scan.

Because of my "color synchronization" the combined image has *no edge*
between the two images and they flow seamlessly into each other.

In all other contrast masking procedures (whether using Gaussian Blur
or blending), shadows data is "polluted" by data from the nominal
scan, and vice versa. I was not happy about that. However, in many
instances conventional contrast masking can produce acceptable results,
but it does have problems with gradients. With my method the image
content is irrelevant, but the process is more time consuming.

Don.


Hello

Thanks for the explanation, but it sounds like hard work!
Do you do this in a linear space or a gamma space?
It would be interesting to see an example of the results and what the
improvement is compared to working on a single scan.

Mike Engles
 
Thanks for the explanation, but it sounds like hard work!
Do you do this in a linear space or a gamma space?
It would be interesting to see an example of the results and what the
improvement is compared to working on a single scan.

Yes, it's very hard work, but the results are quite stunning.
Essentially it turns any scanner into a variable bit depth scanner.
I'll try to post some examples but I'm a bit swamped right now. (I
haven't done any scanning for over a month.)

I do it in gamma 2.2 space, which actually makes my life more
difficult, but the problem with gamma 1.0 was that it was very hard to
discern the noise (i.e. where to set the cutoff point) because the
scan is very dark. This is because I'm wrestling with Kodachromes.

When I get some time (wishful thinking...) I would like to go back and
try it all again in gamma 1.0.
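One way to have it both ways, at least for picking the cutoff point (a
minimal sketch, assuming a float image scaled to [0,1]): keep the data
in gamma 1.0 and apply a 2.2 transform only as a view for inspection.

import numpy as np

def view_gamma(linear, gamma=2.2):
    """Display transform only -- the underlying data stays linear."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

# a linear-space level of 32/255 sits near (32/255)**(1/2.2) ~ 0.39 in the
# gamma view, i.e. in comfortably visible mid-tones
print((32 / 255) ** (1 / 2.2))              # ~0.39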

Don.
 
Don said:
Yes, it's very hard work, but the results are quite stunning.
Essentially it turns any scanner into a variable bit depth scanner.
I'll try to post some examples but I'm a bit swamped right now. (I
haven't done any scanning for over a month.)

I do it in gamma 2.2 space, which actually makes my life more
difficult, but the problem with gamma 1.0 was that it was very hard to
discern the noise (i.e. where to set the cutoff point) because the
scan is very dark. This is because I'm wrestling with Kodachromes.

When I get some time (wishful thinking...) I would like to go back and
try it all again in gamma 1.0.

Don.


Hello

I look forward to seeing the examples.

Mike Engles
 