Sprintscan 35+, 10 bits vs 8 bits

  • Thread starter: cubilcle281

cubilcle281

Hi all,

I am scanning slides with a Polaroid Sprintscan 35+. They are old
Instamatic slides (126 film) which are ~26mm square, and while the
slide will fit into most modern scanners, the top or sides get chopped
off. So for this batch of slides (and for budget reasons for now), I
am stuck with this particular scanner.

I have a lot of slides to do, so I am trying to get through them as
fast as possible while still getting the best possible quality. With
any other scanner this would be easy - scan raw and correct later.
However, the Sprintscan does 10 bits/channel internally and converts
to 8 bits/channel for output, so you really have to correct the
slides as you scan them.

Or so I thought! After setting levels for a slide I loaded the file
into Photoshop Elements and looked at the histogram. There was
distinct banding! This gives me the impression that the levels
correction is being done on the 8-bit output, rather than the 10-bits
internally that I had thought.

Can anyone confirm what is going on here? Which operations in PolaColor
are done on the internal 10-bit scan? I have heard 'gamma' mentioned
before, but what does that mean in relation to levels and curves? If
all the levels processing is done on the 8-bit output, then I would
rather work in Photoshop, but I would like to make use of the 10-bit
scan if it is at all possible.

One last thought; it is possible that the 10-bit->8-bit conversion is
only really used when doing negative scans, since the dynamic range is
so much less.

Thanks in advance,

C
 
cubilcle281 said:
Or so I thought! After setting levels for a slide I loaded the file
into Photoshop Elements and looked at the histogram. There was
distinct banding! This gives me the impression that the levels
correction is being done on the 8-bit output, rather than the 10-bits
internally that I had thought.

Can anyone confirm what is going on here? Which operations in PolaColor
are done on the internal 10-bit scan? I have heard 'gamma' mentioned
before, but what does that mean in relation to levels and curves? If
all the levels processing is done on the 8-bit output, then I would
rather work in Photoshop, but I would like to make use of the 10-bit
scan if it is at all possible.

I'm not familiar with Polaroid Sprintscan but a couple of things...

Gamma is a way to "brighten" the image without "clipping". Too
complicated to explain in detail but essentially a curve is applied to
the raw data. This is what causes the gaps in the histogram because
the area on the left is stretched while the area on the right is
compressed. Imagine the middle point dragged to the right. The extreme
points at both ends remain as they are so that's why there's no
clipping.

You may also notice "recursive waves", but those occur when the
histogram is converted from 10 to 8 bits for display.
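
To see the histogram-gap effect in isolation, here's a little Python
sketch (assuming numpy is available; untested, just to show the idea):
apply a gamma curve to all 256 possible 8-bit values and count how
many output levels actually get used.

import numpy as np

gamma = 2.2
levels = np.arange(256, dtype=np.float64)
# encode: stretch the shadows, compress the highlights ("brightening")
curved = np.round(255.0 * (levels / 255.0) ** (1.0 / gamma)).astype(int)

print("output levels used:", np.unique(curved).size, "of 256")
print("first shadow values:", curved[:8])  # widely spaced = histogram gaps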

One thing you may try is to set gamma to 1.0. That's also known as
"linear gamma". If you then turn off all the other editing features,
or set them to neutral settings, you should get a fairly smooth
histogram. Of course, that will not get you the 10-bit accuracy, but
at least the scanner software will not do any more "damage".

One option is to then open such a file in Photoshop and as the very
first step convert to 16-bit. The image will look very dark, but
that's because it's in gamma 1.0. Next, go to Levels and change the
middle point value from 1.0 to 2.2. That's the quick way. A more
complicated way is to get gamma curves from someplace and use them
e.g. http://www.aim-dtp.net/aim/download/gamma_maps.zip.

Do your edits on that. Of course, this will not recover the original
10-bits, but at least you will do all your edits in 16-bits which will
give you more elbow room.
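
For what it's worth, the same steps can be scripted. A rough sketch
(assuming Pillow and tifffile are installed; the filenames are just
placeholders, and I haven't tried this on actual Sprintscan output):

import numpy as np
from PIL import Image
import tifffile

# the 8-bit scan saved at gamma 1.0 (looks very dark), normalized to 0..1
linear = np.asarray(Image.open("scan_linear.tif"), dtype=np.float64) / 255.0

# promote to 16 bits *before* the gamma move, so the rounding lands there
bright = (linear ** (1.0 / 2.2) * 65535.0 + 0.5).astype(np.uint16)
tifffile.imwrite("scan_g22_16bit.tif", bright)  # do your edits on this file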
One last thought; it is possible that the 10-bit->8-bit conversion is
only really used when doing negative scans, since the dynamic range is
so much less.

It's possible, but not probable. If what comes off the CCDs is 10 bits
then it wouldn't make sense not to use all of them, regardless of the
media. But you never know...

Don.
 
Hmm,

It's not clear to me where your problems are originating, but let me
clear up a few details.

The CCD captures analog light and turns it into voltage. The A/D
converter is the thing converting analog voltage into discrete bytes.

The A/D converter makes 8 + 2 bytes. Not really "10" but they use the
2 bytes for accuracy I believe. So, it's pretty much always 8
bytes/channel.

Scan raw and adjust later:
Why can't you do that with the Polaroid? Did you check the scanner for
accuracy before starting? Are you letting it warm up?

Setting levels for a slide:
If you are adjusting levels and then checking the histogram and finding
banding, Photoshop is making your "banding problem."

I'm all for intelligent questions, but you are blaming a decent scanner
for some problems that most likely have little to do with the
hardware/software.

Please provide your scanning software settings.
 
mp said:
Hmm,

It's not clear to me where your problems are originating, but let me
clear up a few details.

The CCD captures analog light and turns it into voltage. The A/D
converter is the thing converting analog voltage into discrete bytes.

The A/D converter makes 8 + 2 bytes. Not really "10" but they use the
2 bytes for accuracy I believe. So, it's pretty much always 8
bytes/channel.

That's *bit* actually, not byte, a byte being made up of 8 bits.

Unless that particular scanner does something very strange, 10 bits are
10 bits -- yeah, the least significant two bits are used for "accuracy",
but so are the other eight!

Now, many scanners used to (and maybe still do) offer only 8 bits per
channel "externally" (i.e. either on the bus, or from the propertary
Twain driver), while working with more than that "internally" (i.e.
inside the scanner, or inside the Twain driver).

This, apparently, is the OP's case. However, he said that the results
"look like" (the histograms look like) only 8 bpc were used even
internally during gamma correction.

This sounds definitely strange (either it's a 10-bit scanner *somehow*,
or it isn't, after all).

Though, I have a feeling that the scanner *is* 10-bit, and the OP just
had excessively high expectations of the results one can obtain from
10 bpc: I think that, with only 10 bpc, moderate gamma correction will
quite possibly result in histogram "banding".

Most scanners now offer 16-bit scanning (i.e. they have a 16-bit A/D,
though you won't find any scanner where all the 16 bits contain
meaningful values instead of noise).
Scan raw and adjust later:
Why can't you do that with the Polaroid? Did you check the scanner for
accuracy before starting? Are you letting it warm up?

If you have a scanner with an internal bit depth higher than the
external bit depth, it's completely normal not to be able to "scan raw
and adjust later".
I mean, of course you *can* do it, but the quality is going to suffer
compared to adjusting at scan time.

Really, scanners with internal/external bit depth differences are just
stupid. I'd suggest trying SANE on Linux (though there might be a decent
build for Windows somewhere) on those scanners, as AFAIK, some/many of
those scanners really "cripple" bit depth inside the *Twain driver*, not
really the scanner itself.
Setting levels for a slide:
If you are adjusting levels and then checking the histogram and finding
banding, Photoshop is making your "banding problem."

He said he's adjusting levels at scan time (which should, in theory, be
done on a 10-bpc image), not inside Photoshop.
I'm all for intelligent questions, but you are blaming a decent scanner
for some problems that most likely have little to do with the
hardware/software.

I'm not sure.

by LjL
(e-mail address removed)
 
From what I have gathered reading over historical posts in Google
Groups, the scanner does 10 bits internally, but only outputs (as in,
leaves the scanner) 8-bit. IIRC, Ed Hamrick wasn't even able to get
more than 8 bits out of the scanner. There was some reference to him
being able to modify the gamma, and gamma correction is done in the
10-bit space. I looked into this a long time ago so I can't find any
references, but it all sounds plausible.

I should explain that I am trying to scan a couple of hundred slides,
so I am trying to avoid having to correct each one as I go if it turns
out I can do it later in Photoshop (much faster!) with no loss of
quality. Plus, I then have my digital 'archive' and I can choose to
correct only what I want/need at the time and leave the rest for later.

Therefore, what I really want to know is what is done in the 8-bit
space vs what is done in 10-bit, so I can make the best use of my time.

I have not seen any actual banding in the image, rather just banding in
the histogram. It is not a problem in itself, but just an indication
that things may be done in 8-bit.

To help me sort this out, does anyone know of a program that can
analyse a TIFF file and show the intensity values - like a histogram
but in more detail?

Thanks
 
cubilcle281 said:
From what I have gathered reading over historical posts in Google
Groups, the scanner does 10 bits internally, but only outputs (as in,
leaves the scanner) 8-bit. IIRC, Ed Hamrick wasn't even able to get
more than 8 bits out of the scanner. There was some reference to him
being able to modify the gamma, and gamma correction is done in the
10-bit space. I looked into this a long time ago so I can't find any
references, but it all sounds plausible.

I see. Well, in this case I suppose you have no choice but to live with
8-bit external.
I should explain that I am trying to scan a couple of hundred slides,
so I am trying to avoid having to correct each one as I go if it turns
out I can do it later in Photoshop (much faster!) with no loss of
quality. Plus, I then have my digital 'archive' and I can choose to
correct only what I want/need at the time and leave the rest for later.

Therefore, what I really want to know is what is done in the 8-bit
space vs what is done in 10-bit, so I can make the best use of my time.

Why don't you
1) scan image1 at linear gamma
2) correct image1 in Photoshop to gamma 2.5
3) scan image2 at gamma 2.5
4) compare image1 and image2

Step 4 could probably be done in various ways, the most obvious of which
(to me) would be to compare the histograms.
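
For instance, a sketch in Python (assuming Pillow and numpy; the
filenames are placeholders): count the empty one-level "comb" gaps
inside each histogram's occupied range. If the driver's gamma really
runs in 10-bit space, image2 should have noticeably fewer gaps.

import numpy as np
from PIL import Image

def comb_gaps(path):
    pixels = np.asarray(Image.open(path).convert("L")).ravel()
    hist = np.bincount(pixels, minlength=256)
    lo, hi = hist.nonzero()[0][[0, -1]]       # occupied range of the histogram
    return int((hist[lo:hi + 1] == 0).sum())  # empty bins inside that range

for name in ("image1.tif", "image2.tif"):
    print(name, "has", comb_gaps(name), "gaps in its histogram")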

I have not seen any actual banding in the image, rather just banding in
the histogram. It is not a problem in itself, but just an indication
that things may be done in 8-bit.

As I said, you're right saying *may*.
10-bit is not that much: I think banding could easily arise in the most
significant 8 bits from gamma-correcting a 10-bit image.

I think you should try this as well:
1) scan at gamma 1.1
2) check if there is *any* banding in the histogram
To help me sort this out, does anyone know of a program that can
analyse a TIFF file and show the intensity values - like a histogram
but in more detail?

ImageMagick's "identify", or the equivalent NetPBM tool I can't remember
the name of -- pnmhist I'd think.
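
Failing those, a few lines of Python with Pillow would dump the exact
pixel count for every intensity value ("slide.tif" standing for
whatever file you want to inspect):

from PIL import Image

img = Image.open("slide.tif").convert("L")  # collapse to a single channel
for value, count in enumerate(img.histogram()):
    if count:
        print(value, count)  # every populated level, with its exact count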


by LjL
(e-mail address removed)
 
cubilcle281 said:
I should explain that I am trying to scan a couple of hundred slides,
so I am trying to avoid having to correct each one as I go if it turns
out I can do it later in Photoshop (much faster!) with no loss of
quality. Plus, I then have my digital 'archive' and I can choose to
correct only what I want/need at the time and leave the rest for later.

Therefore, what I really want to know is what is done in the 8-bit
space vs what is done in 10-bit, so I can make the best use of my time.

Time is, of course, important, as is bit depth. However, one other
thing to consider is how good are those edits in a scanner program?

I mean, you are working through the preview "keyhole" rather than the
full Photoshop display with up to 1600% magnification. This means
that some of those inexact edit decisions made based on the limited
scanner program features may cancel out any advantages of it
(perhaps?) working in 10 bits.
I have not seen any actual banding in the image, rather just banding in
the histogram. It is not a problem in itself, but just an indication
that things may be done in 8-bit.

Some histogram artefacts are actually introduced by the display
algorithm. Even Photoshop suffers from that. A case in point is the
"waves" I referred to in my last message.

To be really sure you need a histogram which goes beyond 8-bit. That's
why I wrote my own true 16-bit histogram.
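
The idea is simple enough to approximate in a few lines of Python, for
anyone who wants one (a sketch assuming numpy and tifffile, with
"scan16.tif" as a placeholder for any 16-bit file):

import numpy as np
import tifffile

data = tifffile.imread("scan16.tif").ravel()  # 16-bit scan
counts = np.bincount(data, minlength=65536)   # one bin per 16-bit level
print((counts > 0).sum(), "of 65536 levels populated")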
To help me sort this out, does anyone know of a program that can
analyse a TIFF file and show the intensity values - like a histogram
but in more detail?

Before I wrote the above mentioned stand-alone histogram program, I
used this free Photoshop plug-in:

http://www.reindeergraphics.com/free.shtml

However, that one only goes up to 12-bits.

The thing is, it won't do you much good if the data you're looking at
is only 8 bits. I guess what you really want is your scanner program's
histogram to display more than 8 bits, before the file is exported. A
bit of a Catch-22. :-(

Don.
 
Don said:
Time is, of course, important, as is bit depth. However, one other
thing to consider is how good are those edits in a scanner program?

I mean, you are working through the preview "keyhole" rather than the
full Photoshop display with up to 1600% magnification. This means
that some of those inexact edit decisions made based on the limited
scanner program features may cancel out any advantages of it
(perhaps?) working in 10 bits.

I would suggest this: scan raw, then do the adjustments in Photoshop
*and take note of the adjustments that have been made*, then scan again
using the same adjustments that were applied in Photoshop.

No doubt that this does take time; however, perhaps by scanning at a
lower resolution in the "preview" scans, and through some smart batch
workflow, a reasonable compromise might be obtained.
[snip]

The thing is, it won't do you much good if the data you're looking at
is only 8 bits. I guess what you really want is your scanner program's
histogram to display more than 8 bits, before the file is exported. A
bit of a Catch-22. :-(

Yeah.
But in any case (though I know that here we'll go again...), I think
that doing adjustments at scan time still has an advantage -- that is,
if the scanner really does scan at 10-bit.

For example, setting the whitepoint and blackpoint at scan time should
do no damage and can improve the results, if the scanned image doesn't
cover the whole range of values.
Note that you *can't* precisely know the exact values for the whitepoint
and the blackpoint from a low-resolution preview, so you should either
take some safety margin, or be prepared to scan again (some
trial-and-error, which could be made automatic, though) in case of clipping.

I think adjusting gamma could be a little more problematic, without
seeing a full-size preview first. Still, I think it can be done safely
with many images.

More sophisticated adjustments are left as an exercise to the reader,
who will surely find many very dangerous caveats, at least if his name
is Don :-)


by LjL
(e-mail address removed)
 
LjL said:
But in any case (though I know that here we'll go again...), I think
that doing adjustments at scan time still has an advantage

You are right it has huge advantages. IMHO, it's the best way to
capture the most data.
Don said:
you are working through the preview "keyhole" rather than the
full Photoshop display with up to 1600% magnification
Long ago, there was some truth to this. With good scanning software,
the display is color managed just like Photoshop. Even the cheap
Canon 4200F software manages the display.

The ideal scanner workflow is to color correct in the scanning software
to get things close, then finish in your image editor. Odds are
excellent you will end up with more data to send to the printer.
 
LjL said:
I would suggest this: scan raw, then do the adjustments in Photoshop
*and take note of the adjustments that have been made*, then scan again
using the same adjustments that were applied in Photoshop.

No doubt that this does take time

That's one way. But, as you say, it's time consuming and he said he
wanted to speed things up.
But in any case (though I know that here we'll go again...), I think
that doing adjustments at scan time still has an advantage -- that is,
if the scanner really does scan at 10-bit.

That's a big if and - unless he goes through the above procedure - his
editing decisions will be made based on the preview keyhole i.e. they
will be wrong.
For example, setting the whitepoint and blackpoint at scan time should
do no damage and can improve the results, if the scanned image doesn't
cover the whole range of values.

No, it doesn't, simply because you're basing those crucial decisions
on the *preview image* histogram!

Again, that histogram is based on the *tiny* preview image! Not only
that, but most likely the preview was done in *8-bit*! (BTW, NikonScan
previews at 16-bit.)
I think adjusting gamma could be a little more problematic, without
seeing a full-size preview first. Still, I think it can be done safely
with many images.

Actually, gamma is about the only thing you can do safely in scanning
software. The setting is known and fixed at 2.2 (or 1.8 for Macians).
More sophisticated adjustments are left as an exercise to the reader,
who will surely find many very dangerous caveats, at least if his name
is Don :-)

Just the facts! :o)

The root problem, Lorenzo, is that no matter how hard you try - and
you do try very hard! ;o) - you just can't get around the fact that
applying changes at scan time is a bad idea in all but the most
trivial of cases.

Doing that means basing major editing decisions on the preview keyhole
and that will never be as good as working on the full image. Not to
mention all the other problems (enumerated earlier) associated with
scanning software editing.

Don.
 
Long ago, there was some truth to this. With good scanning software,
the display is color managed just like Photoshop. Even the cheap
Canon 4200F software manages the display.

You're missing the key point. It's not about color management.

That scanning software histogram is of a *tiny* preview image! What's
more many scanning programs do this preview in *8-bit* for speed! So,
you'll be basic major editing decisions on guesswork.
The ideal scanner workflow is to color correct in the scanning software
to get things close, then finish in your image editor. Odds are
excellent you will end up with more data to send to the printer.

No, actually doing color correction at scanning stage is about the
*worst* thing you can do! You may get away with setting the black and
white point sometimes... maybe... if you're very conservative with
clipping... But color correcting in scanner software is *very*
destructive and inexact!

The ideal procedure is to scan raw at maximum magnification and
bit-depth (optionally in linear gamma) and then do all editing in
external software.

After all, scanning software was written to scan, not to edit. That's
what external editors are for.

Don.
 
Don said:
On Tue, 18 Oct 2005 21:04:35 +0200, "Lorenzo J. Lucchini"

[snip]
But in any case (though I know that here we'll go again...), I think
that doing adjustments at scan time still has an advantage -- that is,
if the scanner really does scan at 10-bit.


That's a big if

Yes, but I think this can be found out with a bit of testing: I
described some ideas for doing it in another article.
and - unless he goes through the above procedure - his
editing decisions will be made based on the preview keyhole i.e. they
will be wrong.

Where do you place the line that separates a scan from a "keyhole"?
A preview doesn't necessarily have to be made at 100dpi or so.

Now, no matter how high your preview's resolution, you won't have *all*
the necessary information until you take your preview at the *same*
resolution as the final scan; however, with a decent compromise between
resolution and scanning speed, I think you can come pretty darn close.
No, it doesn't, simply because you're basing those crucial decisions
on the *preview image* histogram!

Again, that histogram is based on the *tiny* preview image!

Yes, but all you need from that histogram is the whitepoint and the
blackpoint.

Those *will not* be correct in the smaller preview, agreed.
But this error will result in two possible situations: either you end up
with a clipped image, or you end up with an image where the whitepoint
and blackpoint are not as "stretched" as they could have been (ideally,
1 and 254).

In the latter case, oh well, you can't have everything, but you've still
gained something.

The former case (clipping) is a little tougher, but as I said, you can
allow some margin to avoid that.
Also, it's not a case of "hmm, wonder whether I made the right
adjustments from that tiny preview": either the image clips, or it
doesn't. If it does, scan again, and consider increasing your safety
margin next time.
Not only
that, but most likely the preview was done in *8-bit*! (BTW, NikonScan
previews at 16-bit.)

I don't see how this matters for setting whitepoint and blackpoint...
Yeah, you'll only be able to choose among 256 values rather than 65536,
but I'd be very happy if this was the only problem! Whitepoint at 253.74
instead of 254... terrible! ;-)

But as you correctly say, the big problem is the limited preview
resolution, not the bit depth -- resolution *can*, at least under
extreme conditions, give you a white/blackpoint that's off by *much*,
possibly making the scan clip; bit depth can't.
Actually, gamma is about the only thing you can do safely in scanning
software. The setting is known and fixed at 2.2 (or 1.8 for Macians).

Yes, but I wasn't thinking about that, I was thinking about using gamma
to adjust the original image (not a very conventional use of gamma
perhaps, but I think it's quite a common edit).
Just the facts! :o)

The root problem, Lorenzo, is that no matter how hard you try - and
you do try very hard! ;o) - you just can't get around the fact that
applying changes at scan time is a bad idea in all but the most
trivial of cases.

Hmm... I can't really deny this when speaking about what I do with *my*
scanner, since *I* could scan at 16-bit, but instead prefer to scan at
8-bit and make some scan time adjustments.
In this case, you're obviously right (though, you know, people make
compromises for speed, storage space and all, and I don't think mine is
so terrible a compromise if done carefully).

But in the OP's case, he *cannot* scan at 10-bit (let alone 16-bit),
instead he *has* to work with 8-bit data.
In his case, I'd be very careful to say "just kiss goodbye to the lowest
two bits, and deal with the rest", even if the only other option is
messing with scan time settings.

After all, you don't like scan-time edits, but you don't seem to like
throwing away bits, either. It's a matter of compromise, I think -- a
compromise that, in my case, could be avoided altogether (by scanning at
16-bit), but can't in the OP's case.

Anyway, you said gamma=2.2 or gamma=1.8 is something that can safely be
done at scan time; and white/blackpoint adjustment can also be done at
scan time, though not "safely", but the worst that can happen is having
to scan again, in case of clipping.

I think these two alone are worth considering.
Also, if you have decent film curves available for the film you're
scanning, I'd throw them in, too: I think they can only help. Yeah, they
*could* cause damage if the film was developed very badly, or
something... but what the heck, just scan a test frame and decide.

by LjL
(e-mail address removed)
 
So, you'll be basic major editing decisions on guesswork.

Correction:

So, you'll be *basing* major editing decisions on guesswork.
The ideal procedure is to scan raw at maximum magnification and
bit-depth (optionally in linear gamma) and then do all editing in
external software.

Clarification:

The ideal procedure is to scan raw at *native resolution*... etc.

Don.
 
LjL said:
Where do you place the line that separates a scan from a "keyhole"?

Just below native resolution. Anything less than native resolution and
it's guesswork. That's the nominal answer and you can argue how much
(or little) error is there but that's not the point.

A preview isn't anywhere *near* that native resolution! In case of
preview we're talking about several orders of magnitude, not a few
pixels here and there.
A preview doesn't necessarily have to be made at 100dpi or so.

Not strictly true as most scanning programs don't even offer that
option (or at the very least limit it severely). Instead, most
scanning programs preview at what they consider "enough" resolution.

But let's assume you can set the preview to 100% i.e. native
resolution. What have you achieved with that? Nothing! You're still
stuck in the scanner program with an inadequate and limited set of
editing tools.
Now, no matter how high your preview's resolution, you won't have *all*
the necessary information until you take your preview at the *same*
resolution as the final scan;

There you go! So why argue otherwise?
however, with a decent compromise between
resolution and scanning speed, I think you can come pretty darn close.

"Close only counts in grenades and horse shoes" as the saying goes.

You can come up with all sorts of convoluted procedures to make any
statement "true" but you only end up with a "cure worse than the
disease".

Since I seem stuck in a quoting mode ;o) here's one more:

"You can put lipstick on a pig, but it's still a pig!" ;o)
Yes, but all you need from that histogram is the whitepoint and the
blackpoint.

Which will be only as accurate as the histogram.

The tiny preview image histogram is massively inaccurate.

Ergo: Your B&W points based on such a histogram will be equally
massively inaccurate.
Those *will not* be correct in the smaller preview, agreed.

There you go #2! Why argue otherwise when you know this?
But this error will result in two possible situations: either you end up
with a clipped image, or you end up with an image where the whitepoint
and blackpoint are not as "stretched" as they could have been (ideally,
1 and 254).

So, once again, you end up with a convoluted "solution" i.e. a cure
worse than the disease.
In the latter case, oh well, you can't have everything, but you've still
gained something.

No, you haven't! All you "gained" is having to perform multiple
previews and still end up with an inaccurate setting.
Also, it's not a case of "hmm, wonder whether I made the right
adjustments from that tiny preview": either the image clips, or it
doesn't. If it does, scan again, and consider increasing your safety
margin next time.

You're tying yourself in knots to do something which is unnecessary
just to "prove" a point which you know is wrong.

Like I said above, you end up with a convoluted "solution" only doing
even more damage. Those multiple previews ending up with a guess is a
cure worse than the disease.
I don't see how this matters for setting whitepoint and blackpoint...
Yeah, you'll only be able to choose among 256 values rather than 65536,
but I'd be very happy if this was the only problem! Whitepoint at 253.74
instead of 254... terrible! ;-)

Yes, it is because...
But as you correctly say, the big problem is the limited preview
resolution, not the bit depth -- resolution *can*, at least under
extreme conditions, give you a white/blackpoint that's off by *much*,
possibly making the scan clip; bit depth can't.

There you go #3! So, you know all this, but still keep arguing against
it!?

All I need to do, is just sit here and you'll make all my points for
me! ;o)
Hmm... I can't really deny this when speaking about what I do with *my*
scanner, since *I* could scan at 16-bit, but instead prefer to scan at
8-bit and make some scan time adjustments.
In this case, you're obviously right (though, you know, people make
compromises for speed, storage space and all, and I don't think mine is
so terrible a compromise if done carefully).

I make compromises myself all the time. That's not the point. The
point is I know I make them, and I don't try to defend them. I know
they are less than perfect and I'm OK with that because it makes sense
in the given context. It's a subjective decision and I stand by it.
But I don't pretend it's an objective decision which applies to
others. It may, if they have similar requirements, but it may not. So
I don't try to defend it as anything other than a subjective decision
within a narrowly defined context.

The problem is (and I'm talking in general terms now, not necessarily
related to this discussion) people often *want* their compromise to be
the "perfect" solution, and they tie themselves in knots trying to
justify that compromise. In other words, they themselves are very
unhappy with the compromise but instead of looking for an optimal
solution they stick with the sub-standard compromise and try to defend
the indefensible. In the end they just get into more and more trouble
as facts clearly prove otherwise.
But in the OP's case, he *cannot* scan at 10-bit (let alone 16-bit),
instead he *has* to work with 8-bit data.
In his case, I'd be very careful to say "just kiss goodbye to the lowest
two bits, and deal with the rest", even if the only other option is
messing with scan time settings.

You're missing the point! Struggling to *try* (!) and keep those two
bits (without really knowing if that attempt is successful) will
produce *worse* results than by using the alternative option where you
know what you're doing. The point is he isn't even sure if those two
bits are thrown away before scanner software applies any edits!

So, on balance, the safest option is not to risk any more damage but
get the best you can (i.e. with as little interference from the
scanner software "editing" as possible), up the bit depth afterwards
and edit in proper editing software.

Given those two options:

1. *Hope* that scanner software edits are in 10 bits and work through
the preview keyhole doing massive damage which more than cancels out
any 2-bit gain! And all along not even knowing if the 2 bits are used!

2. *Know* that externally scaled output is in true 16-bit and work
with the full complement of tools an external editor has knowing
you're getting the most out of available data.

In such a case option 2 will always produce superior results.

Don.
 
Don said:
Just below native resolution.

Oh... ok.
[snip]
A preview doesn't necessarily have to be made at 100dpi or so.

Not strictly true as most scanning programs don't even offer that
option (or at the very least limit it severely). Instead, most
scanning programs preview at what they consider "enough" resolution.

Well, that can be a problem if a scanner isn't supported by anything
other than the native driver. Which, unfortunately, seems to be the OP's
case -- no SANE support and no SilverFast support, though he said
VueScan has support.
But let's assume you can set the preview to 100% i.e. native
resolution. What have you achieved with that? Nothing! You're still
stuck in the scanner program with an inadequate and limited set of
editing tools.

Unless the software is really bad, I don't see why it's all so
inadequate. Working histogram, curves and levels should be more than enough.
There you go! So why argue otherwise?

Never argued otherwise. I said you won't have *all* the necessary
information (needed to make a "perfect" scan), but I think you do have
*some* of the necessary information (allowing to make a better scan than
making use of *no* available information at all).
"Close only counts in grenades and horse shoes" as the saying goes.

You can come up with all sorts of convoluted procedures to make any
statement "true" but you only end up with a "cure worse than the
disease".

I can see how the cure would be "not so much better than the disease",
but no more.
Then whether the cure is *worth* doing, wrt cost/benefit, is something
that should be decided on an individual basis.
Since I seem stuck in a quoting mode ;o) here's one more:

"You can put lipstick on a pig, but it's still a pig!" ;o)

If you're stuck with the pig, then a better pig may still be an
improvement for you.

Obviously I don't know if the OP is really "stuck" with his pig, I mean
scanner, but he didn't ask whether we think he should buy another scanner.
Which will be only as accurate as the histogram.

The tiny preview image histogram is massively inaccurate.

Ergo: Your B&W points based on such a histogram will be equally
massively inaccurate.

Bah... sorry, I just can't agree. Yeah, they're inaccurate. For me,
though, they're good enough to set the exposure time to a reasonable
value (you know that setting exposure is the same as setting the
whitepoint for me).
"Reasonable" means that they allow to avoid clipping in the vast
majority of cases, while giving a very real quantization advantage for
underexposed frames.
There you go #2! Why argue otherwise when you know this?

Not arguing otherwise. They are not correct.
That something isn't "correct", though, has never meant that it can't be
close enough.

And if you don't like "close enough"... well, you should probably get an
infinite bit depth and resolution scanner (as well as film, as well as a
perfect lens) in the first place!
So, once again, you end up with a convoluted "solution" i.e. a cure
worse than the disease.

Worse? No.
Worse only in the case that you end up with clipping. That's easily
solved by scanning again, and is not something that should happen
anywhere near often.
No, you haven't! All you "gained" is having to perform multiple
previews and still end up with an inaccurate setting.

No. You've performed *one* preview and *one* scan, and you've ended up
with a setting that's more accurate than *no* setting, though not 100%
accurate.
You're tying yourself in knots to do something which is unnecessary
just to "prove" a point which you know is wrong.

Unnecessary? Of course it's unnecessary. Do you think that multi-pass
multi-scanning with sub-pixel alignment is "necessary"?

I'm just trying to describe how to obtain as much information as
possible from a 10-bit internal / 8-bit external scanner.
Whether the complexity of the process amounts to "tying oneself in
knots" is something that *each individual scanner user* should decide.
Like I said above, you end up with a convoluted "solution" only doing
even more damage. Those multiple previews ending up with a guess is a
cure worse than the disease.

Demonstrate this please.

Say I have an image with whitepoint=50.
I take a preview. The preview tells me that whitepoint=47.
I take the scan, setting whitepoint=57 (47 + 1/5*47, rounded up) to be on the safe side.
I obtain an image that has whitepoint=223.

A "perfect" scan would have had whitepoint=254, so I definitely haven't
obtained a "perfect" scan.

But tell me exactly why having 223 values would be worse than having
only 50 of them.

Oh, but wait, are all those 223 values "real"? No. With 10 bits per
channel, only 200 of them will be "real".

Well, it appears that I've gained a net 150 pixel values, or if you
prefer, I have 4 times less quantization than I would have had if I
didn't set the whitepoint.

Tell me again why this is considered bad.
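
(For anyone who wants to check the arithmetic, in plain Python:)

true_wp, scan_wp = 50, 57               # real whitepoint vs. the setting used
stretched = int(true_wp / scan_wp * 255)
real = true_wp * 4                      # 10 bits = 4 sublevels per 8-bit step
print(stretched, real)                  # 223 output slots holding 200 real levels
print(real / true_wp)                   # 4.0 times finer than not stretching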
Yes, it is because...

... because?
I don't think you're explaining why this is, below.

There you go #3! So, you know all this, but still keep arguing against
it!?

Well... yes.
Anyway, whitepoint at 253.74 instead of at 254 (an example of what could
be caused by a preview's *low bit depth*, not resolution, i.e. 256
values rather than 65536) is terrible because...?
All I need to do, is just sit here and you'll make all my points for
me! ;o)

Having A helps obtaining B.
I don't have A, therefore I can't obtain B.
Yes?

I make "your" points because I think they're true. Would you prefer I
hid them and denied them just to make my argument artificially sound
stronger?
(Well, I've found this to be common practice on Usenet, but it's not one
of the Usenet customs I like best)
I make compromises myself all the time. That's not the point. The
point is I know I make them, and I don't try to defend them. I know
they are less than perfect and I'm OK with that because it makes sense
in the given context. It's a subjective decision and I stand by it.
But I don't pretend it's an objective decision which applies to
others. It may, if they have similar requirements, but it may not. So
I don't try to defend it as anything other than a subjective decision
within a narrowly defined context.

What do you mean by "defending" a compromise?
If you mean something on the lines "my compromise works as good as your
no-compromise approach", well, that's just the kind of thing that
*can't* be true of a compromise.

If you mean "I have arguments showing that, in this context and given my
requirements, my compromise is better than other possible compromises",
I feel like I have every right to defend it.

In any case, while this stuff we're talking about is a compromise *in my
case*, it is not for the original poster, who just *can't* decide what
bit depth to scan at with his scanner.

So, at best, the compromise I'm talking about in his case would be that
of *not buying a better scanner and trying to take the best from the one
he's got*.
Which, of course, is a subjective decision, so if he doesn't like this
compromise he can definitely go buy a new scanner, and I'll have no
objection.
The problem is (and I'm talking in general terms now, not necessarily
related to this discussion) people often *want* their compromise to be
the "perfect" solution, and they tie themselves in knots trying to
justify that compromise. In other words, they themselves are very
unhappy with the compromise but instead of looking for an optimal
solution they stick with the sub-standard compromise and try to defend
the indefensible. In the end they just get into more and more trouble
as facts clearly prove otherwise.

Hope I'm not unknowingly falling down into digital hell, then.
You're missing the point! Struggling to *try* (!) and keep those two
bits (without really knowing if that attempt is successful) will
produce *worse* results than by using the alternative option where you
know what you're doing. The point is he isn't even sure if those two
bits are thrown away before scanner software applies any edits!

Don, excuse me, but are you just ignoring the fact that I've told you
and the OP, more than once, how to try to find out whether those two
bits are indeed used or not?

My approach for finding that out might be flawed, but I do clearly
recognize that, first of all, the OP should test whether his scanner is
really 10-bit or not.
So, on balance, the safest option is not to risk any more damage but
get the best you can (i.e. with as little interference from the
scanner software "editing" as possible), up the bit depth afterwards
and edit in proper editing software.

The safest option is to check whether those two bits exist or not first,
and then behave consequently.
If it turns out that there is no method to tell with a good degree of
certainty whether those darn bits are there or not, then yes, the safest
method is probably the one you say.

Anyway, my "what-if" scenario in these posts has been based on the
finding of the existence of ten binary digits per channel in the analog
to digital conversion performed inside the scanner, and on the
consequent inclusion of all ten binary digits during any image
processing taking place at the firmware or driver level.
*Phew*.
Given those two options:

1. *Hope* that scanner software edits are in 10 bits and work through
the preview keyhole doing massive damage which more than cancels out
any 2-bit gain! And all along not even knowing if the 2 bits are used!

I still don't see valid arguments in favour of a "massive damage
cancelling out the gain".
At best, I've seen arguments that "having to scan again in case you've
got clipping at first try is time consuming and not worth the effort".
Which can be true to some degree, but is quite different.
2. *Know* that externally scaled output is in true 16-bit and work
with the full complement of tools an external editor has knowing
you're getting the most out of available data.

In such a case option 2 will always produce superior results.

What do you mean "externally scaled output"? If you mean that the output
from the scanner is 16-bit, well, we know the OP's scanner just can't do
that.

If you mean *converting* the scanner's output to 16-bit and then working
on that, this may be valid advice; however, I don't see why this
couldn't be done *together* with option 1.

That is, IIUC, options 1 and 2 are not mutually exclusive.


by LjL
(e-mail address removed)
 
LjL said:
Unless the software is really bad, I don't see why it's all so
inadequate. Working histogram, curves and levels should be more than enough.

I explained all that several times already, including that I'm not
saying everyone must use an external editor. I'm just stating the
facts. It's up to each user to decide what they want or need. If one
is happy with scanner editing tools, more power to them! But if one
says they are equal to an external editor, that's simply inaccurate,
for the many reasons outlined earlier.
Bah... sorry, I just can't agree. Yeah, they're inaccurate.

That's inconsistent. You can't say you disagree and then in the same
breath confirm what I just said (i.e. that what you disagree with is
actually correct).
For me,
though, they're good enough to set the exposure time to a reasonable
value (you know that setting exposure is the same as setting the
whitepoint for me).

The two key words being "for me". We are not talking about subjective
judgment. As I keep repeating, if it works for you, fine. But that
does not make it objectively true. It may mean you have lower
expectations or any number of other things. Nothing wrong with that,
of course. What is wrong, however, is to assume that your personal
preferences translate into objective statements. They don't.

Meta level diversion:

I think there's a basic misunderstanding of the principle of what I'm
saying. When I'm stating generic facts I'm not advocating any
particular application or use of these facts. I'm just offering
pertinent information. It's then up to each reader to decide what to
do with those facts: ignore them completely, use them fully, use them
only in part, ... etc.
Demonstrate this please.

I have! The inexactness of the scanner software environment causes
vastly inaccurate settings.
Say I have an image with whitepoint=50.
I take a preview. The preview tells me that whitepoint=47.
I take the scan, setting whitepoint=57 (47 + 1/5*47, rounded up) to be on the safe side.

That's just not realistic. The clipping settings are on the order of
0.3% to 0.5% and you're bracketing with 20%!?

....
Tell me again why this is considered bad.

Because it starts with a wrong premise and then goes downhill from
there.

This is a prime example of tying yourself in knots trying to justify
this wrong premise and trying to solve a problem that doesn't exist.

If one is so concerned with accuracy, one should simply edit in an
external editor and all of the above "problems" disappear.
... because?
I don't think you're explaining why this is, below.

Because you explained it yourself:
....
....
I make "your" points because I think they're true. Would you prefer I
hid them and denied them just to make my argument artificially sound
stronger?

No, no, no, you misunderstand. What I'm referring to is: You object to
the facts but then immediately add in the following sentence that the
facts are actually true. You do this all the time.

It's inconsistent. So what I'm saying above has nothing to do with
style but with substance which is contradictory.
(Well, I've found this to be common practice on Usenet, but it's not one
of the Usenet customs I like best)

I don't even answer such messages. If one can't "agree to disagree
agreeably" then there's no point. I have no interest in shouting and
calling people names.
What do you mean by "defending" a compromise?

For example, your attempt above at coming up with a convoluted
clipping margin of 20%.
If you mean something on the lines "my compromise works as good as your
no-compromise approach", well, that's just the kind of thing that
*can't* be true of a compromise.

If you mean "I have arguments showing that, in this context and given my
requirements, my compromise is better than other possible compromises",
I feel like I have every right to defend it.

None of the above. It's number 3: "There is a perfect solution already
available (edit in an external editor). However, I want to use scanner
software to edit. But it's bad. So I will tie myself in knots trying
to work around its inaccuracies - but not succeeding. I'll end up with
worse results than easily available solution of just using an external
editor. I agree that's the ideal solution, but I will continue to
defend the inferior and inexact attempt at a workaround in scanner
software."
In any case, while this stuff we're talking about is a compromise *in my
case*, it is not for the original poster, who just *can't* decide what
bit depth to scan at with his scanner.

Which - as I've shown - is irrelevant because even if he could make
scanner software work at 10 bits internally, due to all the outlined
problems, that would still be worse than using the full complement of
tools on 8-bit data in an external editor.
Don, excuse me, but are you just ignoring the fact that I've told you
and the OP, more than once, how to try to find out whether those two
bits are indeed used or not?

But that's simply irrelevant for the reasons I just explained in the
previous paragraph. What's the point of wasting time trying to find
that out when in the end - even if true (!) - the end result would
still be inferior to editing in an external editor?
My approach for finding that out might be flawed, but I do clearly
recognize that, first of all, the OP should test whether his scanner is
really 10-bit or not.

It just doesn't matter. The only thing that would matter is if he
could actually get those 10-bits out. But he can't.
Anyway, my "what-if" scenario in these posts has been based on the
finding of the existence of ten binary digits per channel in the analog
to digital conversion performed inside the scanner, and on the
consequent inclusion of all ten binary digits during any image
processing taking place at the firmware or driver level.
*Phew*.

What you're ignoring is that it doesn't matter as I just explained.

If the only place he can use those 10-bits is within the scanner
software then it's "Game Over". No need to look any further.
I still don't see valid arguments in favour of a "massive damage
cancelling out the gain".
At best, I've seen arguments that "having to scan again in case you've
got clipping at first try is time consuming and not worth the effort".
Which can be true to some degree, but is quite different.


What do you mean "externally scaled output"? If you mean that the output
from the scanner is 16-bit, well, we know the OP's scanner just can't do
that.

If you mean *converting* the scanner's output to 16-bit and then working
on that, this may be valid advice;

That's what I mean, of course.
however, I don't see why this
couldn't be done *together* with option 1.

That is, IIUC, options 1 and 2 are not mutually exclusive.

No, they are not - but only theoretically. In practice, since 1 causes
irreparable damage it (a priori) negates any potential benefits of 2.
Worse still! It goes beyond any benefits of 2.

One final meta level observation:

If what I write sometimes appears "picky" this could be because the
intentions are misunderstood. The intention is *not* to be picky but
to point an (objectively) important aspect. I don't expect this will
be important to everyone (subjectively). However, in that case they
can just ignore it. The problem only occurs when they try to use
subjective assertions to counter objective fact.

I mean, the normal reaction to such a statement - one offering
additional information (which is my only intention) and made in a
non-confrontational, matter-of-fact fashion (i.e. not argumentative,
but simply and calmly stating facts) - would be:
A. Thanks, but no thanks! I don't care for such level of detail
B. Huh! I never thought of that! Thanks!

I'm perfectly OK with either reaction.

Don.
 
Don said:
I explained all that several times already, including that I'm not
saying everyone must use an external editor. I'm just stating the
facts. It's up to each user to decide what they want or need. If one
is happy with scanner editing tools, more power to them! But if one
says they are equal to an external editor, that's simply inaccurate,
for the many reasons outlined earlier.

I'm not saying that one should not use an external editor because the
driver-integrated tools are equivalent. They are obviously not, with any
dedicated editor having many more tools.

I'm saying that the tools the scanner driver does have (typically
curves, level and a histogram) don't have any specific flaws that make
them worse than what's in Photoshop.

(That's a general statement, of course I suppose you can find a scanner
driver with irremediably bugged curves and levels...)

Any flaws are thus not in the *tools*, but in the fact you're applying
them based on a preview -- which is what we are discussing.

But if you maintain that it's the *scanner driver tools* themselves that
are inadequate, then please explain how they are. The "many reasons
outlined earlier" simply refer to something else.

That's inconsistent. You can't say you disagree and then in the same
breath confirm what I just said (i.e. that what you disagree with is
actually correct).

You said they're massively inaccurate.
I said they are inaccurate.

Clearly I interpreted your "massively" as meaning "too much, for any
purpose", which is what I do not agree with.

The two key words being "for me". We are not talking about subjective
judgment. As I keep repeating, if it works for you, fine. But that
does not make it objectively true.

Nor objectively false.

Anyway, my "for me" was not really intended as to imply subjective
judgement, although it does probably sound as such.

"For me" = "With my own scanner, they're good enough to set decent (i.e.
not-clipping, but still narrower than 0..255) w/b-point values at first
try in the vast majority of cases. Others may find that they'll often
have to make multiple trial scans (perhaps too many for their liking)
to obtain the same result".

Because, you see, I maintain that a good result is *always* obtainable:
in the worst case, with a very bad scanner, you'll have to take a number
of "trash scans" before coming out with a good scans.

Now, yeah, *really* having to do *that* could be labeled as "tying
oneself in knots", I admit: you'd come up with a good result, but only
through such a time-consuming process that, I guess, nobody would do it
in practice.

However, the fact that it doesn't get (nearly) to such an extreme with
my scanner makes me infer that it'll be workable with many other scanners.
It may mean you have lower
expectations or any number of other things.

Actually, what I'm trying to say is that I suspect the OP could obtain
*better* results by exploiting the internal 10-bits than by ignoring them.

You may disagree with this statement, but saying that it may mean I have
lower expectations than someone else just does not make any sense.
Nothing wrong with that,
of course. What is wrong, however, is to assume that your personal
preferences translate into objective statements. They don't.

No preferences.

I stand by this: if you own a 10-bit internal / 8-bit external scanner,
then by taking a less-than-full-resolution scan ("preview") followed by
one or more full-resolution scans, you can *always* obtain more
information content than by using a "standard procedure" (i.e. just scan).

In particular, here is the complete algorithm, so that no doubts need
remain:

1) Take a scan with bp=0, wp=255, using any resolution ("preview")
2) Take a full-resolution scan using bp=preview_bp, wp=preview_wp
3) Repeat step 2 if clipping is found

It's quite simple, isn't it? Well, I haven't mentioned using a "safety
margin" and that kind of thing, but it's not strictly necessary.

*Please* demonstrate what is flawed in the algorithm above. All I can
see from it is that the information content in the final scan will be
higher than by "just scanning" for any given underexposed or overexposed
scan target.

Now, *that* specific algorithm, not having any safety margin, is a bit
impractical to use, since it is bound to force the user to scan more
than once at full resolution.

So, you might find it too slow to use, but *that's your subjective
opinion and preference*; it's not "tying myself in knots" on my side.


Anyway, if you keep a margin, it can become much more practical to use,
even though the improvements will diminish.
Meta level diversion:

I think there's a basic misunderstanding of the principle of what I'm
saying. When I'm stating generic facts I'm not advocating any
particular application or use of these facts. I'm just offering
pertinent information. It's then up to each reader to decide what to
do with those facts: ignore them completely, use them fully, use them
only in part, ... etc.

Same for me.
I have! The inexactness of the scanner software environment causes
vastly inaccurate settings.

Well, no, you haven't. You have *stated* so, but not, IMHO,
satisfactorily demonstrated so. Please attack the algorithm I've given
above, demonstrating that it's flawed.

And "it'll take too many test scans to be practical for people" just
doesn't cut it.
That's just not realistic. The clipping settings are on the order of
0.3% to 0.5% and you're bracketing with 20%!?

Sorry, I'm not sure what you mean. What are the clipping settings?
Are you saying that I could safely bracket with less than 20%? That
would be even nicer. As I said, I kept on the safe side.
...



Because it starts with a wrong premise and then goes downhill from
there.

What's the wrong premise?
This is a prime example of tying yourself in knots trying to justify
this wrong premise and trying to solve a problem that doesn't exist.

The "problem" here is that the lowest 2 bits from the OP's scanner can't
be obtained directly.
Please explain why this problem doesn't exist.

(The only explanation I can think of is "more than 8 bits per channel
are useless"; other people would give that explanation, but you'd have
a harder time than them, since I know you wouldn't say that.)
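
To see where the two bits go, consider what a plain 10-to-8 conversion
does to a single sample (an illustrative Python fragment; it assumes
the driver simply truncates, and the bit pattern is made up):

    v = 0b1011010110   # one 10-bit sample from the scanner: 726
    out = v >> 2       # the 8-bit value we actually receive: 181
    lost = v & 0b11    # the two low bits, discarded for good: 2
    # No amount of editing `out` afterwards can reconstruct `lost`;
    # only processing applied *before* the 10->8 step can use it.
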
If one is so concerned with accuracy, one should simply edit in an
external editor and all of the above "problems" disappear.

And two more bits would appear by magic from nothingness? Nice.
Because you explained it yourself:

No, I didn't.

You "..."'ed the part where I explained why this is no explanation to
your "yes, it is because ..." above.

No, no, no, you misunderstand. What I'm referring to is: You object to
the facts but then immediately add in the following sentence that the
facts are actually true. You do this all the time.

Cite please. At most, I think I might have sometimes been too vague
about exactly *which* parts of your facts I consider true, and which
ones I don't.

On the other hand, I suspect that I have later pinpointed that better
in almost every case... since you pointed it out in almost every case.
It's inconsistent. So what I'm saying above has nothing to do with
style but with substance which is contradictory.

I see, but I don't think it is.
What do you mean by "defending" a compromise?


For example, your above attempt at coming up with a convoluted
clipping margin of 20%.

I'm really missing something here. I mean, you can go with much less
than 20% if you like (and I suspect it would work quite well with many
scanners, very rarely forcing you to scan the same picture more than once).

Actually, I *do* use less than 20% with my scanner.
I just chose 20% because the higher, the "safer" (i.e. there's less risk
of having to scan again, even though there's also proportionally less gain).
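
Spelled out with made-up numbers (one plausible reading of the 20%
margin; the exact recipe was never pinned down in this thread):

    bp, wp = 40, 210                  # window measured on the preview
    margin = 0.20 * (wp - bp)         # 20% of the window width: 34.0
    bp = max(0, round(bp - margin))   # rescan black point: 6
    wp = min(255, round(wp + margin)) # rescan white point: 244

The rescan window 6..244 is still usefully narrower than 0..255, so
some gain remains, while small errors in the preview estimate are very
unlikely to cause clipping and force yet another scan.
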
None of the above. It's number 3: "There is a perfect solution already
available (edit in an external editor).

Perfect? No. It doesn't let the OP exploit his scanner's internal 10
bits in any way.

You may argue (and you're doing that, in fact) that "the cure is worse
than the disease" (which I don't think is true, in any case), but you
can hardly argue that the disease is a "perfect solution".
However, I want to use scanner
software to edit. But it's bad. So I will tie myself in knots trying
to work around its inaccuracies - but not succeeding. I'll end up with
worse results than the easily available solution of just using an external
editor. I agree that's the ideal solution,

As I said, I definitely don't, given you have no way to make use of the
lowest two bits in an external editor. Please don't put too many words
in my mouth.
but I will continue to
defend the inferior and inexact attempt at a workaround in scanner
software."
Bah.



Which - as I've shown - is irrelevant because even if he could make
scanner software work at 10 bits internally, due to all the outlined
problems, that would still be worse than using the full complement of
tools on 8-bit data in an external editor.

Yeah, except that I don't think you have shown it, while on the other
hand I do think that I have shown the contrary.
Ah, life, don't we all love it?

But that's simply irrelevant for the reasons I just explained in the
previous paragraph.


You: "The point is he isn't even sure if those two bits are thown away
before scanner software applies any edits."

Me: "I've told you and the OP [..] how to try to find out whether those
two bits are indeed used"

You: "But that's simply irrelevant"

Who's being contradictory now? Is it "the point", or is it "irrelevant"?

Oh, wait, I suppose a point might well be irrelevant -- that's not a
contradiction.
So, ok, I recognize that you found out your own point was irrelevant :-)
What's the point of wasting time trying to find
that out when in the end - even if true (!) - the end result would
still be inferior to editing in an external editor?

Except that it wouldn't.

It just doesn't matter. The only thing that would matter is if he
could actually get those 10 bits out. But he can't.

I wonder why they make scanners that have more bits internally than
there are externally, if they serve no useful purpose.
We know the marketing is a son of the devil, but there's a limit to
everything!
[snip]

One final meta level observation:

If what I write sometimes appears "picky" this could be because the
intentions are misunderstood. The intention is *not* to be picky but
to point out an (objectively) important aspect. I don't expect this will
be important to everyone (subjectively). However, in that case they
can just ignore it. The problem only occurs when they try to use
subjective assertions to counter objective fact.

I mean, the normal reaction to such a statement offering additional
information (which is my only intention) and made in a
non-confrontational, matter-of-fact fashion (i.e. not argumentative,
but simply and calmly stating facts) would be:
A. Thanks, but no thanks! I don't care for such level of detail
B. Huh! I never thought of that! Thanks!

I'm perfectly OK with either reaction.

I see, but my reaction (*in the case at hand, i.e. the OP's*) is
"C. Sorry, but I think better level of detail can be obtained in another
way".

Clearly I could be mistaken, but it's certainly another valid reaction
in principle.

Note the emphasized "in the case at hand", as in *my own* case (which we
discussed in another thread), my reaction is clearly B (though with
additional notes), as my scanner is perfectly able to scan at 16-bit
external if desired.


by LjL
(e-mail address removed)
 
I'm not saying that one should not use an external editor because the
driver-integrated tools are equivalent. They are obviously not, with any
dedicated editor having many more tools.

There you go! So why do you argue against it?
Any flaws are thus not in the *tools*, but in the fact you're applying
them based on a preview -- which is what we are discussing.

I never said the faults were in the scanner editing tools (although
there probably are!). It's the preview "keyhole" that causes you to
make bad decisions. And due to the minimal scanner editor you'll need
to edit again later. Remember the equation: 2 edits > 1 edit.
But if you maintain that it's the *scanner driver tools* themselves that
are inadequate, then please explain how they are. The "main reasons
outlined earlier" simply refer to something else.

No, they refer exactly to the fact that you're basing your decision on
the inadequate preview image and using inferior scanner editor.
Anyway, my "for me" was not really intended as to imply subjective
judgement, although it does probably sound as such.

Yes, it does. And it is. As you will now confirm yourself:
"For me" = "With my own scanner, they're good enough to set decent (i.e.
not-clipping, but still narrower than 0..255) w/b-point values at first
try in the vast majority of cases. Others may find that they'll often
have to make multiple trial scans (perhaps too many for their liking)
to obtain the same result".

There you go! You've just described what "subjective" means!
Because, you see, I maintain that a good result is *always* obtainable:
in the worst case, with a very bad scanner, you'll have to take a number
of "trash scans" before coming out with a good scans.

Good!
Now, yeah, *really* having to do *that* could be labeled as "tying
oneself in knots", I admit: you'd come up with a good result, but only
through such a time-consuming process that, I guess, nobody would do it
in practice.

There you go, again. Confirming what I said: You tie yourself in knots
only to come up with sub-standard results.
1) Take a scan with bp=0, wp=255, using any resolution ("preview")
2) Take a full-resolution scan using bp=preview_bp, wp=preview_wp
3) Repeat step 2 if clipping is found

It's quite simple, isn't it?

No, it isn't! That's the canonical definition of convoluted.
*Please* demonstrate what is flawed in the algorithm above.

I have! Several times. Please re-read the thread.
Now, *that* specific algorithm, not having any safety margin, is a bit
impractical to use, since it is bound to force the user to scan more
than once at full resolution.

There you go!
Sorry, I'm not sure what you mean. What are the clipping settings?
Are you saying that I could safely bracket with less than 20%? That
would be even nicer. As I said, I kept on the safe side.

A safe clipping setting is 0.3% to 0.5%. Using a 20% margin is just
impractical. You'll be left with no dynamic range to speak of.
What's the wrong premise?

That scanner software editing can even be compared to an external
editor. Hint: Read your own words in the very first paragraph on top
of this message!
The "problem" here is that the lowest 2 bits from the OP's scanner can't
be obtained directly.

As I must have explained at least two times already, any potential
"gain" from these 2 bits is more than obliterated by the inexactness of
the process. In other words, an external editor on 8 bits produces
better results than the scanner editor using a preview on (maybe?) 10 bits.
Please explain why this problem doesn't exist.

Again, I must have explained that many times as well. One last time:

The problem is trying to make an inferior scanner software editor
perform like a superior external editor. Since an external editor
exists, trying to replace it with inferior scanner software editing is
creating a problem where none exists. Just use the external editor and
there is no problem.
Cite please.

--- start ---

sorry, I just can't agree. Yeah, they're inaccurate.

First you don't agree and then, on the same line, you agree, and then
you go on not agreeing again.
Well... yes.

When faced with a contradiction you admit it. And then do it again.
I make "your" points because I think they're true.

And yet you then continue to argue against these "true points".
In this case, you're obviously right...

But ...

And then you go on to immediately contradict yourself again.

Etc... etc...

--- end ---

And that's only *some* contradictions in *one* single message!
Need more? ;o)
I wonder why they make scanners that have more bits internally than
there are externally, if they serve no useful purpose.
We know the marketing is a son of the devil, but there's a limit to
everything!

Of course, they serve a purpose. The question is whether they can be
retrieved.

When a manufacturer *claims* a higher internal bit depth but provides no
way of actually getting that data out, I get very suspicious. Either
they are just lying (it's been known to happen) or the data is so bad
they're ashamed to let people see it. Either way, unless I can get
that data out to put it through a rigorous testing procedure, I don't
consider any "claimed" resolution as real.

Like you say, marketroids are devil's spawn! Actually, I'd call them
even worse! :o)

Don.
 
Don said:
On Thu, 20 Oct 2005 22:18:02 +0200, "Lorenzo J. Lucchini"

[snip]
Sorry, I'm not sure what you mean. What are the clipping settings?
Are you saying that I could safely bracket with less than 20%? That
would be even nicer. As I said, I kept on the safe side.

A safe clipping setting is 0.3% to 0.5%. Using a 20% margin is just
impractical. You'll be left with no dynamic range to speak of.

Excuse me, but I'm afraid you misunderstood something here.

I think that by "clipping setting" you mean the percentage of pixels that
are allowed to clip -- which is what software tools (NetPBM for example)
often allow you to set when doing automatic normalization.

This is *not* what my 20% is about, not even related.
Consequently, it makes *no* sense to say that I'd be left with no dynamic
range: obviously, *in the worst case*, I'll end up with the very same scan
that you'll end up with if you "just scan" (bp=0, wp=255, etc, as you
suggest).

In *all* cases, I'm ending up with *no clipping* (*), so I don't quite see
how your 0.3%-0.5% clipping settings numbers could have any bearing.


(*) "no clipping" is entirely the point! since I explained that I'm assuming
a scan is thrown away and taken again if *any* amount of clipping is found
(i.e. if wp is found to be greater than 254, or bp is found to be smaller
than 1)
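
In code, the two notions look like this (a sketch in Python; the
0.3%-0.5% percentile cap is the kind of setting auto-normalization
tools such as those in NetPBM expose, while the hard test is the
criterion from (*) above):

    def fraction_clipped(pixels, point, end):
        # Share of pixels at or beyond a level: the quantity an
        # auto-levels tool lets you cap at around 0.3%-0.5%.
        if end == "black":
            n = sum(1 for p in pixels if p <= point)
        else:
            n = sum(1 for p in pixels if p >= point)
        return n / len(pixels)

    def any_clipping(pixels):
        # The hard criterion from (*): reject the scan outright if
        # even one pixel touches the ends of the 8-bit range.
        return min(pixels) < 1 or max(pixels) > 254

My 20% never enters either test: it only widens the rescan window, so
it cannot eat into dynamic range the way a 20% *clipping* allowance
would.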

by LjL
(e-mail address removed)
 