Best scanning manager program?

  • Thread starter: T. Wise
Hecate,

I see that you and your company do a lot of toil and go to a lot of
trouble in your backup approach. Perhaps someone's brain was bubbling
over with ideas that have the effect of doubling the workload.

For me, as a home network user, high quality tape has the advantages
of simplicity and compactness.

Father Kodak.
 
I also have 8x12 and 13x19 prints from 35mm using Vuescan. And the
quality of these prints has been deemed very good by a panel of
professional photographers. So the conclusion I gather is that
quality can be obtained from Vuescan.

Make that Photoshop...!

So, I'm afraid, you gather wrong. Not to mention total lack of any
specific detail about the panel. The only conclusion we *can* draw is:

Even sub-standard Vuescan output can be made palatable to some people
by heavy editing given a competent Photoshop operator to try and hide
all the damage Vuescan has done.

But be all that as it may, the key point is that the context here is
quality from the *scanner program*, not what happens to the image
afterwards. Even if we assume that the above test is legitimate, such a
test tells more about the pre-press, the printing process, the paper,
the inks, etc. than about Vuescan.

Why hide behind all of that? Why not test Vuescan's output directly?

And the answer is because it's virtually impossible (due to bugs) to
find even two Vuescan versions which produce consistent results!

Don.
 
Failed??? Why, Apple stock is at its highest ever, sales continue to
grow, their bank balance is impressive, and they remain at the
forefront in innovation.

In the real world that "innovation" is known as *stealing*!

Apple hasn't had an original idea since "the Woz" left! At which point
it converted to a *law firm* with the sole purpose of defending its
loot, with a sizeable *marketing* division excelling in "form over
substance" and preoccupied/obsessed with fashion and fluff.
If that's failure, I want some, too!

For a *computer* company (proclaiming to take over the world) now
reduced to making MP3 players, yes, that's massive failure, all right!

MP3 players which they then recalled due to defects. More failure!

Not to mention, they're now working for their declared mortal enemy
and using a processor made by another mortal enemy! Yes, that would
qualify as total failure in pretty much everything they attempted!
Don, all this shows is that you are clueless about Apple and lends
doubt to all your other assertions on other topics.

Not only is that a stretch - one you seem all too eager to make (!) -
but it's based on a totally false premise too.

Speaking of clueless, you seem oblivious that, today, Apple is merely
a minor division of Microsoft in charge of providing a monopoly alibi.

When Apple went bankrupt (at the height of the Microsoft anti-trust
case) it was rescued by Microsoft to the tune of ~300 million dollars.

Jobs, who until then attacked and mocked Microsoft at every turn,
meekly bowed to Bill, said "Yes, Sir! Thank you, Sir!" and came
crawling back... Which puts his past protestations in perspective!
For the record: I use Macintosh, Windows, Linux, and various flavors
of Unix.

As do I. Although, for some of those, the past tense "used" is more apt.

So before you (and others) jump to any wrong conclusions, the above is
*not* aimed so much at Apple but (a lament) at a missed opportunity to
win over the men in gray suits. Instead, Apple squandered it all
because of their small-mindedness, Jobs's monumental ego, etc.

Don.
 
I think what you're still failing to grasp is the difference between a
subjective preference ("I like it" or "looks good to me") and
objective fact (statistics, measurements, data analysis, etc.)
{end quote}

And Don hasn't provided any evidence to suggest that using objective
criteria is preferable to using subjective comparisons when it comes to
evaluating the quality of prints!

That you would state such an assertion shows you're still completely
missing the point, Roger. That just doesn't make any sense at all.
What objective tests have you done of VS against other software
programs in a controlled setting?

You can read all about that if you go back to the archives.

Not to mention endless, *specific* problems posted by *many* Vuescan
users themselves appalled by the results and the many Vuescan bugs!

Again, it's all in the archives.

You're also confusing two things here: Testing and ranking. Vuescan
fails on both counts, of course, but the primary focus here is testing.
The most important thing is whether the subjective conclusions drawn
are informed by facts, evidence and testing, the quality of the
criteria used, and the willingness of the person making the conclusion
to fairly evaluate the evidence. My conclusions are based on my own
testing. I am not an expert in the technical side of photography but
can appreciate differences in the quality of images.

That's not testing. That's *taste*! What you personally *like* has no
relation to objective technical fact!

I'm not denying your right to have a *subjective* preference, of
course, but that subjective preference has absolutely no place in a
discussion based on *objective* fact.
I have extensively tested Vuescan with my slides, negatives, B&W negs,
and with and without a scanhancer to evaluate graininess. I've also
tested prints done with VS against my digital mini-lab, and see no
quality improvement in letting them do the scanning and processing.
Same goes for my friend's 20D - it's good to do tests like these
from time to time to keep pushing to get the best possible scans and
prints.

Those are not *objective tests*! Those are personal preferences!
I participated in this year's scanner bakeoff and my results came out
fairly well for this model of hardware. This test was a mixture of
subjective and objective criteria.

Any time you include "subjective" in a sentence that's not a test but
a beauty contest i.e. a question of *taste*.
It is clear to me that the more meaningful test is a *subjective* one
which looks at results from different platforms and not simply
objective measurements, if the question is "can I get good prints using
Vuescan?"

Of course, that is *not* the question!

It may be the question for you, but it's not a question when the goal
is *independent* and *objective* testing.
This *subjective* comparison was with the same frame under identical
conditions ....
This is my subjective opinion

While I appreciate this comparison between FilmGet and Vuescan (and
the work you put in!!!) and it's definitely a step in the right
direction, unfortunately it still fails because, in the final
analysis, it's subjective.
Anyway, results with Vuescan are not universal given the range of
scanners being more (and less) compatible, so any blanket assertions
about its scan quality are unsupportable.

That's just patently wrong. If a Vuescan bug is universal affecting
*all* scanners (and there are *lots* of such bugs!) then it's not a
blanket assertion but demonstrable, objective fact.

!!! ===> NOTE: Last, but not least, due to Vuescan's notorious lack of
reliability *and* an endless number of versions, any real tests are
only valid for that particular version being put through its paces. A
version later, and all those tests are no longer valid.

While this may hold true for any program, due to Vuescan's *extreme*
volatility (infinite number of versions, infinite number of bugs) it's
been proven repeatedly that (due to its propensity for bugs) there are wild
oscillations even between two subsequent versions. That's not only a
fact but the canonical, cold, objective *definition* of unreliability!

Don.
 
Make that Photoshop...!

So, I'm afraid, you gather wrong. Not to mention total lack of any
specific detail about the panel. The only conclusion we *can* draw is:


I think it's inappropriate to mention the names of these photographers
in this public newsgroup since they are not involved in this
discussion (and probably never visit this newsgroup). Having said
that, the panel includes photographers who routinely publish in
Arizona Highways and National Geographic. Good enough panel for you?

-db-
 
Don said:
When Apple went bankrupt (at the height of the Microsoft anti-trust
case) it was rescued by Microsoft to the tune of ~300 million dollars.

Apple never went bankrupt. Never filed for it. Always had a minimum of
5 billion US dollars in the bank.

Anyway...we are WAY off topic here and we should let this go and get
back to arguing about scanners. Agreed?

-db-
 
Don wrote:
I'm not denying your right to have a *subjective* preference, of
course, but that subjective preference has absolutely no place in a
discussion based on *objective* fact.
[End Don]

Isn't what we're talking about scan quality, which is an assessment of
different factors and ultimately a value judgment about which scan
looks better? Isn't this also how photographs are generally judged - by
how people like them? Or do you grade Henri Cartier-Bresson on
resolution and sharpness and give him a failing grade?

It seems to me that to you "objectivity" is nothing more than a
rhetorical device you used to give your opinions weight they would not
otherwise have. Opinions and analysis are never *objective*, no matter
how much evidence you have to support them and something isn't
absolutely true because you decree it is. I don't think you have much
experience or credible evidence to back your subjective assertions, as
you're limited to old first-hand evidence and second-hand reports from
a variety of sources, some more credible than others.

Of course your conclusions are *cold, objective fact* while mine are
wishy-washy feelings simply because you decree them to be. I find this
very amusing ; ) You must not have any background in the social
sciences, right?
Analysis is subjective and I'm not afraid to say my conclusions and
opinion are subjective because THEY ARE and this is not an admission of
weakness. Your refusal to examine your own subjectivity and
fallibility IS a weakness, hence the claims by others that you spew
hyperbole while pretending that it is absolute fact.

Don Quote:
"Any time you include "subjective" in a sentence that's not a test but
a beauty contest i.e. a question of *taste*."

To extend your analogy: You're participating in a beauty contest but
you have pencils and a slide rule coming out of your shirt pocket and
are wondering why everyone thinks you're in the wrong place. Are you
going to measure beauty by measuring the faces of the contestants?

Anyway, I have no idea what objective scanner tests you are talking
about because you don't mention them in your previous threads, only
subjective tests you have done of Kodachromes where you examine
histograms, and the scanner code and then draw conclusions about them.
What is objective about any of this? I've searched the archives since
the FS4000US's release and Vuescan's support for it, searching for
advice on my scanner and came across many threads with you, Bart, Erik
and others.

If anyone wants to know how I arrived at my conclusions, all you have
to do is ask and I'd be happy to share methodology about any of the
things I posted on before (IR cleaning, scanner noise, various exciting
bugs, IT8 support, cropping, scanhancer and grain, calibration, etc).
I don't think I've arrived at absolute truths but am happy to share
advice based on my experience, as I have ruled out some wrong answers,
which is more than I can say for Don.
 
Hi Neil,
More of my testing has turned up that cropping is fine on my FS4000US
with 8.2.25 in negative mode.

It seems that doing batch scans of slides and manually changing the
crop boundaries cause the problem. Many other film scanners (Nikon IV,
V, etc) wouldn't have this problem as they can't batch scan mounted
slides like the FS4000US. I'm going to do more testing and send the
results to Ed as he asked me for an update.

This feature broke early in the 8.2.xx series, I believe, and hasn't
been fixed, so you wouldn't see it in the bug-fix reports.
 
Recently said:
Hi Neil,
More of my testing has turned up that cropping is fine on my FS4000US
with 8.2.25 in negative mode.

It seems that doing batch scans of slides and manually changing the
crop boundaries cause the problem. Many other film scanners (Nikon
IV, V, etc) wouldn't have this problem as they can't batch scan
mounted slides like the FS4000US. I'm going to do more testing and
send the results to Ed as he asked me for an update.
Interesting... so it would seem that this is really more of a "batch
scanning" issue than a cropping bug? Though I did try it, I don't use VS
for batch scanning, as I much prefer the ScanWizard Pro interface for
that, so it would explain why I haven't seen a problem with cropping.
BTW -- what is the nature of the problem?

Neil
 
I think it's inappropriate to mention the names of these photographers
in this public newsgroup since they are not involved in this
discussion (and probably never visit this newsgroup). Having said
that, the panel includes photographers who routinely publish in
Arizona Highways and National Geographic. Good enough panel for you?

Not if you insist on being vague. Why is it inappropriate to name
them? By participating in a public venture their names are, by
definition, public. Could it be because they never made any such
*explicit* compliments regarding Vuescan?

Even if I grant you it was a top-notch panel when one judges such a
competition the primary focus is image *content*! So, any compliments
you got mean you take good pictures and are adept at Photoshop.

But all that's a distraction.

Instead of fluffing up the panel how about the main point: Did you
present raw Vuescan output or a heavily Photoshopped image?

Answering that puts you on the horns of a dilemma:

1. If it's a heavily edited image then Vuescan's negative effects
would have been well hidden.

2. If it's raw output, then you need a panel of technical people - not
artists (!) - to evaluate such raw data correctly.

So, any way you look at it, your prints do not exonerate Vuescan.

Don.
 
Apple never went bankrupt. Never filed for it.

Microsoft came to the rescue exactly to prevent such a formal filing!!
That was the whole point! But, technically, Apple was bust.

You also successfully navigated around my main point, that this
"successful" *computer* company has been reduced to making MP3 players
and flogging songs on the Internet.
Always had a minimum of 5 billion US dollars in the bank.

Which is totally irrelevant when their debts are several times that
and the creditors are breathing down their neck! Just ask any airline
company currently filing for Chapter 11 in spite of "fat" bank
accounts.

And if Apple was in such a stellar state, why did they jump at - by
comparison - measly 300 mil from their worst and sworn enemy?

Anyway, now that we've established how much I know about Apple does
that clear your "doubt to all my other assertions on other topics"? If
you were consistent you'd now have to reverse yourself and conclude I
was right about those topics as well! ;o)
Anyway...we are WAY off topic here and we should let this go and get
back to arguing about scanners. Agreed?

Well, I was challenged, but now that the record has been set straight
I'd go along with that.

Don.
 
Isn't what we're talking about scan quality, which is an assessment of
different factors and ultimately a value judgment about which scan
looks better?

No, that's exactly your basic misunderstanding and reason for us going
around in circles.
Isn't this also how photographs are generally judged- by
how people like them? Or do you grade Henri Cartier-Bresson on
resolution and sharpness and give him a failing grade.

That only further confirms your misunderstanding. Although technical
proficiency may be a factor it's only a side issue, at best. The
*main* criterion when judging *esthetics* is *image content*. Indeed,
given a superb image with strong emotional and esthetic effect most
critics would overlook minor technical flaws. On the contrary, some
artists may indeed intentionally "play" with such flaws for artistic
effect!
Don Quote:
"Any time you include "subjective" in a sentence that's not a test but
a beauty contest i.e. a question of *taste*."

To extend your analogy: You're participating in a beauty contest

But we are *not* talking about a beauty contest! That's *the* point
you're failing to grasp!
but
you have pencils and a slide rule coming out of your shirt pocket and
are wondering why everyone thinks you're in the wrong place. Are you
going to measure beauty by measuring the faces of the contestants?

Are you misunderstanding what I'm saying intentionally? How you make
such an illogical jump is beyond me!?

A beauty contest is an *esthetic* judgment, not an objective judgment!
That's the point!

By contrast, when asked which car has more horse power, is your
answer: What colors are they? I usually like the red one better!

Come on, Roger, you're starting to become totally incoherent.
Anyway, I have no idea what objective scanner tests you are talking
about because you don't mention them in your previous threads, only
subjective tests you have done of Kodachromes where you examine
histograms, and the scanner code and then draw conclusions about them.

Bingo! Key words: histogram, programming code.
What is objective about any of this?

I'm sorry Roger, but that statement alone shows that you seem to have
a very fundamental misapprehension about how objective testing works.

Don.
 
That reply sort of clarifies things. It seems that we are judging the
program by quite different criteria. I care about the quality of the
final print. Would you mind explaining what you think is important in
a scan, as it's not clear at all. How do you judge the quality of
scans? Just saying "objective testing" isn't an explanation.

If the answer is: "histograms and programming code" (what a let down, I
thought you had some secret scanner testing software I have never heard
of), those are only interesting to me insofar as they lead to quality
prints. VS gives results in photoshop where the histograms have no
gaps or spikes, unlike a program like Filmget which processes the image
at a depth of 8-bits. I have read your posts about needing 16-bit
histograms to see hidden flaws, and about all processing in the scanner
software being "corruption," but unless the scan software introduces
posterization, banding, or other artifacts into my image, I find it
irrelevant to the final quality of the image. The final image to be
printed is 8-bit anyway, so if the resultant 8-bit histogram is smooth,
and the image has no visual corruption upon inspection at full
resolution, who cares about the rest?

Scanners and scan managers are just tools, so if they produce the
desired result, isn't that what counts? It isn't for nothing that I
start with good quality prime lenses on a sturdy tripod with 100 speed
film, scan at the native resolution of the scanner and output to a Fuji
Frontier. If the scanning program were incompetent, it would clearly
be the weak link in the chain that keeps the good inputs from leading
to a correspondingly good output. This isn't the case.

Please make the case for why whatever you mean by objective testing is
preferable to subjective judgement about the quality of the scans.
Simply saying it is inherently superior isn't going to convince anyone
of anything.
 
Roger said:
That reply sort of clarifies things. It seems that we are judging the
program by quite different criteria. I care about the quality of the
final print. Would you mind explaining what you think is important in
a scan, as it's not clear at all. How do you judge the quality of
scans? Just saying "objective testing" isn't an explanation.

I don't want to take sides about VueScan, as I haven't done testing to
check whether what Don says, or whatever is relevant, is true of VueScan
or not.

But let me express some general opinions about "judging criteria". On
this, I basically agree with Don: it's the technicalities that count.
That the final result is pleasing, good or whatever you want to call it
does not, IMHO, say anything about the qualities of the program used -- except
that, of course, when the "technicalities" are very very wrong, the
final result will hardly be too good.
If the answer is: "histograms and programming code" (what a let down, I
thought you had some secret scanner testing software I have never heard
of), those are only interesting to me insofar as they lead to quality
prints. VS gives results in photoshop where the histograms have no
gaps or spikes, unlike a program like Filmget which processes the image
at a depth of 8-bits.

Well, if it scans at 8-bit, that's an obvious limitation of FilmGet.
VueScan scans at 16-bit so there will obviously (*) be more information
in the raw scan, but I see that more as a matter of "input data quality"
than "processing quality".

(*) Of course it's not really obvious, as I have argued with Don a
little about this, as you have read. But let's just assume that
scanners, or at least decent film scanners, do have more than 8 bits per
channel of meaningful information. Under this assumption, of course, a
program like FilmGet that, as you say, only scans at 8 bpc will not even
be considered by someone who wants to get full quality from the scanner.
I have read your posts about needing 16-bit
histograms to see hidden flaws, and about all processing in the scanner
software being "corruption," but unless the scan software introduces
posterization, banding, or other artifacts into my image, I find it
irrelevant to the final quality of the image.

But then you shouldn't even care that FilmGet creates "gaps and spikes"
in the histogram, while VueScan doesn't: it's still "just" the histogram
that we're talking about.

Anyway. Now, Don says that VueScan produces "smooth" histograms because
it deliberately introduces noise.

Let's assume for a moment that it does *not*, and that the smooth
histogram is simply the result of smart processing that minimizes
information loss: what would you prefer at this point, a program like
this, or a program that creates "gaps and spikes"?

And between the "gaps and spikes" creating program and another program
that does not show gaps and spikes because it hides them in noise, what
would you prefer?

Me, I'd prefer the "smart processing" program over the "noise hidden
gaps and spikes" program, and the "noise hidden gaps and spikes"
programs over the "gaps and spikes" program.

That's because the "smart processing" program does what I think it's
supposed to do: is smart enough to discard the least possible amount of
valid information.
The "noise hidden gaps and spikes" program is then definitely worse, but
it's still better than the "gaps and spikes" program, because it tries,
at least, to process the image so that the loss of information is as
invisible as possible.

All of this is *independent* of final image quality: perhaps the three
programs would all give final images that, for me, are indistinguishable
from one another.
But this doesn't matter. Who knows whether, someday, I might want to
crank the levels on those images, or heavily change the gamma, or
whatever? Won't the three images then differentiate? Surely, after a
point, they will; and at that point, having used the "best" program will
pay.
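That "crank the levels later" scenario is easy to sketch numerically. The toy model below (synthetic data, not output from any real scanner) stores the same dark, low-contrast signal at 8 and at 16 bits, then applies the same strong levels stretch to both; the 8-bit version comes out posterized:

```python
import numpy as np

rng = np.random.default_rng(2)
# A dark, low-contrast region occupying only the bottom ~6% of the range:
scene = rng.uniform(0.0, 0.06, size=50_000)

as8 = np.round(scene * 255).astype(np.uint8)      # stored at 8 bits per channel
as16 = np.round(scene * 65535).astype(np.uint16)  # stored at 16 bits per channel

# "Crank the levels": stretch that region to fill the full 8-bit output range.
out8 = np.clip(np.round(as8 / 255 / 0.06 * 255), 0, 255).astype(np.uint8)
out16 = np.clip(np.round(as16 / 65535 / 0.06 * 255), 0, 255).astype(np.uint8)

# The 8-bit source had only ~16 distinct codes to stretch; the 16-bit
# source had ~4000, so its stretched histogram stays dense.
print("distinct output levels from the 8-bit source: ", np.unique(out8).size)
print("distinct output levels from the 16-bit source:", np.unique(out16).size)
```

After the stretch the 8-bit path can only produce about sixteen output levels - exactly the "gaps and spikes" situation under discussion - while the 16-bit path still fills the output range.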
The final image to be
printed is 8-bit anyway, so if the resultant 8-bit histogram is smooth,
and the image has no visual corruption upon inspection at full
resolution, who cares about the rest?

Smooth histogram doesn't mean much unless you *know* its smoothness
comes from valid information.
For example, imagine a really bad scanner, which has a 16-bit A/D but is
so bad that it only carries data in the four most significant bits, and
all the rest is drowned in noise.
A scan from it will show a smooth histogram. So...?

Sure, in such a case, there probably *will* be "visual corruption upon
inspection". But even in a case where there isn't visual corruption,
can you say that it will remain that way upon playing with levels,
applying sharpening, or doing whatever you might fancy doing in the future?
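That thought experiment is simple enough to simulate (purely synthetic numbers; the "scanner" here is imaginary): a 16-bit value whose lower 12 bits are pure noise carries only 16 real levels, yet its histogram shows no gaps at all.

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = rng.integers(0, 16, size=100_000)  # the 4 meaningful bits
noise = rng.integers(0, 4096, size=100_000)      # 12 low bits of pure noise
samples = (true_signal << 12) + noise            # what the "scanner" reports

# Viewed even at 8-bit precision, the histogram is completely filled:
hist = np.bincount(samples >> 8, minlength=256)
print("empty 8-bit histogram bins:", int((hist == 0).sum()))
print("distinct real signal levels:", np.unique(true_signal).size)
```

The histogram looks smooth precisely because the noise dithers the 16 real levels across every bin - smoothness by itself says nothing about how much valid information is there.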
Scanners and scan managers are just tools, so if they produce the
desired result, isn't that what counts?

"Desired"? Desired by whom? I have a thermometer right in front of me.
It shows 22C. Unless it's way off, which it isn't, it produces "the
desired result"; I couldn't possibly want more from it.

But, is this an argument against the production of high-precision
temperature measuring devices?
You know, what's "desired" may differ from person to person, and even
change for the same person, depending on various factors.

This doesn't prevent doing a decent, scientific, objective or
near-objective analysis showing that the high-precision thermometer is
definitely, unarguably better than the one I have.
It isn't for nothing that I
start with good quality prime lenses on a sturdy tripod with 100 speed
film, scan at the native resolution of the scanner and output to a Fuji
Frontier. If the scanning program were incompetent, it would clearly
be the weak link in the chain that keeps the good inputs from leading
to a correspondingly good output. This isn't the case.

Then perhaps Don is wrong about VueScan. This, however, takes nothing
away from the value of scientific testing.
Please make the case for why whatever you mean by objective testing is
preferable to subjective judgement about the quality of the scans.
Simply saying it is inherently superior isn't going to convince anyone
of anything.

Oh well, it convinces *me*.
Since the "subjective" judgement is subjective, I'd rather do the
relevant processing myself, i.e. in Photoshop or something.
I'd like the scanner program to do the "objective" part for me, though,
and do those things like adjusting for film curves, setting exposure
times, even sharpening (if the scanner's exact need of sharpening is
known), in as mathematically exact a way as possible -- and losslessly.


by LjL
(e-mail address removed)
 
That reply sort of clarifies things. It seems that we are judging the
program by quite different criteria.

That's what I've been telling you from the start, Roger!!!
How do you judge the quality of
scans? Just saying "objective testing" isn't an explanation.

Yes, it is. It's also known as the scientific method. Instead of my
repeating it all over again, just review this and other threads.
If the answer is: "histograms and programming code" (what a let down, I
thought you had some secret scanner testing software I have never heard
of), those are only interesting to me insofar as they lead to quality
prints.

What you seem to miss is that the prints will be as good as what goes
in. So, if the scan is bad, the print will be bad. Oh, you may try to
pep it up in Photoshop and hide the warts, but that only goes so far.

That's why the first step, in this case, is to assess how good the
scanning software is. That's because *everything* else flows from there.
If you start with a bad scan, it will dog you from then on.
VS gives results in photoshop where the histograms have no
gaps or spikes, unlike a program like Filmget which processes the image
at a depth of 8-bits.

That only confirms how little you understand, Roger, I'm sorry to say.

Gaps and spikes in a histogram can be good or bad, depending on what's
being done. I don't know if this will help any but here goes anyway...

If you scan raw at gamma 2.2 in 8-bit depth, you *expect* gaps and
spikes! Again, because this is important: You *expect* gaps and
spikes! If you don't get them, that's *bad*!!! Why? Because it means
the scanning software is "massaging" (corrupting!) raw data!

If you scan raw at gamma 1.0 in 8-bit depth (or any bit depth for that
matter) there should be *no* gaps and spikes. If you do get them,
that's *bad*!!! Why? Same as above: Because it means the scanning
software is "massaging" (corrupting!) raw data!
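The "expect gaps and spikes" claim can be checked with a few lines (a simplified model - a plain rounded 1/2.2 gamma map - rather than any particular scanner program's pipeline):

```python
import numpy as np

# Push every possible 8-bit level through a gamma 2.2 curve and re-round
# to 8 bits. The steep shadow end spreads inputs apart (output levels
# that are never produced = gaps); the flat highlight end collapses
# several inputs onto one output level (= spikes).
linear = np.arange(256, dtype=np.float64)
gamma_mapped = np.round(255 * (linear / 255) ** (1 / 2.2)).astype(np.uint8)

counts = np.bincount(gamma_mapped, minlength=256)
gaps = int((counts == 0).sum())    # output levels no input reaches
spikes = int((counts > 1).sum())   # output levels hit by multiple inputs

print("gaps:", gaps)
print("spikes:", spikes)
```

Both counts come out well above zero, so a perfectly smooth histogram after such a transform really would mean the data had been resampled or dithered somewhere along the way.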
I have read your posts about needing 16-bit
histograms to see hidden flaws, and about all processing in the scanner
software being "corruption," but unless the scan software introduces
posterization, banding, or other artifacts into my image, I find it
irrelevant to the final quality of the image.

And that's precisely the problem! You're failing to grasp the
significance and implication of this corruption.
The final image to be
printed is 8-bit anyway, so if the resultant 8-bit histogram is smooth,
and the image has no visual corruption upon inspection at full
resolution, who cares about the rest?

There's the rub. I'm sorry, Roger, but that shows a total lack of even
the most elementary understanding of the process, or anything I've
been telling you all these weeks!
Scanners and scan managers are just tools, so if they produce the
desired result, isn't that what counts?

And I never said any different!!!

If you're happy with (substandard) Vuescan output, more power to
you!!!

But don't try to *pretend* that just because you *like* that
*substandard* output, it's somehow "good quality". It's not!

Not only that, but you're mixing two totally unrelated things:
Objective data and your personal subjective feelings/preferences.
It isn't for nothing that I
start with good quality prime lenses on a sturdy tripod with 100 speed
film, scan at the native resolution of the scanner and output to a Fuji
Frontier.

So you *can* be objective when you want to!!! All those specs are the
result of objective data evaluation! *That's* what I'm talking about!

You would never say: I use disposable plastic lenses, shoot hand-held
while driving on cobblestone roads and output to whatever my 30-minute
corner photo store uses. The result I get is "good quality".

And yet that's *exactly* (!) what you're saying when you keep
insisting how "good" your Vuescan output is!

Vuescan is the equivalent of a disposable plastic lens camera with a
fixed focus, shot while driving on cobblestone roads!
Please make the case for why whatever you mean by objective testing is
preferable to subjective judgement about the quality of the scans.

Because it is *objective*! It's the first step in evaluating anything.
Simply saying it is inherently superior isn't going to convince anyone
of anything.

Well, it convinced everyone else in the world, Roger. It's called the
scientific method and we've been using it for hundreds of years.

Don.
 
Anyway. Now, Don says that VueScan produces "smooth" histograms because
it deliberately introduces noise.

Let's assume for a moment that it does *not*, and that the smooth
histogram is simply the result of smart processing that minimizes
information loss:

Unfortunately, that's just mathematically impossible. Histogram gaps
and spikes are the result of application of gamma. Therefore, a raw
scan at gamma 2.2 in 8-bit producing a smooth histogram *has* lost
data somewhere along the way.
what would you prefer at this point, a program like
this, or a program that creates "gaps and spikes"?

If we want *pure* data, then "gaps and spikes", I'm afraid.
And between the "gaps and spikes" creating program and another program
that does not show gaps and spikes because it hides them in noise, what
would you prefer?

Same as above: If we want pure data it's "gaps and spikes".
Me, I'd prefer the "smart processing" program over the "noise hidden
gaps and spikes" program, and the "noise hidden gaps and spikes"
programs over the "gaps and spikes" program.

The problem is there is no smart processing to remove gaps and spikes
(in the above case) without losing data. It's just mathematically
impossible. Any program that hides gaps and spikes (in the above case)
is just "corrupting" raw data. We may want this corruption, but that's
a different story.

BTW, the easiest way to "remove" gaps and spikes is to interpolate.
For example, increasing image size by a small amount and then reducing
back to original size. This will remove gaps and spikes, and results
in a very smooth histogram. It will also blur the image slightly.
Unfortunately, these gaps will be filled with "imaginary" data.
Smooth histogram doesn't mean much unless you *know* its smoothness
comes from valid information.
Exactly!!!
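The interpolation trick mentioned above can be sketched in one dimension.
This is a hypothetical stand-in for image resizing (a 1-D signal and a
linear `resize` helper of my own invention, not any real scanner code):
a gappy signal is upsampled, then downsampled back, and the interpolated
values fall into the former histogram gaps.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate a gamma-gapped image: only every other 8-bit level is populated.
img = rng.integers(0, 128, size=10000) * 2                  # levels 0, 2, 4, ... 254

def resize(signal, new_len):
    """Linear interpolation to new_len samples (a 1-D stand-in for resizing)."""
    old_x = np.linspace(0.0, 1.0, len(signal))
    new_x = np.linspace(0.0, 1.0, new_len)
    return np.interp(new_x, old_x, signal)

# Up by 50%, then back down: most samples become interpolations of
# interpolations, landing on previously empty levels.
smooth = resize(resize(img.astype(float), 15000), 10000)
smooth = np.clip(smooth.round(), 0, 255).astype(np.uint8)

gaps_before = int(np.sum(np.bincount(img.astype(np.uint8), minlength=256) == 0))
gaps_after = int(np.sum(np.bincount(smooth, minlength=256) == 0))
print(gaps_before, gaps_after)
```

The histogram does come out smoother, but exactly as the text says: the
filled bins hold "imaginary" interpolated data, and the signal is blurred.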

For example, imagine a really bad scanner which has a 16-bit A/D but is
so bad that it only carries data in the four most significant bits,
while all the rest is drowned in noise.
A scan from it will show a smooth histogram. So...?
Exactly!!!
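The hypothetical bad scanner above is easy to simulate. This sketch
(illustrative numbers only, not a model of any real device) puts signal
in the top 4 bits, noise in the bottom 12, and compares the coarse
histograms.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.integers(0, 16, size=100000) << 12             # 4 real bits, shifted to the MSBs
noise = rng.integers(0, 1 << 12, size=100000)               # 12 bits of pure noise
scan = (signal + noise).astype(np.uint16)

# A coarse 256-bin histogram of the noisy "scan" shows no gaps at all...
hist = np.histogram(scan, bins=256, range=(0, 65536))[0]
print(int(np.sum(hist == 0)))                               # 0 empty bins

# ...even though the noise-free signal populates only 16 of the 256 bins.
clean_hist = np.histogram(signal, bins=256, range=(0, 65536))[0]
print(int(np.sum(clean_hist == 0)))                         # 240 empty bins
```

Which is exactly the point: a perfectly smooth histogram is compatible
with a scanner that resolves almost nothing.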


"Desired"? Desired by whom? I have a thermometer right in front of me.
It shows me there are 22C. Unless it's way off, which it isn't, it
produces "the desired result", I couldn't possibly want more from it.

That's a good example. Roger is basically saying:

I want 25C! So I painted a picture of a thermometer showing 25C.
That's much better than a real thermometer which shows different
temperatures. My picture of a thermometer *always* shows 25C. That's
much better "quality" than the real thermometer which keeps changing.
But, is this an argument against the production of high-precision
temperature measuring devices?
You know, what's "desired" may differ from person to person, and even
change for the same person, depending on various factors.

This doesn't prevent doing a decent, scientific, objective or
near-objective analysis showing that the high-precision thermometer is
definitely, unarguably better than the one I have.
Exactly!!!


Oh well, it convinces *me*.

And everybody else for several hundred years. Ever since we stopped
using alchemy.
Since the "subjective" judgement is subjective, I'd rather do that
processing myself, i.e. in Photoshop or something.
I'd like the scanner program to do the "objective" part for me, though,
and do things like adjusting for film curves, setting exposure times,
even sharpening (if the scanner's exact need of sharpening is known),
in as mathematically exact a way as possible -- and losslessly.

Bingo!

Don.
 
LjL, I basically agree with you.

Your quote: "Sure, in such a case, there probably *will* be "visual
corruption upon inspection". But even in a case where there isn't
visual corruption, can you say that it will remain that way upon
playing with levels, applying sharpening, or doing whatever you might
fancy doing in the future?"

When I talked about having a smooth 8-bit histogram, that's for output
AFTER extensive correction with curves, selective color adjustments,
and of course sharpening in Photoshop. I have worked on files from
different sources (consumer digital cameras, flatbed scans) and found
that similar corrections gave visible problems, like posterization. VS
gives me source files from negatives that are close to what is on the
film, and that are robust enough to stand up to some real editing.
That's the goal of the scan for me: to give me a solid starting point
that sets me up for a quality final print. I do all of the
"subjective" processing (color, grain reduction, sharpness, levels) and
making the image look the way I want in Photoshop; Vuescan is just to
get an original that looks like the negative or slide.

LjL, what do you mean by scientific or objective testing and how do you
know Don has done any?
 
Just to clarify, Filmget and Vuescan both scan at the scanner's maximum
bit depth. Filmget doesn't do a RAW gamma 1.0 scan as far as I am
aware. I have concluded that if any color corrections are enabled in
Filmget, they are performed at 8-bit precision, judging by the gaps in
the resultant histogram for the 16-bit file. Both output files are 16
bit and Vuescan shows no such gaps or spikes for the exact same file,
which led me to the conclusion I stated before. I did this testing
when I did the IR testing a few days ago, and this is the sort of thing
I've been doing to come to my conclusions about Vuescan.
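The kind of check described above can be simulated. This sketch does not
model Filmget itself; it only shows the general diagnostic: if a "16-bit"
file was really processed at 8-bit precision, its fine-grained histogram
collapses into a comb of at most 256 populated levels.

```python
import numpy as np

rng = np.random.default_rng(3)
true_16bit = rng.integers(0, 65536, size=50000)             # genuine 16-bit data

# Simulate 8-bit internal processing followed by re-expansion to 16 bits.
via_8bit = (true_16bit >> 8) << 8

distinct_true = len(np.unique(true_16bit))
distinct_8bit = len(np.unique(via_8bit))
print(distinct_true, distinct_8bit)                         # tens of thousands vs. at most 256
```

Counting the distinct levels (or, equivalently, the empty bins between
the comb's teeth) is an objective way to tell the two pipelines apart.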

So if you accept that this is "objective," it still doesn't answer
which program gave me a better file, because of the visual flaws I
documented with Filmget's IR cleaning. I don't know of any objective
tool that measures noise patterns, so how can you objectively judge
this?
I want to know from you Don, what are the limitations to what you can
measure, if you throw out any sort of visual judgment as subjective?
How do you even conclude there is noise without visual judgment? How
do you distinguish scanner hardware noise from noise introduced by the
scanning software? Let's be scientific and rule out some of the
possible explanations here. Science also relies heavily on controlled
observations that are made public and that are repeatable. I am happy
to offer up scans as evidence for visual analysis from others, but
let's not throw out an important tool like eyesight.

Don, tell me what objective criteria you use to measure scans. I've
tried to be quite frank about the criteria I use, can't you do the
same?

By the way, I agree completely with this statement and that's why I use
Vuescan: "What you seem to miss is that the prints will be as good as
what goes in. So, if the scan is bad, the print will be bad."
 
Don said:
Unfortunately, that's just mathematically impossible. Histogram gaps
and spikes are the result of application of gamma. Therefore, a raw
scan at gamma 2.2 in 8-bit producing a smooth histogram *has* lost
data somewhere along the way.

You're right of course. But what I was thinking of is a program that
makes the histogram "as smooth as possible", without using "tricks" like
noise.

For example, take a program that applies gamma 2.2 to a raw scan:
certainly, the resulting histogram will be better if the program just
applies gamma 2.2, than if it iterates the application of multiple,
smaller gammas.

OK, this is a stupid example, no remotely sane program would apply gamma
like that; but still (since there is not only gamma in the real world,
but there are also other things that need to be done to the image), a
program that applies transformations "smartly" has an edge over one that
applies them naively.

Often, it's simply the *order* in which you apply the transformations
that makes a difference!
[snip]
And between the "gaps and spikes" creating program and another program
that does not show gaps and spikes because it hides them in noise, what
would you prefer?

Same as above: If we want pure data it's "gaps and spikes".

As I said previously, I don't quite agree on this, *as long as* the
noise is *only* applied so that it fills in the gaps, *without* touching
the existing data.

That's because this noise addition is lossless, and the "original" image
without noise can be precisely reconstructed, as long as one also has
its histogram.

It's not "pure data", that's for sure, but the "pure data" can be
reconstructed (with the "save the histogram" caveat), and the result
tends to be more pleasing to the eye, since it hides posterization.
The problem is there is no smart processing to remove gaps and spikes
(in the above case) without losing data. It's just mathematically
impossible. Any program that hides gaps and spikes (in the above case)
is just "corrupting" raw data. We may want this corruption, but that's
a different story.

Indeed we may.
And you're not wrong saying that it is mathematically impossible;
however, the mathematical impossibility no longer holds if we assume
that the histogram of the original image is not discarded. If it's
kept -- and the noise is applied in the correct way, i.e. *only*
filling the gaps -- there should be no loss of information at all.
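One possible reading of this reversible scheme can be sketched as
follows. This is my own illustrative construction, not any program's
actual algorithm: each pixel is dithered by at most one level into the
empty bins, the set of originally populated levels is kept on the side,
and reconstruction snaps every value back to the nearest populated level.

```python
import numpy as np

rng = np.random.default_rng(2)
img = (rng.integers(0, 64, size=5000) * 4).astype(np.uint8)  # gappy: levels 0, 4, ... 252

orig_hist = np.bincount(img, minlength=256)
populated = np.flatnonzero(orig_hist)          # the levels that actually occur

# Dither each pixel by up to +/-1 level: small enough that every result
# stays closer to its source level than to any other populated level.
noisy = img.astype(int) + rng.integers(-1, 2, size=img.size)
noisy = np.clip(noisy, 0, 255).astype(np.uint8)

# Reconstruction: snap every value to the nearest populated level.
idx = np.searchsorted(populated, noisy)
idx = np.clip(idx, 1, len(populated) - 1)
left, right = populated[idx - 1], populated[idx]
restored = np.where(noisy - left <= right - noisy, left, right).astype(np.uint8)

print(bool(np.array_equal(restored, img)))     # True: the dither is reversible
```

The caveat from the text applies: the list of populated levels (i.e. the
original histogram) must travel with the image, and the dither amplitude
must stay smaller than half the gap width, or the snapping becomes
ambiguous and the losslessness is gone.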
[snip]
Smooth histogram doesn't mean much unless you *know* its smoothness
comes from valid information.

Exactly!!!

But a "smart" application of noise to smooth the histogram, like I
described above, would be welcome IMHO.
Of course, it would have to be done correctly, and should be an
user-selectable option (tooltip: "Helps hiding excessive posterization;
a histogram taken before application must be kept for this operation to
be reversible"), that goes without saying.

by LjL
(e-mail address removed)
 
Hecate said:
Incidentally, we used to use CD and are currently copying all our CDs
to DVD.

I've read that DVDs are not as archival as CDs. Delkin Devices
markets a CD, "eFilm Archival Gold," with the subtitle "The 300 Year
Disc." I'm converting my archival CDs to this brand. It's more
expensive, but I figure my images are worth it. Any comments?

BTW, the OP asked about scanner managers & we've long ago wandered into
other territories (dual processors?), which good conversations
naturally do. But, at some point, when the conversation shifts to a
different topic (even a different group?) the moderator, or someone,
should open a new topic in this or a different group and
cross-reference to the new topic/group.
 