difficulty drum scanning negatives

  • Thread starter: Jytzel
Hi,

Recently said:
No, however the sampled data is in identity with the subject *after*
it has been correctly filtered at the input stage.
In which case, I disagree with your usage of the term "identity".
This principle is
the entire foundation of the sampling process. No information can
get past the correct input filter which cannot be accurately and
unambiguously captured by the sampling system.

"Accurately and unambiguously" = "No distortion".
The principle is not where the problem lies. It is in the implementation.

From your own response to an earlier post:
"With a drum scanner the spot size (and its shape) is the anti-alias
filter, and the only one that is needed. One of the most useful features
of most drum scanners is that the spot size can be adjusted independently
of the sampling density to obtain the optimum trade-off between resolution
and aliasing..."
^^^^^^^^^^^^^^^^^^^^^^^^^
In another post, you reported:
"then the photomultiplier in the scanner produces a signal which is
proportional to the average illumination over
the area of the spot."

Sounds (and looks) like distortion to me, given that the "area of the
spot" may have more than one illumination level, and the recorded value is
averaged. ;-)
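(Editorially, it is worth noting what that averaging actually does. A toy numerical sketch, with made-up numbers and no particular scanner in mind, shows that averaging over the spot area behaves as a low-pass filter ahead of sampling, passing coarse detail while attenuating detail finer than the spot:)

```python
import numpy as np

def spot_average(signal, spot_width):
    """Convolve with a uniform 'spot' -- averaging over the spot area."""
    kernel = np.ones(spot_width) / spot_width
    return np.convolve(signal, kernel, mode="same")

x = np.arange(200)
coarse = np.sin(2 * np.pi * x / 50)      # detail coarser than the spot
fine = 0.5 * np.sin(2 * np.pi * x / 3)   # detail finer than the spot

filtered = spot_average(coarse + fine, spot_width=9)

# The averaging passes the coarse component almost unchanged while
# strongly attenuating the fine component -- exactly the low-pass
# action an anti-alias input stage requires.
print(np.std(filtered - coarse) < 0.5 * np.std(fine))
```

Whether one calls that "distortion" or "filtering" is, of course, precisely the point being argued.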
If properly filtered prior to sampling then the sampled data is a
*perfect* representation of the filtered subject. In short, there may be
*less* information in the properly sampled and reconstructed subject
than in the original, but there can never be more.
Which only further reinforces my disagreement with your usage of
"identity". I've not heard the term used in such a way that it includes a
"less than" clause. ;-)
However imperfect
reconstruction will result in artefacts and distortion which are not
present in the original subject - false additional information, and
jaggies fall into this category; they are not aliasing artefacts.
I didn't suggest that jaggies are aliasing artifacts. They are clearly
output representation artifacts, as are "lumpies" or other kinds of
distortions dependent on the representation of the pixels identified in
the numeric data resulting from sampling. My claim is that the numeric
data contains various distortions of the subject, and while some may be
assignable to the input filtering (including those you mentioned),
others are assignable to the practical limitations of math operations, and
that these errors are inextricable.
Each sample
represents a measure of the subject at an infinitesimally small point
in space (or an infinitesimally small point in time).
As you present in another post, the issue relevant to the topic appears to
be:
"However, since the grain is random and smaller than the spot size, each
aliased grain only extends over a single pixel in the image - but this can
be many times larger than the actual grain on the original. "

IOW, the measure of the subject is not "infinitesimally small", and by
your own admission, some aspects of the subject (e.g. minimum grain sizes)
can be smaller than the sample size.
Sorry Neil, but that is completely wrong.
Not according to your own posts (as excerpted, above).

I agree with those statements in your posts, even if you don't! ;-)
That, most certainly, is *NOT* a fact! Whilst I am referring to an
interpretation of the sampled data, the correct interpretation does
*not* introduce distortion. You appear to be hung up on the false
notion that every step introduces distortion - it does not.
I see. And, just what kind of system are you using that avoids such
artifacts as rounding errors, for example?
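(The rounding errors mentioned here are trivial to demonstrate in any finite-precision system; a minimal Python sketch, purely illustrative:)

```python
# Finite-precision arithmetic cannot represent most decimal fractions
# exactly, so even a trivial operation carries a small rounding error.
a = 0.1 + 0.2
print(a == 0.3)        # False in IEEE-754 double precision
print(abs(a - 0.3))    # tiny, but not zero

# Repeated operations let such residuals accumulate:
total = sum(0.1 for _ in range(1_000_000))
print(total == 100_000.0)   # False: the accumulated error is non-zero
```

Whether such errors matter in a given imaging pipeline is a separate question from whether they exist.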
No, that is the Red Book specification - I suggest you look it up -
how you get to that sampled data is irrelevant to the discussion on the
reconstruction filter.
Our disagreement boils down to whether artifacts are introduced by
real-world recording processes. The reason that I stressed how _audio_ is
recorded -- as opposed to the burning of the end result onto a CD
master -- is that the first stages of the recording process are somewhat
more analogous to scanning than "recording a CD".

MANY artifacts are introduced because of the lack of, as you have put it,
an adequate input filter. There is not a microphone made that will capture
actual acoustic events due to many factors, not the least of which is that
those events are typically not two dimensional in nature, but the
processes of the capturing devices (microphones) are. The rest of the
recording process is one of manipulation and error correction to create an
acceptable representation of the original acoustic events. I've not run
into anyone "in the biz" that would claim that these two are "in
identity", or that it would be possible to reconstruct the original
acoustic events from the sampled data (recording).

Finally, the process of reducing the recorded data to the 44.1/16 standard
introduces MORE errors by virtue of whether dithering is used, and if so,
which dithering algorithms one chooses. By the time a CD is ready for
purchase, it's much more akin to a painting than a scanned photograph,
which is why I think it was a poor choice as an example for this topic.
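(The dithering step referred to here is easy to sketch numerically. The figures below are illustrative only, a very quiet test tone with TPDF dither spanning plus/minus one quantisation step, and describe no actual mastering chain:)

```python
import numpy as np

rng = np.random.default_rng(0)
STEP = 2.0 / (2 ** 16)          # 16-bit quantisation step for a [-1, 1) signal

def quantize(signal, dither=False):
    s = np.asarray(signal, dtype=float)
    if dither:
        # TPDF dither: sum of two uniform noises, spanning +/- one step
        s = s + (rng.uniform(-0.5, 0.5, s.shape) +
                 rng.uniform(-0.5, 0.5, s.shape)) * STEP
    return np.round(s / STEP) * STEP

t = np.linspace(0.0, 1.0, 44100, endpoint=False)
tone = 0.4 * STEP * np.sin(2 * np.pi * 440 * t)   # below half a step in level

plain = quantize(tone)
dithered = quantize(tone, dither=True)

# Without dither the quiet tone rounds to digital silence; with dither
# it survives, encoded in the statistics of the noise.
print(np.all(plain == 0.0))
print(np.dot(dithered, tone) > 0.0)
```

The choice of dither algorithm changes the noise character, which is exactly why it introduces another point of variation between recordings.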
Of course, this approach assumes that the entire image can be
adequately represented in 3000 or 2000ppi, which may not be the case,
just as many audiophiles clamour for HD-CD media to meet their higher
representation requirements.
And, is in fact, one of the issues at the root of my perspective. ;-)
Your assertion that the sampled data is inherently distorted and that
this inevitably passes into the reproduction is in complete
disagreement with Claude Shannon's 1949 proof. I suggest that you
will need much more backup than a simple statement of disagreement
before many people will take much notice of such an unfounded
allegation.
The crux of the matter is that I'm only interested in what happens in real
world implementations, as film in hand represents just that. I don't have
a problem with the theory, and not only understand it, but agree that *in
theory* the math behind sampling can lack distortion. However, I don't
live in theory, and have little real-world use for theoretical "solutions"
that can't be (or at least, aren't) realized. ;-)

To that end, I think I'll just rely on the results I've been able to
obtain. I, as I presume the OP, am interested in understanding the
limitations of the process. Your own posts have provided excellent bases
for the understanding of such limitations. What puzzles me is that you
don't see the "trade offs" that you spoke of as distortions of the
original subject. What, exactly, are you "trading off" that doesn't result
in a reduction of the available data in the subject?
Good God, I think he's finally got it, Watson! The spot is part of
the input filter of the sampling system, just as the MTF of the
imaging optics are!
I "had it" long before your first posts on the subject. However, I see
every stage of the real-world process as introducing errors, and thus
distortions of the subject.
Indeed these components (optics, spot etc.) can be used without
sampling in the signal path at all, as in conventional analogue TV,
and will result in exactly the same distortions that you are
referring to. If this is not proof that sampling itself does not
introduce an inherent distortion then I do not know what is!
As, to my knowledge, there is no system available that implements perfect
input filtering and flawlessly applies sampling algorithms, all that is
left is to expand my knowledge by being presented with such a system. ;-)
Just in case you haven't noticed, you have in the above statement
made a complete "about-face" from your previous statements - you are
now ascribing the distortions, correctly, to the input filter not the
sampling process itself, which introduces *no* distortion, or the
reconstruction filter which can introduce distortion (e.g. jaggies) if
improperly designed.
I'm not terribly concerned about sampling (e.g. the math) without input
filters (e.g. the implementation). I'm only concerned about systems. So
there's no "about face" involved, we're just interested in different
things, it seems. ;-)

Regards,
 
Neil said:
Hi,


From your own response to an earlier post:
"With a drum scanner the spot size (and its shape) is the anti-alias
filter, and the only one that is needed. One of the most useful features
of most drum scanners is that the spot size can be adjusted independently
of the sampling density to obtain the optimum trade-off between resolution
and aliasing..."
^^^^^^^^^^^^^^^^^^^^^^^^^
In another post, you reported:
"then the photomultiplier in the scanner produces a signal which is
proportional to the average illumination over
the area of the spot."

Sounds (and looks) like distortion to me, given that the "area of the
spot" may have more than one illumination level, and the recorded value is
averaged. ;-)
Ok - I give up! I thought I was discussing the subject with someone who
understood a little of what they were talking about and merely required
some additional explanatory information. That comment indicates that
you simply do not have a clue what you are talking about at all, since
you are clearly incapable of understanding either the action of a
spatial filter or the difference between the action of the filter and
the action of sampling.

Please learn the basics of the topic before wasting people's time with
such drivel.
Which only further reinforces my disagreement with your usage of
"identity". I've not heard the term used in such a way that it includes a
"less than" clause. ;-)
Try *reading*! The identity is with the filtered subject which, having
been filtered is less than the subject!

More obfuscation and/or deliberate misrepresentation!
I didn't suggest that jaggies are aliasing artifacts.

No? I didn't suggest you did, however you did defend the suggestion
made by a third party that they were. Try reading your opening input
into this thread again and stop the obfuscation.
My claim is that the numeric
data contains various distortions of the subject, and while some may be
assignable to the input filtering (including those you mentioned),
others are assignable to the practical limitations of math operations, and
that these errors are inextricable.
And this is precisely where you depart company from the very principles
of the Sampling Theorem, which is hardly surprising given your previous
statements indicating your total confusion of the topic!

Let me explain it one more time, finally. There are two filters, an
input (antialiasing) filter and an output (reconstruction) filter
between which is placed a sampling system. The performance of the
system is totally independent of whether the sampling system is actually
present or not providing that the filters are matched to the dimensions
of the sampling system. In short, it is impossible to determine whether
the information has been sampled or not simply by examining the output
of the reconstruction filter, because the sampling process itself does
not introduce any distortion or limitation of the signal at all.
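(That claim can be checked numerically. The sketch below, with illustrative parameters, samples a band-limited tone above its Nyquist rate and reconstructs values between the sample instants by Whittaker-Shannon sinc interpolation; away from the ends of the finite record, the reconstruction error is tiny:)

```python
import numpy as np

fs = 100.0                          # samples per unit distance (or time)
f = 3.0                             # signal frequency, well below fs / 2
n = np.arange(1000)                 # a finite record of samples
samples = np.sin(2 * np.pi * f * n / fs)

def sinc_reconstruct(t, samples, fs):
    """Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(fs*t - n)."""
    n = np.arange(len(samples))
    return np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])

# Evaluate midway between sample instants, away from the record's ends
# (a finite record truncates the sinc sum, so edges are less accurate).
t_mid = (np.arange(450, 550) + 0.5) / fs
reconstructed = sinc_reconstruct(t_mid, samples, fs)
truth = np.sin(2 * np.pi * f * t_mid)

print(np.max(np.abs(reconstructed - truth)) < 1e-2)
```

The residual error here comes from truncating the infinite sinc sum, not from the sampling step itself, which is the distinction being argued.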

Since you clearly do not understand this fundamental concept on which
the entire science of information technology is based, I suggest you
acquaint yourself in detail with its scientific proof, presented clearly
in Claude Shannon's 1948 paper "A Mathematical Theory of Communication"
and desist from arguing the case against something which is a proven
mathematical fact, as relevant to audio communication as it is to
scanning images.
As you present in another post, the issue relevant to the topic appears to
be:
"However, since the grain is random and smaller than the spot size, each
aliased grain only extends over a single pixel in the image - but this can
be many times larger than the actual grain on the original. "

IOW, the measure of the subject is not "infinitesimally small", and by
your own admission, some aspects of the subject (e.g. minimum grain sizes)
can be smaller than the sample size.
Indeed - and they would only reach the sampling system if the input
filter, in this case the optic MTF and the spot size and shape, permit
them to. With an adequate input filter, the grain is not sampled and
grain aliasing does not occur.
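(A toy numerical sketch of that point, with random "grain" finer than the sample pitch and entirely hypothetical sizes, no real scanner modelled: point sampling passes full grain contrast into the data, where it can only alias, while averaging over a spot comparable to the pitch suppresses the grain before sampling ever happens:)

```python
import numpy as np

rng = np.random.default_rng(1)

fine = rng.normal(0.0, 1.0, 8000)   # random grain, finer than the pitch
pitch = 8                           # one output sample per 8 fine positions

# (a) No input filter: point sampling lets full grain contrast through,
# where it will alias rather than resolve.
point_sampled = fine[::pitch]

# (b) Spot averaging over the pitch acts as the input filter,
# reducing grain contrast by roughly sqrt(pitch).
spot_filtered = fine.reshape(-1, pitch).mean(axis=1)

print(np.std(point_sampled))        # close to 1.0: grain passes through
print(np.std(spot_filtered))        # close to 1/sqrt(8): grain suppressed
print(np.std(spot_filtered) < 0.5 * np.std(point_sampled))
```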

Snipped the rest of this tripe, you really haven't a clue what you are
talking about. Before posting anything else, read up on the topic -
specifically the texts I have suggested. They may not be the most
comprehensive, but they are the most readable explanations of the topic
for even a layman to understand.
 
Recently said:
Ok - I give up! I thought I was discussing the subject with someone
who understood a little of what they were talking about and merely
required some additional explanatory information. That comment
indicates that you simply do not have a clue what you are talking
about at all, since you are clearly incapable of understanding either
the action of a spatial filter or the difference between the action
of the filter and the action of sampling.
What *should* be clear to you is that I have repeatedly stated that I am
referring to real-world implementations, and not simply sampling theory. I
have repeatedly asked you to suggest a system (to make it clearer that is
HARDWARE I'm talking about) capable of performing near the levels of
accuracy that sampling theories implied. Your response is to point once
again at -- usually the same -- theoretical sources, and you've NOT ONCE
indicated the existence of such hardware. If you think that such exists,
that is where we part in our perspectives.
Please learn the basics of the topic before wasting people's time with
such drivel.
In short, this has nothing to do with my capability to understand sampling
theory, and everything to do with what one can actually purchase and/or
use. I tried to emphasize my point by excerpting your own posts,
indicating the limitations typical of such systems. So, if it's drivel,
I'm afraid it didn't originate with me, sir.


Try *reading*! The identity is with the filtered subject which,
having been filtered is less than the subject!
Your statement, that the sampled data is a perfect representation of the
filtered subject is essentially stating that the sampling algorithm has
not altered the post-filter data. On a theoretical level, we are in
agreement about this point; the input filter has presumably restricted the
information to fall within the capabilities of the sampling algorithm to
represent it accurately. More to the point, the only way that I dispute
this is in real-world implementations, e.g. math coprocessor variances
such as the rounding errors I wrote of. Surely, you don't insist that such
impacts are non-existent in real-world systems?
More obfuscation and/or deliberate misrepresentation!
"> No, however the sampled data is in identity with the subject after
it has been correctly filtered at the input stage. "

It is clear in this exchange that you have relocated "the subject" from
being the pre-filter object I inquired about to a post-filtered
representation of that object. I am not now, nor have I ever been
referring to "the subject" as a post-filtered representation of the
object. The distortion I spoke of is the difference between the subject
and the post-filter representation, and in other parts of the exchange,
included the possible accumulation of errors due to hardware
computational limitations. I've never claimed differently. So, where is
the "obfuscation and/or deliberate misrepresentation", beyond your claim
that it exists in this material?
No? I didn't suggest you did, however you did defend the suggestion
made by a third party that they were. Try reading your opening input
into this thread again and stop the obfuscation.
Perhaps you should re-read that opening input again, and stop trying to
misrepresent what I stated. Here it is, for your convenience:

Don wrote:
">> It is"
Your reply:
"> No it isn't!
> Jaggies occur because of inadequately filtered reconstruction systems.
Not because of inadequate sampling! A jagged edge occurs because the
reconstruction of each sample introduces higher spatial frequencies
than the sampled image contains, for example by the use of sharp
square pixels to represent each sample in the image."
My reply:
"While I understand your complaint, I think it is too literal to be useful
in this context. Once a subject has been sampled, the "reconstruction" has
already taken place, and a distortion will be the inevitable result of any
further representation of those samples. This is true for either digital
or analog sampling, btw."

My opening statement, "...I understand your complaint..." is that I am
agreeing with you, but questioning the value of the distinction you are
making. Put plainly, you are referring strictly to the algorithm applied
to post-filtered data. To clarify my response, it is that by the time the
subject (not post-filtered representation of the subject) is sampled, it
is already distorted (by the input filter), and will only be further
distorted by the time of output in a real-world system.

And, directly to the issue of jaggies:
You stated:
"> Aliasing only occurs on the
input to the sampling system - jaggies occur at the output."
My reply was:
"Whether one has "jaggies" or "lumpies" on output will depend on how
pixels
are represented, e.g. as squares or some other shape. However, that really
misses the relevance, doesn't it? That there is a distortion as a result
of sampling, and said distortion will have aliasing which exemplifies
the difficulty of drum scanning negatives, and that appears to be the
point of Don's original assertion. Our elaborations haven't disputed this
basic fact."

Clearly, I am agreeing with YOU that jaggies are output artifacts. My
response elaborates on some possible artifacts that _output devices_ may
introduce. There is NOTHING in my statement that merits your claim that
"...however you did defend the suggestion made by a third party that..."
(jaggies are aliasing artifacts). The remaining content merely questions
whether the points you are making address the OP's question at hand.

At best, my entry recognized the idea that a real-world system, e.g.
scanner as a piece of hardware, not simply the sampling-stage mathematical
operation on post-filtered data, can present the end user with a file that
contains aliasing, and possibly to that end, Don was responding to the OP.
I was not then, and am not now arguing about any aspect of sampling theory
independent of a real-world implementation through existing hardware. Make
no mistake that my choice is not because I don't understand, or have not
read the material.

Your insults aside, the fact is that we're talking apples and oranges. The
problem is, you fail to acknowledge this. If you wish to criticise the
accuracy or relevance of my comments, you'll do so not by pointing at
various sources of sampling theory, but by pointing at the hardware that
performs to the degree of accuracy that such theories imply. To distill
the point of my input to a single sentence: If such hardware existed, the
"trade offs" you spoke of would, in all likelihood, be unnecessary.

Regards,
 
Neil said:
What *should* be clear to you is that I have repeatedly stated that I am
referring to real-world implementations, and not simply sampling theory.

Really - the issues raised in this and other posts do not relate to
specific hardware implementations, but to generic steps in the process.
In particular your insistence that sampling itself, not the filters,
introduces distortions which you have never specified. I have already
mentioned the practical limitations of positional accuracy in real
sampling systems, which are insignificant in modern systems, but you have
yet to divulge what these imaginary distortions are that you think exist in
real practical hardware at the sampling stage.
I
have repeatedly asked you to suggest a system (to make it clearer that is
HARDWARE I'm talking about) capable of performing near the levels of
accuracy that sampling theories implied. Your response is to point once
again at -- usually the same -- theoretical sources, and you've NOT ONCE
indicated the existence of such hardware.

I did, but you were clearly too lost in your own flawed mental model of
the process to notice that I had. I suggest you back up a few posts and
find it.
 
Recently said:
I did, but you were clearly too lost in your own flawed mental model
of the process to notice that I had. I suggest you back up a few
posts and find it.
While I did respond to the various analogies that others presented, I
don't recall presenting a mental model of the process. However, perhaps
you did answer the question, and it's possible that the post you are
referencing above is not available on my news server. The closest response
that I can locate is from our exchange on 4/5:

You wrote:
"The point is that he has already done this - most drum scanner
manufacturers produce equipment capable of the task, unfortunately many
operators are not up to driving them close to perfection - often because
they erroneously believe that such perfection is unobtainable in sampled
data, so why bother at all."

Is your intent to suggest that the only source of grain aliasing in the
resultant file is operator error? If so, the difficulty that I have is in
reconciling such a notion against your own excellent description on 4/1:

There, you wrote in part:
"Part of the skill of the drum scan operator is adjusting the spot or
aperture size to optimally discriminate between the grain and the image
detail for particular film types, however some film types are difficult,
if not impossible to achieve satisfactory discrimination."

It appears to imply that, regardless of operator skill, there will be
cases in which some artifacts are unavoidable. This explanation is one
that I understood to be the case, and directly experienced, at least
decades before this thread began. Perhaps you'll indulge me by clarifying
this, as it is the primary source of any "confusion" that I may have?

Regards,
 
Neil said:
While I did respond to the various analogies that others presented, I
don't recall presenting a mental model of the process.

Your repeated statements that sampling itself introduces distortion are
evidence of a flawed mental model of the process, one which is at direct
odds with the underlying principles of sampling in general.
You wrote:
"The point is that he has already done this - most drum scanner
manufacturers produce equipment capable of the task, unfortunately many
operators are not up to driving them close to perfection - often because
they erroneously believe that such perfection is unobtainable in sampled
data, so why bother at all."

Is your intent to suggest that the only source of grain aliasing in the
resultant file is operator error?

Not at all, many systems are designed in such a way that grain aliasing
cannot be avoided. For example, until recently, this was impossible to
avoid in almost all desktop scanners, and still is in many. Some drum
scanners apparently suffer from a similar problem, specifically that the
aperture shape and size and/or the sampling density cannot be increased
to a sufficient degree to prevent aliasing.
If so, the difficulty that I have is in
reconciling such a notion against your own excellent description on 4/1:

There, you wrote in part:
"Part of the skill of the drum scan operator is adjusting the spot or
aperture size to optimally discriminate between the grain and the image
detail for particular film types, however some film types are difficult,
if not impossible to achieve satisfactory discrimination."

It appears to imply that, regardless of operator skill, there will be
cases in which some artifacts are unavoidable. This explanation is one
that I understood to be the case, and directly experienced, at least
decades before this thread began. Perhaps you'll indulge me by clarifying
this, as it is the primary source of any "confusion" that I may have?
As mentioned above, there are cases where this cannot be avoided,
irrespective of the operator skill, simply due to hardware design
limitations. Also, as previously mentioned there are some films, almost
exclusively monochrome, high contrast, ultrathin emulsions, which are
capable of resolving image detail right up to the spatial frequencies at
which grain structure exists. Had you looked up some of the references
I cited you would have found that this type of case is specifically
addressed, where the image is effectively randomly sampled by the film
grain which is in turn regularly sampled by the scanner system. If
neither loss of image content nor grain aliasing are acceptable then
these films require sampling and input filtering beyond the resolution
of the film itself. The aperture, together with normal optical
diffraction limits, still performs an input filter to the sampling
process, reducing the contrast of the grain to a minimum above the
Nyquist of the sampling density, however the sampling density can easily
reach 12,000ppi or more (true, not interpolated). Few scanners are
capable of this, however, given that the film MTF has fallen
significantly before grain contrast becomes significant, it is still
perfectly feasible to identify an optimum, if less than perfect,
differentiation point in lesser scanners.
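(For reference, the relation between the sampling densities quoted above and their Nyquist limits is simple arithmetic; a quick sketch converting ppi to cycles/mm, the unit usually used for published film resolution figures:)

```python
def nyquist_cycles_per_mm(ppi):
    """Highest spatial frequency representable at a given sampling density."""
    samples_per_mm = ppi / 25.4     # 25.4 mm per inch
    return samples_per_mm / 2.0     # Nyquist limit: half the sampling rate

# 12,000 ppi, the figure quoted above, corresponds to a Nyquist limit
# of roughly 236 cycles/mm; a 2,000 ppi scan tops out near 39 cycles/mm.
print(round(nyquist_cycles_per_mm(12000), 1))   # 236.2
print(round(nyquist_cycles_per_mm(2000), 1))    # 39.4
```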

Such issues are rarely a problem with the much thicker and multilayer
colour emulsions where resolution generally falls off well before grain
contrast becomes significant. Just as importantly the grain itself is
indistinct, having been bleached from the emulsion to leave soft edged
dye clouds, resulting in a slow rise in granular noise as a function of
spatial frequency. Thus the ability to differentiate between resolved
image content and grain is much enhanced and the failure to do so with
adequate equipment is invariably due to operator skill (or interest or
both) limitations.
 
Recently said:
Your repeated statements that sampling itself introduces distortion are
evidence of a flawed mental model of the process, one which is at
direct odds with the underlying principles of sampling in general.
I'm afraid that you are mistaken about my comments re: sampling errors.
Rather than put full quotes here, I'll follow your lead and invite you to
read them again. I've never questioned the integrity of the theoretical
functions involved in sampling, and wrote so more than once. However, I
did state that any real-world implementation of sampling algorithms by
hardware will introduce at least rounding errors due to hardware
limitations. I would not call that a "mental model of the process", in
that it explicitly describes hardware functioning.

All of my other comments regarding distortions (errors, if you prefer)
involved the state of the information about the subject post-input filter,
the issue being GIGO at the sampling stage. Again, this is simply a
description of hardware functioning, and not a mental model of the
process. If you disagree with any of this, please let me know how and why.
Not at all, many systems are designed in such a way that grain
aliasing cannot be avoided. For example, until recently, this was
impossible to avoid in almost all desktop scanners, and still is in
many. Some drum scanners apparently suffer from a similar problem,
specifically that the aperture shape and size and/or the sampling
density cannot be increased to a sufficient degree to prevent
aliasing.
Now, we're getting somewhere. My repeated request was for a reference to a
commonly available machine which has sufficiently high performance
capabilities to reliably avoid grain aliasing with all commonly available
films (obviously, for all subjects and without sacrificing detail or
introducing other artifacts). I am unaware of the existence of such a
scanner, and would appreciate make and model, or a pointer to the site. If
you've already done so, it isn't on my news service.

But, I suspect that we actually agree about this, as you have responded
with:
As mentioned above, there are cases where this cannot be avoided,
irrespective of the operator skill, simply due to hardware design
limitations. Also, as previously mentioned there are some films,
almost exclusively monochrome, high contrast, ultrathin emulsions,
which are capable of resolving image detail right up to the spatial
frequencies at which grain structure exists.

Which is the crux of the problem, is it not? And, it's not news to me.
;-)

Regards,
 
Neil said:
I'm afraid that you are mistaken about my comments re: sampling errors.
Rather than put full quotes here, I'll follow your lead and invite you to
read them again. I've never questioned the integrity of the theoretical
functions involved in sampling, and wrote so more than once.

You wrote:
"Whether one has "jaggies" or "lumpies" on output will depend on how
pixels are represented, e.g. as squares or some other shape. However,
that really misses the relevance, doesn't it? That there is a distortion
as a result of sampling"

In your next post you then wrote:
"However, more to the point, distortion is inextricably inherent in the
sampled data"

And then wrote:
"My claim is that the numeric data contains various distortions of the
subject, and while some may be assignable to the input filtering
(including those you mentioned), others are assignable to the
practical limitations of math operations, and that these errors are
inextricable."

All of these statements, especially the last one, refer quite
specifically to the sampling process, not to the limitations of the
input filter which you specifically address separately in the latter
statement.
Now, we're getting somewhere. My repeated request was for a reference to a
commonly available machine which has sufficiently high performance
capabilities to reliably avoid grain aliasing with all commonly available
films (obviously, for all subjects and without sacrificing detail or
introducing other artifacts). I am unaware of the existence of such a
scanner, and would appreciate make and model, or a pointer to the site. If
you've already done so, it isn't on my news service.
Pick any of the currently available film/flatbed scanners and you will
have in your hands a scanner which does not alias grain.

Look at the Minolta 5400 for a higher resolution scanner which, with
the grain dissolver activated, does not alias grain.

Although not technically a drum scanner, the Imacon 848 provides most of
the related features and will cope with most photographic film without
grain aliasing or resolution loss.

Finally, it's expensive, but the Aztek Premier will do 16000ppi optical
sampling with independent aperture control to get everything off the
highest resolution monochrome film without introducing grain aliasing at
all.
But, I suspect that we actually agree about this, as you have responded
with:


Which is the crux of the problem, is it not?

Not really. Most, if not all, of the people on this forum are
interested in scanning images from colour film where such high
resolution requirements just don't exist.
 
Paul Schmidt said:
What are the best films for scanning say one or two brands/types
in each of these categories:

B&W (what's best old tech, new tech, chromogenic)

I'm partial to the Fuji line. I've settled mostly on Neopan 400 and Neopan
1600. Some Acros 100. (Don't ask me why it's not Neopan 100. I have no
idea. :)

Acros 100:

http://canid.com/sioux_falls/falls_park1.html


Neopan 400:

http://canid.com/johanna/butterfly1.html


Neopan 1600:

http://canid.com/johanna/balancing_act.html
 
Recently said:
You wrote:
"Whether one has "jaggies" or "lumpies" on output will depend on how
pixels are represented, e.g. as squares or some other shape. However,
that really misses the relevance, doesn't it? That there is a
distortion as a result of sampling"
"Jaggies or lumpies" clearly refers to the result post output-filter, as
identified in the first part of the first sentence by the words "on
output". The latter reference of distortion has to do with GIGO, and I
didn't go into detail at that point. I did make it plainly clear in
subsequent posts that I am referring to real-world implementations in
hardware.
In your next post you then wrote:
"However, more to the point, distortion is inextricably inherent in
the sampled data"
It should be obvious that this refers to the state of the information post
input-filter, as that comprises the content of "the sampled data". GIGO,
once again.
And then wrote:
"My claim is that the numeric data contains various distortions of the
subject, and while some may be assignable to the input filtering
(including those you mentioned), but others are assignable to the
practical limitations of math operations, and that these errors are
inextricable."

All of these statements, especially the last one, refer quite
specifically to the sampling process, not to the limitations of the
input filter which you specifically address separately in the latter
statement.
Not really. That "the numeric data contains various distortions of the
subject" directly addresses the end result of all stages up to the point
where that data can be examined -- e.g. post sampling, and post storage.
It in no way isolates the sampling stage, as exemplified by "...some may
be assignable to the input filtering...", while the last portion refers to
the *implementation*, e.g. "practical limitations of math operations", or
put another way, real-world execution of those functions. Unless you have
access to some device the rest of the world has yet to see, this is an
accurate statement.
Pick any of the currently available film/flatbed scanners and you will
have in your hands a scanner which does not alias grain.
However, in the process, they compromise the image in other ways, and as
such do not meet the criteria that I've spelled out, above in "...for all
subjects and without sacrificing detail or introducing other artifacts".
Look at the Minolta 5400 for a higher resolution scanner which, with
the grain dissolver activated, does not alias grain.
Ditto.

Although not technically a drum scanner, the Imacon 848 provides most
of the related features and will cope with most photographic film
without grain aliasing or resolution loss.
"Most photographic film" is not "all commonly available film", which is
another of the criteria from above.
Finally, it's expensive, but the Aztek Premier will do 16000ppi optical
sampling with independent aperture control to get everything off the
highest resolution monochrome film without introducing grain aliasing
at all.
I'll look into this model. Thank you for the reference, even if I remain
skeptical that 16000 ppi is sufficiently high frequency to "get everything
off the highest resolution monochrome film" without any artifacts, at
least it's not flatbed territory or CCD-based.
Not really. Most, if not all of the people on this forum, are
interested in scanning images from colour film where such high
resolution requirements just don't exist.
Definitely not "all of the people on this forum", based on the number of
inquiries related to scanning monochrome negatives. You shouldn't have to
search very deeply to find a significant number of such requests.

Furthermore, there are color films that are also challenging to scan, such
as the Kodachromes. I've gotten much better results from optical
enlargements of those slides. I haven't used NPS 160, as is the case of
the OP, but allow for the possibility that this might be another such
film. Do you know for certain that it isn't?

Regards,
 
Neil Gould said:
Recently, Kennedy McEwen <[email protected]> posted:
<Snipped earlier quotations which can be pulled from the relevant
archives if anyone is interested>
That "the numeric data contains various distortions of the
subject" directly addresses the end result of all stages up to the point
where that data can be examined -- e.g. post sampling, and post storage.
It in no way isolates the sampling stage, as exemplified by "...some may
be assignable to the input filtering...", while the last portion refers to
the *implementation*, e.g. "practical limitations of math operations", or
put another way, real-world execution of those functions. Unless you have
access to some device the rest of the world has yet to see, this is an
accurate statement.

Despite all of your subsequent floundering on this, Neil, your statements
were not clearly referring to sampling "and all previous processes",
especially when you chose to address those earlier processes with separate
criticisms. You may mean what you say but, if so, you certainly haven't
said what you mean!

As to the distortion introduced specifically by sampling, pray explain
*exactly* what you mean. You now specify that you are considering
real-world situations, but no real world, high performance scanner
suffers from relevantly significant distortion at the sampling stage.
Quantisation on almost all drum scanners (not strictly sampling, though
I'll include it for your benefit) is fine enough that its associated
noise is lower than the random noise in the film grain itself and
certainly lower than photon noise required to interrogate it. Spatial
positional errors are at least an order of magnitude less than the
aperture size itself and are thus irrelevant. Where and what,
precisely, is this real world sampling distortion to which you refer?
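The quantisation claim is easy to sanity-check numerically. This is a rough sketch with assumed figures (a 1% RMS grain noise and a 12-bit converter are my illustrative choices, not measured values for any scanner): the quantisation step's RMS error is roughly step/sqrt(12), orders of magnitude below the grain noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# A mid-grey patch with 1% RMS "grain" noise, on a 0..1 scale.
patch = 0.5 + rng.normal(0.0, 0.01, 100_000)

# Quantise to 12 bits (an assumed, plausible converter depth).
levels = 2 ** 12
quantised = np.round(patch * (levels - 1)) / (levels - 1)

grain_rms = patch.std()                  # ~0.01
quant_rms = (quantised - patch).std()    # ~step/sqrt(12) ~ 7e-5

# Quantisation error is buried far below the grain noise itself.
print(grain_rms, quant_rms)
```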
However, in the process, they compromise the image in other ways, and as
such do not meet the criteria that I've spelled out, above in "...for all
subjects and without sacrificing detail or introducing other artifacts".

"Most photographic film" is not "all commonly available film", which is
another of the criteria from above.

I'll look into this model. Thank you for the reference,

As you will note from the above I specifically ramped up the performance
level, at each stage providing a device which addressed the next most
significant point in your list of criteria. It was not my intention
that all of the above would meet all of your criteria, however it
indicates that eliminating grain aliasing does not in itself require a
significantly high performance device.
even if I remain
skeptical that 16000 ppi is sufficiently high frequency to "get everything
off the highest resolution monochrome film" without any artifacts, at
least it's not flatbed territory or CCD-based.
I think you will find that it is capable of much higher sampling if
necessary, however the minimum aperture size is around 3um in diameter
and that defines the limit in terms of the anti-alias spatial filter.
That corresponds to the cut-off of a perfect diffraction limited f/4
lens across the visible spectrum, with considerable contrast reduction
for faster lenses. So even if you can find a film capable of resolving
it, you won't be able to put much information of that scale onto the
film in the first place. Even an optically perfect f/1 lens, and Kodak
Tech Pan developed in fine grain developer will have a composite MTF of
no more than 7% at the critical spatial frequency of that aperture
dimension, and that is just what is present on the film, it doesn't
include any reproduction lens, such as in a projector, enlarger, scanner
or even your own eyes! But you claim you are constraining your comments
to real world situations - well, unless you are prepared to extend your
real world photographic situation into the hard UV or X-ray region then
your concerns about additional information being present on the film are
totally unfounded - and all of this at spatial frequencies below those
at which aliasing, whether of grain or any other artefact you might be
concerned about, becomes an issue!
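The arithmetic behind those figures can be laid out explicitly (my own back-of-envelope check using the standard incoherent diffraction cut-off 1/(lambda*N); the wavelengths are illustrative picks across the visible band):

```python
# Sampling limit and aperture filter of the scanner described above.
MM_PER_INCH = 25.4

nyquist = 16000 / MM_PER_INCH / 2     # cy/mm supported by 16000 ppi
null = 1000.0 / 3.0                   # MTF null of a 3 um boxcar aperture, cy/mm

print(f"16000 ppi Nyquist: {nyquist:.0f} cy/mm")
print(f"3 um aperture MTF null: {null:.0f} cy/mm")

# Incoherent diffraction cut-off of a perfect f/4 lens: 1/(lambda*N).
for wavelength_um in (0.4, 0.55, 0.7):   # blue, green, red
    cutoff = 1000.0 / (wavelength_um * 4.0)
    print(f"f/4 cut-off at {wavelength_um} um: {cutoff:.0f} cy/mm")
```

At the red end of the spectrum the f/4 cut-off (~357 cy/mm) sits just above the 3 um aperture's null (~333 cy/mm) and the 16000 ppi Nyquist (~315 cy/mm), which is consistent with the claim that little image information beyond the aperture's passband can reach the film through real camera optics.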
Definitely not "all of the people on this forum", based on the number of
inquiries related to scanning monochrome negatives. You shouldn't have to
search very deeply to find a significant number of such requests.

Furthermore, there are color films that are also challenging to scan, such
as the Kodachromes. I've gotten much better results from optical
enlargements of those slides.

Kodachrome is essentially a 3 layer dyed monochrome emulsion, the
resolution of which is almost an order of magnitude lower than the best
black and white films! However, retaining in some cases a significant
silver content in the developed emulsion, it also retains a sharp grain
structure with high spatial frequency content. Nevertheless, the film
has no image resolution at all at 16000ppi, whilst a suitably selected
aperture size will totally eliminate grain aliasing at that resolution.
It may be challenging to scan, but it certainly is not a problem for
even moderate professional scanning equipment. Even decent desktop
scanners, such as some I mentioned in my previous post, can pull almost
everything off of Kodachrome without grain aliasing.
I haven't used NPS 160, as is the case of
the OP, but allow for the possibility that this might be another such
film. Do you know for certain that it isn't?
NPS is a relatively high resolution chromogenic colour negative film
which presents no more problems, in terms of granularity, than other
films of the same type. Its limiting resolution is of the order of
80-125cy/mm, around 1/3rd that of the highest resolution traditional B&W
emulsions and should present no problems whatsoever to a good drum
scanner. It is possible to get more information on there than can be
reproduced by 4000ppi scanning, but not much. Consequently, the
Crosland drum machine should be capable of getting virtually all of the
image content off without aliasing at all.
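The "more than 4000ppi can reproduce, but not much" remark checks out with plain arithmetic (no scanner-specific assumptions, just unit conversion against the resolution figures quoted above):

```python
MM_PER_INCH = 25.4

samples_per_mm = 4000 / MM_PER_INCH   # ~157.5 samples/mm
nyquist = samples_per_mm / 2          # ~79 cy/mm reproducible at 4000 ppi

film_limit = (80, 125)                # cy/mm quoted above for NPS
print(f"4000 ppi Nyquist: {nyquist:.1f} cy/mm vs film {film_limit} cy/mm")
```

A 4000 ppi scan's Nyquist limit (~79 cy/mm) falls just below the lower end of the film's quoted 80-125 cy/mm limiting resolution, so only a modest sliver of image content lies beyond it.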

However, as was pointed out right at the start of this thread, negative
film requires contrast stretching when producing positive images -
whether by scanning or conventional chemical printing. That enhances
the visibility of the grain on the film much more than would be apparent
from slide film. That is just a problem with all conventional
photographic colour negative films.
 
Recently said:
Despite all of your subsequent floundering on this Neil your
statements were not clearly referring to sampling "and all previous
processes", especially when you chose to address those earlier
process with separate criticisms. You may mean what you say but, if
so, you certainly haven't said what you mean!
Despite your unbending desire to misrepresent what I wrote, and ignore
clarifications when it suits you, I fail to see how you could possibly not
know what I meant as early as 4/4/04, when I wrote:

"As, to my knowledge, there is no system available that implements perfect
input filtering and flawlessly applies sampling algorithms... "

"The distortion I spoke of is the difference between the subject and the
post-filter representation, and in other parts of the exchange, included
the possible accumulation of errors due to hardware computational
limitations. I've never claimed differently. "

I clarified what I was referring to in terms that I hoped would be
unambiguous as soon as I realized the miscommunication. And, there still
is no evidence of a "mental model" in any of this.
As to the distortion introduced specifically by sampling, pray explain
*exactly* what you mean.
Did that, several times.
You now specify that you are considering
real-world situations, but no real world, high performance scanner
suffers from relevantly significant distortion at the sampling stage.
I made it clear more than once that I was describing hardware limitations,
and not describing "good performance." To be clear, I agree that hardware
errors introduced at the sampling stage are likely to have the least
impact on the image quality, though they may be multiplicative in
subsequent stages. That's very different than saying such errors don't
exist, which is the position you're clinging to, and my *only*
disagreement with you on that point.
However, as was pointed out right at the start of this thread,
negative film requires contrast stretching when producing positive
images - whether by scanning or conventional chemical printing. That
enhances the visibility of the grain on the film much more than would
be apparent from slide film. That is just a problem with all
conventional photographic colour negative films.
What are you calling the artifact represented by "(enhanced)... visibility
of the grain", above? Perhaps that is a point of miscommunication in this
discussion.

Neil
 
Neil said:
Did that, several times.
Nope - I have seen what you have written, and nowhere in it does a
description occur of the distortion that you specifically attribute to
sampling, having already attributed some aspects to the input filter. You have
mentioned aliasing however, as already explained, with an appropriate
input filter, such as a properly selected aperture size and shape on a
drum scanner, no such distortion is created. So what is this distortion
you are so concerned about?
What are you calling the artifact represented by "(enhanced)... visibility
of the grain", above? Perhaps that is a point of miscommunication in this
discussion.
The artifact I am referring to is the grain on the film itself, not
aliased grain or distorted grain, simply the grain itself.

Since an image is recorded on colour negative film in a reduced contrast
form, that recorded image must be contrast enhanced for viewing in a
positive form. The grain on the film is also contrast enhanced by this
process at the same time. By comparison, the image recorded on positive
slide film has full contrast. No negatives ever exhibit as dense images
as are present on slides. The up side of this is that viewing the image
or printing from a positive film image requires no contrast enhancement
of either image or grain. The downside is that the exposure latitude -
the range of luminance which can actually be recorded on the film - is
much less for positive slide film than for negative film, and therefore
much more precise exposure is necessary.

Consequently, images from negatives always exhibit more visible
granularity than those from positive film of the same generic standard.
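That contrast-stretch argument can be put in numbers with a minimal sketch (the 0.6 recording gamma and 1% grain RMS are illustrative assumptions, not measured film data): whatever factor restores the image contrast multiplies the grain noise by the same factor.

```python
import numpy as np

rng = np.random.default_rng(1)

# A uniform patch recorded on negative film at reduced contrast:
# scene tones are compressed, but grain noise is added at full
# strength by the emulsion itself.
scene = np.full(50_000, 0.5)
gamma_neg = 0.6                            # assumed recording contrast
recorded = scene * gamma_neg + rng.normal(0, 0.01, scene.size)

# Printing or scanning to a positive stretches contrast back (~1/0.6)
# and stretches the grain right along with the image.
stretched = recorded / gamma_neg

print(recorded.std(), stretched.std())     # grain RMS grows by ~1/0.6
```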
 
Recently said:
Nope - I have seen what you have written, and nowhere in it does a
description occur of the distortion that you specifically attribute to
sampling, having already attributed some aspects to the input filter.
You have mentioned aliasing however, as already explained,
with an appropriate input filter, such as a properly selected
aperture size and shape on a drum scanner, no such distortion is
created. So what is this distortion you are so concerned about?
You may have seen what I wrote, but it apparently didn't communicate. I
explained in several posts that I was referring to hardware calculation
errors. Those errors are distortions attributable to the sampling stage. I
agreed with you that errors at this stage will have the least impact on
image quality in many, if not most cases. Whether this becomes significant
during operations such as contrast stretching due to the multiplicative
nature of such errors on subsequent stages is an open question. I'm not
concerned about it, just recognizing that such errors exist, and objecting
to your denials of their existence. It's not nearly the big deal that
you've made of it, and certainly doesn't constitute a "mental model" of
any kind.
The artifact I am referring to is the grain on the film itself, not
aliased grain or distorted grain, simply the grain itself.
Since an image is recorded on colour negative film in a reduced
contrast form, that recorded image must be contrast enhanced for
viewing in a positive form. The grain on the film is also contrast
enhanced by this process at the same time.
So, are you saying that the grain is *not* exaggerated (either contrast
range or shape), but accurately represented? If so, this shouldn't be a
problem worthy of conversation. If not, then this can be a problem if it
becomes an issue in prints made from scanned negatives, for example
limiting practical enlargement sizes. Perhaps the same problem that the OP
was trying to describe?

Neil
 
Neil said:
So, are you saying that the grain is *not* exaggerated (either contrast
range or shape), but accurately represented? If so, this shouldn't be a
problem worthy of conversation.

If you go back and read the original post you will see that it certainly
is an issue worthy of conversation.

As you will also note from the first comment I made on this thread, I am
not convinced that the issue raised is *just* one of grain aliasing, in
the main because I know that this is a characteristic of negative film in any
case. However "not just" and "not" are not identities. Whilst, as I
mentioned before, Crosland scanners *should* be capable of recovering
almost all of the information from NPS film without introducing
aliasing, I do not know any more than anyone else writing in this thread
as to how the scan was actually made and whether the scanner was
optimally operated. Therefore grain aliasing cannot be ruled out,
however, even without it, results similar to those reported could be
anticipated.

As I stated in my second post to the thread, the only way to tell for
sure if it is aliased is to compare it to the original, unsampled, slide
or negative - which usually means comparing a full resolution print from
the scanned image to a conventional chemically produced print of the
same size. Differences in granularity between the prints, relative to
the image, would indicate grain aliasing.
 
Recently said:
If you go back and read the original post you will see that it
certainly is an issue worthy of conversation.
Well, no one can claim that it didn't get adequately conversed. ;-)

Sorry, but the early posts on this topic are already scrolled off the
server. However, I didn't get the remotest impression that Jytzel thought
the grain was "accurately represented", and was indeed complaining that it
was not. So, we're actually conversing about the _second_ clause from my
statement (snipped from the excerpt above). ;-)
As you will also note from the first comment I made on this thread, I
am not convinced that the issue raised is *just* one of grain
aliasing, in the main because I know that this is a characteristic of
negative film in any case.
I realize that naming artifacts is something that people generalize about
in every form of media. Many unexpected results having to do with grain
may get called "grain aliasing", when in fact they may be something else
or a combination of artifacts. I don't get hung up by such colloquial
usages.
However "not just" and "not" are not
identities. Whilst, as I mentioned before, Crosland scanners
*should* be capable of recovering almost all of the information from
NPS film without introducing aliasing, I do not know any more than
anyone else writing in this thread as to how the scan was actually
made and whether the scanner was optimally operated. Therefore grain
aliasing cannot be ruled out, however, even without it, results
similar to those reported could be anticipated.
Crossfield scanners aren't the newest kids on the block. It could simply
be that it was out of spec. I'd even suspect that it could be the result
of requesting the wrong resolution scan.
As I stated in my second post to the thread, the only way to tell for
sure if it is aliased is to compare it to the original, unsampled,
slide or negative - which usually means comparing a full resolution
print from the scanned image to a conventional chemically produced
print of the same size. Differences in granularity between the
prints, relative to the image, would indicate grain aliasing.
Ummm... I don't know about this one as an objective measure. Too many
variables. Why not examine the original film under a loupe (or microscope,
if necessary), and compare that to the scanned file? That's what I'd do.

Neil
 
Neil said:
Sorry, but the early posts on this topic are already scrolled off the
server. However, I didn't get the remotest impression that Jytzel thought
the grain was "accurately represented", and was indeed complaining that it
was not.

Well his post is still on this server and I am sure you can read it
again on any of the archives, such as Google Groups, if you want. In
any case, he wasn't complaining that the grain wasn't accurately
represented - he had been told by the scanner operator that it was and
that negatives always show grain more than slides, which is true for the
reasons I have explained. Jytzel was questioning whether this advice
was factual, suspected that the operator was at fault, and wondered how
he could obtain the best results from his negatives.

The likely fact is that he probably already has the best scans possible
from that particular film, given its specification, however he could
try either a chemical print or scanning in a higher performance drum
(such as the Aztek I mentioned earlier or a few others).
Crossfield scanners aren't the newest kids on the block. It could simply
be that it was out of spec. I'd even suspect that it could be the result
of requesting the wrong resolution scan.
Neither are they even close to the best or indeed the most flexible in
terms of operator controls, but they should be capable of pulling almost
everything off of NPS emulsion unless actually faulty.
Ummm... I don't know about this one as an objective measure. Too many
variables. Why not examine the original film under a loupe (or microscope,
if necessary), and compare that to the scanned file? That's what I'd do.
Because the film is negative, with compressed image contrast, on an orange
mask, which means that the perceptual variables will swamp any actual
differences. The only way of making a valid comparison with a scanned
image is to make a continuous (ie. totally unsampled) print of the
highest quality available at the same scale for direct comparison with a
full resolution print from the scan. It would not be necessary to print
the entire image, indeed, for comparison of granularity, it would be
preferable to print only an area of near uniform tone.
 
Thanks to all who responded. I got the scans from the office and they
look horrible. I don't think it's grain, it's noise, noise, noise!
Colors look posterised with no gradation observed. The histogram shows
no abnormalities, however (?). I don't believe the problem is the film;
it's the scan that's bad. If anybody is interested I can send a
portion of the image for viewing.

J
 
Hi,

Recently said:
Because the film is negative, with compressed image contrast, on an
orange mask, which means that the perceptual variables will swamp any
actual differences. The only way of making a valid comparison with a
scanned image is to make a continuous (ie. totally unsampled) print
of the highest quality available at the same scale for direct
comparison with a full resolution print from the scan. It would not
be necessary to print the entire image, indeed, for comparison of
granularity, it would be preferable to print only an area of near
uniform tone.
I understand your concern about the compressed and negative image
contrast. However, I'd be equally concerned about the apparent aliasing
that may be introduced by the reconstructing application; the various
resolutions that can result from different paper choices in the optical
print, etc.

However, for the purpose of establishing grain aliasing, one should only
have to examine the edge of a high-contrast portion of the image under
magnification and compare that to an equivalent zoom magnification of the
digital file (obviously not to the pixel level). If the on-screen profile
matches the optical profile... no aliasing... if they're overly blocky...
aliasing. Why would this method not yield a "valid comparison"?

Neil
 
Recently said:
Thanks to all who responded. I got the scans from the office and they
look horrible. I don't think it's grain, it's noise, noise, noise!
Colors look posterised with no gradation observed. The histogram shows
no abnormalities, however (?). I don't believe the problem is the film;
it's the scan that's bad. If anybody is interested I can send a
portion of the image for viewing.
Out of curiosity... have you made an optical print of the film yet? And,
how are you observing the scans?

Regards,

Neil
 