Multi-sampling and "2400x4800 dpi" scanners

  • Thread starter: ljlbox
Good afternoon,

I really am enjoying your selective snipping, so without further delay, on with the
show . . . .

Kennedy said:
I have no idea, and care less, what your particular bent or limitation
is, although your comments betray lack of any scientific or
instrumentation design knowledge. I assume you have some photographic
knowledge, and as a consequence some experience of using commercial
scanner systems. Suffice to say that I have spent over 25 years in the
electro-optic imaging industry . . . . .

And yet not one imaging publication pays you to write. So instead you come to
Usenet and try to gather fans. Interesting behaviour. When will I see your writings
in Imaging Technologies, Electronic Publishing, Photo Techniques, or Advanced
Imaging magazines?

As if you would care, my expertise is commercial imaging and commercial printing.
One does not need to know how to construct a camera, printer, scanner, or computer
in order to use them. It is possible to test devices without tearing them apart,
and many people have done just that, and written extensively about that.
. . . . . . . . . .

No, you are wriggling again! Your initial comment made no statement
about optics

How could you not account for the optics? Also, I suggest you read again, since I
have mentioned optics many times. Maybe if you did not snip so much, you could
actually remember that.
- this was, according to you, the maximum that a 10200 cell
linear array could resolve, and it is as wrong now as it was then -
despite a feeble attempt to invoke optics at the last minute!

The best system using such an array only does an actual 3400 dpi, though the
interpolated resolution can be higher.
. . . . . . .


There's the rub, bozo

Resorting to name calling just makes you look bad, and I would have expected better
from you.
- I am part of that industry and have been for two and
a half decades and these figures are trivial to derive from basic design
criteria and tolerancing.

So you never actually test these devices? You only do calculations?
The MTF of your example Kodak array is around 60% at Nyquist, depending
on the clock rate. The MTF of a suitable optic can easily exceed 70% at
the same resolution. If you are measuring much less than 35% contrast
at 1200ppi on an 8.5" scan from this device then you really need to be
re-examining your optical layout, because it certainly isn't high
performance. As for the optical MTF at your claimed 3400ppi limit for
the device: it should readily exceed 90% and thus has little effect at
all on the performance of the device.


On the contrary, it shows you have no idea what you are talking about.
Name ONE (even an obsolete example) Kodak trilinear CCD with 10200 total
cells in each line which had an optical resolution of only 3400ppi when
optically scaled to an 8.5" scan width.

Optically scaled, now there is the rub. ;-)
See how it is not possible to talk about the imager without mentioning the optics.
So I mention the optics and you attempt to slam me for it . . . it should be
assumed at the start that a flat scanner has optics in place; it should not even
need to be stated.
You really are talking
absurdities! Even directly at the focal plane itself, the KLI-10203
device is capable of 3600 samples per inch with an MTF of approximately
60% at that resolution (and I remind you that your allegation was not
specific to this device with its particular pixel size, but to all 10200
element linear arrays!).

Approximately 72 mm length of a row with 10200 elements gives about 141 samples per
millimetre. Convert to inches and it is about 3600 samples per inch, imagine that.
So how would you arrange the carriage and optics to get that many samples? Hint:
this is done in two scanners using that imager.
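Just to show the working, the arithmetic above can be checked in a few lines (a quick sketch; the 72 mm active length is the approximation used above):

```python
# Sanity check of the sampling arithmetic above: 10200 cells spread
# over roughly 72 mm of active line length.
elements = 10200
length_mm = 72.0                            # approximate active length

samples_per_mm = elements / length_mm       # about 141 samples/mm
samples_per_inch = samples_per_mm * 25.4    # about 3600 samples/inch

print(int(samples_per_mm))      # 141
print(round(samples_per_inch))  # 3598
```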

My statement used the term "best", but if you want to think that is "all", then go
ahead. The reason I stated that was to give the OP a sense that his scanner might
have an actual resolution much lower than the best scanners now on the market. That
is despite the resolution probably stated by his scanner's manufacturer, or the 2400 by
4800 as he originally posted. I seriously doubt his scanner achieves an actual
optical resolution of 2400 dpi, though I don't doubt his file size stated that
dimension.
It certainly does in terms of "dpi", "ppi" parameters that you have been
quoting. These terms define the SAMPLING RESOLUTION!

The size in microns of the cells has more effect on the resolution, though the
optics in the scanner can still be a greater limit. Some scanner optics are only
good for around 40 to 50 lp/mm, even in some high end systems. A few are more
capable.
I suggest you learn something about imaging system design before making
yourself look even more stupid than you already do. First lesson should
be what defines optical resolution and what units it is measured in.
Clue: you haven't mentioned them once yet!

Nothing wrong with assuming people already understand cycles per millimetre, or
line pairs per millimetre. No wonder your posts drone on for hours, so many
disclaimers and definitions . . . are you this boring in person too . . . . . .
;-)

I mentioned it several times: all the components affect the system resolution.
Start with a great imager chip, then throw a crap lens in front of it; or try using
a poor stepper motor. We can equate, or reference dpi and ppi with lp/mm or cy/mm,
though there are some basics that should be stated. Even in commercial printing,
with image setters running 2400 dpi or 2540 dpi, the size of the dots is different,
and in some systems can be variable. When we look at file sizes in ppi, the size of
the image on the monitor can differ on monitors with differing display resolution;
put an image at 100% size on one monitor, then compare it to 100% size on a
different monitor, and that image can appear physically larger on one than the
other. A pixel can even vary in size on an imaging chip, though we do see that
expressed as the micron size on the chip. Some of the Nikon D-SLRs, and even some
video systems, used non-square pixels. We often assume square pixels, but in
reality non-square pixels exist. We can also assume square dots or spots in
printing, though due to many factors (dot gain, ink properties, paper properties,
et al) the actual dots or spots can end up more rounded.
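Since dpi/ppi and lp/mm keep coming up, here is the usual rule-of-thumb conversion between the two, assuming two pixels per line pair (the Nyquist condition). A sketch for illustration, not vendor data:

```python
# Rule-of-thumb conversions between sampling resolution (ppi) and
# spatial frequency (lp/mm), assuming two pixels per line pair
# (the Nyquist condition). Illustration only.
MM_PER_INCH = 25.4

def ppi_to_lpmm(ppi):
    # one line pair needs two samples, so divide by 2 * 25.4
    return ppi / (2 * MM_PER_INCH)

def lpmm_to_ppi(lpmm):
    return lpmm * 2 * MM_PER_INCH

print(round(ppi_to_lpmm(2400), 1))  # 47.2 lp/mm for a true 2400 ppi scan
print(round(lpmm_to_ppi(45)))       # 2286 ppi equivalent of 45 lp/mm optics
```

On these figures, optics good for only 40 to 50 lp/mm cap a scanner at roughly 2000 to 2500 ppi, whatever the file size claims.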
You really don't have a clue, do you? How many swathes does this Rolls
Royce of scanners make to achieve 5600ppi on a 12" scan width with only
8000 pixels in each line? Perhaps you dropped a zero, or misunderstood
the numbers or just lied.

No misunderstanding. The optics and CCD carriage move in two dimensions to cover
the entire bed. This is different from a "pass" technique of scanning used in some
devices. The 8000 element tri-linear CCD has 5 µm cell sites, though the movement
of components is necessary to achieve the very high true optical resolution
possible. The line is not 12", since the imaging chip is closer to 72 mm in size.
Feel free to have a test scan done by the manufacturer, don't just take my word for
it. If you have ever read anything I have written, you should know that I encourage
people to investigate more, and learn more. There would be no benefit for me to lie
about this, but it makes quite a statement about you to accuse me of doing that.
You would build a scanner from such a detector without an imaging optic
to project the flatbed onto the focal plane? And you *STILL* claim you
know what you are talking about? You really are stretching credulity to
extremes now.

Look up XY scan and XY stitch technologies. Components need to move to take
advantage of the imager, but patents on technologies limit the approaches of how
this is accomplished. That you have not used such a scanner does not invalidate the
fact that they exist. Again, I suggest having the manufacturers do test scans for
you.
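A back-of-envelope sketch of why the XY movement matters: with the 8000 pixel line and 12 inch width quoted in this thread, a single swathe cannot exceed roughly 667 ppi, so reaching 5600 ppi requires stitching several side-by-side swathes. The swathe count below is my own estimate for illustration, not a manufacturer figure:

```python
import math

# Figures quoted in the thread: an 8000-pixel tri-linear line, a 12"
# bed, and 5600 ppi claimed via XY stitching.
pixels_per_line = 8000
scan_width_in = 12.0
target_ppi = 5600

# A single swathe spreads one line of cells across the full width.
single_swathe_ppi = pixels_per_line / scan_width_in   # ~667 ppi

# Rough swathe count needed to cover the bed at the target pitch
# (an illustration only, not a vendor specification).
swathes = math.ceil(target_ppi * scan_width_in / pixels_per_line)

print(round(single_swathe_ppi))   # 667
print(swathes)                    # 9
```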
In which case it would be unable to yield much more than 800ppi in a
single swathe at that width!

So figure it out . . . how can they possibly do better? If you are as smart as you
think you are, then it should be easy for you to solve this one, otherwise you have
become complacent in your 25 years in the "business".
I fail to see how I could have missed this point when I specifically
made reference to the condition of a single pass in the sentence you
have quoted above!

So since you don't understand how that works (two dimensional movement), you want
to restrict the discussion to your parameters . . . amazing. :-)

The entire point of mentioning high end systems is to point out that lower spec
systems are less capable. That the single pass scan is a further limit should be
obvious to many people reading this. My feeling is that when people are aware of
what is out there being used at the top, they might get a better understanding of
the low to mid range gear they can afford. View Camera magazine had a nice test of
some scanners a few months ago, including some lower priced flat bed scanners.
Those tests are just one source showing that claimed resolution and file sizes do
not equate to actual optical resolution in low and mid range scanners. The OP's 2400
by 4800 is very likely much less than that in true optical resolution; so why jump
through all those hoops for no gain in resolution? My feeling is that
he should either be happy with what he has, or save his efforts and money to get a
better scanner. Unfortunately, if all I did was type one paragraph like this in my
original reply to him, he would have been left with questions. Maybe between both
of us he will get enough to go do some of his own investigation and come to his own
conclusions.
Since the scanner under discussion on this thread is a single pass
scanner, and the OP is specifically interested in what he can achieve in
that single pass, I see no need to extend the explanation to swathe
equipment.

So why did you see the need to encourage him to spend time on "improving" his
scanner when you knew it was limited? I would think the OP has better things to do
with his time than waste efforts on a scanner already giving its best.
Incredible. Not only because even cheap scanners now achieve better
than this . . . .

Bullshit. Testing by several different individuals, and writings in many
publications, indicate that the stated "resolution" in these low end systems is
merely the file size, not the true optical resolution. Perhaps you should try
reading actual test reports, rather than manufacturer brochures. Even Epson have
one high spec system where they state a capability close to the true optical resolution, yet they
also sell cheaper systems that claim more resolution . . . so why would imaging
professionals, pre-press specialists, and service bureaux use the higher cost and
seemingly lower spec Epson when choosing Epson gear . . . quite simply the
Expression 10000XL is a better and more capable scanner than the Perfection 4870 in
true optical resolution.

It is a shame that there is not some standard for stating specifications enforced
upon manufacturers. We would all do much better with actual performance numbers,
rather than calculated or theoretical. This happened previously with CRTs when the
diagonal was stated, and then manufacturers were compelled by regulations to state
the "viewing area". I think this should happen with scanners, though somehow I
doubt we will see a change in the industry.
, but because neither "ppi" nor "dpi" is an appropriate
measurement unit for "optical resolution" in the first place!

Would you rather I use lp/mm, since we are discussing scanning of photos? I think
more readers would understand dpi and ppi than lp/mm or cy/mm, or other things like
lines per height, or even just simply lines. Perhaps photosites per inch is more to
your liking . . . . . MTF at Nyquist . . . . Rayleigh limit instead of Nyquist limit
. . . . . I think it is better to just test the device as configured and use
terms other people understand, and I think many here understand dpi and ppi.
I didn't suggest otherwise - a simple optic with a single pass scan.
That is what we are discussing in this thread. You are the one bringing
in additional complications to justify your original mistaken advice to
the OP.

I bring in the "additional complications" to point out that there are better
systems out there. If people understand those better systems, they might get a more
realistic sense of performance and capability in their lesser systems. Nothing
wrong with being on a budget for scanners, especially if it is not for generating
income, but one should not expect more than the budget scanner can accomplish, and
one would do better to not bemoan the scanner when performance is less than they
expected. This is performance for the dollar; you pay heavily to get the best, and
few people want (or need) to spend that much. When people just read the brochure
and think their under $1000 scanner is better than one over $10000, I think it can
help to understand a little of why that is simply not true (despite the
specifications stated by the lower cost scanner's manufacturer).
I suggest you look up the original patents for this "microscan"
technology - you will find a familiar name in the inventors - and it was
well before 1999 - although that could be around the time that the
original patents expired. Even so, as the inventor of aspects of that
particular technology, I can assure you that diffraction is still the
limit of all optics.

Sure, fairly common knowledge when doing image analysis. However, the technology I
mostly referenced, at least with the Creo scanners, is based on a separate patent.
That patent is held by Creo, though since the buyout from Kodak I would guess that
Kodak now controls those patents.

It is a loose association, and my oversimplified original statement did not contain
the ten more paragraphs of information that may have satisfied those few
individuals starving for explicit details.
. . . . . . . . .


No you didn't, you said "OK Maybe I should have stated that better".
That does not, under any circumstances, amount to either an apology or
an admission of being incorrect, let alone both.

Here's a suggestion: pull up the past posts, then do a search for the word
"apologize", and you will find it. If you want to ignore it, then that states a
great deal about your character. I would doubt that you would ever state that you
were incorrect, though we can all imagine why that might be true.
No, I browse a post first to capture the gist of the message and then
respond to the specific lines I quote.

Certainly did not seem that way . . . glad to know you are at least "browsing".
;-)
Or just about any consumer grade flatbed scanner in that class of the
market these days.

Sure . . . look at that, we agree on something. :-)
And what does that have to do with your allegation that they contain
Sony CCDs?

The term "I would suspect" means that was an assumption. English is my second
language, so if I was incorrect in that usage, feel free to correct me. The nature
and tone of your response is why I replied in the above manner; had you simply
stated what CCDs were in a Nikon scanner I would have replied differently.
Unfortunately, you chose another method of response, which again states more about
your personality.
You are like a child pissing up a wall.

Which again states more about your personality. ;-)
Did you actually read what was written, Bozo?

Name calling shows your level of emotional maturity, but it does not bother me so
if you feel you must continue with that, enjoy yourself. ;-)
Why are you still asking
about LEDs?

Why did you not answer the question? You could simply state which high end scanning
system, or which scanning system you believe to be high end, uses LEDs as the light
source. The only high end film scanner I know of on the market today is the line made
by Imacon, none of which use LEDs. My first mention of Nikon scanners was an aside
to indicate they used LEDs, so you mentioned other film scanners; again, the
question remains . . . though you could just resort to name calling again instead
of providing an answer . . . your choice.
Why is that no surprise??

I posted that to see if you would refute it, and you did . . . go figure. My guess
is that you would refute it simply because you thought I wrote that, with a simple
desire for you to attempt to make yourself look like an intellectual. That you
would refute a calculation example from the very designers of that imager states a
great deal about you.
TIP: optical resolution is measured at the flatbed surface, not at the
focal plane - the reason for that is that only the flatbed surface is
accessible for testing other than during design and manufacture and it
is the only position that matters to the end user.

Quite simple, and I don't really think that is something that needs to be
mentioned. All of us should assume that true optical resolution is a statement of
what a system is capable of achieving, and capable in a manner in which that
scanner would normally be put to use.
The physical size of
the CCD has no direct influence on the resolution obtained other than
its implications on the optical system requirements. 7um pixels are
relatively trivial to resolve optically - low cost digital still cameras
work well with sub-3um pixels, albeit with limited minimum apertures,
but the pixel resolution is not particularly demanding.

It does influence the fill factor, which can influence the saturation and colour
response. Differences in colour responsiveness can affect apparent edge definition.
Edge definition can be thought of as an aspect of resolution, though it is mostly a
function of contrast. It is interesting that the better resolving systems use
smaller micron sized imaging cells, but to claim there is no correlation requires
much more explanation.

Low fill factor reduces device sensitivity. Of course other aspects of the design
can improve sensitivity. A 5 µm square pixel on one imager can be as spectrally
sensitive as a 7 µm square pixel on another imager, if some aspects of the device
are improved, and that is despite the 7 µm square pixel having almost twice the
area of the 5 µm square pixel imager. One large influence is the colour filtering over
the pixels, and varying thickness can influence spectral sensitivity.
It should indeed since it is quite simple really. In terms of
measurement: assess the MTF of the scanner using an ISO-12233 or
ISO-16067 references depending on subject matter and determine the
optical resolution at an agreed minimum MTF. Industry standard is
nominally 10%, but some people play specmanship games though that is
unnecessary here. You should note that this optical resolution will not
be in dpi or ppi, but I leave it to you to figure what it will be, since
you demonstrate ignorance and need to learn some facts.

See, now there you go farting higher than your own rear. What would you like to
use, photosites per inch, or some other measure? Or do you just want to state MTF
at Nyquist and be done with it? Maybe we should agree on lp/mm, since it is heavily
used in photography.
In terms of design, just for fun, use your example of the KLI-10203
which has a nyquist MTF of better than 60% at 2MHz clock rate. Fit an
IR filter, cut-off around 750nm, to eliminate out of band response.

I have not seen one of those installed with an IR cut-off filter. It is listed in
the specifications that the colour dyes of the filters are IR transparent beyond
700 nm, so your assumption is not a bad one, though why add another level of
complication?
Select a 1:3 f/4 relay objective from one of many optical suppliers. Few
will fail to meet an MTF of over 70% on axis at the sensor's nyquist
frequency and those from the better suppliers including Pilkington,
Perkin Elmer etc should achieve this across the entire field. Damping
mechanism and timing to eliminate lateral post-step motion or, ideally,
continuous backscan compensation of focal plane by multi-facet polygon.
Result: Scan width = 8.5"; sampling resolution = 1200ppi; MTF at Nyquist
for native resolution >=35% (ie. well resolved, optical resolution
exceeds sampling resolution!).

MTF at Nyquist for 3400dpi should exceed 80%, based on CTE limited MTF
of 95% for the detector and 90% optical MTF with 1 wavefront error at
this lower resolution.

So how do Creo, Dainippon Screen and Fuji Electronic Imaging achieve that and more?
These are just figures for optics and your example detector that I
happen to have in front of me at the moment - with a little searching it
might be possible to obtain better. Nevertheless, 1200ppi resolution is
clearly practical on an 8.5" scan width with the device you seem to
believe can only achieve 3400ppi.

"Only achieve 3400 ppi" . . . how is that worse than 1200 ppi? I don't think 1200
ppi at 8.5" scan width is useful for images intended for current commercial printing,
unless you want to restrict the output dimensions. More resolution is always
better, especially when going to large printed outputs.
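For what it is worth, the 1200 ppi figure in the quoted design is simply the cell count divided by the scan width, so the two numbers are at least self-consistent:

```python
# Mapping the full 10200-cell line across an 8.5" scan width fixes
# the sampling resolution at exactly 1200 ppi.
cells = 10200
scan_width_in = 8.5
print(cells / scan_width_in)   # 1200.0
```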
Hardly surprising though is it -
similar CCDs from other manufacturers are actually specified as
1200ppi/A4 devices!

Cor, shifting goalposts really is your forte isn't it

What . . . I tell you a scanner, and ask you to explain how it achieves an optical
resolution at the imaging bed that is better than you claim is possible. You can
think Creo are not telling the truth, or you can send off an image to them and
request it to be scanned. Proof is just a short test run away.
We determine a
projected resolution on an 8.5" width platform and you want to see it
achieved on a 12" platform. Do you understand the ratio of 8.5 and 12?
You are an idiot and I rest my case!

Name calling again . . . . . I suppose maybe I should be expecting that from you by
now, though I am just a little disappointed in your behaviour.

Anyway, the one way to solve this is for you to send a sample transparency to Creo
and request a full resolution scan from the iQSmart and EverSmart Supreme scanners.
In this case, a picture (scanned) is worth a thousand words . . . or maybe two
thousand considering all we have typed so far. ;-)

Enjoy yourself, and do try to cut back on the caffeine. ;-) Getting out more often
might be nice too. ;-)

Have a nice day . . . and I mean that. ;-)

Ciao!

Gordon Moat
A G Studio
<http://www.allgstudio.com>
 
Gordon Moat said:
Good afternoon,

I really am enjoying your selective snipping, so without further delay,
on with the
show . . . .
That is fine, but I have better things to do with my time than debate
technical issues with someone who has absolutely no understanding of
them and continues to lie in order to justify their previous errors. I
wrote at the end of my last post that you were an idiot and I had
finished debating with you, you are clearly too stupid to understand
something as basic as that.
 
Gordon Moat ha scritto:
(e-mail address removed) wrote:

[snip]
I don't want to get *too* technical.

Though you want to hack the driver. ;-)

Programming is my field, optics isn't. Not that I'm terribly good at
that either (in fact I haven't hacked the driver successfully so far),
but off hand I can't think of anything I'm terribly good at.

If it is moving the optics, and not the CCD, then it has a three or four row
CCD with RGB filtering over it. If it is moving the CCDs, then it could be
several. Of course, you could crack it open and find out. ;-)

Well, I dunno, but it looks like it's moving everything through the
glass.
In any case, I'm not really interested in the CCD layout, except for
its - ehm - staggeredness, in that the 2400dpi are obtained with two
shifted 1200dpi CCDs.
You would be lucky for it to be much better than half that, but for sake of
discussion . . . . . . .

Well, if you can suggest a simple method for measuring real resolution,
I would be happy to try and find out. Well, I wouldn't necessarily be
*happy*, on a second thought.
But anyway, yes, let's just pretend for the sake of discussion.
Make that a three fold problem . . . how and what do you plan to use to view
that image? In PhotoShop, you would view 2400 by 4800 as a rectangle; if all
the information was 2400 by 2400 viewing, then you have a square; if you want
a square image and have 2:1 ratio of pixels then your square image will be
viewed like a stretched rectangle. This is similar to a problem that comes up
in video editing for still images; video uses non-square pixels, so the
square pixel still images need to be altered to fit a non-square video
display.

But keeping the image at a 2:1 ratio is not what I plan to do.
What I plan to do is to take it back to a 1:1 ratio by downsampling on
the y axis: this way I get a 2400x2400 dpi picture, which is less noisy
than the same picture taken directly at 2400x2400 dpi from the scanner.

My main doubt was, what is the "best" or the "right" algorithm to do
the downsampling? I usually just use (bi)cubic resampling when I want
to resize something; but in this case, I've got some specific
information about the original data -- i.e. that each line is shifted
by half a pixel from the previous line (a quarter of a pixel actually,
since the CCD is staggered, but I think this shouldn't be important as
long as I want to keep the final image at 2400x2400).

I just thought that this knowledge might have allowed me to choose an
appropriate downsampling algorithm, instead of just using whatever
Photoshop offers.
Or how to actually still view it as a square image.

By downsampling.
After all, it's the same with scanners that can do "real"
multi-sampling, only that
1) lines do not exhibit a sub-pixel shift, since the CCD is kept in the
same position for each sample
2) the scanner firmware, or the driver, hides it all to the user and
just gives out a ready-to-use 1:1 ratio image
[snip]

I have not heard of anyone outside of Canon still using a staggered idea. I
think Microtek may have tried it, or possibly UMAX.

Well, Epson certainly does it in (most of) their flatbeds.
In order to really do
something different with that, much like the video example above, it seems
you would need to get the electronic signal directly off the CCD prior to any
in-scanner processing of the capture signal. Basically that means hacking
into the scanner. I don't see how that would be practical; even if you came
up with something, you still have a low cost scanner with limited optical
(true) resolution and colour abilities.

But I don't think the data that come out of my scanner are very much
adulterated (at least if I disable color correction, set gamma=1.0 and
such stuff).

Well, when I scan at "4800 dpi" from Windows, they're certainly
adulterated in the sense that interpolation is used on the x axis, in
order to get a 1:1 ratio, apparent 4800x4800 dpi image.

But I know (really, I'm not just assuming) that this interpolation is
done in the *driver*, not in the scanner; so assuming that I can hack
the Linux driver to support 4800 dpi vertically, horizontal
interpolation becomes a non-issue: after all, it's easier to *not*
interpolate than to interpolate!


by LjL
(e-mail address removed)
 
Good afternoon LjL,

Gordon Moat ha scritto:
(e-mail address removed) wrote:

[snip]
I don't want to get *too* technical.

Though you want to hack the driver. ;-)

Programming is my field, optics isn't. Not that I'm terribly good at
that either (in fact I haven't hacked the driver successfully so far),
but off hand I can't think of anything I'm terribly good at.

Perhaps you might want to look at the firmware. The code might be simpler, and you
may be able to effect a change in fewer attempts.

I had a project working on a future full frame digital camera a few months ago.
The programmer on that sub-contracted me to help with the colour aspects of the
software, and with user interface development. I admit I could never do the
programming end of it, and it is something which simply does not interest me.
Colour models are another realm, and I enjoyed that aspect. Wish I could tell you
more about the project, but I signed a confidentiality agreement.

If it is moving the optics, and not the CCD, then it has a three or four row
CCD with RGB filtering over it. If it is moving the CCDs, then it could be
several. Of course, you could crack it open and find out. ;-)

Well, I dunno, but it looks like it's moving everything through the
glass.

If you can figure out how to easily get it apart, you might discover a bit more.
However, dust could then become an issue. It would make a nice opportunity to
clean the other side of the glass.
In any case, I'm not really interested in the CCD layout, except for
its - ehm - staggeredness, in that the 2400dpi are obtained with two
shifted 1200dpi CCDs.


Well, if you can suggest a simple method for measuring real resolution,
I would be happy to try and find out. Well, I wouldn't necessarily be
*happy*, on a second thought.

Ted Harris wrote an article in the May/June 2005 issue of View Camera about
scanners. He used a USAF1951 test target and a T4110 step wedge; might be
something to try out. Really well written and presented article, though it would
have been nice to include larger sample images.
But anyway, yes, let's just pretend for the sake of discussion.


But keeping the image at a 2:1 ratio is not what I plan to do.
What I plan to do is to take it back to a 1:1 ratio by downsampling on
the y axis: this way I get a 2400x2400 dpi picture, which is less noisy
than the same picture taken directly at 2400x2400 dpi from the scanner.

So you want to capture a stacked rectangle in one dimension, then compress the
long end to display a square. Sounds like some pretty hefty algorithm to avoid
creating artefacts in the final image file. Wouldn't that actually reduce the edge
sharpness and resolution of any diagonals in the source image?
My main doubt was, what is the "best" or the "right" algorithm to do
the downsampling? I usually just use (bi)cubic resampling when I want
to resize something; but in this case, I've got some specific
information about the original data -- i.e. that each line is shifted
by half a pixel from the previous line (a quarter of a pixel actually,
since the CCD is staggered, but I think this shouldn't be important as
long as I want to keep the final image at 2400x2400).

I think Bart van der Wolf had a short test page of several algorithms. You might
want to search archives, or contact him about that. Few people ever suggest using
bilinear, though there are some images that work better using that. It would seem
that having an option to use more than one algorithm would be of greater benefit
than forcing just one to work. Of course, the programming would be much more hefty
to do that.
I just thought that this knowledge might have allowed me to choose an
appropriate downsampling algorithm, instead of just using whatever
Photoshop offers.

In a production environment, what we do is Scan Once, Output Many workflow.
Basically scanning at the highest resolution, then later using that file to match
output requirements on a case by case basis. Time constraints sometimes mean just
scanning at the output resolution needed for a project, though that can mean that
later on you would need to scan the same image in a different manner for another
project.

Ideally you try to do as much as you can prior to dumping the image file into
PhotoShop. Nearly all operations in PhotoShop are destructive editing. The other
issue is that getting the scan optimal reduces billing time in PhotoShop, though
that is more a commercial workflow necessity.
By downsampling.

Resize on one axis. Still images sent to video NLEs need a conversion to allow
them to display properly, i.e. you want a picture with a round basketball to still
look like a round basketball when displayed on a video monitor or television.
After all, it's the same with scanners that can do "real"
multi-sampling, only that
1) lines do not exhibit a sub-pixel shift, since the CCD is kept in the
same position for each sample
2) the scanner firmware, or the driver, hides it all from the user and
just gives out a ready-to-use 1:1 ratio image

Just a guess, but I would think the firmware would be the place to find these
instructions. Seems that if you found your information there, then it might just
be as simple as removing that portion of it, if it does in fact work the way you
suspect.
[snip]

I have not heard of anyone outside of Canon still using a staggered idea. I
think Microtek may have tried it, or possibly UMAX.

Well, Epson certainly does it in (most of) their flatbeds.

Interesting. Just a side note, the 4870 Photo was in that test group from View
Camera magazine. While Epson state it is a 4800 ppi scanner, the best Ted Harris
got was 2050 ppi. The interesting thing I found was that Dmax tested was
substantially less than Epson claims. Rather than attempt to repeat the article
here, it might be worth it for you to order a copy, or get an old issue of this.

My feeling is that working on the actual Dmax would be more beneficial to final
image quality than playing with the resolving ability. You can try simple things
like using drum scanner oil on the flatbed, though clean-up is another issue.
There have been some good articles in Réponses Photo (French) in the past about
this method, and sometimes in a few other publications.
But I don't think the data that come out of my scanner are very much
adulterated (at least if I disable color correction, set gamma=1.0 and
such stuff).

Sort of like a RAW capture from a digital camera.
Well, when I scan at "4800 dpi" from Windows, they're certainly
adulterated in the sense that interpolation is used on the x axis, in
order to get a 1:1 ratio, apparent 4800x4800 dpi image.

I don't think overscanning or interpolation is all bad. Kai Hammann and a few
others have written about this for a few scanning devices. Basically what I have
found in practice is that overscanning can allow smoother tonal transitions in
large areas of colour, such as sky in landscape images. Overscanning
might not improve true resolution, but it can often make the colour output just a
bit smoother. If you have not guessed it by now, I am more inclined to favour
colour quality over outright resolution.
But I know (really, I'm not just assuming) that this interpolation is
done in the *driver*, not in the scanner; so assuming that I can hack
the Linux driver to support 4800 dpi vertically, horizontal
interpolation becomes a non-issue: after all, it's easier to *not*
interpolate than to interpolate!

by LjL
(e-mail address removed)

Sounds like you are working with SANE. Maybe the driver would do it, but I still
think a look at the firmware might show you another option. Anyway, best of luck,
and enjoy your project.

Ciao!

Gordon Moat
A G Studio
<http://www.allgstudio.com>
 
Kennedy said:
That is fine, but I have better things to do with my time than debate
technical issues with someone who has absolutely no understanding of
them and continues to lie in order to justify their previous errors. I
wrote at the end of my last post that you were an idiot and I had
finished debating with you, you are clearly too stupid to understand
something as basic as that.
--

Awe . . . did I hurt your feelings? Well, it was fun while it lasted. I guess
you have nothing left to learn. ;-)

Ciao!

Gordon Moat
A G Studio
<http://www.allgstudio.com>
 
Gordon said:
Good afternoon LjL,

[snip]

Perhaps you might want to look at the firmware. The code might be simpler, and you
may be able to effect a change in fewer attempts.

I would certainly like to look at the firmware!
But I don't think there is any (documented) way to look at it and/or
modify it. I guess it's one of the things Epson wants to keep most
secret of their scanners!
[snip]

If you can figure out how to easily get it apart, you might discover a bit more.
However, dust could then become an issue. It would make a nice opportunity to
clean the other side of the glass.

Nah, it's too new to make such attempts...
[snip]

Well, if you can suggest a simple method for measuring real resolution,
I would be happy to try and find out. Well, I wouldn't necessarily be
*happy*, on second thought.

Ted Harris wrote an article in the May/June 2005 issue of View Camera about
scanners. He used a USAF1951 test target and a T4110 step wedge; might be
something to try out. Really well written and presented article, though it would
have been nice to include larger sample images.

I don't want to spend money (or send money abroad, as is often the case
with these things) on test targets. I would certainly like to know my
scanner's true resolution, but I'm not going to spend money for that...
after all, there's nothing I can do to improve it.

(Though I might consider getting a calibration target - that does have a
practical use!)

I am currently experimenting with "slanted edge" and Imatest, and I'll
publish some results soon, although I'm afraid they're going to be
heavily off, as I haven't quite understood the procedure.
[snip]
But keeping the image at a 2:1 ratio is not what I plan to do.
What I plan to do is to take it back to a 1:1 ratio by downsampling on
the y axis: this way I get a 2400x2400 dpi picture, which is less noisy
than the same picture taken directly at 2400x2400 dpi from the scanner.

So you want to capture a stacked rectangle in one dimension, then compress the
long end to display a square. Sounds like some pretty hefty algorithm to avoid
creating artefacts in the final image file. Wouldn't that actually reduce the edge
sharpness and resolution of any diagonals in the source image?

That's exactly my main concern.
On the other hand, just about everyone recommends scanning at a high
resolution and then scale down instead of scanning directly at a lower
resolution. And this is exactly what I'm doing, except that the scaling
down is only in one direction in my case.
I think Bart van der Wolf had a short test page of several algorithms. You might
want to search archives, or contact him about that. Few people ever suggest using
bilinear, though there are some images that work better using that.

I have found http://heim.ifi.uio.no/~gisle/photo/interpolation.html .
Bart van der Wolf is involved, but I don't know if it's the page you meant.

But, you see, I'm not looking for the "best looking" algorithm in
general -- what I'm looking for is the right algorithm to downsample
things made with a half-pixel shift etc. etc.
It might even come out to be bilinear!
It would seem
that having an option to use more than one algorithm would be of greater benefit
than forcing just one to work. Of course, the programming would be much heftier
to do that.

But my goal is to automate the scanning process, so looking at each
image before storing it isn't really an option.
Storing the images in the original ratio to leave room for future
decisions is also, well, not "not an option" but impractical, due to the
file sizes involved.

But again, more than the algorithm that "looks best", I'm searching for
the algorithm that is the most "correct" in the context I'm working with.
Hopefully, it will also be the one that looks best with most images!
[snip]
Ideally you try to do as much as you can prior to dumping the image file into
PhotoShop. Nearly all operations in PhotoShop are destructive editing. The other
issue is that getting the scan optimal reduces billing time in PhotoShop, though
that is more a commercial workflow necessity.

My idea is to have a script (which, in a basic form, is already in
place) to batch-scan and do all the non-destructive (or
destructive-but-the-file-would-be-too-large-otherwise) corrections.

The resulting images would be stored for archival.

Another script, or the same script, would create copies of the images
for viewing, where the various destructive transformations *are* applied
(USM, the finer histogram corrections, resizing to 1200x1200dpi, etc).

This second part would of course be performed by me in Photoshop instead
of by the script, for pictures I care about particularly.
The script would just work as a "one hour photo" equivalent.
[snip]

Or how to actually still view it as a square image.

By downsampling.

Resize on one axis. Still images sent to video NLEs need a conversion to allow
them to display properly, i.e. you want a picture with a round basketball to still
look like a round basketball when displayed on a video monitor or television.

Ok, but I'm not doing video...
Actually, I also intend to be able to display my pictures on TV (I've
got a "WebTV" from my ISP), but that's a very minor concern.

Besides, aren't you talking about NTSC? I'm PAL, and I'm not sure but I
think PAL pixels are square (we've got more scanlines than NTSC).
[snip]

My feeling is that working on the actual Dmax would be more beneficial to final
image quality than playing with the resolving ability.

Well, multi-sampling does improve DMax AFAIK, and multi-sampling is what
I'm trying to "simulate" (actually there's nothing to simulate, 2x
multi-sampling is there in my scanner, it's just that it shifts the CCD
a little after the first sampling...).

You can see from the "positive vs negative" thread that I'm also trying
to work out the way exposure time control works in my scanner (which,
technically, comes with "auto-exposure" only). Longer exposure times
(or, possibly, superimposing a long exp scan to a short exp scan, as Don
does) would also help DMax.
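For what it's worth, the DMax benefit of multi-sampling is easy to demonstrate numerically: averaging two independent noisy reads of the same level cuts the noise standard deviation by roughly 1/sqrt(2). A toy sketch with synthetic Gaussian noise (not real scanner data):

```python
# Tiny demonstration of why 2x multi-sampling helps: averaging two
# independent samples of the same signal cuts the noise standard
# deviation by about 1/sqrt(2), which is what buys the extra DMax.
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0
scan_a = signal + rng.normal(0, 5, 100_000)   # two noisy "scans"
scan_b = signal + rng.normal(0, 5, 100_000)
averaged = 0.5 * (scan_a + scan_b)

print(round(scan_a.std(), 2))    # ~5.0
print(round(averaged.std(), 2))  # ~3.5  (5 / sqrt(2) = 3.54)
```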
You can try simple things
like using drum scanner oil on the flat bed, though clean-up is another issue.
There have been some good articles in Reponses Photo (French) in the past about
this method, and sometimes in a few other publications.

I could try it just to see what comes out of it, but I can't really do
that normally... this scanner is used for scanning paper sheets by other
people here. I don't have it all to myself to play with.
[snip]

Sounds like you are working with SANE.

Correct. SANE also has the advantage (over VueScan and Windows
software, but I don't have Windows on that computer anyway) that it
allows easy use of the scanner from the network.

We've got three computers here, plus the "server" that the scanner is
connected to, and thanks to SANE and SaneTwain we can all scan from our
own computers.
Maybe the driver would do it, but I still
think a look at the firmware might show you another option. Anyway, best of luck,
and enjoy your project.

Thanks! Well, the scans I'm getting right now aren't *so* bad for what I
must do with them, so at worst I'll be left with decent scans if I fail,
and will hopefully have learned something in the process.
Nothing to lose.


by LjL
(e-mail address removed)
 
Gordon Moat said:
Awe . . . did I hurt your feelings? Well, it was fun while it lasted. I guess
you have nothing left to learn. ;-)
Certainly a lot less than you! Advising anyone to use a 54 year old 3
bar test target to assess the resolution of a digital imaging sensor
just about sums you up!

Few things irritate me to the point of calling them, but liars, losers
and idiots are pretty high and you top all of those lists at the moment!
 
Kennedy said:

Look, perhaps this is none of my business, but... I know when you want
to insult people you leave them in no doubt that you have, but aren't
you just exaggerating a little? Reading back into the thread (but it's
3:53) it looks like he started, but still!


by LjL
(e-mail address removed)
 
Lorenzo J. said:
I know when you want to insult people you leave them in no doubt that
you have,

that is generally the intention. :-)
but aren't you just exaggerating a little?

The USAF-1951 test chart is a mediocre *analogue* test chart. Even when
used with analogue imaging components such as the film cameras it was
originally intended to measure, it suffers major drawbacks, not least of
which is the fact that each spatial frequency is restricted to only 2.5
line pairs. The result is that any second order component of the system
MTF (and the optics are more than likely to have higher orders)
attenuates or exaggerates the contrast of the middle black line of the
three by an unknown amount. Most professional measurement instruments
transitioned to 6.5 or more line pairs by the early 60s or FT based
approaches later but, having been the first publicly available standard
pattern, the USAF-1951 chart lived on and was copied in amateur circles
long after it had become obsolete in the scientific community.

With the advent of digital imaging, that limitation became even more
restrictive since all bar pattern based testing is open to
misinterpretation through aliasing. When there are many line pairs, the
onset of aliasing at the hard resolution limit of the system is obvious,
but it is virtually impossible to identify in the 2.5 line pairs of the
USAF-1951 target until it has reached a level which would be highly
objectionable in any image!

In short, popular though it may be amongst novices who are enamoured by
its apparent fine detail and pseudo-technical journalists, it is worse
than useless for the assessment of digital images from any source,
producing ambiguous results at best and totally misleading results in
general.

It certainly is no exaggeration to state that only an idiot would
recommend such a misleading and ambiguous reference for the assessment
of a scanner.

BTW - I am sure you know about time zones and people aren't always in
the same time zone as their usenet server. ;-)
 
Kennedy McEwen ha scritto:
Lorenzo J. said:
I know when you want to insult people you leave them in no doubt that
you have,

that is generally the intention. :-)
but aren't you just exaggerating a little?

The USAF-1951 test chart is a mediocre *analogue* test chart.

[snip]

It certainly is no exaggeration to state that only an idiot would
recommend such a misleading and ambiguous reference for the assessment
of a scanner.

You know that's not the reason for this flamewar (not even remotely). He
said "Ted Harris wrote an article [...]. He used a USAF1951 test target
[...]; might be something to try out."
Hardly "recommending".

But I don't really want to argue this -- hey, it's your flamewar. I'm
even one of the few who enjoys reading flamewars.

All I intended to tell you is that, in my opinion, you overreacted. You
can do what you like with that opinion... I just thought it might be
good to tell you.
BTW - I am sure you know about time zones and people aren't always in
the same time zone as their usenet server. ;-)

Uh? Are you referring to me saying it was 3:53?
I said that because it *was* 3:53 here (no not PM) when I re-read the
thread, only to point out that I might have been too sleepy not to get
confused about something.


by LjL
(e-mail address removed)
 
Lorenzo J. Lucchini said:
Gordon said:
Good afternoon LjL,

[snip]

Perhaps you might want to look at the firmware. The code might be simpler, and you
may be able to effect a change in fewer attempts.

I would certainly like to look at the firmware!
But I don't think there is any (documented) way to look at it and/or
modify it. I guess it's one of the things Epson wants to keep most
secret of their scanners!

Not sure if you have any test gear, or chip readers, but I think that would be
necessary. Of course, the other way is if they offered a firmware download for update.
Unfortunately, I don't think Epson has much in the line of firmware for download.
[snip]

If you can figure out how to easily get it apart, you might discover a bit more.
However, dust could then become an issue. It would make a nice opportunity to
clean the other side of the glass.

Nah, it's too new to make such attempts...
[snip]

Well, if you can suggest a simple method for measuring real resolution,
I would be happy to try and find out. Well, I wouldn't necessarily be
*happy*, on second thought.

Ted Harris wrote an article in the May/June 2005 issue of View Camera about
scanners. He used a USAF1951 test target and a T4110 step wedge; might be
something to try out. Really well written and presented article, though it would
have been nice to include larger sample images.

I don't want to spend money (or send money abroad, as is often the case
with these things) on test targets. I would certainly like to know my
scanner's true resolution, but I'm not going to spend money for that...
after all, there's nothing I can do to improve it.

(Though I might consider getting a calibration target - that does have a
practical use!)

The small Kodak Q-13 is around $US 20. That has very useful test colour patches which
you could use to improve colour. You could also use a scan of that to create an overall
correction Action to use in PhotoShop. I think it could save you some time in
correcting colour on scans.
I am currently experimenting with "slanted edge" and Imatest, and I'll
publish some results soon, although I'm afraid they're going to be
heavily off, as I haven't quite understood the procedure.

Slanted edge is one way to do this. Recall that scanners and digital cameras work best
in a vertical or horizontal orientation; stray much from those axes and the results will
always be lower. Of course, in the real world of images there are few perfectly straight
lines that are photographed, except in architecture.
[snip]
But keeping the image at a 2:1 ratio is not what I plan to do.
What I plan to do is to take it back to a 1:1 ratio by downsampling on
the y axis: this way I get a 2400x2400 dpi picture, which is less noisy
than the same picture taken directly at 2400x2400 dpi from the scanner.

So you want to capture a stacked rectangle in one dimension, then compress the
long end to display a square. Sounds like some pretty hefty algorithm to avoid
creating artefacts in the final image file. Wouldn't that actually reduce the edge
sharpness and resolution of any diagonals in the source image?

That's exactly my main concern.
On the other hand, just about everyone recommends scanning at a high
resolution and then scale down instead of scanning directly at a lower
resolution. And this is exactly what I'm doing, except that the scaling
down is only in one direction in my case.

It is a good work practice. However, if you know you will not need the greater file
size scan for any uses, then you can save lots of time just scanning at the size you
need. You can always scan at a larger size later, if you find you need it for a certain
output. Of course, if you have the time, nothing wrong with always scanning at maximum.
I have found http://heim.ifi.uio.no/~gisle/photo/interpolation.html .
Bart van der Wolf is involved, but I don't know if it's the page you meant.

Well, as far as I remember, that was one of them. The reference to Douglas in Australia
is another aspect, and I was asked to become involved in some discussions at the
beginning of this year. I think many people want a very simple answer, though I do not
believe there is a simple answer, since many other aspects are involved and printing is
one of those.
But, you see, I'm not looking for the "best looking" algorithm in
general -- what I'm looking for is the right algorithm to downsample
things made with a half-pixel shift etc. etc.
It might even come out to be bilinear!

Very true. I was hoping that you finding that page might suggest some other things to
try out. If you only test two methods, maybe you might miss a third or fourth that
could have worked better. I don't really see one method for every type of image, and I
think some adapting depending upon image type could help. Not sure what Techno Aussie
does, though I found it simple enough to come up with something that works in
a similar manner, though probably taking more steps than Douglas used.

There are also some nice commercial products out there which you might be able to
alter. Perhaps as a programmer you could reverse engineer one of those. Maybe Genuine
Fractals, Nik Sharpener Pro, or getting the SDK for PhotoShop Plug-Ins. There are also
the old Kai's products that did some interesting localized interpolations, and might
yield some fun code.
But my goal is to automate the scanning process, so looking at each
image before storing it isn't really an option.
Storing the images in the original ratio to leave room for future
decisions is also, well, not "not an option" but impractical, due to the
file sizes involved.

Okay, so a search for the best compromise.
But again, more than the algorithm that "looks best", I'm searching for
the algorithm that is the most "correct" in the context I'm working with.
Hopefully, it will also be the one that looks best with most images!

Out of curiosity, what types of images would you normally be scanning?
[snip]
Ideally you try to do as much as you can prior to dumping the image file into
PhotoShop. Nearly all operations in PhotoShop are destructive editing. The other
issue is that getting the scan optimal reduces billing time in PhotoShop, though
that is more a commercial workflow necessity.

My idea is to have a script (which, in a basic form, is already in
place) to batch-scan and do all the non-destructive (or
destructive-but-the-file-would-be-too-large-otherwise) corrections.

The resulting images would be stored for archival.

You might want to download a trial version of BinuScan or Creo oXYgen Scan. Even though
they might not run with your Epson, you might get some ideas for programming your own
solution.
Another script, or the same script, would create copies of the images
for viewing, where the various destructive transformations *are* applied
(USM, the finer histogram corrections, resizing to 1200x1200dpi, etc).

PhotoShop has had a nice Actions feature since version 5.0. You can create nearly any
combination to run on a folder of images. Once you start an Action, if you know it will
run a while, then you can leave the computer and take a break. ;-)
This second part would of course be performed by me in Photoshop instead
of by the script, for pictures I care about particularly.
The script would just work as a "one hour photo" equivalent.

On Mac OS, there is AppleScript, and with Windows there is also scripting. Not really
programming, though it can function for some nice automation. I use a few of these
types of scripts, though mostly just when doing operations involving Quark or InDesign.
[snip]

Or how to actually still view it as a square image.

By downsampling.

Resize on one axis. Still images sent to video NLEs need a conversion to allow
them to display properly, i.e. you want a picture with a round basketball to still
look like a round basketball when displayed on a video monitor or television.

Ok, but I'm not doing video...
Actually, I also intend to be able to display my pictures on TV (I've
got a "WebTV" from my ISP), but that's a very minor concern.

Besides, aren't you talking about NTSC? I'm PAL, and I'm not sure but I
think PAL pixels are square (we've got more scanlines than NTSC).
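For the record, standard-definition DV video pixels are generally not square in either system; displaying a square-pixel image correctly means rescaling one axis by the pixel aspect ratio. The PAR values below are the commonly quoted DV approximations (an assumption; broadcast standards define several variants):

```python
# Resize one axis by the pixel aspect ratio (PAR) so a square-pixel
# image displays correctly on non-square video pixels.  PAR values are
# common DV approximations (assumed here, not authoritative).
PAR = {"PAL_DV": 59 / 54, "NTSC_DV": 10 / 11}

def display_width(stored_width: int, par: float) -> int:
    """Horizontal size needed so round things stay round on screen."""
    return round(stored_width * par)

print(display_width(720, PAR["PAL_DV"]))    # wider than stored on PAL
print(display_width(720, PAR["NTSC_DV"]))   # narrower on NTSC
```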

Just a different resize ratio. I mentioned video since your rectangular scan could be
considered as non-square pixels, since your final image would be a square. Perhaps you
might get an idea from video still image processing. I think there is an automated
feature for something like that in PhotoShop CS, though of course you could just create
an Action that does the same thing.

That would be different than altering the Epson scanner driver. However, if you could
somehow prevent the Epson driver from making a square image from the rectangular
information, then you could perform the more automated steps in PhotoShop by using
Actions you create. Batch mode in PhotoShop using Actions does work nicely.
[snip]

My feeling is that working on the actual Dmax would be more beneficial to final
image quality than playing with the resolving ability.

Well, multi-sampling does improve DMax AFAIK, and multi-sampling is what
I'm trying to "simulate" (actually there's nothing to simulate, 2x
multi-sampling is there in my scanner, it's just that it shifts the CCD
a little after the first sampling...).
Sounds more like poor registration. There is some documentation with SilverFast about
them implementing multi-scanning on scanners that did not originally offer it,
basically something to do with aligning pixels on successive scans instead of relying
on the scanner to be that accurate. Multi-scan does increase your scan times, though if
you hit ENTER and then had an automated process for the rest, the time might not be
impacted so badly.
You can see from the "positive vs negative" thread that I'm also trying
to work out the way exposure time control works in my scanner (which,
technically, comes with "auto-exposure" only). Longer exposure times
(or, possibly, superimposing a long exp scan to a short exp scan, as Don
does) would also help DMax.

Crap . . . autoexposure only would suck. I don't think I could deal with a limitation
like that. Shame some software is not available for you to better control exposure. The
only thing that bothers me more in low end gear is a lack of focus control.
I could try it just to see what comes out of it, but I can't really do
that normally... this scanner is used for scanning paper sheets by other
people here. I don't have it all to myself to play with.

Bummer. The Kami oil evaporates very quickly, and will leave the surface clean. I think
Aztek might sell that direct, though you could also try ICG. Perhaps it is something to
try when you have more time with the scanner.
[snip]

Sounds like you are working with SANE.

Correct. SANE also has the advantage (over VueScan and Windows
software, but I don't have Windows on that computer anyway) that it
allows easy use of the scanner from the network.

We've got three computers here, plus the "server" that the scanner is
connected to, and thanks to SANE and SaneTwain we can all scan from our
own computers.

I think Caldera Graphics has some nice UNIX based imaging and scanning software. I used
Caldera Cameleo a few years ago at one location; not as user friendly as some software
though very effective.
Thanks! Well, the scans I'm getting right now aren't *so* bad for what I
must do with them, so at worst I'll be left with decent scans if I fail,
and will hopefully have learned something in the process.
Nothing to lose.

Cool! Nice discussion with you, hope something works out.

Ciao!

Gordon Moat
A G Studio
<http://www.allgstudio.com>
 
Kennedy McEwen ha scritto:
Lorenzo J. said:
I know when you want to insult people you leave them in no doubt that
you have,

that is generally the intention. :-)
but aren't you just exaggerating a little?

The USAF-1951 test chart is a mediocre *analogue* test chart.

[snip]

It certainly is no exaggeration to state that only an idiot would
recommend such a misleading and ambiguous reference for the assessment
of a scanner.

You know that's not the reason for this flamewar (not even remotely). He
said "Ted Harris wrote an article [...]. He used a USAF1951 test target
[...]; might be something to try out."
Hardly "recommending".

But I don't really want to argue this -- hey, it's your flamewar. I'm
even one of the few who enjoys reading flamewars.

Hello again LjL,

Glad someone was enjoying what I wrote. ;-) Sometimes it seems that when
you enter comp.periphs.scanners one needs to check their humour at the
door. :-)

It seems I made him so mad that there is no way he would ever agree with
anything I type, even if it was relating the tests of others, or making a
statement like "the sky is blue". Oh well, I guess I will avoid typing
something he finds irritating.

Anyway, like I state to many people, don't just take one source for
anything; go out and investigate on your own based on many recommendations.
I also do not believe in re-inventing the wheel; so if someone else has
some useful information from however crude a test, then they might want to
read it, rather than repeat it.

Not everyone has a Siemens star pattern for testing. In the world of
commercial printing, that is basically what we use, though in the US most
printing shops call them targets. Normally on a print this will also
indicate registration between plates and dot gain, though there are other
tools to do the same. If you have a nice high resolution laser printer, or
something else with fine output, I have a test target that can be printed
and used to evaluate commercial prints. It is often placed outside the crop
lines along the edge of the sheet. It also contains colours and percentages
of such, which are useful on print runs.

When I sent off items to test a few Creo scanners, I sent actual
transparencies (photos) of things I would need to scan. I think that sort
of test can be more important in choosing a scanner than some test target
resolution. I also have facilities to view at 4x, 7x, 8x and 10x through a
loupe, or 20x to 50x through a microscope; which should cover just about
any printing enlargement I would need to perform.

If you consider what output parameters you need to meet, then you can try
to fine tune your scanning to best fit into those parameters. Sometimes you
might find the printer is the greatest limit, and other times you will find
it is your scanner. Usually I have seen a greater problem with colour
issues than a lack of resolution. Kai Hammann has some nice articles about
scanning, and I basically agree with him that a skilled scanner operator
can nearly match a scan on a lesser scanner to what someone less skilled
can accomplish on a better scanner. Practice your skills, and hone your
eyes, and you can improve.

Ciao!

Gordon Moat
A G Studio
<http://www.allgstudio.com>
 
SNIP
Well, I dunno, but it looks like it's moving everything through
the glass.
In any case, I'm not really interested in the CCD layout,
except for its - ehm - staggeredness, in that the 2400dpi are
obtained with two shifted 1200dpi CCDs.

Even that's not the whole story. The sensor pitch and effective area
also determine limiting resolution.

SNIP
Well, if you can suggest a simple method for measuring real resolution,
I would be happy to try and find out. Well, I wouldn't necessarily be
*happy*, on second thought.
But anyway, yes, let's just pretend for the sake of discussion.

Scanning a sharp (slanted approx. 5 degrees) edge, without clipping,
will allow you to determine the limiting resolution (according to the ISO, 10%
modulation is close to visual limiting resolution) and the Modulation
Transfer Function (MTF, or contrast as a function of spatial
resolution).
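The recipe Bart describes can be sketched end to end in a few lines: differentiate the edge-spread function (ESF) to get the line-spread function (LSF), Fourier-transform that to get the MTF, and read off where modulation falls to 10%. This toy version substitutes a synthetic Gaussian-blurred edge for real projected scan data (the sub-pixel binning along the ~5 degree edge is omitted), so treat it as an illustration of the chain, not a measurement tool:

```python
# Minimal ESF -> LSF -> MTF sketch of the slanted-edge method.
# A synthetic Gaussian-blurred edge stands in for real scan data.
import numpy as np

def mtf_from_esf(esf):
    lsf = np.diff(esf)                  # ESF -> LSF
    lsf = lsf * np.hanning(len(lsf))    # window to tame truncation
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                 # normalize so DC = 1

# Synthetic ESF: a step blurred by a Gaussian of sigma = 2 pixels.
x = np.arange(-50, 50)
sigma = 2.0
esf = np.cumsum(np.exp(-x**2 / (2 * sigma**2)))
esf /= esf[-1]

mtf = mtf_from_esf(esf)
freqs = np.fft.rfftfreq(len(np.diff(esf)), d=1.0)   # cycles per pixel
f10 = freqs[np.argmax(mtf < 0.10)]                  # 10% modulation point
print(f"10% modulation at ~{f10:.2f} cycles/pixel")
```

With real data you would first project pixel coordinates onto the edge normal and bin them into an oversampled ESF, which is the part Imatest handles for you.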
SNIP
My main doubt was, what is the "best" or the "right" algorithm
to do the downsampling?

http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample.htm
or in an attempt to focus on scanner related (subject to
grain-aliasing) down-sampling:
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/example1.htm
[...]
I just thought that this knowledge might have allowed me to
choose an appropriate downsampling algorithm, instead of just
using whatever Photoshop offers.

Answering that question also requires determining the actual resolution
limitations. Nevertheless, the better downsampling algorithms will
behave better regardless of the data quality 'thrown' at them.

SNIP
Canon (thru 'vibrating' mirrors) and Epson (thru staggered sensor
lines) are certainly contenders.

Bart
 
Bart said:
[snip: scanner resolution testing]

Well, let's refer to the other thread about this, as I have in fact now
tried the method you suggest.
My main doubt was, what is the "best" or the "right" algorithm
to do the downsampling?

http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample.htm
or in an attempt to focus on scanner related (subject to grain-aliasing)
down-sampling:
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/example1.htm
[...]

Both pages were excellent reading, but I don't think they really address
*my* problem: basically, you started with high-resolution images,
resized down, and saw what happened to the detail that was at too high
a resolution to render correctly in the smaller images.

My problem is: I have a 4800 dpi scan (well, only on the vertical axis),
but it really only contains the equivalent of a 2400 dpi scan at best,
so aliasing when downsampling shouldn't really be an issue.

But sharpness is an issue: as I know that pixels overlap, and I know by
how much (though not really, as you point out below), I feel there ought
to be an algorithm that is suited to my particular situation, though it
might not be the "best" algorithm in the general case.

But also read below.
Answering that question also requires determining the actual
resolution limitations. Nevertheless, the better downsampling
algorithms will behave better regardless of the quality of the data
'thrown' at them.

Well, are my Imatest results enough to attempt that?

And, perhaps the key in obtaining what I want really is in sharpening,
and not so much in the resizing algorithm.
In that case, I suppose I should find out the "correct" amount of
sharpening from the Imatest data -- though I should run Imatest on a
4800 dpi scan to get appropriate results for applying sharpening to a
(downsampled) 4800 dpi scan, shouldn't I?

Still, resampling-then-sharpening sounds like an unnecessarily lossy
operation, since I know everything (or rather, I had better know
everything) in advance of resizing; and even resample-then-sharpen
leaves the question of what resampling algorithm to use, as I suppose
they aren't all the same even when aliasing is taken out of the equation.

I am very interested in this possibility of calculating the correct
amount of sharpening from MTF results anyway, even aside from the issue
of 4800 dpi scanning.
Obviously, even my 2400 dpi scans need some sharpening, and I'm not the
kind of guy who likes to decide the amount by eyeballing when there is
"a right way".
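One simplified way to connect measured MTF to a sharpening setting: if the scanner's blur is approximated as Gaussian, the MTF50 figure fixes the blur sigma, which in turn suggests a matching unsharp-mask radius. The helper names below are illustrative, and the Gaussian assumption is a simplification of a real scanner MTF.

```python
import numpy as np

def gaussian_sigma_from_mtf50(mtf50):
    """Blur sigma (pixels) of a Gaussian whose MTF is 0.5 at `mtf50`
    cycles/pixel.  Gaussian MTF: exp(-2 * (pi * sigma * f)**2)."""
    return np.sqrt(np.log(2.0) / 2.0) / (np.pi * mtf50)

def unsharp_1d(signal, sigma, amount=1.0):
    """Unsharp mask: signal + amount * (signal - gaussian_blur(signal)).
    The blur radius is matched to the measured sigma."""
    radius = int(np.ceil(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    blurred = np.convolve(signal, kernel, mode='same')
    return signal + amount * (signal - blurred)
```

For example, an Imatest-reported MTF50 of 0.25 cycles/pixel implies a Gaussian sigma of about 0.75 pixels, giving a concrete unsharp-mask radius instead of an eyeballed one; the `amount` still involves taste, since exact inversion of the blur would also amplify noise.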


by LjL
(e-mail address removed)
 
Gordon Moat said:
(e-mail address removed) wrote: SNIP
Not everyone has a Siemens star pattern for testing.

But everybody can!

For those who want to do their own testing, I have created a target
file that is also better suited for digicam sensors, not only for
analog film, and you can make your own target from it at home with a
decent inkjet printer.
For HP/Canon inkjet printers (3.8MB):
http://www.xs4all.nl/~bvdwolf/main/downloads/Jtf60cy-100mm_600ppi.gif
For Epson inkjet printers (5.3MB):
http://www.xs4all.nl/~bvdwolf/main/downloads/Jtf60cy-100mm_720ppi.gif

Print it at the indicated ppi, without printer enhancements, on glossy
photo paper, which should produce a 100x100 mm target, and shoot it
with your (digi)cam from a (non-critical) distance of between 25-50x
the focal length. That will produce a discretely sampled sensor-array
capture that is limited by the combined optical components and capture
medium in the optical chain.

Lots of interesting conclusions can be drawn from the resulting
image. The target is cheap to produce, and when it gets worn out, you
just print a new one.
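For the curious, a sinusoidal star of the same general kind can be generated with a few lines of numpy. This is an illustrative sketch, not a reproduction of Bart's 60-cycle target files linked above (which also include anti-print-aliasing refinements):

```python
import numpy as np

def siemens_star(size=1024, cycles=60):
    """Sinusoidal Siemens star: intensity varies sinusoidally with
    angle, so the spatial frequency of the spokes rises toward the
    centre.  `cycles` is the number of full cycles per revolution."""
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size] - c
    theta = np.arctan2(y, x)
    star = 0.5 + 0.5 * np.cos(cycles * theta)
    # Mid-gray outside the star disc, as a lighting/exposure reference.
    r = np.hypot(x, y)
    star[r > c] = 0.5
    return (star * 255).astype(np.uint8)
```

Because the sinusoid's frequency grows as the radius shrinks, the radius at which the spokes blur together in a capture of the printed target reads off the system's limiting resolution directly.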

Bart
 
Bart said:
But everybody can!

For those who want to do their own testing, I have created a target
file that is also better suited for digicam sensors, not only for
analog film, and you can make your own target from it at home with a
decent inkjet printer.
For HP/Canon inkjet printers (3.8MB):
http://www.xs4all.nl/~bvdwolf/main/downloads/Jtf60cy-100mm_600ppi.gif
For Epson inkjet printers (5.3MB):
http://www.xs4all.nl/~bvdwolf/main/downloads/Jtf60cy-100mm_720ppi.gif

That is awesome Bart! Thanks for sharing!

I already have software that generates these automatically, but nice to
see a ready made one.
Print it at the indicated ppi, without printer enhancements, on glossy
photo paper, which should produce a 100x100 mm target, and shoot it
with your (digi)cam from a (non-critical) distance of between 25-50x
the focal length. That will produce a discretely sampled sensor-array
capture that is limited by the combined optical components and capture
medium in the optical chain.

Have you tried just printing them to a B/W laser printer? Different
results than inkjet?
Lots of interesting conclusions can be drawn from the resulting
image. The target is cheap to produce, and when it gets worn out, you
just print a new one.

Bart

As always Bart, you are a wealth of information.

Ciao!

Gordon Moat
A G Studio
<http://www.allgstudio.com>
 
SNIP
I already have software that generates these automatically, but
nice to see a ready made one.

Yes, there are several programs that can produce such a pattern, but
this one also includes improved centre rendition to avoid
print-aliasing (at 600 or 720 ppi native resolution), and it includes
a mid-gray background which allows post-processing to a 'correct'
contrast (the mid-grey corners should render as RGB 128/128/128, which
is also a check for uniform lighting and correct exposure). Adding a
grayscale step-wedge would help even more, but there are better,
spectrally uniform, versions available than CMY inkjet inks allow.
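That corner check is easy to automate. A small sketch, assuming the capture is loaded as an 8-bit RGB numpy array; the function name and the 6-level tolerance are illustrative choices, not part of the target's specification:

```python
import numpy as np

def check_corners(img, patch=32, target=128, tol=6):
    """Sample a patch in each corner of an RGB capture and report the
    mean level; all four should sit near `target` (RGB 128) if lighting
    is uniform and exposure/contrast are correct."""
    corners = {
        'top-left': img[:patch, :patch],
        'top-right': img[:patch, -patch:],
        'bottom-left': img[-patch:, :patch],
        'bottom-right': img[-patch:, -patch:],
    }
    report = {k: float(v.mean()) for k, v in corners.items()}
    ok = all(abs(m - target) <= tol for m in report.values())
    return ok, report
```

A large spread between corner means points to uneven lighting; all four corners offset in the same direction points to an exposure or contrast error.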

SNIP
Have you tried just printing them to a B/W laser printer?
Different results than inkjet?

I use/suggest an inkjet printer because it allows more accurate
simulation of continuous-tone images, so I designed the pattern for
those. A laser printer could be used, but the native printing density
needs to exactly match either 600 ppi or 720 ppi for good results. The
only laser-printer versions I've seen made by others were printed with
a completely wrong gamma/contrast, so the grayscale values would
probably need some kind of pre-calibration.
As always Bart, you are a wealth of information.

Thanks. All I do is share some of the findings I've gathered/expanded
over the years.
I live by (at least) one of the principles I later heard explained by
one of the Apple 'fellows' (Guy Kawasaki, also described by some as a
'pyrotechnical mind') in one of his books, "How to Drive Your
Competition Crazy".

The principle (his wording) is: eat like a bird (many times your own
weight, i.e. absorb all kinds of knowledge), and poop like an elephant
(share huge amounts, even with competitors).

Bart
 