Bart van der Wolf said:
SNIP
No, I'm not assuming but referring to observed behavior.
Not according to your own evidence of observed behaviour! That clearly
shows that the better downsampling processes, even those in PS, do
include filtering to minimise aliasing - see below.
Yes, the later versions (CS/CS2) allow to do 'something' in order to
reduce artifacts, but they are rather ineffective (probably a
compromise for speed) and cause other artifacts (e.g. halo).
Well, I don't have PS CS or CS2, but my own tests using your test
pattern have produced the same results as you have published - see below
- and this quite clearly indicates filtering in the downsampling
algorithms in PS. I know this same filtering was present in PS5 (I
never used PS6) and I believe it has been included in PS since at least
PS4, but it is so long ago that I used that version that I cannot be
sure exactly when it was introduced.
I am glad that you posted that link again Bart, because that was in my
mind when I responded to your previous post yesterday, but I couldn't
find the appropriate page right away.
"Observe" the behaviour that *you* have produced for the downsampling of
an RGB image.
The nearest neighbour algorithm has no integral filtering and produces
the reference level of aliasing. Now, correct me if I am wrong, but the
aliasing present in the linear and bicubic downsampling is considerably
*less* than this reference.
Based on the amplitude of the central ring of the horizontally and
vertically displaced alias images, I estimate the maximum amplitude of
the aliased components in the bicubic example to be only 15% of the
original reference - that is a significant reduction of aliasing, not
complete elimination, but a significant reduction - on a target you
acknowledge to be "hypersensitive" to the problem. With bilinear
downsampling the maximum amplitude of the aliased signal, as shown on
your page, is even less!
This can only occur because a filter with a suitably limited MTF is
applied prior to the actual downsampling process. It is not a
consequence of the linear or cubic interpolation used, as can be readily
proven by applying such downsampling to one-dimensional curves in
Excel or Mathcad. Furthermore, less capable software such as PSP
either uses a different filter or none at all. The last time I checked
PSP (V7?), bicubic downsampling produced precisely the same level of
aliasing as the reference level.
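The one-dimensional experiment described above is easy to reproduce in code as well as in Excel or Mathcad. The sketch below (numpy; the box-average prefilter is my own stand-in - Photoshop's actual prefilter is not publicly documented) downsamples a tone lying above the output Nyquist rate, once with no prefilter and once with a prefilter, and compares the alias amplitudes:

```python
import numpy as np

# Downsample a sinusoid that lies ABOVE the output Nyquist rate,
# once by plain point resampling (no prefilter) and once with a
# simple box-average prefilter applied first. The box filter is an
# illustrative stand-in, not Photoshop's actual filter.

n = 4096            # input samples
factor = 4          # 4:1 downsample -> output Nyquist = 0.125 cy/input-sample
f = 0.30            # input tone frequency, well above output Nyquist
x = np.sin(2 * np.pi * f * np.arange(n))

def linear_resample(sig, step):
    """Resample by linear interpolation between neighbouring samples.
    With no prefilter this is just point sampling - no anti-aliasing."""
    pos = np.arange(0, len(sig) - 1, step, dtype=float)
    i = pos.astype(int)
    frac = pos - i
    return (1 - frac) * sig[i] + frac * sig[i + 1]

# No prefilter: the 0.30 cy/sample tone folds down below output Nyquist
aliased = linear_resample(x, factor)

# With prefilter: a box average over 'factor' samples attenuates the tone
# before it can fold down
prefiltered = np.convolve(x, np.ones(factor) / factor, mode='same')
filtered_out = linear_resample(prefiltered, factor)

print("alias amplitude, no prefilter:   %.3f" % np.abs(aliased).max())
print("alias amplitude, with prefilter: %.3f" % np.abs(filtered_out).max())
```

The interpolation itself contributes essentially nothing to alias suppression; only the prefilter does, which is the point being made above.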
Now, that 85% reduction also applies to any increase in high-frequency
content that would be caused by applying USM to the scan produced by a
HyperCCD. Since, even after USM is applied, the MTF of the HyperCCD
scan at the Nyquist limit of the sample density is *still* zero, the
effect is negligible.
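The argument here is multiplicative: USM is a gain applied to the system MTF, so whatever boost it applies, the product with a pre-sampling MTF of zero is still zero. A minimal numerical sketch, assuming an illustrative sensor MTF that falls to zero before the sampling Nyquist (as for an oversampled HyperCCD - this model is an assumption, not measured data):

```python
import numpy as np

# Model: sensor MTF reaches zero before the sampling Nyquist (0.5
# cycles/sample), as for an oversampled device. This is an assumed
# illustrative model, not a measurement of any real scanner.
f = np.linspace(0.0, 0.5, 501)
sensor = np.clip(1.0 - f / 0.4, 0.0, 1.0)   # zero at and beyond 0.4

# Frequency response of unsharp masking with a Gaussian mask:
# H(f) = 1 + amount * (1 - G(f))
amount, sigma = 1.5, 1.0
gauss = np.exp(-2 * (np.pi * sigma * f) ** 2)
usm = 1.0 + amount * (1.0 - gauss)

# USM multiplies the MTF; it cannot lift a zero above zero
system = sensor * usm
print("USM gain at Nyquist:   %.3f" % usm[-1])
print("system MTF at Nyquist: %.4f" % system[-1])
```

However large the USM gain at Nyquist, the system response there stays exactly zero, so there is nothing to fold back on downsampling.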
Based on the above experiment, I'd have to disagree.
Stop and think about the logical consequence, and its relevance to this
thread, of what you are saying.
You are suggesting that the USM, which only goes some way towards
recovering the MTF of the HyperCCD scanning process to that of an
undersampled inline CCD, causes excessive aliasing (and you particularly
specify grain aliasing) on downsampling. The logical consequence of that
is that the output of a conventional linear CCD scanner, e.g. a
typical film scanner, would alias considerably *worse* when downsampled
and should be heavily filtered to avoid it!
Sorry Bart, but no matter how you dress it up, the alias content of a
downsampled, USM-filtered HyperCCD scan will *always* be less than that
of the same process applied to an inline CCD which, because of its
intrinsically undersampled nature, inevitably has a significant grain
alias content before you even start!
Things may be a bit less critical with real-life images, but anything
besides the 'regular' bi-cubic downsampling will cause *avoidable*
artifacts.
Even though your own page demonstrates that regular *bilinear*
downsampling produces *LESS* aliased content than that? Compare the
amplitude of the diagonal alias patterns on your own examples of these
two downsample algorithms built into Photoshop, the bilinear at:
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample_files/Rings1_BLrgb.gif
with the bicubic at:
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample_files/Rings1_BCrgb.gif
Yes, but is it the correct type of USM?
Define "the correct type of USM". You know full well that there is no
single "correct type" since, amongst many other reasons, the USM cannot
lift the MTF at the Nyquist limit (or even close to it) to the level of
an inline device. The best that can be achieved is a level of USM which
does not cause the system MTF to exceed unity at any spatial frequency
(i.e. produce ringing edges) or have an integrated noise gain greater
than 2. There is an infinite number of such solutions.
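The two criteria just stated can be checked mechanically for any candidate USM. A sketch of such a check, assuming a Gaussian-mask USM and a Gaussian scanner MTF as illustrative models (the parameter values here are arbitrary, not tuned for any particular scanner):

```python
import numpy as np

RADIUS = 8  # half-width of the discrete USM kernel

def usm_kernel(amount, sigma):
    """Discrete 1-D unsharp-mask kernel:
    identity + amount * (identity - gaussian blur)."""
    x = np.arange(-RADIUS, RADIUS + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    delta = (x == 0).astype(float)
    return delta + amount * (delta - g)

def evaluate_usm(amount, sigma, scanner_sigma=1.2):
    """Return (RMS noise gain, peak of combined system MTF).
    The Gaussian scanner MTF is an assumed model, not a measurement."""
    k = usm_kernel(amount, sigma)
    noise_gain = np.sqrt(np.sum(k**2))   # RMS gain for white noise
    f = np.linspace(0, 0.5, 256)
    # |DTFT| of the kernel, evaluated on the frequency grid
    usm_mtf = np.abs(np.exp(-2j * np.pi * np.outer(f, np.arange(-RADIUS, RADIUS + 1))) @ k)
    scanner_mtf = np.exp(-2 * (np.pi * scanner_sigma * f) ** 2)
    return noise_gain, (scanner_mtf * usm_mtf).max()

ng, peak = evaluate_usm(amount=1.0, sigma=1.0)
print("noise gain %.2f (limit 2), peak system MTF %.2f (limit 1)" % (ng, peak))
```

Any (amount, sigma) pair that keeps the noise gain below 2 and the peak system MTF at or below unity passes, which is why there is a whole family of acceptable solutions rather than one "correct" USM.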
Besides the fact that sharpening should (to avoid color artifacts) be
applied to the Luminance of an image and not to the R/G/B channels,
That would be an adequate rule to apply to an undersampled system.
However we are applying the USM in this discussion to a system which is
oversampled by design - and even more oversampled by manufacture, given
the lens MTF that is fitted to the 2450. It makes absolutely no
difference whatsoever whether the USM is applied to the individual RGB
channels or simply to the luminance in that case.
I've not seen conclusive evidence that the method used in the Epson SW
is optimal. An edge scan with and without USM will reveal some of the
answers.
We have discussed this at length in the past, when I mentioned that
I used the edge scan method to assess the performance of various
scanners I was buying. An edge scan shows no ringing even after the USM
is applied on the Epson scanner - the resulting MTF or SFR never exceeds
the DC level except for noise at very low frequencies. You will find
ample evidence of edge and line pattern responses (as well as MTF
measurements) for this and other Epson scanners at James' scanner
bakeoff pages.
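For anyone unfamiliar with the edge scan method referred to above, the basic recipe is: scan a knife edge, differentiate the edge-spread function (ESF) to obtain the line-spread function (LSF), and take its Fourier magnitude to obtain the SFR/MTF. A minimal sketch, using a synthetic Gaussian-blurred edge as a stand-in for real scan data:

```python
import numpy as np
from math import erf, sqrt

# Synthetic ESF: an ideal step blurred by a Gaussian PSF. This is a
# stand-in for a real edge scan; with scanner data, esf would be the
# averaged profile across the edge.
n = 256
sigma = 2.0
x = np.arange(n) - n / 2
esf = np.array([0.5 * (1 + erf(v / (sigma * sqrt(2)))) for v in x])

lsf = np.diff(esf)                  # derivative of ESF = LSF
lsf /= lsf.sum()                    # normalise so MTF(0) = 1
sfr = np.abs(np.fft.rfft(lsf, n))   # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(n)          # cycles per sample

# A well-behaved (ring-free) response never exceeds its DC level
print("MTF at DC: %.3f, peak: %.3f" % (sfr[0], sfr.max()))
print("MTF at Nyquist (f=%.1f): %.4f" % (freqs[-1], sfr[-1]))
```

The ringing test described above amounts to checking that the SFR never rises above its DC value at any frequency (noise aside); over-aggressive USM would show up as a hump above 1 somewhere in the curve.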