SNIP
Hope you would answer two questions:
1. Why did you select the -Lanczos filter out of all those available?
I have found that Mitchell also performs well for downsizing. Your
last study (concentric rings) also recommended -Catrom and -Sinc.
Consistency. When I first started using ImageMagick, I didn't know the
support (size of the kernel) used for the "Sinc" filter (theoretically
optimal but potentially slow) and chose Lanczos, which is basically a
Sinc-windowed Sinc and suppresses ringing artifacts better. It is
also a default filter for certain images in IM, so I figured it would
be a good candidate. Since then I've stuck with it for consistency,
but Sinc does equally well and so does Catrom. Mitchell isn't bad, but
IMO not as good as the other three. Whether the differences show up in
a pictorial scene at all depends on the image content.
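For readers who want to compare these filters on their own material, a
quick ImageMagick sketch (the plasma test image and the 400-pixel
target width are arbitrary placeholders, not part of my workflow):

```shell
# Generate a throwaway test image (stand-in for a real photo).
convert -size 1600x1200 plasma:fractal input.png

# Down-size with each of the filters discussed above and compare.
for f in Lanczos Sinc Catrom Mitchell; do
  convert input.png -filter "$f" -resize 400x "out_${f}.png"
done
```

Viewing the four outputs side by side at 100% zoom shows whether the
differences matter for your kind of image content.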
2. What were the ImageMagick USM parameters you selected? I have been
using -unsharp 1x3+1+.09 but kind of blindly.
I didn't use IM's USM but, since I wanted to save to maximum quality
JPEG in Photoshop after converting to sRGB space, I chose to sharpen
in Photoshop. There I used a slight variation of my currently
preferred Photoshop method, as indicated in:
http://www.xs4all.nl/~bvdwolf/main/downloads/Non-clipped-sharpening.png .
This usually starts with a duplicate layer in Luminosity blending
mode, with USM set to Amount 400-500 (!), Radius 0.3, and Threshold
0-4, if screen output is needed. The opacity of the layer is then
often reduced a bit.
If you have to do a lot of conversions, and use an ImageMagick script
for that, you'll have to figure out a reasonable average, but I
usually arrive at very small sharpening radii for screen display.
Since the image was down-sampled, it has an extremely high resolution,
and all that needs to be done is a USM contrast adjustment with a
radius of less than one pixel, to compensate for the sub-pixel
resampling losses of contrast. For print, the additional dithering
losses need to be pre-compensated, so you'll need something on the
order of a 1-1.5 pixel radius sharpening at final print size (assuming
600-720 ppi output).
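If you do want to approximate this with ImageMagick's own `-unsharp`
(which takes radiusXsigma+gain+threshold) rather than Photoshop, the
values below are illustrative guesses in the spirit of the text (a
sub-pixel sigma for screen, roughly 1-1.5 px for print), not my actual
settings:

```shell
# Throwaway test input (stand-in for the already down-sampled image).
convert -size 800x600 plasma:fractal small.png

# Screen display: sub-pixel sigma, fairly strong gain, low threshold.
convert small.png -unsharp 0x0.5+1.0+0.02 screen.png

# Print at 600-720 ppi: roughly 1-1.5 px sigma to pre-compensate
# the additional dithering losses.
convert small.png -unsharp 0x1.2+0.8+0.02 print.png
```

As the text says, if you script a lot of conversions you'll have to
settle on a reasonable average for these numbers yourself.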
3. Did you abandon the notion of doing Blur, downsample, and Sharpen
with Photoshop? Or should we assume pre-blur in all cases?
No, blur>down-sample>sharpen is *the* method to use in Photoshop, but
Gaussian blur is not the best type of blur. It tends to soften the
image more than necessary (due to long Gaussian tails) to avoid
aliasing. Its optimal radius is also hard to calculate, because it
differs with each scale and depends on the implementation (so it's
hard to translate between applications). The benefit is that Gaussian
blur is available in most photo editors, so the method is usable and
reproducible for users of many different applications.
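In ImageMagick the whole blur > down-sample > sharpen chain can be
sketched in a single command. The blur sigma and unsharp values below
are illustrative only; as noted above, the optimal blur differs with
each scale factor:

```shell
# Throwaway test input (stand-in for a real photo).
convert -size 1600x1200 plasma:fractal input.png

# Blur slightly to suppress aliasing, down-sample, then sharpen to
# restore the edge contrast lost in resampling.
convert input.png -gaussian-blur 0x0.6 \
        -filter Lanczos -resize 25% \
        -unsharp 0x0.5+1.0+0.02 output.png
```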
The images on the Example 1 page have not been *separately*
pre-blurred, although Photoshop seems to do some blurring by default,
Qimage now has it built in at my request, and IM does it by default
without user intervention. One reason is that I didn't want to include
too many variables; another is that the blur radius needs to be
adjusted each time, tailored to the exact reduction in scale, and the
shape of a Gaussian curve makes that not very intuitive. For that
reason I've made myself a checklist, listing the Gaussian Blur radius
and the resulting maximum diameter of the weighted average, although
it is probably only valid for Photoshop CS. With that checklist I know
which diameter will be no larger than 1 pixel after rescaling, and
therefore what radius I can use without causing a loss of sharpness.
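A rough back-of-the-envelope version of that checklist (my own
assumption, not the actual Photoshop CS table): treat the GB Radius as
the Gaussian sigma, take about 6x sigma as the diameter carrying
significant weight, and keep that diameter at or under 1 pixel after
rescaling by the chosen scale factor:

```shell
# Largest "safe" GB radius for a given down-sampling factor, under the
# hypothetical 6-sigma rule described above.
SCALE=0.25   # e.g. down-sampling to 25% of the original size
awk -v s="$SCALE" 'BEGIN { printf "max GB radius ~ %.2f px\n", 1 / (6 * s) }'
```

For a reduction to 25%, that rule suggests a radius somewhere below
0.7 px; the real checklist values will differ per implementation.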
The main purpose of that example page is to demonstrate how the
applications perform out of the box. I know how to improve on those
results (by pre-filtering), but ImageMagick doesn't need any
improvement, so it's up to the other software makers to clean up their
act.
Bart