Which "BICUBIC" Method to use?

Thread starter: Howard

Howard

Kennedy McEwen recently convinced me that just prior to sending a
Photoshop image to my printer, I should resample to the printer's
native resolution -- or at least a resolution which is an integer
divisor of such native resolution.

As may be found in the recent thread on this user-group site, the
reason for such resampling is straightforward: One way or the other,
the image will be resampled to the printer's native resolution.
Because Photoshop's "bicubic" resampling is generally superior to the
methods used by printer drivers, better to do it there.
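Kennedy's integer-divisor rule can be sketched in a few lines of Python. This is only an illustration, not anything posted in the thread: the 720 ppi figure is the Epson desktop value discussed here (substitute your printer's own native resolution), and the function name is made up.

```python
# Sketch: pick a print resolution that divides the driver's assumed
# native 720 ppi evenly, given the image's pixel width and the print
# width in inches. Hypothetical helper, not Photoshop or driver code.

NATIVE_PPI = 720

def best_divisor_ppi(pixel_width, print_inches):
    """Largest ppi that is an integer divisor of NATIVE_PPI and does
    not require upsampling the image; falls back to the smallest
    divisor if even that would mean upsampling."""
    current_ppi = pixel_width / print_inches
    candidates = [NATIVE_PPI // n for n in range(1, 11)
                  if NATIVE_PPI % n == 0]      # 720, 360, 240, 180, 144, ...
    under = [p for p in candidates if p <= current_ppi]
    return max(under) if under else min(candidates)

# e.g. a 3000-pixel-wide image printed 8 inches wide is 375 ppi,
# so resample down to 360 ppi (= 720/2) before printing.
print(best_divisor_ppi(3000, 8))   # -> 360
```

You would then resample to that figure in Photoshop (bicubic) and let the driver do only the final integer-factor step.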

The new question:

Beginning with Photoshop CS, there are now THREE bicubic resampling
methods. For the purposes described herein, which method is superior?
Or, perhaps a more appropriate question: Under what circumstances
would each of the three bicubic resampling methods provide superior
results?

Thank you.

Howard
 
Howard said:
Kennedy McEwen recently convinced me that just prior to sending a
Photoshop image to my printer, I should resample to the printer's
native resolution -- or at least a resolution which is an integer
divisor of such native resolution.


As may be found in the recent thread on this user-group site, the
reason for such resampling is straightforward: One way or the other,
the image will be resampled to the printer's native resolution.
Because Photoshop's "bicubic" resampling is generally superior to the
methods used by printer drivers, better to do it there.

I'm sure a good argument can be made for doing what you suggest, but I
don't think it is the one you give here. The printer has a resolution
in dots per inch, and the digital image has a resolution in pixels per
inch. One pixel in the image ordinarily is represented by many dots in
printing. Also, because of dithering, it is not a fixed number of dots.
The printer driver has to use an elaborate algorithm for doing the
conversion. You can't avoid that by what you do in your photoeditor.
 
I think Bruce Fraser covered this in Real World Photoshop.

If you're going down in size, use Bicubic Sharper.

Going up, Bicubic Smoother.

As for the other options, I must plead forgetfulness. Sorry.
 
Leonard Evens said:
I'm sure a good argument can be made for doing what you suggest, but I
don't think it is the one you give here. The printer has a resolution
in dots per inch, and the digital image has a resolution in pixels per
inch. One pixel in the image ordinarily is represented by many dots in
printing. Also, because of dithering, it is not a fixed number of
dots. The printer driver has to use an elaborate algorithm for doing
the conversion. You can't avoid that by what you do in your photoeditor.

You are getting confused Leonard. Howard is not referring to the ink
dots and their placement, but to the actual pixel resampling that is
inherent in the printer driver *before* dithering of dots occurs. For
the Epson desktop range *all* images are resampled to 720ppi (yes,
pixels not dots, per inch) unless they are already sent to it in that
resolution. For the Epson large format range, the resampling density is
360ppi. The half-toning algorithm then sets to work and, as you say,
places many dots for each pixel and dynamically distributes colour
inaccuracies caused by the limited inkset over several pixels to achieve
the very fine tonal fidelity that photo quality reproduction requires.

There are several methods by which the resampling density of the first
stage of the printing process can be verified. One method is to create
an image with high contrast (black/white, cyan/white, magenta/white &
yellow/white) horizontal and vertical bar patterns and print that at
different pixel resolutions, including 720ppi and approximations thereof
(by scaling, not resampling in the application) and carefully viewing
the aliased output under a loupe. Others have discovered the same
figures using alternative methods, but I came up with the one above a
long long long long time ago. ;-)
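The bar-pattern target Kennedy describes is easy to generate programmatically. The sketch below (my own illustration, not his original method; filename and dimensions are arbitrary) writes alternating one-pixel black/white bars as an ASCII PPM, which needs no imaging library; print it at exactly 720 ppi by scaling only, and inspect the output under a loupe for aliasing.

```python
# Hypothetical generator for a high-contrast resolution test target:
# one-pixel-wide vertical black/white bars, saved as ASCII PPM (P3).

def bar_pattern(width, height):
    """Rows of one-pixel vertical black/white bars as RGB triples."""
    rows = []
    for _ in range(height):
        row = []
        for x in range(width):
            v = 255 if x % 2 == 0 else 0   # alternate white/black columns
            row.append((v, v, v))
        rows.append(row)
    return rows

def write_ppm(path, rows):
    """Write the pixel rows as a plain-text PPM file."""
    with open(path, "w") as f:
        f.write("P3\n%d %d\n255\n" % (len(rows[0]), len(rows)))
        for row in rows:
            f.write(" ".join("%d %d %d" % px for px in row) + "\n")

# 720 x 72 pixels = a 1 x 0.1 inch strip when printed at 720 ppi.
write_ppm("bars_720ppi.ppm", bar_pattern(720, 72))
```

The same approach extends to the cyan/white, magenta/white and yellow/white bars he mentions by fixing two of the RGB channels at 255.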

You can do this for any printer and determine its native resampling
resolution. It isn't unique to Epson, but I suspect these specific
resolutions are, since too many people bandy 300ppi around (which is
positively one of the worst image resolutions for Epsons) for it to be
coincidence. But I haven't tested other printers in detail, so I can't
advise on what they are.

Unfortunately Howard, I don't have Photoshop CS, and am unlikely to
upgrade, as I understand it is incompatible with my OS and it also has
that nasty validation - I change machine configuration far too often to
be encumbered by that, which is why I haven't installed XP either. As a
consequence I haven't had a chance to experiment with the different
bicubic functions to see exactly what they are doing. I suspect, from
reading others' comments, however, that they differ in how much
filtering is applied to the data prior to the resampling itself. For
this purpose I would use the version with minimum filtering - which, at
a guess, is the one in the middle. ;-)

Try them all though and pick the one that gives best results after
viewing detail under magnification - if there isn't any difference then
it obviously doesn't matter.
 
Usenet User said:
I think Bruce Fraser covered this in Real World Photoshop.

If you're going down in size, use Bicubic Sharper.

If that's what Bruce Fraser wrote, I'll have to disagree with him:
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample.htm
will show you what happens with a theoretically perfect image
(noise-free, containing only spatial frequencies up to 'Nyquist').
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/example1.htm will
show you what that means for a film scan.

Not all images have such easy to detect artifacts, but if they do,
they'll be in your face before you know it. It's better to prevent
that by pre-filtering and using Photoshop CS's normal Bi-cubic, or use
a program with a better implemented resampling algorithm.
Going up, Bicubic Smoother.

That will reduce blockiness, but it will also lose a lot of contrast.
An alternative is multiple 110% increments in size. That will increase
the edge contour contrast and avoid blockiness at the same time.
Again, there are better algorithms than Bi-cubic if you have to
up-sample.
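Bart's multiple-110%-increments suggestion compounds: each step multiplies the size by 1.10, so n steps give a factor of 1.10^n. A small sketch (my own arithmetic illustration, with a made-up function name) shows how many steps a given enlargement takes:

```python
import math

# How many repeated 10% enlargements reach a given overall scale
# factor? Each step multiplies linear size by `step` (1.10 here).

def steps_for_scale(target_scale, step=1.10):
    """Smallest n such that step**n >= target_scale."""
    return math.ceil(math.log(target_scale) / math.log(step))

print(steps_for_scale(2.0))   # doubling the size takes 8 steps of 110%
```

In practice you would run the resize that many times at 110% each, then do one final resize to hit the exact target dimensions.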

A program like Qimage does a reasonable down-sample and a very natural
up-sample if needed to feed a printer driver at its native
resolution. Depending on the paper/ink used that is often 720 ppi for
Epsons and 600 ppi for Canon and HP printers, although there may be
differences amongst desktop and wide format inkjet printers.
RGB laser or LED or CRT or Dye-sub printers have their own native ppi
resolutions, often in the 250-400 ppi range.

Bart
 
Kennedy said:
You are getting confused Leonard. Howard is not referring to the ink
dots and their placement, but to the actual pixel resampling that is
inherent in the printer driver *before* dithering of dots occurs. For
the Epson desktop range *all* images are resampled to 720ppi (yes,
pixels not dots, per inch) unless they are already sent to it in that
resolution. For the Epson large format range, the resampling density is
360ppi. The half-toning algorithm then sets to work and, as you say,
places many dots for each pixel and dynamically distributes colour
inaccuracies caused by the limited inkset over several pixels to achieve
the very fine tonal fidelity that photo quality reproduction requires.

There are several methods by which the resampling density of the first
stage of the printing process can be verified. One method is to create
an image with high contrast (black/white, cyan/white, magenta/white &
yellow/white) horizontal and vertical bar patterns and print that at
different pixel resolutions, including 720ppi and approximations thereof
(by scaling, not resampling in the application) and carefully viewing
the aliased output under a loupe. Others have discovered the same
figures using alternative methods, but I came up with the one above a
long long long long time ago. ;-)

You can do this for any printer and determine its native resampling
resolution. It isn't unique to Epson, but I suspect these specific
resolutions are, since too many people bandy 300ppi around (which is
positively one of the worst image resolutions for Epsons) for it to be
coincidence. But I haven't tested other printers in detail, so I can't
advise on what they are.


Thanks for the information. I must admit that I misunderstood the
issue. But can you save me some time by telling me which it is for my
Epson 1280 and for a friend's Epson 2200? Are these "desktop" or "large
format"?
 
Leonard Evens said:
Thanks for the information. I must admit that I misunderstood the
issue. But can you save me some time by telling me which it is for my
Epson 1280 and for a friend's Epson 2200? Are these "desktop" or
"large format"?
They are both 720ppi drivers. I actually tested this on the 1280's
near-identical predecessor, the 1270, and others have confirmed the 2200
is the same.
 
Kennedy said:
They are both 720ppi drivers. I actually tested this on the 1280's
near-identical predecessor, the 1270, and others have confirmed the 2200
is the same.

Thanks. For much of what I do, using 4 x 5 scans, I can scale down to
720. But for larger prints, I would either have to scale up or use 360.
Which do you think would be better?
 
You obviously know much more about this than me. I was just passing
on what I read.

You've prompted me to check: it is in there, FWIW, on pages 110 and
111 of Real World Photoshop CS.

Blatner has a web site -- moo.com, and can be contacted there. Fraser
is at creativepro.com and pixelgenius.com . They might be glad to hear
from you on this.
 
You can do this for any printer and determine its native resampling
resolution. It isn't unique to Epson, but I suspect these specific
resolutions are, since too many people bandy 300ppi around (which is
positively one of the worst image resolutions for Epsons) for it to be
coincidence. But I haven't tested other printers in detail, so I can't
advise on what they are.
Quick question:

Using Epson printers we've usually (depending on image size) set image
resolution to either 360ppi or 240ppi. Now, I can understand 360ppi
from what you've said, but are we creating problems by using 240ppi?
 
SNIP
Using Epson printers we've usually (depending on image size) set image
resolution to either 360ppi or 240ppi. Now, I can understand 360ppi
from what you've said, but are we creating problems by using 240ppi?

Since the printer driver probably (may depend on paper choice) starts
the dithering process from a native 720 ppi, it'll have to rescale
before it does that. You can most likely do a better job at rescaling
(and post-sharpening) than the printer driver. In fact, I would not be
surprised if the driver internally just uses (bi-)linear scaling.
Having to dither from a known resolution (720 ppi) allows much simpler
and better-optimized code, and thus a faster dither process, than
having to deal with arbitrary resolutions to fit an output size.

Bart
 
Leonard Evens said:
Thanks. For much of what I do, using 4 x 5 scans, I can scale down to
720. But for larger prints, I would either have to scale up or use
360. Which do you think would be better?

As I explained to Howard when we discussed this, the end resolution is
less important than avoiding aliasing artefacts on the way there.
Both 360 and 720 are well above what the naked eye can resolve on a
print, so the only push for 720 is if you expect the print to be viewed
closely. I often print sheets of 'contact' sized prints to interleave
with my negatives in files. These are one example where 720ppi is worth
the effort because I view the images under magnification.
 
Hecate said:
Using Epson printers we've usually (depending on image size) set image
resolution to either 360ppi or 240ppi. Now, I can understand 360ppi
from what you've said, but are we creating problems by using 240ppi?
Not really.

360ppi is 720/2.

240ppi is just 720/3 - the next lowest optimum resolution that you
should target. After that you are looking at 180ppi (720/4) and 144ppi
(720/5), but they start to get visible on close inspection.

The issue is avoiding resampling artefacts by ensuring that the
resampling is performed by the best algorithm available to you. The
printer just uses nearest neighbour or linear interpolation (depending
on the options switched on, eg. DCC). These produce minimum distortion
if the start resolution is an integer division of the target resolution
which, for Epson desktop printers, is 720ppi.

So apart from reduced resolution on the page and smaller files, you
won't see any more problems with 240ppi than 360ppi.

If you need to use lower resolutions than 240ppi though, such as 180ppi
or 144ppi, switch the "DCC" or "Digital Camera Correction" option on in
the printer driver, which will force it to use linear interpolation.
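Kennedy's point about integer divisors can be seen in one dimension: if the driver upsamples by nearest neighbour, an exact integer factor just replicates each pixel and introduces no new values. This is my own toy illustration of that behaviour, not the driver's actual code:

```python
# 1-D nearest-neighbour upsampling by an integer factor: each source
# pixel is simply repeated, so no values are invented or distorted.

def nearest_neighbour(samples, factor):
    """Upsample a 1-D list of pixel values by an integer factor."""
    out = []
    for s in samples:
        out.extend([s] * factor)    # exact replication, no blending
    return out

row = [10, 20, 30]                  # e.g. a row of 240 ppi data
print(nearest_neighbour(row, 3))    # each value repeated three times
```

At 240 ppi (720/3) every source pixel maps onto exactly 3 device pixels. At a non-divisor resolution like 300 ppi the factor would be 2.4, so the driver has to assign some pixels 2 device pixels and others 3, which is exactly the uneven mixing that shows up as resampling artefacts.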
 
Hello Kennedy.

By way of opinion, I agree with you that Windows XP and Photoshop
"activation" can be a minor pain in the neck. More so, I really
dislike Windows XP, as the administrator, user, and file-protection
structures are a ROYAL pain in the neck, and certainly unnecessary for
experienced users. As is typical of such attempts to dummy-proof the
world, it makes matters far more difficult for those willing to learn
the system and use it efficiently. But as you know, one must upgrade
the operating system to use Photoshop CS --- and here's the issue to
consider.

As a photo restorer, I find Photoshop's 16-bit adjustment layers and
the ability to use the Healing Brush on new/blank layers alone to be
worth the upgrade price, the activation issues, and the pain of using
Windows XP. The "Shadow/Highlight" adjustment tool and lens-blur
filter are also worthy additions to the program, but as an experienced
user I've had my own techniques to deal with these effects. Those
effects are, however, much easier to accomplish with the new tools.

So, while all your stated issues are valid, I would nonetheless
encourage anyone doing photo restoration to upgrade to Photoshop CS.

Of course, I'm still a stone-age Windows user. I suppose the best and
the brightest long ago moved to an Apple machine to run Photoshop!

Howard
 
Not really.

360ppi is 720/2.

240ppi is just 720/3 - the next lowest optimum resolution that you
should target. After that you are looking at 180ppi (720/4) and 144ppi
(720/5), but they start to get visible on close inspection.

The issue is avoiding resampling artefacts by ensuring that the
resampling is performed by the best algorithm available to you. The
printer just uses nearest neighbour or linear interpolation (depending
on the options switched on, eg. DCC). These produce minimum distortion
if the start resolution is an integer division of the target resolution
which, for Epson desktop printers, is 720ppi.

So apart from reduced resolution on the page and smaller files, you
won't see any more problems with 240ppi than 360ppi.

If you need to use lower resolutions than 240ppi though, such as 180ppi
or 144ppi, switch the "DCC" or "Digital Camera Correction" option on in
the printer driver, which will force it to use linear interpolation.


Thanks, that's helpful. I thought we were OK, but I just wanted to
check.

Thanks too to Bart. :)
 
Bart van der Wolf said:

Very very interesting study! Thanks for putting it on the web, Bart.
Given your sample image, which seems fair, I agree with your conclusion
that ImageMagick -Lanczos -unsharp is better than Photoshop.

Hope you would answer three questions:

1. Why did you select the -Lanczos filter out of all those available?
I have found that Mitchell also performs well for downsizing. Your
last study (concentric rings) also recommended -Catrom and -Sinc.

2. What were the ImageMagick USM parameters you selected? I have been
using -unsharp 1x3+1+.09 but kind of blindly.

3. Did you abandon the notion of doing Blur, downsample, and Sharpen
with Photoshop? Or should we assume pre-blur in all cases?
 
SNIP
Hope you would answer three questions:

1. Why did you select the -Lanczos filter out of all those available?
I have found that Mitchell also performs well for downsizing. Your
last study (concentric rings) also recommended -Catrom and -Sinc.

Consistency. When I first started using ImageMagick, I didn't know the
support (size of the kernel) used for the "Sinc" filter (theoretically
optimal but potentially slow) and chose Lanczos, which is basically a
sinc-windowed Sinc that suppresses ringing artifacts better. It
is also a default filter for certain images in IM, so I figured it to
be a good candidate. Since then I've stuck with that for consistency,
but Sinc does equally well and so does Catrom. Mitchell isn't bad, but
IMO not as good as the other three. Whether the differences show up in
a pictorial scene at all, depends on image content.
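The filters Bart compares here are reconstruction kernels: each output pixel is a weighted average of nearby input pixels, with weights taken from one of these functions. As a sketch (mine, not Bart's or ImageMagick's code; the standard formulas are well known), here are Lanczos and Catrom in Python:

```python
import math

def sinc(x):
    """Normalized sinc, the theoretically ideal kernel."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def lanczos(x, a=3):
    """Sinc windowed by a wider sinc ('a' lobes); zero outside |x| < a.
    The windowing is what tames the ringing of plain Sinc."""
    return sinc(x) * sinc(x / a) if abs(x) < a else 0.0

def catmull_rom(x):
    """Catrom: a piecewise cubic with support |x| < 2 (Keys, a=-0.5)."""
    x = abs(x)
    if x < 1:
        return 1.5 * x**3 - 2.5 * x**2 + 1
    if x < 2:
        return -0.5 * x**3 + 2.5 * x**2 - 4 * x + 2
    return 0.0

print(lanczos(0.0), catmull_rom(0.0))   # both 1.0 at the centre
print(lanczos(1.0), catmull_rom(1.0))   # effectively 0 at integer offsets
```

Both kernels are 1 at the centre and (to within floating-point error) 0 at integer offsets, so an unscaled image passes through unchanged; the differences Bart describes come from how each behaves between those points.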
2. What were the ImageMagick USM parameters you selected? I have been
using -unsharp 1x3+1+.09 but kind of blindly.

I didn't use IM's USM but, since I wanted to save to maximum quality
JPEG in Photoshop after converting to sRGB space, I chose to sharpen
in Photoshop. There I used a slight variation of my currently
preferred Photoshop method, as indicated in:
http://www.xs4all.nl/~bvdwolf/main/downloads/Non-clipped-sharpening.png .
This usually starts with a duplicate layer in Luminosity blending
mode, with USM set to Amount 400-500 (!), Radius 0.3, Threshold 0-4, if
screen output is needed. The layer's opacity is then often reduced a bit.

If you have to do a lot of conversions, and use an ImageMagick script
for that, you'll have to figure out a reasonable average, but I
usually arrive at very small sharpening radii for screen display.
Since the image was down-sampled, it has an extremely high resolution,
and all that needs to be done is less than one pixel radius USM
contrast adjustment to compensate for the sub-pixel resampling losses
of contrast. For print, the additional dithering losses need to be
pre-compensated, so you'll need something in the order of 1-1.5 pixel
radius sharpening at final print size (assuming 600-720 ppi output).
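The Amount/Radius/Threshold settings Bart quotes map onto a simple per-pixel rule: add Amount (as a fraction) of the difference between the pixel and its blurred neighbourhood, but only when that difference exceeds Threshold. A single-pixel sketch of that rule (my own illustration; the blur whose size the Radius controls is assumed to have been computed separately, and the function name is made up):

```python
def unsharp_pixel(original, blurred, amount=4.5, threshold=2):
    """Unsharp-mask one 8-bit pixel value.
    amount=4.5 corresponds to Photoshop's Amount of 450%."""
    diff = original - blurred
    if abs(diff) <= threshold:
        return original                       # flat area: leave alone
    # push the pixel away from its blurred value, clipped to 0..255
    return max(0, min(255, round(original + amount * diff)))

print(unsharp_pixel(120, 110))   # edge pixel, diff=10: boosted to 165
print(unsharp_pixel(120, 119))   # diff=1, under threshold: unchanged
```

This also shows why such a tiny radius works with so large an amount: after down-sampling, the residual softness is sub-pixel, so only a very narrow blur difference needs a strong push.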
3. Did you abandon the notion of doing Blur, downsample, and Sharpen
with Photoshop? Or should we assume pre-blur in all cases?

No, blur>down-sample>sharpen is *the* method to use in Photoshop, but
Gaussian blur is not the best type of blur. It tends to soften the
image more than necessary (due to long Gaussian tails) to avoid
aliasing. Its optimal radius is also hard to calculate, because it
differs with each scale and depends on implementation (hard to
translate between applications). The benefit is that it's available in
most Photo-editors, so the method is usable and reproducible for many
users of different applications.
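A one-dimensional toy (my own, not Photoshop's or Qimage's actual processing) shows why the pre-blur step matters: decimating a signal at the finest possible detail without filtering aliases badly, while averaging first preserves the correct tone.

```python
# Toy model of blur > down-sample: halve an alternating black/white
# signal with and without a pre-filter.

signal = [0, 255] * 8          # finest possible detail (Nyquist limit)

naive = signal[::2]            # drop every other sample, no filtering
# all the detail aliases into solid black - completely wrong tone

blurred = [(signal[i] + signal[i + 1]) // 2
           for i in range(0, len(signal), 2)]
# detail beyond the new Nyquist limit is replaced by its mid-grey
# average, the correct tone, ready for sharpening afterwards

print(naive[:4], blurred[:4])
```

The sharpening step then restores edge contrast that the averaging softened, which is why the order blur > down-sample > sharpen matters.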

The images in the Example 1. page have not been *separately*
pre-blurred, although Photoshop seems to do some by default, Qimage
has it built-in now by my request, and IM does so by default without
user intervention. A reason for that is that I didn't want to include
too many variables, and the blur radius needs to be adjusted each
time, tailored for the exact reduction in scale, and the shape of a
Gaussian curve makes that not very intuitive. For that reason I've
made myself a checklist, depicting the GB radius and resulting maximum
diameter of weighted average, but it probably is only valid for
Photoshop CS. With that checklist I know which diameter will be no
larger than 1 pixel after rescaling, and what radius I can use without
causing loss of sharpness.

The main purpose of that example page is a demonstration of how the
out-of the-box applications perform. I know how to improve on those
results (by pre-filtering), but ImageMagick doesn't need any
improvement, so it's up to the other SW makers to clean up their act.

Bart
 