Displaying high-res images in .NET?

Usenet User

.NET 1.1/2.0

I have a need to display high-resolution scrollable images in a .NET
application (Windows Forms). One well-known solution is to create a
Panel with AutoScroll set to "true" and then add a PictureBox (or
another Panel) to it to display the image.
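
For reference, the basic setup looks something like this (a minimal
sketch; the control variables and file name are illustrative):

Panel panel = new Panel();
panel.Dock = DockStyle.Fill;
panel.AutoScroll = true;

PictureBox pictureBox = new PictureBox();
pictureBox.SizeMode = PictureBoxSizeMode.AutoSize; // grow to the image size
pictureBox.Image = Image.FromFile( "big.tif" );    // illustrative path

panel.Controls.Add( pictureBox );
this.Controls.Add( panel ); // inside a Form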

The above approach works. However, to my surprise, .NET's GDI+-based
graphics are not really hi-res friendly.

Consider the following code examples:

1)
Bitmap b = new Bitmap( 6000, 6000 );
panel2.BackgroundImage = b; // <-- Works OK

2)
Bitmap b = new Bitmap( 14000, 10000 );
panel2.BackgroundImage = b; // <-- "Out of memory" error


The full stack trace of the error is:

System.OutOfMemoryException: Out of memory.
   at System.Drawing.TextureBrush..ctor(Image image, WrapMode wrapMode)
   at System.Windows.Forms.Control.PaintBackground(PaintEventArgs e, Rectangle rectangle)
   at System.Windows.Forms.Control.OnPaintBackground(PaintEventArgs pevent)
   at System.Windows.Forms.Control.PaintWithErrorHandling(PaintEventArgs e, Int16 layer, Boolean disposeEventArgs)
   at System.Windows.Forms.Control.WmEraseBkgnd(Message& m)
   at System.Windows.Forms.Control.WndProc(Message& m)
   at System.Windows.Forms.ScrollableControl.WndProc(Message& m)
   at System.Windows.Forms.ControlNativeWindow.OnMessage(Message& m)
   at System.Windows.Forms.ControlNativeWindow.WndProc(Message& m)
   at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)


Internally, the TextureBrush constructor calls the GdipCreateTexture()
function in gdiplus.dll, and, apparently, there is some limit on the
image size that function can handle.

Obviously, I am hitting some limit of GDI+ here, but I still need to
find a solution. I want to stay with .NET if possible and avoid using
any third-party ActiveX controls.

What alternatives can I pursue?
Will going with .NET 3.x and WPF help?

TIA!
 
There are no real solutions to this problem. A 32-bit process has a
2-gigabyte address limit, but in reality the usable limits of memory in a
.NET program are somewhat less.

GDI+ is not well suited to dealing with large images, and certainly a
14000 by 10000 full-colour image tips the scales at a little over 534
megabytes.
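
For reference, the arithmetic behind that figure: 14000 x 10000 pixels
x 4 bytes per pixel = 560,000,000 bytes, or roughly 534 megabytes.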

An image loaded into memory may have far more memory allocated to it than
the image file size alone suggests. This is because the compressed file
might be held in memory while the raster image is also held in its fully
expanded state. If the picture box is double-buffered, there may even be
two full copies of the image in memory.

--
Bob Powell [MVP]
Visual C#, System.Drawing

Ramuseco Limited .NET consulting
http://www.ramuseco.com

Find great Windows Forms articles in Windows Forms Tips and Tricks
http://www.bobpowell.net/tipstricks.htm

Answer those GDI+ questions with the GDI+ FAQ
http://www.bobpowell.net/faqmain.htm

All new articles provide code in C# and VB.NET.
Subscribe to the RSS feeds provided and never miss a new article.
 
Bob said:
There are no real solutions to this problem. A 32 bit process has a 2
gigabyte address limit but in reality the usable limits of memory in
a .NET program are somewhat less.

At least, potential solutions such as MapViewOfFile aren't very accessible
from C#.
 
Couldn't he just copy portions of the bitmap into a PictureBox through
an unsafe method with pointers? I doubt very much he has a
14000x14000 display, so only a small rectangle of the original image
is shown at any time, and copying 4 bytes at a time is really fast.
I've done it for a graphics app, including zoom levels (by factors of
two), and you can't tell the difference from native routines.
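
Something along these lines (an untested sketch, assuming System.Drawing
and System.Drawing.Imaging, compiled with /unsafe; the method name is
made up):

Bitmap CopyVisibleRegion( Bitmap source, Rectangle view )
{
    Bitmap dest = new Bitmap( view.Width, view.Height,
        PixelFormat.Format32bppArgb );

    // lock only the visible rectangle of the big bitmap
    BitmapData src = source.LockBits( view, ImageLockMode.ReadOnly,
        PixelFormat.Format32bppArgb );
    BitmapData dst = dest.LockBits(
        new Rectangle( 0, 0, view.Width, view.Height ),
        ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb );

    unsafe
    {
        for ( int y = 0; y < view.Height; y++ )
        {
            // one scan line at a time, 4 bytes per 32-bpp pixel
            uint* s = (uint*)( (byte*)src.Scan0 + y * src.Stride );
            uint* d = (uint*)( (byte*)dst.Scan0 + y * dst.Stride );
            for ( int x = 0; x < view.Width; x++ )
                d[x] = s[x];
        }
    }

    source.UnlockBits( src );
    dest.UnlockBits( dst );
    return dest; // assign this to the PictureBox
}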

(On a side note, if Bob hadn't written his GDI+ FAQ, my app wouldn't
even exist. My eternal thanks to you, Bob!)

Michel
 
Michel said:
Couldn't he just copy portions of the bitmap into a PictureBox through
an unsafe method with pointers? I doubt very much he has a
14000x14000 display, so only a small rectangle of the original image
is shown at any time, and copying 4 bytes at a time is really fast.

That doesn't help much. You can't keep the whole image in memory, whether
in a Bitmap, PictureBox, or whatever. The math works against you: a 32-bpp
14000x14000 image is roughly 750 MB of contiguous pixel data, in a 32-bit
address space where 2 GB is reserved by the kernel and some more is taken
up by code and other data.

You need MapViewOfFile or AWE or something to select certain parts of the
data into your virtual address space. None of these tools are readily
available to .NET.
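
You can get at MapViewOfFile with P/Invoke, though it's hardly
convenient. A rough, untested sketch (the wrapper name is made up;
error handling and CloseHandle are omitted):

using System;
using System.IO;
using System.Runtime.InteropServices;

const uint PAGE_READONLY = 0x02;
const uint FILE_MAP_READ = 0x04;

[DllImport( "kernel32.dll", SetLastError = true, CharSet = CharSet.Auto )]
static extern IntPtr CreateFileMapping( IntPtr hFile, IntPtr lpAttributes,
    uint flProtect, uint dwMaxSizeHigh, uint dwMaxSizeLow, string lpName );

[DllImport( "kernel32.dll", SetLastError = true )]
static extern IntPtr MapViewOfFile( IntPtr hMapping, uint dwDesiredAccess,
    uint dwOffsetHigh, uint dwOffsetLow, UIntPtr dwNumberOfBytesToMap );

[DllImport( "kernel32.dll", SetLastError = true )]
static extern bool UnmapViewOfFile( IntPtr lpBaseAddress );

// Map a window of the file into the address space. The offset must be a
// multiple of the system allocation granularity (usually 64 KB).
static IntPtr MapWindow( FileStream file, long offset, uint length )
{
    IntPtr mapping = CreateFileMapping(
        file.SafeFileHandle.DangerousGetHandle(),
        IntPtr.Zero, PAGE_READONLY, 0, 0, null );
    return MapViewOfFile( mapping, FILE_MAP_READ,
        (uint)( offset >> 32 ), (uint)offset, (UIntPtr)length );
}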
 
Usenet said:
I have a need to display high-resolution scrollable images in a .NET
application (Windows Forms).
2)
Bitmap b = new Bitmap( 14000, 10000 );
panel2.BackgroundImage = b; // <-- "Out of memory" error

To work on a 32-bit platform, you've basically got to give up trying to
load the whole bitmap into memory. You're going to have to create a set
of filters that can read slices of scanlines out of the image,
possibly with extra processing to integrate zoom info, etc. In essence,
write a viewer over a model, where the model is the data on disk. The
FileStream class works fine for multi-gigabyte files.
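
For instance, assuming an uncompressed 32-bpp raw file laid out row by
row (the layout and names are illustrative, and System.IO is assumed):

static byte[] ReadScanlineBand( string path, int imageWidth,
    int firstRow, int rowCount )
{
    int stride = imageWidth * 4; // 4 bytes per 32-bpp pixel
    byte[] band = new byte[stride * rowCount];
    using ( FileStream fs = new FileStream( path, FileMode.Open,
        FileAccess.Read ) )
    {
        // seek straight to the first visible row; no need to touch
        // the rest of the multi-gigabyte file
        fs.Seek( (long)firstRow * stride, SeekOrigin.Begin );
        int read = 0;
        while ( read < band.Length )
        {
            int n = fs.Read( band, read, band.Length - read );
            if ( n == 0 )
                throw new EndOfStreamException();
            read += n;
        }
    }
    return band; // the viewer copies this into a screen-sized Bitmap
}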

The image stuff in the .NET/Win32 box won't work at that scale, as we're
talking hundreds of megabytes per image when rasterized. The Windows
GDI/GDI+ stuff is designed for images for on-screen display and printing,
not so much for totally scalable image manipulation.

If it seems like too much work / beyond your capability set, then you'll
be better off buying. You will probably be better off buying in any
case, as I assume gigapixel imaging isn't one of your business's core
competencies.

-- Barry
 
Ben said:
You need MapViewOfFile or AWE or something to select certain parts of the
data into your virtual address space. None of these tools are readily
available to .NET.

Even if you could select portions of the data into memory, it wouldn't
do you much good, as you can only display on screen as many pixels as
your desktop resolution.

Depending on the image file format, for any given cropped area, each
subsequent scan line of pixels / pixel groups will likely be very
distant in the file, many strides away. IMO a view that builds up a
screen-viewable picture, whether zoomed out (and thus an aggregate of
data) or zoomed in, is best off processing the file in a forward-only
fashion to pick up the data it needs. To get efficient scrolling, a
combination of caching and prediction would probably help.

To get it really efficient, IMO you'd need to get the whole thing into
memory either on multiple machines or in a single 64-bit address space.
Having an image server that has tiles at different zoom levels, and
perhaps using interpolation, something along the lines of Google Maps /
Earth, seems a pretty inevitable design for maximum flexibility.

Some file formats might help; quadtree or z-order may reduce
stride-induced random access.
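
To make the tile idea concrete, a toy sketch (tile size, path scheme
and method names are all assumptions):

const int TileSize = 256;

// Path of the pre-rendered tile covering (pixelX, pixelY) at a zoom
// level, where level 0 is the most zoomed-out.
static string TilePath( string root, int level, int pixelX, int pixelY )
{
    int tx = pixelX / TileSize;
    int ty = pixelY / TileSize;
    return Path.Combine( root, level + "/" + tx + "_" + ty + ".png" );
}

// The viewer only ever loads the handful of tiles intersecting the
// viewport; each one is small enough for GDI+ to handle comfortably.
static Bitmap LoadTile( string root, int level, int pixelX, int pixelY )
{
    return new Bitmap( TilePath( root, level, pixelX, pixelY ) );
}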

-- Barry
 
Barry said:
Even if you could select portions of the data into memory, it wouldn't
do you much good, as you can only display on screen as many pixels as
your desktop resolution.

In some cases the virtual address space, and not physical RAM, could be the
limitation. In such cases, keeping all the data in memory and swapping
virtual addresses around with AWE or MapViewOfFile would outperform file
access.
Depending on the image file format, for any given cropped area, each
subsequent scan line of pixels / pixel groups will likely be very
distant in the file, many strides away. IMO a view that builds up a
screen-viewable picture, whether zoomed out (and thus an aggregate of
data) or zoomed in, is best off processing the file in a forward-only
fashion to pick up the data it needs. To get efficient scrolling, a
combination of caching and prediction would probably help.

MapViewOfFile should gracefully degrade to this, but use memory when
possible.
To get it really efficient, IMO you'd need to get the whole thing into
memory either on multiple machines or in a single 64-bit address
space. Having an image server that has tiles at different zoom
levels, and perhaps using interpolation, something along the lines of
Google Maps / Earth, seems a pretty inevitable design for maximum
flexibility.

Some file formats might help; quadtree or z-order may reduce
stride-induced random access.

Thanks for sharing your wisdom... I wish I knew more about some of those,
but they seem kind of specialized; I haven't encountered a need for them.
 
Ben said:
Barry said:
Ben said:
Depending on the image file format, for any given cropped area, each
subsequent scan line of pixels / pixel groups will likely be very
distant in the file, many strides away. IMO a view that builds up a
screen-viewable picture, whether zoomed out (and thus an aggregate of
data) or zoomed in, is best off processing the file in a forward-only
fashion to pick up the data it needs. To get efficient scrolling, a
combination of caching and prediction would probably help.

MapViewOfFile should gracefully degrade to this, but use memory when
possible.

MapViewOfFile will only directly help if the image is a raw bitmap and
you're displaying it with an image-pixel to screen-pixel ratio <= 1, and
even then it'll be limited. Since you'll need to copy each slice of
pixels into a single bitmap for actual on-screen display, you don't save
as much as you could by piggybacking on the VM/FS caching subsystem to
avoid a copy, IMO.

Zoomed out, you need to aggregate data from multiple pixels for each
on-screen pixel, possibly many, many pixels. Since those aggregates
would be expensive to create, you'd want to both cache them and
calculate them predictively (or exhaustively, offline). Ideally the
image format itself would provide for this.
Thanks for sharing your wisdom... I wish I knew more about some of those,
but they seem kind of specialized; I haven't encountered a need for them.

Z-order is a space-filling curve, a kind of fractal that is recursively
made up of Z-shaped strokes. It's described on Wikipedia reasonably
well, which is where I went to refresh my memory of the traversal:

http://en.wikipedia.org/wiki/Z-order_(curve)

Quadtrees are involved in a number of image algorithms, as a search on
Google will show, and Z-order is a depth-first traversal of a quadtree.
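
Computing a Z-order index is just bit interleaving; a quick
illustrative function (16-bit coordinates for brevity):

static uint MortonIndex( ushort x, ushort y )
{
    // interleave the bits of x and y so that spatially close pixels
    // end up close together in the file
    uint result = 0;
    for ( int bit = 0; bit < 16; bit++ )
    {
        result |= (uint)( ( x >> bit ) & 1 ) << ( 2 * bit );
        result |= (uint)( ( y >> bit ) & 1 ) << ( 2 * bit + 1 );
    }
    return result;
}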

For example, the aforementioned aggregate data can be inferred from a
quadtree by successively ignoring leaves after a given depth. However,
that requires you to have constructed a quadtree already, which
effectively has MIPs built into the node colours.
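
A minimal sketch of such a node (the types are made up; the point is
the early-out at a given depth):

class QuadNode
{
    public uint AverageArgb;    // pre-aggregated colour, the built-in MIP
    public QuadNode[] Children; // null at a leaf
}

// Sample the tree, ignoring leaves after a given depth: stopping early
// returns the pre-aggregated average instead of descending further.
static uint Sample( QuadNode node, int depthRemaining,
    int x, int y, int size )
{
    if ( depthRemaining == 0 || node.Children == null )
        return node.AverageArgb;
    int half = size / 2;
    int index = ( y >= half ? 2 : 0 ) + ( x >= half ? 1 : 0 );
    return Sample( node.Children[index], depthRemaining - 1,
        x % half, y % half, half );
}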

Very brief intro to quadtrees:

http://www.cs.ubc.ca/~pcarbo/cs251/welcome.html

-- Barry
 
Barry Kelly said:
Ben said:
Barry said:
Ben Voigt [C++ MVP] wrote:
[...]
Depending on the image file format, for any given cropped area, each
subsequent scan line of pixels / pixel groups will likely be very
distant in the file, many strides away. IMO a view that builds up a
screen-viewable picture, whether zoomed out (and thus an aggregate of
data) or zoomed in, is best off processing the file in a forward-only
fashion to pick up the data it needs. To get efficient scrolling, a
combination of caching and prediction would probably help.

MapViewOfFile should gracefully degrade to this, but use memory when
possible.

MapViewOfFile will only directly help if the image is a raw bitmap and
you're displaying it with an image-pixel to screen-pixel ratio <= 1, and
even then it'll be limited. Since you'll need to copy each slice of
pixels into a single bitmap for actual on-screen display, you don't save
as much as you could by piggybacking on the VM/FS caching subsystem to
avoid a copy, IMO.

I was under the impression that mapped files do use the same virtual memory
code, just backed by a real file instead of the swapfile.
 
Ben said:
I was under the impression that mapped files do use the same virtual
memory code, just backed by a real file instead of the swapfile.

That is the only thing that makes sense.

But the docs only describe "what" not "how".

Arne
 
Ben said:
Barry Kelly said:
Ben Voigt [C++ MVP] wrote:
MapViewOfFile will only directly help if the image is a raw bitmap and
you're displaying it with an image-pixel to screen-pixel ratio <= 1, and
even then it'll be limited. Since you'll need to copy each slice of
pixels into a single bitmap for actual on-screen display, you don't save
as much as you could by piggybacking on the VM/FS caching subsystem to
avoid a copy, IMO.

I was under the impression that mapped files do use the same virtual memory
code, just backed by a real file instead of the swapfile.

That's what I meant. Virtual memory, which requires paging in and out on
demand, is very similar to file-system caching, so they are often
implemented by the same OS subsystem.

-- Barry
 
Thank you for the input, everyone!

I managed to find a couple of "purely .NET" controls that do what I
want. One of them is included in DotImage SDKs by Atalasoft
($600-$2500). The other one is part of Imagistik Image Viewer by
Informatik. (The latter is noticeably slower when resizing images or
displaying thumbnails, however.) Both handle my humongous multi-page
TIFFs rather gracefully. Reflector shows that both solutions
ultimately rely on GDI+ and nothing else. The internals in both cases
are rather complex, though.
 