Hilton said:
Because when the Bitmap gets GC'd, the GC will do an "if (o 'implements'
IDisposable && !o.Disposed) o.Dispose()", and even though this Bitmap
object won't be Disposed deterministically, it will be Disposed at least
when/if an OutOfMemory exception gets thrown.
First:
Are you aware that there is no Disposed property on the IDisposable
interface:
// Summary:
//     Defines a method to release allocated unmanaged resources.
[ComVisible(true)]
public interface IDisposable
{
    // Summary:
    //     Performs application-defined tasks associated with freeing,
    //     releasing, or resetting unmanaged resources.
    void Dispose();
}
It would mean the runtime would have to record whether Dispose() was
called.. a big overhead, let me say, for nothing.
The CLR already tracks finalizers; should it now track everything twice?
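For what it's worth, the only per-object disposal bookkeeping the framework
does today is the finalization queue, and the usual way code interacts with
it is GC.SuppressFinalize. A minimal sketch of that interplay (the class name
is just for illustration):

using System;

class Tracked : IDisposable
{
    public void Dispose()
    {
        // release resources here...
        GC.SuppressFinalize(this); // reuse the finalizer tracking the CLR already does
    }

    ~Tracked()
    {
        // only reached if Dispose() was never called
    }
}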
Second:
What if Dispose() throws an exception? Was the object disposed or not?
Do we still need to finalize it or not?
Partially disposed objects are a nightmare.
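To make the ambiguity concrete, here is a contrived sketch (class and buffers
are made up purely for illustration) where Dispose() fails halfway through:

using System;
using System.Runtime.InteropServices;

class TwoBuffers : IDisposable
{
    private IntPtr first = Marshal.AllocHGlobal(1024);
    private IntPtr second = Marshal.AllocHGlobal(1024);

    public void Dispose()
    {
        Marshal.FreeHGlobal(first);
        first = IntPtr.Zero;

        // simulate a failure halfway through disposal
        throw new InvalidOperationException("failed mid-dispose");

        // never reached: 'second' is still allocated, the object is half disposed.
        // Should a GC-side "if (!o.Disposed) o.Dispose()" run it again? Finalize it?
    }
}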
Please recall, I'm not saying that my proposal is perfect and that we
should change our code to use it (it requires no code change in the apps).
All I'm saying is that if the GC added the "if" as stated above, it would
act as some level of safety net and catch undisposed objects.
It can't just add the "if", as I said before. To support it, yes, it would
require code changes.
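To be clear about what "code changes" means: something like the following
(purely hypothetical, nothing like it exists in the BCL) would have to be
added to, and correctly implemented by, every disposable class before the GC
could run that "if":

public interface IDisposableWithState : System.IDisposable
{
    bool Disposed { get; }
}

public sealed class Example : IDisposableWithState
{
    private bool disposed;

    public bool Disposed { get { return disposed; } }

    public void Dispose()
    {
        // release resources here...
        disposed = true;
    }
}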
I think we're kidding ourselves to think that you, me, and all the other
.NET engineers in the world will always write code to do the right thing.
I really like "using ()" and use it all the time, but there are some
objects whose lives cannot be bottled up in a few lines. One little bug
and whammo,
Well, that's the life of software engineers. One little bug (no finalizer
where there should be one) and..
You could blow up a nuclear power plant.. just a little bug.
The CLR offers very good protection, better than any other runtime, but it
still can't protect you from yourself.
a LOT of memory potentially gets leaked. I cannot even count the times
Leaked? Not if you are talking about managed resources, and not in any case
if you have a finalizer..
and by finalizer I don't mean ~Class() { /* gone for lunch */ }
(see the sketch a little further down).
people have posted here about being out of memory only to find that they
didn't know to Dispose a Bitmap object. The purist in you will jump up
and down and say "well, they should have, they screwed up, and they didn't
call Dispose() - their fault." Now if only we all lived in a perfect
world where one perfect engineer was solely responsible for one perfect
project... However, multiple engineers work on multiple projects, and one
missing Dispose() can cost a company millions. Then when we get to
finding the
No, one missed finalizer can cost millions. One missed Dispose() costs a few
bytes for a short time.
problem, we're back to finding the problem-causing 'malloc-free' we're all
so fond of. :|
Not really. There is no 'free' in C#.
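And to spell out what I meant above by a finalizer that actually does work:
a minimal sketch of the usual Dispose(bool) pattern, with a made-up
NativeBuffer class wrapping unmanaged memory. If a caller forgets Dispose(),
the finalizer still frees the memory, so it is delayed, not leaked for good.

using System;
using System.Runtime.InteropServices;

public sealed class NativeBuffer : IDisposable
{
    private IntPtr buffer;

    public NativeBuffer(int size)
    {
        buffer = Marshal.AllocHGlobal(size);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);   // Dispose ran; no need to finalize later
    }

    ~NativeBuffer()                  // the safety net for a missed Dispose()
    {
        Dispose(false);
    }

    private void Dispose(bool disposing)
    {
        if (buffer != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(buffer);
            buffer = IntPtr.Zero;
        }
        // when 'disposing' is true we could also dispose other managed members here
    }
}

(Newer code would usually wrap the handle in a SafeHandle instead, but the
idea is the same.)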
Why not add a safety net to make a C# application more robust, less leaky,
stay up longer, etc? Is it a band aid? Absolutely.
Hilton
Now, if you change the word Dispose to Finalize, you see that the framework
is already there.
Dispose() is "user mode" and Finalize() is "kernel mode".. which one do you
trust more?
Do you really need a double safety net?
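And to close the loop on the two nets: the "user mode" one is just
using/try-finally, which the compiler writes for you, and the "kernel mode"
one is the finalizer the Bitmap already has. Roughly (assuming a reference
to System.Drawing):

using System;
using System.Drawing;

class Demo
{
    static void Main()
    {
        using (Bitmap bmp = new Bitmap(100, 100))
        {
            // work with bmp; Dispose() is guaranteed on exit
        }

        // ...which the compiler expands to roughly this:
        Bitmap bmp2 = new Bitmap(100, 100);
        try
        {
            // work with bmp2
        }
        finally
        {
            if (bmp2 != null)
            {
                ((IDisposable)bmp2).Dispose();
            }
        }
        // and if neither is written, the Bitmap finalizer still runs.. eventually.
    }
}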