When should I implement IDisposable?

Hi Jon,

This thread is getting too long; it is not necessary to answer. The chance that
we will agree is very low. (But we knew that already.)

:-)

Cor
 
What I was complaining about was the fact that there are lots of programs
that have relied on the destructor being called when an object goes out of
scope. In the destructor, the code takes care of releasing resources or
whatever else should happen when the object is no longer required.

This is not the case in .NET. Now you have to implement IDisposable and
remember to call Dispose on the object. For routines that have multiple
exit points this becomes a problem. The developer has to remember (a
big cause of problems) to dispose the object, and finding the code that fails
to do so during debugging can be a tedious and time-consuming exercise.
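
(For illustration only, a minimal C# sketch of the hazard described above; the
type, method and file names are hypothetical. With several exit points, it is
easy for one path to skip the Dispose call:)

    using System.IO;

    class ReportWriter
    {
        // Hypothetical example: every early return must remember to dispose the stream.
        static bool TryWriteReport(string path, string text)
        {
            FileStream stream = new FileStream(path, FileMode.Create);

            if (string.IsNullOrEmpty(text))
            {
                return false;           // Bug: the stream is never disposed on this path.
            }

            byte[] bytes = System.Text.Encoding.UTF8.GetBytes(text);
            stream.Write(bytes, 0, bytes.Length);

            stream.Dispose();           // Only the "happy path" releases the file handle.
            return true;
        }
    }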

Coding to a standard that says that something will eventually happen
is poor design in my mind. The whole purpose of coding is to provide a
consistent processing path.

Lloyd Sheen
 
Cor said:
Why?

The fact that VB.NET exists does not mean that everybody is explicitly advised
to use it.

If something wasn't encouraging its explicit disposal, why would it
implement IDisposable in the first place?
 
Lloyd Sheen said:
What I was complaining about was the fact that there are lots of programs
that have relied on the destructor being called when an object goes out of
scope. In the destructor, the code takes care of releasing resources or
whatever else should happen when the object is no longer required.

This is not the case in .NET. Now you have to implement IDisposable and
remember to call Dispose on the object. For routines that have multiple
exit points this becomes a problem.

Not really - you use a try/finally block instead. In C# you use a using
block which makes it even easier.
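
(A short sketch of both shapes, assuming a hypothetical file path; the finally
block, or the using statement that the C# compiler expands into one, guarantees
that Dispose runs on every exit path, including exceptions:)

    using System.IO;

    class Example
    {
        static void WithTryFinally(string path)
        {
            StreamReader reader = new StreamReader(path);
            try
            {
                string line = reader.ReadLine();
                // ... any number of early returns or thrown exceptions here ...
            }
            finally
            {
                reader.Dispose();   // Runs no matter how the try block is left.
            }
        }

        static void WithUsing(string path)
        {
            // Equivalent, but the compiler emits the try/finally for you.
            using (StreamReader reader = new StreamReader(path))
            {
                string line = reader.ReadLine();
            }   // reader.Dispose() is called here automatically.
        }
    }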
 
OK, again, what I am talking about is the loss of a fundamental programming
idea.

When an object goes out of scope and is no longer usable, it should destruct.
This gives the developer the ability to catch this and do whatever is
required. It seems that Using will be a new addition to the VB language
in the next release, so MS realizes through the many complaints that there
were many developers relying on this feature.

If the point at which this happens is whenever the framework decides, then the
developer is left to the whim of that framework. IDisposable is a good
option to allow developers to do the same thing, but if it is forgotten, it is
better that the destructor is called when the object goes out of scope.

Lloyd
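
(The standard .NET Dispose pattern is aimed at exactly this compromise: callers
can release resources deterministically through Dispose, and a finalizer acts
as a backstop when they forget. A rough sketch, with a hypothetical unmanaged
handle standing in for the real resource:)

    using System;

    class ResourceHolder : IDisposable
    {
        private IntPtr handle;          // Hypothetical unmanaged resource.
        private bool disposed;

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);  // No need to finalize once cleaned up explicitly.
        }

        protected virtual void Dispose(bool disposing)
        {
            if (disposed) return;
            if (disposing)
            {
                // Release other managed IDisposable objects here.
            }
            // Release the unmanaged handle here.
            handle = IntPtr.Zero;
            disposed = true;
        }

        ~ResourceHolder()
        {
            // Backstop: runs whenever the GC gets around to it, if Dispose was forgotten.
            Dispose(false);
        }
    }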
 
Lloyd Sheen said:
OK, again, what I am talking about is the loss of a fundamental programming
idea.

When an object goes out of scope and is no longer usable, it should destruct.

That's a problem to start with in a managed world, as an *object* never
goes out of scope - only a *variable* does. Working out whether or not
the variable going out of scope means that the object is no longer
referenced is a very expensive operation.

There was a very good article on this somewhere, explaining just why it
was impossible to get deterministic finalization without huge other
penalties, but I can't find it now. I'll post a reply to this message
if I find it.
 
VB.NET, C#, J++ and C++ with managed code are almost completely equal when
they are compiled. In this newsgroup, those are the only languages that count
and the ones we are talking about.

Lloyd was complaining about the newer .NET languages and comparing them with
the older VB environment, so it is relevant to his message. Also...
If you are coming from a VB background you may not see it this way and I
understand why.
What is it that you understand? I am curious what you mean by the sentence
above.

Now you confuse me. If you didn't want to talk about VB, why did you ask me
to respond to my VB statement?
I find it an efficient way, and that is what I am talking about: not making a
round trip to the OS every time to clean up resources if there is enough
memory.

You keep assuming there is enough memory without knowing the specifics. I
don't program with that assumption. That is dangerous.

The original poster never mentioned memory availability, installation
environments, number of applications running on the machine, number of
instances of his specific application, number of users, type of application,
etc. He said thousands of objects should be cleaned up when the user
intervenes.

Thousands of objects can hog just as many resources as one big one. For
example, if he is talking about a single web server application with
multiple sessions, then one could multiply that by every user logged on. I'm
assuming the worst case here, but even if it is a single application on a
machine that isn't equipped to handle thousands of objects with small graphic
images, he should be prepared to call Dispose on them. It's better to be
safe than sorry when working with incomplete specs. Thousands of small
managed images, files, graphics, etc. should have Dispose called on them just
as importantly as one large object. Just assuming the GC will work OK without
knowing the specifics is wrong.
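
(For the scenario being argued about - thousands of small images - the safe
habit is simply to dispose each object as soon as it is finished with. A small
C# sketch, with a hypothetical folder of thumbnails:)

    using System.Drawing;
    using System.IO;

    class ThumbnailScanner
    {
        // Hypothetical example: measure many small images without leaving
        // thousands of GDI handles alive until the GC happens to run.
        static long TotalPixels(string folder)
        {
            long total = 0;
            foreach (string file in Directory.GetFiles(folder, "*.png"))
            {
                using (Bitmap image = new Bitmap(file))
                {
                    total += (long)image.Width * image.Height;
                }   // Each bitmap's unmanaged resources are released immediately.
            }
            return total;
        }
    }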
I have never seen that in this newsgroup, seriously. What I have seen is that
Task Manager shows a lot (but what is a lot?) of memory consumption, and that
people are afraid the GC does not work correctly because it only seems to
start immediately when the program closes, but I cannot be sure of that, of
course.

Look at the .NET performance newsgroup. Usually the first thing that is
suggested when a memory leak is "thought" to have happened is calling Dispose
on graphic objects, file objects, etc. I just went over and sure enough there
was one.
 
This thread is getting too long; it is not necessary to answer. The chance
that we will agree is very low. (But we knew that already.)

While I know this is true, unfortunately agreement is not my goal. I believe
you gave the original poster bad advice without knowing the specifics,
which in turn could hurt him and his development effort.
 