Brandon said:
If an object holds onto a resource, then it needs to have a
finalizer. Once an object needs to be finalized, it is no longer
lightweight -- just having a finalizer leads to performance issues.
Some languages have made such objects immutable, thus making it
impossible for them to hold onto resources.
Everything was fine until your final sentence, which does not follow
logically from the first two, even if it is true for some languages. Simply
because a lightweight object may hold a resource is no reason for making it
immutable; "may" and "does" are two different things in this context. As a
conceptual analogy: simply because a human being may commit a heinous
offense is no reason for putting that person in jail.
Boxed value types never have their finalizers run. To the CLR, value
types don't have finalizers at all (even if one overrides the Finalize
method).
Then there is no reason to restrict __value types so that they cannot have
destructors, other than that the CLR does not want to track value types
going out of scope in order to call their destructors at that point.
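A minimal C# sketch of the restriction in question (the type names are
illustrative, not from the thread): the compiler rejects a destructor on a
value type but accepts one on a class.

    struct Handle
    {
        public int Fd;
        // ~Handle() { }    // compile error: only class types
        //                  // can contain destructors
    }

    class HandleWrapper
    {
        public int Fd;
        ~HandleWrapper() { }   // legal: becomes a Finalize override
    }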
Finalizers and destructors are two different things. The existing
managed extensions syntax (and C#) makes a great mess of things by
confusing the two.
I realize that they are syntactically different, but the natural
arrangement for languages that do have destructors is for the destructor to
be called when the finalizer method runs. That is what C# and MC++ do, and
I think this is an excellent decision. The greatest weakness of Java
compared to .NET is that Java has no destructors and no guarantee that a
destructor will ever finally be called, whereas .NET actually does have
destructors, which are called through the finalizer method. Of course both
are flawed as GC languages in their inability to implement the optimum way
to deal with classes which encapsulate resources, despite Dispose() and
Close() and other workarounds, but at least .NET makes an attempt at a
solution, by supporting destructors, while Java does not.
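To make the "called through the finalizer method" point concrete, here is a
rough C# sketch: the C# "destructor" is really just finalizer syntax.
Writing this:

    class Resource
    {
        ~Resource()
        {
            // cleanup runs whenever the GC finalizes the object
        }
    }

is compiled as if you had written the following, which C# will not let you
type directly:

    // protected override void Finalize()
    // {
    //     try { /* cleanup */ }
    //     finally { base.Finalize(); }
    // }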
Actually, they don't. __gc classes have finalizers; unfortunately, they
use the destructor syntax for them. This is being fixed in the new
Whidbey syntax.
The biggest improvement that could be made in the Whidbey release, or any
further .NET release, is a simple idea, although it may be difficult to
implement at present: allow certain classes and objects to be marked (I
favor a "resource" attribute) in such a way that when the object goes out
of scope, the runtime immediately checks whether the destructor can be
called (no other references remain) and calls it if it can. This is the
ultimate solution to handling resource objects, and it would entirely
eliminate the headaches they pose in .NET. Until MS takes this simple
stance in their GC language, it will remain partially flawed in the same
way that other GC-based languages are flawed. Once MS implements the above,
or its equivalent, .NET will stand out from the current flawed
implementations of GC in other languages. But it is up to MS to implement
this instead of procrastinating and looking for further kludges similar to
Dispose(), Close(), and other poor workarounds.
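For contrast with that proposal, a hedged sketch of what exists today: the
closest the current framework comes to such a "resource" attribute is
IDisposable plus the C# using statement, which gives deterministic cleanup
at end of scope -- but only because the caller opts in at every use site,
not because the runtime tracks the object. The names below are
illustrative.

    using System;

    class FileResource : IDisposable
    {
        public void Dispose()
        {
            // release the underlying resource here,
            // then tell the GC no finalization is needed
            GC.SuppressFinalize(this);
        }

        ~FileResource()
        {
            // backstop only: runs if the caller forgot to dispose
        }
    }

    class Demo
    {
        static void Main()
        {
            using (FileResource r = new FileResource())
            {
                // use r
            }   // r.Dispose() runs here, deterministically
        }
    }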
If you're using value types, that's the only way to do it right.
Value types don't have destructors or finalizers, so holding onto
resources outside of function scope is very dangerous.
Yes, I figured this out. Still, that is only because the runtime is
unwilling to track when value types go out of scope. If I could mark my
value type with something like the "resource" attribute suggested above,
then I could have a finalizer and wouldn't need the inelegant workaround I
currently have. I can live with the workaround for the present, but I
encourage MS to solve this problem, as noted above, in a future release.
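The workaround itself isn't shown in the thread; a common approach,
assuming the value type wraps a native handle, is to implement IDisposable
on the struct and rely on callers disposing it, since no finalizer safety
net is possible:

    using System;

    struct ValueHandle : IDisposable
    {
        private IntPtr handle;

        public ValueHandle(IntPtr h) { handle = h; }

        // No finalizer can exist on a struct, so if Dispose is
        // never called the native resource simply leaks.
        public void Dispose()
        {
            if (handle != IntPtr.Zero)
            {
                // release the native handle here
                handle = IntPtr.Zero;
            }
        }
    }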