I'm thinking of cases where the runtime doesn't immediately go down and the
problem lingers for some time, e.g. because of the famous catch (Exception).
A simple test, for example, shows that an OutOfMemoryException is not fatal in
the sense that it brings the CLR down. But once you run into it, you're in
trouble. The run-of-the-mill business application can do little but recycle.
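To make the scenario concrete, here is a minimal sketch of a broad
catch (Exception) swallowing an OutOfMemoryException so the process keeps
running (the allocation size is illustrative; exactly when it fails depends
on platform, bitness, and GC configuration):

using System;

class OomSwallowDemo
{
    static void Main()
    {
        try
        {
            // Provoke an allocation failure; the exact threshold varies.
            var block = new byte[int.MaxValue];
            GC.KeepAlive(block);
        }
        catch (Exception ex) // the "famous" broad catch
        {
            // The CLR has not torn the process down; execution simply
            // continues, possibly with the application in a degraded state.
            Console.WriteLine("Swallowed: " + ex.GetType().Name);
        }

        Console.WriteLine("Still running after the failed allocation.");
    }
}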
If the runtime has been corrupted but does not shut down immediately, then
there are larger problems than anything the application can deal with.
Even in the example you provide there is no single correct approach that all
applications should use. If the runtime itself is OOM then it will shut down
regardless of what the app does. If the app is OOM but the system is fine
then there still is no agreed-upon course of action that all applications
should follow.
The real problem is that the term "fatal" is ill-defined. Does it refer to
the runtime or to the application? If for the app, then what is fatal for
one app may not be fatal for another, so there could be no agreed-upon
exception hierarchy of fatal vs. non-fatal exceptions that would always be
accurate. If it refers to the runtime, then when something truly is fatal I
would want the runtime to stop immediately.
Yeah, but in the same spirit it should be possible to point out working
alternatives. I don't think anybody has acted like a zealot in the
discussion so far. But don't get me started on checked exceptions. Evil!
Evil! ;-) ;-)
I am opposed to checked exceptions for a variety of reasons - no point in
getting off-topic. If you have an approach that is of value then please
describe it.
It's in his book .NET Framework Programming, and some of his comments are
reprinted in the .NET Framework Standard Library Annotated Reference (aka
SLAR).
Why not repeat the gist of it here?
Sounds a bit academic to me. We're not talking about some fantasy technology
that nobody has ever used. Adding a distinct exception hierarchy for system
failures doesn't change anything fundamentally. Concerns regarding
scalability are a bit far-fetched IMHO.
I disagree with both statements. As tempting as it sounds I don't see how
creating this hierarchy adds enough value to justify it. Perhaps you are not
concerned with scalability, but I would be very concerned if the CLR
architects were not.
What's at the top of the list?
Tools (Design time):
Static and dynamic analysis tools to determine the exceptions that can be
thrown from a given method call (I am not referring to checked exceptions).
The exceptions found ought to show up in the IntelliSense list.
There is no way to express the relationship between where an exception is
thrown and where the exception will be handled other than by manually
examining the entire program for all throw and catch statements. This
makes it very difficult to detect missing catch blocks, or places where the
exception is handled in the "wrong" place. In a large system there tends to
be an explosion of handlers with no easy way to manage it.
Distinguish between handling an exception and swallowing an exception - this
ought to generate a compiler warning; I admit this would be difficult to
accurately detect.
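For illustration, a sketch of the distinction the last item draws; the
method names are made up for the example:

using System;
using System.IO;

class ConfigReader
{
    // Swallowing: the exception is caught and discarded, so the caller
    // cannot tell that anything went wrong. This is the pattern the
    // hypothetical warning above would flag.
    public static string ReadSettingSwallowed(string path)
    {
        try
        {
            return File.ReadAllText(path);
        }
        catch (Exception)
        {
            return null; // failure silently converted into "no data"
        }
    }

    // Handling: the failure is dealt with deliberately - narrowed to the
    // exception types this method knows how to recover from, logged, and
    // translated into a documented fallback.
    public static string ReadSettingHandled(string path, string fallback)
    {
        try
        {
            return File.ReadAllText(path);
        }
        catch (IOException ex)
        {
            Console.Error.WriteLine("Could not read {0}: {1}", path, ex.Message);
            return fallback;
        }
    }
}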
Runtime wish list:
Notifications when exceptions are thrown (I believe this was added to the
2.0 version); a sketch of such a hook appears after this list.
Programmatic ability to capture the parameters and locals within a method
when an exception occurs (for logging). Debuggers do this - there ought to
be a module that allows us to do it.
Support for restartable exceptions (low on the wish list).
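As a point of reference for the notification item above, later framework
versions do expose a hook along these lines: AppDomain.FirstChanceException
(available since .NET Framework 4.0) fires for every managed exception as it
is thrown. A minimal sketch of using it for logging:

using System;
using System.Runtime.ExceptionServices;

class ExceptionNotificationDemo
{
    static void Main()
    {
        // Fires at the point each exception is thrown, before any catch
        // block runs; useful for centralized logging, though it cannot
        // change how the exception propagates.
        AppDomain.CurrentDomain.FirstChanceException +=
            (sender, e) => Console.Error.WriteLine(
                "Thrown: " + e.Exception.GetType().Name);

        try
        {
            throw new InvalidOperationException("demo");
        }
        catch (InvalidOperationException)
        {
            // The notification above fired even though the exception
            // was handled here.
        }
    }
}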
Theoretical work on when to catch and when to throw.
Currently the guidelines are so poorly defined that it is difficult to
arrive at a framework everyone can agree on. Admonishments like "only throw in
exceptional conditions" are pointless and silly - one person's exception is
another's error code - as are "only catch when you can recover". The
guidelines also usually ignore aspects such as boundary-crossings,
distinctions between libraries and applications, server-side versus
client-side differences, etc.
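The framework itself illustrates the "one person's exception is another's
error code" point with Int32.Parse versus Int32.TryParse - the same failed
conversion is an exception in one style and an ordinary return value in the
other:

using System;

class ParseStyles
{
    static void Main()
    {
        string input = "not a number";

        // Exception style: the failure is treated as an exceptional condition.
        try
        {
            Console.WriteLine(int.Parse(input));
        }
        catch (FormatException)
        {
            Console.WriteLine("Bad input (exception style).");
        }

        // Return-value style: the identical failure is an ordinary result.
        int parsed;
        if (int.TryParse(input, out parsed))
            Console.WriteLine(parsed);
        else
            Console.WriteLine("Bad input (error-code style).");
    }
}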
This is all based on the assumption that a fatal exception immediately
forces the system to go down. As long as this is not guaranteed or
specified, I'd rather see a simple mechanism in place that makes sure
developers don't produce potentially harmful code. And even if all this
weren't required, the value of having a consistent exception type hierarchy
should be obvious.
This is a catch-22 - any exception that is fatal to the system is fatal; if
it is not fatal to the system then it is not fatal. If the system does not
realize that it is fatal, then it cannot provide a hierarchy that is
meaningful. It also isn't at all obvious what such a hierarchy would look
like or how it would be used.
I especially don't want the system to keep running if it knows it just hit a
fatal error - at that point it cannot guarantee anything - type safety goes
out the window, as do its guarantees about safe memory access, code access
security, etc.
It doesn't need to be new types -- all I want are types that are part of
a distinct type family.
There is a category, not well defined, called SystemException. However,
the exceptions under it are non-fatal and recoverable by the application. I
suspect this
is as close as you will get to what you are looking for. I don't even see
much value in this hierarchy but it is there if you want to use it.
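A quick sketch of why that hierarchy doesn't buy much as a fatal/non-fatal
split - filtering on SystemException lumps OutOfMemoryException together
with garden-variety argument errors:

using System;

class SystemExceptionFilterDemo
{
    static void Process(object item)
    {
        try
        {
            if (item == null)
                throw new ArgumentNullException("item");
            // ... real work ...
        }
        catch (SystemException ex)
        {
            // OutOfMemoryException derives from SystemException, but so do
            // ArgumentNullException and IndexOutOfRangeException - ordinary
            // programming errors. Deriving from SystemException says nothing
            // about whether the condition is fatal to this application.
            Console.Error.WriteLine("SystemException caught: " + ex.GetType().Name);
            throw;
        }
    }

    static void Main()
    {
        try { Process(null); } catch (ArgumentNullException) { }
    }
}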
And if the whole purpose of a code unit is to wrap third-party code?
It doesn't matter - a unit test only tests your code, not external code.
One reason is that unit tests can be run as part of an automated build
process. Build machines should be kept pristine and unpolluted by
application-specific binaries that could potentially corrupt the build (IOW,
don't install app executables). If the unit test actually required external
binaries/libraries to function then it could not be made part of the
automated build process. It would especially be a problem if the same build
machine was used for different versions of the app, where each version
required different versions of external components to be installed.
Another reason is that a unit test is intended to validate the assumptions
your code makes, not the actual implementation of the external component.
The mock object simulates the external component, and the results you get
from the unit test are only as good as your simulation. If the simulation
does not reflect the reality of the component then your model is incorrect. As I
said, all this does is validate your misconceptions; running a system test
that actually exercises the external component may detect this.
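For what it's worth, here is a minimal sketch of the pattern being described.
The type names (IPaymentGateway, OrderProcessor) are purely illustrative: the
external component sits behind an interface your code owns, and the unit test
substitutes a hand-rolled mock that encodes your assumptions about it:

using System;

// The interface our own code depends on. In production an adapter that
// wraps the external component implements it; in the unit test the
// hand-rolled mock below takes its place.
public interface IPaymentGateway
{
    bool Charge(decimal amount);
}

public class OrderProcessor
{
    private readonly IPaymentGateway gateway;

    public OrderProcessor(IPaymentGateway gateway)
    {
        this.gateway = gateway;
    }

    // The logic under test: our assumption is that a failed charge means
    // the order is not completed.
    public bool PlaceOrder(decimal amount)
    {
        return gateway.Charge(amount);
    }
}

// The mock encodes our assumption about how the external component behaves;
// the test is only as good as this simulation.
public class FailingGateway : IPaymentGateway
{
    public bool Charge(decimal amount) { return false; }
}

public static class OrderProcessorTests
{
    public static void PlaceOrder_FailedCharge_ReturnsFalse()
    {
        var processor = new OrderProcessor(new FailingGateway());

        if (processor.PlaceOrder(100m))
            throw new Exception("Expected the order to fail when the charge fails.");
    }
}

If the real component never actually fails the way FailingGateway pretends it
does, the test still passes - which is exactly the "validating your
misconceptions" problem a system test against the real component would catch.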