Try Catch End Try - Performance issue


Bob Achgill

I really like this construct but have tried to slow down
on using it because I get a 1 second pause each time I
use it. I don't really understand why the computer has
to think for 1 second! Especially at 2.8 GHz and 1 GB RAM.

The same pause happens in different situations: database
access Trys, array access Trys. Hmmm??
 

The construct try/catch/finally introduces very little overhead to your
program - but with a catch ;). If you actually catch an exception, then
you pay a price. That's why exceptions should be, well - exceptional :)
In other words, it is always good practice to code defensively to avoid
having any exceptions thrown in the first place - especially in a tight
loop or a flow-control construct of some type. That means that the
methods you create should probably NOT throw exceptions on failure,
but instead return a failure result of some sort - like a Boolean value.
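For instance, something like this (just air code to show the idea - the
names here are made up):

Module FailureResultDemo

    ' Throwing version: the caller pays the exception cost every time
    ' the index is bad.
    Function GetItem(ByVal items() As String, ByVal index As Integer) As String
        Return items(index)   ' throws IndexOutOfRangeException on a bad index
    End Function

    ' Version that reports failure through a Boolean instead of throwing.
    Function TryGetItem(ByVal items() As String, ByVal index As Integer, _
                        ByRef item As String) As Boolean
        If index < 0 OrElse index >= items.Length Then
            Return False
        End If
        item = items(index)
        Return True
    End Function

    Sub Main()
        Dim names() As String = New String() {"Ann", "Bob"}
        Dim result As String = Nothing
        If TryGetItem(names, 5, result) Then
            Console.WriteLine(result)
        Else
            Console.WriteLine("Index 5 is out of range - no exception needed")
        End If
    End Sub

End Module

The caller just checks the Boolean, so the common "not found" case never
pays the cost of a throw.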

I know I alluded to this in the above, but I think it bears repeating...
Don't use exceptions for flow control of your program. Throwing and
Catching exceptions is very costly. So, DO use Try/Catch/Finally - Do
your best not to throw any exceptions :)
 
There's no hit for just wrapping the same code in try/catch. Are you in
debug mode by any chance?
 
Hi Tom

Can you explain why the first exception that is thrown takes a significant
time to execute (several seconds), whilst subsequent exceptions take very
little time?

Whilst I understand what you say about not throwing exceptions for the sake
of it, it offers a convenient third way. I like to think in terms of a
function taking parameters and returning a result, and failure being flagged
by an exception. Why else do we have the InvalidArgumentException exception
(or whatever it's called)? Isn't that just the kind of exception that could
be thrown all the time, if the user insists on entering 99 when the maximum
value permitted is 98?

How about timeouts? Timeouts with a retry could quite reasonably be
implemented with exceptions, but if there is an overhead then the whole
thing becomes sluggish. If the overhead is caused by the collection of the
stack trace, then I would prefer the default to be to omit the stack trace,
and allow me to specify when it is to be collected. If not, I wonder what it
is that takes all the time.

Charles
 
Charles said:
Whilst I understand what you say about not throwing exceptions for
the sake of it, it offers a convenient third way. I like to think in
terms of a function taking parameters and returning a result, and
failure being flagged by an exception. Why else do we have the
InvalidArgumentException exception (or whatever it's called)? Isn't
that just the kind of exception that could be thrown all the time, if
the user insists on entering 99 when the maximum value permitted is
98?

No, at least in my opinion, that's not what InvalidArgumentException is for.

My opinion on it is this: standard C++ library exceptions are split in two
general "types" of exceptions, the std::logic_error, and the
std::runtime_error (both inherit from std::exception). Although .Net doesn't
have this distinction, I still like to think of exceptions in this way. A
std::logic_error is a programmer's mistake, an exception that should not
occur in a correctly programmed application, regardless of user input. A
std::runtime_error is an error that can occur at runtime and isn't caused by
the programmer (think files that fail to open etc.)

In C++, the std::invalid_argument class inherits from std::logic_error.
That's also how I see InvalidArgumentException. It's meant to be an
exception that indicates that the *programmer* is passing an invalid value
into a function. Therefore, if you are asking for user input, and there is a
reasonable way to check that the value is correct *before* passing it to a
function that would throw an InvalidArgumentException if it's not,
you should do so.
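For Charles' 99-vs-98 case, that would just mean something like this (air
code; IsNumeric/CInt is only one way to do the check):

Module InputValidationDemo

    Sub Main()
        Dim text As String = "99"    ' pretend this came from the user
        Dim value As Integer

        ' Validate up front instead of letting the conversion (or a
        ' range-checking method) throw and catching the exception.
        If Not IsNumeric(text) Then
            Console.WriteLine("Please enter a number.")
        ElseIf CInt(text) > 98 Then
            Console.WriteLine("The maximum value permitted is 98.")
        Else
            value = CInt(text)
            Console.WriteLine("Accepted: " & value)
        End If
    End Sub

End Module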

This is of course not a law or anything. It's just the rule that I tend to
follow.

Also, any overhead from the first exception being thrown is probably caused
by the relevant exception throwing/handling code being jit-compiled. When
that's done, the subsequent exceptions will be cheaper to throw.
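If you want to see that for yourself, a rough sketch like this (air code,
and TickCount only resolves to about 15 ms) should show the first throw
costing far more than the later ones:

Module FirstThrowTiming

    Sub Main()
        ' The first throw pays the one-off costs (JIT, loading the
        ' localized exception message resources); later throws are
        ' much cheaper.
        Dim i As Integer
        For i = 1 To 3
            Dim start As Integer = Environment.TickCount
            Try
                Throw New InvalidOperationException("test")
            Catch ex As InvalidOperationException
                ' swallowed - we only care about the timing
            End Try
            Console.WriteLine("Throw #" & i & ": " & _
                              (Environment.TickCount - start) & " ms")
        Next
    End Sub

End Module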
 
Inline..

Charles Law said:
Hi Tom

Can you explain why the first exception that is thrown takes a significant
time to execute (several seconds), whilst subsequent exceptions take very
little time?

A number of things happen. First of all, there's always a slight delay
because the exception has to capture the current stack frame. As for
first-time delays, one of the biggest culprits is that most of the core
exception messages are stored as localized resource strings, and the
resource manager has to make sure the correct resource is loaded (it could
have to load a DLL) and then locate the text. After that, things are
probably cached.

Whilst I understand what you say about not throwing exceptions for the sake
of it, it offers a convenient third way. I like to think in terms of a
function taking parameters and returning a result, and failure being flagged
by an exception. Why else do we have the InvalidArgumentException exception
(or whatever it's called)? Isn't that just the kind of exception that could
be thrown all the time, if the user insists on entering 99 when the maximum
value permitted is 98?

There's nothing wrong with throwing InvalidArgumentException, but the
calling code (the one with the try-catch) should NEVER depend on an error
for logic branching. Catching exceptions should be... well, an exceptional
case that only happens on a small number of exceptional occasions. The
calling code, if it needs to perform well, should make sure it doesn't do
something that will routinely cause an exception to be thrown. Exceptions
should only happen once in a blue moon if all is well (relatively speaking).
Exceptions should not be a lazy coder's alternative to checking things before
executing something.
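For the database case mentioned earlier, that means checking for DBNull up
front instead of catching an InvalidCastException every time the column is
null (air code - the column name is made up):

Module DbNullCheckDemo

    Sub Main()
        ' Simulate a nullable column value coming back from a DataRow.
        Dim raw As Object = DBNull.Value   ' e.g. row("Quantity")
        Dim quantity As Integer

        ' Check before converting, instead of letting CInt throw an
        ' InvalidCastException that has to be caught every time the
        ' column is null.
        If raw Is DBNull.Value Then
            quantity = 0
        Else
            quantity = CInt(raw)
        End If

        Console.WriteLine("Quantity = " & quantity)
    End Sub

End Module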

An extreme example of this is the DirectX library for .NET. Most of the
function calls in the .NET library call some unmanaged DirectX code, and
they translate the simple error numbers from the C++ API into .NET
exceptions. But in some cases - like the Render methods, which get called
dozens of times (at least) per second and have a high chance of failing -
the method simply passes the error number back instead of raising an
exception.
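In other words, those hot-path methods look roughly like this (made-up
names - just the shape of the pattern, not the real DirectX signatures):

Module ResultCodeDemo

    ' Hot-path method: called many times per second, and failure is not
    ' rare, so it reports problems through a result code instead of
    ' throwing. Zero means success; anything else is an error number.
    Function TryRender() As Integer
        ' ... do the drawing ...
        Return 0
    End Function

    Sub Main()
        Dim hr As Integer = TryRender()
        If hr <> 0 Then
            ' Handle the error number - no exception is ever thrown here.
        End If
    End Sub

End Module
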
How about timeouts? Timeouts with a retry could quite reasonably be
implemented with exceptions, but if there is an overhead then the whole
thing becomes sluggish. If the overhead is caused by the collection of the
stack trace, then I would prefer the default to be to omit the stack trace,
and allow me to specify when it is to be collected. If not, I wonder what it
is that takes all the time.

This is fine, but timeouts shouldn't happen during the normal course of
operation. If they happen, something went wrong (and took an unusually long
time to finish). You shouldn't be catching timeout exceptions 20 times a
second, which is where you'd have perf problems :-)
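A retry wrapper along these lines (air code - the call that can time out is
just a placeholder) costs next to nothing in the normal case, because no
exception is thrown unless the timeout actually happens:

Module TimeoutRetryDemo

    ' Placeholder for the real operation that can time out (e.g. a
    ' SqlCommand.ExecuteNonQuery with a CommandTimeout set).
    Sub DoWorkThatMightTimeOut()
        ' ...
    End Sub

    Sub Main()
        Dim attempts As Integer = 0
        Do
            Try
                DoWorkThatMightTimeOut()
                Exit Do   ' success - leave the loop
            Catch ex As Exception   ' e.g. SqlException on a command timeout
                attempts += 1
                If attempts >= 3 Then
                    Throw   ' give up and let the caller deal with it
                End If
            End Try
        Loop
    End Sub

End Module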

-Rob Teixeira [MVP]
 
Hi Rob

Points taken. In fact, what I tend to do as I am nearing the end of a
development is set all exceptions to break into the debugger when they
occur, so that I can see immediately if I am getting a lot/any in normal
operation.

Charles


 
Hi Sven

Regarding the JITting and caching, I would have thought that, as I am in the
debugger, the caching would occur once for the period that I was in the IDE.
It appears though that it occurs once per compile/run session.

Oh well, I am sure I shall have to live with it ;-)

Charles
 
Hi Tom,
Don't use exceptions for flow control of your program. Throwing and
Catching exceptions is very costly. So, DO use Try/Catch/Finally - Do
your best not to throw any exceptions :)

I have the same idea as you. I am writing this because I find this statement
important.
(I have nothing to add, but wanted to say that.)

Cor
 