Hi Niall,
What I'm wondering is how you can be sure that your code fully meets the
requirements if you don't test it against the requirements? <<
Of course I test against requirements, but for me, that's just the first
step. After that, I start testing to find requirements bugs, design bugs,
implementation bugs, etc.
I guess the attitude of the unit test is to give you the most important
thing - a program which does what the customer wants. At the end of the day,
the customer doesn't care if you have spaghetti code, as long as the program
works. <<
If you're doing custom software development, your customer certainly cares
deeply about the internal quality of the code. The customer's staff will
have to debug, maintain and enhance the code for months and years to come.
If you're doing product development, the end-customers may not care about
the internal quality of the code, but the company for which you're
developing the product certainly cares. Once again, the company will have to
debug, maintain and enhance the product for a long time.
So somebody, somewhere, *always* cares about the code's internal quality.
So the unit test doesn't test design, but it gives you a safeguard to
facilitate design changes. <<
Here at least we agree on something!
The unit test is supposed to isolate and target specific areas of code. So
there should be a test that specifically targets the function in question,
ignoring the optimisation. <<
The developer in question thought that he *was* targeting a specific area of
code. He didn't know about the optimisation, and indeed had no way of
knowing about it without stepping through the code in a source-level
debugger.
Of course in hindsight, it should have been coded better. But you've just
been arguing that the internal code quality doesn't really matter, as long
as the unit tests are satisfied. You can't have it both ways.
I wouldn't write code that catches just a plain exception unless it was
planning to wrap the exception in a more meaningful one and re-throw. <<
There are various reasons to catch System.Exception, including the one you
mentioned. Another reason is to reverse a transaction after any exception.
Yet another reason is that developers often put in a System.Exception
catch during testing, and forget to remove it. And yet another reason is
that developers sometimes don't realise that you shouldn't catch
System.Exception without rethrowing it. Finally, a few of the .NET base
class methods suppress all exceptions for "security" reasons, and just fail
silently.
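To illustrate the transaction case, here is a rough sketch of that pattern.
The TransferFunds helper and the account parameters are invented purely for
the illustration - the point is that the catch-all rolls back and then
rethrows, so the caller still sees the original exception:

using System;
using System.Data.SqlClient;

class TransferExample
{
    static void Transfer(SqlConnection connection, int fromAccount,
                         int toAccount, decimal amount)
    {
        SqlTransaction transaction = connection.BeginTransaction();
        try
        {
            TransferFunds(transaction, fromAccount, toAccount, amount);
            transaction.Commit();
        }
        catch (Exception)
        {
            // Undo any partial work, whatever the exception type was...
            transaction.Rollback();
            // ...then rethrow so the caller still sees the original exception.
            throw;
        }
    }

    // Hypothetical helper - the real database work would go here.
    static void TransferFunds(SqlTransaction transaction, int fromAccount,
                              int toAccount, decimal amount)
    {
    }
}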
Whatever the reason, you won't find this type of requirements/coding bug
unless you step through the code and find that an unexpected exception type
was being silently thrown and caught.
Regards,
Mark
----
Author of "Comprehensive VB .NET Debugging"
http://www.apress.com/book/bookDisplay.html?bID=128
Niall said:
This is perhaps the crux of where we differ - your attitude seems to be that
your main responsibility is to ensure that the code you're testing meets the
stated requirements. In my case, that's the very least of what I'm looking
for - indeed, I am usually surprised if my code has this type of bug, so I
don't tend to spend the majority of my time looking for it.
Instead, the majority of my time is spent looking for omissions or mistakes
in the requirements, and for design bugs and implementation mistakes.
Stepping through my code is one essential technique here, and so is the use
of code reviews. Studies show that each of these techniques finds a
different set of bugs.
What do you mean by implementation mistakes? Do you mean mistakes such that
the code doesn't do what it's supposed to do, or mistakes such as slower
code, messy code, etc? What I'm wondering is how you can be sure that your
code fully meets the requirements if you don't test it against the
requirements?
I agree that bad design and slow code aren't really the domain of unit
testing. With the performance thing, if a particular function has to run in
less than a certain amount of time, you can always write a test that fails
if it takes longer. I guess the attitude of the unit test is to give you the
most important thing - a program which does what the customer wants. At the
end of the day, the customer doesn't care if you have spaghetti code, as
long as the program works. On the other hand, if you have a great design,
but your program doesn't do what they want, then they won't be pleased.
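For example, a crude timing test might look something like this - the
OrderParser class and the 500ms budget are just made up for the sketch:

using System;
using System.Threading;
using NUnit.Framework;

// Stand-in for the real class under test - invented for this sketch.
public class OrderParser
{
    public void ParseOrders(string path)
    {
        Thread.Sleep(100);   // pretend to do some real work
    }
}

[TestFixture]
public class PerformanceTests
{
    [Test]
    public void ParseOrdersCompletesWithinBudget()
    {
        DateTime start = DateTime.Now;

        new OrderParser().ParseOrders(@"c:\orders.xml");

        double elapsedMs = (DateTime.Now - start).TotalMilliseconds;
        Assert.IsTrue(elapsedMs < 500,
            "ParseOrders took " + elapsedMs + "ms, but the budget is 500ms");
    }
}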
I'm not advocating a mindset of "It works, meets the requirements fully, so
lock it away and it doesn't matter if it has a bad design." In fact, it's a
bit the opposite. Once you have the unit tests solidly down, then you know
your code meets the requirements, and you know that if something breaks, you
will find out about it. So at any time, anyone can come along and change the
code to what they think is a better design, faster, etc. So the unit test
doesn't test design, but it gives you a safeguard to facilitate design
changes.
To their surprise, the breakpoint was never hit and the test completed
successfully. A quick investigation with the debugger showed that a function
a few steps up the call chain had an optimisation that allowed it sometimes
to skip unnecessary work. In this case, it skipped the new code. Treating
the code as a black box just didn't work - the developer put in some inputs,
got correct outputs and entirely missed the fact that his new
code wasn't being executed at all.
This isn't the way I write unit tests, and as far as I've seen, it's not the
way that they're supposed to be written, either. The unit test is supposed
to isolate and target specific areas of code. So there should be a test that
specifically targets the function in question, ignoring the optimisation. As
for the optimisation in the other method - there should be a unit test that
specifically targets that as well. Unit tests which test at a higher level
are OK too - it's probably a good idea to watch the behaviour of the object
from the outside as well - but like I said before, unit testing is
under-the-hood testing.
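As a sketch of what I mean - the Formatter and Processor classes below are
invented purely to illustrate the idea - you'd have one test that hits the
new low-level code directly, and separate tests for the caller's
optimisation:

using NUnit.Framework;

// Formatter stands in for the newly added low-level code; Processor stands
// in for the caller with the skip-work optimisation.
public class Formatter
{
    public string Format(string input)
    {
        return "[" + input + "]";
    }
}

public class Processor
{
    private Formatter formatter = new Formatter();

    public string Process(string input)
    {
        // Optimisation: empty input needs no formatting at all.
        if (input.Length == 0)
            return "";
        return formatter.Format(input);
    }
}

[TestFixture]
public class TargetedTests
{
    [Test]
    public void FormatterIsTestedDirectly()
    {
        Assert.AreEqual("[abc]", new Formatter().Format("abc"));
    }

    [Test]
    public void ProcessorOptimisationSkipsFormattingForEmptyInput()
    {
        Assert.AreEqual("", new Processor().Process(""));
    }

    [Test]
    public void ProcessorStillFormatsNonEmptyInput()
    {
        Assert.AreEqual("[abc]", new Processor().Process("abc"));
    }
}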
Here's another example, closer to home. The following code shows a nasty
bug:
bool AccessGranted = true;
try
{
    // See if we have access to c:\test.txt
    new FileStream(@"c:\test.txt",
                   FileMode.Open,
                   FileAccess.Read).Close();
}
catch (SecurityException x)
{
    // access denied
    AccessGranted = false;
}
catch (...)
{
    // something else happened
}
If the CLR grants access to the test file in this example, everything works
fine. If the CLR denies access, everything works fine as well because a
SecurityException is thrown. But what happens if the discretionary access
control list (DACL) on the file doesn't grant access? Then a different
exception is thrown, but AccessGranted will still be true because of the
optimistic assumption made on the first line of code. The bug was really in
the requirements as well as in the code, because they didn't state what
should happen if the CLR granted access, but the file system didn't.
Stepping through this code would have shown that a completely different
exception was being thrown when the DACL denied access, and therefore would
have pointed to the omission in requirements as well as finding the bug.
I think this could have been coded better. In general, when you have code
that is trying to discover whether you can do a certain thing, it should
presume the negative to start with. Especially for questions like "Do I have
the security rights to do <xyz>?", the answer should start out as false.
That way, the only time you ever think you have the right to do the action
is when the code that recognises the affirmative has actually run.
I'm not sure if you mean the base Exception type by the "..." in your code.
I wouldn't write code that catches just a plain exception unless it was
planning to wrap the exception in a more meaningful one and re-throw. If you
didn't mean to catch the base Exception type there, then the DACL exception
wouldn't be caught, and the unit test would show an error. This is the way
NUnit behaves - if an assertion fails, you get a test failure; if an
exception escapes from the test, you get a test error.
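To make the distinction concrete, here is a minimal sketch - the thrown
exception just simulates something like the DACL case escaping from the
test:

using System;
using NUnit.Framework;

[TestFixture]
public class FailureVersusErrorExample
{
    [Test]
    public void ReportedAsFailure()
    {
        // A failed assertion shows up in NUnit as a test *failure*.
        Assert.IsTrue(1 + 1 == 3, "this assertion is meant to fail");
    }

    [Test]
    public void ReportedAsError()
    {
        // An exception escaping the test - for instance a DACL denial
        // surfacing as an UnauthorizedAccessException - shows up in NUnit
        // as a test *error* rather than a failure.
        throw new UnauthorizedAccessException("simulated DACL denial");
    }
}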
Personally, I would have written the function like this:
bool AccessGranted = false;
try
{
    new FileStream(@"c:\test.txt",
                   FileMode.Open,
                   FileAccess.Read).Close();
    AccessGranted = true;
}
catch (SecurityException)
{
}
But, of course, an even more powerful tool than unit testing is hindsight...
Niall