Do your unit tests constitute 100% code coverage? Yes or no, and why or why not.
What I do, when I get the chance, is insert statements on every branch of the code that can be grepped for and that record when they've been hit, so that I can compare afterwards and see which branches have not been hit. This is a bit of a chore, so I'm not always good about it.
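As a rough sketch of what I mean (the COVERAGE_HIT macro and the little routine below are made up for illustration, not taken from any real project), a greppable marker can be as simple as this:

#include <cstdio>

// Hypothetical greppable marker: prints its location the first time that
// particular branch is reached.
#define COVERAGE_HIT()                                               \
    do {                                                             \
        static bool s_hit = false;                                   \
        if (!s_hit) {                                                \
            s_hit = true;                                            \
            std::fprintf(stderr, "hit %s:%d\n", __FILE__, __LINE__); \
        }                                                            \
    } while (0)

// Made-up routine, purely to show a marker on every branch.
int Classify(int x)
{
    if (x < 0) {
        COVERAGE_HIT();   // negative branch
        return -1;
    }
    if (x == 0) {
        COVERAGE_HIT();   // zero branch
        return 0;
    }
    COVERAGE_HIT();       // positive branch
    return 1;
}

int main()
{
    Classify(5);
    Classify(-3);
    // The zero branch never runs, so its marker never prints; grep the
    // source for COVERAGE_HIT and compare against the printed "hit" lines
    // to see which branches were missed.
    return 0;
}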
I just built a small UI app for use in charity auctions that uses MySQL as its DB. Since I really, really didn't want it to break in the middle of an auction, I tried something new.
Since it was in VC6 (C++ + MFC), I defined two macros:
#define TCOV ASSERT(FALSE)
#define _COV ASSERT(TRUE)
and then I sprinkled
TCOV;
throughout the code, on every separate path I could find, and in every routine.
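To make "every separate path" concrete, here is a made-up routine (not from the actual auction app) with the markers planted; I've substituted the standard assert for MFC's ASSERT so the sketch compiles on its own:

#include <cassert>

// Stand-ins for the VC6/MFC macros, so this builds outside MFC.
#define TCOV assert(false)   // untested path: trips the assert when first reached
#define _COV assert(true)    // reviewed path: does nothing

// Hypothetical routine, invented for illustration.
int ClassifyBid(int amount, int highBid)
{
    if (amount <= 0) {
        TCOV;                // invalid-bid path
        return -1;
    }
    if (amount > highBid) {
        TCOV;                // new-high-bid path
        return 1;
    }
    TCOV;                    // losing-bid path
    return 0;
}

int main()
{
    // Run under the debugger: this halts at the TCOV on the new-high-bid
    // path, which is the cue to inspect that branch and flip it to _COV.
    return ClassifyBid(100, 50);
}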
Then I ran the program under the debugger, and every time it hit a TCOV, it would halt. I would look at the code for any obvious problems, and then edit it to _COV, then continue. The code would recompile on the fly and move on to the next TCOV.
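Continuing the made-up routine from above, after that one debugger pass with a winning bid the source ends up looking something like this; the lines still reading TCOV are exactly the paths that have never been driven:

#include <cassert>

#define TCOV assert(false)   // still-untested path
#define _COV assert(true)    // path that has been hit, reviewed, and cleared

// Same hypothetical routine after one pass with a winning bid.
int ClassifyBid(int amount, int highBid)
{
    if (amount <= 0) {
        TCOV;                // invalid-bid path: never taken yet
        return -1;
    }
    if (amount > highBid) {
        _COV;                // hit, reviewed, edited on the fly, and continued
        return 1;
    }
    TCOV;                    // losing-bid path: never taken yet
    return 0;
}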
In this way, I slowly, laboriously, eliminated enough TCOV statements so it would run "normally".
After a while, I grepped the code for TCOV, and that showed what code I had not tested. Then I went back and ran it again, making sure to test more branches I had not tried earlier.
I kept doing this until there were no TCOV statements left in the code.
This took a few hours, but in the process I found and fixed several bugs. There is no way I could have had the discipline to make and follow a test plan that would have been that thorough. Not only did I know I had covered all branches, but it made me look at every branch while it was running - a very good kind of code review.
So, whether or not you use a coverage tool, this is a good way to root out bugs that would otherwise lurk in the code until a more embarrassing time.