Unit testing code coverage - do you have 100% coverage?

离开以前 2020-12-13 09:05

Do your unit tests constitute 100% code coverage? Yes or no, and why or why not.

17 Answers
  • 2020-12-13 09:11

    In many cases it's not worth getting 100% statement coverage, but in some cases, it is worth it. In some cases 100% statement coverage is far too lax a requirement.

    The key question to ask is, "what's the impact if the software fails (produces the wrong result)?". In most cases, the impact of a bug is relatively low. For example, maybe you have to go fix the code within a few days and rerun something. However, if the impact is "someone might die in 120 seconds", then that's a huge impact, and you should have a lot more test coverage than just 100% statement coverage.

    I lead the Core Infrastructure Initiative Best Practices Badge project for the Linux Foundation. We do have 100% statement coverage, but I wouldn't say it was strictly necessary. For a long time we were very close to 100%, and eventually decided to do that last little percent. We couldn't really justify the last few percent on engineering grounds, though; they were added purely out of "pride of workmanship". I do get a very small extra bit of peace of mind from having 100% coverage, but really it wasn't needed. We were over 90% statement coverage just from normal tests, and that was fine for our purposes. That said, we want the software to be rock-solid, and having 100% statement coverage has helped us get there. It's also easier to get to 100% statement coverage today than it used to be.

    It's still useful to measure coverage, even if you don't need 100%. If your tests don't have decent coverage, you should be concerned. A bad test suite can still have good statement coverage, but if you don't have good statement coverage, then by definition you have a bad test suite. How much you need is a trade-off: what are the risks (probability and impact) from the parts of the software that are totally untested? By definition they're more likely to have errors (you didn't test them!), but if you and your users can live with those risks, that's okay. For many lower-impact projects, I think 80%-90% statement coverage is acceptable, with higher being better.

    On the other hand, if people might die from errors in your software, then 100% statement coverage isn't enough. I would at least add branch coverage, and maybe more, to check on the quality of your tests. Standards like DO-178C (for airborne systems) take this approach: if a failure is minor, no big deal, but if a failure could be catastrophic, much more rigorous testing is required. For example, DO-178C requires MC/DC (modified condition/decision coverage) for the most critical software (the software that can quickly kill people if it makes a mistake). MC/DC is far more stringent than statement coverage or even branch coverage.
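    To make the difference concrete, here is a minimal Java sketch (the class, condition, and values are invented for illustration, not taken from any real avionics code). Two calls are enough for statement coverage, and here even branch coverage, of the decision below, but MC/DC additionally demands tests showing that each individual condition can, on its own, flip the outcome.

    ```java
    // Hypothetical example: a decision with three conditions,
    //   (low && descendingFast) || forced
    public class AltitudeAlarm {

        public static boolean shouldAlarm(boolean low, boolean descendingFast, boolean forced) {
            if ((low && descendingFast) || forced) {
                return true;   // alarm path
            }
            return false;      // quiet path
        }

        public static void main(String[] args) {
            // Statement coverage (and here branch coverage, too) is satisfied by
            // just two calls, yet they never show that 'descendingFast' or
            // 'forced' alone can change the result:
            System.out.println(shouldAlarm(true, true, false));    // true  -> alarm path
            System.out.println(shouldAlarm(false, false, false));  // false -> quiet path

            // MC/DC requires, for each condition, a pair of tests that differ only
            // in that condition and produce different decisions. One minimal set:
            System.out.println(shouldAlarm(true,  true,  false));  // true:  baseline
            System.out.println(shouldAlarm(false, true,  false));  // false: 'low' alone flips it
            System.out.println(shouldAlarm(true,  false, false));  // false: 'descendingFast' alone flips it
            System.out.println(shouldAlarm(true,  false, true));   // true:  'forced' alone flips it
        }
    }
    ```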

  • 2020-12-13 09:12

    To all the 90% coverage testers:

    The problem with stopping there is that the 10% of code that is hard to test is also the non-trivial code that contains 90% of the bugs! That is the conclusion I have reached empirically after many years of TDD.

    And after all, this is a pretty straightforward conclusion: that 10% of code is hard to test because it reflects a tricky business problem, a tricky design flaw, or both. Those are exactly the conditions that often lead to buggy code.

    But also:

    • Code that was 100% covered but whose coverage drifts below 100% over time often pinpoints a bug, or at least a flaw.
    • 100% covered code used in conjunction with contracts is the ultimate weapon for getting close to bug-free code; code contracts and automated testing are pretty much the same thing (see the sketch after this list).
    • When a bug is discovered in 100% covered code, it is easier to fix: since the code responsible for the bug is already covered by tests, it shouldn't be hard to write new tests that cover the bug fix.
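    To illustrate the second point, here is a small, hypothetical Java sketch (the class name and numbers are invented) in which the same expectation is expressed once as a contract, via assertions inside the method, and once as an automated check from the outside.

    ```java
    // Hypothetical sketch: the same expectation written as a contract (inside)
    // and as an automated test (outside). Run with assertions enabled: java -ea Discount
    public class Discount {

        // Contract: rate must be in [0,1]; the result never exceeds the price.
        public static double apply(double price, double rate) {
            assert rate >= 0.0 && rate <= 1.0 : "precondition: rate in [0,1]";
            double discounted = price * (1.0 - rate);
            assert discounted <= price : "postcondition: discount never raises the price";
            return discounted;
        }

        // A plain main() keeps the sketch self-contained; in a real project this
        // would be a unit test exercising the same contract from the outside.
        public static void main(String[] args) {
            if (apply(100.0, 0.25) != 75.0)  throw new AssertionError("expected 75.0");
            if (apply(100.0, 0.0)  != 100.0) throw new AssertionError("expected 100.0");
            System.out.println("contract-backed checks passed");
        }
    }
    ```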
  • 2020-12-13 09:12

    A while ago I did a little analysis of coverage in the JUnit implementation, code written and tested by, among others, Kent Beck and David Saff.

    From the conclusions:

    Applying line coverage analysis to one of the best-tested projects in the world, here is what we learned:

    1. Carefully analyzing coverage of code affected by your pull request is more useful than monitoring overall coverage trends against thresholds.

    2. It may be OK to lower your testing standards for deprecated code, but do not let this affect the rest of the code. If you use coverage thresholds on a continuous integration server, consider setting them differently for deprecated code.

    3. There is no reason to have methods with more than 2-3 untested lines of code.

    4. The usual suspects (simple code, dead code, bad weather behavior, …) correspond to around 5% of uncovered code.

    In summary, should you monitor line coverage? Not all development teams do, and even in the JUnit project it does not seem to be a standard practice. However, if you want to be as good as the JUnit developers, there is no reason why your line coverage would be below 95%. And monitoring coverage is a simple first step to verify just that.

  • 2020-12-13 09:14

    I only have 100% coverage on new pieces of code that have been written with testability in mind. With proper encapsulation, each class and function can have functional unit tests that simultaneously give close to 100% coverage. It's then just a matter of adding some additional tests that cover some edge cases to get you to 100%.

    You shouldn't write tests just to increase coverage. You should be writing functional tests that check correctness and compliance. With a good functional specification that covers all the bases and a good software design, you get good coverage for free.

  • 2020-12-13 09:18

    I personally find 100% test coverage to be problematic on multiple levels. First and foremost, you have to make sure you are gaining a tangible, cost-saving benefit from the unit tests you write. In addition, unit tests, like anything else you write, are CODE. That means that, just like any other code, they must be verified for correctness and maintained. The additional time spent verifying that extra code and keeping those tests valid as the business code changes adds cost. Achieving 100% test coverage and testing your code as thoroughly as possible is a laudable goal, but achieving it at any cost is, well, often too costly.

    Error and validity checks that guard against fringe or extremely rare, but definitely possible, exceptional cases are a common example of code that does not necessarily need to be covered. The amount of time, effort (and ultimately money) that must be invested to cover such rare fringe cases is often wasteful in light of other business needs. Properties, especially with C# 3.0, are another part of the code that often does not need to be tested, since most, if not all, properties behave exactly the same way and are extremely simple (a single-statement get or set). The tremendous amount of time spent wrapping unit tests around thousands of properties could quite likely be better invested somewhere else, where a greater, more valuable, tangible return on that investment can be realized.

    Beyond simply achieving 100% test coverage, there are similar problems with trying to set up the "perfect" unit test. Mocking frameworks have progressed to an amazing degree these days, and almost anything can be mocked (if you are willing to pay, TypeMock can mock practically anything and everything, but it costs a lot). However, there are often times when the dependencies of your code were not written in a mockable way (this is actually a core problem with the vast bulk of the .NET Framework itself). Investing time to achieve the proper scope of a test is useful, but putting in excessive amounts of time to mock away everything and anything under the sun, adding layers of abstraction and interfaces to make it possible, is most often a waste of time, effort, and ultimately money.

    The ultimate goal with testing shouldn't really be to achieve the ultimate in code coverage. The ultimate goal should be achieving the greatest value per unit time invested in writing unit tests, while covering as much as possible in that time. The best way to achieve this is to take the BDD approach: Specify your concerns, define your context, and verify the expected outcomes occur for any piece of behavior being developed (behavior...not unit.)
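    As a rough illustration of that behavior-first style, here is a hypothetical JUnit 5 sketch structured as given/when/then; the ShoppingCart class is a minimal stand-in invented for the example, not a real API.

    ```java
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.ArrayList;
    import java.util.List;

    class ShoppingCartBehaviourTest {

        // Minimal stand-in so the sketch is self-contained; in a real code base
        // this would be the production class whose behavior is being specified.
        static class ShoppingCart {
            private final List<Double> prices = new ArrayList<>();
            void add(String item, double price) { prices.add(price); }
            void clear()                        { prices.clear(); }
            int itemCount()                     { return prices.size(); }
            double total()                      { return prices.stream().mapToDouble(Double::doubleValue).sum(); }
        }

        @Test
        void emptyingTheCartRemovesAllItemsAndResetsTheTotal() {
            // given: a cart holding two items
            ShoppingCart cart = new ShoppingCart();
            cart.add("book", 12.50);
            cart.add("pen", 2.00);

            // when: the user empties the cart
            cart.clear();

            // then: verify the expected, observable outcome of the behavior
            assertEquals(0, cart.itemCount());
            assertEquals(0.0, cart.total(), 0.0001);
        }
    }
    ```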

  • 2020-12-13 09:20

    No, because I spend my time adding new features that help the users rather than writing tricky, obscure tests that deliver little value. I say unit test the big things, the subtle things, and the things that are fragile.
