Automated unit tests are a critical asset on every software project. They give you the confidence to constantly refactor and evolve your design as the code morphs and grows. Exception handling and boundary condition checks are often forgotten unless you take the time to think them through, and writing unit tests gives you the head space to do just that. Then there’s the poor sod who has to maintain your code in the future. The first place I go to understand how some code works is its unit tests.
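To make that concrete, here is a minimal sketch using Python’s built-in unittest module. The safe_divide function is hypothetical, not from any real project; the point is that writing the second test is what forces you to decide how the zero boundary should behave.

```python
import unittest


def safe_divide(numerator, denominator):
    """Divide two numbers, failing loudly at the zero boundary."""
    if denominator == 0:
        raise ValueError("denominator must be non-zero")
    return numerator / denominator


class SafeDivideTests(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(safe_divide(10, 2), 5)

    def test_zero_denominator_raises(self):
        # The boundary condition a happy-path-only author forgets.
        with self.assertRaises(ValueError):
            safe_divide(1, 0)


if __name__ == "__main__":
    # exit=False so the script can be imported or extended after running.
    unittest.main(argv=["safe_divide_tests"], exit=False, verbosity=2)
```

A future maintainer reading these two tests learns the contract of safe_divide without opening its implementation.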
Code coverage reports and statistics are fraught with danger. Coverage reports should not be used as a management tool to judge the overall quality of a solution. All they really tell you with certainty is how much code hasn’t been tested at all. Getting hung up on the code coverage percentage is self-defeating. If teams feel pressured into hitting a certain magic number, there is a real danger that quantity becomes more important than the quality of the tests, and you’ve missed the whole point. The focus should be on writing valuable unit tests that improve the quality and resilience of the overall solution; the code coverage metric is simply a side effect of that process.
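You can see what a coverage number does and doesn’t tell you with a dependency-free sketch. Coverage.py is the usual tool for real projects; here the stdlib trace module counts executed lines of a hypothetical grade function whose error branch is never exercised. The lines absent from the counts were never run, and that absence is the only certainty a coverage report offers.

```python
import trace


def grade(score):
    """Toy function with a branch the call below never reaches."""
    if score < 0:
        raise ValueError("score must be non-negative")  # never executed here
    return "pass" if score >= 50 else "fail"


# Count executed lines while calling grade once with a valid score.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(grade, 80)

# counts maps (filename, lineno) -> hits; the raise line never appears,
# so we know it is untested -- but the counts say nothing about whether
# the lines that did run were tested *well*.
executed = {lineno for _, lineno in tracer.results().counts}
print(f"{len(executed)} line(s) of grade() executed")
```

A line hit once by a weak assertion counts exactly the same as a line pinned down by a thorough test, which is why the percentage alone can’t judge quality.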
A better way to use a code coverage report is as a conversation starter with your team. If one area of the code has low coverage, find out why: make sure you understand the functionality that lives there; maybe the code is trivial, or it’s better exercised by an integration test. As the project iterations unfold, expect the total number of unit tests to climb steadily, but don’t get hung up on the numbers.