[[!tag automated-testing coverage use-case acceptance-criteria]]

Measuring test coverage by tracking which parts of the code are executed by tests is not useless, but it usually misses the point.

Tests are not meant to show the absence of bugs, but to show what aspects of a program or system work. (See my previous rant.) If your automated tests execute 90% of your code lines, is that good enough? It doesn't really tell you what is tested and, crucially, what isn't. Those using the software don't care about code lines. They care about being able to do the things they want to do. In other words, they care about use cases and acceptance criteria.

The 10% of code not covered by your tests? If users never exercise that code, it's dead code and should probably be removed. If those lines keep crashing the program, producing wrong results, causing data loss, security problems, privacy leaks, or otherwise causing dissatisfaction, then that's a problem. How do you know which it is?

Test coverage should measure use cases and acceptance criteria. These are often not explicit, not written down, or not even known. In most projects there are many acceptance criteria and use cases that are implicit, and they only become explicit when things don't work the way users want them to.

A more realistic test coverage metric would be how many of the explicit, known, recorded use cases and acceptance criteria are exercised by automated tests.
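
As a rough sketch of what that could look like (the criteria, names, and numbers below are invented for illustration, not taken from any real project), one could keep an explicit list of known acceptance criteria, record which ones at least one automated test exercises, and report coverage over criteria rather than code lines:

    # A minimal sketch in Python: coverage over acceptance criteria
    # instead of code lines. All criteria and names are illustrative.

    KNOWN_CRITERIA = {
        "user can create a backup",
        "user can restore a backup",
        "restore refuses to overwrite existing files",
        "backup fails cleanly when the disk is full",
    }

    # Filled in by the test suite: each test declares which criteria it covers.
    TESTED_CRITERIA = {
        "user can create a backup",
        "user can restore a backup",
    }


    def criteria_coverage(known, tested):
        """Return the fraction of known criteria covered by at least one test."""
        return len(known & tested) / len(known) if known else 1.0


    if __name__ == "__main__":
        coverage = criteria_coverage(KNOWN_CRITERIA, TESTED_CRITERIA)
        print(f"acceptance criteria coverage: {coverage:.0%}")
        for criterion in sorted(KNOWN_CRITERIA - TESTED_CRITERIA):
            print(f"NOT TESTED: {criterion}")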

Use cases are not always acceptance criteria. Acceptance criteria are not always use cases. Both need to be captured during the development process, and ideally recorded as automated test cases.
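
As an illustration of the latter (everything here is invented; shutil.make_archive stands in for a real backup tool), an acceptance criterion such as "restoring a backup reproduces the original data" can be recorded directly as an automated test:

    # Illustrative sketch: an acceptance criterion recorded as an
    # automated test. shutil stands in for a real backup tool.
    import pathlib
    import shutil
    import tempfile
    import unittest


    class RestoreReproducesOriginalData(unittest.TestCase):
        """Acceptance criterion: restoring a backup reproduces the original data."""

        def test_restore_roundtrip(self):
            with tempfile.TemporaryDirectory() as tmp:
                tmp = pathlib.Path(tmp)
                src = tmp / "src"
                src.mkdir()
                (src / "note.txt").write_text("hello")

                # "Back up" the directory into an archive...
                archive = shutil.make_archive(str(tmp / "backup"), "gztar", src)

                # ...and "restore" it to a fresh location.
                dst = tmp / "restored"
                shutil.unpack_archive(archive, dst)

                self.assertEqual((dst / "note.txt").read_text(), "hello")


    if __name__ == "__main__":
        unittest.main()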

Code coverage is most useful when the main user of a piece of code is a developer, usually the developer who writes or maintains the code. In this case, coverage helps ensure all interesting parts of the code are unit tested. The unit tests capture the use cases and acceptance criteria in very fine-grained detail.