[[!tag testing software-development]]
Edsger Dijkstra was one of the most influential computer scientists in the early days of the field. One of his many quotes is about testing:
> Testing shows the presence, not the absence of bugs
(See page 16 of the linked PDF. Note that the context is formal correctness in software development.)
This is used from time to time to claim that testing is pointless. I object to that sentiment. Dijkstra's quip is funny, but for general software development it's only partly correct that testing can't show the absence of bugs. More importantly, testing is meant to show the absence of bugs in the cases that are tested.
Testing is about showing what works.
Consider the physical world, from the point of view of a physicist. What happens if you drop an apple? It falls down. The physicist needs to answer the question: does all fruit always fall down?
To answer that, it's not enough to drop one apple one time. Do pears also fall down? Do all apples fall down? Every apple, every time? Does it happen at sea level? At the top of a mountain? When it rains? When there are a lot of people watching? When there's betting about what happens?
To answer the question, the physicist needs to drop a very large number of different fruit, a very large number of times, in all sorts of environments and contexts. Eventually, they form a hypothesis that yes, all fruit always falls down. After a long time, after many other physicists have tried everything they can think of to prove the hypothesis wrong, the community of physicists accepts the hypothesis as true: all fruit always falls down when dropped.
Until someone goes to outer space. And then no fruit falls. It just floats there looking smug.
The point of testing software is not to do one test, once, and assume the code always works. The point is to consider all the cases in which the code under test needs to work, try it in all those ways, and make sure it does what is expected of it.
Tests don't tell you that code works in cases that aren't tested. Tests tell you code works in the cases that are tested. If you want it to work in some other case, you add a test, or expand an existing test to cover the case you're interested in.
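To make this concrete, here is a minimal sketch in Python, using a hypothetical `drop` function to stand in for the code under test (the function, the fruits, and the environments are all made up for illustration). A parametrized test covers the combinations we care about, and when a new case starts to matter, it gets a test of its own:

```python
import pytest

def drop(fruit, environment):
    # Hypothetical code under test: fruit falls unless there is no gravity.
    return "floats" if environment == "outer space" else "falls"

# One test, many cases: every fruit, in every environment we care about.
@pytest.mark.parametrize("fruit", ["apple", "pear", "orange"])
@pytest.mark.parametrize("environment", ["sea level", "mountain top", "rain"])
def test_fruit_falls(fruit, environment):
    assert drop(fruit, environment) == "falls"

# A case we now care about gets its own explicit test.
def test_orange_floats_in_outer_space():
    assert drop("orange", "outer space") == "floats"
```

The tests say nothing about environments that aren't in the list; they only tell you the code works for the combinations that are. Caring about a new environment means adding it to the list, or writing a new test.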
If the suite of tests misses a use case, and the code happens not to work in that particular case, that does not mean testing is useless, or that there's no point in even trying. If you don't have a test for dropping an orange in outer space, you don't know if oranges fall in outer space. It doesn't mean physics is pointless. If you care about dropping fruit in outer space, you go to outer space and try it.