This brings us to a second popular automated safety blanket — unit tests.
I get the impression that in weakly typed codebases, unit tests are used in large part as an ad hoc substitute for the type checker, which is truly horrifying: they are ad hoc, require manual maintenance, and can easily become incomplete[1]. Unit tests should not be used this way.
From my experience, there’s only one class of bugs that unit tests are the right tool for — algorithmic code. Most of the code we write is simply glue code, plumbing code, moving data through the system, maybe doing some collection crunching. That stuff is (or can and should be) usually so straightforward and readable that tests bring nothing to the table — the chance of error in simple code is so low that the ROI of writing tests (i.e. how many real bugs they catch, how many hours of downtime they prevent) will be extremely low. But if you have a complex algorithm or data structure that needs to handle edge cases and various conditions, that’s legitimately unit test territory. A couple of rules of thumb that I’ve found useful for deciding whether to write unit tests:
Did you try calling the function on the console with various inputs to test correctness? Then these calls should be in a unit test suite for the function, and not on the console.
When making changes to the code, do you feel queasy about fixing your use case but breaking some other one you don’t fully understand? Then either you need to reduce the complexity of your code, or you need a unit test suite for it — the former being far superior to the latter.
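To make the first rule concrete, here’s a minimal sketch in Python. The function and its cases are hypothetical, not from the post — but they show the pattern: an algorithmic function with genuine edge cases (the kind of code that earns tests), and the exploratory calls you’d otherwise type into a console captured as a test suite instead.

```python
def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals; input need not be sorted."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps (or touches) the previous interval: extend it.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged


# The calls you would have made on the console, preserved as tests.
def test_merge_intervals():
    assert merge_intervals([]) == []
    assert merge_intervals([[1, 2], [4, 5]]) == [[1, 2], [4, 5]]   # disjoint
    assert merge_intervals([[3, 6], [1, 4]]) == [[1, 6]]           # unsorted overlap
    assert merge_intervals([[1, 2], [2, 3]]) == [[1, 3]]           # touching endpoints


test_merge_intervals()
```

Note what the tests exercise: empty input, unsorted input, touching endpoints — precisely the conditions you’d poke at interactively and then lose. A trivial glue function, by contrast, would have nothing comparable worth pinning down.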
[1] I realize that there exist “test coverage” tools to ensure completeness. That’s part of the problem — if the type system were brought in to do its job, “test coverage” would be a meaningless metric, because unit tests would then be desirable for only a small subset of the code. So the existence of test coverage tools is a symptom of incorrect use of tests.