Test-Driven Development and Teaching to Test
Test-Driven Development is a style that says "write a test for a small bit of functionality, write code to make it pass, refactor, and repeat."
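That cycle can be sketched in a few lines of code. This is a minimal illustration, not from any particular project - the `ShoppingCart` class and its methods are hypothetical:

```python
import unittest

# Step 1: write a test for a small bit of functionality
# that doesn't exist yet (the test fails first).
class TestShoppingCart(unittest.TestCase):
    def test_total_of_two_items(self):
        cart = ShoppingCart()
        cart.add_item(price=3)
        cart.add_item(price=4)
        self.assertEqual(cart.total(), 7)

# Step 2: write just enough code to make the test pass.
class ShoppingCart:
    def __init__(self):
        self._prices = []

    def add_item(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)

# Step 3: refactor with the passing test as a safety net,
# then repeat with the next small bit of functionality.
```

The point is the rhythm: each test is deliberately small, because its job is to pull the next piece of the design into existence.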
In a way, the "test" part of the name is misleading. TDD does produce tests in the sense that they are written to verify whether something works, that an expected answer is defined in advance, and so on. But they're not the tests a tester would seek - they're written for the programmer's purposes. That they also turn out to be somewhat useful as tests is, in a sense, a happy side effect.
This became most clear to me in a group discussion including James Bach, one of the founders of the context-driven testing school. Someone described a good test that a TDD programmer might write, and James said, "I'd never write that test - it's too small." That really drove home to me how different the purposes of the tests are. For testers, the goal may be things like "efficiently acquire information about the status of the program" or "see if this area works" (and other things); for developers, the goal of the test is "drive me to create the next part of the design."
Brian Marick has talked about how TDD produces checked examples. Maybe Example-Driven Development would have been a better name.
Teaching to Test
Let me shift gears to the education system: For a variety of reasons, states have moved to using tests to assess students, teachers, and schools. This has its own challenges and controversies. (It's hard to interpret what you see without more information - if students perform poorly, do you have wonderful students crushed by poor teaching, or heroic teaching that can't overcome other problems?)
But on the ground as a parent, one of my concerns has been "teaching to test." There are two approaches I've seen:
- The teacher covers an area thoroughly, and the test acts as a random sample of what students should know in this area.
- The teacher tries to out-guess "what will the test questions be" and "how good are students at test-taking", covering nothing beyond what is necessary.
In a sense, both care about the test. But the latter form is the pejorative sense of "teaching to test." It's like the students who ask, "Will this be on the test?" rather than "Do I understand this?"
The first approach comes from a position of abundance, the other from a position of scarcity.
The same thing can happen in Test-Driven Development. Early tests drive the design, and later tests tend more to verify it. At some point, my expectation becomes "No matter what test I'd write, this code should pass."
But note what happens if I stop "teaching" too soon. I may have covered the simple cases - I've passed the tests at hand - but I may have left out the design complexity needed to handle the full requirements. Once the system is in the real world, it has to pass the "real" test of use.
Some of the TDD techniques, such as Fake It 'Til You Make It, explicitly create code that wasn't directly tested - you generalize your code from a small set of examples. This leaves a window where you may generalize incorrectly. If you stop testing too soon, you may not realize your mistake until later.
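Here's a sketch of how that window opens. The Roman-numeral example and function names are my own illustration, not from the technique's original description - the point is only that a generalization from too few examples can be wrong in ways no test at hand catches:

```python
# Fake It 'Til You Make It: the first test passes with a
# hard-coded constant that was never really "computed."
def roman_v1(n):
    return "I"

assert roman_v1(1) == "I"  # passes, but only by faking

# After a second example, we generalize - by guessing a rule.
def roman_v2(n):
    return "I" * n

assert roman_v2(1) == "I"
assert roman_v2(3) == "III"

# roman_v2(4) yields "IIII", not "IV". The generalization was
# wrong, but if we stop testing here, nothing tells us so -
# the mistake surfaces only later, in real use.
```

Stopping after `roman_v2(3)` feels like being done; the untried fourth example is exactly the material the "teaching" skipped.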
I try to take a sense of scarcity as a sign - if I feel pressed to rush through some task, I take that as a flag that I may not have tried enough examples.