Avoid Testing Implementation Details, Test Behaviours | Ian Cooper


Every so often I return to Kent Beck’s Test-Driven Development. I honestly believe it to be one of the finest software development books ever written. What I love about the book is its simplicity. There is a sparseness to it that deceives, as though it were a lightweight exploration of the field. But continued reading as you progress with TDD reveals that it is fairly complete in its insights. Where later works have added pages and ideas, they have often lessened the technique, burdening it with complexity born of misunderstandings of its power. I urge any of you who have practiced TDD for a while to re-read it regularly.

For me, one of the key ideas of TDD that is often overlooked is expressed in the cycle of Red, Green, Refactor. First we write a failing test, then we implement as quickly and dirtily as we can, and then we make it elegant. In this blog post I want to talk about some of the insights of the second and third steps that get missed. (Although you should remember the first step too: tests don’t have tests, so you need to see them fail to prove them.)

A lot of folks stall when they come to implement, because they try to implement well, thinking about good OO, SOLID principles and so on. Stop! The goal is to get to green as soon as possible. Don’t try to build a decent solution at this point; just script out the quickest solution to the problem you can get to. Cut and paste if you can, copy algorithms from CodeProject or StackOverflow if you can.
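To make that concrete, here is a hypothetical sketch (JUnit 5, with a made-up PriceCalculator; none of the names here are from Beck’s book): the test comes first, and the fastest route to green is to hard-code the expected answer, what the book calls “Fake It”.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Red: a failing test for a requirement.
class PriceCalculatorTest {
    @Test
    void discountsOrdersOfOneHundredOrMoreByTenPercent() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.priceFor(100.0), 0.001);
    }
}

// Green: the quickest, dirtiest thing that passes.
class PriceCalculator {
    double priceFor(double orderTotal) {
        return 90.0; // hard-coded; "Fake It" until the refactoring step
    }
}
```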

From Test-Driven Development

“Stop. Hold on. I can hear the aesthetically inclined among you sneering and spitting. Copy-and-paste reuse? The death of abstraction? The killer of clean design?

If you’re upset, take a cleansing breath. In through the nose … hold it 1, 2, 3 … out through the mouth. There. Remember, our cycle has different phases (they go by quickly, often in seconds, but they are phases):

1. Write a test.

2. Make it compile.

3. Run it to see that it fails.

4. Make it run.

5. Remove duplication.”

It’s that final step (which is our refactoring step) in which we create Clean Code. Not before, but after.

“Solving ‘clean code’ at the same time that you solve ‘that works’ can be too much to do at once. As soon as it is, go back to solving ‘that works,’ and then ‘clean code’ at leisure.”
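Continuing the hypothetical PriceCalculator sketch from above: the 90.0 hard-coded in the implementation duplicates the 90.0 expected by the test, and removing that duplication is exactly what generalises the code.

```java
// Refactor: remove the duplication between test and code by deriving
// the answer instead of hard-coding it. The test stays green throughout.
class PriceCalculator {
    private static final double DISCOUNT_THRESHOLD = 100.0;
    private static final double DISCOUNT_RATE = 0.10;

    double priceFor(double orderTotal) {
        if (orderTotal >= DISCOUNT_THRESHOLD) {
            return orderTotal * (1 - DISCOUNT_RATE);
        }
        return orderTotal;
    }
}
```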

Now there is an important implication here that is often overlooked. We don’t have to write tests for the classes that we refactor out in that last step. They will be internal to our implementation, and are already covered by the first test. We do not need to add new tests for the next level – unless we feel we do not know how to navigate to the next step.
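For instance, still within the hypothetical sketch above, we might extract a DiscountPolicy while refactoring. It is package-private, an implementation detail, and the existing PriceCalculatorTest already exercises it, so no new test is required:

```java
// Extracted during refactoring; package-private, not part of the public API.
// Covered by the existing PriceCalculatorTest: no new tests needed.
class DiscountPolicy {
    double apply(double orderTotal) {
        return orderTotal >= 100.0 ? orderTotal * 0.90 : orderTotal;
    }
}

class PriceCalculator {
    private final DiscountPolicy discountPolicy = new DiscountPolicy();

    double priceFor(double orderTotal) {
        return discountPolicy.apply(orderTotal);
    }
}
```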

From Test-Driven Development again:

“Because we are using Pairs as keys, we have to implement equals() and hashCode(). I’m not going to write tests for these, because we are writing this code in the context of a refactoring. If we get to the payoff of the refactoring and all of the tests run, then we expect the code to have been exercised. If I were programming with someone who didn’t see exactly where we were going with this, or if the logic became the least bit complex, I would begin writing separate tests.”
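For reference, the Pair in question looks roughly like this in the book (a sketch from memory, not a verbatim reproduction): equals() and hashCode() exist only so Pair can serve as a hash-table key, and the money tests that drove the refactoring already exercise both.

```java
// Roughly the Pair Beck describes: two currency names used as a hash key.
class Pair {
    private final String from;
    private final String to;

    Pair(String from, String to) {
        this.from = from;
        this.to = to;
    }

    @Override
    public boolean equals(Object object) {
        if (!(object instanceof Pair)) return false;
        Pair pair = (Pair) object;
        return from.equals(pair.from) && to.equals(pair.to);
    }

    @Override
    public int hashCode() {
        return 0; // crude but correct; fine until a profiler says otherwise
    }
}
```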

Code developed in the context of refactoring does not require new tests! It is already covered, and safe refactoring techniques mean we should not be introducing speculative change, just cleaning up the rough implementation we used to get to green. At this point further tests help you steer, but they come at a cost.

The outcome of this understanding is that a test-case-per-class approach fails to capture the ethos of TDD. Adding a new class is not the trigger for writing tests. The trigger is implementing a requirement. So we should test outside-in (though I would recommend using ports and adapters, and making the ‘outside’ the port), writing tests to cover the use cases (scenarios, examples, Given-When-Thens, etc.), and only writing tests to cover the implementation details as and when we need them to better understand the refactoring of the simple implementation we started with.
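As a hypothetical illustration of making the ‘outside’ the port (all names here are invented for the example): the test speaks only to an interface that expresses a use case, so everything behind it remains free to be refactored.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// The port: a use case expressed as an interface.
interface PlaceOrder {
    boolean place(String sku, int quantity);
}

// The test is written against the port, not the classes behind it.
class PlaceOrderTest {
    @Test
    void acceptsAnOrderForAPositiveQuantity() {
        PlaceOrder placeOrder = new OrderService();
        assertTrue(placeOrder.place("SKU-1", 2));
    }
}

// The implementation behind the port; internal, and free to be reshaped
// into as many collaborating classes as refactoring suggests.
class OrderService implements PlaceOrder {
    @Override
    public boolean place(String sku, int quantity) {
        return quantity > 0; // simplest thing that passes; refine behind the port
    }
}
```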

A positive outcome from this will be that much of our implementation will be internal or private, and never exposed outside our assembly. That will lower the coupling of our solution and make it easier for us to effect change.

From Test-Driven Development again:

“In TDD we use refactoring in an interesting way. Usually, a refactoring cannot change the semantics of the program under any circumstances. In TDD, the circumstances we care about are the tests that are already passing.”

When we refactor, we don’t want to break tests. If our tests know too much about our implementation, that will be difficult, because changes to our implementation will necessarily result in us re-writing tests, at which point we are not refactoring. We would say that we have over-specified through our tests. Instead of assisting change, our tests have now begun to hamper it. As someone who has made this mistake, I can vouch for the fact that it is possible to write too many unit tests against details. If we follow TDD as originally envisaged, though, the tests will naturally fall mainly on our public interface, not the implementation details, and we will find it far easier to meet the goal of changing the implementation details (safe change) and not the public interface (unsafe change).
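A hypothetical contrast, again with invented names: the test below pins only observable behaviour and so survives refactoring; the kind of test to avoid is described in the trailing comment.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class Basket {
    private double total = 0.0;

    void add(String sku, int quantity, double unitPrice) {
        total += quantity * unitPrice;
    }

    double total() {
        return total;
    }
}

class BasketTest {
    // Behavioural: asserts only what a caller of the public interface can see.
    // Basket can switch to storing line items, or delegate to a calculator,
    // without this test changing.
    @Test
    void totalsTheLinesAdded() {
        Basket basket = new Basket();
        basket.add("SKU-1", 2, 10.0);
        assertEquals(20.0, basket.total(), 0.001);
    }

    // Over-specified (avoid): a test that reaches into Basket to assert it
    // holds a particular list of line items, or that verifies a mocked
    // collaborator was called in a particular order, couples the test to
    // exactly the details refactoring needs to be free to change.
}
```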

Of course, this observation, that many TDD practitioners have overlooked this idea, formed the kernel of Dan North’s original BDD missive, and it is why BDD focused on scenarios and outside-in TDD.