Mocks and Tell Don’t Ask | Ian Cooper
One of our alumni, Karl, recently blogged a request for folks to stop using mocks. Once upon a time I also made clear that I had a significant distrust of mocks. I’ve mellowed on that position over time, so I thought that I should explain why I have changed my opinion.

Perhaps it would be useful to give a summary of the discussion around this. Whilst I don’t think that you need to choose between being a classicist or a mockist, I think that Martin Fowler’s article Mocks Aren’t Stubs is still a good starting point for understanding the debate, because it talks about the different types of Test Double, as well as outlining approaches to their usage. For my part, however, Martin does not highlight Meszaros’s key distinction between replacing indirect inputs and monitoring indirect outputs. Both mention the danger of Over-specified Software from Mocks, and Meszaros cautions us to Use the Front Door. On the other side, the proponents of mocks have always pointed to their admonition to Mock Roles Not Objects {PDF}. Growing Object Oriented Software, Guided by Tests (GooS) is probably the best expression of their technique today.

The key to understanding mocks, for me, was that the motivation behind them, revealed in GooS, is to support the Tell Don’t Ask principle. Tim Mackinnon notes that:

In particular, I had noticed a tendency to add “getter” methods to our objects to facilitate testing. This felt wrong, since it could be seen as violating object-oriented principles, so I was interested in the thoughts of the other members. The conversation was quite lively—mainly centering on the tension between pragmatism in testing and pure object-oriented design.

For me, understanding this goal was something of a revelation about how mock objects are meant to be used. Previously I had heard arguments centered on the idea that a unit test’s need for isolation is the driver for using mocking frameworks; now I can see mocks as tools that help us avoid asking an object about its internal state.

Within any test we tend to follow the same pattern: set up the pre-conditions for the test, exercise the code under test, and then check the post-conditions. It’s the latter step that causes the pain for developers working towards a Tell Don’t Ask principle – how do I confirm the post-conditions without asking objects for their state?
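To make that pattern concrete, here is a minimal sketch using a hypothetical Account class (the names are mine, not from the article). It follows the set-up / exercise / check shape, and shows the uncomfortable move Tim Mackinnon describes above: a getter added purely so the test can ask for state.

```python
# A hypothetical Account with a getter added purely so the test can
# check post-conditions -- the tendency the article argues against.
class Account:
    def __init__(self):
        self._balance = 0

    def deposit(self, amount):
        self._balance += amount

    def balance(self):               # getter exposed for the test's benefit
        return self._balance


def test_deposit():
    account = Account()              # set up the pre-conditions
    account.deposit(100)             # exercise the code under test
    assert account.balance() == 100  # check post-conditions by asking for state


test_deposit()
```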

This is a concern because objects should encapsulate state and expose behavior (whereas data structures have state but no behavior). This helps avoid feature envy, where an object’s collaborator holds the behavior of that object, making a series of calls to discover its state instead of asking the object to carry out the behavior on that state itself. This leads to coupling, which makes our software more rigid (difficult to change) and brittle (likely to break when we do change it).
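A small sketch of the smell, with hypothetical names: in the first version the behavior (summing prices) lives in the collaborator, which interrogates the Order’s state; in the second, the object carries out the behavior on its own state.

```python
# Feature envy: the caller digs the state out of Order and does the work itself.
class Order:
    def __init__(self, prices):
        self.prices = prices          # state exposed for outsiders to query


def total_with_envy(order):
    return sum(order.prices)          # behavior lives outside the object


# Tell Don't Ask: the behavior moves onto the object that owns the state.
class OrderTellDontAsk:
    def __init__(self, prices):
        self._prices = prices         # state stays encapsulated

    def total(self):
        return sum(self._prices)      # we tell the object to do the work
```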

But this leads to the question: how do we confirm the post-conditions if we have no getters?

Mocks come into play when we decide that what we confirm is the behavior of the class under test through its interaction with its collaborators, instead of checking state through getters. This is the meaning of behavior-based testing over state-based testing, although you might want to think of it as message-based testing, because what you really care about are the messages that you send to your collaborators.

Now when we talk about confirming behavior, recognize that in a Tell Don’t Ask style the objects we are interested in confirming our interaction with are the indirect outputs of the class under test, not the indirect inputs (collaborators that we get state from). In an ideal world of course, we won’t have many indirect inputs (we want Tell Don’t Ask, remember), but most people will tend to be realistic about how far they can take a Tell Don’t Ask approach. Those indirect inputs may continue to be fakes, stubs, or even instances of collaborators created with TestDataBuilders or ObjectMothers. Of course the presence of a lot of them in our test setup is a smell that we have not followed a Tell Don’t Ask approach.
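The distinction can be shown in one test, again with hypothetical names: the rate provider is an indirect input, so we stub it to feed state in; the ledger is an indirect output, so we mock it and verify the message sent out.

```python
from unittest.mock import Mock

class PriceCalculator:
    def __init__(self, rates, ledger):
        self._rates = rates       # indirect input: we ask it for state
        self._ledger = ledger     # indirect output: we tell it what happened

    def price(self, amount, currency):
        converted = amount * self._rates.rate_for(currency)
        self._ledger.record(converted)


rates = Mock()
rates.rate_for.return_value = 2.0      # stub: replace the indirect input
ledger = Mock()                        # mock: monitor the indirect output

PriceCalculator(rates, ledger).price(10, "GBP")

# We verify only the outgoing message, not the calculator's internal state.
ledger.record.assert_called_once_with(20.0)
```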

It is also likely that many of our concerns about over-specified software with mocks come from the coupling caused by indirect inputs rather than indirect outputs: indirect inputs are likely to reveal the most about the implementation of their collaborator. By contrast, where we tell an object to do something for us, we let it hide the implementation details from us. Of course, we need to beware of feature envy in the form of calling a lot of setters instead of simply calling a method on the collaborator; this chattiness is again a feature-envy design smell.

I have come to see that an over-specified test when using mocks is not a problem with mocks themselves, but with mocks surfacing the problem that the class under test is not telling collaborators to perform actions at a granular enough level, or is asking too many questions of its collaborators.

Finally, one question that often arises here is: if we are just handing off to other classes, how do we know that the state was eventually correct – that the algorithm calculated the right value, or that we stored the correct information? One catch-all option is to use a visitor, passing it in so that the class under test can populate it with the state that you need to observe. Other options include observers that listen for changes. Of course, at some point you need to consume the data from the visitor or the event sent to the observer – but the trick here is to use a data structure, where it is acceptable to expose state (but not to have behavior).
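A minimal sketch of the visitor option, with hypothetical Basket and TotalsVisitor names: the class under test keeps its state private and pushes what we need into a passed-in collector, which is a plain data structure and so may expose state.

```python
class TotalsVisitor:
    """A data structure: exposing state here is acceptable."""
    def __init__(self):
        self.totals = []

    def visit(self, value):
        self.totals.append(value)


class Basket:
    def __init__(self, items):
        self._items = items           # no getter for this

    def accept(self, visitor):
        # Tell the visitor about each item; the test reads the visitor,
        # not the Basket.
        for item in self._items:
            visitor.visit(item)


visitor = TotalsVisitor()
Basket([5, 10]).accept(visitor)
assert visitor.totals == [5, 10]      # state read from the data structure
```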

CQRS also has huge payoffs for a Tell Don’t Ask approach. Because your write side rarely needs to ‘get’ the state, you can defer the question of where we consume the data – the read side does it, in the simplest fashion by just creating data structures directly. Event Sourcing also has benefits here, because you can confirm the changes made to the object’s state by examining the events raised by the class under test instead of asking the object for its state.
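A sketch of that Event Sourcing idea, with names of my own choosing: the aggregate records each event it raises, its balance stays private, and the test asserts on the raised events rather than on state.

```python
# Hypothetical event-sourced aggregate: state changes are applied via events,
# and the test inspects the events rather than asking for the balance.
class EventSourcedAccount:
    def __init__(self):
        self.uncommitted_events = []
        self._balance = 0             # private; no getter needed by the test

    def deposit(self, amount):
        self._apply(("Deposited", amount))

    def _apply(self, event):
        name, amount = event
        if name == "Deposited":
            self._balance += amount
        self.uncommitted_events.append(event)


account = EventSourcedAccount()
account.deposit(50)
assert account.uncommitted_events == [("Deposited", 50)]
```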