Non-trivial, real-world lessons on writing unit tests | Patrick Smacchia
I began writing tests around 8 years ago, and I must say this revolutionized my way of developing code. In this blog post, I'd like to group and share several non-trivial lessons gained over all these years of practicing real-world testing.
There must be a one-to-one correspondence between tested classes and test fixture classes. The only exception is very simple classes, such as pure immutable entities (i.e. simple constructors, simple getters, and readonly backing fields): these can be tested indirectly, through the test fixture classes of the more complex classes that use them.
The one-to-one correspondence stated above makes the following operations easy and straightforward:
- writing a new test suite for a newly created class,
- writing some new tests for an existing class,
- refactoring tests when refactoring some classes,
- identifying which test to run to test a class.
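As a minimal sketch of this convention, assuming NUnit and a hypothetical `OrderValidator` class in the product assembly, the matching fixture in the tests assembly is named after it:

```csharp
using NUnit.Framework;

// Hypothetical tested class, living in the product assembly.
public class OrderValidator
{
    public bool IsValid(decimal amount) => amount > 0m;
}

// The single test fixture for that class, living in the tests
// assembly: one fixture per tested class, named after it.
[TestFixture]
public class OrderValidatorTests
{
    [Test]
    public void IsValid_RejectsNonPositiveAmounts()
    {
        var validator = new OrderValidator();
        Assert.IsFalse(validator.IsValid(0m));
        Assert.IsTrue(validator.IsValid(10m));
    }
}
```

With this naming scheme, finding the tests for `OrderValidator` is simply a matter of opening `OrderValidatorTests`.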
Tests must be written in dedicated test assemblies, because product assemblies must not be weighed down with test code. Hence tests and tested code live in different VS projects. These projects can live in the same VS solution, or in different VS solutions.
Having the tested-code VS projects and the test VS projects in the same VS solution makes it easy to navigate between tested code and tests.
If tested code and tests live in different VS solutions, a global VS solution containing both the code and test VS projects is still needed. Indeed, such a global VS solution makes it possible to use refactoring tools to refactor tested code and tests at the same time.
I don't abide by the test-first theory. Tests concerning a code portion should be written at the same time as the code portion. The reason I advocate doing so is that not all edge cases can be faced up-front, even by domain experts and professional testers. Edge cases are discovered only when thoroughly implementing an algorithm.
Test-first is still useful when tests can be written up-front by a domain expert or a client, with dedicated tooling such as FitNesse. Under these conditions, test-first is a great means of communication that can remove many pesky misunderstandings.
However, only the code itself is the complete specification. Tests can be seen as a specification scaffold that is removed at runtime. They can also be seen as a secondary, less-detailed specification, useful to ensure that the main specification, i.e. the code itself, respects a range of requirements materialized by test assertions.
Calls from tests to tested code shouldn't violate the tested code's contracts. When a contract is violated while running tests, it means that the tested code is buggy or (non-exclusively) that the test is buggy.
Assertions in tests are not much different from code contract assertions. The only real difference is that assertions in tests are not checked at production time. But both kinds of assertions exist for correctness, to fail loudly as soon as something goes wrong when running the code.
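To illustrate the parallel, here is a sketch with a hypothetical `Account` class: a contract assertion lives in the product code, checked in debug builds but compiled out of release builds, while its test-side counterpart only runs with the test suite:

```csharp
using System.Diagnostics;

public class Account
{
    public decimal Balance { get; private set; }

    public void Deposit(decimal amount)
    {
        // Contract assertion: fails loudly in debug builds,
        // stripped from release (production) builds.
        Debug.Assert(amount > 0m, "deposit amount must be positive");
        Balance += amount;
    }
}

// The test-side counterpart asserts the same kind of property,
// but only when the test suite runs, never in production:
//     var account = new Account();
//     account.Deposit(50m);
//     Assert.AreEqual(50m, account.Balance);
```

Both assertions state the same correctness expectation; only the moment at which they are checked differs.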
The number of tests is a completely meaningless metric. Dozens of [TestCase] can be written in minutes, while a tricky integration test can take days to write. When it comes to tests, the significant metrics are:
- percentage of code covered by tests and
- number of different assertions executed.
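For illustration, assuming NUnit and a hypothetical `Gcd()` helper, the fixture below reports three passing tests, yet it executes only one distinct assertion, which is why raw test counts mislead:

```csharp
using NUnit.Framework;

public static class MathUtil
{
    // Hypothetical tested method.
    public static int Gcd(int a, int b) => b == 0 ? a : Gcd(b, a % b);
}

[TestFixture]
public class MathUtilTests
{
    // Each [TestCase] row counts as one "test", but all rows
    // execute the same single assertion: the test count inflates
    // while coverage and assertion variety stay flat.
    [TestCase(12, 8, 4)]
    [TestCase(54, 24, 6)]
    [TestCase(7, 13, 1)]
    public void Gcd_ReturnsGreatestCommonDivisor(int a, int b, int expected)
    {
        Assert.AreEqual(expected, MathUtil.Gcd(a, b));
    }
}
```

Adding ten more [TestCase] rows would take seconds and increase the test count, without exercising a single new branch of the tested code.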
Not all classes need to be 100% covered by tests, or even covered at all. Typically UI code is decoupled from the logic, and UI code is hard to test properly, despite the progress in this domain. Notice that UI code decoupling, which is a good design practice, is naturally enforced by tests.
When testing a class, covering it 100% is a necessary goal. 90% coverage is not enough, because empirically we observe that the 10% of non-tested code contains most of the unidentified bugs.
If covering the last 10% of a class's code takes as much effort as covering the first 90%, it means that this 10% needs to be redesigned to be easily testable. Hence the 100% coverage practice is costly only when it forces the code to be well designed.
Having zero dead code is a natural consequence of having code 100% covered.
It is a good practice to tag a class that is 100% covered with an arbitrary [FullCovered] attribute, because it documents for the developers in charge of a future refactoring that the class must remain 100% covered.
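Declaring such an attribute takes a few lines; the names below are illustrative, since the attribute carries no behavior and exists purely as documentation (and as a hook for tooling that checks coverage):

```csharp
using System;

// Arbitrary marker attribute: no behavior, pure documentation
// stating that the tagged class must remain 100% covered.
[AttributeUsage(AttributeTargets.Class, Inherited = false)]
public sealed class FullCoveredAttribute : Attribute { }

[FullCovered]
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal rate) =>
        price * (1m - rate);
}
```

A coverage tool or a code rule can then query for classes tagged with the attribute and warn when one of them drops below 100% coverage.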
Code 100% covered by tests + containing all relevant contracts, is very hard to break. This is the magic recipe for getting very close to bug-free code.
Personally I consider that the job is done only when all classes that can be covered by tests are indeed 100% covered by tests.
The concrete difference between unit tests and integration tests is that unit tests are in-process only: they don't require touching a DB, the network, or the file system. The distinction is useful because unit tests are usually one or several orders of magnitude faster to run.
Tests must be run from the build machine to validate any code before it is committed.
Tests should be fast to run, a few minutes at most, because tests must also be run often on the developer machine, to detect regressions as soon as possible.
Having a one-to-one correspondence between tested classes and test classes, plus 100% coverage, makes it easy to identify which tests to run for a particular piece of code.
The fact that the last released version of the product works fine in production stresses that bugs are more likely to appear in newly developed code and refactored code. Hence tests must be written in priority for newly developed code and refactored code.
When some refactored code is not properly tested, it is a good practice to write the tests that should have been written in the first place. However, this practice of testing refactored code is not mandatory, because it can be pretty time-consuming.
Code 100% covered by tests + containing all relevant contracts, can be refactored at whim. In these conditions, there is very little chance of missing a behavior regression.
Code that is hard to cover by tests is bug-prone and needs to be redesigned to be easily coverable by easy tests. This situation often results in creating one or several classes dedicated to handling the logic formerly implemented by the hard-to-cover code.
The previous point can be restated this way: writing a complete test suite naturally leads to good design, enforcing low coupling and high cohesion.
Singletons should be prohibited because they make the code hard to test. It is harder to recycle and re-initialize a singleton object for each test than it is to create a fresh object for each test.
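A sketch of the difference, with hypothetical names: the singleton's state leaks across tests, while the injected alternative gives each test a fresh, fully controlled instance.

```csharp
using System;

// Hard to test: the singleton's state survives across tests and
// must be recycled and re-initialized between each test run.
public sealed class ClockSingleton
{
    public static readonly ClockSingleton Instance = new ClockSingleton();
    private ClockSingleton() { }
    public DateTime Now => DateTime.UtcNow;
}

// Easy to test: the dependency is an interface injected through the
// constructor, so each test builds a fresh, fully controlled object.
public interface IClock { DateTime Now { get; } }

public class ReportGenerator
{
    private readonly IClock _clock;
    public ReportGenerator(IClock clock) { _clock = clock; }
    public string Header() => $"Report generated at {_clock.Now:u}";
}

// In a test, a trivial fake replaces the real clock:
public class FixedClock : IClock
{
    public DateTime Now { get; set; }
}
```

With `FixedClock`, a test can pin the time to a known value and assert on the exact output, something the singleton version makes awkward at best.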
Mocking should be used to transform integration tests into unit tests, by abstracting tests from out-of-process concerns like DB access, network access, or file system access. Under these conditions, mocking helps separate concerns in the code and hence increases the design value.
Mocking is also useful to isolate code completely un-coverable by tests, like calls to MessageBox.Show().
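A sketch of that isolation, with illustrative names: the UI call is hidden behind an interface, the production implementation stays trivially thin (and deliberately uncovered), and a hand-written fake stands in for it during tests.

```csharp
using System.Collections.Generic;

// Abstraction over the un-coverable UI call.
public interface IMessageBoxService
{
    void Show(string message);
}

// The production implementation would be the one trivially thin,
// deliberately uncovered place that calls MessageBox.Show(message).

// Test double: records messages instead of popping a dialog.
public class FakeMessageBoxService : IMessageBoxService
{
    public readonly List<string> Messages = new List<string>();
    public void Show(string message) => Messages.Add(message);
}

public class ImportController
{
    private readonly IMessageBoxService _messageBox;
    public ImportController(IMessageBoxService messageBox)
    {
        _messageBox = messageBox;
    }

    public void Import(int rowCount)
    {
        if (rowCount == 0)
            _messageBox.Show("Nothing to import.");
    }
}
```

A test can then drive `ImportController` and assert on the recorded messages, with no dialog ever appearing and no UI framework loaded.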
Personally I am happy with: