Test trivial code by Mark Seemann
Even if code is trivial you should still test it.
A few days ago, Robert C. Martin posted a blog post on The Pragmatics of TDD, where he explains that he doesn't test-drive everything. Some of the exceptions he gives, such as not test-driving GUI code and true one-shot code, make sense to me, but there are two exceptions I think are inconsistent.
Robert C. Martin states that he doesn't test-drive
- getters and setters
- one-line functions
- functions that are obviously trivial
There are several problems with these exceptions that I'd like to point out:
- It confuses cause and effect
- Trivial code may not stay trivial
- It's horrible advice for novices
Causality #
The whole point of Test-Driven Development is that the tests drive the implementation. The test is the cause and the implementation is the effect. If you accept this premise, then how on Earth can you decide not to write a test because the implementation is going to be trivial? You don't know that yet. It's logically impossible.
Robert C. Martin himself proposed the Transformation Priority Premise (TPP), and one of the points here is that as you start out test-driving new code, you should strive to do so in small, formalized steps. The first step is extremely likely to leave you with a 'trivial' implementation, such as returning a constant.
With the TPP, the only difference between trivial and non-trivial implementation code is how far into the TDD process you've come. So if 'trivial' is all you need, the only proper course is to write that single test that demonstrates that the trivial behavior works as expected. Then, according to the TPP, you'd be done.
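To make that concrete, here's a minimal sketch of a first TPP-style step (the Calculator and Add names are hypothetical, not taken from either article). The single test drives the most degenerate implementation possible: returning a constant.

[Fact]
public void AddReturnsSum()
{
    var sut = new Calculator();
    var actual = sut.Add(1, 2);
    Assert.Equal(3, actual);
}

public class Calculator
{
    public int Add(int x, int y)
    {
        // 'Trivial' implementation: the simplest transformation that passes the test.
        // Only further tests would force a real sum.
        return 3;
    }
}

If that trivial behaviour is all the requirements call for, that single test is also all the coverage the behaviour needs.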
Encapsulation #
Robert C. Martin's exception about getters and setters is particularly confounding. If you consider a getter/setter (or .NET property) trivial, why even have it? Why not expose a public class field instead?
There are good reasons why you shouldn't expose public class fields, and they are all related to encapsulation. Data should be exposed via getters and setters because it gives you the option of changing the implementation in the future.
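As a hedged illustration of that option (the Reservation and Quantity names are hypothetical), an initially trivial property can later grow validation or other logic without changing its public contract, which a public field couldn't do:

public class Reservation
{
    // Initially trivial auto-property:
    public int Quantity { get; set; }
}

public class Reservation // a later revision of the same class
{
    private int quantity;

    public int Quantity
    {
        get { return this.quantity; }
        set
        {
            // The implementation changed; callers are unaffected.
            if (value < 1)
                throw new ArgumentOutOfRangeException("value");
            this.quantity = value;
        }
    }
}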
What do we call the process of changing the implementation code without changing the behavior?
This is called refactoring. How do we know that when we change the implementation code, we don't change the behavior?
As Martin Fowler states in Refactoring, you must have a good test suite in place as a safety net, because otherwise you don't know if you broke something.
Successful code lives a long time and evolves. What started out being trivial may not remain trivial, and you can't predict which trivial members will remain trivial years in the future. It's important to make sure that the trivial behavior remains correct when you start to add more complexity. A regression suite addresses this problem, but only if you actually test the trivial features.
Robert C. Martin argues that the getters and setters are indirectly tested by other test cases, but while that may be true when you introduce the member, it may not stay true. Months later, those tests may be gone, leaving the member uncovered by tests.
You can look at it like this: with TDD you may be applying the TPP, but for trivial members, the time span between the first and second transformation may be measured in months instead of minutes.
Learning #
It's fine to be pragmatic, but I think that this 'rule' that you don't have to test-drive 'trivial' code is horrible advice for novices. If you give someone learning TDD a 'way out', they will take it every time things become difficult. If you provide a 'way out', at least make the condition explicit and measurable.
A fluffy condition that "you may be able to predict that the implementation will be trivial" isn't at all measurable. It's exactly this way of thinking that TDD attempts to address: you may think that you already know how the implementation is going to look, but when you let tests drive the implementation, it often turns out that you'll be surprised. What you originally thought would work doesn't.
Addressing the root cause #
Am I insisting that you should test-drive all getters and setters? Yes, indeed I am.
But, you may say, doesn't that take too much time? Ironically, Robert C. Martin has much to say on exactly that subject:
The only way to go fast, is to go well
Even so, let's see what it would be like to apply the TPP to a property (Java programmers can keep on reading: a C# property is just syntactic sugar for a getter and setter).
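For Java readers, a rough sketch of that equivalence (the accessor names below are illustrative; the C# compiler actually generates hidden get_Year and set_Year methods plus a backing field):

// C# auto-implemented property:
public int Year { get; set; }

// is roughly equivalent to the hand-written getter/setter pair:
private int year;
public int GetYear() { return this.year; }
public void SetYear(int value) { this.year = value; }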
Let's say that I have a DateViewModel class and I'd like it to have a Year property of the int type. The first test is this:
[Fact]
public void GetYearReturnsAssignedValue()
{
    var sut = new DateViewModel();
    sut.Year = 2013;
    Assert.Equal(2013, sut.Year);
}
However, applying the Devil's Advocate technique, the correct implementation is this:
public int Year { get { return 2013; } set { } }
That's just by the TPP book, so I have to write another test:
[Fact]
public void GetYearReturnsAssignedValue2()
{
    var sut = new DateViewModel();
    sut.Year = 2010;
    Assert.Equal(2010, sut.Year);
}
Together, these two tests prompt me to correctly implement the property:
public int Year { get; set; }
While those two tests could be refactored into a single Parameterized Test, that's still a lot of work. Not only do you need to write two test cases for a single property, but you'd have to do exactly the same for every other property!
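As a sketch of what such a Parameterized Test might look like (using xUnit's [Theory] and [InlineData]; this isn't code from the original walkthrough), the two cases collapse into one method:

[Theory]
[InlineData(2013)]
[InlineData(2010)]
public void GetYearReturnsAssignedValue(int expected)
{
    var sut = new DateViewModel();
    sut.Year = expected;
    Assert.Equal(expected, sut.Year);
}

That removes the duplication within a single property, but you'd still have to repeat the exercise for every other property.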
Exactly the same, you say? And what are you?
Ah, you're a programmer. What do programmers do when they have to do exactly the same over and over again?
Well, if you make up inconsistent rules that enable you to skip out of doing things that hurt, then that's the wrong answer. If it hurts, do it more often.
Programmers automate repeated tasks. You can do that when testing properties too. Here's how I've done that, using AutoFixture:
[Theory, AutoWebData]
public void YearIsWritable(WritablePropertyAssertion a)
{
    a.Verify(Reflect<DateViewModel>.GetProperty<int>(sut => sut.Year));
}
This is a declarative way of checking exactly the same behavior as the previous two tests, but in a single line of code.
Root cause analysis is in order here. It seems as if the cost/benefit ratio of test-driving getters and setters is too high. I think that when Robert C. Martin stops there, it's because he considers the cost fixed, so the benefit looks too low to justify it. However, while the benefit may seem low, the cost doesn't have to be fixed. Lower the cost, and the cost/benefit ratio improves. This is why you should also test-drive getters and setters.
Update 2014.01.07: As a follow-up to this article, Jon Galloway interviewed me about this, and other subjects, for a Herding Code podcast.
Update, 2018.11.12: A new article reflects on this one, and gives a more nuanced picture of my current thinking.
Comments
Here is a concrete, pragmatic example where this post really helped in finding a solution to a dilemma on whether (or not) we should test a (trivial) overridden ToString method's output. Mark Seemann explained why in this case (while the code is trivial) we should not test that particular method. Even though it is a very concrete example, I believe it will help the reader in deciding when, and whether, it's worth testing trivial code.