The TDD Apostate by Mark Seemann
I've been doing Test-Driven Development since 2003. I still do, I still love it, and I still expect to be doing it in the future. Over the years, I've repeatedly returned to the discussion of whether TDD should be regarded as Test-Driven Development or Test-Driven Design. For a long time I've been of the conviction that TDD is both of those. Not so any longer.
TDD is not a good design methodology.
Over the years I've written tons of code with TDD. I've written code where tests blindly drove the design, and I've written code where the design was the result of a long period of deliberation, and the tests were only the manifestations of already well-formed ideas.
I can safely say that the code where tests alone drove the design never turned out particularly well. Although it was testable and, after a fashion, 'loosely coupled', it was still Spaghetti Code in the sense that it lacked overall consistency and good abstractions.
On the other hand, I'm immensely pleased with code like AutoFixture 2.0, which was mostly the result of hours of careful contemplation riding my bike to and from work. It was still written test-first, but the design was well thought out in advance.
This made me think: did I just fail (repeatedly) at Test-Driven Design, or is the overall concept a fallacy?
That's a pretty hard question to answer; what constitutes good design? In the following, let's assume that the SOLID principles are a pretty good indicator of good design. If so, does test-first drive us towards SOLID design?
TDD versus the Single Responsibility Principle #
Does TDD ensure the application of the Single Responsibility Principle (SRP)? This question is easy to answer and the answer is a resounding NO! Nothing prevents us from test-driving a God Class. I've seen many examples, and I've been guilty of it myself.
Constructor Injection is a much better help because it makes SRP violations so painful.
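To make that concrete, here is a rough sketch (in Java, with entirely hypothetical names) of the kind of class TDD happily lets us grow. Every responsibility remains individually test-drivable; it is the constructor's parameter list, not any test, that makes the SRP violation hurt.

```java
// Hypothetical example. Each responsibility is easy to test-drive in
// isolation, so red/green raises no alarm. Constructor Injection is what
// makes the accumulation painfully visible at every construction site
// and in every test fixture.
interface OrderValidator   { void validate(String order); }
interface PaymentGateway   { void charge(String order); }
interface InventoryService { void reserve(String order); }
interface EmailSender      { void sendConfirmation(String order); }
interface AuditLog         { void record(String order); }

class OrderProcessor {
    private final OrderValidator validator;
    private final PaymentGateway payments;
    private final InventoryService inventory;
    private final EmailSender email;
    private final AuditLog audit;

    // Five constructor arguments: the pain that signals an SRP violation
    // long before any test does.
    OrderProcessor(OrderValidator validator, PaymentGateway payments,
                   InventoryService inventory, EmailSender email, AuditLog audit) {
        this.validator = validator;
        this.payments = payments;
        this.inventory = inventory;
        this.email = email;
        this.audit = audit;
    }

    void process(String order) {
        validator.validate(order);
        payments.charge(order);
        inventory.reserve(order);
        email.sendConfirmation(order);
        audit.record(order);
    }
}
```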
The score so far: 0 points to TDD.
TDD versus the Open/Closed Principle #
Does TDD ensure that we follow the Open/Closed Principle (OCP)? This is a bit harder to answer. I've previously argued that Testability is just another name for OCP, so that would in itself imply that TDD drives OCP. However, the issue is more complex than that, because there are several different ways we can address the OCP:
- Inheritance
- Composition
According to Roy Osherove's book The Art of Unit Testing, the Extract and Override technique is a common unit testing trick. Personally, I rarely use it, but if used it will indirectly drive us a bit towards OCP via inheritance.
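For readers unfamiliar with the technique, a minimal sketch of Extract and Override might look like this (Java, hypothetical names): the hard-to-test detail is extracted into an overridable method, and the test replaces it through a subclass. It reuses the base class's behaviour through inheritance, which is why it only nudges us a bit towards OCP.

```java
import java.time.LocalTime;

// Hypothetical Extract and Override example: the hard-to-test detail
// (the system clock) is extracted into a protected method so that a
// test can subclass the SUT and override it.
class Greeter {
    public String greet() {
        return currentHour() < 12 ? "Good morning" : "Good afternoon";
    }

    // The seam: extracted so a subclass can override it.
    protected int currentHour() {
        return LocalTime.now().getHour();
    }
}

// A test would use a "testable" subclass that fixes the seam:
class TestableGreeter extends Greeter {
    private final int hour;

    TestableGreeter(int hour) {
        this.hour = hour;
    }

    @Override
    protected int currentHour() {
        return hour;
    }
}
```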
However, we all know that we should favor composition over inheritance, so does TDD drive us in that direction? As I alluded to previously, TDD does tend to drive us towards the use of Test Doubles, which we can view as one way to achieve OCP via composition.
However, another favorite composition technique of mine is to add functionality with a Decorator. This is only possible if the original type implements an interface that can be decorated. It's possible to write a test that forces a SUT to implement an interface, but TDD as a technique in itself does not drive us in that direction.
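As a sketch of that composition style (again Java, with made-up names), new behaviour is added by wrapping an existing implementation. Notice that this only works because the original type is exposed through an interface; nothing in plain red/green test-driving forces that interface into existence.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical example of adding functionality with a Decorator.
interface ProductRepository {
    String findName(int productId);
}

class SqlProductRepository implements ProductRepository {
    @Override
    public String findName(int productId) {
        // Imagine a database call here.
        return "product-" + productId;
    }
}

// The Decorator: same interface, extra behaviour (caching), and the
// original implementation is left completely untouched.
class CachingProductRepository implements ProductRepository {
    private final ProductRepository inner;
    private final Map<Integer, String> cache = new HashMap<>();

    CachingProductRepository(ProductRepository inner) {
        this.inner = inner;
    }

    @Override
    public String findName(int productId) {
        return cache.computeIfAbsent(productId, inner::findName);
    }
}
```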
Grudgingly, however, I must admit that TDD still scores half a point against OCP, for a total score so far of ½ point.
TDD versus the Liskov Substitution Principle #
Does TDD drive us towards adhering to the Liskov Substitution Principle (LSP)? Perhaps, but probably not.
Black box testing can't protect us against the SUT attempting to downcast its dependencies, but at least it doesn't particularly pull us in that direction either. When it comes to the SUT's treatment of a dependency, TDD pulls in neither direction.
Can we test-drive interface implementations that inadvertently violate the LSP? Yes, easily. As I discussed in a previous post, the use of Header Interfaces pulls us towards LSP violations. The more members an interface has, the more likely are LSP violations.
TDD can definitely drive us towards Header Interfaces (although they tend to hurt in the long run). I've seen this happen numerous times, and I've been there myself. TDD doesn't properly encourage LSP adherence.
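Here is a small, hypothetical sketch of that failure mode: a wide, test-driven interface invites implementations that cannot honour every member, and the path of least resistance is to throw, which is exactly the kind of LSP violation no test forced us to avoid.

```java
// Hypothetical Header Interface: it mirrors everything one concrete
// class happens to do, because that is what the tests asked for.
interface MessageChannel {
    void send(String message);
    String receive();
    void acknowledge(String messageId);
}

// A later implementation can only send. Nothing in TDD stops us from
// "implementing" the rest by throwing, so clients written against
// MessageChannel break when this substitute shows up: an LSP violation.
class OutboundOnlyChannel implements MessageChannel {
    @Override
    public void send(String message) {
        System.out.println("sending: " + message);
    }

    @Override
    public String receive() {
        throw new UnsupportedOperationException("outbound-only channel");
    }

    @Override
    public void acknowledge(String messageId) {
        throw new UnsupportedOperationException("outbound-only channel");
    }
}
```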
The score this round: 0 points for TDD, for a running total of ½ point.
TDD versus the Interface Segregation Principle #
Does TDD drive us towards the Interface Segregation Principle (ISP)? No. It's pretty easy to test-drive a SUT towards a Header Interface, just as we can test-drive towards a God Class.
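For contrast, here is what Role Interfaces for the hypothetical channel above might look like. Nothing in the red/green cycle pushes us towards this shape, because each new test finds it easier to add one more member to the interface that already exists.

```java
// Hypothetical Role Interfaces: the shape the ISP would have us write
// instead of the wide MessageChannel above. Each client depends only
// on the members it actually uses.
interface MessageSender {
    void send(String message);
}

interface MessageReceiver {
    String receive();
    void acknowledge(String messageId);
}

// A client that only sends now states exactly that, and an
// outbound-only implementation can implement MessageSender alone,
// with nothing left over to throw from.
class Notifier {
    private final MessageSender sender;

    Notifier(MessageSender sender) {
        this.sender = sender;
    }

    void notifyUser(String user) {
        sender.send("hello, " + user);
    }
}
```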
Another 0 points for TDD. The score is still ½ point to TDD.
TDD versus the Dependency Inversion Principle #
Does TDD drive us towards the Dependency Inversion Principle (DIP)? Yes, it does.
The whole drive towards Testability - the ability to replace dependencies with Test Doubles - drives us exactly in the same direction as the DIP.
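A minimal sketch of that drive (Java, hypothetical names): the moment we want to swap in a Test Double, the SUT has to depend on an abstraction instead of a concrete class, which is precisely what the DIP asks for.

```java
// Hypothetical example: the wish for a Test Double forces the
// high-level policy (ReportService) to depend on an abstraction
// (ReportStore) instead of newing up a concrete class - the DIP.
interface ReportStore {
    void save(String report);
}

class ReportService {
    private final ReportStore store;

    ReportService(ReportStore store) { // abstraction injected, not newed up
        this.store = store;
    }

    void publish(String content) {
        store.save(content.trim());
    }
}

// The Test Double that made us introduce the abstraction in the first place.
class FakeReportStore implements ReportStore {
    String lastSaved;

    @Override
    public void save(String report) {
        lastSaved = report;
    }
}

// A test would then read roughly:
//   FakeReportStore fake = new FakeReportStore();
//   new ReportService(fake).publish("  quarterly  ");
//   assert "quarterly".equals(fake.lastSaved);
```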
Since we tend to mistake such mechanistic loose coupling with proper application design, this probably explains why we, for so long, have confused TDD with good design. However, although I view loose coupling as a prerequisite for good design, it is by no means enough.
For those that still keep score, TDD scores 1 point against DIP, for a total of 1½ points.
TDD does not ensure SOLID #
With 1½ out of 5 possible points I have stated my case. I am convinced that TDD itself does not drive us towards SOLID design. It's definitely possible to use test-first techniques to drive towards SOLID designs, but that will always be an extra effort that supplements TDD; it's not something that is inherently built into TDD.
Obviously you could argue that SOLID in itself is not the end-all, be-all of proper API design. I would agree. However, based on my experience with TDD, I think the conclusion holds. TDD does not drive us towards good design. It is not a design technique.
I still write code test-first because I find it more productive, but I make design decisions out of band. I'm a Test-Driven Design Apostate.
Comments
I think some of TDD's primary benefits are:
- it raises the quality of your features (fewer bugs, simpler, more thought out)
- it helps support you as you refactor to improve your design
- it helps people work on existing code
Thanks for the post, very thought provoking!
Kevin
I'd give at least half a point for helping with SRP, and probably a full point. Nevertheless, I'd agree with your larger idea that TDD isn't a license to turn your brain off. The TDD system is red-green-REFACTOR, and the refactor part means you need to engage your brain and apply design principles that will make your code stronger. The tests allow you to do that refactoring in relative safety.
The Design part of TDD is, imho all about designing your low level APIs. And they are designed for ease of use automatically because when you are writing your tests, you think, "What's the easiest way to invoke the functionality I'm contemplating?" As you have pointed out, ease of use is only one component of good design, so TDD doesn't design the code for you ALL BY ITSELF.
Knowing the limitations of the methodology you are employing is critical to getting the most out of it. While you can ride your bicycle to New York, there are few situations where that is the most practical way of getting there.
-Kelly
http://cleancoder.posterous.com/the-transformation-priority-premise
What I'm discussing here is whether or not you can use tests to blindly design APIs. That's a different perspective.
Are there authoritative sources that assert that you can? Or non-authoritative ones? If so, could you cite them? That would help to put the piece into context. Otherwise, I'm just hearing "red-green on its own doesn't produce good design", which I thought was pretty much obvious, given that the "refactor" step is where I've always expected the design part to happen, and the article boils down to "doing it wrong doesn't work".
;-)
TDD /drives/ through priority and constraint. Good tests fix intent, not implementation, while only allowing implementations to emerge that meet the intent. By definition then, it should not be coupled to elements of construction (or design as you are calling it).
I appreciate the point that you're making i.e. TDD != a SOLID design, but then I don't think it ever aspired to - that's why there was the "refactor" bit after getting your tests green.
In short, in order to be good at design, learn to design. Then TDD will help you get to a good design faster.
At the same time, I think it is clear that TDD can also drive the design of your class/program. But there is quite a big difference between that original "can" and a "should" or a "must". And I don't know why so many people are obsessed with changing the original intention of TDD.
Isaac, for example, makes a great point in one of the comments about refactoring and its impact on TDD. And that's the point of that "can". But this doesn't mean that you should rely only on TDD to design your project. To me it is a great technique that helps 1. force your team to write tests, 2. force your team to think about the problems they are solving, and 3. surface design flaws that we never thought of.
"I’ve written code where tests blindly drove the design..."
I don't think this is true because it's not possible. Tests can't "blindly" drive any design, especially considering the fact that tests don't write themselves. It takes a human being, a programmer with ideas and plans for the software, to decide what tests to write and how to implement them.
Now, there is a context in which a phrase like "blindly drive" is valid, and it's the TDD method. No matter how great or valid your ideas for your software might be, TDD demands that you prove the worth of your ideas one tiny step at a time. You write a simple test, then you implement it in the most simple manner. Then you repeat, then you repeat, and eventually you're left with software that may or may not match with what you had in your head.
The method is "blind" to your ideas in that your implementation is focused on one tiny requirement, but it can't be blind to your ideas completely. What gave you the idea to write the test in the first place?
When programmers start TDD for the first time, their software doesn't magically become pure examples of the SOLID principles. It takes a lot of practice to know what tests/questions to write, where to start the TDD practice, and how generally to keep things together. And even with lots of experience, it's still possible to mess things up. If I mess up during my practice of TDD, it's not fair to blame the practice of TDD any more than it's fair to blame an automobile manufacturer if someone drives their car off the road.
I think that's kinda what you're doing when you say things like:
"Nothing prevents us from test-driving a God Class."
Nothing prevents us from test-driving a God class? How about the fact that the tests will be hard to write, be unmaintainable, and will generally smell? Whenever a class takes on additional responsibilities, the tests for those responsibilities have to be "mixed" with the other tests. The fact that the programmer is going test-first will provide him with the earliest clue that the class is taking on too much, and will cause him to *design* a way around the SRP violation by using a separate class.
If TDD helps to provide so much evidence that a SRP violation is occurring, why go after it because it doesn't *force* the programmer to act based on that evidence?
Making up a series of TDD "versus" the SOLID principles seems a little far-fetched to me. TDD isn't meant to be a replacement for the human brain.
I agree with many of the things you say in your post. Thank you for inspiring such fruitful conversations.