Outside-In TDD versus DDD

Monday, 04 March 2013 08:01:00 UTC

Is there a conflict between Outside-In TDD and DDD? I don't think so.

A reader of my book recently asked me a question:

"you are a DDD and TDD fan and at the same time you like to design your User Interfaces first. I have been reading about DDD and TDD for some time now and your approach confuses me, TDD is about writing tests first to lead the design, DDD is focusing on the domain yet you start with the UI."

In my opinion there's no conflict between Outside-In TDD and DDD, but I must admit that I was a bit surprised by the question. It never occurred to me that there's an apparent conflict between these two approaches, but now that I was asked the question, I understand why one would think so. In other words, I think it's a good question that warrants a proper answer.

Before discussing the synthesis of Outside-In TDD and DDD I think we need to examine each concept individually.

DDD #

As far as I can tell, Domain-Driven Design is a horribly misunderstood book. By corollary, so is the DDD concept itself. The problem with the book is that it provides value and guidance along two axes:

  • Patterns and other coding advice
  • Principles for analysis and modeling

The 'problem' is that all the patterns and coding concepts, such as Aggregate Root, Specification, Factories, Value Objects, Entities, Services, etc., are very useful. Here's the catch, though: many of these constructs are useful in themselves; they aren't really tied to Domain-Driven Design. If Eric Evans had written a book called Design Patterns for Supple Design, there would have been much less confusion. He could have written another book with all the rest of the content and called that book "Domain-Driven Design," and less confusion would have ensued.

Don't let the fact that all these patterns are described in a book called Domain-Driven Design trick you into thinking that they are intrinsically bound to DDD. Using the patterns doesn't make your code Domain-Driven, and you can do DDD without them. Obviously, you can also combine them.
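As a small illustration of that independence, here's a sketch of a Value Object (the type is made up for the example): an immutable type with structural equality. Nothing about it requires a Domain-Driven design effort.

```csharp
// A hypothetical Value Object: immutable, with structural equality.
// Useful in any code base, whether or not the design is Domain-Driven.
using System;

public sealed class Currency
{
    private readonly string code;

    public Currency(string code)
    {
        if (code == null)
            throw new ArgumentNullException("code");
        this.code = code;
    }

    public string Code
    {
        get { return this.code; }
    }

    public override bool Equals(object obj)
    {
        var other = obj as Currency;
        return other != null && this.code == other.code;
    }

    public override int GetHashCode()
    {
        return this.code.GetHashCode();
    }
}
```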

This provides a partial answer to the original question. Outside-In TDD doesn't conflict with the patterns described in Domain-Driven Design, just as it doesn't conflict with most other design patterns.

That still leaves the 'true' DDD: the principles for analyzing and modeling the subject matter for a piece of software. If that is a design technique, and Outside-In TDD is also a design technique, isn't there a conflict?

Outside-In TDD #

Outside-In TDD, as I describe it in my Pluralsight course, isn't a design technique. TDD isn't a design technique. Granted, TDD provides fast feedback about the design you are implementing, but it doesn't, by itself, design anything for you.

Outside-In TDD and DDD #

Once you realize that a major reason Outside-In TDD and DDD seem to be at odds is the false premise that TDD is a design technique, you should also realize that there isn't, after all, any conflict. There's no reason you can't do both, but that doesn't mean you always should.

  • You can do Outside-In TDD without DDD. If you are building a CRUD application, DDD is probably going to be overkill. If so, skip it.
  • You can do DDD without Outside-In TDD. If you are being paid to build or extract a true Domain Model, it makes sense to do so decoupled from any sort of application boundaries.
  • You can combine Outside-In TDD with DDD, but I think this will often make the most sense as an iterative approach. First you build the application. Then you realize that the business logic within the application is so complex that you'll need to extract a proper Domain Model in order to keep it maintainable.

In the end, remember that your employer/customer pays you to solve a particular problem with software, not to follow a particular software design principle. The reason I often start with the UI or perhaps a RESTful API is that this is the part of the application that the product owner cares about.

In my Pluralsight course, I also discuss the Test Pyramid. While it makes sense to begin development at the boundary (e.g. UI) of an application, you shouldn't stay at the boundary. Most tests should still be unit tests. Once you reach the phase where you realize that you'll need to evolve a proper Domain Model, you should do that with unit tests, not boundary tests. Domain Models should be implemented decoupled from all boundaries (UI, I/O and persistence), so you can only directly test a Domain Model with unit tests.
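To illustrate the point, here's a minimal sketch (all types are made up for the example) of a Domain Model being exercised directly by a unit test, with no UI, HTTP, or database anywhere in sight:

```csharp
// Hypothetical Domain Model types and a unit test that talks to them
// directly - no boundary (UI, I/O, persistence) is involved.
using System.Collections.Generic;
using System.Linq;
using Xunit;

public sealed class BasketItem
{
    public BasketItem(string name, decimal price)
    {
        this.Name = name;
        this.Price = price;
    }

    public string Name { get; private set; }
    public decimal Price { get; private set; }
}

public class Basket
{
    private readonly List<BasketItem> items = new List<BasketItem>();

    public void Add(BasketItem item)
    {
        this.items.Add(item);
    }

    public decimal Total
    {
        get { return this.items.Sum(i => i.Price); }
    }
}

public class BasketTests
{
    [Fact]
    public void AddingTwoItemsYieldsCorrectTotal()
    {
        var basket = new Basket();

        basket.Add(new BasketItem("book", 100m));
        basket.Add(new BasketItem("pen", 20m));

        Assert.Equal(120m, basket.Total);
    }
}
```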

In the end, it's all a question of perspective. Outside-In TDD states that you should begin your TDD process at the boundary. At first, the process focuses on the externally visible behavior of the system, but as development progresses, other parts of the system must be dealt with. This is where DDD can be valuable. The lack of conflict ultimately stems from the fact that Outside-In TDD and DDD operate at different levels of application architecture, with little overlap.

In my experience, a lot of applications can be written with Outside-In TDD without a Domain Model ever having to evolve - and still be maintainable. These applications may still employ the 'Supple Design Patterns', but that doesn't mean that they are Domain-Driven.


Comments

Thanks for this insightful post. It inspired me to write a post about the two sides of DDD.

PS: IMO, commenting via pull-request is too burdensome. Have you considered Disqus?

2013-03-15 12:22 MST

Lev, thank you for your comment. While I've considered Disqus and other alternatives, I don't want to go in that direction. Free services have a tendency to not remain free forever - Google Reader being a current case in point.

2013-03-16 08:36 GMT

Moving the blog to Jekyll

Sunday, 03 March 2013 09:32:00 UTC

ploeh blog is sorta moving

For more than four years, ploeh blog has been driven by dasBlog, ever since I had to move away from my original MSDN blog. At the time, dasBlog seemed like a good choice for a variety of reasons, but not too long after, development seemed to stop.

It's my hope that, as a reader, you haven't been too inconvenienced by that choice of blogging engine, but as the owner and author, I've grown increasingly frustrated with dasBlog. Yesterday, ploeh blog moved to a new platform and a new host.

What's new? #

The new platform for ploeh blog is Jekyll-Bootstrap hosted on GitHub Pages. If you don't know what Jekyll-Bootstrap is, it's a parsing engine that creates a complete static web site from templates and data. This means that when you read this on the internet, you are reading a static HTML page served directly from a file. There's no application server involved.

Since ploeh blog is now a static web site, I had to take a few shortcuts here and there. One example is that I can't have any forms on the web site, so I can't have a 'contact me' form - but I didn't have one before either, so that's not really a problem. There are still plenty of ways in which you can contact me.

Another example is that the blog can't have a server-driven comment engine. As an alternative, Jekyll provides several options for including a third-party commenting service, such as Disqus, Livefyre, IntenseDebate, etc. I could have done that too, but have chosen to try another alternative.

There are several reasons why I prefer to avoid a third-party solution. First of all, in case you haven't noticed, I'm into this whole blogging thing for the long haul. I've been steadily blogging since January 2006, and have no plans to stop. Somehow, I expect to keep blogging for a longer period than I expect the average third-party content provider to last - or to keep their service free.

Second, I think it's more proper to keep the comments with ploeh blog. These comments were originally submitted by their authors to ploeh blog, which implies that they trusted me to treat them with respect. If I somehow were to move all those comments to a third party, what would become of the copyright?

Third, while it wasn't entirely trivial to merge the comments into each post, it looked like the simplest solution, compared to having to somehow migrate comments for some 200+ posts to a third party using an API unknown to me.

What this all means is that I've decided to simply keep the comments as part of the body of each post. Comments, too, are static content. This also means that comments are included in syndication feeds - a side effect that I'm not entirely happy about, but for which I don't yet have a good solution.

So, if comments are all static, does that mean I no longer accept comments? No, I still accept comments. Just send me a pull request. Snarky, perhaps, but the advantage is that you'll have a lot more flexibility if you want to comment: you could include code, images, tables, lists, etc. The only checkpoint you'll have to get past is me :)

Actually, you are welcome to send me pull requests for other enhancements as well, if you'd like. Did I spell a word incorrectly? Send me a pull request. Is a link broken? Send me a pull request. Do you know way more about web site design than I do and want to help? Send me a pull request :) Seriously, as you may have noticed, ploeh blog is currently using the default Twitter Bootstrap design. That's not particularly interesting to look at, but I hope that the most important part of the blog is the content, so I didn't spend much energy on changing the look - it probably wouldn't have been for the better anyway.

Since the content is the most important part of the blog, I've also gone to some lengths to ensure that old permalinks to posts still work. If you encounter a link that no longer works, please let me know (or send me a pull request).

Finally, as all RSS subscribers have probably noticed by now, switching the blog has caused some RSS hiccups. While the old feed address still works, the permalink URL scheme has changed, because I didn't think it made sense for static files to have URLs ending in .aspx.

What was wrong with dasBlog? #

Originally, I chose dasBlog as a blogging engine because it had some attractive features. However, the main problem with it is that it's no longer being maintained. There seems to be no roadmap for it.

Second, the commenting feature didn't work well. Many readers had trouble submitting comments, particularly if they wanted to include links or other HTML formatting. I also got a lot of comment spam.

Finally, the authoring experience slowly degraded. The built-in editor only worked in Firefox, and while I don't mind having Firefox installed, it was a cause of concern that I could only edit with one out of three installed browsers. What if it also stopped working in Firefox?

It was definitely time to move on, but where to?

Why Jekyll? #

Before I decided to go with Jekyll, I looked at various options for blogging software or services, including WordPress.com, Blogger, FunnelWeb, Orchard, and others. However, none of them could meet my requirements:

  • Simplicity. Everything that required a relational database to host a blog was immediately dismissed. This is 2013. An RDBMS for a single-user blog site? Seriously?
  • Easy hosting. Requiring an RDBMS significantly constrains my deployment options. It also tends to make hosting much more expensive. Since January 2009 I've been hosting ploeh blog on my own server in my home, but I've become increasingly annoyed at having to operate my own server. This is, after all, 2013.
  • Backwards compatible. The new blog should be able to handle the old blog's permalinks. Redirects were OK, but I couldn't just leave hundreds of external links to ploeh blog with a 404. That pretty much ruled out all hosted services.
  • Forward compatible. This is the second time I've changed blog platforms, and it's not going to be the last. I need a solution that enables me to move forward, taking all the existing content with me to a new service or platform.

I seriously toyed with writing my own blogging engine to meet all those requirements, but soon realized that Jekyll was a pretty good match. It's not perfect, but good enough for now.


Comments

Laurence #

While I like the move of the blog to GitHub, the overhead of creating comments here is just too high. I like having some level of entry barrier to keep the quality up, but this goes so far that it deters additional useful information from being posted in the comments.

An option that comes to mind would be to use GitHub's issue tracker for comments, or something else that gives access to the same GitHub Markdown features as the pages themselves.

2013-07-21 09:05 UST +10

AutoFixture 3

Friday, 01 March 2013 13:36:31 UTC

Announcing the general release of AutoFixture 3.

AutoFixture 3 has been under way for a long time, but after about a month of limited beta/CTP/whatever trial, I've decided to release it as the 'official' current version of AutoFixture. Since AutoFixture uses Semantic Versioning, the shift from 2.16.2 to 3.0.0 (actually, 3.0.1) signifies breaking changes. While we've made an effort to make the impact of these breaking changes as light as possible, it's worth noting that it's probably not a good idea to upgrade to AutoFixture 3 five minutes before you head out the door for the weekend.

Read more about what's new and what has changed in the release notes.

Thank you to everyone who helped make this happen. There have been a lot of people contributing, providing feedback, asking intelligent questions, etc. but I particularly want to thank Nikos Baxevanis for all the work he's been putting into AutoFixture. Conversely, as is usual in such circumstances, all mistakes are my responsibility.

What's new? #

When you read the release notes, you may find the number of new features quite unimpressive. This is because we've been continually releasing new features under major version 2, in a sort of Continuous Delivery process. Because of the way Semantic Versioning works, breaking changes must be signaled by increasing the major version number, so while we could release new features as they became available, I thought it better to bundle up the breaking changes into a bigger release; if not, we'd be looking at AutoFixture 7 or 8 by now... And as I'm writing this, I notice that my installation of Chrome is on major version 25, so maybe it's just something I need to get over.

Personal release notes #

One thing the release notes don't explicitly mention, but which I'd like to talk about a little, is that AutoFixture 3 runs on an extensively rebuilt kernel. One of the visions I had for AutoFixture 3 was that the kernel structure should be immutable, and that it should be as easy to take away functionality as it is to add custom functionality. Taking away functionality was really hard (or perhaps impossible) in AutoFixture 2, and that put a constraint on what you could do with AutoFixture. It also put a constraint on how opinionated its defaults could be.

The AutoFixture 3 kernel makes it possible to take away functionality, as well as add new functionality. At the moment, you have to understand the underlying kernel to figure out how to do that, but in the future I plan to surface an API for that as well.
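Adding functionality still revolves around the kernel's ISpecimenBuilder interface. As a rough sketch (the builder and the property it handles are made up for the example), a custom builder looks like this:

```csharp
// A hypothetical custom ISpecimenBuilder: it assigns a fixed value to
// string properties named "ZipCode" and defers every other request to
// the rest of the kernel's builder chain.
using System.Reflection;
using Ploeh.AutoFixture.Kernel;

public class ZipCodeBuilder : ISpecimenBuilder
{
    public object Create(object request, ISpecimenContext context)
    {
        var pi = request as PropertyInfo;
        if (pi != null &&
            pi.Name == "ZipCode" &&
            pi.PropertyType == typeof(string))
        {
            return "8000";
        }

        // Not a request this builder handles.
        return new NoSpecimen(request);
    }
}
```

You'd add it to a Fixture instance with fixture.Customizations.Add(new ZipCodeBuilder()); taking functionality away again is what currently requires the deeper kernel knowledge mentioned above.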

In any case, I find it interesting that by using real SOLID principles, I was able to develop the new kernel in parallel with the version 2 kernel. Partly because I was very conscious of not introducing breaking changes, but also because I follow the Open/Closed Principle, I rarely had to touch existing code, and could build the new parts of the kernel in parallel. Even more interesting, I think, is the fact that once that work was done, replacing the kernel turned out to be a point release. This happened in late November 2012 - more than three months ago. If you've been using AutoFixture 2.14.1 or a more recent version, you've already been using the AutoFixture 3 kernel for some time.

The reason I find this interesting is that casual readers of my blog sometimes express skepticism over whether the principles I talk about can be applied in 'realistic' settings. According to Ohloh, AutoFixture has a total of 197,935 lines of code, and I consider that a fair-sized code base. I practice what I preach. QED.


String Calculator kata with Autofixture exercise 8

Monday, 18 February 2013 10:00:28 UTC

This is the eighth post in a series of posts about the String Calculator kata done with AutoFixture.

This screencast implements the requirements of the kata's exercises 8 and 9. Although the kata has nine steps, it turned out that I slightly misinterpreted the requirements of exercise 8 and ended up implementing the requirements of exercise 9 as well as those of exercise 8 - or perhaps I was just gold-plating.

If you liked this screencast, you may also like my Pluralsight course Outside-In Test-Driven Development.


String Calculator kata with Autofixture exercise 7

Friday, 15 February 2013 10:10:27 UTC

This is the seventh post in a series of posts about the String Calculator kata done with AutoFixture.

This screencast implements the requirement of the kata's exercise 7.

Next exercise

If you liked this screencast, you may also like my Pluralsight course Outside-In Test-Driven Development.


String Calculator kata with Autofixture exercise 6

Thursday, 14 February 2013 11:53:57 UTC

This is the sixth post in a series of posts about the String Calculator kata done with AutoFixture.

This screencast implements the requirement of the kata's exercise 6.

Next exercise

If you liked this screencast, you may also like my Pluralsight course Outside-In Test-Driven Development.


String Calculator kata with Autofixture exercise 5

Wednesday, 13 February 2013 12:13:01 UTC

This is the fifth post in a series of posts about the String Calculator kata done with AutoFixture.

This screencast implements the requirement of the kata's exercise 5.

Next exercise

If you liked this screencast, you may also like my Pluralsight course Outside-In Test-Driven Development.


Comments

robi.y #
Hello and thanks for this awesome series.
It seems you assume that the int generator always returns non-negative numbers.
Is this true, and by design? Why?
Thanks
2013-02-19 09:25 UTC
Integers generated by AutoFixture have, by design, always been positive, although the algorithm has changed over the years.

In AutoFixture 3, numbers are random.

In AutoFixture 2.2+, all numbers are generated by a strictly monotonically increasing sequence.

In AutoFixture prior to 2.2, numbers were also generated as a rising sequence, but one sequence per number type.

Common to all versions is that the numbers are 'small, positive integers'.
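To make that concrete, here's a small sketch using xUnit.net (the test and its names are made up; the assertion only illustrates the guarantee described above):

```csharp
using Ploeh.AutoFixture;
using Xunit;

public class IntegerGenerationExamples
{
    [Fact]
    public void GeneratedIntegersArePositive()
    {
        var fixture = new Fixture();

        var number = fixture.Create<int>();

        // In AutoFixture 3 the value is random, but always a small,
        // positive integer.
        Assert.True(number > 0);
    }
}
```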
2013-02-19 19:46 UTC

String Calculator kata with Autofixture exercise 4

Tuesday, 12 February 2013 13:45:36 UTC

This is the fourth post in a series of posts about the String Calculator kata done with AutoFixture.

This screencast implements the requirement of the kata's exercise 4.

Next exercise

If you liked this screencast, you may also like my Pluralsight course Outside-In Test-Driven Development.


Comments

Jiggaboo #
Wow, this test is so unreadable! I wouldn't allow it in my project.
Also, I wonder: what if you have an error in the summing? The error will show something like: Expected 34212, was 19857. How do you know which numbers were generated by the int generator?
2013-02-12 18:22 UTC
Unreadable? Well, my mother and my wife both find C# unreadable, while I don't (and I suspect you don't either). Other people find F# unreadable, while it's beginning to grow on me. Personally, I find Clojure unreadable, but I'm certain it's only a matter of time.

Unit tests that leverage AutoFixture tend to be terser and more declarative in nature than more traditional unit tests, which tend to look more imperative. Like every other new thing, it takes some time getting used to.

That said, there are things about the String Calculator kata that make it a less-than-ideal fit for AutoFixture. The problem with the kata is that it's essentially just a bunch of more or less arbitrary rules (throw on negative numbers, ignore numbers bigger than 1000, support custom delimiter strings, etc.). There's no real domain being modelled here.

If there had been a more proper domain, a refactoring phase would probably have prompted me to redesign the API and introduced a custom Delimiter Value Object that I could have requested in the test, instead of requesting a Generator. That would have made the test even terser, but also, IMO, more readable. However, I didn't want to do that in this screencast, as I was concerned it would be too much of a digression.
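Just to sketch the idea (the Delimiter type below is hypothetical, and the StringCalculator itself is left out), such a Value Object could simply be requested as a test argument:

```csharp
// Hypothetical sketch: a Delimiter Value Object that AutoFixture can
// create from its constructor, and a test that requests it directly
// instead of requesting a Generator.
using Ploeh.AutoFixture.Xunit;
using Xunit;
using Xunit.Extensions;

public sealed class Delimiter
{
    private readonly char value;

    public Delimiter(char value)
    {
        this.value = value;
    }

    public override string ToString()
    {
        return this.value.ToString();
    }
}

public class DelimiterTests
{
    [Theory, AutoData]
    public void TestReceivesDelimiterDirectly(Delimiter delimiter)
    {
        // No Generator is requested; the test simply declares that it
        // needs a Delimiter, and AutoFixture supplies one.
        Assert.NotNull(delimiter);
        Assert.Equal(1, delimiter.ToString().Length);
    }
}
```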
2013-02-13 13:27 UTC

String Calculator kata with Autofixture exercise 3

Monday, 11 February 2013 09:04:16 UTC

This is the third post in a series of posts about the String Calculator kata done with AutoFixture.

This screencast implements the requirement of the kata's exercise 3.

Next exercise

If you liked this screencast, you may also like my Pluralsight course Outside-In Test-Driven Development.


String Calculator kata with Autofixture exercise 2

Thursday, 07 February 2013 10:17:27 UTC

This is the second post in a series of posts about the String Calculator kata done with AutoFixture.

This screencast implements the requirement of the kata's exercise 2.

Next exercise

If you liked this screencast, you may also like my Pluralsight course Outside-In Test-Driven Development.

