Most guidance about Test-Driven Development (TDD) will tell you that unit tests should be fast. Examples of such guidance can be found in FIRST and xUnit Test Patterns. Rarely does it tell you how fast unit tests should be.

10 seconds for a unit test suite. Max.

Here's why.

When you follow the Red/Green/Refactor process, ideally you'd be running your unit test suite at least three times for each iteration:

  1. Red. Run the test suite.
  2. Green. Run the test suite.
  3. Refactor. Run the test suite.
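The cycle above can be sketched as a simple timing guard. This is a minimal illustration, not any particular framework's API; `run_suite` is a hypothetical stand-in for invoking your actual test runner:

```python
import time

MAX_SUITE_SECONDS = 10  # the attention-span budget argued for below


def timed_run(run_suite):
    """Run the suite callable and report whether it stayed within budget.

    Returns (suite_result, elapsed_seconds, within_budget).
    """
    start = time.perf_counter()
    result = run_suite()  # hypothetical: would invoke your test runner
    elapsed = time.perf_counter() - start
    return result, elapsed, elapsed <= MAX_SUITE_SECONDS


# Example with a stand-in suite that passes instantly:
result, elapsed, ok = timed_run(lambda: True)
```

If `ok` comes back `False` at Red, Green, or Refactor, the suite is already too slow for the inner loop.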

Each time you run the unit test suite, you're essentially blocked. You have to wait until the test run has completed before you get the result.

During that wait time, it's important to keep focus. If the test run takes too long, your mind starts to wander, and you'll suffer from context switching when you have to resume work.

When does your mind start to wander? After about 10 seconds. That number just keeps popping up when the topic turns to focused attention. Obviously it's not a hard limit, and there are individual and contextual variations. Still, it seems as though a 10 second short-term attention span is more or less hard-wired into the human brain.

Thus, a unit test suite used for TDD should run in less than 10 seconds. If it's slower, you'll be less productive because you'll constantly lose focus.


The test suite you work with when you do TDD should execute in less than 10 seconds on your machine. If you have hundreds of tests, each test should be faster than 100 milliseconds. If you have thousands, each test should be faster than 10 milliseconds. You get the picture.
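The arithmetic behind those numbers is just the suite budget divided by the test count:

```python
SUITE_BUDGET_MS = 10_000  # 10 seconds for the whole suite, in milliseconds


def per_test_budget_ms(test_count):
    """Average time each test may take to keep the suite under 10 seconds."""
    return SUITE_BUDGET_MS / test_count


# Hundreds of tests -> about 100 ms each; thousands -> about 10 ms each:
hundreds = per_test_budget_ms(100)
thousands = per_test_budget_ms(1_000)
```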

That test suite doesn't need to be the same as the one running on your CI server. It could be a subset of tests. That's OK. The TDD suite may just be part of your Test Pyramid.
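One way to carve out such a subset is to filter by recorded test durations. The sketch below is purely illustrative, assuming you can export per-test timings from your runner; the test names and numbers are made up:

```python
# Hypothetical timings, name -> seconds, as a test runner might report them.
timings = {
    "test_parse_header": 0.004,
    "test_round_trip": 0.012,
    "test_against_real_database": 3.5,
    "test_full_http_pipeline": 7.2,
}


def tdd_subset(timings, cutoff_seconds=0.1):
    """Keep only tests fast enough for the inner TDD loop.

    The CI server still runs everything; this merely selects the
    sub-10-second suite you run on every Red/Green/Refactor step.
    """
    return sorted(name for name, secs in timings.items()
                  if secs <= cutoff_seconds)


fast = tdd_subset(timings)
```

In practice you'd mark the slow tests once (by category, trait, or naming convention) rather than re-measuring on every run.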

Selective test runs

Many people work around the Slow Tests anti-pattern by only running one test at a time, or perhaps one test class. In my experience, this is not an optimal solution because it slows you down. Instead of just going

  1. Red
  2. Run
  3. Green
  4. Run
  5. Refactor
  6. Run

you'd need to go

  1. Red
  2. Specifically instruct your Test Runner to run only the test you just wrote
  3. Green
  4. Decide which subset of tests may have been affected by the new test. This obviously involves the new test, but may include more tests.
  5. Run the tests you just selected
  6. Refactor
  7. Decide which subset of tests to run now
  8. Run the tests you just selected

Obviously, that introduces friction into your process. Personally, I much prefer to have a fast test suite that I can run all the time at a key press.

Still, there are tools available that promise to do this analysis for you. One of them is Mighty Moose, with which I've had varying degrees of success. Another similar approach is NCrunch.


For a long time I've been wanting to write this article, but I've always felt held back by a lack of citations I could exhibit. While the Wikipedia link does provide a bit of information, it's not that convincing in itself (and it also cites the time span as 8 seconds).

However, that 10 second number just keeps popping up in all sorts of contexts, not all of them having anything to do with software development, so I decided to run with it. Still, if some of my readers can provide me with better references, I'd be delighted.

Or perhaps I'm just suffering from confirmation bias...


What are your thoughts on continuous test runners like NCrunch? The time for test running is effectively reduced to zero, but you no longer get the red/green cycle. Is the cycle crucial, or is the notion of running tests often the important bit?
2012-05-24 17:52 UTC
In Agile we have this concept of rapid feedback. That a test runner runs in the background doesn't address the problem of Slow Tests. It's no good if you start writing more code and then five minutes later, your continuous test runner comes back and tells you that something you did five minutes ago caused the test suite to fail. That's not timely feedback.

Some of that can be alleviated with a DVCS, but rapid feedback just makes things a lot easier.
2012-05-24 18:39 UTC
Vasily Kirichenko
>> Still, there are tools available that promise to do this analysis for you. One of them is
>> Mighty Moose, with which I've had varying degrees of success. Another similar approach is NCrunch.

I tried both tools in a real VS solution (containing about 180 projects, 20 of which are test projects). Mighty Moose was continuously throwing exceptions and I didn't manage to get it working at all. NCrunch could not compile some of the projects (it uses the Mono compiler) but worked with some success on the remaining ones. Yes, the feedback is rather fast (2-5 secs instead of 10-20 using the ReSharper test runner), but it was unstable, so I've gone back to ReSharper.
2012-05-25 05:27 UTC
As a practical matter, I've seen up to 45 seconds be _tolerable_, not ideal.
2012-06-04 13:39 UTC
I fully agree.

TDD cannot be used for large applications.
2012-06-22 06:43 UTC



Thursday, 24 May 2012 15:20:48 UTC
