You have to read and maintain test code, too.

I don't think I've previously published an article with the following simple message, which is clearly an omission on my part. Better late than never, though.

Treat test code like production code.

You should apply the same coding standards to test code as you do to production code. You should make sure that the code is readable and well-factored, that it goes through review, and so on, just like your production code.

Test mess #

It's not uncommon to encounter test code that has received a stepmotherly treatment. Such test code may still pay lip service to an organization's overall coding standards by having correct indentation, consistent bracket placement, and other superficial signs of care. You don't have to dig deep, however, before you discover that the quality of the test code leaves much to be desired.

The most common problem is a disregard for the DRY principle. Duplication abounds. It's almost as though people feel unburdened by the shackles of good software engineering practices, and as a result revel in the freedom to copy and paste.

That freedom is, however, purely illusory. We'll return to that shortly.

Perhaps the second-most common category of poor coding practice in test code is Zombie Code: commented-out code left in place instead of being deleted.

Other, less frequent examples of bad practices include the use of arbitrary waits instead of proper thread synchronization, unsafe unwrapping of monadic values (such as calling Task.Result instead of properly awaiting the value), and so on.
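To make that last smell concrete, here's a minimal C# sketch using xUnit; ReservationService and ReadReservation are hypothetical stand-ins:

    // using Xunit;

    // Smell: blocking on Task.Result. This risks deadlocks and wraps
    // any exception in an AggregateException.
    [Fact]
    public void ReadReservationBlocking()
    {
        var sut = new ReservationService();
        var actual = sut.ReadReservation(1).Result; // Don't do this.
        Assert.NotNull(actual);
    }

    // Preferred: make the test itself asynchronous and await the value.
    [Fact]
    public async Task ReadReservationAwaited()
    {
        var sut = new ReservationService();
        var actual = await sut.ReadReservation(1);
        Assert.NotNull(actual);
    }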

I'm sure that you can think of other examples.

Why good code is important #

I think that I can understand why people treat test code as a second-class citizen. It seems intuitive, although the intuition is wrong. Nevertheless, I think it goes like this: Since the test code doesn't go into production, it's seen as less important. And as we shall see below, there are, indeed, a few areas where you can safely cut corners when it comes to test code.

As a general rule, however, it's a bad idea to slack on quality in test code.

The reason lies in why we even have coding standards and design principles in the first place. Here's a hint: It's not to placate the computer.

"Any fool can write code that a computer can understand. Good programmers write code that humans can understand."

The reason we do our best to write code of good quality is that if we don't, it's going to make our work more difficult in the future. Either our own, or someone else's. But frequently, our own.

Forty (or fifty?) years of literature on good software development practices grapples with this fundamental problem. This is why my most recent book is called Code That Fits in Your Head. We apply software engineering heuristics and care about architecture because we know that if we fail to structure the code well, our mission is in jeopardy: We will not deliver on time, on budget, or with working features.

Once we understand this, we see how this applies to test code, too. If you have good test coverage, you will likely have a substantial amount of test code. You need to maintain this part of the code base too. The best way to do so is to treat it like your production code. Apply the same standards and design principles to test code as you do to your production code. This especially means keeping test code DRY.
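As a sketch of what that can look like in practice, here's an xUnit example where the setup that would otherwise be copied into every test is factored into a single creation method; MaitreD, Reservation, and WillAccept are hypothetical names:

    // using Xunit;

    [Theory]
    [InlineData(1)]
    [InlineData(4)]
    [InlineData(10)]
    public void AcceptsReservationWithinCapacity(int quantity)
    {
        var sut = new MaitreD(capacity: 10);
        var reservation = CreateReservation(quantity);
        Assert.True(sut.WillAccept(reservation));
    }

    // One place to edit if the Reservation constructor ever changes,
    // instead of a copy in every test.
    private static Reservation CreateReservation(int quantity) =>
        new Reservation(
            new DateTime(2025, 12, 1, 18, 30, 0),
            "x@example.com",
            quantity);

If the Reservation constructor changes, only CreateReservation needs to change, rather than every test in the suite.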

Test-specific practices #

Since test code has a specialized purpose, you'll run into problems unique to that space. How should you structure a unit test? How should you organize your tests? How should you name them? How do you make them deterministic?

Fortunately, thoughtful people have collected and systematized their experience. The most comprehensive such collection is xUnit Test Patterns, which has been around since 2007. Nothing in that book invalidates normal coding practices. Rather, it suggests specializations of good practices that apply to test code.

You may run into the notion that tests should be DAMP rather than DRY. If you expand the acronym, however, it stands for Descriptive And Meaningful Phrases, and you may realize that this is a desirable quality of code independent of whether or not you repeat yourself. (Even the linked article fails, in my opinion, to establish a convincing dichotomy. Its notion of DRY is clearly not the one normally implied.) I think of the DAMP notion as related to Domain-Driven Design, which is another thematic take on making code fit in your head.

For a few years I, too, believed that copy-and-paste was okay in test code, but I have long since learned that duplication slows you down in test code for exactly the same reason that it hurts in 'real' code. One simple change leads to Shotgun Surgery; many tests break, and you have to fix each one individually.

Dispensations #

All the same, there are exceptions to the general rule. In certain well-understood ways, you can treat your test code with less care than production code.

Specifically, assuming that test code remains undeployed, you can skip certain security practices. You may, for example, hard-code test-only passwords directly in the tests. The code base that accompanies Code That Fits in Your Head contains an example of that.

You may also skip input validation steps, since you control the input for each test.

In my experience, security is the dominant exemption from the rule, but there may be other language- or platform-specific details where deviating from normal practices is warranted for test code.

One example may be in .NET, where a static code analysis rule may insist that you call ConfigureAwait. This rule is intended for library code that may run in arbitrary environments. When code runs in a unit-testing environment, on the other hand, the context is already known, and this rule can be dispensed with.
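As a sketch, assuming the rule in question is CA2007, you might simply turn it off for the test project alone in its .editorconfig:

    # .editorconfig in the unit-test project.
    # In a unit-testing environment the context is already known,
    # so ConfigureAwait(false) would only add noise here.
    [*.cs]
    dotnet_diagnostic.CA2007.severity = none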

Another example is that, in Haskell, GHC may complain about orphan instances. In test code, it may occasionally be useful to give an existing type a new instance, most commonly an Arbitrary instance. While you can get around this problem with well-named newtypes, you may also decide that orphan instances are no problem in a test code base, since you don't have to export the test modules as reusable libraries.

Conclusion #

You should treat test code like production code. The coding standards that apply to production code should also apply to test code. If you follow the DRY principle for production code, you should also follow the DRY principle in the test code base.

The reason is that most coding standards and design principles exist to make code easier to maintain. Since test code is also code, this still applies.

There are a few exceptions, most notably in the area of security, assuming that the test code is never deployed to production.


Comments

What are your thoughts on abstractions in test code? I have never found it necessary, but have at times been tempted to create abstractions in my test projects that themselves would require testing. This is usually only a consideration in projects with a complex test setup or very large test suites.

I am also curious about your thoughts on adding additional logic to the implementation code with the sole purpose of making it easier to test. As an example: Adding a method to a class that makes it a more complete abstraction, but that method is never called in production code.

My own opinion is split between keeping tests readable and potentially easier to maintain by using reusable abstractions, and keeping them simple and robust.

2025-12-05 13:02 UTC

Thank you for writing. Regarding abstractions in test code, the answer depends on how you define the term abstraction. I find Robert C. Martin's definition useful.

"Abstraction is the elimination of the irrelevant and the amplification of the essential."

Using this definition, I have no objection to adding abstractions to test code if necessary, but as you imply, if the test code is simple enough, it may not be necessary. You could consider Test Data Builders, custom assertions, setup methods, etc. as test-specific abstractions. But again, if you simplify the System Under Test (SUT) by making data immutable, you don't need Test Data Builders either, so I agree that the need for substantial test abstractions may be a smell indicating that the SUT is too complicated.
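For illustration, here's a minimal C# sketch of a Test Data Builder; the Reservation type and its constructor are hypothetical:

    // Supplies sensible defaults so that each test only has to state
    // the values it actually cares about.
    public class ReservationBuilder
    {
        private DateTime at = new DateTime(2025, 12, 1, 18, 30, 0);
        private string email = "x@example.com";
        private int quantity = 1;

        public ReservationBuilder WithQuantity(int newQuantity)
        {
            quantity = newQuantity;
            return this;
        }

        public Reservation Build() => new Reservation(at, email, quantity);
    }

    // Usage in a test:
    // var reservation = new ReservationBuilder().WithQuantity(4).Build();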

Regarding your other question, I try to avoid APIs that exist exclusively to support testing, but even so, I think you can sort such additions into a few buckets. Adding new methods or constructors that exist only to be called from unit tests? I don't do that, although you could argue that Constructor Injection is just that: Dependency Injection is often introduced to support testing, but it's also applicable in production code. The constructor that receives dependencies is also used to compose the production application.
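A sketch of that dual use, with hypothetical names; the same constructor serves both the tests and the production Composition Root:

    public class ReservationsController
    {
        private readonly IReservationsRepository repository;

        // Constructor Injection: often introduced to enable testing with a
        // Test Double, but equally the way the production application
        // composes the object graph.
        public ReservationsController(IReservationsRepository repository)
        {
            this.repository = repository;
        }
    }

    // In a test:
    // var sut = new ReservationsController(new FakeReservationsRepository());
    // In the production Composition Root:
    // var c = new ReservationsController(new SqlReservationsRepository(connStr));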

Additionally, people often ask whether it's okay to make internal or private methods public so that it's possible to test them. In a code base created with test-driven development, this shouldn't be necessary, but again, if you have a legacy code base, other rules apply.

Another category of code that may at first spring into existence to support testing is what we may term inspection APIs. This could be a read-only property or field accessor that you can query in order to make an assertion. You may also decide to give a composite value structural equality to improve testing. Or, in property-based testing, you may use the There and back again technique, where one of the directions isn't entirely required. For example, if I were to repeat the Roman Numerals kata with property-based testing today, I'd add a 'formatter' or 'serializer', so that I could take any normal integer, format it as a Roman numeral, and then parse it again. This is quite common when testing parsers and the like. You could argue that the formatter isn't strictly required for the job, but it makes testing easier, and usually turns out to be handy in its own right.
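A minimal sketch of such a round-trip property, assuming FsCheck 2.x's xUnit integration; ToRoman and FromRoman are hypothetical:

    // using FsCheck; using FsCheck.Xunit;

    [Property]
    public Property RoundTrips()
    {
        // Roman numerals are conventionally defined for 1 through 3,999.
        var validIntegers = Gen.Choose(1, 3999).ToArbitrary();
        return Prop.ForAll(validIntegers, i =>
            Roman.FromRoman(Roman.ToRoman(i)) == i);
    }

The formatter (ToRoman) is the direction that isn't strictly required, but it's what makes the property easy to state.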

This tends to be a generalizable observation. A test is the first client of the SUT, so if a test needs a capability, it's likely that other client code might find that capability useful, too. Every now and then, I write a bit of test-helper code that I eventually decide to move to the production code base for that reason.

2025-12-10 15:35 UTC


Wish to comment?

You can add a comment to this post by sending me a pull request. Alternatively, you can discuss this post on Twitter or somewhere else with a permalink. Ping me with the link, and I may respond.

Published

Monday, 01 December 2025 15:03:00 UTC
