Builder isomorphisms
The Builder pattern is equivalent to the Fluent Builder pattern.
This article is part of a series of articles about software design isomorphisms. An isomorphism is when a bi-directional lossless translation exists between two representations. Such translations exist between the Builder pattern and two variations of the Fluent Builder pattern. Since the names sound similar, this is hardly surprising.
Given an implementation that uses one of those three patterns, you can translate your design into one of the other options. This doesn't imply that each is of equal value. When it comes to composability, both versions of Fluent Builder are superior to the classic Builder pattern.
A critique of the Maze Builder example #
In these articles, I usually first introduce the form presented in Design Patterns. The code example given by the Gang of Four is, however, problematic. I'll start by pointing out the problems and then proceed to present a simpler, more useful example.
The book presents an example centred on a MazeBuilder abstract class. The original example is in C++, but I here present my C# interpretation:
public abstract class MazeBuilder
{
    public virtual void BuildMaze() { }

    public virtual void BuildRoom(int room) { }

    public virtual void BuildDoor(int roomFrom, int roomTo) { }

    public virtual Maze GetMaze()
    {
        return null;
    }
}
As the book states, "the maze-building operations of MazeBuilder do nothing by default. They're not declared pure virtual to let derived classes override only those methods in which they're interested." This means that you could technically write a derived class that overrides only BuildRoom. That's unlikely to be useful, since GetMaze still returns null.
Moreover, the presence of the BuildMaze method indicates sequential coupling. A client (a Director, in the pattern language of Design Patterns) is supposed to first call BuildMaze before calling any of the other methods. What happens if a client forgets to call BuildMaze? What happens if client code calls the method after some of the other methods? What happens if it calls it multiple times?
Another issue with the sample code is that it's unclear how it accomplishes its stated goal of separating "the construction of a complex object from its representation." The StandardMazeBuilder presented seems tightly coupled to the Maze class to a degree where it's hard to see how to untangle the two. The book fails to make a compelling example by instead presenting a CountingMazeBuilder that never implements GetMaze. It never constructs the desired complex object.
Don't interpret this critique as a sweeping dismissal of the pattern, or the book in general. As this article series implies, I've invested significant energy in it. I consider the book seminal, but we've learned much since its publication in 1994. A common experience is that not all of the patterns in the book are equally useful, and of those that are, some are useful for different reasons than the book gives. The Builder pattern is an example of that.
The Builder pattern isn't useful only because it enables you to "separate the construction of a complex object from its representation." It's useful because it enables you to present an API that comes with good default behaviour, but which can be tweaked into multiple configurations. The pattern is useful even without polymorphism.
HTTP request Builder #
The HttpRequestMessage class is a versatile API with good default behaviour, but it can be a bit awkward if you want to make an HTTP request with a body and particular headers. You can often get around the problem by using methods like PostAsync on HttpClient, but sometimes you need to drop down to SendAsync. When that happens, you need to build your own HttpRequestMessage objects. A Builder can encapsulate some of that work.
public class HttpRequestMessageBuilder
{
    private readonly Uri url;
    private object? jsonBody;

    public HttpRequestMessageBuilder(string url) : this(new Uri(url)) { }

    public HttpRequestMessageBuilder(Uri url)
    {
        this.url = url;
        Method = HttpMethod.Get;
    }

    public HttpMethod Method { get; set; }

    public void AddJsonBody(object jsonBody)
    {
        this.jsonBody = jsonBody;
    }

    public HttpRequestMessage Build()
    {
        var message = new HttpRequestMessage(Method, url);
        BuildBody(message);
        return message;
    }

    private void BuildBody(HttpRequestMessage message)
    {
        if (jsonBody is null)
            return;

        string json = JsonConvert.SerializeObject(jsonBody);
        message.Content = new StringContent(json);
        message.Content.Headers.ContentType.MediaType = "application/json";
    }
}
Compared to Design Patterns' example, HttpRequestMessageBuilder isn't polymorphic. It doesn't inherit from a base class or implement an interface. As I pointed out in my critique of the MazeBuilder example, polymorphism doesn't seem to be the crux of the matter. You could easily introduce a base class or interface that defines the Method, AddJsonBody, and Build members, but what would be the point? Just like the MazeBuilder example fails to present a compelling second implementation, I can't think of another useful implementation of a hypothetical IHttpRequestMessageBuilder interface.
Notice that I dropped the Build prefix from most of the Builder's members. Instead, I reserved the word Build for the method that actually creates the desired object. This is consistent with most modern Builder examples I've encountered.
The HttpRequestMessageBuilder comes with a reasonable set of default behaviours. If you just want to make a GET request, you can easily do that:
var builder = new HttpRequestMessageBuilder(url);
HttpRequestMessage msg = builder.Build();

HttpClient client = GetClient();
var response = await client.SendAsync(msg);
Since you only call the builder's Build method, but never any of the other members, you get the default behaviour: a GET request with no body.
Notice that the HttpRequestMessageBuilder protects its invariants. It follows the maxim that you should never be able to put an object into an invalid state. Contrary to Design Patterns' StandardMazeBuilder, it uses its constructors to enforce an invariant. Regardless of what sort of HttpRequestMessage you want to build, it must have a URL. Both constructor overloads require all clients to supply one. (In order to keep the code example as simple as possible, I've omitted all sorts of precondition checks, like checking that url isn't null, that it's a valid URL, and so on.)
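For illustration, such checks might look something like this. This is only a sketch; the choice of checks and exception types is my own, not part of the original example:

public HttpRequestMessageBuilder(Uri url)
{
    // Hypothetical Guard Clauses; the example above omits these.
    if (url is null)
        throw new ArgumentNullException(nameof(url));
    if (!url.IsAbsoluteUri)
        throw new ArgumentException("The URL must be absolute.", nameof(url));

    this.url = url;
    Method = HttpMethod.Get;
}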
If you need to make a POST request with a JSON body, you can change the defaults:
var builder = new HttpRequestMessageBuilder(url);
builder.Method = HttpMethod.Post;
builder.AddJsonBody(new
{
    id = Guid.NewGuid(),
    date = "2020-03-22 19:30:00",
    name = "Ælfgifu",
    email = "ælfgifu@example.net",
    quantity = 1
});
HttpRequestMessage msg = builder.Build();

HttpClient client = GetClient();
var response = await client.SendAsync(msg);
Other combinations of Method and AddJsonBody are also possible. You could, for example, make a DELETE request without a body by only changing the Method.
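For example, building a DELETE request might look like this; a sketch that follows the same pattern as the POST example above:

var builder = new HttpRequestMessageBuilder(url);
builder.Method = HttpMethod.Delete;
HttpRequestMessage msg = builder.Build();

HttpClient client = GetClient();
var response = await client.SendAsync(msg);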
This incarnation of HttpRequestMessageBuilder is cumbersome to use. You must first create a builder object and then mutate it. Once you've invoked its Build method, you rarely need the object any longer, but the builder variable is still in scope. You can address those usage issues by refactoring a Builder to a Fluent Builder.
HTTP request Fluent Builder #
In the Gang of Four Builder pattern, no methods return anything, except the method that creates the object you're building (GetMaze in the MazeBuilder example, Build in the HttpRequestMessageBuilder example). It's always possible to refactor such a Builder so that the void methods return something. They can always return the object itself:
public HttpMethod Method { get; private set; }

public HttpRequestMessageBuilder WithMethod(HttpMethod newMethod)
{
    Method = newMethod;
    return this;
}

public HttpRequestMessageBuilder AddJsonBody(object jsonBody)
{
    this.jsonBody = jsonBody;
    return this;
}
Changing AddJsonBody is as easy as changing its return type and returning this. Refactoring the Method property is a bit more involved. Properties are a language feature of C# (and a few other languages), so this concern isn't universal. In languages without properties, things are simpler. In C#, however, I chose to make the property setter private and instead add a method that returns HttpRequestMessageBuilder. Perhaps it's a little confusing that the name of the method includes the word method, but keep in mind that the method in question is an HTTP method.
You can now create a GET request with a one-liner:
HttpRequestMessage msg = new HttpRequestMessageBuilder(url).Build();
You don't have to declare any builder variable to mutate. Even when you need to change the defaults, you can just start with a builder and keep on chaining method calls:
HttpRequestMessage msg = new HttpRequestMessageBuilder(url)
    .WithMethod(HttpMethod.Post)
    .AddJsonBody(new
    {
        id = Guid.NewGuid(),
        date = "2020-03-22 19:30:00",
        name = "Ælfgifu",
        email = "ælfgifu@example.net",
        quantity = 1
    })
    .Build();
This creates a POST request with a JSON message body.
We can call this pattern Fluent Builder because this version of the Builder pattern has a Fluent Interface.
This usually works well enough in practice, but is vulnerable to aliasing. What happens if you reuse an HttpRequestMessageBuilder object?
var builder = new HttpRequestMessageBuilder(url);
var deleteMsg = builder.WithMethod(HttpMethod.Delete).Build();
var getMsg = builder.Build();
As the variable names imply, the programmer responsible for these three lines of code incorrectly believed that without the call to WithMethod, the builder will use its default behaviour when Build is called. The previous line of code, however, mutated the builder object. Its Method property remains HttpMethod.Delete until another line of code changes it!
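To make the surprise concrete, here's a sketch of a unit test (hypothetical, written with xUnit.net) that would pass against the mutable Fluent Builder:

[Fact]
public void ReusedBuilderSurprisinglyRetainsMutatedMethod()
{
    var builder = new HttpRequestMessageBuilder("https://example.com");

    builder.WithMethod(HttpMethod.Delete).Build();
    HttpRequestMessage getMsg = builder.Build();

    // The programmer expected HttpMethod.Get, but the builder was mutated.
    Assert.Equal(HttpMethod.Delete, getMsg.Method);
}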
HTTP request Immutable Fluent Builder #
You can disarm the aliasing booby trap by making the Fluent Builder immutable. A good first step in that refactoring is making sure that all class fields are readonly:
private readonly Uri url;
private readonly object? jsonBody;
The url field was already marked readonly, so the change only applies to the jsonBody field. In addition to the class fields, don't forget any automatic properties:
public HttpMethod Method { get; }
The Method property previously had a private setter, but this is now gone. It's now strictly read-only.
Now that all data is read only, the only way you can 'change' values is via a constructor. Add a constructor overload that receives all data and chain the other constructors into it:
public HttpRequestMessageBuilder(string url) : this(new Uri(url)) { }

public HttpRequestMessageBuilder(Uri url) : this(url, HttpMethod.Get, null) { }

private HttpRequestMessageBuilder(Uri url, HttpMethod method, object? jsonBody)
{
    this.url = url;
    Method = method;
    this.jsonBody = jsonBody;
}
I'm usually not keen on allowing null arguments, but I made the all-encompassing constructor private. In that way, at least no client code gets the wrong idea.
The optional modification methods can now only do one thing: return a new object:
public HttpRequestMessageBuilder WithMethod(HttpMethod newMethod)
{
    return new HttpRequestMessageBuilder(url, newMethod, jsonBody);
}

public HttpRequestMessageBuilder AddJsonBody(object jsonBody)
{
    return new HttpRequestMessageBuilder(url, Method, jsonBody);
}
The client code looks the same as before, but now you no longer have an aliasing problem:
var builder = new HttpRequestMessageBuilder(url);
var deleteMsg = builder.WithMethod(HttpMethod.Delete).Build();
var getMsg = builder.Build();
Now deleteMsg represents a DELETE request, and getMsg truly represents a GET request.
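Expressed as a test (again a hypothetical sketch with xUnit.net), the immutable variation behaves as one would expect:

[Fact]
public void ReusedImmutableBuilderKeepsItsDefaults()
{
    var builder = new HttpRequestMessageBuilder("https://example.com");

    builder.WithMethod(HttpMethod.Delete).Build();
    HttpRequestMessage getMsg = builder.Build();

    // WithMethod returned a new Builder; the original was never mutated.
    Assert.Equal(HttpMethod.Get, getMsg.Method);
}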
Since this variation of the Fluent Builder pattern is immutable, it's natural to call it an Immutable Fluent Builder.
You've now seen how to refactor from Builder via Fluent Builder to Immutable Fluent Builder. If these three pattern variations are truly isomorphic, it should also be possible to move in the other direction. I'll leave it as an exercise for the reader to do this with the HTTP request Builder example. Instead, I will briefly discuss another example that starts at the Fluent Builder pattern.
Test Data Fluent Builder #
A prominent example of the Fluent Builder pattern would be the set of all Test Data Builders. I'm going to use the example I've already covered. You can visit the previous article for all details, but in summary, you can, for example, write code like this:
Address address = new AddressBuilder().WithCity("Paris").Build();
This creates an Address object with the City property set to "Paris". The Address class comes with other properties. You can trust that the AddressBuilder gave them values, but you don't know what they are. You can use this pattern in unit tests when you need an Address in Paris, but you don't care about any of the other data.
In my previous article, I implemented AddressBuilder as a Fluent Builder. I did that in order to stay as true to Nat Pryce's original example as possible. Whenever I use the Test Data Builder pattern in earnest, however, I use the immutable variation so that I avoid the aliasing issue.
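For example, an immutable AddressBuilder might look like this. This is only a sketch based on the fields of the mutable version shown in the next section; the private copy constructor is my own addition:

public class AddressBuilder
{
    private readonly string street;
    private readonly string city;
    private readonly PostCode postCode;

    public AddressBuilder() :
        this("", "", new PostCodeBuilder().Build()) { }

    private AddressBuilder(string street, string city, PostCode postCode)
    {
        this.street = street;
        this.city = city;
        this.postCode = postCode;
    }

    public AddressBuilder WithCity(string newCity)
    {
        // Return a new Builder instead of mutating this one.
        return new AddressBuilder(street, newCity, postCode);
    }

    public Address Build()
    {
        return new Address(street, city, postCode);
    }
}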
Test Data Builder as a Gang-of-Four Builder #
You can easily refactor a typical Test Data Builder like AddressBuilder to a shape more reminiscent of the Builder pattern presented in Design Patterns. Apart from the Build method that produces the object being built, change all other methods to void methods:
public class AddressBuilder
{
    private string street;
    private string city;
    private PostCode postCode;

    public AddressBuilder()
    {
        this.street = "";
        this.city = "";
        this.postCode = new PostCodeBuilder().Build();
    }

    public void WithStreet(string newStreet)
    {
        this.street = newStreet;
    }

    public void WithCity(string newCity)
    {
        this.city = newCity;
    }

    public void WithPostCode(PostCode newPostCode)
    {
        this.postCode = newPostCode;
    }

    public void WithNoPostcode()
    {
        this.postCode = new PostCode();
    }

    public Address Build()
    {
        return new Address(this.street, this.city, this.postCode);
    }
}
You can still build a test address in Paris, but it's now more inconvenient:
var addressBuilder = new AddressBuilder();
addressBuilder.WithCity("Paris");
Address address = addressBuilder.Build();
You can still use multiple Test Data Builders to build more complex test data, but the classic Builder pattern doesn't compose well.
var invoiceBuilder = new InvoiceBuilder();
var recipientBuilder = new RecipientBuilder();
var addressBuilder = new AddressBuilder();
addressBuilder.WithNoPostcode();
recipientBuilder.WithAddress(addressBuilder.Build());
invoiceBuilder.WithRecipient(recipientBuilder.Build());
Invoice invoice = invoiceBuilder.Build();
These seven lines of code create an Invoice object with an address without a post code. Compare that with the Fluent Builder example in the previous article. This is a clear example that while the variations are isomorphic, they aren't equally useful. The classic Builder pattern isn't as practical as one of the Fluent variations.
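For comparison, here's a sketch of how the same object graph might be composed with Fluent Builders. The exact API is in the previous article; this sketch assumes that WithAddress and WithRecipient return their respective Builders:

Invoice invoice = new InvoiceBuilder()
    .WithRecipient(new RecipientBuilder()
        .WithAddress(new AddressBuilder()
            .WithNoPostcode()
            .Build())
        .Build())
    .Build();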
You might protest that this variation of AddressBuilder, InvoiceBuilder, etcetera isn't equivalent to the Builder pattern. After all, the Builder shown in Design Patterns is polymorphic. That's really not an issue, though. Just extract an interface from the concrete builder:
public interface IAddressBuilder
{
    Address Build();
    void WithCity(string newCity);
    void WithNoPostcode();
    void WithPostCode(PostCode newPostCode);
    void WithStreet(string newStreet);
}
Make the concrete class implement the interface:
public class AddressBuilder : IAddressBuilder
You could argue that this adds no value. You'd be right. This goes contrary to the Reused Abstractions Principle. I think that the same criticism applies to Design Patterns' original description of the pattern, as I've already pointed out. The utility in the pattern comes from how it gives client code good defaults that it can then tweak as necessary.
Summary #
The Builder pattern was originally described in Design Patterns. Later, smart people like Nat Pryce figured out that by letting each mutating operation return the (mutated) Builder, such a Fluent API offered superior composability. A further improvement to the Fluent Builder pattern makes the Builder immutable in order to avoid aliasing issues.
All three variations are isomorphic. Work that one of these variations affords is also afforded by the other variations.
On the other hand, the variations aren't equally useful. Fluent APIs offer superior composability.
Next: Church encoding.
Non-exceptional averages
How do you code without exceptions? Here's one example.
Encouraging object-oriented programmers to avoid throwing exceptions is as fun as telling them to renounce null references. To be fair, exception-throwing is such an ingrained feature of C#, Java, C++, etcetera that it can be hard to see how to do without it.
To be clear, I don't insist that you pretend that exceptions don't exist in languages that have them. I'm also not advocating that you catch all exceptions in order to resurface them as railway-oriented programming. On the other hand, I do endorse the generally good advice that you shouldn't use exceptions for control flow.
What can you do instead? Despite all the warnings against railway-oriented programming, Either is still a good choice for a certain kind of control flow. Exceptions are for exceptional situations, such as network partitions, running out of memory, disk failures, and so on. Many run-time errors are both foreseeable and preventable. Prefer code that prevents errors.
There are a few ways you can do that. One of them is to protect invariants by enforcing preconditions. If you have a static type system, you can use the type system to prevent errors.
Average duration #
How would you calculate the average of a set of durations? You might, for example, need to calculate average duration of message handling for a polling consumer. C# offers many built-in overloads of the Average extension method, but none that calculates the average of TimeSpan values.
How would you write that method yourself?
It's not a trick question.
Based on my experience coaching development teams, this is a representative example:
public static TimeSpan Average(this IEnumerable<TimeSpan> timeSpans)
{
    var sum = TimeSpan.Zero;
    var count = 0;
    foreach (var ts in timeSpans)
    {
        sum += ts;
        count++;
    }
    return sum / count;
}
This gets the job done in most situations, but it has two error modes. It doesn't work if timeSpans is empty, and it doesn't work if it's infinite.
When the input collection is empty, you'll be trying to divide by zero, which isn't allowed. How do you deal with that? Most programmers I've met just shrug and say: don't call the method with an empty collection. Apparently, it's your responsibility as the caller. You have to memorise that this particular Average method has that particular precondition.
I don't think that's a professional position. This puts the burden on client developers. In a world like that, you have to learn by rote the preconditions of thousands of APIs.
What can you do? You could add a Guard Clause to the method.
Guard Clause #
Adding a Guard Clause doesn't really make the method much easier to reason about for client developers, but at least it protects an invariant.
public static TimeSpan Average(this IEnumerable<TimeSpan> timeSpans)
{
    if (!timeSpans.Any())
        throw new ArgumentOutOfRangeException(
            nameof(timeSpans),
            "Can't calculate the average of an empty collection.");

    var sum = TimeSpan.Zero;
    var count = 0;
    foreach (var ts in timeSpans)
    {
        sum += ts;
        count++;
    }
    return sum / count;
}
Don't get me wrong. I often write code like this because it makes it easier for me as a library developer to reason about the rest of the method body. On the other hand, it basically just replaces one run-time exception with another. Before I added the Guard Clause, calling Average with an empty collection would cause it to throw an OverflowException; now it throws an ArgumentOutOfRangeException.
From client developers' perspective, this is only a marginal improvement. You're still getting no help from the type system, but at least the run-time error is a bit more informative. Sometimes, that's the best you can do.
Finite collections #
The Average method has two preconditions, but we've only addressed one. The other precondition is that the input timeSpans must be finite. Unfortunately, this compiles:
static IEnumerable<T> InfinitelyRepeat<T>(T x)
{
    while (true)
        yield return x;
}

var ts = new TimeSpan(1, 2, 3, 4);
var tss = InfinitelyRepeat(ts);

var avg = tss.Average();
Since tss infinitely repeats ts, the Average method call (theoretically) loops forever; in fact it quickly overflows because it keeps adding TimeSpan values together.
Infinite collections aren't allowed. Can you make that precondition explicit?
I don't know of a way to test that timeSpans is finite at run time, but I can change the input type:
public static TimeSpan Average(this IReadOnlyCollection<TimeSpan> timeSpans)
{
    if (!timeSpans.Any())
        throw new ArgumentOutOfRangeException(
            nameof(timeSpans),
            "Can't calculate the average of an empty collection.");

    var sum = TimeSpan.Zero;
    foreach (var ts in timeSpans)
        sum += ts;
    return sum / timeSpans.Count;
}
Instead of accepting any IEnumerable<TimeSpan> as an input argument, I've now constrained timeSpans to an IReadOnlyCollection<TimeSpan>. This interface has been in .NET since .NET 4.5 (I think), but it lives a quiet existence. Few people know of it.
It's just IEnumerable<T> with an extra constraint:
public interface IReadOnlyCollection<T> : IEnumerable<T>
{
    int Count { get; }
}
The Count property strongly implies that the IEnumerable<T> is finite. Also, that the value is an int implies that the maximum size of the collection is 2,147,483,647. That's probably going to be enough for most day-to-day use.
You can no longer pass an infinite stream of values to the Average method. It's simply not going to compile. That both communicates and protects the invariant that infinite collections aren't allowed. It also makes the implementation code simpler, since the method doesn't have to count the elements. That information is already available from timeSpans.Count.
If a type can address one invariant, can it also protect the other?
Non-empty collection #
You can change the input type again. Here I've used this NotEmptyCollection<T> implementation:
public static TimeSpan Average(this NotEmptyCollection<TimeSpan> timeSpans)
{
    var sum = timeSpans.Head;
    foreach (var ts in timeSpans.Tail)
        sum += ts;
    return sum / timeSpans.Count;
}
Now client code can no longer call the Average method with an empty collection. That's also not going to compile. You've replaced a run-time check with a compile-time check. It's now clear to client developers who want to call the method that they must supply a NotEmptyCollection<TimeSpan>, instead of just any IReadOnlyCollection<TimeSpan>.
You can also simplify the implementation code:
public static TimeSpan Average(this NotEmptyCollection<TimeSpan> timeSpans)
{
    var sum = timeSpans.Aggregate((x, y) => x + y);
    return sum / timeSpans.Count;
}
How do we know that NotEmptyCollection<T> contains at least one element? The constructor enforces that constraint:
public NotEmptyCollection(T head, params T[] tail)
{
    if (head == null)
        throw new ArgumentNullException(nameof(head));

    this.Head = head;
    this.Tail = tail;
}
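For context, here's a minimal sketch of how the whole class might hang together. The article links to the full implementation, so the details here are illustrative only:

public class NotEmptyCollection<T> : IReadOnlyCollection<T>
{
    public NotEmptyCollection(T head, params T[] tail)
    {
        if (head == null)
            throw new ArgumentNullException(nameof(head));

        this.Head = head;
        this.Tail = tail;
    }

    public T Head { get; }
    public IReadOnlyCollection<T> Tail { get; }

    // One Head plus however many items are in the Tail.
    public int Count => this.Tail.Count + 1;

    public IEnumerator<T> GetEnumerator()
    {
        yield return this.Head;
        foreach (var item in this.Tail)
            yield return item;
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return this.GetEnumerator();
    }
}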
But wait, there's a Guard Clause and a throw there! Have we even accomplished anything, or did we just move the throw around?
Parse, don't validate #
A Guard Clause is a kind of validation. It validates that input fulfils preconditions. The problem with validation is that you have to repeat it in various different places. Every time you receive some data as an input argument, it may or may not have been validated. A receiving method can't tell. There's no flag on a string, or a number, or a collection, which is set when data has been validated.
Every method that receives such an input will have to perform validation, just to be sure that the preconditions hold. This leads to validation code being duplicated over a code base. When you duplicate code, you later update it in most of the places it appears, but forget to update it in a few places. Even if you're meticulous, a colleague may not know about the proper way of validating a piece of data. This leads to bugs.
As Alexis King explains in her Parse, don't validate article, 'parsing' is the process of validating input of a weaker type into a value of a stronger type. The stronger type indicates that validation has happened. It's like a Boolean flag that indicates that, yes, the data contained in the type has been through validation, and found to hold.
This is also the case of NotEmptyCollection<T>. If you have an object of that type, you know that it has already been validated. You know that the collection isn't empty. Even if you think that it looks like we've just replaced one exception with another, that's not the point. The point is that we've replaced scattered and unsystematic validation code with a single verification step.
You may still be left with the nagging doubt that I didn't really avoid throwing an exception. I think that the NotEmptyCollection<T> constructor strikes a pragmatic balance. If you look only at the information revealed by the type (i.e. what an IDE would display), you'll see this when you program against the class:
public NotEmptyCollection(T head, params T[] tail)
While you could, technically, pass null as the head parameter, it should be clear to you that you're trying to do something you're not supposed to do: head is not an optional argument. Had it been optional, the API designer should have provided an overload that you could call without any value. Such a constructor overload isn't available here, so if you try to cheat the compiler by passing null, don't be surprised to get a run-time exception.
For what it's worth, I believe that you can only be pragmatic if you know how to be dogmatic. Is it possible to protect NotEmptyCollection<T>'s invariants without throwing exceptions?
Yes, you could do that by making the constructor private and instead afford a static factory method that returns a Maybe or Either value. In Haskell, this is typically called a smart constructor. It's only a few lines of code, so I could easily show it here. I chose not to, though, because I'm concerned that readers will interpret this article the wrong way. I like Maybe and Either a lot, but I agree with the above critics that it may not be idiomatic in object-oriented languages.
Summary #
Encapsulation is central to object-oriented design. It's the notion that it's an object's own responsibility to protect its invariants. In statically typed object-oriented programming languages, objects are instances of classes. Classes are types. Types encapsulate invariants; they carry with them guarantees.
You can sometimes model invariants by using types. Instead of performing a run-time check on input arguments, you can declare constructors and methods in such a way that they only take arguments that are already guaranteed to be valid.
That's one way to reduce the amount of exceptions that your code throws.
Comments
Great post. I too prefer to avoid exceptions by strengthening preconditions using types.
Since tss infinitely repeats ts, the Average method call (theoretically) loops forever; in fact it quickly overflows because it keeps adding TimeSpan values together.
I am not sure what you mean here. My best guess is that you are saying that this code would execute forever except that it will overflow, which will halt the execution. However, I think the situation is ambiguous. This code is impure because, as the Checked and Unchecked documentation says, its behavior depends on whether or not the -checked compiler option is given. This dependency on the compiler option can be removed by wrapping this code in a checked or unchecked block, which would either result in a thrown exception or an infinite loop respectively.
This gets the job done in most situations, but it has two error modes. It doesn't work if timeSpans is empty, and it doesn't work if it's infinite.
There is a third error mode, and it exists in every implementation you gave. The issue of overflow is not restricted to the case of infinitely many TimeSpans. It only takes two. I know of or remember this bug as "the last binary search bug". That article shows how to correctly compute the average of two integers without overflowing. A correct implementation for computing the average of more than two integers is to map each element to a mixed fraction with the count as the divisor and then appropriately aggregate those values. The implementation given in this Quora answer seems correct to me.
I know all this is unrelated to the topic of your post, but I also know how much you prefer to use examples that avoid this kind of accidental complexity. Me too! However, I still like your example and can't think of a better one at the moment.
Tyson, thank you for writing. Given an infinite stream of values, the method throws an OverflowException. This is because TimeSpan addition explicitly does that:
> TimeSpan.MaxValue + new TimeSpan(1)
System.OverflowException: TimeSpan overflowed because the duration is too long.
  + System.TimeSpan.Add(System.TimeSpan)
  + System.TimeSpan.op_Addition(System.TimeSpan, System.TimeSpan)
This little snippet from C# Interactive also illustrates the third error mode that I hadn't considered. Good point, that.
Ah, yes. You are correct. Thanks for pointing out my mistake. Another way to verify this is inspecting TimeSpan.Add in Microsoft's reference source. I should have done those checks before posting. Thanks again!
The Maître d' kata
A programming kata.
I recently wrote about doing programming katas. You can find katas in many different places. Some sites exist exclusively for that purpose, such as the Coding Dojo or CodeKata. In other cases, you can find individual katas on blogs; one of my favourites is the Diamond kata. You can also lift exercises from other sources and treat them as katas. For example, I recently followed Mike Hadlow's lead and turned a job applicant test into a programming exercise. I've also taken exercises from books and repurposed them. For example, I've implemented the Graham Scan algorithm for finding convex hulls a couple of times.
In this article, I'll share an exercise that I've found inspiring myself. I'll call it the Maître d' kata.
I present no code in this article. Part of what makes the exercise interesting, I think, is to figure out how to model the problem domain. I will, however, later publish one of my attempts at the kata.
Problem statement #
Imagine that you're developing an online restaurant reservation system. Part of the behaviour of such a system is to decide whether or not to accept a reservation. At a real restaurant, employees fill various roles required to make it work. In a high-end restaurant, the maître d' is responsible for taking reservations. I've named the kata after this role. If you're practising domain-driven design, you might want to name your object, class, or module MaîtreD or some such.

The objective of the exercise is to implement the MaîtreD decision logic.
Reservations are accepted on a first-come, first-served basis. As long as the restaurant has available seats for the desired reservation, it'll accept it.
A reservation contains, at a minimum, a date and time as well as a positive quantity. Here are some examples:
Date | Quantity
August 8, 2050 at 19:30 | 3
November 27, 2022 at 18:45 | 4
February 27, 2014 at 13:22 | 12
Notice that dates can be in your future or past. You might want to assume that the maître d' would reject reservations in the past, but you can't assume when the code runs (or ran), so don't worry about that. Notice also that quantities are positive integers. While a quantity shouldn't be negative or zero, it could conceivably be large. I find it realistic, however, to keep quantities at low two-digit numbers or less.
A reservation will likely contain other data, such as the name of the person making the reservation, contact information such as email or phone number, possibly also an ID, and so on. You may add these details if you want to make the exercise more realistic, but they're not required.
I'm going to present one feature requirement at a time. If you read the entire article before you do the exercise, it'd correspond to gathering detailed requirements before starting to code. Alternatively, you could read the first requirement, do the exercise, read the next requirement, refactor your code, and so on. This would simulate a situation where your organisation gradually uncovers how the system ought to work.
Boutique restaurant #
As readers of my book may have detected, I'm a foodie. Some years ago I ate at Blanca in Brooklyn. That restaurant has one communal bar where everyone sits. There was room for twelve people, and dinner started at 19:00 whether you arrived on time or not. Such restaurants actually exist. It's an easy first step for the kata. Assume that the restaurant is only open for dinner, has no second seating, and a single shared table. This implies that the time of day of reservations doesn't matter, while the date still matters. Some possible test cases could be:
Table size | Existing reservations | Candidate reservation | Expected outcome
12 | none | Quantity: 1 | Accepted
12 | none | Quantity: 13 | Rejected
12 | none | Quantity: 12 | Accepted
4 | Quantity: 2, Date: 2023-09-14 | Quantity: 3, Date: 2023-09-14 | Rejected
10 | Quantity: 2, Date: 2023-09-14 | Quantity: 3, Date: 2023-09-14 | Accepted
10 | Quantity: 3, Date: 2023-09-14; Quantity: 2, Date: 2023-09-14; Quantity: 3, Date: 2023-09-14 | Quantity: 3, Date: 2023-09-14 | Rejected
4 | Quantity: 2, Date: 2023-09-15 | Quantity: 3, Date: 2023-09-14 | Accepted
This may not be an exhaustive set of test cases, but hopefully illustrates the desired behaviour. Try using the Devil's Advocate technique or property-based testing to identify more test cases.
Haute cuisine #
The single-shared-table configuration is unusual. Most restaurants have separate tables. High-end restaurants like those on the World's 50 best list, or those with Michelin stars often have only a single seating. This is a good expansion of the domain logic.
Assume that a restaurant has several tables, perhaps of different sizes. A table for four will seat one, two, three, or four people. Once a table is reserved, however, all the seats at that table are reserved. A reservation for three people will occupy a table for four, and the redundant seat is wasted. Obviously, the restaurant wants to maximise the number of guests, so it'll favour reserving two-person tables for one and two people, four-person tables for three and four people, and so on.
In order to illustrate the desired behaviour, here's some extra test cases to add to the ones already in place:
Tables | Existing reservations | Candidate reservation | Expected outcome
Two tables for two; two tables for four | none | Quantity: 4, Date: 2024-06-07 | Accepted
Two tables for two; two tables for four | none | Quantity: 5, Date: 2024-06-07 | Rejected
Two tables for two; one table for four | Quantity: 2, Date: 2024-06-07 | Quantity: 4, Date: 2024-06-07 | Accepted
Two tables for two; one table for four | Quantity: 3, Date: 2024-06-07 | Quantity: 4, Date: 2024-06-07 | Rejected
Again, you should consider adding more test cases if you're unit-testing the kata.
Second seatings #
Some restaurants (even some of those on the World's 50 best list) have a second seating. As a diner, you have a limited time (e.g. 2½ hours) to complete your meal. After that, other guests get your table.
This implies that you must now consider the time of day of reservations. You should also be able to use an arbitrary (positive) seating duration. All previous rules should still apply. New test cases include:
Seating duration | Tables | Existing reservations | Candidate reservation | Expected outcome
2 hours | Two tables for two; one table for four | Quantity: 4, Date: 2023-10-22, Time: 18:00 | Quantity: 3, Date: 2023-10-22, Time: 20:00 | Accepted
2½ hours | One table for two; two tables for four | Quantity: 2, Date: 2023-10-22, Time: 18:00; Quantity: 1, Date: 2023-10-22, Time: 18:15; Quantity: 2, Date: 2023-10-22, Time: 17:45 | Quantity: 3, Date: 2023-10-22, Time: 20:00 | Rejected
2½ hours | One table for two; two tables for four | Quantity: 2, Date: 2023-10-22, Time: 18:00; Quantity: 2, Date: 2023-10-22, Time: 17:45 | Quantity: 3, Date: 2023-10-22, Time: 20:00 | Accepted
2½ hours | One table for two; two tables for four | Quantity: 2, Date: 2023-10-22, Time: 18:00; Quantity: 1, Date: 2023-10-22, Time: 18:15; Quantity: 2, Date: 2023-10-22, Time: 17:45 | Quantity: 3, Date: 2023-10-22, Time: 20:15 | Accepted
If you make the seating duration short enough, you may even make room for a third seating, and so on.
Alternative table configurations #
If tables are rectangular, the restaurant has the option to combine several smaller tables into one larger one. Consider a typical restaurant layout: there's a round four-person table, as well as a few small tables that can't easily be pushed together. There's also three two-person tables where one guest sits against the wall, and the other diner faces him or her. These can be used individually, but the restaurant can also push two of them together to accommodate four people. This still leaves one of the adjacent two-person tables as an individual table, but the restaurant can also push all three tables together to accommodate six people.
Implement decision logic that allows for alternative table configurations. Remember to take seating durations into account. Consider both the configuration described above, as well as other configurations. Note that in that configuration, not all two-person tables can be combined.
More domain logic #
You can, if you will, invent extra rules. For example, restaurants have opening hours. A restaurant that opens at 18:00 and closes at 0:00 will not accept reservations for 13:30, regardless of table configuration, existing reservations, seating duration, and so on.
Building on that idea, some restaurants have different opening hours on various weekdays. Some are closed Mondays, serve dinner only Tuesday to Friday, but are then open for both lunch and dinner in the weekend.
Going in that direction, however, opens a can of worms. Perhaps the restaurant is closed on public holidays. Or perhaps it's explicitly open on public holidays, to cater for an audience that may not otherwise dine out. But implementing a holiday calendar is far from as simple as it sounds. That's the reason I left such rules out of the above specifications of the kata.
Another idea that you may consider is to combine communal bar seating with more traditional tables. The Clove Club is an example of a restaurant that does it that way.
Summary #
This is a programming kata description. Implement the decision logic of a maître d': Can the restaurant accept a given reservation?
After some time has gone by, I'll post at least one of my own attempts. You're welcome to leave a comment if you do the kata and wish to share your results.
Algebraic data types aren't numbers on steroids
A common red herring in the type debate.
I regularly get involved in debates about static versus dynamic typing. This post isn't an attempt to persuade anyone that static types are better. One of the reasons that I so often find myself debating this topic is that it intrigues me. I get the impression that most of the software luminaries that I admire (e.g. Kent Beck, Robert C. Martin, Michael Feathers) seem to favour dynamically typed languages. What is it that smart people have figured out that I haven't?
The debate continues, and this article isn't going to stop it. It may, perhaps, put one misconception to rest. There are still good arguments on either side. It's not my goal to dispute any of the good arguments. It's my goal to counter a common bad argument.
Misconception: static typing as numbers on steroids #
I get the impression that many people think about static types as something that has to do with strings and numbers - particularly numbers. Introductions to programming languages often introduce strings first. That's natural, since the most common first example is Hello, world!. After that usually follows an introduction to basic arithmetic, and that often includes an explanation about types of numbers - at least the distinction between integers and floating-point numbers. At the time I'm writing this, the online C# tutorial is a typical example of this. Real World Haskell takes the same approach to introducing types.
It's a natural enough way to introduce static types, but it seems to leave some learners with the impression that static types are mostly useful to prevent them from calling a method with a floating-point number when an integer was expected. That's the vibe I'm getting from this article by Robert C. Martin.
When presented with the notion of a 'stronger' type system, people with that mindset seem to extrapolate what they already know about static types.
If you mostly think of static types as a way to distinguish between various primitive types (such as strings and a zoo of number types), I can't blame you for extrapolating that notion. This seems to happen often, and it leads to a lot of frustration.
People who want 'stronger numbers' try to:
- Model natural numbers; i.e. to define a type that represents only positive integers
- Model positive numbers; i.e. rational or real numbers greater than zero
- Model non-negative numbers
- Model numbers in a particular range; e.g. between 0 and 100
- Model money in different currencies
Haskell does have a powerful type system, but it's a type system that builds on the concept of algebraic data types. (If you want to escape the jargon of that Wikipedia article, I recommend Tomas Petricek's lucid and straightforward explanation Power of mathematics: Reasoning about functional types.)
There are type systems that enable you to take the notion of numbers to the next level. This is called either refinement types or dependent types, contingent on what exactly it is that you want to do. Haskell doesn't support that out of the box. The most prominent dependently-typed programming language is probably Idris, which is still a research language. As far as I know, there's no 'production strength' languages that support refinement or dependent types, unless you consider Liquid Haskell to fit that description. Honestly, all this is at the fringe of my expertise.
I'll return to an example of this kind of frustration later, and also suggest a simple alternative. Before I do that, though, I'd like to outline what it is proponents of 'strong' type systems mean.
Make illegal states unrepresentable #
Languages like Haskell, OCaml, and F# have algebraic type systems. They still distinguish between various primitive types, but they take the notion of static types in a completely different direction. They introduce a new dimension of static type safety, so to speak.
It's a completely different way to think about static types. The advantage isn't that it prevents you from using a floating point where an integer was required. The advantage is that it enables you to model domain logic in a way that flushes out all sorts of edge cases at compile time.
I've previously described a real-world example of domain modelling with types, so I'm not going to repeat that effort here. Most business processes can be described as a progression of states. With algebraic data types, not only can you model what a valid state looks like - you can also model the state machine in such a way that you can't represent illegal states.
This notion is eloquently captured by the aphorism:

Make illegal states unrepresentable.

This is solving an entirely different type of problem than distinguishing between 32-bit and 64-bit integers. Writing even moderately complex code involves dealing with many edge cases. In most mainstream languages (including C# and Java), it's your responsibility to ensure that you've handled all edge cases. It's easy to overlook or forget a few of those. With algebraic data types, the compiler keeps track of that for you. That's a tremendous boon because it enables you to forget about those technical details and instead focus on adding value.
Scott Wlaschin wrote an entire book about domain modelling with algebraic data types. That's what we talk about when we talk about stronger type systems. Not 'numbers on steroids'.
Exhibit: summing notionals #
I consider this notion of strong type systems viewed as numbers on steroids a red herring. I don't blame anyone from extrapolating from what they already know. That's a natural way to try to make sense of the world. We all do it.
I came across a recent example of this way of thinking in a great article by Alex Nixon titled Static types are dangerously interesting. The following is in no way meant to excoriate Alex or his article, but I think it's a great example of how easily one can be led astray by thinking that strong type systems imply numbers on steroids.
You should read the article. It's well-written and uses more sophisticated features of Haskell than I'm comfortable with. The example problem it tries to solve is basically this: Given a set of trades, calculate the total notional in each currency. Consider a collection of trades:
Quantity, Ticker, Price, Currency
100, VOD.L, 1, GBP
200, VOD.L, 2, GBP
300, AAPL.O, 3, USD
50, 4151.T, 5, JPY
I'll let Alex explain what it is that he wants to do:

"I want to write a function which calculates the total notional in each currency. The word notional is a fancy way of saying price * quantity. Think of it as "value of the thing that changed hands"."

"For illustration, the function signature might look something like this: sumNotionals :: [Trade] -> Map Currency Rational. In English, it's a function that takes a list of trades and returns a map from currency to quantity."

If given the above trades, the output would be:

Currency, Notional
GBP, 500
USD, 900
JPY, 250
The article proceeds to explore how to model this problem with Haskell's strong type system. Alex wants to be able to calculate with money, but on the other hand, he wants the type system to prevent accidents. You can't add 100 GBP to 300 USD. The type system should prevent that.
Early on, he defines a sum type to model currencies:
data Currency = USD | GBP | JPY deriving (Eq, Ord, Show)
Things basically go downhill from there. Read the article; it's good.
Sum types should distinguish behaviour, not values #
I doubt that Alex Nixon views his proposed Currency type as anything but a proof of concept. In a 'real' code base, you'd enumerate all the currencies you'd trade, right?
I wouldn't. This is the red herring in action. Algebraic data types are useful because they enable us to distinguish between cases that we should treat differently, by writing specific code that deals with each case. That's not the case with a currency. You add US dollars together in exactly the same way that you add euros together. The currency doesn't change the behaviour of that operation.
But we can't just enable addition of arbitrary monetary values, right? After all, we shouldn't be able to add 20 USD and 300 DKK. At least, without an exchange rate, that shouldn't compile.
Let's imagine, for the sake of argument, that we encode all the currencies we trade into a type. What happens if our traders decide to trade a currency that they haven't previously traded? What if a country decides to reset their currency? What if a country splits into two countries, each with their own currency?
If you model currency as a type, you'd have to edit and recompile your code every time such an external event occurs. I don't think this is a good use of a type system.
Types should, I think, help us programmers identify the parts of our code bases where we need to treat various cases differently. They shouldn't be used to distinguish run-time values. Types provide value at compile time; run-time values only exist at run time. To paraphrase Kent Beck, keep things together that change together; keep things apart that don't.
I'd model currency as a run-time value, because the behaviour of money doesn't vary with the currency.
Boring Haskell #
How would I calculate the notionals, then? With boring Haskell. Really boring Haskell, in fact. I'm only going to need two imports and no language pragmas:
module Trades where

import Data.List
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map
Which types do I need? For this particular purpose, I think I'll just stick with a single Trade type:
data Trade = Trade {
    tradeQuantity :: Int
  , tradeTicker   :: String
  , tradePrice    :: Rational
  , tradeCurrency :: String }
  deriving (Eq, Show)
Shouldn't I introduce a Money type? I could, but I don't have to. As Alexis King so clearly explains, you don't have to model more than you need to do the job.
By not introducing a Money type and making it an instance of various type classes, I still prevent client code from adding things together that shouldn't be added together. You can't add Trade values together because Trade isn't a Num instance.
How do we calculate the notionals, then? It's easy; it's a one-liner:
sumNotionals :: Foldable t => t Trade -> Map String Rational
sumNotionals = foldl' (\m t -> Map.insertWith (+) (key t) (value t) m) Map.empty
  where key   (Trade _ _ _ currency)     = currency
        value (Trade quantity _ price _) = toRational quantity * price
Okay, that looks more like four lines of code, but the first is an optional type declaration, so it doesn't count. The key and value functions could be inlined to make the function a single (wide) line of code, but I made them two named functions in order to make the code more readable.
It gets the job done:
*Trades> sumNotionals trades
fromList [("GBP",500 % 1),("JPY",250 % 1),("USD",900 % 1)]
While this code addresses this particular problem, you probably consider it cheating because I've failed to address a wider concern. How does one model money in several currencies? I've previously covered that, including a simple Haskell example, but in general, I consider it more productive to have a problem and then go looking for a solution, rather than inventing a solution and go looking for a problem.
Summary #
When people enter into a debate, they use the knowledge they have. This is also the case in the debate about static versus dynamic types. Most programmers have experience with statically typed languages like C# or Java. It's natural to argue from what you know, and extrapolate from that.
I think that when confronted with a phrase like a more powerful type system, many people extrapolate and think that they know what that means. They think that it means statically typed numbers on steroids. That's a red herring.
That's usually not what we mean when we talk about more powerful type systems. We talk about algebraic data types, which make illegal states unrepresentable. Judged by the debates I've participated in, you can't extrapolate from mainstream type systems to algebraic data types. If you haven't tried programming with both sum and product types, you aren't going to grok what we mean when we talk about strong type systems.
Comments
"but in general, I consider it more productive to have a problem and then go looking for a solution, rather than inventing a solution and go looking for a problem."
This really resonates with me. I've been observing this in my current team and the tendency to "lookout" for the solutions to problems not yet present, just for the sake of "making it a robust solution" so to say.
I really like the properties of the Haskell solution. It handles all the currencies (no matter how many of them come in the dataset) without explicitly specifying them. And you can't accidentally add two different currencies together. The last part would be pretty verbose to implement in C#.
I'm not sure the above is a good example of what you're trying to say about algebraic data types. The problem can be solved identically (at least semantically) in C#. Granted, the definition of the Trade type would be way more verbose, but once you have that, the SumNotionals method is basically the same as your code, albeit with different syntax:
Dictionary<string, int> SumNotionals(IEnumerable<Trade> trades)
{
    return trades
        .GroupBy(t => t.Currency, t => t.Price * t.Quantity)
        .ToDictionary(g => g.Key, g => g.Sum());
}
Am I missing something?
You are right Andrew. The LINQ query indeed has the same properties as the Haskell function.
I'm not sure what I was thinking yesterday, but I think I subconsciously "wanted" C# to be less robust.
Andrew, thank you for writing. I didn't intend to say much about algebraic data types in this article. It wasn't the topic I had in mind. It can be difficult to communicate any but the simplest ideas, so it's possible that I didn't state my intention well enough. If so, the fault is mine. I've tried to demonstrate the power of algebraic data types before, so I didn't want to repeat the effort, since my agenda was another. That's why I linked to that other article.
The reason I discussed Alex Nixon's blog post was that it was the article that originally inspired me to write this article. I always try to include an example so that the reader gets to see the connection between the general concept and specifics.
I could have discussed Alex' article solely on its merits of showcasing failed attempts to model a 'stronger number'. That would, however, have left the reader without a resolution. I found that a bad way to structure my text. Readers would be left with questions. Okay Mark, that's all fine, but then how would you solve the problem?
So I decided to include a simple solution in an attempt to cut the Gordian knot, so to speak.
Mark, thanks for your response. It does indeed clear up my confusion. In my eagerness to learn more about algrebraic data types I read the second half of your post the wrong way. Thanks for clearing it up.
On doing katas
Approach programming katas differently than martial arts katas.
Would you like to become a better programmer? Then practice. It's no different from becoming a better musician, a better sports(wo)man, a better cook, a better artist, etcetera.
How do you practice programming?
There are many ways. Doing programming katas is one way.
Variation, not repetition #
When I talk to other programmers about katas, I often get the impression that people fail to extract value from the exercises. You can find catalogues of exercises on the internet, but there's a dearth of articles that discuss how to do katas.
Part of the problem is, I think, that the term comes from martial arts practice. In martial arts, one repeats the same movements over and over again in order to build up muscle memory. Repetition produces improvements.
Some people translate that concept literally. They try to do programming katas by doing the same exercise again and again, with no variation. After a few days or weeks, they stop because they can't see the point.
That's no wonder. Neither can I.
Programming and software design is mostly an intellectual (and perhaps artistic) endeavour. Unless you can't touch type, there's little need to build up muscle memory. You train your brain unlike you train your muscles. Repetition numbs the brain. Variation stimulates it.
Suggested variations #
I find that doing a kata is a great opportunity to explore alternatives. A kata is usually a limited exercise, which means that you can do it multiple times and compare outcomes.
You can find various kata catalogues on the internet. One of my favourites is the Coding Dojo. Among the katas there, I particularly like the Tennis kata. I'll use that as an example to describe how I often approach a kata.
The first time I encounter a kata I've never done before, I do it with as little fuss as possible. I use the programming language I'm most comfortable with, and don't attempt any stunts. I no longer remember when I first encountered the Tennis kata, but it's many years ago, and C# was my preferred language. I'd do the Tennis kata in C#, then, just to get acquainted with the problem.
Most good katas contain small surprises. They may sound simpler than they actually turn out to be. On the other hand, they're typically not overwhelmingly difficult. It pays to overcome the surprise the kata may hold without getting bogged down by trying some feat. The Tennis kata, for example, sounds easy, but most people stumble on the rules associated with deuce and advantage. How to model the API? How do you implement the algorithm?
Once you're comfortable with the essence of the exercise, introduce variations. Most of the variations I use take the form of some sort of constraint. Constraints liberate. Less is more.
Here's a list of suggestions:
- Follow test-driven development (TDD). That's my usual modus operandi, but if you don't normally practice TDD, a kata is a great opportunity.
- Use the (Gollum style) Devil's Advocate technique with TDD.
- Follow the Transformation Priority Premise.
- Do TDD without mocks.
- Do TDD with mocks.
- Use the Test Data Builder design pattern.
- Try property-based testing. I've done that with the Tennis kata multiple times.
- Put your mouse away.
- Hide the file tree in your editor or IDE. In Visual Studio, this is called the Solution Explorer, in Visual Studio Code it's just Explorer. Navigate the code by other means.
- Use another editor or IDE.
- Use another programming language. A kata is a great way to practice a new language. When you're learning a new language, you're often fighting with unfamiliar syntax, which is the reason I recommend that you first do the kata in a language with which you're familiar.
- Use only immutable data structures. This is a good first step towards learning functional programming.
- Keep the cyclomatic complexity of all methods at 1. I once did that with the Tennis kata.
- Use an unfamiliar API. If you normally use NUnit then try xUnit.net instead. Use a new Test Double library. Use a different assertion library. I once did the Tennis kata in Haskell using the lens library because I wanted to hone those skills. I've also done the Mark IV coffee maker exercise from APPP with Reactive Extensions.
- Employ a design pattern you'd like to understand better. I've had particular success with the Visitor design pattern.
- Refactor an existing kata solution to another design.
- Refactor another programmer's kata solution.
- Pair-program the kata.
- Use the Ping Pong pattern when pair programming.
- Mob-program it.
What I like about katas is that they're small enough that you can do the same exercise multiple times, but with different designs. This makes it easy to learn new ways of doing things, because you can compare different approaches to the same problem.
Conclusion #
The way that the idea of a programming kata was originally introduced is a bit unfortunate. On one hand, the metaphor may have helped adoption because martial arts are cool, and Japanese is a beautiful language. On the other hand, the underlying message is one of repetition, which is hardly helpful when it comes to exercising the brain.
Repetition dulls the brain, while variation stimulates it. Katas are great because they're short exercises, but you have to deliberately introduce diversity to make them work for you. You're not building muscle memory, you're forming new neural pathways.
Comments
Regarding kata variations, I'd like to mention Jeff Bay's Object Calisthenics. One could use all the rules at once or just a subset of them.
Just briefly, these are the rules (details can be found on the web):
- One level of indentation per method
- Don’t use the ELSE keyword
- Wrap all primitives and strings
- First class collections
- One dot per line
- Don't abbreviate
- Keep all entities small
- No classes with more than two instance variables
- No getters/setters/properties
Johannes, that list is a great addition to my suggestions. Thank you.
The case of the unbalanced brackets
A code mystery.
One of my clients was kind enough to let me look at some of their legacy code. As I was struggling to understand how it worked, I encountered something that looked like this:
ApplyDueAmountG89.Calculate(postState.PartialMebershipsBAT.Where(
    d => (d.Data.Choicetype == GarplyChoicetype.AtoC ||
        retirablePartialMembershipNr.Contains(d.Data.PartialMembershipNr)).ToList(),
    ApplyDueAmountG89.Situation.Depreciation,
    ApplyDueAmountG89.RecordType.Primo);
For the record, this isn't the actual code that my client gave me. I wouldn't post someone else's code without their permission. It is, however, a faithful imitation of the original code. What's wrong with it?
I'll wait.
Brackets #
Count the brackets. There's a missing closing bracket.
Yet, the code compiles. How?
Legacy code isn't humane code. There's a multitude of ways in which code can be obscure. This article describes one of them.
When brackets are nested and far apart, it's hard for the brain to parse and balance them. In this case, however, no amount of careful counting makes them balance.
Show whitespace #
Ever since I started programming in F#, I've turned on the Visual Studio feature that shows whitespace. F# does, after all, use significant whitespace (AKA the Off-side rule), and it helps to be able to detect if a tab character has slipped in among the spaces.
Visual Studio shows whitespace with pale blue dots and arrows. When that feature is turned on (Ctrl + e, s), the above code example looks different:
ApplyDueAmountG89.Calculate(postState.PartialMebershipsBAT.Where(
····d·=>·(d.Data.Choicetype·==·GarplyChoicetype.AtoC·||···············································
············retirablePartialMembershipNr.Contains(d.Data.PartialMembershipNr)).ToList(),
············ApplyDueAmountG89.Situation.Depreciation,
············ApplyDueAmountG89.RecordType.Primo);
Notice the space characters that seem to run off to the right of the || operator. What's at the end of those spaces? Yes, you guessed it: another Boolean expression, including the missing closing bracket:
d.Data.Choicetype == GarplyChoicetype.BtoC) &&
If you delete all those redundant spaces, this is the actual code:
ApplyDueAmountG89.Calculate(postState.PartialMebershipsBAT.Where(
    d => (d.Data.Choicetype == GarplyChoicetype.AtoC ||
        d.Data.Choicetype == GarplyChoicetype.BtoC) &&
        retirablePartialMembershipNr.Contains(d.Data.PartialMembershipNr)).ToList(),
    ApplyDueAmountG89.Situation.Depreciation,
    ApplyDueAmountG89.RecordType.Primo);
Imagine troubleshooting code like that, and not realising that there's another Boolean expression so far right that even a large screen doesn't show it. In the actual legacy code where I found this example, the extra Boolean expression started at column 209.
Conclusion #
Hiding significant code so far right that it's effectively invisible seems positively evil, but I don't think anyone did it deliberately. Rather, my guess is that someone performed a search-and-replace through the code base, and that this automatic change somehow removed a newline character.
In any case, keeping an eye on the line width of code could prevent something like this from happening. Stay within 80 characters.
Semigroup resonance FizzBuzz
An alternative solution to the FizzBuzz kata.
A common solution to the FizzBuzz kata is to write a loop from 1 to 100 and perform a modulo check for each number. Functional programming languages like Haskell don't have loops, so instead you'd typically solve the kata like this:
isAMultipleOf :: Integral a => a -> a -> Bool
isAMultipleOf i multiplier = i `mod` multiplier == 0

convert :: (Integral a, Show a) => a -> String
convert i | i `isAMultipleOf` 3 && i `isAMultipleOf` 5 = "FizzBuzz"
convert i | i `isAMultipleOf` 3 = "Fizz"
convert i | i `isAMultipleOf` 5 = "Buzz"
convert i = show i

main :: IO ()
main = mapM_ putStrLn $ convert <$> [1..100]
There's more than one way to skin this cat. In this article, I'll demonstrate one based on Semigroup resonance.
Fizz stream #
The fundamental idea is to use infinite streams that repeat at different intervals. That idea isn't mine, but I've never seen it done without resorting to some sort of Boolean conditional or pattern matching.
You start with a finite sequence of values that represent the pulse of Fizz values:
[Nothing, Nothing, Just "Fizz"]
If you repeat that sequence indefinitely, you now have a pulse of Fizz values:
fizzes :: [Maybe String]
fizzes = cycle [Nothing, Nothing, Just "Fizz"]
This stream of values is one-based, since the first two entries are Nothing, and only every third is Just "Fizz":
*FizzBuzz> take 9 fizzes
[Nothing, Nothing, Just "Fizz", Nothing, Nothing, Just "Fizz", Nothing, Nothing, Just "Fizz"]
If you're wondering why I chose a stream of Maybe String instead of just a stream of String values, I'll explain that now.
Buzz stream #
You can define an equivalent infinite stream of Buzz values:
buzzes :: [Maybe String]
buzzes = cycle [Nothing, Nothing, Nothing, Nothing, Just "Buzz"]
The idea is the same, but the rhythm is different:
*FizzBuzz> take 10 buzzes
[Nothing, Nothing, Nothing, Nothing, Just "Buzz", Nothing, Nothing, Nothing, Nothing, Just "Buzz"]
Why not simply generate a stream of String values, like the following?
*FizzBuzz> take 10 $ cycle ["", "", "", "", "Buzz"]
["", "", "", "", "Buzz", "", "", "", "", "Buzz"]
At first glance this looks simpler, but it makes it harder to merge the stream of Fizz and Buzz values with actual numbers. Distinguishing between Just and Nothing values enables you to use the Maybe catamorphism.
Resonance #
You can now zip the fizzes with the buzzes:
fizzBuzzes :: [Maybe String]
fizzBuzzes = zipWith (<>) fizzes buzzes
You combine the values by monoidal composition. Any Maybe over a Semigroup itself gives rise to a Monoid, and since String forms a Monoid (and therefore also a Semigroup) over concatenation, you can zip the two streams using the <> operator.
*FizzBuzz> take 20 fizzBuzzes
[Nothing, Nothing, Just "Fizz", Nothing, Just "Buzz", Just "Fizz", Nothing, Nothing, Just "Fizz", Just "Buzz", Nothing, Just "Fizz", Nothing, Nothing, Just "FizzBuzz", Nothing, Nothing, Just "Fizz", Nothing, Just "Buzz"]
Notice how the stream of fizzes enters into a resonance pattern with the stream of buzzes. Every fifteenth element the values Fizz and Buzz amplify each other and become FizzBuzz.
Numbers #
While you have an infinite stream of fizzBuzzes, you also need a list of numbers. That's easy:
numbers :: [String]
numbers = show <$> [1..100]
You simply take the range of numbers from 1 to 100 and map each number to its String representation using show:
*FizzBuzz> take 18 numbers
["1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18"]
Now you just need to figure out how to merge the fizzBuzzes with the numbers.
Zip with catamorphism #
While you can trivially zip fizzBuzzes with numbers, it doesn't solve the problem of which value to pick:
*FizzBuzz> take 5 $ zip numbers fizzBuzzes
[("1", Nothing), ("2", Nothing), ("3", Just "Fizz"), ("4", Nothing), ("5", Just "Buzz")]
You want to use the second element of each tuple when there's a value, and only use the first element (the number) when the second element is Nothing.
That's easily done with fromMaybe (you'll need to import Data.Maybe for that):
*FizzBuzz> fromMaybe "2" Nothing
"2"
*FizzBuzz> fromMaybe "3" $ Just "Fizz"
"Fizz"
That's just what you need, so zip numbers with fizzBuzzes using fromMaybe:
elements :: [String]
elements = zipWith fromMaybe numbers fizzBuzzes
This elements list contains the values the kata instructs you to produce:
*FizzBuzz> take 14 elements
["1", "2", "Fizz", "4", "Buzz", "Fizz", "7", "8", "Fizz", "Buzz", "11", "Fizz", "13", "14"]
fromMaybe is a specialisation of the Maybe catamorphism: fromMaybe d is equivalent to maybe d id. I always find it interesting when I can solve a problem with catamorphisms and monoids, because it shows that perhaps there's some value in knowing universal abstractions.
From 1 to 100 #
The kata instructions are to write a program that prints the numbers from 1 to 100, according to the special rules. You can use mapM_ putStrLn for that:
main :: IO ()
main = mapM_ putStrLn elements
When you execute the main function, you get the desired output:
1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz
16
... and so on.
Golf #
Haskell golfers may complain that the above code is unnecessarily verbose. I disagree, but you can definitely write the entire kata as a 'one-liner' if you want to:
main :: IO ()
main =
  mapM_ putStrLn $ zipWith fromMaybe (show <$> [1..100]) $
  zipWith (<>)
    (cycle [Nothing, Nothing, Just "Fizz"])
    (cycle [Nothing, Nothing, Nothing, Nothing, Just "Buzz"])
I've just mechanically in-lined all the values like fizzes, buzzes, and so on, and formatted the code so that it fits comfortably in an 80x24 box. Apart from that, I don't think that this is an improvement, but it has the same behaviour as the above, more verbose alternative.
Conclusion #
You can implement the FizzBuzz kata using the fundamental concepts catamorphism, semigroup, and monoid. No if-then-else instructions or pattern matching is required. Instead, you use the string concatenation semigroup to enable resonance between two pulses, and the Maybe catamorphism to combine with the list of numbers.
The case of the mysterious curly bracket
The story of a curly bracket that I thought was redundant. Not so.
One of my clients was kind enough to show me some of their legacy code. As I was struggling to understand how it worked, I encountered something like this:
// A lot of code has come before this. This is really on line 665, column 29.
foreach (BARLifeScheme_BAR scheme in postPartialMembership.SchemesFilterObsolete(BARCompany.ONETIMESUM, false))
{
    var schemeCalculated = (BARLifeSchemeCalculated_BAR)scheme.SchemeCalculatedObsolete[basis.Data.Basis1order];
    {
        decimal hfcFactor;
        if (postState.OverallFuturerule == BAROverallFuturerule.Standard)
        {
            var bonusKey = new BonusKey(postState.PNr());
            hfcFactor = 1M - CostFactory.Instance()
                .CostProvider(postState.Data.FrameContractNr, postState.StateDate)
                .GetAdministrationpercentageContribution(bonusKey, basis.Data.Basis1order) / 100M;
        }

        // Much more code comes after this...
    }
    // ...and after this...
}
For the record, this isn't the actual code that my client gave me. I wouldn't post someone else's code without their permission. It is, however, a faithful imitation of the original code.
The actual code started at line 665 and further to the right. It was part of a larger block of code with if statements within foreach loops within if statements within foreach loops, and so on. The foreach keyword actually appeared at column 29. The entire file was 1708 lines long.
The code has numerous smells, but here I'll focus on a single oddity.
Inexplicable bracket #
Notice the curly bracket on the line before hfcFactor. Why is it there?
Take a moment and see if you can guess.
It doesn't seem to be required. It just surrounds a block of code, but it belongs to none of the usual language constructs that would normally call for the use of curly brackets. There's no if, foreach, using, or try before it.
Residue #
I formed a theory as to why those brackets were in place. I thought that they might be the residue of an if statement that was no longer there; that, perhaps, the code had once looked like this:
if (something < somethingElse)
{
    decimal hfcFactor;
Later, a developer had discovered that the code in that block should always be executed, and had removed the if statement without removing the curly brackets.
That was my theory, but then I noticed that this structure appeared frequently throughout the code. Mysterious curly brackets were common, sometimes even nesting each other.
This idiom appeared too often for it to be the legacy of earlier epochs. It looked deliberate.
The real reason #
When I had the opportunity, I asked one of the developers.
He smiled sheepishly when he told me that those curly brackets were there to introduce a variable scope. The curly brackets protected variables within them from colliding with other variables elsewhere in the 744-line method.
Those scopes enabled programmers to declare variables with names that would otherwise collide with other variables. They even enabled developers to declare a variable with the same name, but a different type.
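If you've never seen the technique, a minimal, hypothetical example may clarify what such scopes do. Two sibling blocks can each declare a variable with the same name, even with different types, because neither block is in scope of the other:

public static decimal Calculate()
{
    decimal result;
    {
        // This 'factor' exists only within these curly brackets...
        decimal factor = 0.9M;
        result = 100M * factor;
    }
    {
        // ...so this sibling block can reuse the name, even with
        // another type.
        double factor = 0.5;
        result += (decimal)(200 * factor);
    }
    return result;
}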
I was appalled.
Legacy #
I didn't write this article to point fingers. I don't think that professional software developers deliberately decide to write obscure code.
Code becomes obscure over time. It's a slow, unyielding process. As Brian Foote and Joseph Yoder wrote in The Selfish Class (here quoted from Pattern Languages of Program Design 3, p. 461):
"Will highly comprehensible code, by virtue of being easy to modify, inevitably be supplanted by increasingly less elegant code until some equilibrium is achieved between comprehensibility and fragility?"
That's a disturbing thought. It suggests that 'good' code is unstable. I suspect that code tends to rot beyond comprehension. It's death by a thousand cuts. It's not any single edit that produces legacy code. It's the glacial, inexorable slide towards increasingly complicated code.
Conclusion #
Methods may grow to such sizes that variables collide. The solution isn't to add artificial variable scopes. The solution is to extract helper methods. Keep methods small.
Zone of Ceremony
Static typing doesn't have to involve much ceremony.
I seem to get involved in long and passionate debates about static versus dynamic typing on a regular basis. I find myself clearly on the side of static typing, but this article isn't about the virtues of static versus dynamic typing. The purpose is to correct a common misconception about statically typed languages.
Ceremony #
People who favour dynamically typed languages over statically typed languages often emphasise that they find the lack of ceremony productive. That seems reasonable; only, it's a false dichotomy.
"Ceremony is what you have to do before you get to do what you really want to do."
Dynamically typed languages do seem to be light on ceremony, but you can't infer from that that statically typed languages have to require lots of ceremony. Unfortunately, all mainstream statically typed languages belong to the same family, and they do involve ceremony. I think that people extrapolate from what they know; they falsely conclude that all statically typed languages must come with the overhead of ceremony.
It looks to me more as though there's an unfortunate Zone of Ceremony:
Such a diagram can never be anything but a simplification, but I hope that it's illuminating. C++, Java, and C# are all languages that involve ceremony. To the right of them are what we could term the trans-ceremonial languages. These include F# and Haskell.
In the following, I'll show some code examples in various languages. I'll discuss ceremony according to the above definition. The discussion focuses on the amount of preparatory work one has to do, such as creating a new file, declaring a new class, and declaring types. The discussion is not about the implementation code. For that reason, I've removed colouring from the implementation code, and emphasised the code that I consider ceremonial.
Low ceremony of JavaScript #
Imagine that you're given a list of numbers, as well as a quantity. The quantity is a number to be consumed. You must remove elements from the left until you've consumed at least that quantity. Then return the rest of the list.
> consume ([1,2,3], 1);
[ 2, 3 ]
> consume ([1,2,3], 2);
[ 3 ]
> consume ([1,2,3], 3);
[ 3 ]
> consume ([1,2,3], 4);
[]
The first example consumes only the leading 1, while both the second and the third examples consume both 1 and 2, because the sum of those values is 3, and the requested quantities are 2 and 3, respectively. The fourth example consumes all elements, because the requested quantity is 4, and you need all of 1, 2, and 3 before the sum is large enough. You have to pick strictly from the left, so you can't decide to just take the elements 1 and 3.
If you're wondering why such a function would be useful, here's my motivating example.
In JavaScript, you could implement the consume function like this:
var consume = function (source, quantity) {
    if (!source) {
        return [];
    }

    var accumulator = 0;
    var result = [];
    for (var i = 0; i < source.length; i++) {
        var x = source[i];
        if (quantity <= accumulator)
            result.push(x);
        accumulator += x;
    }
    return result;
}
I'm a terrible JavaScript programmer, so I'm sure that it could have been done more elegantly, but as far as I can tell, it gets the job done. I wrote some tests, and I have 17 passing test cases. The point isn't about how you write the function, but how much ceremony is required. In JavaScript you don't need to declare any types. Just name the function and its arguments, and you're ready to write code.
High ceremony of C# #
Contrast the JavaScript example with C#. The same function in C# would look like this:
public static class Enumerable
{
    public static IEnumerable<int> Consume(
        this IEnumerable<int> source,
        int quantity)
    {
        if (source is null)
            yield break;

        var accumulator = 0;
        foreach (var i in source)
        {
            if (quantity <= accumulator)
                yield return i;
            accumulator += i;
        }
    }
}
Here you have to declare the type of each method argument, as well as the return type of the method. You also have to put the method in a class. This may not seem like much overhead, but if you later need to change the types, editing is required. This can affect downstream callers, so simple type changes ripple through code bases.
It gets worse, though. The above Consume method only handles int values. What if you need to call the method with long arrays?
You'd have to add an overload:
public static IEnumerable<long> Consume(
    this IEnumerable<long> source,
    long quantity)
{
    if (source is null)
        yield break;

    var accumulator = 0L;
    foreach (var i in source)
    {
        if (quantity <= accumulator)
            yield return i;
        accumulator += i;
    }
}
Do you need support for short? Add an overload. decimal? Add an overload. byte? Add an overload.
No wonder people used to dynamic languages find this awkward.
Low ceremony of F# #
You can write the same functionality in F#:
let inline consume quantity =
    let go (acc, xs) x =
        if quantity <= acc
        then (acc, Seq.append xs (Seq.singleton x))
        else (acc + x, xs)
    Seq.fold go (LanguagePrimitives.GenericZero, Seq.empty) >> snd
There's no type declaration in sight, but nonetheless the function is statically typed. It has this somewhat complicated type:
quantity: ^a -> (seq< ^b> -> seq< ^b>)
  when ( ^a or ^b) : (static member ( + ) : ^a * ^b -> ^a)
  and ^a : (static member get_Zero : -> ^a)
  and ^a : comparison
While this looks arcane, it means that the function supports sequences of any type that comes with a zero value and supports addition and comparison. You can call it with 32-bit integers, decimals, and so on:
> consume 2 [1;2;3];;
val it : seq<int> = seq [3]
> consume 2m [1m;2m;3m];;
val it : seq<decimal> = seq [3M]
Static typing still means that you can't just call it with any type of value. An expression like consume "foo" [true;false;true] will not compile.
You can explicitly declare types in F# (like you can in C#), but my experience is that if you don't, type changes tend to just propagate throughout your code base. Change a type of a function, and upstream callers generally just 'figure it out'. If you think of functions calling other functions as a graph, you often only have to adjust leaf nodes even when you change the type of something deep in your code base.
Low ceremony of Haskell #
Likewise, you can write the function in Haskell:
consume quantity = reverse . snd . foldl go (0, [])
  where go (acc, ys) x = if quantity <= acc
                           then (acc, x:ys)
                           else (acc + x, ys)
Again, you don't have to explicitly declare any types. The compiler figures them out. You can ask GHCi about the function's type, and it'll tell you:
> :t consume
consume :: (Foldable t, Ord a, Num a) => a -> t a -> [a]
It's more compact than the inferred F# type, but the idea is the same. It'll compile for any Foldable container t and any type a that belongs to the classes of types called Ord and Num. Num supports addition and Ord supports comparison.
There's little ceremony involved with the types in Haskell or F#, yet both languages are statically typed. In fact, their type systems are more powerful than C#'s or Java's. They can express relationships between types that those languages can't.
Summary #
In debates about static versus dynamic typing, contributors often generalise from their experience with C++, Java, or C#. They dislike the amount of ceremony required in these languages, but falsely believe that it means that you can't have static types without ceremony.
The statically typed mainstream languages seem to occupy a Zone of Ceremony.
Static typing without ceremony is possible, as evidenced by languages like F# and Haskell. You could call such languages trans-ceremonial languages. They offer the best of both worlds: compile-time checking and little ceremony.
Comments
In your initial int C# example, I think your point is that method arguments and the return type require manifest typing. Then for your example about long (and comments about short, decimal, and byte), I think your point is that C#'s type system is primarily nominal. You then contrast those C# examples with F# and Haskell examples that utilize inferred and structural aspects of their type systems.
I also sometimes get involved in debates about static versus dynamic typing and find myself on the side of static typing. Furthermore, I also typically hear arguments against manifest and nominal typing instead of against static typing. In theory, I agree with those arguments; I also prefer type systems that are inferred and structural instead of those that are manifest and nominal.
I see the tradeoff as being among the users of the programming language, those responsible for writing and maintaining the compiler/interpreter, and what can be said about the correctness of the code. (In the rest of this paragraph, all statements about things being simple or complex are meant to be relative. I will also exaggerate for the sake of simplifying my statements.) For a dynamic language, the interpreter and coding are simple but there are no guarantees about correctness. For a static, manifest, and nominal language, the compiler is somewhere between simple and complex, the coding is complex, but at least there are some guarantees about correctness. For a static, inferred, structural language, the compiler is complex, coding is simple, and there are some guarantees about correctness.
Contrasting a dynamic language with one that is static, inferred, and structural, I see the tradeoff as being directly between the compiler/interpreter writers and what can be said about the correctness of the code, while the experience of those writing code in the language is mostly unchanged. I think that is your point being made by contrasting the JavaScript example (a dynamic language) with the F# and Haskell examples (that demonstrate the static, inferred, and structural behavior of their type systems).
While we are on the topic, I would like to say something that I think is controversial about duck typing. I think duck typing is "just" a dynamic type system that is also structural. This contradicts the lead of its Wikipedia article (linked above) as well as the subsection about structural type systems. They both imply that nominal vs structural typing is a spectrum that only exists for static languages. I disagree; I think dynamic languages can also exist on that spectrum. It is just that most dynamic languages are also structural. In contrast, I think that the manifest vs inferred spectrum exists for static languages but not for dynamic languages.
Nonetheless, that subsection makes a great observation. For structural languages, the difference between static and dynamic languages is not just some guarantees about correctness. Dynamic languages check for type correctness at the last possible moment. (That is saying more than saying that the type check happens at runtime.) For example, consider a function with dead code that "doesn't type". If the type system were static, then this function cannot be executed, but if the type system were dynamic, then it could be executed. More practically, suppose the function is a simple if-else statement with code in the else branch that "doesn't type" and that the corresponding Boolean expression always evaluates to true. If the type system were static, then this function cannot be executed, but if the type system were dynamic, then it could be executed.
In my experience, the typical solution of a functional programmer would be to strengthen the input types so that the else branch can be proved by the compiler to be dead code and then delete the dead code. This approach makes this one function simpler, and I generally am in favor of this. However, there is a sense in which we can't always repeat this for the calling function. Otherwise, we would end up with a program that is provably correct, which is impossible for a Turing-complete language. Instead, I think the practical solution is to (at some appropriate level) short-circuit the computation when given input that is not known to be good and either do nothing or report back to the user that the input wasn't accepted.
Using mostly both C# and TypeScript, two statically typed languages, I’ve experienced how it’s terser in TypeScript, essentially thanks to its type inference and its structural typing. I like the notion of “Ceremony” you gave to describe this and the fact that it’s not correlated to the kind of typing, dynamic or static 👍
Still, TypeScript is more verbose than F#, as we can see with the following translation of the F# code into TypeScript, using an object literal instead of a tuple because the former is better supported:
// const consume = (source: number[], quantity: number): number[]
const consume = (source: number[], quantity: number) =>
source.reduce(({ acc, xs }, x) =>
quantity <= acc
? { acc, xs: xs.concat(x) }
: { acc: acc + x, xs },
{ acc: 0, xs: [] as number[] }
).xs;
Checks:
> consume([1,2,3], 1)
[2,3]
> consume([1,2,3], 2)
[3]
> consume([1,2,3], 3)
[3]
> consume([1,2,3], 4)
[]
As we can see, the code is a little more verbose than in JavaScript but still terser than in C#. The returned type is inferred as number[], but the as number[] is a pity, necessary because the inferred type of the empty array [] is any[].
consume is not generic: TypeScript/JavaScript has only one primitive for numbers: number. It works for common scenarios, but there's no simple way to make it work with BigInt, for instance using the union type number | bigint. The more pragmatic option would be to copy-paste, replacing number with bigint and 0 with 0n.
Put cyclomatic complexity to good use
An actually useful software metric.
In Humane Code I argue that software development suffers from a lack of useful measurements. While I stand by that general assertion, a few code metrics can be useful. Cyclomatic complexity, while no silver bullet, can be put to good use.
Recap #
I think of cyclomatic complexity as a measure of the number of pathways through a piece of code. Even the simplest body of code affords a single pathway, so the minimum cyclomatic complexity is 1. You can easily 'calculate' the cyclomatic complexity of a method or function. You start at one, and then you count how many times if and for occur. For each of these keywords you find, you increment the number (which started at 1).
The specifics are language-dependent. The idea is to count branching and looping instructions. In C#, for example, you'd also have to include foreach, while, do, and each case in a switch block. In other languages, the keywords to count will differ.
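As a hypothetical illustration of those counting rules (this method isn't from the article's code base), consider a C# method that uses both foreach and switch:

// Start the count at 1 for the method itself.
public static int Score(IEnumerable<char> symbols)
{
    var score = 0;
    foreach (var c in symbols)      // foreach increments the count to 2.
    {
        switch (c)
        {
            case 'x':               // A case: increment to 3.
                score += 10;
                break;
            case 'y':               // Another case: increment to 4.
                score += 5;
                break;
            default:
                score += 1;
                break;
        }
    }
    return score;                   // Cyclomatic complexity: 4.
}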
What's the cyclomatic complexity of this TryParse method?
public static bool TryParse(string candidate, out UserNamePassworCredentials? credentials)
{
    credentials = null;
    var arr = candidate.Split(',');
    if (arr.Length < 2)
        return false;

    credentials = new UserNamePassworCredentials(arr[0], arr[1]);
    return true;
}
The cyclomatic complexity of this method is 2. You start with the number 1 and then increment it every time you find one of the branching keywords. In this case, there's only a single if, so increment 1 to 2. That's it. If you're in doubt, Visual Studio can calculate metrics for you. (It calculates other metrics as well, but I don't find those useful.)
Guide for unit testing #
I find cyclomatic complexity useful because it measures the number of pathways through a method. As such, it indicates the minimum number of test cases you ought to furnish. (Edit July 14, 2023: While this tends to work well in practice, it turns out that it's not strictly true in general. See the article Is cyclomatic complexity really related to branch coverage? and the comments for more details.) This is useful when reviewing code and tests.
Sometimes I'm presented with code that other people wrote. When I look through the production code, I consider its cyclomatic complexity. If, for example, a method has a cyclomatic complexity of 5, I'd expect to find at least five test cases to cover that method.
At other times, I start by reading the tests. The number of test cases gives me a rough indication of what degree of complexity to expect. If I see four distinct tests for the same method, I expect it to have a cyclomatic complexity of about 4.
I don't demand 100% coverage. Sometimes, people don't write tests for guard clauses, and I usually accept such omissions. On the other hand, I think that proper decision logic should be covered by tests. If I were to stick unwaveringly to cyclomatic complexity, that would make my reviews more objective, but not necessarily better. I could insist on 100% code coverage, but I don't consider that a good idea.
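For example, a guard clause like the one in this hypothetical method adds a branch, and thereby a second pathway, yet I'd often accept that no test exercises the throwing branch:

public static int CountWords(string text)
{
    // The guard clause is this method's only branch, so the
    // cyclomatic complexity is 2.
    if (text is null)
        throw new ArgumentNullException(nameof(text));

    return text.Split(' ', StringSplitOptions.RemoveEmptyEntries).Length;
}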
Presented with the above TryParse method, I'd expect to see at least two unit tests, since the cyclomatic complexity is 2.
The need for more test cases #
Two unit tests aren't enough, though. You could write these two tests:
[Fact]
public void TryParseSucceeds()
{
    var couldParse = UserNamePassworCredentials.TryParse("foo,bar", out var actual);

    Assert.True(couldParse);
    var expected = new UserNamePassworCredentials("foo", "bar");
    Assert.Equal(expected, actual);
}

[Fact]
public void TryParseFails()
{
    var couldParse = UserNamePassworCredentials.TryParse("foo", out var actual);

    Assert.False(couldParse);
    Assert.Null(actual);
}
Using the Devil's Advocate technique, however, this implementation of TryParse passes both tests:
public static bool TryParse(string candidate, out UserNamePassworCredentials? credentials)
{
    credentials = null;
    if (candidate != "foo,bar")
        return false;

    credentials = new UserNamePassworCredentials("foo", "bar");
    return true;
}
This is clearly not the correct implementation, but it has 100% code coverage. It also still has a cyclomatic complexity of 2. The metric suggests a minimum number of tests - not a sufficient number.
More test cases #
It often makes sense to cover each branch with a single parametrised test:
[Theory]
[InlineData("foo,bar", "foo", "bar")]
[InlineData("baz,qux", "baz", "qux")]
[InlineData("ploeh,fnaah", "ploeh", "fnaah")]
[InlineData("foo,bar,baz", "foo", "bar")]
public void TryParseSucceeds(string candidate, string userName, string password)
{
    var couldParse = UserNamePassworCredentials.TryParse(candidate, out var actual);

    Assert.True(couldParse);
    var expected = new UserNamePassworCredentials(userName, password);
    Assert.Equal(expected, actual);
}

[Theory]
[InlineData("")]
[InlineData("foobar")]
[InlineData("foo;bar")]
[InlineData("foo")]
public void TryParseFails(string candidate)
{
    var couldParse = UserNamePassworCredentials.TryParse(candidate, out var actual);

    Assert.False(couldParse);
    Assert.Null(actual);
}
Is a total of eight test cases the correct number? Cyclomatic complexity can't help you here. You'll have to rely on other heuristics, such as test-driven development, the transformation priority premise, and the Devil's Advocate.
Humane Code #
I also find cyclomatic complexity useful for another reason. I keep an eye on complexity because I care about code maintainability. In my Humane Code video, I discuss the magic number seven, plus or minus two.
When you read code, you essentially run a little emulator in your brain. You have to maintain state in order to interpret the code you look at. Will this conditional evaluate to true or false? Is the code going to exit that loop now? Is that array index out of bounds? You can only follow the code by keeping track of variables' contents, and your brain can keep track of approximately seven things.
Cyclomatic complexity is a measure of pathways - not how many things you need to keep track of. Still, in my experience, there seems to be a useful correlation. Code with high cyclomatic complexity tends to have many moving parts. There's too much to keep track of. With low cyclomatic complexity, on the other hand, the code involves few moving parts.
I use cyclomatic complexity 7 as an approximate maximum for that reason. It's only a rule of thumb, since I'm painfully aware that I'm transplanting experimental psychology to a context where no conclusions can be scientifically drawn. But like the 80/24 rule I find that it works well in practice.
Complexity of a method call #
Consider the above parametrised tests. Some of the test cases provide enough triangulation to defeat the Devil's attempt at hard-coding return values. This explains test values like "foo,bar", "baz,qux", and "ploeh,fnaah", but why did I include the "foo,bar,baz" test case? And why did I include the empty string as one of the test cases for TryParseFails?
When I write tests, I aspire to compose tests that verify the behaviour rather than the implementation of the System Under Test. The desired behaviour, I decided, is that any extra entries in the comma-separated input should be ignored. Likewise, if there are fewer than two entries, parsing should fail. There must be both a user name and a password.
Fortunately, this happens to be how Split already works. If you consider all the behaviour that Split exhibits, it encapsulates moderate complexity. It can split on multiple alternative delimiters, it can throw away empty entries, and so on. What would happen if you inline some of that functionality?
public static bool TryParse(string candidate, out UserNamePassworCredentials? credentials)
{
    credentials = null;
    var l = new List<string>();
    var element = "";
    foreach (var c in candidate)
    {
        if (c == ',')
        {
            l.Add(element);
            element = "";
        }
        else
            element += c;
    }
    l.Add(element);

    if (l.Count < 2)
        return false;

    credentials = new UserNamePassworCredentials(l[0], l[1]);
    return true;
}
This isn't as sophisticated as the Split method it replaces, but it passes all eight test cases. Why did I do this? To illustrate the following point.
What's the cyclomatic complexity now?
Keep in mind that the externally observable behaviour (as defined by eight test cases) hasn't changed. The cyclomatic complexity, however, has. It's now 4 - double the previous metric.
A method call (like a call to Split) can hide significant cyclomatic complexity. That's a desirable situation. This is the benefit that encapsulation offers: that you don't have to worry about implementation details as long as both caller and callee fulfil the contract.
When you calculate cyclomatic complexity, a method call doesn't increment the complexity, regardless of the degree of complexity that it encapsulates.
Summary #
Cyclomatic complexity is one of the rare programming metrics that I find useful. It measures the number of pathways through a body of code.
You can use it to guide your testing efforts. The number is the minimum number of tests you must write in order to cover all branches. You'll likely need more test cases than that.
You can also use the number as a threshold. I suggest that 7 ought to be the maximum cyclomatic complexity of a method or function. You're welcome to pick another number, but keeping an eye on cyclomatic complexity is useful. It tells you when it's time to refactor a complex method.
Cyclomatic complexity considers only the code that directly implements a method or function. That code can call other code, but what happens behind a method call doesn't impact the metric.
Comments
Do you know of a tool to calculate cyclomatic complexity for F#? It appears that the Visual Studio feature doesn't support it.
Ghillie, thank you for writing. I'm not aware of any such tool.
FWIW, it's not difficult to manually calculate cyclomatic complexity for F#, but clearly that doesn't help if you'd like to automate the process.
It might be a fine project for anyone looking for a way to contribute to the F# ecosystem.
Hi, Mark. Thanks for your article. I'm commenting because I'd like to learn more about your thoughts on mutation testing. I ask this because I know you're not the biggest fan of code coverage as a useful metric. I'm not either, or at least I wasn't, until I learned about mutation testing.
My current view is that code coverage is only (mostly) meaningless if you don't have a way of measuring the quality of the tests. Since mutation testing's goal is exactly that (to test the tests, if you will), my opinion is that, if you use a mutation testing tool, then code coverage becomes really useful and you should try to get to 100%. I've even written a post about this subject.
So, in short: what are your thoughts on mutation testing and how it affects the meaning of code coverage, if at all? Looking forward to reading your answer. A whole post on this would be even better!
Thanks!
Carlos, thank you for writing. I'm sympathetic to the idea of mutation testing, but apart from that, I have no opinion of it. I don't think that I ought to have an opinion about something with which I've no experience.
I first heard about mutation testing decades ago, but I've never come across a mutation testing tool for C# (or F#, for that matter). Can you recommend one?
Unfortunately, tooling is indeed one of the main Achilles heels of mutation testing, at least when it comes to .NET.
In the Java world, they have PIT, which is considered state of the art. For C#, I have tried a few tools, with no success. The most promising solution I've found so far, for C#, is Stryker.net, which is a port of Stryker, a mutation testing framework originally designed for JavaScript. The C# version is still in its early phases, but it's already usable and it looks very promising.
Is mutation testing the automated version of what Mark has called the Devil's Advocate technique?
Tyson, I actually discuss the relationship with mutation testing in that article.
Comments
It is also possible to write that one-liner with the original (non-fluent) builder implementation. Did you mean to show how it is possible with the fluent builder implementation to create a DELETE request with a one-liner? You have such an example two code blocks later.
Tyson, you are, of course, right. The default behaviour could also have been a one-liner with the non-fluent design. Every other configuration, however, can't be a one-liner with the Gang-of-Four pattern, while it can in the Fluent guise.
Among the example uses of your HttpRequestMessageBuilder, I see three HTTP verbs used: GET, DELETE, and POST. Furthermore, a body is added if and only if the method is POST. This matches my expectations gained from my limited experience doing web programming. If a GET or DELETE request had a body, or if a POST request did not have a body, then I would suspect that such behavior was a bug.
For the sake of a question that I would like to ask, let's suppose that a body must be added if and only if the method is POST. Under this assumption, HttpRequestMessageBuilder can create invalid messages. For example, it can create a GET request with a body, and it can create a POST request without a body. Under this assumption, how would you modify your design so that only valid messages can be created?
Tyson, thank you for another inspiring question! It gives me a good motivation to write about polymorphic Builders. I'll try to address this question in a future article.
Tyson, I've now attempted to answer your question in a new article.