Waiting to happen

Monday, 11 January 2021 06:31:00 UTC

A typical future test maintenance problem.

In a recent article I showed a unit test and parenthetically mentioned that it might have a future maintenance problem. Here's a more recent version of the same test. Can you tell what the future issue might be?

[Theory]
[InlineData("2023-11-24 19:00""juliad@example.net""Julia Domna", 5)]
[InlineData("2024-02-13 18:15""x@example.com""Xenia Ng", 9)]
[InlineData("2023-08-23 16:55""kite@example.edu"null, 2)]
[InlineData("2022-03-18 17:30""shli@example.org""Shanghai Li", 5)]
public async Task PostValidReservationWhenDatabaseIsEmpty(
    string at, string email, string name, int quantity)
{
    var db = new FakeDatabase();
    var sut = new ReservationsController(
        new SystemClock(),
        new InMemoryRestaurantDatabase(Grandfather.Restaurant),
        db);
 
    var dto = new ReservationDto
    {
        Id = "B50DF5B1-F484-4D99-88F9-1915087AF568",
        At = at,
        Email = email,
        Name = name,
        Quantity = quantity
    };
    await sut.Post(dto);
 
    var expected = new Reservation(
        Guid.Parse(dto.Id),
        DateTime.Parse(dto.At, CultureInfo.InvariantCulture),
        new Email(dto.Email),
        new Name(dto.Name ?? ""),
        dto.Quantity);
    Assert.Contains(expected, db.Grandfather);
}

To be honest, there's more than one problem with this test, but presently I'm going to focus on one of them.

Since you don't know the details of the implementation, you may not be able to tell what the problem might be. It's not a trick question. On the other hand, you might still be able to guess, just from the clues available in the above code listing.

The code shown here is part of the sample code base that accompanies my book Code That Fits in Your Head.

Sooner or later #

Here are some clues to consider: I'm writing this article in the beginning of 2021. Consider the dates supplied via the [InlineData] attributes. Seen from 2021, they're all in the future.

Notice, as well, that the sut takes a SystemClock dependency. You don't know the SystemClock class (it's a proprietary class in this code base), but from the name I'm sure that you can imagine what it represents.

From the perspective of early 2021, all dates are going to be in the future for more than a year. What is going to happen, though, once the test runs after March 18, 2022?

That test case is going to fail.

You can't tell from the above code listing, but the system under test rejects reservations in the past. Once March 18, 2022 has come and gone, the reservation at "2022-03-18 17:30" is going to be in the past. The sut will reject the reservation, and the assertion will fail.

You have to be careful with tests that rely on the system clock.

Test Double? #

The fundamental problem is that the system clock is non-deterministic. A typical reaction to non-determinism in unit testing is to introduce a Test Double of some sort. Instead of using the system clock, you could use a Stub as a stand-in for the real time.

This is possible here as well. The ReservationsController class actually depends on an IClock interface that SystemClock implements. You could define a test-specific ConstantClock implementation that would always return a constant date and time. This would actually work, but would rely on an implementation detail.
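
A minimal sketch of such a Stub might look like this (rendered here in F# for brevity, and assuming that IClock declares nothing more than the GetCurrentDateTime method that you'll see used below):

// Test Double that always reports the same, constant time.
type ConstantClock (frozen : DateTime) =
    interface IClock with
        member _.GetCurrentDateTime () = frozen

A test could then pass, say, ConstantClock (DateTime (2022, 1, 1, 18, 30, 0)) to the ReservationsController instead of SystemClock.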

At the moment, the ReservationsController only calls Clock.GetCurrentDateTime() a single time to get the current time. As soon as it has that value, it passes it to a pure function, which implements the business logic:

var now = Clock.GetCurrentDateTime();
if (!restaurant.MaitreD.WillAccept(now, reservations, reservation))
    return NoTables500InternalServerError();

A ConstantClock would work, but only as long as the ReservationsController only calls Clock.GetCurrentDateTime() once. If it ever began to call this method multiple times to detect the passing of time, using a constant time value would most likely again break the test. This seems brittle, so I don't want to go that way.

Relative time #

Working with the system clock in automated tests is easier if you deal with relative time. Instead of defining the test cases as absolute dates, express them as days into the future. Here's one way to refactor the test:

[Theory]
[InlineData(1049, 19, 00, "juliad@example.net", "Julia Domna", 5)]
[InlineData(1130, 18, 15, "x@example.com", "Xenia Ng", 9)]
[InlineData( 956, 16, 55, "kite@example.edu", null, 2)]
[InlineData( 433, 17, 30, "shli@example.org", "Shanghai Li", 5)]
public async Task PostValidReservationWhenDatabaseIsEmpty(
    int days,
    int hours,
    int minutes,
    string email,
    string name,
    int quantity)
{
    var at = DateTime.Now.Date + new TimeSpan(days, hours, minutes, 0);
    var db = new FakeDatabase();
    var sut = new ReservationsController(
        new SystemClock(),
        new InMemoryRestaurantDatabase(Grandfather.Restaurant),
        db);
 
    var dto = new ReservationDto
    {
        Id = "B50DF5B1-F484-4D99-88F9-1915087AF568",
        At = at.ToString("O"),
        Email = email,
        Name = name,
        Quantity = quantity
    };
    await sut.Post(dto);
 
    var expected = new Reservation(
        Guid.Parse(dto.Id),
        DateTime.Parse(dto.At, CultureInfo.InvariantCulture),
        new Email(dto.Email),
        new Name(dto.Name ?? ""),
        dto.Quantity);
    Assert.Contains(expected, db.Grandfather);
}

The absolute dates always were fairly arbitrary, so I just took the current date and converted the dates to a number of days into the future. Now, the first test case will always be a date 1,049 days (not quite three years) into the future, instead of November 24, 2023.
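
The conversion itself is a one-liner. Something like this F# sketch produces the offsets; the exact numbers depend, of course, on the day you run it:

// Days from today until one of the original absolute dates.
let daysUntil (date : DateTime) = (date.Date - DateTime.Now.Date).Days

Evaluated at the beginning of January 2021, daysUntil (DateTime (2023, 11, 24)) yields a number in the vicinity of 1,049.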

The test is no longer a failure waiting to happen.

Conclusion #

Treating test cases that involve time and date as relative to the current time, instead of as absolute values, is usually a good idea if the system under test depends on the system clock.

It's always a good idea to factor as much code as you can as pure functions, like the above WillAccept method. Pure functions don't depend on the system clock, so here you can safely pass absolute time and date values. Pure functions are intrinsically testable.

Still, as the test pyramid suggests, relying exclusively on unit tests isn't a good idea. The test shown in this article isn't really a unit test, but rather a state-based integration test. It relies on both the system clock and a Fake database. Expressing the test cases for this test as relative time values effectively addresses the problem introduced by the system clock.

There are plenty of other problems with the above test. One thing that bothers me is that the 'fix' made the line count grow. It didn't quite fit into an 80x24 box before, but now it's even worse! I should do something about that, but that's a topic for another article.


Dynamic test oracles for rho problems

Monday, 04 January 2021 06:26:00 UTC

A proof of concept of cross-branch testing for compiled languages.

Hillel Wayne recently published an article called Cross-Branch Testing. It outlines an approach to a class of problems that are hard to test. He mentions computer vision and simulations, among others. I can add that it's also difficult to write intuitive tests of convex hulls and Conway's game of life.

Hillel Wayne calls these rho problems, 'just because'. I'm totally going to run with that term.

In the article, he outlines an approach where you test an iteration of rho code against a 'last known good' snapshot. He uses git worktree to set up a snapshot of the reference implementation. He then writes a property that compares the refactored code's behaviour against the reference.

The example code is in Python, which is a language that I don't know. As far as I can tell, it works because Python is 'lightweight' enough that you can load and execute source code directly. I found that the approach makes a lot of sense, but I wondered how it would apply to statically typed, compiled languages. I decided to create a proof of concept in F#.

Test cases from Python #

My first problem was to port Hillel Wayne's example rho problem to F#. The function f doesn't have any immediate mathematical properties; nor is its behaviour intuitive. While I think that I understand what each line of code in f means, I don't really know Python. Since one of the properties of rho problems is that bugs can be subtle, I didn't trust myself to be able to port the Python code to F# without some test cases.

To solve that problem, I first found an online Python interpreter and pasted the f function into it. I then wrote code to print the output of a function call:

print(f'1, 2, 3, { f(1, 2, 3) }')

This line of code produces this output:

1, 2, 3, True

In other words, I could produce comma-separated values of input and actual output.

Hillel Wayne wrote properties using Hypothesis, which, it seems, by default runs each property 200 times.

In F# I'm going to use FsCheck, so I first used F# Interactive with FsCheck to produce 200 Python print statements like the above:

> Arb.Default.Int32().Generator
|> Gen.three
|> Gen.map (fun (x, y, z) -> sprintf "print(f'%i, %i, %i, { f(%i, %i, %i) }')" x y z x y z)
|> Gen.sample 100 200
|> List.iter (printfn "%s");;
print(f'-77, 67, 84, { f(-77, 67, 84) }')
print(f'58, -46, 3, { f(58, -46, 3) }')
print(f'21, 13, 94, { f(21, 13, 94) }')
...

This is a throwaway data pipeline that starts with an FsCheck integer generator, creates a triple from it, turns that triple into a Python print statement, and finally writes 200 of those to the console. The above code listing only shows the first three lines of output, while the rest are indicated by an ellipsis.

I copied those 200 print statements over to the online Python interpreter and ran the code. That produced 200 comma-separated values like these:

-77, 67, 84, False
58, -46, 3, False
21, 13, 94, True
...

These can serve as test cases for porting the Python code to F#.

Port to F# #

The next step is to write a parametrised test, using a provisional implementation of f:

[<Theory; MemberData(nameof fTestCases)>]
let ``test f`` x y z expected = expected =! f x y z

This test uses xUnit.net 2.4.1 and Unquote 5.0.0. As you can tell, apart from the annotations, it's a true one-liner. It calls the f function with the three supplied arguments x, y, and z and compares the return value with the expected value.

The code uses the new nameof feature of F# 5. fTestCases is a function in the same module that holds the test:

// unit -> seq<obj []>
let fTestCases () =
    use strm = typeof<Anchor>.Assembly.GetManifestResourceStream streamName
    use rdr = new StreamReader (strm)
    let s = rdr.ReadToEnd ()
    s.Split Environment.NewLine |> Seq.map csvToTestCase

It reads an embedded resource stream of test cases, like the above comma-separated values. Even though the values are in a text file, it's easier to embed the file in the test assembly, because it nicely dispenses with the problem of copying a text file to the appropriate output directory when the code compiles. That would, however, be a valid alternative.

Anchor is a dummy type to support typeof, and streamName is just a string constant that identifies the name of the stream.
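
Neither of those definitions is shown in this article, but they could hardly be simpler. Something like this would do (the resource name is a hypothetical example):

// Empty type whose only purpose is to give typeof a target in the test assembly.
type Anchor = class end
 
// Hypothetical name of the embedded resource that holds the CSV test cases.
let streamName = "Oracle.Tests.fTestCases.csv"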

The csvToTestCase function converts each line of comma-separated values to test cases for the [<Theory>] attribute:

// string -> obj []
let csvToTestCase (csv : string) =
    let values = csv.Split ','
    [|
        values.[0] |> Convert.ToInt32 |> box
        values.[1] |> Convert.ToInt32 |> box
        values.[2] |> Convert.ToInt32 |> box
        values.[3] |> Convert.ToBoolean |> box
    |]

It's not the safest code I could write, but this is, after all, a proof of concept.
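
If you wanted to harden it, a sketch like the following could check the field count and use TryParse, failing with a more helpful message when a line doesn't have the expected shape. For a proof of concept, though, that hardly seems worth the effort:

// More defensive variant of csvToTestCase (illustration only).
let csvToTestCaseSafe (csv : string) =
    match csv.Split ',' with
    | [| x; y; z; b |] ->
        match Int32.TryParse x, Int32.TryParse y, Int32.TryParse z, Boolean.TryParse b with
        | (true, x), (true, y), (true, z), (true, b) -> [| box x; box y; box z; box b |]
        | _ -> failwithf "Could not parse the values in line: %s" csv
    | _ -> failwithf "Unexpected number of fields in line: %s" csv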

The most direct port of the Python code I could produce is this:

// f : int -> int -> int -> bool
let f (x : int) (y : int) (z : int) =
    let mutable mx = bigint x
    let mutable my = bigint y
    let mutable mz = bigint z
    let mutable out = 0I
    for i in [0I..9I] do
        out <- out * mx + abs (my * mz - i * i)
        let x' = mx
        let y' = my
        let z' = mz
        mx <- y' + 1I
        my <- z'
        mz <- x'
    abs out % 100I < 10I

As F# code goes, it's disagreeable, but it passes all 200 test cases, so this will serve as an initial implementation. The out variable can grow to values that overflow even 64-bit integers, so I had to convert to bigint to get all test cases to pass.

If I make the same mutation to the code that Hillel Wayne did (abs out % 100I < 9I) two test cases fail. This gives me some confidence that I have a degree of problem coverage comparable to his.

Test oracle #

Now that a reference implementation exists, we can use it as a test oracle for refactorings. You can, for example, add a little test-only utility to your program portfolio:

open Prod
open FsCheck
 
[<EntryPoint>]
let main argv =
    Arb.Default.Int32().Generator
    |> Gen.three
    |> Gen.sample 100 200
    |> List.iter (fun (x, y, z) -> printfn "%i, %i, %i, %b" x y z (f x y z))
 
    0 // return an integer exit code

Notice that the last step in the pipeline is to output the values of each x, y, and z, as well as the result of calling f x y z.

This is a command-line executable that uses FsCheck to produce new test cases by calling the f function. It looks similar to the above one-off script that produced Python code, but this one instead just produces comma-separated values. You can run it from the command line to produce a new sample of test cases:

$ ./foracle
29, -48, -78, false
-8, -25, 13, false
-74, 34, -68, true
...

As above, I've used an ellipsis to indicate that in reality, 200 lines of comma-separated values scroll by.

When you use Bash, you can even pipe the output straight to a file:

$ ./foracle > csv.txt

You can now take the new comma-separated values and update the test values that the above test f test uses.

In other words, you use version n of f as a test oracle for version n + 1. When iteration n + 1 is a function of iteration n, you have a so-called dynamic system, so I think that we can call this technique dynamic test oracles.

The above foracle program is just a proof of concept. You could make it more flexible by making it take command-line arguments that would let you control the sample size and FsCheck's size parameter (the hard-coded 100 in the above code listing).
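
A sketch of that flexibility could look like the following, with hypothetical positional arguments and fallback defaults:

open Prod
open FsCheck
 
[<EntryPoint>]
let main argv =
    // Hypothetical: the first argument is FsCheck's size parameter,
    // the second is the number of test cases to sample.
    let size  = argv |> Array.tryItem 0 |> Option.map int |> Option.defaultValue 100
    let count = argv |> Array.tryItem 1 |> Option.map int |> Option.defaultValue 200
 
    Arb.Default.Int32().Generator
    |> Gen.three
    |> Gen.sample size count
    |> List.iter (fun (x, y, z) -> printfn "%i, %i, %i, %b" x y z (f x y z))
 
    0 // return an integer exit code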

Refactoring #

With the confidence instilled by the test cases, we can now refactor the f function:

// f : int -> int -> int -> bool
let f (x : int) (y : int) (z : int) =
    let imp (x, y, z, out) i =
        let out = out * x + abs (y * z - i * i)
        y + 1I, z, x, out
    let (_, _, _, out) = List.fold imp (bigint x, bigint y, bigint z, 0I) [0I..9I]
    abs out % 100I < 10I

Instead of all those mutable variables, the function is, after all, just a left fold. Phew, I feel better now.

Conclusion #

This article demonstrated a proof of concept where you use a known good version (n) of the code as a test oracle for the next version (n + 1). In interpreted languages, you may be able to load two versions of the code base side by side, but that's rarely practical in a statically typed compiled language like F#. Instead, I used a utility program to generate test cases that can be used as a data source for a parametrised test.

The example rho problem takes only integers as input, and returns a simple Boolean value, so in this case it's trivial to encode each test case as a line of comma-separated values. For (real) problems that may involve more complex types, it'd be better to use another serialisation format, such as JSON or XML.
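
As an illustration of what that could look like, here's a sketch that uses System.Text.Json to serialise one test case per line. The TestCase record is a hypothetical stand-in for whatever input and output types a real problem would involve:

open System.Text.Json
 
// Hypothetical shape of a single serialised test case.
[<CLIMutable>]
type TestCase = { X : int; Y : int; Z : int; Expected : bool }
 
// The oracle program would write one JSON document per line...
let serialise (tc : TestCase) = JsonSerializer.Serialize tc
 
// ...and the parametrised test would read them back.
let deserialise (line : string) = JsonSerializer.Deserialize<TestCase> line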

An outstanding issue is whether it's possible to implement shrinking behaviour when tests fail. Currently, the proof of concept just uses a set of serialised test cases. Normally, when a property-based testing framework like FsCheck detects a counter-example, it'll shrink the counter-example to values that are easier to understand than the original. This proof of concept doesn't do that. I'm not sure if a framework like FsCheck currently contains enough extensibility points to enable that sort of behaviour. I'll leave that question open for any reader interested in taking on that problem.


Comments

Hi Mark! Thanks for another thought provoking post.

I believe you and Hillel are writing characterization tests, which you've mentioned in the past. Namely, you're both using the behavior of existing code to verify the correctness of a refactor. The novel part to me is that Hillel is using code as the test oracle. Your solution serializes the oracle to a static file. The library I use for characterization tests (ApprovalTests) does this as well.

I believe shrinking is impossible when the oracle is a static file. However with Hillel's solution the oracle may be consulted at any time, making shrinking viable. If only there was a practical way to combine the two...

2021-01-06 23:01 UTC

A thought provoking post indeed!

In F# I'm going to use FsCheck...

I think that is a fine choice given the use case laid out in this post. In general though, I think Hedgehog is a better property-based testing library. Its killer feature is integrated shrinking, which means that all generators can also shrink and this extra power is essentially free.

For the record (because this can be a point of confusion), Haskell has QuickCheck and (Haskell) Hedgehog while F# has ports from Haskell called FsCheck and (FSharp) Hedgehog.

Jacob Stanley gave this excellent talk at YOW! Lambda Jam 2017 that explains the key idea that allows Hedgehog to have integrated shrinking. (Spoiler: A generic type that is invariant in its only type parameter is replaced by a different generic type that is monadic in its only type parameter. API design guided by functional programming for the win!)

Normally, when a property-based testing framework like FsCheck detects a counter-example, it'll shrink the counter-example to values that are easier to understand than the original.

In my experience, property-based tests written with QuickCheck / FsCheck do not normally shrink. I think this is because of the extra work required to enable shrinking. For an anecdotal example, consider this post by Fraser Tweedale. He believed that it would be faster to add (Haskell) Hedgehog as a dependency and create a generator for it than to add shrinking to his existing generator in QuickCheck.

In other words, you use version n of f as a test oracle for version n + 1. When iteration n + 1 is a function of iteration n, you have a so-called dynamic system, so I think that we can call this technique dynamic test oracles.

I am confused by this paragraph. I interpret your word "When" at the start of the second sentence as a common-language synonym for the mathematical word "If". Here is roughly how I understand that paragraph, where A stands for "version / iteration n of f" and B stands for "version / iteration n + 1 of f". "A depends on B. If B depends on A, then we have a dynamic system. Therefore, we have a dynamic system." I feel like the paragraph assumes (because it is obvious?) that version / iteration n + 1 of f depends on version / iteration n of f. In what sense is that the case?

An outstanding issue is whether it's possible to implement shrinking behaviour when tests fail. [...] I'll leave that question open for any reader interested in taking on that problem.

I am interested!

Quoting Mark and then Alex.

Hillel Wayne [...] outlines an approach where you test an iteration of rho code against a 'last known good' snapshot. He uses git worktree to set up a snapshot of the reference implementation. He then writes a property that compares the refactored code's behaviour against the reference.

The example code is in Python, which is a language that I don't know. As far as I can tell, it works because Python is 'lightweight' enough that you can load and execute source code directly. I found that the approach makes much sense, but I wondered how it would apply for statically typed, compiled languages. I decided to create a proof of concept in F#.

I believe shrinking is impossible when the oracle is a static file. However with Hillel's solution the oracle may be consulted at any time, making shrinking viable.

I want to start by elaborating on this to make sure we are all on the same page. I think of shrinking as involving two parts. On the one hand, we have the "shrink tree", which contains the values to test during the shrinking process. On the other hand, for each input tested, we need to know if the output should cause the test to pass or fail.

With Hedgehog, getting a shrink tree would not be too difficult. For a generator with type parameter 'a, the current generator API returns a "random" shrink tree of type 'a in which the root is an instance a of the type 'a and the tree completely depends on a. It should be easy to expose an additional function that accepts inputs of type 'a Gen and 'a and returns the tree with the given 'a as its root.

The difficult part is being able to query the test oracle. As Mark said, this seems easy to do in a dynamically-typed language like Python. In contrast, the fundamental issue with a statically-typed language like F# is that the compiled code exists in an assembly and only one assembly of a given name can be loaded in a given process at the same time.

This leads me to two ideas for workarounds. First, we could query the test oracle in a different process. I imagine an entry point could be generated that gives direct access to the test oracle. Then the test process could query the test oracle by executing this generated process. Second, we could generate a different assembly that exposes the test oracle. Then the test process could load this generated assembly to query the test oracle. The second approach seems like it would have a faster query time but be harder to implement. The first approach seems easier to implement but would probably have a slower query time. Maybe the query time would be fast enough though, especially if it was only queried when shrinking.

But given such a solution, who wants to restrict access to the test oracle only to shrinking? If the test oracle is always available, then there is no need to store input-output pairs. Instead of always checking that the system under test works correctly for a previously selected set of inputs, the property-based test can check that the system under test has the expected behavior for a unique set of inputs each time the property-based test is executed. In my experience, this is the default behavior of a property-based test.

One concern that some people might have is the idea of checking into the code repository the binary containing the test oracle. My first thought is that the size of this is likely so small that it does not matter. My second thought is that the binary containing the test oracle does not have to be included in the repository. Instead, the workflow could be to (1) create the property-based test that uses the compiled test oracle, (2) refactor the system under test, (3) observe that the property-based test still passes, (4) commit the refactored code, and (5) discard the remaining changes, which will delete the property-based test and the compiled test oracle.

Instead of completely removing that property-based test, it might be better to leave it there with input-output pairs stored in a file. Then the conversion from that state of the property-based test to the one that uses the compiled test oracle will be much smaller.

2021-01-07 19:27 UTC

Alex, thank you for writing. Yes, I think that calling this a Characterisation Test is correct. I wasn't aware of the ApprovalTests library; thank you for mentioning it.

When I originally wrote the article, I was under the impression that shrinking might still be possible. I admit, though, that I hadn't thought things through. I think that Tyson Williams argues convincingly that this isn't possible.

2021-01-15 13:42 UTC

Tyson, thank you for writing. I'm well aware of Hedgehog, and I'm keen on the way it works. I rarely use it, however, as it so far doesn't quite seem to have the same degree of 'industrial strength' to it that FsCheck has. Additionally, I find that shrinking is less important in practice than it might seem in theory.

I'm not sure that I understand your confusion about the term dynamic. You write:

"A depends on B."

Why do you write that? I don't think, in the way you've labelled iterations, that A depends on B.

When it comes to shrinking, I think that you convincingly argue that it can't be done unless one is able to query the oracle. As long as all you have is a list of test cases, you can't do that... unless, perhaps, you were to also generate and run all the shrunk test cases when you capture the list of test cases... Again, I haven't thought this through, so there may be some obvious gotcha that I'm missing.

I would be wary of trying to host the previous iteration in a different process. This is technically possible, but, in .NET at least, quite cumbersome. You'll have to deal with data marshalling and lifetime management of the second process. It was difficult enough in .NET framework back when remoting was a thing; I'm not even sure how one would go about such a problem in .NET Core - particularly if you want it to work on both Windows, Linux, and Mac. HTTP?

2021-01-16 13:24 UTC
[Hedgehog] so far doesn't quite seem to have the same degree of 'industrial strength' to it that FsCheck has.

That seems reasonable. I haven't used FsCheck, so I wouldn't know myself. Several of us are making many great improvements to F# Hedgehog right now.

When it comes to shrinking, I think that you convincingly argue that it can't be done unless one is able to query the oracle. As long as all you have is a list of test cases, you can't do that... unless, perhaps, you were to also generate and run all the shrunk test cases when you capture the list of test cases... Again, I haven't thought this through, so there may be some obvious gotcha that I'm missing.

That would be too many test cases. The shrinking process finds two values n and n+1 such that the test passes for n and fails for n+1. This can be viewed as a constraint. The objective is to minimize the value of n. The only way to ensure that some value is the minimum is to test all values smaller than it. However, doing so is not practical. Property-based tests use randomness precisely because it is not practical to test every possible value.

Instead, the shrinking process uses binary search as a heuristic to find a value satisfying the constraint that is rather small but not necessarily the smallest.

Why do you write that? I don't think, in the way you've labelled iterations, that A depends on B.

Ok. I will go slower and ask smaller questions.

When iteration n + 1 is a function of iteration n [...]

Does this phrase have the same meaning if "When" is replaced by "If"?

In other words, you use version n of f as a test oracle for version n + 1. When iteration n + 1 is a function of iteration n, you have a so-called dynamic system, so I think that we can call this technique dynamic test oracles.

I understand how version n of f is being used as a test oracle for version n + 1. In this blog post, in what sense is something of iteration n + 1 a function of iteration n?

2021-01-30 16:36 UTC

Tyson, thank you for writing.

"Does this phrase have the same meaning if "When" is replaced by "If"?"
I'm not sure that there's a big difference, but then, I don't know how you parse that. As Kevlin Henney puts it,

"The act of describing a program in unambiguous detail and the act of programming are one and the same."

It seems to me that you're attempting to parse my prose as though it was an unambiguous description, which it can't be.

A dynamic system is a system such that xₜ₊₁ = f(xₜ), where xₜ is the value of x at time t, and xₜ₊₁ is the value of x at time t + 1. For simplicity, this is the definition of a dynamic system in discrete time. Mathematically, you can also express it in continuous time using calculus.

2021-02-02 6:46 UTC
It seems to me that you're attempting to parse my prose as though it was an unambiguous description, which it can't be.

Oh, yes. My mistake. I meant to phrase it slightly differently, thereby changing it from essentially an impossible question to one that only you can answer. Here is what I meant to ask.

Does this phrase have the same meaning to you if "When" is replaced by "If"?

No matter though. I simply misunderstood your description / definition of a dynamical system. I understand now. Thanks for your patience and willingness to explain it to me again.

2021-03-25 03:47 UTC

An F# demo of validation with partial data round trip

Monday, 28 December 2020 09:22:00 UTC

An F# port of the previous Haskell proof of concept.

This article is part of a short article series on applicative validation with a twist. The twist is that validation, when it fails, should return not only a list of error messages; it should also retain that part of the input that was valid.

In the previous article you saw a Haskell proof of concept that demonstrated how to compose the appropriate applicative functor with a suitable semigroup to make validation work as desired. In this article, you'll see how to port that proof of concept to F#.

Data definitions #

Like in the previous article, we're going to need some types. These are essentially direct translations of the corresponding Haskell types:

type Input = { Name : string option; DoB : DateTime option; Address : string option}
type ValidInput = { Name : string; DoB : DateTime; Address : string }

The Input type plays the role of the input we'd like to validate, while ValidInput represents validated data.

If you're an F# fan, you can bask in the reality that F# records are terser than Haskell records. I like both languages, so I have mixed feelings about this.

Computation expression #

Haskell's main workhorse is its type class system. F# doesn't have that, but it has computation expressions, which in F# 5 got support for applicative functors. That's just what we need, and it turns out that there isn't a lot of code we have to write to make all of this work.

To recap from the Haskell proof of concept: We need a Result-like container that returns a tuple for errors. One element of the tuple should be an endomorphism, which forms a monoid (and therefore also a semigroup). The other element should be a list of error messages - another monoid. In F# terms we'll write it as (('b -> 'b) * 'c list).

That's a tuple, and since tuples form monoids when their elements do, the Error part of Result supports accumulation.

To support an applicative computation expression, we're going to need a way to merge two results together. This is by far the most complicated piece of code in this article, all six lines of code:

module Result =
    // Result<'a       ,(('b -> 'b) * 'c list)> ->
    // Result<'d       ,(('b -> 'b) * 'c list)> ->
    // Result<('a * 'd),(('b -> 'b) * 'c list)>
    let merge x y =
        match x, y with
        | Ok xres, Ok yres -> Ok (xres, yres)
        | Error (f, e1s), Error (g, e2s)  -> Error (f >> g, e2s @ e1s)
        | Error e, Ok _ -> Error e
        | Ok _, Error e -> Error e

The merge function composes two input results together. The results have Ok types called 'a and 'd, and if they're both Ok values, the return value is an Ok tuple of 'a and 'd.

If one of the results is an Error value, it beats an Ok value. The only moderately complex operation is when both are Error values.

Keep in mind that an Error value in this instance contains a tuple of the type (('b -> 'b) * 'c list). The first element is an endomorphism 'b -> 'b and the other element is a list. The merge function composes the endomorphisms f and g by standard function composition (the >> operator), and concatenates the lists with the standard @ list concatenation operator.

Because I'm emulating how the original forum post's code behaves, I'm concatenating the two lists with the rightmost going before the leftmost. It doesn't make any other difference than determining the order of the error list.

With the merge function in place, the computation expression is a simple matter:

type ValidationBuilder () =
    member _.BindReturn (x, f) = Result.map f x
    member _.MergeSources (x, y) = Result.merge x y

The last piece is a ValidationBuilder value:

[<AutoOpen>]
module ComputationExpressions =
    let validation = ValidationBuilder ()

Now, whenever you use the validation computation expression, you get the desired functionality.

Validators #

Before we can compose some validation functions, we'll need to have some validators in place. These are straightforward translations of the Haskell validation functions, starting with the name validator:

// Input -> Result<string,((Input -> Input) * string list)>
let validateName ({ Name = name } : Input) =
    match name with
    | Some n when n.Length > 3 -> Ok n
    | Some _ ->
        Error (
            (fun (args : Input) -> { args with Name = None }),
            ["no bob and toms allowed"])
    | None -> Error (id, ["name is required"])

When the name is too short, the endomorphism resets the Name field to None.

The date-of-birth validation function works the same way:

// DateTime -> Input -> Result<DateTime,((Input -> Input) * string list)>
let validateDoB (now : DateTime) ({ DoB = dob } : Input) =
    match dob with
    | Some d when d > now.AddYears -12 -> Ok d
    | Some _ ->
        Error (
            (fun (args : Input) -> { args with DoB = None }),
            ["get off my lawn"])
    | None -> Error (id, ["dob is required"])

Again, like in the Haskell proof of concept, instead of calling DateTime.Now from within the function, I'm passing now as an argument to keep the function pure.

The address validation concludes the set of validators:

// Input -> Result<string,(('a -> 'a) * string list)>
let validateAddress ({ Address = address }: Input) =
    match address with
    | Some a -> Ok a
    | None -> Error (id, ["add1 is required"])

The inferred endomorphism type here is the more general 'a -> 'a, but it's compatible with Input -> Input.

Composition #

All three functions have compatible Error types, so they ought to compose with the applicative computation expression to produce the desired behaviour:

// DateTime -> Input -> Result<ValidInput,(Input * string list)>
let validateInput now args =
    validation {
        let! name = validateName args
        and! dob = validateDoB now args
        and! address = validateAddress args
        return { Name = name; DoB = dob; Address = address }
    }
    |> Result.mapError (fun (f, msgs) -> f args, msgs)

The validation expression alone produces a Result<ValidInput,((Input -> Input) * string list)> value. To get an Input value in the Error tuple, we need to 'run' the Input -> Input endomorphism. The validateInput function does that by applying the endomorphism f to args when mapping the error with Result.mapError.
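
As a quick illustration, consider the example input from the first article in this series. The function both reports the error message and wipes the invalid Name field, while retaining the valid data. This is a sketch of the expected values, not a verbatim REPL transcript:

let eightYearsAgo = DateTime.Now.AddYears -8
let input : Input = { Name = Some "Tom"; DoB = Some eightYearsAgo; Address = Some "x" }
 
// validateInput DateTime.Now input evaluates to:
// Error ({ Name = None; DoB = Some eightYearsAgo; Address = Some "x" },
//        ["no bob and toms allowed"])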

Tests #

To test that the validateInput works as intended, I first copied all the code from the original forum post. I then wrote eight characterisation tests against that code to make sure that I could reproduce the desired functionality.

I then wrote a parametrised test against the new function:

[<Theory; ClassData(typeof<ValidationTestCases>)>]
let ``Validation works`` input expected =
    let now = DateTime.Now
    let actual = validateInput now input
    expected =! actual

The ValidationTestCases class is defined like this:

type ValidationTestCases () as this =
    inherit TheoryData<Input, Result<ValidInput, Input * string list>> ()

This class produces a set of test cases, where each test case contains an input value and the expected output. To define the test cases, I copied the eight characterisation tests I'd already produced and adjusted them so that they fit the simpler API of the validateInput function. Here are a few examples:

let eightYearsAgo = DateTime.Now.AddYears -8
do this.Add (
    { Name = Some "Alice"; DoB = Some eightYearsAgo; Address = None },
    Error (
        { Name = Some "Alice"; DoB = Some eightYearsAgo; Address = None },
        ["add1 is required"]))
 
do this.Add (
    { Name = Some "Alice"; DoB = Some eightYearsAgo; Address = Some "x" },
    Ok ({ Name = "Alice"; DoB = eightYearsAgo; Address = "x" }))

The first case expects an Error value because the Input value has no address. The other test case expects an Ok value because all input is fine.

I copied all eight characterisation tests over, so now I have those eight tests, as well as the modified eight tests for the applicative-based API shown here. All sixteen tests pass.

Conclusion #

I find this solution to the problem elegant. It's always satisfying when you can implement what at first glance looks like custom behaviour using universal abstractions.

Besides the aesthetic value, I also believe that this keeps a team more productive. These concepts of monoids, semigroups, applicative functors, and so on, are concepts that you only have to learn once. Once you know them, you'll recognise them when you run into them. This means that there's less code to understand.

An ad-hoc implementation as the original forum post suggested (even though it looked quite decent) always puts the onus on a maintenance developer to read and understand even more one-off infrastructure code.

With an architecture based on universal abstractions and well-documented language features, a functional programmer that knows these things will be able to pick up what's going on without much trouble. Specifically, (s)he will recognise that this is 'just' applicative validation with a twist.

This article is the December 28 entry in the F# Advent Calendar in English 2020.

Next: A C# port of validation with partial data round trip.


A Haskell proof of concept of validation with partial data round trip

Monday, 21 December 2020 06:54:00 UTC

Which Semigroup best addresses the twist in the previous article?

This article is part of a short article series on applicative validation with a twist. The twist is that validation, when it fails, should return not only a list of error messages; it should also retain that part of the input that was valid.

In this article, I'll show how I did a quick proof of concept in Haskell.

Data definitions #

You can't use the regular Either instance of Applicative for validation because it short-circuits on the first error. In other words, you can't collect multiple error messages, even if the input has multiple issues. Instead, you need a custom Applicative instance. You can easily write such an instance yourself, but there are a couple of libraries that already do this. For this prototype, I chose the validation package.

import Data.Bifunctor
import Data.Time
import Data.Semigroup
import Data.Validation

Apart from importing Data.Validation, I also need a few other imports for the proof of concept. All of them are well-known. I used no language extensions.

For the proof of concept, the input is a triple of a name, a date of birth, and an address:

data Input = Input {
  inputName :: Maybe String,
  inputDoB :: Maybe Day,
  inputAddress :: Maybe String }
  deriving (Eq, Show)

The goal is actually to parse (not validate) Input into a safer data type:

data ValidInput = ValidInput {
  validName :: String,
  validDoB :: Day,
  validAddress :: String }
  deriving (Eq, Show)

If parsing/validation fails, the output should report a collection of error messages and return the Input value with any valid data retained.

Looking for a Semigroup #

My hypothesis was that validation, even with that twist, can be implemented elegantly with an Applicative instance. The validation package defines its Validation data type such that it's an Applicative instance as long as its error type is a Semigroup instance:

Semigroup err => Applicative (Validation err)

The question is: which Semigroup can we use?

Since we need to return both a list of error messages and a modified Input value, it sounds like we'll need a product type of some sort. A tuple will do; something like (Input, [String]). Is that a Semigroup instance, though?

Tuples only form semigroups if both elements give rise to a semigroup:

(Semigroup a, Semigroup b) => Semigroup (a, b)

The second element of my candidate is [String], which is fine. Lists are Semigroup instances. But what about Input? Can we somehow combine two Input values into one? It's not entirely clear how we should do that, so that doesn't seem too promising.

What we need to do, however, is to take the original Input and modify it by (optionally) resetting one or more fields. In other words, a series of functions of the type Input -> Input. Aha! There's the semigroup we need: Endo Input.

So the Semigroup instance we need is (Endo Input, [String]), and the validation output should be of the type Validation (Endo Input, [String]) a.

Validators #

Cool, we can now implement the validation logic; a function for each field, starting with the name:

validateName :: Input -> Validation (Endo Input, [String]) String
validateName (Input (Just name) _ _) | length name > 3 = Success name
validateName (Input (Just _) _ _) =
  Failure (Endo $ \x -> x { inputName = Nothing }, ["no bob and toms allowed"])
validateName _ = Failure (mempty, ["name is required"])

This function reproduces the validation logic implied by the forum question that started it all. Notice, particularly, that when the name is too short, the endomorphism resets inputName to Nothing.

The date-of-birth validation function works the same way:

validateDoB :: Day -> Input -> Validation (Endo Input, [String]) Day
validateDoB now (Input _ (Just dob) _) | addGregorianYearsRollOver (-12) now < dob =
  Success dob
validateDoB _ (Input _ (Just _) _) =
  Failure (Endo $ \x -> x { inputDoB = Nothing }, ["get off my lawn"])
validateDoB _ _ = Failure (mempty, ["dob is required"])

Again, the validation logic is inferred from the forum question, although I found it better to keep the function pure by requiring a now argument.

The address validation is the simplest of the three validators:

validateAddress :: Monoid a => Input -> Validation (a, [String]) String
validateAddress (Input _ _ (Just a)) = Success a
validateAddress _ = Failure (mempty, ["add1 is required"])

This one's return type is actually more general than required, since I used mempty instead of Endo id. This means that it actually works for any Monoid a, which also includes Endo Input.

Composition #

All three functions return Validation (Endo Input, [String]), which has an Applicative instance. This means that we should be able to compose them together to get the behaviour we're looking for:

validateInput :: Day -> Input -> Either (Input, [String]) ValidInput
validateInput now args =
  toEither $
  first (first (`appEndo` args)) $
  ValidInput <$> validateName args <*> validateDoB now args <*> validateAddress args

That compiles, so it probably works.

Sanity check #

Still, it'd be prudent to check. Since this is only a proof of concept, I'm not going to set up a test suite. Instead, I'll just start GHCi for some ad-hoc testing:

λ> now <- localDay <$> zonedTimeToLocalTime <$> getZonedTime
λ> validateInput now $ Input Nothing Nothing Nothing
Left (Input {inputName = Nothing, inputDoB = Nothing, inputAddress = Nothing},
      ["name is required","dob is required","add1 is required"])
λ> validateInput now $ Input (Just "Bob") Nothing Nothing
Left (Input {inputName = Nothing, inputDoB = Nothing, inputAddress = Nothing},
      ["no bob and toms allowed","dob is required","add1 is required"])
λ> validateInput now $ Input (Just "Alice") Nothing Nothing
Left (Input {inputName = Just "Alice", inputDoB = Nothing, inputAddress = Nothing},
      ["dob is required","add1 is required"])
λ> validateInput now $ Input (Just "Alice") (Just $ fromGregorian 2002 10 12) Nothing
Left (Input {inputName = Just "Alice", inputDoB = Nothing, inputAddress = Nothing},
      ["get off my lawn","add1 is required"])
λ> validateInput now $ Input (Just "Alice") (Just $ fromGregorian 2012 4 21) Nothing
Left (Input {inputName = Just "Alice", inputDoB = Just 2012-04-21, inputAddress = Nothing},
      ["add1 is required"])
λ> validateInput now $ Input (Just "Alice") (Just $ fromGregorian 2012 4 21) (Just "x")
Right (ValidInput {validName = "Alice", validDoB = 2012-04-21, validAddress = "x"})

In order to make the output more readable, I've manually edited the GHCi session by adding line breaks to the output.

It looks like it's working like it's supposed to. Only the last line successfully parses the input and returns a Right value.

Conclusion #

Before I started this proof of concept, I had an inkling of the way this would go. Instead of making the prototype in F#, I found it more productive to do it in Haskell, since Haskell enables me to compose things together. I particularly appreciate how a composition of types like (Endo Input, [String]) is automatically a Semigroup instance. I don't have to do anything. That makes the language great for prototyping things like this.

Now that I've found the appropriate semigroup, I know how to convert the code to F#. That's in the next article.

Next: An F# demo of validation with partial data round trip.


Comments

Great work and excellent post. I just had a few clarification questions.

...But what about Input? Can we somehow combine two Input values into one? It's not entirely clear how we should do that, so that doesn't seem too promising.

What we need to do, however, is to take the original Input and modify it by (optionally) resetting one or more fields. In other words, a series of functions of the type Input -> Input. Aha! There's the semigroup we need: Endo Input.

How rhetorical are those questions? Whatever the case, I will take the bait.

Any product type forms a semigroup if all of its elements do. You explicitly stated this for tuples of length 2; it also holds for records such as Input. Each field on that record has type Maybe a for some a, so it suffices to select a semigroup involving Maybe a. There are a few different semigroups involving Maybe that have different functions.

I think the most common semigroup for Maybe a has the function that returns the first Just _ if one exists or else returns Nothing. Combining that with Nothing as the identity element gives the monoid that is typically associated with Maybe a (which I know by the name monoidal plus). Another monoid, and therefore a semigroup, is to return the last Just _ instead of the first.

Instead of having a preference for Just _, the function could have a preference for Nothing. As before, when both inputs are Just _, the output could be either of the inputs.

I think either of those last two semigroups will achieve the desired behavior in the problem at hand. Your code never replaces an instance of Just a with a different instance, so we don't need a preference for some input when they are both Just _.

In the end though, I think the semigroup you derived from Endo leads to simpler code.

At the end of the type signature for validateName / validateDoB / validateAddress, what does String / Day / String mean?

Why did you pass all three arguments into every parsing/validation function? I think it is a bit simpler to only pass in the needed argument. Maybe you thought this was good enough for prototype code.

Why did you use add1 in your error message instead of address? Was it only for prototype code to make the message a bit shorter?

2020-12-21 14:21 UTC

Tyson, thank you for writing. The semigroup you suggest, I take it, would look something like this:

newtype Perhaps a = Perhaps { runPerhaps :: Maybe a } deriving (Eq, Show)
 
instance Semigroup (Perhaps a) where
  Perhaps Nothing <> _ = Perhaps Nothing
  _ <> Perhaps Nothing = Perhaps Nothing
  Perhaps (Just x) <> _ = Perhaps (Just x)

That might work, but it's an atypical semigroup. I think that it's lawful - at least, I can't come up with a counterexample against associativity. It seems reminiscent of Boolean and (the All monoid), but it isn't a monoid, as far as I can tell.

Granted, a Monoid constraint isn't required to make the validation code work, but following the principle of least surprise, I still think that picking a well-known semigroup such as Endo is preferable.

Regarding your second question, the type signature of e.g. validateName is:

validateName :: Input -> Validation (Endo Input, [String]) String

Like Either, Validation has two type arguments: err and a; it's defined as data Validation err a. In the above function type, the return value is a Validation value where the err type is (Endo Input, [String]) and a is String.

All three validation functions share a common err type: (Endo Input, [String]). On the other hand, they return various a types: String, Day, and String, respectively.

Regarding your third question, I could also have defined the functions so that they would only have taken the values they'd need to validate. That would better fit Postel's law, so I should probably have done that...

As for the last question, I was just following the 'spec' implied by the original forum question.

2020-12-22 15:05 UTC

Validation, a solved problem?

Monday, 14 December 2020 08:28:00 UTC

A validation problem with a twist.

Until recently, I thought that data validation was a solved problem: Use an applicative functor. I then encountered a forum question that for a few minutes shook my faith.

After brief consideration, though, I realised that all is good. Validation, even with a twist, is successfully modelled with an applicative functor. Faith in computer science restored.

The twist #

Usually, when you see a demo of applicative validation, the result of validating is one of two: either a parsed result, or a collection of error messages.

λ> validateReservation $ ReservationJson "2017-06-30 19:00:00+02:00" 4 "Jane Doe" "j@example.com"
Validation (Right (Reservation {
    reservationDate = 2017-06-30 19:00:00 +0200,
    reservationQuantity = 4,
    reservationName = "Jane Doe",
    reservationEmail = "j@example.com"}))

λ> validateReservation $ ReservationJson "2017/14/12 6pm" 4.1 "Jane Doe" "jane.example.com"
Validation (Left ["Not a date.","Not a positive integer.","Not an email address."])

λ> validateReservation $ ReservationJson "2017-06-30 19:00:00+02:00" (-3) "Jane Doe" "j@example.com"
Validation (Left ["Not a positive integer."])

(Example from Applicative validation.)

What if, instead, you're displaying an input form? When users enter data, you want to validate it. Imagine, for the rest of this short series of articles that the input form has three fields: name, date of birth, and address. Each piece of data has associated validation rules.

If you enter a valid name, but an invalid date of birth, you want to clear the input form's date of birth, but not the name. It's such a bother for a user having to retype valid data just because a single field turned out to be invalid.

Imagine, for example, that you want to bind the form to a data model like this F# record type:

type Input = { Name : string option; DoB : DateTime option; Address : string option}

Each of these three fields is optional. We'd like validation to work in the following way: If validation fails, the function should return both a list of error messages, and also the Input object, with valid data retained, but invalid data cleared.

One of the rules implied in the forum question is that names must be more than three characters long. Thus, input like this is invalid:

{ Name = Some "Tom"; DoB = Some eightYearsAgo; Address = Some "x" }

Both the DoB and Address fields, however, are valid, so, along with error messages, we'd like our validation function to return a partially wiped Input value:

{ Name = None; DoB = Some eightYearsAgo; Address = Some "x" }

Notice that both DoB and Address field values are retained, while Name has been reset.

A final requirement: If validation succeeds, the return value should be a parsed value that captures that validation took place:

type ValidInput = { Name : string; DoB : DateTime; Address : string }

That requirement is straightforward. That's how you'd usually implement applicative validation. It's the partial data round-trip that seems to throw a spanner in the works.

How should we model such validation?

Theory, applied #

There's a subculture of functional programming that draws heavily on category theory. This is most prevalent in Haskell. I've been studying category theory in an attempt to understand what it's all about. I even wrote a substantial article series about some design patterns and how they relate to theory.

One thing I learned after I'd named that article series is that most of the useful theoretical concepts come from abstract algebra, with the possible exception of monads.

People often ask me: does all that theory have any practical use?

Yes, it does, as it turns out. It did, for example, enable me to identify a solution to the above twist in five to ten minutes.

It's a discussion that I often have, particularly with the always friendly F# community. Do you have to understand functors, monads, etcetera to be a productive F# developer?

To anyone who wants to learn F# I'd respond: Don't worry about that at the gate. Find a good learning resource and dive right in. It's a friendly language that you can learn gradually.

Sooner or later, though, you'll run into knotty problems that you may struggle to address. I've seen this enough times that it looks like a pattern. The present forum question is just one example. A beginner or intermediate F# programmer will typically attempt to solve the problem in an ad-hoc manner that may or may not be easy to maintain. (The solution proposed by the author of that forum question doesn't, by the way, look half bad.)

To be clear: there's nothing wrong with being a beginner. I was once a beginner programmer, and I'm still a beginner in multiple ways. What I'm trying to argue here is that there is value in knowing theory. With my knowledge of abstract algebra and how it applies to functional programming, it didn't take me long to identify a solution. I'll get to that later.

Before I outline a solution, I'd like to round off the discussion of applied theory. That question about monads comes up a lot. Do I have to understand functors, monads, etcetera to be a good F# developer?

I think it's like asking Do I have to understand polymorphism, design patterns, the SOLID principles, etcetera to be a good object-oriented programmer?

Those are typically not the first topics people are taught about OOD. I would assert, however, that understanding such topics does help. They may not be required to get started with OOP, but knowing them makes you a better programmer.

I think the same is true for functional programming. It's just a different skill set that makes you better in that paradigm.

Solution outline #

When you know a bit of theory, you may know that validation can be implemented with an applicative sum type like Either (AKA Result), with one extra requirement.

Either has two dimensions, left or right (success or failure, ok or error, etcetera). The applicative nature of it already supplies a way to compose the successes, but what if there's more than one validation error?

In my article about applicative validation I showed how to collect multiple error messages in a list. Lists, however, form a monoid, so I typed the validation API to be that flexible.
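
To recap the basic idea, here's a minimal F# sketch (not the code from that article) of combining two validation results so that error messages accumulate in a list instead of short-circuiting on the first failure:

// Combine two validation results; error messages accumulate in a list.
let merge x y =
    match x, y with
    | Ok x', Ok y'       -> Ok (x', y')
    | Error e, Ok _      -> Error e
    | Ok _, Error e      -> Error e
    | Error e1, Error e2 -> Error (e1 @ e2)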

In fact, all you need is a semigroup. When I wrote the article on applicative validation, Haskell's Semigroup type class wasn't yet a supertype of Monoid, and I (perhaps without sufficient contemplation) just went with Monoid.

What remains is that applicative validation can collect errors for any semigroup of errors. All we need to solve the above validation problem with a twist, then, is to identify a suitable semigroup.

I don't want to give away everything in this article, so I'm going to leave you with this cliffhanger. Which semigroup solves the problem? Read on.

As is often my modus operandi, I first did a proof of concept in Haskell. With its type classes and higher-kinded polymorphism, it's much faster to prototype solutions than even in F#. In the next article, I'll describe how that turned out.

After the Haskell article, I'll show how it translates to F#. You can skip the Haskell article if you like.

Conclusion #

I still think that validation is a solved problem. It's always interesting when such a belief for a moment is challenged, and satisfying to discover that it still holds.

This is, after all, not proof of anything. Perhaps tomorrow, someone will throw another curve ball that I can't catch. If that happens, I'll have to update my beliefs. Until then, I'll consider validation a solved problem.

Next: A Haskell proof of concept of validation with partial data round trip.


Branching tests

Monday, 07 December 2020 06:25:00 UTC

Is it ever okay to branch and loop in a unit test?

When I coach development organisations about unit testing and test-driven development, there's often a sizeable group of developers who don't see the value of unit testing. Some of the arguments they typically use are worth considering.

A common complaint is that it's difficult to see the wisdom in writing code to prevent defects in code. That's not an unreasonable objection.

We have scant scientific knowledge about software engineering, but the little we know suggests that the number of defects is proportional to lines of code. The more lines of code, the more defects.

If that's true, adding more code - even when it's test code - seems like a bad idea.

Reasons to trust test code #

First, we should consider the possibility that the correlation between lines of code and defects doesn't mean that defects are evenly distributed. As Adam Tornhill argues in Your Code as a Crime Scene, defects tend to cluster in hotspots.

You can have a large proportion of your code base which is, for all intents and purposes, bug-free, and hotspots where defects keep spawning.

If this is true, adding test code isn't a problem if you can keep it bug-free.

That, however, sounds like a chicken-and-the-egg kind of problem. How can you know that test code is bug-free without tests?

I've previously answered that question. In short, you can trust a test for two reasons:

  • You've seen it fail (haven't you?)
  • It's simple

I usually think of the simplicity criterion as a limit on cyclomatic complexity: it should be 1. This means no branching and no loops in your tests.

That's what this article is actually about.

What's in a name? #

I was working with an online restaurant reservation system (example code), and had written this test:

[Theory]
[InlineData("2023-11-24 19:00""juliad@example.net""Julia Domna", 5)]
[InlineData("2024-02-13 18:15""x@example.com""Xenia Ng", 9)]
public async Task PostValidReservationWhenDatabaseIsEmpty(
    string at,
    string email,
    string name,
    int quantity)
{
    var db = new FakeDatabase();
    var sut = new ReservationsController(db);
 
    var dto = new ReservationDto
    {
        At = at,
        Email = email,
        Name = name,
        Quantity = quantity
    };
    await sut.Post(dto);
 
    var expected = new Reservation(
        DateTime.Parse(dto.At, CultureInfo.InvariantCulture),
        dto.Email,
        dto.Name,
        dto.Quantity);
    Assert.Contains(expected, db);
}

This is a state-based test that verifies that a valid reservation makes it to the database. The test has a cyclomatic complexity of 1, and I've seen it fail, so all is good. (It may, in fact, contain a future maintenance problem, but that's a topic for another article.)

The code shown here is part of the sample code base that accompanies my book Code That Fits in Your Head.

What constitutes a valid reservation? At the very least, we should demand that At is a valid date and time, and that Quantity is a positive number. The restaurant would like to be able to email a confirmation to the user, so an email address is also required. Email addresses are notoriously difficult to validate, so we'll just require that the string isn't null.

What about the Name? I thought about this a bit and decided that, according to Postel's law, the system should accept null names. The name is only a convenience; the system doesn't need it, it's just there so that when you arrive at the restaurant, you can say "I have a reservation for Julia" instead of giving an email address to the maître d'hôtel. But then, if you didn't supply a name when you made the reservation, you can always state your email address when you arrive. To summarise, the name is just a convenience, not a requirement.

This decision meant that I ought to write a test case with a null name.

That turned out to present a problem. I'd defined the Reservation class so that it didn't accept null arguments, and I think that's the appropriate design. Null is just evil and has no place in my domain models.

That's not a problem in itself. In this case, I think it's acceptable to convert a null name to the empty string.

Copy and paste #

Allow me to summarise. If you consider the above unit test, I needed a third test case with a null name. In that case, expected should be a Reservation value with the name "". Not null, but "".

As far as I can tell, you can't easily express that in PostValidReservationWhenDatabaseIsEmpty without increasing its cyclomatic complexity. Based on the above introduction, that seems like a no-no.

What's the alternative? Should I copy the test and adjust the single line of code that differs? If I did, it would look like this:

[Theory]
[InlineData("2023-08-23 16:55""kite@example.edu"null, 2)]
public async Task PostValidReservationWithNullNameWhenDatabaseIsEmpty(
    string at,
    string email,
    string name,
    int quantity)
{
    var db = new FakeDatabase();
    var sut = new ReservationsController(db);
 
    var dto = new ReservationDto
    {
        At = at,
        Email = email,
        Name = name,
        Quantity = quantity
    };
    await sut.Post(dto);
 
    var expected = new Reservation(
        DateTime.Parse(dto.At, CultureInfo.InvariantCulture),
        dto.Email,
        "",
        dto.Quantity);
    Assert.Contains(expected, db);
}

Apart from the values in the [InlineData] attribute and the method name, the only difference from PostValidReservationWhenDatabaseIsEmpty is that expected has a hard-coded name of "".

This is not acceptable.

There's a common misconception that the DRY principle doesn't apply to unit tests. I don't see why this should be true. The DRY principle exists because copy-and-paste code is difficult to maintain. Unit test code is also code that you have to maintain. All the rules about writing maintainable code also apply to unit test code.

Branching in test #

What's the alternative? One option (that shouldn't be easily dismissed) is to introduce a Test Helper to perform the conversion from a nullable name to a non-nullable name. Such a helper would have a cyclomatic complexity of 2, but could be unit tested in isolation. It might even turn out that it'd be useful in the production code.
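
For illustration, such a helper (and its isolated test) might look like the following sketch. The Names class is hypothetical and not part of the actual code base:

internal static class Names
{
    // Cyclomatic complexity of 2, but trivial to test in isolation.
    internal static string OrEmpty(string name)
    {
        if (name is null)
            return "";
        return name;
    }
}

[Theory]
[InlineData("Julia Domna", "Julia Domna")]
[InlineData(null, "")]
public void NamesOrEmptyReturnsNonNullName(string name, string expected)
{
    var actual = Names.OrEmpty(name);

    Assert.Equal(expected, actual);
}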

Still, that seems like overkill, so I instead made the taboo move and added branching logic to the existing test to see how it'd look:

[Theory]
[InlineData("2023-11-24 19:00""juliad@example.net""Julia Domna", 5)]
[InlineData("2024-02-13 18:15""x@example.com""Xenia Ng", 9)]
[InlineData("2023-08-23 16:55""kite@example.edu"null, 2)]
public async Task PostValidReservationWhenDatabaseIsEmpty(
    string at,
    string email,
    string name,
    int quantity)
{
    var db = new FakeDatabase();
    var sut = new ReservationsController(db);
 
    var dto = new ReservationDto
    {
        At = at,
        Email = email,
        Name = name,
        Quantity = quantity
    };
    await sut.Post(dto);
 
    var expected = new Reservation(
        DateTime.Parse(dto.At, CultureInfo.InvariantCulture),
        dto.Email,
        dto.Name ?? "",
        dto.Quantity);
    Assert.Contains(expected, db);
}

Notice that the expected name is now computed as dto.Name ?? "". Perhaps you think about branching instructions as relating exclusively to keywords such as if or switch, but the ?? operator is also a branching instruction. The test now has a cyclomatic complexity of 2.
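
If you doubt that ?? branches, consider that the expression is shorthand for an explicit if statement. The following is equivalent, and shown only for illustration:

string expectedName;
if (dto.Name != null)
    expectedName = dto.Name;
else
    expectedName = "";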

Is that okay?

To branch or not to branch #

I think that in this case, it's okay to slightly increase the cyclomatic complexity of the test. It's not something I just pull out of my hat, though. I think it's possible to adjust the above heuristics to embrace this sort of variation.

To be clear, I consider this an advanced practice. If you're just getting started with unit testing, try to keep tests simple. Keep the cyclomatic complexity at 1.

Had I been in the above situation a couple of years ago, I might not have considered this option. About a year ago, though, I watched John Hughes' presentation Building on developers' intuitions to create effective property-based tests. When he, about 15 minutes in, wrote a test with a branching instruction, I remember becoming quite uncomfortable. This lasted for a while until I understood where he was going with it. It's truly an inspiring and illuminating talk; I highly recommend it.

How it relates to the problem presented here is through coverage. While the PostValidReservationWhenDatabaseIsEmpty test now has a cyclomatic complexity of 2, it's a parametrised test with three test cases. Two of these cover one branch, and the third covers the other.

What's more important is the process that produced the test. I added one test case at a time, and for each case, I saw the test fail.

Specifically, when I added the third test case with the null name, I first added the branching expression dto.Name ?? "" and ran the two existing tests. They still both passed, which bolstered my belief that they both exercised the left branch of that expression. I then added the third case and saw that it (and only it) failed. This supported my belief that the third case exercised the right branch of ??.

Branching in unit tests isn't something I do lightly. I still believe that it could make the test more vulnerable to future changes. I'm particularly worried about making a future change that might shift one or more of these test cases into false negatives in the form of tautological assertions.

Conclusion #

As you can tell, when I feel that I'm moving onto thin ice, I move deliberately. If there's one thing I've learned from decades of professional programming it's that my brain loves jumping to conclusions. Moving slowly and deliberately is my attempt at countering this tendency. I believe that it enables me to go faster in the long run.

I don't think that branching in unit tests should be common, but I believe that it may be occasionally valid. The key, I think, is to guarantee that each branch in the test is covered by a test case. The implication is that there must be at least as many test cases as the cyclomatic complexity. In other words, the test must be a parametrised test.


Comments

Hi Mark, I guess there is implicit cyclomatic complexity in the testing framework itself (For example, it loops through the InlineData records). That feels fine though, does this somehow have less cost than cyclomatic complexity in the test code itself? I guess, as you mentioned, it's acceptable because the alternative is violation of DRY.

With this in mind, I wonder how you feel about adding an expectedName parameter to the InlineData attributes, instead of the conditional in the test code? Maybe it's harder to read though when the test data includes input and output.

2020-12-07 08:36 UTC

James, thank you for writing. I consider the cyclomatic complexity of a method call to be 1, and Visual Studio code metrics agree with me. Whatever happens in a framework should, in my opinion, likewise be considered as encapsulated abstraction that's none of our business.

Adding an expectedName parameter to the method is definitely an option. I sometimes do that, and I could have done that here, too. In this situation, I think it's a toss-up. It'd make it harder for a later reader of the code to parse the test cases, but would simplify the test code itself, so that alternative comes with both advantages and disadvantages.

2020-12-08 11:02 UTC
Romain Deneau @DeneauRomain #

Hi Mark. To build up on the additional expectedName parameter, instead of keeping a single test with the 3 cases, the last being an edge case, I prefer to introduce a specific test for the last case.

Then, to remove the duplication, we can extract a common method which will take this additional expectedName parameter:

		[Theory]
		[InlineData("2023-11-24 19:00""juliad@example.net""Julia Domna", 5)]
		[InlineData("2024-02-13 18:15""x@example.com",      "Xenia Ng",    9)]
		public async Task PostValidReservationWithNameWhenDatabaseIsEmpty
		           (string at,          string email,         string name,   int quantity) =>
			PostValidReservationWhenDatabaseIsEmpty(at, email, name, expectedName: name, quantity);

		[Fact]
		public async Task PostValidReservationWithoutNameWhenDatabaseIsEmpty() =>
			PostValidReservationWhenDatabaseIsEmpty(
				at          : "2023-11-24 19:00",
				email       : "juliad@example.net",
				name        : null,
				expectedName: "",
				quantity    : 5);

		private async Task PostValidReservationWhenDatabaseIsEmpty(
			    string at,
			    string email,
			    string name,
			    string expectedName,
			    int    quantity)
		{
		    var db = new FakeDatabase();
		    var sut = new ReservationsController(db);
		 
		    var dto = new ReservationDto
		    {
		        At       = at,
		        Email    = email,
		        Name     = name,
		        Quantity = quantity,
		    };
		    await sut.Post(dto);
		 
		    var expected = new Reservation(
		        DateTime.Parse(dto.At, CultureInfo.InvariantCulture),
		        dto.Email,
		        expectedName, // /!\ Not dto.Name
		        dto.Quantity);
		    Assert.Contains(expected, db);
		}

				

2020-12-09 8:44 UTC

Romain, thank you for writing. There are, indeed, many ways to skin that cat. If you're comfortable with distributing a test over more than one method, I instead prefer to use another data source for the [Theory] attribute:

private class PostValidReservationWhenDatabaseIsEmptyTestCases :
    TheoryData<ReservationDto, Reservation>
{
    public PostValidReservationWhenDatabaseIsEmptyTestCases()
    {
        AddWithName(new DateTime(2023, 11, 24, 19, 0, 0), "juliad@example.net", "Julia Domna", 5);
        AddWithName(new DateTime(2024, 2, 13, 18, 15, 0), "x@example.com", "Xenia Ng", 9);
        AddWithoutName(new DateTime(2023, 8, 23, 16, 55, 0), "kite@example.edu", 2);
    }
 
    private void AddWithName(DateTime at, string email, string name, int quantity)
    {
        Add(new ReservationDto
            {
                At = at.ToString("O"),
                Email = email,
                Name = name,
                Quantity = quantity
            },
            new Reservation(at, email, name, quantity));
    }
 
    private void AddWithoutName(DateTime at, string email, int quantity)
    {
        Add(new ReservationDto { At = at.ToString("O"), Email = email, Quantity = quantity },
            new Reservation(at, email, "", quantity));
    }
}
 
 
[Theory, ClassData(typeof(PostValidReservationWhenDatabaseIsEmptyTestCases))]
public async Task PostValidReservationWhenDatabaseIsEmpty(
    ReservationDto dto, Reservation expected)
{
    var db = new FakeDatabase();
    var sut = new ReservationsController(db);
 
    await sut.Post(dto);
 
    Assert.Contains(expected, db);
}

Whether you prefer one over the other is, I think, subjective. I like my alternative, using a [ClassData] source, better, because I find it a bit more principled and 'pattern-based', if you will. I also like how small the actual test method becomes.

Your solution, on the other hand, is more portable, in the sense that you could also apply it in a testing framework that doesn't have the sort of capability that xUnit.net has. That's a definite benefit with your suggestion.

2020-12-10 20:05 UTC

Name by role

Monday, 30 November 2020 06:31:00 UTC

Consider naming variables according to their role, instead of their type.

My recent article on good names might leave you with the impression that I consider good names unimportant. Not at all. That article was an attempt at delineating the limits of naming. Good names aren't the panacea some people seem to imply, but they're still important.

As the cliché goes, naming is one of the hardest problems in software development. Perhaps it's hard because you have to do it so frequently. Every time you create a variable, you have to name it. It's also an opportunity to add clarity to a code base.

A common naming strategy is to name objects after their type:

Reservation? reservation = dto.Validate(id);

or:

Restaurant? restaurant = await RestaurantDatabase.GetRestaurant(restaurantId);

There's nothing inherently wrong with a naming scheme like this. It often makes sense. The reservation variable is a Reservation object, and there's not that much more to say about it. The same goes for the restaurant object.

In some contexts, however, objects play specific roles. This is particularly prevalent with primitive types, but can happen to any type of object. It may help the reader if you name the variables according to such roles.

In this article, I'll show you several examples. I hope these examples are so plentiful and varied that they can inspire you to come up with good names. Most of the code shown here is part of the sample code base that accompanies my book Code That Fits in Your Head.

A variable introduced only to be named #

In a recent article I showed this code snippet:

private bool SignatureIsValid(string candidate, ActionExecutingContext context)
{
    var sig = context.HttpContext.Request.Query["sig"];
    var receivedSignature = Convert.FromBase64String(sig.ToString());
 
    using var hmac = new HMACSHA256(urlSigningKey);
    var computedSignature = hmac.ComputeHash(Encoding.ASCII.GetBytes(candidate));
 
    var signaturesMatch = computedSignature.SequenceEqual(receivedSignature);
    return signaturesMatch;
}

Did you wonder about the signaturesMatch variable? Why didn't I just return the result of SequenceEqual, like the following?

private bool SignatureIsValid(string candidate, ActionExecutingContext context)
{
    var sig = context.HttpContext.Request.Query["sig"];
    var receivedSignature = Convert.FromBase64String(sig.ToString());
 
    using var hmac = new HMACSHA256(urlSigningKey);
    var computedSignature = hmac.ComputeHash(Encoding.ASCII.GetBytes(candidate));
 
    return computedSignature.SequenceEqual(receivedSignature);
}

Visual Studio even offers this as a possible refactoring that it'll do for you.

The inclusion of the signaturesMatch variable was a conscious decision of mine. I felt that directly returning the result of SequenceEqual was a bit too implicit. It forces readers to make the inference themselves: Ah, the two arrays contain the same sequence of bytes; that must mean that the signatures match!

Instead of asking readers to do that work themselves, I decided to do it for them. I hope that it improves readability. It doesn't change the behaviour of the code one bit.

Test roles #

When it comes to unit testing, there's plenty of inconsistent terminology. One man's mock object is another woman's test double. Most of the jargon isn't even internally consistent. Do yourself a favour and adopt a consistent pattern language. I use the one presented in xUnit Test Patterns.

For instance, the thing that you're testing is the System Under Test (SUT). This can be a pure function or a static method, but when it's an object, you're going to create a variable. Consider naming it sut. A typical test also defines other variables. Naming one of them sut clearly identifies which of them is the SUT. It also protects the tests against the class in question being renamed.

[Fact]
public void ScheduleSingleReservationCommunalTable()
{
    var table = Table.Communal(12);
    var sut = new MaitreD(
        TimeSpan.FromHours(18),
        TimeSpan.FromHours(21),
        TimeSpan.FromHours(6),
        table);
 
    var r = Some.Reservation;
    var actual = sut.Schedule(new[] { r });
 
    var expected = new[] { new TimeSlot(r.At, table.Reserve(r)) };
    Assert.Equal(expected, actual);
}

The above test follows my AAA formatting heuristic. In all, it defines five variables, but there can be little doubt about which one is the sut.

The table and r variables follow the mainstream practice of naming variables after their type. They play no special role, so that's okay. You may balk at such a short variable name as r, and that's okay. In my defence, I follow Clean Code's N5 heuristic for long and short scopes. A variable name like r is fine when it only spans three lines of code (four, if you also count the blank line).

Consider also using the variable names expected and actual, as in the above example. In many unit testing frameworks, those are the argument names for the assertion. For instance, in xUnit.net (which the above test uses) the Assert.Equal overloads are defined as Equal<T>(T expected, T actual). Using these names for variables makes the roles clearer, I think.

The other #

The above assertion relies on structural equality. The TimeSlot class is immutable, so it can safely override Equals (and GetHashCode) to implement structural equality:

public override bool Equals(object? obj)
{
    return obj is TimeSlot other &&
           At == other.At &&
           Tables.SequenceEqual(other.Tables);
}
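
The matching GetHashCode override isn't shown here. A minimal sketch, which may differ from the actual implementation, could combine the same values that Equals compares:

public override int GetHashCode()
{
    // System.HashCode; combines the same values that Equals compares.
    var hash = new HashCode();
    hash.Add(At);
    foreach (var table in Tables)
        hash.Add(table);
    return hash.ToHashCode();
}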

I usually call the downcast variable other because, from the perspective of the instance, it's the other object. I usually use that convention whenever an instance interacts with another object of the same type. Among other examples, this happens when you model objects as semigroups and monoids. The Angle struct, for example, defines this binary operation:

public Angle Add(Angle other)
{
    return new Angle(this.degrees + other.degrees);
}

Again, the method argument is in the role as the other object, so naming it other seems natural.

Here's another example from a restaurant reservation code base:

public bool Overlaps(Seating other)
{
    if (other is null)
        throw new ArgumentNullException(nameof(other));
 
    return Start < other.End && other.Start < End;
}

The Overlaps method is an instance method on the Seating class. Again, other seems natural.

Candidates #

The Overlaps method looks like a predicate, i.e. a function that returns a Boolean value. In the case of that method, other indicates the role of being the other object, but it also plays another role. It makes sense to me to call predicate input candidates. Typically, you have some input that you want to evaluate as either true or false. I think it makes sense to think of such a parameter as a 'truth candidate'. You can see one example of that in the above SignatureIsValid method.

There, the string parameter is a candidate for having a valid signature.

Here's another restaurant-related example:

public bool WillAccept(
    DateTime now,
    IEnumerable<Reservation> existingReservations,
    Reservation candidate)
{
    if (existingReservations is null)
        throw new ArgumentNullException(nameof(existingReservations));
    if (candidate is null)
        throw new ArgumentNullException(nameof(candidate));
    if (candidate.At < now)
        return false;
    if (IsOutsideOfOpeningHours(candidate))
        return false;
 
    var seating = new Seating(SeatingDuration, candidate.At);
    var relevantReservations =
        existingReservations.Where(seating.Overlaps);
    var availableTables = Allocate(relevantReservations);
    return availableTables.Any(t => t.Fits(candidate.Quantity));
}

Here, the reservation in question is actually not yet a reservation. It might be rejected, so it's a candidate reservation.

You can also use that name in TryParse methods, as shown in this article.

Data Transfer Objects #

Another name that I like to use is dto for Data Transfer Objects (DTOs). The benefit here is that as long as dto is unambiguous in context, it makes it easier to distinguish between a DTO and the domain model you might want to turn it into:

[HttpPost("restaurants/{restaurantId}/reservations")]
public async Task<ActionResult> Post(int restaurantId, ReservationDto dto)
{
    if (dto is null)
        throw new ArgumentNullException(nameof(dto));
 
    var id = dto.ParseId() ?? Guid.NewGuid();
    Reservation? reservation = dto.Validate(id);
    if (reservation is null)
        return new BadRequestResult();
 
    var restaurant = await RestaurantDatabase.GetRestaurant(restaurantId);
    if (restaurant is null)
        return new NotFoundResult();
 
    return await TryCreate(restaurant, reservation);
}

By naming the input parameter dto, I keep the name reservation free for the domain object, which ought to be the more important object of the two.

"A Data Transfer Object is one of those objects our mothers told us never to write."

I could have named the input parameter reservationDto instead of dto, but that would diminish the 'mental distance' between reservationDto and reservation. I like to keep that distance, so that the roles are more explicit.

Time #

You often need to make decisions based on the current time or date. In .NET the return value from DateTime.Now is a DateTime value. Typical variable names are dateTime, date, time, or dt, but why not call it now?

private async Task<ActionResult> TryCreate(Restaurant restaurant, Reservation reservation)
{
    using var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled);
 
    var reservations = await Repository.ReadReservations(restaurant.Id, reservation.At);
    var now = Clock.GetCurrentDateTime();
    if (!restaurant.MaitreD.WillAccept(now, reservations, reservation))
        return NoTables500InternalServerError();
 
    await Repository.Create(restaurant.Id, reservation).ConfigureAwait(false);
 
    scope.Complete();
 
    return Reservation201Created(restaurant.Id, reservation);
}

This is the TryCreate method called by the above Post method. Here, DateTime.Now is hidden behind Clock.GetCurrentDateTime() in order to make execution repeatable, but the idea remains: the variable represents the current time or date, or, with a bit of good will, now.

Notice that the WillAccept method (shown above) also uses now as a parameter name. That value's role is to represent now as a concept.

When working with time, I also sometimes use the variable names before and after. This is mostly useful in integration tests:

[Fact]
public async Task GetCurrentYear()
{
    using var api = new LegacyApi();
 
    var before = DateTime.Now;
    var response = await api.GetCurrentYear();
    var after = DateTime.Now;
 
    response.EnsureSuccessStatusCode();
    var actual = await response.ParseJsonContent<CalendarDto>();
    AssertOneOf(before.Year, after.Year, actual.Year);
    Assert.Null(actual.Month);
    Assert.Null(actual.Day);
    AssertLinks(actual);
}

While you can inject something like a Clock dependency in order to make your SUT deterministic, in integration tests you might want to see behaviour when using the system clock. You can often verify such behaviour by surrounding the test's Act phase with two calls to DateTime.Now. This gives you the time before and after the test exercised the SUT.

When you do that, however, be careful with the assertions. If such a test runs at midnight, before and after might be two different dates. If it runs on midnight December 31, it might actually be two different years! That's the reason that the test passes as long as the actual.Year is either of before.Year and after.Year.
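
The AssertOneOf helper isn't shown above. A minimal sketch of what it might look like (the actual helper in the code base may differ) is:

private static void AssertOneOf(int expected1, int expected2, int? actual)
{
    Assert.True(
        expected1 == actual || expected2 == actual,
        $"Expected {expected1} or {expected2}, but was {actual}.");
}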

Invalid values #

While integration tests often test happy paths, unit tests should also exercise error paths. What happens when you supply invalid input to a method? When you write such tests, you can identify the invalid values by naming the variables or parameters accordingly:

[Theory]
[InlineData(null)]
[InlineData("")]
[InlineData("bas")]
public async Task PutInvalidId(string invalidId)
{
    var db = new FakeDatabase();
    var sut = new ReservationsController(
        new SystemClock(),
        new InMemoryRestaurantDatabase(Some.Restaurant),
        db);
 
    var dummyDto = new ReservationDto
    {
        At = "2024-06-25 18:19",
        Email = "colera@example.com",
        Name = "Cole Aera",
        Quantity = 2
    };
    var actual = await sut.Put(invalidId, dummyDto);
 
    Assert.IsAssignableFrom<NotFoundResult>(actual);
}

Here, the invalid input represents an ID. To indicate that, I called the parameter invalidId.

The system under test is the Put method, which takes two arguments:

public Task<ActionResult> Put(string id, ReservationDto dto)

When testing an error path, it's important to keep other arguments well-behaved. In this example, I want to make sure that it's the invalidId that causes the NotFoundResult result. Thus, the dto argument should be as well-behaved as possible, so that it isn't going to be the source of divergence.

Apart from being well-behaved, that object plays no role in the test. It just needs to be there to make the code compile. xUnit Test Patterns calls such an object a Dummy Object, so I named the variable dummyDto as information to any reader familiar with that pattern language.

Derived class names #

The thrust of all of these examples is that you don't have to name variables after their types. You can extend this line of reasoning to class inheritance. Just because a base class is called Foo it doesn't mean that you have to call a derived class SomethingFoo.

This is something of which I have to remind myself. For example, to support integration testing with ASP.NET you'll need a WebApplicationFactory<TEntryPoint>. To override the default DI Container configuration, you'll have to derive from this class and override its ConfigureWebHost method. In an example I've previously published I didn't spend much time thinking about the class name, so RestaurantApiFactory it was.
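
For illustration, such a derived class might look like the following sketch. The service registrations shown here are assumptions made for the sake of the example, not the actual implementation:

public sealed class RestaurantApiFactory : WebApplicationFactory<Startup>
{
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        if (builder is null)
            throw new ArgumentNullException(nameof(builder));

        builder.ConfigureServices(services =>
        {
            // Replace the real data access with an in-memory Fake for testing.
            // RemoveAll lives in Microsoft.Extensions.DependencyInjection.Extensions.
            services.RemoveAll<IReservationsRepository>();
            services.AddSingleton<IReservationsRepository>(new FakeDatabase());
        });
    }
}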

At first, I named the variables of this type factory, or something equally devoid of information. That bothered me, so I instead tried service, which I felt was an improvement, but still too vapid. I then adopted api as a variable name, but then realised that that also suggested a better class name. So currently, this defines my self-hosting API:

public sealed class SelfHostedApi : WebApplicationFactory<Startup>

Here's how I use it:

[Fact]
public async Task ReserveTableAtNono()
{
    using var api = new SelfHostedApi();
    var client = api.CreateClient();
    var dto = Some.Reservation.ToDto();
    dto.Quantity = 6;
 
    var response = await client.PostReservation("Nono", dto);
 
    var at = Some.Reservation.At;
    await AssertRemainingCapacity(client, at, "Nono", 4);
    await AssertRemainingCapacity(client, at, "Hipgnosta", 10);
}

The variable is just called api, but the reader can tell from the initialisation that this is an instance of the SelfHostedApi class. I like how that communicates that this is an integration test that uses a self-hosted API. It literally says that.

This test also uses the dto naming convention. Additionally, you may take note of the variable and property called at. That's another name for a date and time. I struggled with naming this value, until Karsten Strøbæk suggested that I use the simple word at: reservation.At indicates the date and time of the reservation without being encumbered by awkward details about date and time. Should we call it date? time? dateTime? No, just call it at. I find it elegant.

Conclusion #

Sometimes, a Reservation object is just a reservation, and that's okay. At other times, it's the actual value, or the expected value. If it represents an invalid reservation in a test case, it makes sense to call the variable invalidReservation.

Giving variables descriptive names improves code quality. You don't have to write comments as apologies for poor readability if a better name communicates what the comment would have said.

Consider naming variables (and classes) for the roles they play, rather than their types.

On the other hand, when variable names are in the way, consider point-free code.


Comments

Excellent name suggestions. Thanks for sharing them :)

I usually call the downcast variable other because, from the perspective of the instance, it's the other object. I usually use that convention whenever an instance interacts with another object of the same type.

The name other is good, but I prefer the name that, because I think it is a better antonym of this (the keyword for the current instance) and because it has the same number of letters as this.

It makes sense to me to call predicate input candidates. Typically, you have some input that you want to evaluate as either true or false. I think it makes sense to think of such a parameter as a 'truth candidate'. You can see one example of that in the above SignatureIsValid method.

...

Here, the reservation in question is actually not yet a reservation. It might be rejected, so it's a candidate reservation.

I typically try to avoid turning some input into either true or false. In particular, I find it confusing for the syntax to say that some instance is a Reservation while the semantics says that it "is actually not yet a reservation". I think of this as an example of primitive obsession. Strictly speaking, I think "Primitive Obsession is when the code relies too much on primitives." (aka, on primitive types). In my mind though, I have generalized this to cover any code that relies too much on weaker types. Separate types Reservation and bool are weaker than separate types Reservation and CandidateReservation. I think Alexis King summarized this well with a blog post titled Parse, don’t validate.

And yet, my coworkers and I have engaged in friendly but serious debates for years about which of those two approaches is better. My argument, essentially as given above, is for separate types Reservation and CandidateReservation. The main counterargument is that these types are the same except for a database-generated ID, so just represent both using one type with an optional ID.

Have you thought about this before?

By naming the input parameter dto, I keep the name reservation free for the domain object, which ought to be the more important object of the two.

...

I could have named the input parameter reservationDto instead of dto, but that would diminish the 'mental distance' between reservationDto and reservation. I like to keep that distance, so that the roles are more explicit.

I prefer to emphasize the roles even more by using the names dto and model. We are in the implementation of the (Post) route for "restaurants/{restaurantId}/reservations", so I think it is clear from context that the dto and model are really a reservation DTO and a reservation model.

2020-11-30 20:46 UTC

Tyson, thank you for writing. Certainly, I didn't intent my article to dictate names. As you imply, there's room for both creativity and subjectivity, and that's fine. My suggestions were meant only for inspiration.

The main counterargument is that these types are the same except for a database-generated ID, so just represent both using one type with an optional ID.

Have you thought about this before?

Yes; I would think twice before deciding to model a domain type with a database-generated ID. A server-generated ID is an implementation detail that shouldn't escape the data access layer. If it does, you have a leaky abstraction at hand. Sooner or later, it's going to bite you.

The Reservation class in the above examples has this sole constructor:

public Reservation(Guid id, DateTime at, Email email, Name name, int quantity)

You can't create an instance without supplying an ID. On the other hand, any code can conjure up a GUID, so no server is required. At the type-level, there's no compelling reason to distinguish between a reservation and a candidate reservation.

Granted, you could define two types, Reservation and CandidateReservation, but they'd be isomorphic. In Haskell, you'd probably use a newtype for one of these types, and then you're back at Alexis King's blog.

2020-12-02 7:43 UTC
...naming is one of the hardest problems in software development. Perhaps it's hard because you have to do it so frequently.

Usually, doing things frequently means mastering them pretty quickly. Not so for naming. I guess there are multiple issues:

  1. Words are ambiguous. The key is, not to do naming in isolation, the context matters. For example, it's difficult to come up with a good name for a method when we don't have a good name for its class, the whole component, etc. Similar with Clean Code's N5: the meaning of a short variable is clear in a small scope, closed context.
  2. Good naming requires deep understanding of the domain. Developers are usually not good at the business they model. Sadly, it often means "necessary evil" for them.

Naming variables by their roles is a great idea!

Many thanks for another awesome post, I enjoyed reading it.

2020-12-11 09:11 UTC

Tomas, thank you for writing.

doing things frequently means mastering them pretty quickly. Not so for naming. I guess

Good point; I hadn't thought about that. I think that the reasons you list are valid.

As an additional observation, it may be that there's a connection to the notion of deliberate practice. As the catch-phrase about professional experience puts it, there's a difference between 20 years of experience and one year of experience repeated 20 times.

Doing a thing again and again generates little improvement if one does it by rote. One has to deliberately practice. In this case, it implies that a programmer should explicitly reflect on variable names, and consider more than one option.

I haven't met many software developers who do that.

2020-12-15 9:19 UTC

Good names are skin-deep

Monday, 23 November 2020 06:33:00 UTC

Good names are important, but insufficient, for code maintainability.

You should give the building blocks of your code bases descriptive names. It's easier to understand the purpose of a library, module, class, method, function, etcetera if the name contains a clue about the artefact's purpose. This is hardly controversial, and while naming is hard, most teams I visit agree that names are important.

Still, despite good intentions and efforts to name things well, code bases deteriorate into unmaintainable clutter.

Clearly, good names aren't enough.

Tenuousness of names #

A good name is tenuous. First, naming is hard, so while you may have spent some effort coming up with a good name, other people may misinterpret it. Because they originate from natural language, names are as ambiguous as language. (Terse operators, on the other hand...)

Another maintainability problem with names is that implementation may change over time, but the names remain constant. Granted, modern IDEs make it easy to rename methods, but developers rarely adjust names when they adjust behaviour. Even the best names may become misleading over time.

These weaknesses aren't the worst, though. In my experience, a more fundamental problem is that all it takes is one badly named 'wrapper object' before the information in a good name is lost.

Object with clear names enclosed in object with vague names.

In the figure, the inner object is well-named. It has a clear name and descriptive method names. All it takes before this information is lost, however, is another object with vague names to 'encapsulate' it.

An attempt at a descriptive method name #

Here's an example. Imagine an online restaurant reservation system. One of the features of this system is to take reservations and save them in the database.

A restaurant, however, is a finite resource. It can only accommodate a certain number of guests at the same time. Whenever the system receives a reservation request, it'll have to retrieve the existing reservations for that time and make a decision. Can it accept the reservation? Only if it can should it save the reservation.

How do you model such an interaction? How about a descriptive name? How about TrySave? Here's a possible implementation:

public async Task<bool> TrySave(Reservation reservation)
{
    if (reservation is null)
        throw new ArgumentNullException(nameof(reservation));
 
    var reservations = await Repository
        .ReadReservations(
            reservation.At,
            reservation.At + SeatingDuration)
        .ConfigureAwait(false);
    var availableTables = Allocate(reservations);
    if (!availableTables.Any(t => reservation.Quantity <= t.Seats))
        return false;
 
    await Repository.Create(reservation).ConfigureAwait(false);
    return true;
}

There's an implicit naming convention in .NET that methods with the Try prefix indicate an operation that may or may not succeed. The return value of such methods is either true or false, and they may also have out parameters if they optionally produce a value. That's not the case here, but I think one could make the case that TrySave succinctly describes what's going on.
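
For comparison, the Try convention is what you know from the base class library's int.TryParse:

// The .NET Try pattern: a bool return value, with an out parameter for the result.
if (int.TryParse("42", out var number))
    Console.WriteLine($"Parsed {number}.");
else
    Console.WriteLine("Not a number.");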

All is good, then?

A vague wrapper #

After our conscientious programmer meticulously designed and named the above TrySave method, it turns out that it doesn't meet all requirements. Users of the system file a bug: the system accepts reservations outside the restaurant's opening hours.

The original programmer has moved on to greener pastures, so fixing the bug falls on a poor maintenance developer with too much to do. Having recently learned about the open-closed principle, our new protagonist decides to wrap the existing TrySave in a new method:

public async Task<bool> Check(Reservation reservation)
{
    if (reservation is null)
        throw new ArgumentNullException(nameof(reservation));
 
    if (reservation.At < DateTime.Now)
        return false;
    if (reservation.At.TimeOfDay < OpensAt)
        return false;
    if (LastSeating < reservation.At.TimeOfDay)
        return false;
 
    return await Manager.TrySave(reservation).ConfigureAwait(false);
}

This new method first checks whether the reservation is within opening hours and in the future. If that's not the case, it returns false. Only if these preconditions are fulfilled does it delegate the decision to that TrySave method.

Notice, however, the name. The bug was urgent, and our poor programmer didn't have time to think of a good name, so Check it is.

Caller's perspective #

How does this look from the perspective of calling code? Here's the Controller action that handles the pertinent HTTP request:

public async Task<ActionResult> Post(ReservationDto dto)
{
    if (dto is null)
        throw new ArgumentNullException(nameof(dto));
 
    Reservation? r = dto.Validate();
    if (r is null)
        return new BadRequestResult();
 
    var isOk = await Manager.Check(r).ConfigureAwait(false);
    if (!isOk)
        return new StatusCodeResult(StatusCodes.Status500InternalServerError);
 
    return new NoContentResult();
}

Try to forget the code you've just seen and imagine that you're looking at this code first. You'd be excused if you miss what's going on. It looks as though the method just does a bit of validation and then checks 'something' concerning the reservation.

There's no hint that the Check method might perform the significant side effect of saving the reservation in the database.

You'll only learn that if you read the implementation details of Check. As I argue in my Humane Code video, if you have to read the source code of an object, encapsulation is broken.

Such code doesn't fit in your brain. You'll struggle as you try to keep track of all the things that are going on in the code, all the way from the outer boundary of the application to implementation details that relate to databases, third-party services, etcetera.

Straw man? #

You may think that this is a straw man argument. After all, wouldn't it be better to edit the original TrySave method?

Perhaps, but it would make that class more complex. The TrySave method has a cyclomatic complexity of only 3, while the Check method has a complexity of 5. Combining them might easily take them over some threshold.

Additionally, each of these two classes has different dependencies. As the TrySave method implies, it relies on both Repository and SeatingDuration, and the Allocate helper method (not shown) uses a third dependency: the restaurant's table configuration.

Likewise, the Check method relies on OpensAt and LastSeating. If you find it better to edit the original TrySave method, you'd have to combine these dependencies as well. Each time you do that, the class grows until it becomes a God object.

It's rational to attempt to separate things in multiple classes. It also, on the surface, seems to make unit testing easier. For example, here's a test that verifies that the Check method rejects reservations before the restaurant's opening time:

[Fact]
public async Task RejectReservationBeforeOpeningTime()
{
    var r = new Reservation(
        DateTime.Now.AddDays(10).Date.AddHours(17),
        "colaera@example.com",
        "Cole Aera",
        1);
    var mgrTD = new Mock<IReservationsManager>();
    mgrTD.Setup(mgr => mgr.TrySave(r)).ReturnsAsync(true);
    var sut = new RestaurantManager(
        TimeSpan.FromHours(18),
        TimeSpan.FromHours(21),
        mgrTD.Object);
 
    var actual = await sut.Check(r);
 
    Assert.False(actual);
}

By replacing the TrySave method by a test double, you've ostensibly decoupled the Check method from all the complexity of the TrySave method.

To be clear, this style of programming, with lots of nested interfaces and tests with mocks and stubs, is far from ideal, but I still find it better than a big ball of mud.

Alternative #

A better alternative is Functional Core, Imperative Shell, AKA impureim sandwich. Move all impure actions to the edge of the system, leaving only referentially transparent functions as the main implementers of logic. It could look like this:

[HttpPost]
public async Task<ActionResult> Post(ReservationDto dto)
{
    if (dto is null)
        throw new ArgumentNullException(nameof(dto));
 
    var id = dto.ParseId() ?? Guid.NewGuid();
    Reservation? r = dto.Validate(id);
    if (r is null)
        return new BadRequestResult();
 
    var reservations = await Repository.ReadReservations(r.At).ConfigureAwait(false);
    if (!MaitreD.WillAccept(DateTime.Now, reservations, r))
        return NoTables500InternalServerError();
 
    await Repository.Create(r).ConfigureAwait(false);
 
    return Reservation201Created(r);
}

Nothing is swept under the rug here. WillAccept is a pure function, and while it encapsulates significant complexity, the only thing you need to understand when you're trying to understand the above Post code is that it returns either true or false.

Another advantage of pure functions is that they are intrinsically testable. That makes unit testing and test-driven development easier.

Even with a functional core, you'll also have an imperative shell. You can still test that, too, such as the Post method. It isn't referentially transparent, so you might be inclined to use mocks and stubs, but I instead recommend state-based testing with a Fake database.
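
The FakeDatabase used throughout these articles isn't shown here. The idea, sketched below under the assumption of a repository interface matching the calls in the Post method above, is an in-memory implementation whose state you can assert against:

// A sketch only; the actual FakeDatabase and repository interface in the
// sample code base may differ.
public interface IReservationsRepository
{
    Task<IReadOnlyCollection<Reservation>> ReadReservations(DateTime dateTime);
    Task Create(Reservation reservation);
}

public sealed class FakeDatabase : Collection<Reservation>, IReservationsRepository
{
    public Task<IReadOnlyCollection<Reservation>> ReadReservations(DateTime dateTime)
    {
        // Naive filter: return the reservations on the same date.
        IReadOnlyCollection<Reservation> result =
            this.Where(r => r.At.Date == dateTime.Date).ToList();
        return Task.FromResult(result);
    }

    public Task Create(Reservation reservation)
    {
        Add(reservation);
        return Task.CompletedTask;
    }
}

Because the Fake is also a Collection<Reservation>, a state-based assertion like Assert.Contains(expected, db) works directly.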

Conclusion #

Good names are important, but don't let good names, alone, lull you into a false sense of security. All it takes is one vaguely named wrapper object, and all the information in your meticulously named methods is lost.

This is one of many reasons I try to design with static types instead of names. Not that I dismiss the value of good names. After all, you'll have to give your types good names as well.

Types are more robust in the face of inadvertent changes; or, rather, they tend to resist when we try to do something stupid. I suppose that's what lovers of dynamically typed languages feel as 'friction'. In my mind, it's entirely opposite. Types keep me honest.

Unfortunately, most type systems don't offer an adequate degree of safety. Even in F#, which has a great type system, you can introduce impure actions into what you thought was a pure function, and you'd be none the wiser. That's one of the reasons I find Haskell so interesting. Because of the way IO works, you can't inadvertently sweep surprises under the rug.


Comments

Johannes Schmitt #

I find the idea of the impure/pure/impure sandwich rather interesting and I agree with the benefits that it yields. However, I was wondering about where to move synchronisation logic, i.e. the reservation system should avoid double bookings. With the initial TrySave approach it would be clear to me where to put this logic: the synchronisation mechanism should be part of the TrySave method. With the impure/pure/impure sandwich, it will move out to the outermost layer (HTTP Controller) - at least this is how I'd see it. My feeling tells me that this is a bit smelly, but I can't really pinpoint why I think so. Can you give some advice on this? How would you solve that?

2020-12-12 19:08 UTC

Johannes, thank you for writing. There are several ways to address that issue, depending on what sort of trade-off you're looking for. There's always a trade-off.

You can address the issue with a lock-free architecture. This typically involves expressing the desired action as a Command and putting it on a durable queue. If you combine that with a single-threaded, single-instance Actor that pulls Commands off the queue, you need no further transaction processing, because the architecture itself serialises writes. You can find plenty of examples of such an architecture on the internet, including (IIRC) my Pluralsight course Functional Architecture with F#.
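
As a rough sketch of the Actor idea (not taken from the course, and using an in-memory channel rather than a durable queue), a single consumer could look like this:

using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

// Hypothetical Command type; a real one would carry the reservation data.
public sealed record ReservationCommand(Guid Id);

public sealed class ReservationActor
{
    private readonly Channel<ReservationCommand> channel =
        Channel.CreateUnbounded<ReservationCommand>(
            new UnboundedChannelOptions { SingleReader = true });
    private readonly Func<ReservationCommand, Task> handle;

    public ReservationActor(Func<ReservationCommand, Task> handle)
    {
        this.handle = handle;
    }

    // Producers (e.g. Controllers) enqueue Commands.
    public ValueTask Post(ReservationCommand command) =>
        channel.Writer.WriteAsync(command);

    // A single consumer processes Commands one at a time, so writes are
    // serialised without explicit locking or database transactions.
    public async Task RunAsync(CancellationToken cancellationToken)
    {
        await foreach (var command in channel.Reader.ReadAllAsync(cancellationToken))
            await handle(command);
    }
}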

Another option is to simply surround the impureim sandwich with a TransactionScope (if you're on .NET, that is).

2020-12-16 16:59 UTC

Redirect legacy URLs

Monday, 16 November 2020 06:47:00 UTC

Evolving REST API URLs when cool URIs don't change.

More than one reader reacted to my article on fit URLs by asking about bookmarks and original URLs. Daniel Sklenitzka's question is a good example:

"I see how signing the URLs prevents clients from retro-engineering the URL templates, but how does it help preventing breaking changes? If the client stores the whole URL instead of just the ID and later the URL changes because a restaurant ID is added, the original URL is still broken, isn't it?"

While I answered the question on the same page, I think that it's worthwhile to expand it.

The rules of HTTP #

I agree with the implicit assumption that clients are allowed to bookmark links. It seems, then, like a breaking change if you later change your internal URL scheme. That seems to imply that the bookmarked URL is gone, breaking a tenet of the HTTP protocol: Cool URIs don't change.

REST APIs are supposed to play by the rules of HTTP, so it'd seem that once you've published a URL, you can never retire it. You can, on the other hand, change its behaviour.

Let's call such URLs legacy URLs. Keep them around, but change them to return 301 Moved Permanently responses.

The rules of REST go both ways. The API is expected to play by the rules of HTTP, and so are the clients. Clients are not only expected to follow links, but also redirects. If a legacy URL starts returning a 301 Moved Permanently response, a well-behaved client doesn't break.
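
As an example of a well-behaved client, .NET's HttpClient follows redirects by default (AllowAutoRedirect is true), so code like the following keeps working even after the bookmarked URL starts returning 301 Moved Permanently. The URL is only a placeholder:

// The bookmarked legacy URL; the client transparently follows the
// 301 Moved Permanently response to the new address.
using var client = new HttpClient();
var response = await client.GetAsync(new Uri("http://localhost/calendar/2020"));
response.EnsureSuccessStatusCode();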

Reverse proxy #

As I've previously described, one of the many benefits of HTTP-based services is that you can put a reverse proxy in front of your application servers. I've no idea how to configure or operate NGINX or Varnish, but from talking to people who do know, I get the impression that they're quite scriptable.

Since the above ideas are independent of actual service implementation or behaviour, it's a generic problem that you should seek to address with general-purpose software.

Sequence diagram showing a reverse proxy returning a redirect response to a request for a legacy URL.

Imagine that a reverse proxy is configured with a set of rules that detects legacy URLs and knows how to forward them. Clearly, the reverse proxy must know of the REST API's current URL scheme to be able to do that. You might think that this would entail leaking an implementation detail, but just as I consider any database used by the API as part of the overall system, I'd consider the reverse proxy as just another part.

Redirecting with ASP.NET #

If you don't have a reverse proxy, you can also implement redirects in code. It'd be better to use something like a reverse proxy, because that would mean that you get to delete code from your code base, but sometimes that's not possible.

The code shown here is part of the sample code base that accompanies my book Code That Fits in Your Head.

In ASP.NET, you can return 301 Moved Permanently responses just like any other kind of HTTP response:

[Obsolete("Use Get method with restaurant ID.")]
[HttpGet("calendar/{year}/{month}")]
public ActionResult LegacyGet(int year, int month)
{
    return new RedirectToActionResult(
        nameof(Get),
        null,
        new { restaurantId = Grandfather.Id, year, month },
        permanent: true);
}

This LegacyGet method redirects to the current Controller action called Get by supplying the arguments that the new method requires. The Get method has this signature:

[HttpGet("restaurants/{restaurantId}/calendar/{year}/{month}")]
public async Task<ActionResult> Get(int restaurantId, int year, int month)

When I expanded the API from a single restaurant to a multi-tenant system, I had to grandfather in the original restaurant. I gave it a restaurantId, but in order to not put magic constants in the code, I defined it as the named constant Grandfather.Id.

Notice that I also adorned the LegacyGet method with an [Obsolete] attribute to make it clear to maintenance programmers that this is legacy code. You might argue that the Legacy prefix already does that, but the [Obsolete] attribute will make the compiler emit a warning, which is even better feedback.

Regression test #

While legacy URLs may be just that: legacy, that doesn't mean that it doesn't matter whether or not they work. You may want to add regression tests.

If you implement redirects in code (as opposed to a reverse proxy), you should also add automated tests that verify that the redirects work:

[Theory]
[InlineData("http://localhost/calendar/2020?sig=ePBoUg5gDw2RKMVWz8KIVzF%2Fgq74RL6ynECiPpDwVks%3D")]
[InlineData("http://localhost/calendar/2020/9?sig=ZgxaZqg5ubDp0Z7IUx4dkqTzS%2Fyjv6veDUc2swdysDU%3D")]
public async Task BookmarksStillWork(string bookmarkedAddress)
{
    using var api = new LegacyApi();
 
    var actual = await api.CreateDefaultClient().GetAsync(new Uri(bookmarkedAddress));
 
    Assert.Equal(HttpStatusCode.MovedPermanently, actual.StatusCode);
    var follow = await api.CreateClient().GetAsync(actual.Headers.Location);
    follow.EnsureSuccessStatusCode();
}

This test interacts with a self-hosted service at the HTTP level. LegacyApi is a test-specific helper class that derives from WebApplicationFactory<Startup>.

The test uses URLs that I 'bookmarked' before I evolved the URLs to a multi-tenant system. As you can tell from the host name (localhost), these are bookmarks against the self-hosted service. The first request is made with CreateDefaultClient, which doesn't automatically follow redirects, so the test can verify that the response is 301 Moved Permanently. It then requests the new address and uses EnsureSuccessStatusCode as an assertion.

Conclusion #

When you evolve fit URLs, it could break clients that may have bookmarked legacy URLs. Consider leaving 301 Moved Permanently responses at those addresses.


Checking signed URLs with ASP.NET

Monday, 09 November 2020 12:19:00 UTC

Use a filter to check all requested URL signatures.

This article is part of a short series on fit URLs. In the overview article, I argued that you should be signing URLs in order to prevent your REST APIs from becoming victims of Hyrum's law. In the previous article you saw how to sign URLs with ASP.NET.

In this article you'll see how to check the URLs of all HTTP requests to the API and reject those that aren't up to snuff.

The code shown here is part of the sample code base that accompanies my book Code That Fits in Your Head.

Filter #

If you want to intercept all incoming HTTP requests in ASP.NET Core, an IAsyncActionFilter is a good option. This one should look at the URL of all incoming HTTP requests and detect if the client tried to tamper with it.

internal sealed class UrlIntegrityFilter : IAsyncActionFilter
{
    private readonly byte[] urlSigningKey;
 
    public UrlIntegrityFilter(byte[] urlSigningKey)
    {
        this.urlSigningKey = urlSigningKey;
    }
 
    // More code comes here...

The interface only defines a single method:

public async Task OnActionExecutionAsync(
    ActionExecutingContext context,
    ActionExecutionDelegate next)
{
    if (IsGetHomeRequest(context))
    {
        await next().ConfigureAwait(false);
        return;
    }
 
    var strippedUrl = GetUrlWithoutSignature(context);
    if (SignatureIsValid(strippedUrl, context))
    {
        await next().ConfigureAwait(false);
        return;
    }
 
    context.Result = new NotFoundResult();
}

While the rule is to reject requests with invalid signatures, there's one exception. The 'home' resource requires no signature, as this is the only publicly documented URL for the API. Thus, if IsGetHomeRequest returns true, the filter invokes the next delegate and returns.

Otherwise, it strips the signature off the URL and checks if the signature is valid. If it is, it again invokes the next delegate and returns.

If the signature is invalid, on the other hand, the filter stops further execution by not invoking next. Instead, it sets the response to a 404 Not Found result.

It may seem odd to return 404 Not Found if the signature is invalid. Wouldn't 401 Unauthorized or 403 Forbidden be more appropriate?

Not really. Keep in mind that while this behaviour may use cryptographic technology, it's not a security feature. The purpose is to make it impossible for clients to reverse-engineer an implied interface. This protects them from breaking changes in the future. Clients are supposed to follow links, and the URLs given by the API itself are proper, existing URLs. If you try to edit a URL, then that URL doesn't work. It represents a resource that doesn't exist. While it may seem surprising at first, I find that a 404 Not Found result is the most appropriate status code to return.

Detecting a home request #

The IsGetHomeRequest helper method is straightforward:

private static bool IsGetHomeRequest(ActionExecutingContext context)
{
    return context.HttpContext.Request.Path == "/"
        && context.HttpContext.Request.Method == "GET";
}

This predicate only looks at the Path and Method of the incoming request. Perhaps it also ought to check that the URL has no query string parameters, but I'm not sure if that actually matters.
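
If you did want to rule out query string parameters on the home resource as well, a variant could look like this; it's a sketch, not the code base's actual implementation:

private static bool IsGetHomeRequest(ActionExecutingContext context)
{
    var request = context.HttpContext.Request;
    return request.Path == "/"
        && request.Method == "GET"
        // Additionally reject requests like /?foo=bar to the home resource.
        && !request.QueryString.HasValue;
}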

Stripping off the signature #

The GetUrlWithoutSignature method strips off the signature query string variable from the URL, but leaves everything else intact:

private static string GetUrlWithoutSignature(ActionExecutingContext context)
{
    var restOfQuery = QueryString.Create(
        context.HttpContext.Request.Query.Where(x => x.Key != "sig"));
 
    var url = context.HttpContext.Request.GetEncodedUrl();
    var ub = new UriBuilder(url);
    ub.Query = restOfQuery.ToString();
    return ub.Uri.AbsoluteUri;
}

The purpose of removing only the sig query string parameter is that it restores the rest of the URL to the value that it had when it was signed. This enables the SignatureIsValid method to recalculate the HMAC.
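
As a worked example, consider the first bookmarked URL from the regression test above. Since sig is its only query string parameter, clearing the query string gives the same result as the filtering code; this stand-alone sketch shows the effect:

var url = "http://localhost/calendar/2020?sig=ePBoUg5gDw2RKMVWz8KIVzF%2Fgq74RL6ynECiPpDwVks%3D";
var ub = new UriBuilder(url) { Query = "" };

// Prints http://localhost/calendar/2020 - the value the URL had when it
// was signed, so the HMAC can be recalculated over it.
Console.WriteLine(ub.Uri.AbsoluteUri);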

Validating the signature #

The SignatureIsValid method validates the signature:

private bool SignatureIsValid(string candidate, ActionExecutingContext context)
{
    var sig = context.HttpContext.Request.Query["sig"];
    var receivedSignature = Convert.FromBase64String(sig.ToString());
 
    using var hmac = new HMACSHA256(urlSigningKey);
    var computedSignature = hmac.ComputeHash(Encoding.ASCII.GetBytes(candidate));
 
    var signaturesMatch = computedSignature.SequenceEqual(receivedSignature);
    return signaturesMatch;
}

If the receivedSignature equals the computedSignature, the signature is valid.

This prevents clients from creating URLs based on implied templates. Since clients don't have the signing key, they can't compute a valid HMAC, and therefore the URLs they'll produce will fail the integrity test.
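
To get a sense of why that is, consider this little stand-alone sketch, which reuses HMACSHA256 and Encoding as in the filter above; the key and URLs are made up. Even a single-character edit to a URL produces a completely different HMAC, so guessing at URL templates can't yield a matching sig value:

var key = Encoding.ASCII.GetBytes("made-up signing key");
using var hmac = new HMACSHA256(key);

var signed = Convert.ToBase64String(
    hmac.ComputeHash(Encoding.ASCII.GetBytes("http://localhost/calendar/2020")));
var guessed = Convert.ToBase64String(
    hmac.ComputeHash(Encoding.ASCII.GetBytes("http://localhost/calendar/2021")));

// The two Base64 strings bear no resemblance to each other.
Console.WriteLine(signed);
Console.WriteLine(guessed);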

Configuration #

As is the case for the URL-signing feature, you'll first need to read the signing key from the configuration system. This is the same key used to sign URLs:

var urlSigningKey = Encoding.ASCII.GetBytes(
    Configuration.GetValue<string>("UrlSigningKey"));

Next, you'll need to register the filter with the ASP.NET framework:

services.AddControllers(opts => opts.Filters.Add(new UrlIntegrityFilter(urlSigningKey)));

This is typically done in the ConfigureServices method of the Startup class.
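
Putting the two together, the relevant part of Startup might look like this; it's a sketch, and the actual ConfigureServices method in the code base registers more services:

public void ConfigureServices(IServiceCollection services)
{
    var urlSigningKey = Encoding.ASCII.GetBytes(
        Configuration.GetValue<string>("UrlSigningKey"));

    services.AddControllers(opts =>
        opts.Filters.Add(new UrlIntegrityFilter(urlSigningKey)));

    // Other service registrations go here...
}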

Conclusion #

With a filter like UrlIntegrityFilter you can check the integrity of URLs on all incoming requests to your REST API. This prevents clients from making up URLs based on an implied interface. This may seem restrictive, but is actually for their own benefit. When they can't assemble URLs from scratch, the only remaining option is to follow the links that the API provides.

This enables you to evolve the API without breaking existing clients. While client developers may not initially appreciate having to follow links instead of building URLs out of templates, they may value that their clients don't break as you evolve the API.

Next: Redirect legacy URLs.

