Domain modelling with REST

Wednesday, 07 December 2016 09:15:00 UTC

Make illegal states unrepresentable by using hyperlinks as the engine of application state.

Every piece of software, whether it's a web service, smart phone app, batch job, or speech recognition system, interfaces with the world in some way. Sometimes, that interface is a user interface, sometimes it's a machine-readable interface; sometimes it involves rendering pixels on a screen, and sometimes it involves writing to files, selecting records from a database, sending emails, and so on.

Programmers often struggle with how to model these interactions. This is particularly difficult because at the boundaries, systems no longer adhere to popular programming paradigms. Previously, I've explained why, at the boundaries, applications aren't object-oriented. By the same type of argument, neither are they functional (as in 'functional programming').

If that's the case, why should you even bother with 'domain modelling'? Particularly, does it even matter that, with algebraic data types, you can make illegal states unrepresentable? If you need to compromise once you hit the boundary of your application, is it worth the effort?

It is, if you structure your application correctly. Proper (level 3) REST architecture gives you one way to structure applications in such a way that you can surface the constraints of your domain model to the interface layer. When done correctly, you can also make illegal states unrepresentable at the boundary.

A payment example

In my previous article, I demonstrated how to use (static) types to model an on-line payment domain. To summarise, my task was to model three types of payments:

  • Individual payments, which happen only once.
  • Parent payments, which start a long-term payment relationship.
  • Child payments, which are automated payments originally authorised by an initial parent payment.

The constraint I had to model is that a child payment requires a transaction key that identifies the original parent payment. When making a payment, however, it's only valid to supply a transaction key for a child payment. It'd be invalid to supply a transaction key to a parent or an individual payment. On the other hand, you have to distinguish individual payments from parent payments, because only parent payments can be used to start a long-term payment relationship. All this leads to this table of possible combinations:
"StartRecurrent" : "false" "StartRecurrent" : "true"
"OriginalTransactionKey" : null Individual Parent
"OriginalTransactionKey" : "1234ABCD" Child (Illegal)
The table shows that it's illegal to simultaneously provide a transaction key and set StartRecurrent to true. The other three combinations, on the other hand, are valid.

As I demonstrated in my previous article, you can easily model this with algebraic data types.

At the boundary, however, there are no static types, so how could you model something like that as a web service?

A RESTful solution

A major advantage of REST is that it gives you a way to realise your domain model at the application boundary. It does require, though, that you design the API according to level 3 of the Richardson maturity model. In other words, it's not REST if you're merely tunnelling JSON (or XML) through HTTP. It's still not REST if you publish URL templates and expect clients to fill data into specific place-holders of those URLs.

It's REST if the only way a client can interact with your API is by following hyperlinks.

If you follow those design principles, however, it's easy to model the above payment domain as a RESTful API.

In the following, I will show examples in XML, but it could as well have been JSON. After all, a true REST API must support content negotiation. One of the reasons that I prefer XML is that I can use XPath to point out various nodes.

A client must begin at a pre-published 'home' resource, just like the home page of a web site. This resource presents affordances in the shape of hyperlinks. As recommended by the RESTful Web Services Cookbook, I always use ATOM links:

<home xmlns="http://example.com/payment"
      xmlns:atom="http://www.w3.org/2005/Atom">
  <payment-methods>
    <payment-method>
      <links>
        <atom:link rel="example:pay-individual"
                   href="https://example.com/gift-card" />
      </links>
      <id>gift-card</id>
    </payment-method>
    <payment-method>
      <links>
        <atom:link rel="example:pay-individual"
                   href="https://example.com/credit-card" />
        <atom:link rel="example:pay-parent"
                   href="https://example.com/recurrent/start/credit-card" />
      </links>
      <id>credit-card</id>
    </payment-method>
  </payment-methods>
</home>

A client receiving the above response is effectively presented with a choice. It can choose to pay with a gift card or credit card, and nothing else, however much it'd like to pay with, say, PayPal. For each of these payment methods, zero or more links are available.

Specifically, there are links to create both an individual or a parent payment with a credit card, but it's only possible to make an individual payment with a gift card. You can't set up a long-term, automated payment relationship with a gift card. (This may or may not make sense, depending on how you look at it, but it's fundamentally a business decision.)

Links are defined by relationship types, which are unique identifiers present in the rel attributes. You can think of them as equivalent to the human-readable text in an HTML a tag that suggests to a human user what to expect from clicking the link; only, rel attribute values are machine-readable and part of the contract between client and service.
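
To illustrate how a client might use those relationship types, here's a small sketch in F#. The Link record and findLink function are hypothetical helpers of my own invention, not part of the API described in this article:

```fsharp
// Illustrative sketch only: a simplified model of parsed ATOM links.
type Link = { Rel : string; Href : string }

// Select the first link matching a known relationship type, if any.
let findLink rel links =
    links |> List.tryFind (fun l -> l.Rel = rel)

// The links advertised for the credit-card payment method above:
let creditCardLinks =
    [ { Rel = "example:pay-individual"
        Href = "https://example.com/credit-card" }
      { Rel = "example:pay-parent"
        Href = "https://example.com/recurrent/start/credit-card" } ]

// A client navigates by rel value, never by hard-coded URL:
findLink "example:pay-parent" creditCardLinks
```

If the service stops advertising a link, findLink simply returns None, and the client can't perform that operation; that's the hypermedia constraint at work.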

Notice how the above XML representation only gives a client the option of making an individual or a parent payment with a credit card. A client can't make a child payment at this point. This follows the domain model, because you can't make a child payment without first having made a parent payment.

RESTful individual payments

If a client wishes to make an individual payment, it follows the link identified by

/home/payment-methods/payment-method[id = 'credit-card']/links/atom:link[@rel = 'example:pay-individual']/@href

In the above XPath query, I've ignored the default document namespace in order to make the expression more readable. The query returns https://example.com/credit-card, and the client can now make an HTTP POST request against that URL with a JSON or XML document containing details about the payment (not shown here, because it's not important for this particular discussion).

RESTful parent payments

If a client wishes to make a parent payment, the initial procedure is the same. First, it follows the link identified by

/home/payment-methods/payment-method[id = 'credit-card']/links/atom:link[@rel = 'example:pay-parent']/@href

The result of that XPath query is https://example.com/recurrent/start/credit-card, so the client can make an HTTP POST request against that URL with the payment details. Unlike the response for an individual payment, the response for a parent payment contains another link:

<payment xmlns="http://example.com/payment"
         xmlns:atom="http://www.w3.org/2005/Atom">
  <links>
    <atom:link rel="example:pay-child"
               href="https://example.com/recurrent/42" />
    <atom:link rel="example:payment-details"
               href="https://example.com/42" />
  </links>
  <amount>13.37</amount>
  <currency>EUR</currency>
  <invoice>1234567890</invoice>
</payment>

This response echoes the details of the payment: this is a payment of 13.37 EUR for invoice 1234567890. It also includes some links that a client can use to further interact with the payment:

  • The example:payment-details link can be used to query the API for details about the payment, for example its status.
  • The example:pay-child link can be used to make a child payment.

The example:pay-child link is only returned if the previous payment was a parent payment. When a client makes an individual payment, this link isn't present in the response, but when the client makes a parent payment, it is.

Another design principle of REST is that cool URIs don't change; once the API has shown a URL like https://example.com/recurrent/42 to a client, it should honour that URL indefinitely. The upshot of that is that a client can save that URL for later use. If a client wants to, say, renew a subscription, it can make a new HTTP POST request to that URL a month later, and that's going to be a child payment. Clients don't have to hack the URL in order to figure out what the transaction key is; they can simply store the complete URL as is and use it later.

A network of options

Using a design like the one sketched above, you can make illegal states unrepresentable. There's no way for a client to make a payment with StartRecurrent = true and a non-null transaction key; there's no link to that combination. Such an API uses hypermedia as the engine of application state.

It shouldn't be surprising that proper RESTful design works that way. After all, REST is essentially a distillate of the properties that make the World Wide Web work. On a human-readable web page, the user follows links to other pages, and a well-designed web site will only enable a link if the destination exists.

You can even draw a graph of the API I've sketched above:

Graph of payment options, including a start node, an end node, and a node for each of the three payment types

In this diagram, you can see that when you make an individual payment, that's all you can do. You can also see that the only way to make a child payment is by first making a parent payment. There's also a path from parent payments directly to the end node, because a client doesn't have to make a child payment just because it made a parent payment.

If you think that this looks like a finite state machine, then that's no coincidence. That's exactly what it is. You have states (the nodes) and paths between them. If a state is illegal, then don't add that node; only add nodes for legal states, then add links between the nodes that model legal transitions.

Incidentally, languages like F# excel at implementing finite state machines, so it's no wonder I like to implement RESTful APIs in F#.
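
As a sketch (the state and transition names below are mine, not taken from the API in this article), the diagram could be encoded as a discriminated union of legal states, with functions for the legal transitions:

```fsharp
// Each case is a legal state from the diagram; illegal states have no case.
type PaymentState =
    | Start
    | IndividualPaid
    | ParentPaid of childPaymentUri : string
    | ChildPaid of childPaymentUri : string

// Each transition returns Some for a legal move, None otherwise,
// mirroring the presence or absence of a hyperlink.
let payIndividual state =
    match state with
    | Start -> Some IndividualPaid
    | _ -> None

let payParent childUri state =
    match state with
    | Start -> Some (ParentPaid childUri)
    | _ -> None

let payChild state =
    match state with
    | ParentPaid uri | ChildPaid uri -> Some (ChildPaid uri)
    | _ -> None
```

Evaluating payChild Start yields None: just as in the RESTful API, there's simply no path from the start state to a child payment.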

Summary

Truly RESTful design enables you to make illegal states unrepresentable by using hypermedia as the engine of application state. This gives you a powerful design tool to ensure that clients can only perform correct operations.

As I also wrote in my previous article, this, too, is no silver bullet. You can turn an API into a pit of success, but there are still many fault scenarios that you can't prevent.

If you were intrigued by this article, but are having trouble applying these design techniques to your own field, I'm available for hire for short or long-term engagements.


Easy domain modelling with types

Monday, 28 November 2016 07:21:00 UTC

Algebraic data types make domain modelling easy.

People often ask me if I think that F# is a good general-purpose language, and when I emphatically answer yes!, the natural next question is: why?

For years, I've been able to answer this question in the abstract, but I've been looking for a good concrete example with which I could illustrate the answer. I believe that I've now found such an example.

The abstract answer, by the way, is that F# has algebraic data types, which makes domain modelling much easier than in languages that don't have such types. Don't worry if the word 'algebraic' sounds scary, though. It's not at all difficult to understand, and I'll show you a simple example.

Payment types

At the moment, I'm working on an integration project: I'm developing a RESTful API that serves as Facade in front of a third-party payment provider. The third-party provider exposes their own API and web-based GUI that enable our end users to pay for services using credit cards, PayPal, and so on. The API that I'm developing presents a simplified, RESTful API to other clients in our organisation.

The example you're going to see here is real code that I'm writing in order to implement the desired functionality.

The system must be able to handle several different types of payment:

  • Sometimes, a user pays for a single thing, and that's the end of that transaction.
  • Other times, however, a user engages into a long-term payment relationship. This could be, for example, a subscription, or an 'auto-fill' style of relationship. This is handled in two distinct phases:
    • An initial payment (can sometimes be for a zero amount) that authorises the merchant to make further transactions.
    • Subsequent payments, based off that initial payment. These payments can be automated, because they require no further user interaction than the initial authorisation.

The third-party service refers to these 'long-term relationship' payments as recurring payments, but in order to distinguish between the first and the subsequent payments in such a relationship, I decided to call them parent and child payments; accordingly, I call the one-off payments individual payments.

You can indicate the type of payment when interacting with the payment service's JSON-based API, like this:

{
  ...
  "StartRecurrent": "false"
  ...
}

Obviously, as the (illegal) ellipses suggest, there's much more data associated with a payment, but that's not important in this example. Since StartRecurrent is false, this is either an individual payment, or a child payment. If you want to start a long-term relationship, you must create a parent payment and set StartRecurrent to true.

Child payments, however, are a bit different, because you have to tell the payment service about the parent payment:

{
  ...
  "OriginalTransactionKey": "1234ABCD",
  "StartRecurrent": "false"
  ...
}

As you can see, when making a child payment, you supply the transaction ID for the parent payment. (This ID is given to you by the payment service when you initiate the parent payment.)

In this case, you're clearly not starting a new recurrent transaction.

There are two dimensions of variation in this example: StartRecurrent and OriginalTransactionKey. Let's put them in a table:

"StartRecurrent" : "false" "StartRecurrent" : "true"
"OriginalTransactionKey" : null Individual Parent
"OriginalTransactionKey" : "1234ABCD" Child (Illegal)
As the table suggests, the combination of an OriginalTransactionKey and setting StartRecurrent to true is illegal or, at best, meaningless.

How would you model the rules laid out in the above table? In languages like C#, it's difficult, but in F# it's easy.

C# attempts

Most C# developers would, I think, attempt to model a payment transaction with a class. If they aim for poka-yoke design, they might come up with a design like this:

public class PaymentType
{
    public PaymentType(bool startRecurrent)
    {
        this.StartRecurrent = startRecurrent;
    }
 
    public PaymentType(string originalTransactionKey)
    {
        if (originalTransactionKey == null)
            throw new ArgumentNullException(nameof(originalTransactionKey));
 
        this.StartRecurrent = false;
        this.OriginalTransactionKey = originalTransactionKey;
    }
 
    public bool StartRecurrent { get; private set; }
 
    public string OriginalTransactionKey { get; private set; }
}

This goes a fair way towards making illegal states unrepresentable, but it doesn't communicate to a fellow programmer how it should be used.

Code that uses instances of this PaymentType class could attempt to read the OriginalTransactionKey, which, depending on the type of payment, could return null. That sort of design leads to defensive coding.

Other people might attempt to solve the problem by designing a class hierarchy:

A hypothetical payment class hierarchy, showing a Payment base class, and three derived classes: IndividualPayment, ParentPayment, and ChildPayment.

(A variation on this design is to define an IPayment interface, and three concrete classes that implement that interface.)

This design trades better protection of invariants for violations of the Liskov Substitution Principle. Clients will have to (attempt to) downcast to subtypes in order to access all relevant data (particularly OriginalTransactionKey).

For completeness' sake, I can think of at least one other option with significantly different trade-offs: applying the Visitor design pattern. This is, however, quite a complex solution, and most people will find the disadvantages greater than the benefits.

Is it such a big deal, then? After all, it's only a single data value (OriginalTransactionKey) that may or may not be there. Surely, most programmers will be able to deal with that.

This may be true in this isolated case, but keep in mind that this is only a motivating example. In many other situations, the domain you're trying to model is much more intricate, with many more exceptions to general rules. The more dimensions you add, the more difficult it becomes to reason about the code.

F# model

F#, on the other hand, makes dealing with such problems so simple that it's almost anticlimactic. The reason is that F#'s type system enables you to model alternatives of data, in addition to the combinations of data that C# (or Java) enables. Such alternatives are called discriminated unions.

In the code base I'm currently developing, I model the various payment types like this:

type PaymentService = { Name : string; Action : string }
 
type PaymentType =
| Individual of PaymentService
| Parent of PaymentService
| Child of originalTransactionKey : string * paymentService : PaymentService

Here, PaymentService is a record type with some data about the payment (e.g. which credit card to use).

Even if you're not used to reading F# code, you can see three alternatives outlined on each of the three lines of code that start with a vertical bar (|). The PaymentType type has exactly three 'subtypes' (they're called cases, though). The illegal state of a non-null OriginalTransactionKey combined with a StartRecurrent value of true is not possible. It can't be compiled.

Not only that, but all clients given a PaymentType value must deal with all three cases (or the compiler will issue a warning). Here's one example where our code is creating the JSON document to send to the payment service:

let name, action, startRecurrent, transaction =
    match req.PaymentType with
    | Individual { Name = name; Action = action } ->
        name, action, false, None
    | Parent { Name = name; Action = action } -> name, action, true, None
    | Child (transactionKey, { Name = name; Action = action }) ->
        name, action, false, Some transactionKey

This code example also extracts name and action from the PaymentType value, but the relevant values to be aware of are startRecurrent and transaction.

  • For an individual payment, startRecurrent becomes false and transaction becomes None (meaning that the value is missing).
  • For a parent payment, startRecurrent becomes true and transaction becomes None.
  • For a child payment, startRecurrent becomes false and transaction becomes Some transactionKey.

Notice that the (parent) transactionKey is only available when the payment is a child payment.

The values startRecurrent and transaction (as well as name and action) are then used to create a JSON document. I'm not showing that part of the code here, since there's actually a lot going on in the real code base, and it's not related to how to model the domain. Imagine that these values are passed to a constructor.

This is a real-world example that, I hope, demonstrates why I prefer F# over C# for domain modelling. The type system enables me to model alternatives as well as combinations of data, and thereby making illegal states unrepresentable - all in only a few lines of code.

Summary

Classes, in languages like C# and Java, enable you to model combinations of data. The more fields or properties you add to a class, the more combinations are possible. This is often useful, but sometimes you need to be able to model alternatives, rather than combinations.

Some languages, like F#, Haskell, OCaml, Elm, Kotlin, and many others, have type systems that give you the power to model both combinations and alternatives. Such type systems are said to have algebraic data types, but while the word sounds 'mathy', such types make it much easier to model complex domains.

There are more reasons to love F# than only its algebraic data types, but this is the foremost reason I find it a better language for mainstream development work than C#.

If you want to see a more complex example of modelling with types, a good next step would be the first article in my Types + Properties = Software article series.

Finally, I should be careful that I don't oversell the idea of making illegal states unrepresentable. Algebraic data types give you an extra dimension in which you can model domains, but there are still rules that they can't enforce. As an example, you can't state that integers must only fall in a certain range (e.g. only positive integers allowed). There are other type systems, such as dependent types, that give you even more power to embed domain rules into types, but as far as I know, there are no type systems that can fully model all rules as types. You'll still have to write some code as well.
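
For range constraints like the positive-integer example, a common F# idiom is a single-case union with a private constructor and a validating creation function. This is a sketch of that idiom (the PositiveInt name and module are mine, not from the article's code base):

```fsharp
// The private constructor means callers outside this module can't
// create a PositiveInt directly; they must go through tryCreate.
type PositiveInt = private PositiveInt of int

module PositiveInt =
    // Validate at run time; the type itself can't express the range.
    let tryCreate i =
        if i > 0 then Some (PositiveInt i) else None

    // Extract the underlying integer.
    let value (PositiveInt i) = i

PositiveInt.tryCreate 42   // Some (PositiveInt 42)
PositiveInt.tryCreate (-7) // None
```

The rule is still enforced by run-time code rather than by the type system itself, but the private constructor guarantees that the validation can't be bypassed.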

The article is an instalment in the 2016 F# Advent calendar.


Comments

Mark,

I must be missing something important but it seems to me that the only advantage of using F# in this case is that the match is enforced to be exhaustive by the compiler. And of course the syntax is also nicer than a bunch of if's. In all other respects, the solution is basically equivalent to the C# class hierarchy approach.

Am I mistaken?

2016-12-03 08:38 UTC

Botond, thank you for writing. The major advantage is that enumeration of all possible cases is available at compile-time. One derived advantage of that is that the compiler can check whether a piece of code handles all cases. That's already, in my experience, a big deal. The sooner you can get feedback on your work, the better, and it doesn't get faster than compile-time feedback.

Another advantage of having all cases encoded in the type system is that it gives you better tool support. Imagine that you're looking at the return value of a function, and that this is the first time you're encountering that return type. If the return value is an abstract base class (or interface), you'll need to resort to either the documentation or reflection in order to figure out which subtypes exist. There can be arbitrarily many subtypes, and they can be scattered over arbitrarily many libraries (assemblies). Figuring out what to do with an abstract base class introduces a context switch that could have been avoided.

This is exactly another advantage offered by discriminated unions: when a function returns a discriminated union, you can immediately get tool support to figure out what to do with it, even if you've never encountered the type before.

The problem with examples such as the above is that I'm trying to explain how a language feature can help you with modelling complex domains, but if I try to present a really complex problem, no-one will have the patience to read the article. Instead, I have to come up with an example that's so simple that the reader doesn't give up, and hopefully still complex enough that the reader can imagine how it's a stand-in for a more complex problem.

When you look at the problem presented above, it's not that complex, so you can still keep a C# implementation in your head. As you add more variability to the problem, however, you can easily find yourself in a situation where the combinatorial explosion of possible values makes it difficult to ensure that you've dealt with all edge cases. This is one of the main reasons that C# and Java code often throws run-time exceptions (particularly null-reference exceptions).

It did, in fact, turn out that the above example domain became more complex as I learned more about the entire range of problems I had to solve. When I described the problem above, I thought that all payments would have pre-selected payment methods. In other words, when a user is presented with a check-out page, he or she selects the payment method (PayPal, direct debit, and so on), and only then, when we know the payment method, do we initiate the payment flow. It turns out, though, that in some cases, we should start the payment flow first, and then let the user pick the payment method from a list of options. It should be noted, however, that user-selection only makes sense for interactive payments, so a child payment can never be user-selectable (since it's automated).

It was trivial to extend the domain model with that new requirement:

type PaymentService = { Name : string; Action : string }
 
type PaymentMethod =
| PreSelected of PaymentService
| UserSelectable of string list
 
type TransactionKey = TransactionKey of string with
    override this.ToString () = match this with TransactionKey s -> s
 
type PaymentType =
| Individual of PaymentMethod
| Parent of PaymentMethod
| Child of TransactionKey * PaymentService

This effectively uses the static type system to state that both the Individual and Parent cases can be defined in one of two ways: PreSelected or UserSelectable, each of which, again, contains heterogeneous data (PaymentService versus string list). Child payments, on the other hand, can't be user-selectable, but must be defined by a PaymentService value, as well as a transaction key (at this point, I'd also created a single-case union for the transaction key, but that's a different topic; it's still a string).

Handling all the different combinations was equally easy, and the compiler guarantees that I've handled all possible combinations:

let services, selectables, startRecurrent, transaction =
    match req.PaymentType with
    | Individual (PreSelected ps) ->
        service ps, None, false, None
    | Individual (UserSelectable us) ->
        [||], us |> String.concat ", " |> Some, false, None
    | Parent (PreSelected ps) ->
        service ps, None, true, None
    | Parent (UserSelectable us) ->
        [||], us |> String.concat ", " |> Some, true, None
    | Child (TransactionKey transactionKey, ps) ->
        service ps, None, false, Some transactionKey

How would you handle this with a class hierarchy, and what would the consuming code look like?

2016-12-06 10:50 UTC

When variable names are in the way

Tuesday, 25 October 2016 06:20:00 UTC

While Clean Code recommends using good variable names to communicate the intent of code, sometimes, variable names can be in the way.

Good guides to more readable code, like Clean Code, explain how explicitly introducing variables with descriptive names can make the code easier to understand. There's much literature on the subject, so I'm not going to reiterate it here. It's not the topic of this article.

In the majority of cases, introducing a well-named variable will make the code more readable. There are, however, no rules without exceptions. After all, one of the hardest tasks in programming is naming things. In this article, I'll show you an example of such an exception. While the example is slightly elaborate, it's a real-world example I recently ran into.

Escaping object-orientation

Regular readers of this blog will know that I write many RESTful APIs in F#, but using ASP.NET Web API. Since I like to write functional F#, but ASP.NET Web API is an object-oriented framework, I prefer to escape the object-oriented framework as soon as possible. (In general, it makes good architectural sense to write most of your code as framework-independent as possible.)

ASP.NET Web API expects you handle HTTP requests using Controllers, so I use Constructor Injection to inject a function that will do all the actual work, and delegate each request to a function call. It often looks like this:

type PushController (imp) =
    inherit ApiController ()
 
    member this.Post (portalId : string, req : PushRequestDtr) : IHttpActionResult =
        match imp req with
        | Success () -> this.Ok () :> _
        | Failure (ValidationFailure msg) -> this.BadRequest msg :> _
        | Failure (IntegrationFailure msg) ->
            this.InternalServerError (InvalidOperationException msg) :> _

This particular Controller only handles HTTP POST requests, and it does it by delegating to the injected imp function and translating the return value of that function call to various HTTP responses. This enables me to compose imp from F# functions, and thereby escape the object-oriented design of ASP.NET Web API. In other words, each Controller is an Adapter over a functional implementation.

For good measure, though, this Controller implementation ought to be unit tested.

A naive unit test attempt

Each HTTP request is handled at the boundary of the system, and the boundary of the system is always impure - even in Haskell. This is particularly clear in the case of the above PushController, because it handles Success (). In success cases, the result is () (unit), which strongly implies a side effect. Thus, a unit test ought to care not only about what imp returns, but also the input to the function.

While you could write a unit test like the following, it'd be naive.

[<Property(QuietOnSuccess = true)>]
let ``Post returns correct result on validation failure`` req (NonNull msg) =
    let imp _ = Result.fail (ValidationFailure msg)
    use sut = new PushController (imp)
 
    let actual = sut.Post req
 
    test <@ actual
            |> convertsTo<Results.BadRequestErrorMessageResult>
            |> Option.map (fun r -> r.Message)
            |> Option.exists ((=) msg) @>

This unit test uses FsCheck's integration for xUnit.net, and Unquote for assertions. Additionally, it uses a convertsTo function that I've previously described.

The imp function for PushController must have the type PushRequestDtr -> Result<unit, BoundaryFailure>. In the unit test, it uses a wild-card (_) for the input value, so its type is 'a -> Result<'b, BoundaryFailure>. That's a wider type, but it matches the required type, so the test compiles (and passes).

FsCheck populates the req argument to the test function itself. This value is passed to sut.Post. If you look at the definition of sut.Post, you may wonder what happened to the individual (and currently unused) portalId argument. The explanation is that the Post method can be viewed as a method with two parameters, but it can also be viewed as an impure function that takes a single argument of the type string * PushRequestDtr - a tuple. In other words, the test function's req argument is a tuple. The test is not only concise, but also robust against refactorings. If you change the signature of the Post method, odds are that the test will still compile. (This is one of the many benefits of type inference.)

The problem with the test is that it doesn't verify the data flow into imp, so this version of PushController also passes the test:

type PushController (imp) =
    inherit ApiController ()
 
    member this.Post (portalId : string, req : PushRequestDtr) : IHttpActionResult =
        let minimalReq =
            { Transaction = { Invoice = ""; Status = { Code = { Code = 0 } } } }
        match imp minimalReq with
        | Success () -> this.Ok () :> _
        | Failure (ValidationFailure msg) -> this.BadRequest msg :> _
        | Failure (IntegrationFailure msg) ->
            this.InternalServerError (InvalidOperationException msg) :> _

As the name implies, the minimalReq value is the 'smallest' value I can create of the PushRequestDtr type. As you can see, this implementation ignores the req method argument and instead passes minimalReq to imp. This is clearly wrong, but it passes the unit test.

Data flow testing

Not only should you care about the output of imp, but you should also care about the input. This is because imp is inherently impure, so it'd be conceivable that the input values matter in some way.

As xUnit Test Patterns explains, automated tests should contain no branching, so I don't think it's a good idea to define a test-specific imp function using conditionals. Instead, we can use guard assertions to verify that the input is as expected:

[<Property(QuietOnSuccess = true)>]
let ``Post returns correct result on validation failure`` req (NonNull msg) =
    let imp candidate =
        candidate =! snd req
        Result.fail (ValidationFailure msg)
    use sut = new PushController (imp)
 
    let actual = sut.Post req
 
    test <@ actual
            |> convertsTo<Results.BadRequestErrorMessageResult>
            |> Option.map (fun r -> r.Message)
            |> Option.exists ((=) msg) @>

The imp function is now implemented using Unquote's custom =! operator, which means that candidate must equal snd req. If not, Unquote will throw an exception, and thereby fail the test.

If candidate is equal to snd req, the =! operator does nothing, enabling the imp function to return the value Result.fail (ValidationFailure msg).

This version of the test verifies the entire data flow through imp: both input and output.

There is, however, a small disadvantage to writing the imp code this way. It isn't a big issue, but it annoys me.

Here's the heart of the matter: I had to come up with a name for the local PushRequestDtr value that the =! operator evaluates against snd req. I chose to call it candidate, which may seem reasonable, but that naming strategy doesn't scale.

In order to keep the introductory example simple, I chose a Controller method that doesn't (yet) use its portalId argument, but the code base contains other Controllers, for example this one:

type IdealController (imp) =
    inherit ApiController ()
 
    member this.Post (portalId : string, req : IDealRequestDtr) : IHttpActionResult =
        match imp portalId req with
        | Success (resp : IDealResponseDtr) -> this.Ok resp :> _
        | Failure (ValidationFailure msg) -> this.BadRequest msg :> _
        | Failure (IntegrationFailure msg) ->
            this.InternalServerError (InvalidOperationException msg) :> _

This Controller's Post method passes both portalId and req to imp. In order to perform data flow verification of that implementation, the test has to look like this:

[<Property(QuietOnSuccess = true)>]
let ``Post returns correct result on success`` portalId req resp =
    let imp pid candidate =
        pid =! portalId
        candidate =! req
        Result.succeed resp
    use sut = new IdealController (imp)
 
    let actual = sut.Post (portalId, req)
 
    test <@ actual
            |> convertsTo<Results.OkNegotiatedContentResult<IDealResponseDtr>>
            |> Option.map (fun r -> r.Content)
            |> Option.exists ((=) resp) @>

This is where I began to run out of good argument names. You need names for the portalId and req arguments of imp, but you can't use those names because they're already in use. You can't even shadow the names of the outer values, because the test-specific imp function has to close over those outer values in order to compare them to their expected values.

While I decided to call the local portal ID argument pid, it's hardly helpful. Explicit arguments have become a burden rather than a help to the reader. If only we could get rid of those explicit arguments.

Point free

Functional programming offers a well-known alternative to explicit arguments, commonly known as point-free programming. Some people find point-free style unreadable, but sometimes it can make the code more readable. Could this be the case here?

If you look at the test-specific imp functions in both of the above examples with explicit arguments, you may notice that they follow a common pattern. First they invoke one or more guard assertions, and then they return a value. You can model this with a custom operator:

// 'Guard' composition. Returns the return value if ``assert`` doesn't throw.
// ('a -> unit) -> 'b -> 'a -> 'b
let (>>!) ``assert`` returnValue x =
    ``assert`` x
    returnValue

The first argument, ``assert``, is a function with the type 'a -> unit. This is the assertion function: it takes any value as input, and returns unit. The implication is that it'll throw an exception if the assertion fails.

After invoking the assertion, the function returns the returnValue argument.

The reason I designed it that way is that it's composable, which you'll see in a minute. The reason I named it >>! was that I wanted some kind of arrow, and I thought that the exclamation mark relates nicely to Unquote's use of exclamation marks.
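To see the mechanics in isolation, here's a tiny, throwaway usage of >>! with a hand-rolled guard. The assertPositive function is made up for this illustration; it isn't part of the article's code base:

```fsharp
// The >>! operator from above, repeated so that this snippet stands alone.
let (>>!) ``assert`` returnValue x =
    ``assert`` x
    returnValue

// A made-up guard assertion: throws if the input isn't positive.
let assertPositive x = if x <= 0 then failwith "Expected a positive number"

// Compose the guard with a constant return value.
let f = assertPositive >>! "ok"

let result = f 42 // the guard passes, so result is "ok"
```

Calling f with a non-positive number would instead throw the exception from the guard, which is exactly the behaviour a data-flow-verifying test function needs.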

This enables you to compose the first imp example (for PushController) in point-free style:

[<Property(QuietOnSuccess = true)>]
let ``Post returns correct result on validation failure`` req (NonNull msg) =
    let imp = ((=!) (snd req)) >>! Result.fail (ValidationFailure msg)
    use sut = new PushController (imp)
 
    let actual = sut.Post req
 
    test <@ actual
            |> convertsTo<Results.BadRequestErrorMessageResult>
            |> Option.map (fun r -> r.Message)
            |> Option.exists ((=) msg) @>

At first glance, most people would likely consider this less readable than before, and clearly, that's a valid standpoint. On the other hand, once you get used to identifying the >>! operator, this becomes a concise shorthand. A data-flow-verifying imp mock function is composed of an assertion on the left-hand side of >>!, and a return value on the right-hand side.

Most importantly, those hard-to-name arguments are gone.

Still, let's dissect the expression ((=!) (snd req)) >>! Result.fail (ValidationFailure msg).

The expression on the left-hand side of the >>! operator is the assertion. It uses Unquote's must equal =! operator as a function. (In F#, infix operators are functions, and you can use them as functions by surrounding them by brackets.) While you can write an assertion as candidate =! snd req using infix notation, you can also write the same expression as a function call: (=!) (snd req) candidate. Since this is a function, it can be partially applied: (=!) (snd req); the type of that expression is PushRequestDtr -> unit, which matches the required type 'a -> unit that >>! expects from its ``assert`` argument. That explains the left-hand side of the >>! operator.
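The partial-application step may be easier to see with a plain operator. This is a throwaway illustration, unrelated to the test code:

```fsharp
// Wrapping an infix operator in brackets turns it into an ordinary function:
let add = (+)          // int -> int -> int
// ...which, like any function, can be partially applied:
let increment = (+) 1  // int -> int
let three = increment 2
```

The same trick applies to Unquote's =! operator: (=!) (snd req) is a partially applied assertion waiting for its last argument.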

The right-hand side is easier, because that's the return value of the composed function. In this case the value is Result.fail (ValidationFailure msg).

You already know that the type of >>! is ('a -> unit) -> 'b -> 'a -> 'b. Replacing the generic type arguments with the actual types in use, 'a is PushRequestDtr and 'b is Result<'a, BoundaryFailure>, so the type of imp is PushRequestDtr -> Result<'a, BoundaryFailure>. When you set 'a to unit, this fits the required type of PushRequestDtr -> Result<unit, BoundaryFailure>.

This works because in its current incarnation, the imp function for PushController only takes a single value as input. Will this also work for IdealController, which passes both portalId and req to its imp function?

Currying

The imp function for IdealController has the type string -> IDealRequestDtr -> Result<IDealResponseDtr, BoundaryFailure>. Notice that it takes two arguments instead of one. Is it possible to compose an imp function with the >>! operator?

Consider the above example that exercises the success case for IdealController. What if, instead of writing

let imp pid candidate =
    pid =! portalId
    candidate =! req
    Result.succeed resp

you write the following?

let imp = ((=!) req) >>! Result.succeed resp

Unfortunately, that doesn't work, because the type of that function is string * IDealRequestDtr -> Result<IDealResponseDtr, 'a>, and not string -> IDealRequestDtr -> Result<IDealResponseDtr, BoundaryFailure>, as it should be. It's almost there, but the input values are tupled, instead of curried.

You can easily correct that with a standard curry function:

let imp = ((=!) req) >>! Result.succeed resp |> Tuple2.curry

The Tuple2.curry function takes as input a function that has tupled arguments, and turns it into a curried function. Exactly what we need here!
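The article doesn't show the definition of Tuple2.curry; a minimal implementation could look like this (my sketch; the author's version may differ):

```fsharp
module Tuple2 =
    // Turn a function on pairs into a curried two-argument function.
    let curry f x y = f (x, y)
    // The dual operation, included for symmetry.
    let uncurry f (x, y) = f x y
```

With that definition, Tuple2.curry (fun (a, b) -> a + b) 1 2 evaluates to 3.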

The entire test is now:

[<Property(QuietOnSuccess = true)>]
let ``Post returns correct result on success`` req resp =
    let imp = ((=!) req) >>! Result.succeed resp |> Tuple2.curry
    use sut = new IdealController (imp)
 
    let actual = sut.Post req
 
    test <@ actual
            |> convertsTo<Results.OkNegotiatedContentResult<IDealResponseDtr>>
            |> Option.map (fun r -> r.Content)
            |> Option.exists ((=resp) @>

Whether or not you find this more readable than the previous example is, as always, subjective, but I like it because it's a succinct, composable way to address data flow verification. Once you get over the initial shock of partially applying Unquote's =! operator, as well as the cryptic-looking >>! operator, you may begin to realise that the same idiom is repeated throughout. In fact, it's more than an idiom. It's an implementation of a design pattern.

Mocks

When talking about unit testing, I prefer the vocabulary of xUnit Test Patterns, because of its unrivalled consistent terminology. Using Gerard Meszaros' nomenclature, a Test Double with built-in verification of interaction is called a Mock.

Most people (including me) dislike Mocks because they tend to lead to brittle unit tests. They tend to, but sometimes you need them. Mocks are useful when you care about side-effects.

Functional programming emphasises pure functions, which, by definition, are free of side-effects. In pure functional programming, you don't need Mocks.

Since F# is a multi-paradigmatic language, you sometimes have to write code in a more object-oriented style. In the example you've seen here, I've shown you how to unit test that Controllers correctly work as Adapters over (impure) functions. Here, Mocks are useful, even if they have no place in the rest of the code base.

Being able to express a Mock with a couple of minimal functions is, in my opinion, preferable to adding a big dependency to a 'mocking library'.

Concluding remarks

Sometimes, explicit values and arguments are in the way. By their presence, they force you to name them. Often, naming is good, because it compels you to make tacit knowledge explicit. In rare cases, though, the important detail isn't a value, or an argument, but instead an activity. An example of this is when verifying data flow. While the values are obviously present, the focus ought to be on the comparison. Thus, by making the local function arguments implicit, you can direct the reader's attention to the interaction - in this case, Unquote's =! must equal comparison.

In the introduction to this article, I told you that the code you've seen here is a real-life example. This is true.

I submitted my refactoring to point-free style as an internal pull request on the project I'm currently working on. When I did that, I was genuinely in doubt about the readability improvement it would bring, so I asked my reviewers for their opinions. I was genuinely prepared to accept a rejection of the pull request.

My reviewers disagreed internally, ultimately had a vote, and decided to reject the pull request. I don't blame them. We had a civil discussion about the pros and cons, and while they understood the advantages, they felt that the disadvantages weighed heavier.

In their context, I understand why they decided to decline the change, but that doesn't mean that I don't find this an interesting experiment. I expect to use something like this in the future in some contexts, while in other contexts, I'll stick with the more verbose (and harder to name) test-specific functions with explicit arguments.

Still, I like to solve problems using well-known compositions, which is the reason I prefer a composable, idiomatic approach over ad-hoc code.

If you'd like to learn more about unit testing and property-based testing in F# (and C#), you can watch some of my Pluralsight courses.


Decoupling decisions from effects

Monday, 26 September 2016 21:51:00 UTC

Functional programming emphasises pure functions, but sometimes decisions must be made based on impure data. The solution is to decouple decisions and effects.

Functional programmers love pure functions. Not only do they tend to be easy to reason about, they are also intrinsically testable. It'd be wonderful if we could build entire systems only from pure functions, but every functional programmer knows that the world is impure. Instead, we strive towards implementing as much of our code base as pure functions, so that an application is impure only at its boundaries.

The more you can do this, the more testable the system becomes. One rule of thumb about unit testing that I often use is that if a particular candidate for unit testing has a cyclomatic complexity of 1, it may be acceptable to skip unit testing it. Instead, we can consider such a unit a humble unit. If you can separate decisions from effects (which is what functional programmers often call impurities), you can often make the impure functions humble.

In other words: put all logic in pure functions that can be unit tested, and implement impure effects as humble functions that you don't need to unit test.

You want to see an example. So do I!

Example: conditional reading from file

In a recent discussion, Jamie Cansdale asks how I'd design and unit test something like the following C# method if I could instead redesign it in F#.

public static string GetUpperText(string path)
{
    if (!File.Exists(path)) return "DEFAULT";
    var text = File.ReadAllText(path);
    return text.ToUpperInvariant();
}

Notice how this method contains two impure operations: File.Exists and File.ReadAllText. Decision logic seems interleaved with IO. How can decisions be separated from effects?

(For good measure I ought to point out that obviously, the above example is so simple that by itself, it almost doesn't warrant testing. Think of it as a stand-in for a more complex problem.)

With a statement-based language like C#, it can be difficult to see how to separate decision logic from effects without introducing interfaces, but with expression-based languages like F#, it becomes close to trivial. In this article, I'll show you three alternatives.

All three alternatives, however, make use of the same function for turning text into upper case:

// string -> string
let getUpper (text : string) = text.ToUpperInvariant ()

Obviously, this function is so trivial that it's hardly worth testing, but remember to think about it as a stand-in for a more complex problem. It's a pure function, so it's easy to unit test:

[<Theory>]
[<InlineData("foo", "FOO")>]
[<InlineData("bar", "BAR")>]
let ``getUpper returns correct value`` input expected =
    let actual = getUpper input
    expected =! actual

This test uses xUnit.net 2.1.0 and Unquote 3.1.2. The =! operator is a custom Unquote operator; you can read it as must equal; that is: expected must equal actual. It'll throw an exception if this isn't the case.

Custom unions

Languages like F# come with algebraic data types, which means that in addition to complex structures, they also enable you to express types as alternatives. This means that you can represent a decision as one or more alternative pure values.

Although the examples you'll see later in this article are simpler, I think it'll be helpful to start with an ad hoc solution to the problem. Here, the decision is to either read from a file, or return a default value. You can express that using a custom discriminated union:

type Action = ReadFromFile of string | UseDefault of string

This type models two mutually exclusive cases: either you decide to read from the file identified by a file path (string), or you return a default value (also modelled as a string).

Using this Action type, you can write a pure function that makes the decision:

// string -> bool -> Action
let decide path fileExists =
    if fileExists
    then ReadFromFile path
    else UseDefault "DEFAULT"

This function takes two arguments: path (a string) and fileExists (a bool). If fileExists is true, it returns the ReadFromFile case; otherwise, it returns the UseDefault case.

Notice how this function neither checks whether the file exists, nor does it attempt to read the contents of the file. It only makes a decision based on input, and returns information about this decision as output. This function is pure, so (as I've claimed numerous times) is easy to unit test:

[<Theory>]
[<InlineData("ploeh.txt")>]
[<InlineData("fnaah.txt")>]
let ``decide returns correct result when file exists`` path =
    let actual = decide path true
    ReadFromFile path =! actual
 
[<Theory>]
[<InlineData("ploeh.txt")>]
[<InlineData("fnaah.txt")>]
let ``decide returns correct result when file doesn't exist`` path =
    let actual = decide path false
    UseDefault "DEFAULT" =! actual

One unit test function exercises the code path where the file exists, whereas the other test exercises the code path where it doesn't. Straightforward.

There's still some remaining work, because you need to somehow compose your pure functions with File.Exists and File.ReadAllText. You also need a way to extract the string value from the two cases of Action. One way to do that is to introduce another pure function:

// (string -> string) -> Action -> string
let getValue f = function
    | ReadFromFile path -> f path
    | UseDefault value  -> value

This is a function that returns the UseDefault data for that case, but invokes a function f in the ReadFromFile case. Again, since this function is pure it's easy to unit test it, but I'll leave that as an exercise.

You now have all the building blocks required to compose a function similar to the above GetUpperText C# method:

// string -> string
let getUpperText path =
    path
    |> File.Exists
    |> decide path
    |> getValue (File.ReadAllText >> getUpper)

This implementation pipes path into File.Exists, which returns a Boolean value indicating whether the file exists. This Boolean value is then piped into decide path, which (as you may recall) returns an Action. That value is finally piped into getValue (File.ReadAllText >> getUpper). Recall that getValue will only invoke the function argument when the Action is ReadFromFile, so File.ReadAllText >> getUpper is only executed in this case.
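As an aside, if the >> function-composition operator is unfamiliar: it chains two functions left to right. Here's a throwaway illustration, unrelated to the domain code:

```fsharp
// Two small, self-contained functions...
let trim (s : string) = s.Trim ()
let upper (s : string) = s.ToUpperInvariant ()

// ...composed with >>: first trim, then upper-case.
let shout = trim >> upper
let r = shout "  hello " // "HELLO"
```

File.ReadAllText >> getUpper is the same shape: read the file, then pass the resulting text to getUpper.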

Notice how decisions and effectful functions are interleaved. All the decision functions are covered by unit tests; only File.Exists and File.ReadAllText aren't covered, but I find it reasonable to treat these as humble functions.

Either

Decisions often involve a choice between two alternatives. In the above example, you saw how the alternatives were named ReadFromFile and UseDefault. Since a choice between two alternatives is so common, there's a well-known 'pattern' that gives you general-purpose tools to model decisions. This is known as the Either monad.

The F# core library doesn't (yet) come with an implementation of the Either monad, but it's easy to add. In this example, I'm using code from Scott Wlaschin's railway-oriented programming, although slightly modified, and including only the most essential building blocks for the example:

type Result<'TSuccess, 'TFailure> =
    | Success of 'TSuccess
    | Failure of 'TFailure

module Result =
    // ('a -> Result<'b, 'c>) -> Result<'a, 'c> -> Result<'b, 'c>
    let bind f = function
        | Success succ -> f succ
        | Failure fail -> Failure fail
 
    // ('a -> 'b) -> Result<'a, 'c> -> Result<'b, 'c>
    let map f = function
        | Success succ -> Success (f succ)
        | Failure fail -> Failure fail
 
    // ('a -> bool) -> 'a -> Result<'a, 'a>
    let split f x = if f x then Success x else Failure x
 
    // ('a -> 'b) -> ('c -> 'b) -> Result<'a, 'c> -> 'b
    let either f g = function
        | Success succ -> f succ
        | Failure fail -> g fail

In fact, the bind and map functions aren't even required for this particular example, but I included them anyway, because otherwise, readers already familiar with the Either monad would wonder why they weren't there.

All these functions are generic and pure, so they are easy to unit test. I'm not going to show you the unit tests, however, as I consider the functions belonging to that Result module as reusable functions. This is a module that would ship as part of a well-tested library. In fact, it'll soon be added to the F# core library.

With the already tested getUpper function, you now have all the building blocks required to implement the desired functionality:

// string -> string
let getUpperText path =
    path
    |> Result.split File.Exists
    |> Result.either (File.ReadAllText >> getUpper) (fun _ -> "DEFAULT")

This composition pipes path into Result.split, which uses File.Exists as a predicate to decide whether the path should be packaged into a Success or Failure case. The resulting Result<string, string> is then piped into Result.either, which invokes File.ReadAllText >> getUpper in the Success case, and the anonymous function in the Failure case.

Notice how, once again, the impure functions File.Exists and File.ReadAllText are used as humble functions, but interleaved with testable, pure functions that make all the decisions.

Maybe

Sometimes, a decision isn't so much between two alternatives as it's a decision between something that may exist, but also may not. You can model this with the Maybe monad, which in F# comes in the form of the built-in option type.

In fact, so much is already built in (and tested by the F# development team) that you almost don't need to add anything yourself. The only function you could consider adding is this:

module Option =
    // 'a -> 'a option -> 'a
    let defaultIfNone def x = defaultArg x def

As you can see, this function simply swaps the arguments for the built-in defaultArg function. This is done to make it more pipe-friendly. This function will most likely be added in a future version of F#.
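As a quick illustration of how the pipe-friendly argument order plays out:

```fsharp
// defaultArg with swapped arguments, repeated so this snippet stands alone.
let defaultIfNone def x = defaultArg x def

let a = Some "foo" |> defaultIfNone "bar" // "foo"
let b = None       |> defaultIfNone "bar" // "bar"
```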

That's all you need:

// string -> string
let getUpperText path =
    path
    |> Some
    |> Option.filter File.Exists
    |> Option.map (File.ReadAllText >> getUpper)
    |> Option.defaultIfNone "DEFAULT"

This composition starts with the path, puts it into a Some case, and pipes that option value into Option.filter File.Exists. This means that the Some case will only stay a Some value if the file exists; otherwise, it will be converted to a None value. Whatever the option value is, it's then piped into Option.map (File.ReadAllText >> getUpper). The composed function File.ReadAllText >> getUpper is only executed in the Some case, so if the file doesn't exist, the function will not attempt to read it. Finally, the option value is piped into Option.defaultIfNone, which returns the mapped value, or "DEFAULT" if the value was None.

Like in the two previous examples, the decision logic is implemented by pure functions, whereas the impure functions File.Exists and File.ReadAllText are handled as humble functions.

Summary

Have you noticed a pattern in all the three examples? Decisions are separated from effects using discriminated unions (both the above Action, Result<'TSuccess, 'TFailure>, and the built-in option are discriminated unions). In my experience, as long as you need to decide between two alternatives, the Either or Maybe monads are often sufficient to decouple decision logic from effects. Often, I don't even need to write any tests, because I compose my functions from the known, well-tested functions that belong to the respective monads.

If your decision has to branch between three or more alternatives, you can consider a custom discriminated union. For this particular example, though, I think I prefer the third, Maybe-based composition, but closely followed by the Either-based composition.

In this article, you saw three examples of how to decouple decision from effects; and I didn't even show you the Free monad!


Comments

Mark,

I can't understand how the getValue function can be pure. While I agree that it's easy to test, it's still a higher-order function, and its purity depends on the purity of the function passed as the argument. Even in your example it takes File.ReadAllText >> getUpper, which actually reaches out to a file on the disk, and I perceive that as reaching out to external shared state. Is there something I misunderstood?

2016-10-14 09:06 UTC

Grzegorz, thank you for writing. You make a good point, and in a sense you're correct. F# doesn't enforce purity, and this is both an advantage and a disadvantage. It's an advantage because it makes it easier for programmers migrating from C# to make a gradual transition to a more functional programming style. It's a disadvantage exactly because it relies on the programmer's often-faulty reasoning to ensure that code is properly functional.

Functions in F# are only pure if they're implemented to be pure. For any given function type (signature) you can always create an impure function that fits the type. (If nothing else, you can always write "Hello, world!" to the console, before returning a value.)

The result of this is that few parts of F# are pure in the sense that you imply. Even List.map could be impure, if passed an impure function. In other words, higher-order functions in F# are only pure if composed of exclusively pure parts.

Clearly, this is in stark contrast to Haskell, where purity is enforced at the type level. In Haskell, a throw-away, poorly designed mini-API like the Action type and associated functions shown here wouldn't even compile. The Either and Maybe examples, on the other hand, would.

My assumption here is that function composition happens at the edge of the application - that is, in an impure (IO) context.

2016-10-15 09:02 UTC

Untyped F# HTTP route defaults for ASP.NET Web API

Tuesday, 09 August 2016 04:24:00 UTC

In ASP.NET Web API, route defaults can be provided by a dictionary in F#.

When you define a route in ASP.NET Web API 2, you most likely use the MapHttpRoute overload where you have to supply default values for the route template:

public static IHttpRoute MapHttpRoute(
    this HttpRouteCollection routes,
    string name,
    string routeTemplate,
    object defaults)

The defaults argument has the type object, but while the compiler will allow you to put any value here, the implicit intent is that in C#, you should pass an anonymous object with the route defaults. A standard route looks like this:

configuration.Routes.MapHttpRoute(
    "DefaultAPI",
    "{controller}/{id}",
    new { Controller = "Home", Id = RouteParameter.Optional });

Notice how the names of the properties (Controller and Id) (case-insensitively) match the place-holders in the route template ({controller} and {id}).

While it's not clear from the type of the argument that this is what you're supposed to do, once you've learned it, it's easy enough to do, and rarely causes problems in C#.

Flexibility

You can debate the soundness of this API design, but as far as I can tell, it attempts to strike a balance between flexibility and syntax easy on the eyes. It does, for example, enable you to define a list of routes like this:

configuration.Routes.MapHttpRoute(
    "AvailabilityYear",
    "availability/{year}",
    new { Controller = "Availability" });
configuration.Routes.MapHttpRoute(
    "AvailabilityMonth",
    "availability/{year}/{month}",
    new { Controller = "Availability" });
configuration.Routes.MapHttpRoute(
    "AvailabilityDay",
    "availability/{year}/{month}/{day}",
    new { Controller = "Availability" });
configuration.Routes.MapHttpRoute(
    "DefaultAPI",
    "{controller}/{id}",
    new { Controller = "Home", Id = RouteParameter.Optional });

In this example, there are three alternative routes to an availability resource, keyed on either an entire year, a month, or a single date. Since the route templates (e.g. availability/{year}/{month}) don't specify an id place-holder, there's no reason to provide a default value for it. On the other hand, it would have been possible to define defaults for the custom place-holders year, month, or day, if you had so desired. In this example, however, there are no defaults for these place-holders, so if values aren't provided, none of the availability routes are matched, and the request falls through to the DefaultAPI route.

Since you can supply an anonymous object in C#, you can give it any property you'd like, and the code will still compile. There's no type safety involved, but using an anonymous object enables you to use a compact syntax.

Route defaults in F#

The API design of the MapHttpRoute method seems forged with C# in mind. I don't know how it works in Visual Basic .NET, but in F# there are no anonymous objects. How do you supply route defaults, then?

As I described in my article on creating an F# Web API project, you can define a record type:

type HttpRouteDefaults = { Controller : string; Id : obj }

You can use it like this:

GlobalConfiguration.Configuration.Routes.MapHttpRoute(
    "DefaultAPI",
    "{controller}/{id}",
    { Controller = "Home"; Id = RouteParameter.Optional }) |> ignore

That works fine for DefaultAPI, but it's hardly flexible. You must supply both a Controller and an Id value. If you need to define routes like the availability routes above, you can't use this HttpRouteDefaults type, because you can't omit the Id value.

While defining another record type is only a one-liner, you're confronted with the problem of naming these types.

In C#, the use of anonymous objects is, despite appearances, an untyped approach. Could something similar be possible with F#?

It turns out that the MapHttpRoute also works if you pass it an IDictionary<string, object>, which is possible in F#:

config.Routes.MapHttpRoute(
    "DefaultAPI",
    "{controller}/{id}",
    dict [
        ("Controller", box "Home")
        ("Id", box RouteParameter.Optional)]) |> ignore

While this looks more verbose than the previous alternative, it's more flexible. It's also stringly typed, which normally isn't an endorsement, but in this case is honest, because it's as strongly typed as the MapHttpRoute method. Explicit is better than implicit.

The complete route configuration corresponding to the above example would look like this:

config.Routes.MapHttpRoute(
    "AvailabilityYear",
    "availability/{year}",
    dict [("Controller", box "Availability")]) |> ignore
config.Routes.MapHttpRoute(
    "AvailabilityMonth",
    "availability/{year}/{month}",
    dict [("Controller", box "Availability")]) |> ignore
config.Routes.MapHttpRoute(
    "AvailabilityDay",
    "availability/{year}/{month}/{day}",
    dict [("Controller", box "Availability")]) |> ignore
config.Routes.MapHttpRoute(
    "DefaultAPI",
    "{controller}/{id}",
    dict [
        ("Controller", box "Home")
        ("Id", box RouteParameter.Optional)]) |> ignore

If you're interested in learning more about developing ASP.NET Web API services in F#, watch my Pluralsight course A Functional Architecture with F#.


Conditional composition of functions

Monday, 04 July 2016 06:53:00 UTC

A major benefit of Functional Programming is the separation of decisions and (side-)effects. Sometimes, however, one decision ought to trigger an impure operation, and then proceed to make more decisions. Using functional composition, you can succinctly conditionally compose functions.

In my article on how Functional Architecture falls into the Ports and Adapters pit of success, I describe how Haskell forces you to separate concerns:

  • Your Domain Model should be pure, with business decisions implemented by pure functions. Not only does it make it easier for you to reason about the business logic, it also has the side-benefit that pure functions are intrinsically testable.
  • Side-effects, and other impure operations (such as database queries) can be isolated and implemented as humble functions.
A common pattern that you can often use is:
  1. Read some data using an impure query.
  2. Pass that data to a pure function.
  3. Use the return value from the pure function to perform various side-effects. You could, for example, write data to a database, send an email, or update a user interface.
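In Haskell, the pattern can be sketched like this; the names decide and program, and the hard-coded capacity of 10, are hypothetical stand-ins for illustration, not code from the restaurant example:

```haskell
-- A minimal sketch of the impure-pure-impure pattern. The names and
-- the capacity of 10 are hypothetical stand-ins for illustration only.

-- 2. Pure decision: does the requested quantity fit within the
--    remaining capacity, given the seats already reserved?
decide :: Int -> Int -> Either String Int
decide reserved requested =
  if reserved + requested > 10
  then Left "Capacity exceeded."
  else Right requested

-- 1. Impure query (here simulated), 2. pure function,
-- 3. side-effect based on the return value (here a print).
program :: IO ()
program = do
  reserved <- pure 8
  case decide reserved 4 of
    Left err -> putStrLn err
    Right q  -> putStrLn ("Saving reservation for " ++ show q)
```

The decision itself stays a pure, testable function; only the reading and writing at either end are impure.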
Sometimes, however, things are more complex. Based on the answer from one pure function, you may need to query an additional data source to gather extra data, call a second pure function, and so on. In this article, you'll see one way to accomplish this.

Caravans for extra space

Based on my previous restaurant-booking example, Martin Rykfors suggests "a new feature request. The restaurant has struck a deal with the local caravan dealership, allowing them to rent a caravan to park outside the restaurant in order to increase the seating capacity for one evening. Of course, sometimes there are no caravans available, so we'll need to query the caravan database to see if there is a big enough caravan available that evening:"

findCaravan :: ServiceAddress -> Int -> ZonedTime -> IO (Maybe Caravan)

The above findCaravan is a slight modification of the function Martin suggests, because I imagine that the caravan dealership exposes their caravan booking system as a web service, so the function needs a service address as well. This change doesn't impact the proposed solution, though.

This problem definition fits the above general problem statement: you'd only want to call the findCaravan function if checkCapacity returns Left CapacityExceeded.

That's still a business decision, so you ought to implement it as a pure function. If you (for a moment) imagine that you have a Maybe Caravan instead of an IO (Maybe Caravan), you have all the information required to make that decision:

checkCaravanCapacityOnError :: Error
                            -> Maybe Caravan
                            -> Reservation
                            -> Either Error Reservation
checkCaravanCapacityOnError CapacityExceeded (Just caravan) reservation =
  if caravanCapacity caravan < quantity reservation
  then Left CapacityExceeded
  else Right reservation
checkCaravanCapacityOnError err _ _ = Left err

Notice that this function not only takes Maybe Caravan, it also takes an Error value. This encodes into the function's type that you should only call it if you have an Error that originates from a previous step. This Error value also enables the function to only check the caravan's capacity if the previous Error was a CapacityExceeded. Error can also be ValidationError, in which case there's no reason to check the caravan's capacity.

This takes care of the Domain Model, but you still need to figure out how to get a Maybe Caravan value. Additionally, if checkCaravanCapacityOnError returns Right Reservation, you'd probably want to reserve the caravan for the evening. You can imagine that this is possible with the following function:

reserveCaravan :: ServiceAddress -> ZonedTime -> Caravan -> IO ()

This function reserves the caravan at the supplied time. In order to keep the example simple, you can imagine that the provided ZonedTime indicates an entire day (or evening), and not just an instant.

Composition of caravan-checking

As a first step, you can compose an impure function that

  1. Queries the caravan dealership for a caravan
  2. Calls the pure checkCaravanCapacityOnError function
  3. Reserves the caravan if the return value was Right Reservation
Notice how these steps follow the impure-pure-impure pattern I described above. You can compose such a function like this:

import Control.Monad (forM_)
import Control.Monad.Trans (liftIO)
import Control.Monad.Trans.Either (EitherT(..), hoistEither)

checkCaravan :: Reservation -> Error -> EitherT Error IO Reservation
checkCaravan reservation err = do
  c <- liftIO $ findCaravan svcAddr (quantity reservation) (date reservation)
  newRes <- hoistEither $ checkCaravanCapacityOnError err c reservation
  liftIO $ forM_ c $ reserveCaravan svcAddr (date newRes)
  return newRes

It starts by calling findCaravan by closing over svcAddr (a ServiceAddress value). This is an impure operation, but you can use liftIO to make c a Maybe Caravan value that can be passed to checkCaravanCapacityOnError on the next line. This function returns Either Error Reservation, but since this function is defined in an EitherT Error IO Reservation context, newRes is a Reservation value. Still, it's important to realise that exactly because of this context, execution will short-circuit at that point if the return value from checkCaravanCapacityOnError is a Left value. In other words, all subsequent expressions are only evaluated if checkCaravanCapacityOnError returns Right. This means that the reserveCaravan function is only called if a caravan with enough capacity is available.

The checkCaravan function will unconditionally execute if called, so as the final composition step, you'll need to figure out how to compose it into the overall postReservation function in such a way that it's only called if checkCapacity returns Left.

Conditional composition

In the previous incarnation of this example, the overall entry point for the HTTP request in question was this postReservation function:

postReservation :: ReservationRendition -> IO (HttpResult ())
postReservation candidate = fmap toHttpResult $ runEitherT $ do
  r <- hoistEither $ validateReservation candidate
  i <- liftIO $ getReservedSeatsFromDB connStr $ date r
  hoistEither $ checkCapacity 10 i r
  >>= liftIO . saveReservation connStr

Is it possible to compose checkCaravan into this function in such a way that it's only going to be executed if checkCapacity returns Left? Yes, by adding to the hoistEither $ checkCapacity 10 i r pipeline:

import Control.Monad.Trans (liftIO)
import Control.Monad.Trans.Either (EitherT(..), hoistEither, right, eitherT)

postReservation :: ReservationRendition -> IO (HttpResult ())
postReservation candidate = fmap toHttpResult $ runEitherT $ do
  r <- hoistEither $ validateReservation candidate
  i <- liftIO $ getReservedSeatsFromDB connStr $ date r
  eitherT (checkCaravan r) right $ hoistEither $ checkCapacity 10 i r
  >>= liftIO . saveReservation connStr

Contrary to F#, you have to read Haskell pipelines from right to left. In the second-to-last line of code, you can see that I've added eitherT (checkCaravan r) right to the left of hoistEither $ checkCapacity 10 i r, which was already there. This means that, instead of binding the result of hoistEither $ checkCapacity 10 i r directly to the saveReservation composition (via the monadic >>= bind operator), that result is first passed to eitherT (checkCaravan r) right.

The eitherT function composes two other functions: the leftmost function is invoked if the input is Left, and the rightmost function is invoked if the input is Right. In this particular example, (checkCaravan r) is the closure being invoked in the Left case, and only in the Left case. In the Right case, the value is passed on unmodified, but elevated back into the EitherT context using the right function.
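eitherT (from the either package) has the type Monad m => (a -> m c) -> (b -> m c) -> EitherT a m b -> m c. The Prelude's plain either function works the same way for ordinary Either values, which may make the shape easier to see; the following is only an analogy, not code from the example:

```haskell
-- Prelude's either mirrors eitherT for plain Either values:
-- the first function handles Left, the second handles Right.
describe :: Either String Int -> String
describe = either ("failed: " ++) (\n -> "saved " ++ show n)
```

Given Left "no capacity", only the first function runs; given Right 4, only the second.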

(BTW, the above composition has a subtle bug: the capacity is still hard-coded as 10, even though reserving extra caravans actually increases the overall capacity of the restaurant for the day. I'll leave it as an exercise for you to make the capacity take into account any reserved caravans. You can download all of the source code, if you want to give it a try.)

Interleaving

Haskell has strong support for composition of functions. Not only can you interleave pure and impure code, but you can also do it conditionally. In the above example, the eitherT function holds the key to that. The overall flow of the postReservation function is:

  1. Validate the input
  2. Get already reserved seats from the database
  3. Check the reservation request against the restaurant's remaining capacity
  4. If the capacity is exceeded, attempt to reserve a caravan of sufficient capacity
  5. If one of the previous steps decided that the restaurant has enough capacity, then save the reservation in the database
  6. Convert the result (whether it's Left or Right) to an HTTP response
Programmers who are used to implementing such solutions using C#, Java, or similar languages, may feel hesitant about delegating a branching decision to a piece of composition code, instead of something they can unit test.

Haskell's type system is remarkably helpful here. Haskell programmers often joke that if it compiles, it works, and there's a morsel of truth in that sentiment. Both functions used with eitherT must return a value of the same type, but the left function must be a function that takes the Left type as input, whereas the right function must be a function that takes the Right type as input. In the above example, (checkCaravan r) is a partially applied function with the type Error -> EitherT Error IO Reservation; that is: the input type is Error, so it can only be composed with an Either Error a value. That matches the return type of checkCapacity 10 i r, so the code compiles, but if I accidentally switch the arguments to eitherT, it doesn't compile.

I find it captivating to figure out how to 'click' together such interleaving functions using the composition functions that Haskell provides. Often, when the composition compiles, it works as intended.


Comments

This is something of a tangent, but I wanted to hint to you that Haskell can help reduce the boilerplate it takes to compose monadic computations like this.

The MonadError class abstracts monads which support throwing and catching errors. If you don't specify the concrete monad (transformer stack) a computation lives in, it's easier to compose it into a larger environment.

checkCaravanCapacityOnError :: MonadError Error m
                            => Error
                            -> Maybe Caravan
                            -> Reservation
                            -> m Reservation
checkCaravanCapacityOnError CapacityExceeded (Just caravan) reservation
  | caravanCapacity caravan < quantity reservation = throwError CapacityExceeded
  | otherwise = return reservation
checkCaravanCapacityOnError err _ _ = throwError err

I'm programming to the interface, not the implementation, by using throwError and return instead of Left and Right. This allows me to dispense with calls to hoistEither when I come to call my function in the context of a bigger monad:

findCaravan :: MonadIO m => ServiceAddress -> Int -> ZonedTime -> m (Maybe Caravan)
reserveCaravan :: MonadIO m => ServiceAddress -> ZonedTime -> Caravan -> m ()

checkCaravan :: (MonadIO m, MonadError Error m) => Reservation -> Error -> m Reservation
checkCaravan reservation err = do
  c <- findCaravan svcAddr (quantity reservation) (date reservation)
  newRes <- checkCaravanCapacityOnError err c reservation
  traverse_ (reserveCaravan svcAddr (date newRes)) c
  return newRes

Note how findCaravan and reserveCaravan only declare a MonadIO constraint, whereas checkCaravan needs to do both IO and error handling. The type class system lets you declare the capabilities you need from your monad without specifying the monad in question. The elaborator figures out the right number of calls to lift when it builds the MonadError dictionary, which is determined by the concrete type you choose for m at the edge of your system.

A logical next step here would be to further constrain the effects that a given IO function can perform. In this example, I'd consider writing a separate class for monads which support calling the caravan service: findCaravan :: MonadCaravanService m => ServiceAddress -> Int -> ZonedTime -> m (Maybe Caravan). This ensures that findCaravan can only call the caravan service, and not perform any other IO. It also makes it easier to mock functions which call the caravan service by writing a fake instance of MonadCaravanService.

F# doesn't support this style of programming because it lacks higher-kinded types. You can't abstract over m; you have to pick a concrete monad up-front. This is bad for code reuse: if you need to add (for example) runtime configuration to your application you have to rewrite the implementation of your monad, and potentially every function which uses it, rather than just tacking on a MonadReader constraint to the functions that need it and adding a call to runReaderT at the entry point to your application.

Finally, monad transformers are but one style of effect typing; extensible-effects is an alternative which is gaining popularity.

2016-07-04 12:05 UTC

Hi Mark,

Thank you very much for the elaborate explanation. I'm also delighted that you stuck with my admittedly contrived example of using caravans to compensate for the restaurant's lack of space. Or is that nothing compared to some of the stranger real-life feature requests some of us have seen?

I agree with your point on my previous comment that my suggestion could be considered a leaky abstraction and would introduce unnecessary requirements to the implementation. It just feels strange to let go of the idea that the domain logic is to be unconditionally pure, and not just mostly pure with the occasional impure function passed in as an argument. It's like what you say towards the end of the post - I feel hesitant to mix branching code and composition code together. The resulting solution you propose here is giving me second thoughts though. The postReservation function didn't become much more complex as I'd feared, with the branching logic nicely delegated to the eitherT function. The caravan logic also gets its own composition function that is easy enough to understand on its own. I guess I've got some thinking to do.

So, a final question regarding this example: To what extent would you apply this technique when solving the same problem in F#? It seems that we are using an increasing amount of Haskell language features not present in F#, so maybe not everything would translate over cleanly.

2016-07-04 19:05 UTC

Martin, I'm still experimenting with how that influences my F# code. I'd like to at least attempt to back-port something like this to F# using computation expressions, but it may turn out that it's not worth the effort.

2016-07-04 20:46 UTC

Roman numerals via property-based TDD

Tuesday, 28 June 2016 07:28:00 UTC

An example of doing the Roman numerals kata with property-based test-driven development.

The Roman numerals kata is a simple programming exercise. You should implement conversion to and from Roman numerals. This always struck me as the ultimate case for example-driven development, but I must also admit that I've managed to get stuck on the exercise doing exactly that: throwing more and more examples at the problem. Prompted by previous successes with property-based testing, I wondered whether the problem would be more tractable if approached with property-based testing. This turned out to be the case.

Single values

When modelling a problem with property-based testing, you should attempt to express it in terms of general rules, but even so, the fundamental rule of Roman numerals is that there are certain singular symbols that have particular numeric values. There are no overall principles guiding these relationships; they simply are. Therefore, you'll still need to express these singular values as particular values. This is best done with a simple parametrised test, here using xUnit.net 2.1:

[<Theory>]
[<InlineData("I",    1)>]
[<InlineData("V",    5)>]
[<InlineData("X",   10)>]
[<InlineData("L",   50)>]
[<InlineData("C",  100)>]
[<InlineData("D",  500)>]
[<InlineData("M", 1000)>]
let ``elemental symbols have correct values`` (symbol : string) expected =
    Some expected =! Numeral.tryParseRoman symbol

The =! operator is a custom operator defined by Unquote (3.1.1), an assertion library. You can read it as must equal; that is, you can read this particular assertion as some expected must equal tryParseRoman symbol.

As you can see, this simple test expresses the relationship between the singular Roman numeral values and their decimal counterparts. You might still consider this automated test as example-driven, but I more think about it as establishing the ground rules for how Roman numerals work. If you look at the Wikipedia article, for instance, it also starts explaining the system by listing the values of these seven symbols.

Round-tripping

A common technique when applying property-based testing to parsing problems is to require that values can round-trip. This should also be the case here:

let romanRange = Gen.elements [1..3999] |> Arb.fromGen
 
[<Property(QuietOnSuccess = true)>]
let ``tryParse is the inverse of toRoman`` () =
    Prop.forAll romanRange <| fun i ->
        test <@ Some i = (i |> Numeral.toRoman
                            |> Option.bind Numeral.tryParseRoman) @>

This property uses FsCheck (2.2.4). First, it expresses a range of relevant integers. For various reasons (that we'll return to) we're only going to attempt conversion of the integers between 1 and 3,999. The value romanRange has the type Arbitrary<int>, where Arbitrary<'a> is a type defined by FsCheck. You can think of it as a generator of random values. In this case, romanRange generates random integers between 1 and 3,999.

When used with Prop.forAll, the property states that for all values drawn from romanRange, the anonymous function should succeed. The i argument within that anonymous function is populated by romanRange, and the function is executed 100 times (by default).

The test function is another Unquote function. It evaluates and reports on the quoted boolean expression; if it evaluates to true, nothing happens, but it throws an exception if it evaluates to false.

The particular expression states that if you call toRoman with i, and then call tryParseRoman with the return value of that function call, the result should be equal to i. Both sides should be wrapped in a Some case, though, since both toRoman and tryParseRoman might also return None. For the values in romanRange, however, you'd expect that the round-trip always succeeds.

Additivity

The fundamental idea about Roman numerals is that they are additive: I means 1, II means (1 + 1 =) 2, XXX means (10 + 10 + 10 =) 30, and MLXVI means (1000 + 50 + 10 + 5 + 1 =) 1066. You simply count and add. Yes, there are special rules for subtractive shorthand, but forget about those for a moment. If you have a Roman numeral with symbols in strictly descending order, you can simply add the symbol values together.
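The additive rule alone is straightforward to sketch in Haskell; symbolValue and additiveValue are hypothetical helpers for illustration, not part of the kata's solution:

```haskell
-- Decimal value of each elemental symbol; 0 for anything else.
symbolValue :: Char -> Int
symbolValue 'I' = 1
symbolValue 'V' = 5
symbolValue 'X' = 10
symbolValue 'L' = 50
symbolValue 'C' = 100
symbolValue 'D' = 500
symbolValue 'M' = 1000
symbolValue _   = 0

-- For symbols in strictly descending order, the numeral's value is
-- simply the sum of its symbols' values.
additiveValue :: String -> Int
additiveValue = sum . map symbolValue
```

For example, additiveValue applied to "MLXVI" adds 1000 + 50 + 10 + 5 + 1.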

You can express this with FsCheck. It looks a little daunting, but actually isn't that bad. I'll show it first, and then walk you through it:

[<Property(QuietOnSuccess = true)>]
let ``symbols in descending order are additive`` () =
    let stringRepeat char count = String (char, count)
    let genSymbols count symbol =
        [0..count] |> List.map (stringRepeat symbol) |> Gen.elements
    let thousands =    genSymbols 3 'M'
    let fiveHundreds = genSymbols 1 'D'
    let hundreds =     genSymbols 3 'C'
    let fifties =      genSymbols 1 'L'
    let tens =         genSymbols 3 'X'
    let fives =        genSymbols 1 'V'
    let ones =         genSymbols 3 'I'
    let symbols = 
        [thousands; fiveHundreds; hundreds; fifties; tens; fives; ones]
        |> Gen.sequence
        |> Gen.map String.Concat
        |> Arb.fromGen
    Prop.forAll symbols <| fun s ->
 
        let actual = Numeral.tryParseRoman s
        
        let expected =
            s
            |> Seq.map (string >> Numeral.tryParseRoman)
            |> Seq.choose id
            |> Seq.sum
            |> Some            
        expected =! actual

The first two lines are two utility functions. The function stringRepeat has the type char -> int -> string. It simply provides a curried form of the String constructor overload that enables you to repeat a particular char value. As an example, stringRepeat 'I' 0 is "", stringRepeat 'X' 2 is "XX", and so on.

The function genSymbols has the type int -> char -> Gen<string>. It returns a generator that produces a repeated string no longer than the specified length. Thus, genSymbols 3 'M' is a generator that draws random values from the set [""; "M"; "MM"; "MMM"], genSymbols 1 'D' is a generator that draws random values from the set [""; "D"], and so on. Notice that the empty string is one of the values that the generator may use. This is by design.

Using genSymbols, you can define generators for all the symbols: up to three thousands, up to one five hundreds, up to three hundreds, etc. thousands, fiveHundreds, hundreds, and so on, are all values of the type Gen<string>.

You can combine all these string generators into a single generator using Gen.sequence, which takes a seq<Gen<'a>> as input and returns a Gen<'a list>. In this case, the input is [thousands; fiveHundreds; hundreds; fifties; tens; fives; ones], which has the type Gen<string> list, so the output is a Gen<string list> value. Values generated could include ["MM"; ""; ""; ""; "X"; ""; ""], [""; "D"; "CC"; "L"; "X"; "V"; ""], and so on.

Instead of lists of strings, you need single string values. These are easy to create using the built-in method String.Concat. You have to do it within a Gen value, though, so it's Gen.map String.Concat. Finally, you can convert the resulting Gen<string> to an Arbitrary using Arb.fromGen. The final symbols value has the type Arbitrary<string>. It'll generate values such as "MMX" and "DCCLXV".

This is a bit of work to ensure that proper Roman numerals are generated, but the rest of the property is tractable. You can use FsCheck's Prop.forAll to express the property that when tryParseRoman is called with any of the generated numerals, the return value is equal to the expected value.

The expected value is the sum of the value of each of the symbols in the input string, s. The string type implements the interface char seq, so you can map each of the characters by invoking tryParseRoman. That gives you a seq<int option>, because tryParseRoman returns int option values. You can use Seq.choose id to throw away any None values there may be, and then Seq.sum to calculate the sum of the integers. Finally, you can pipe the sum into the Some case constructor to turn expected into an int option, which matches the type of actual.

Now that you have expected and actual values, you can assert that these two values must be equal to each other. This property states that for all strictly descending numerals, the return value must be the sum of the constituent symbols.

Subtractiveness

The principal rule for Roman numerals is that of additivity, but if you only apply the above rules, you'd allow numerals such as IIII for 4, LXXXX for 90, etc. While there's historical precedent for such notation, it's not allowed in 'modern' Roman numerals. If there are more than three repeated characters, you should instead prefer subtractive notation: IV for 4, XC for 90, and so on.

Subtractive notation is, however, only allowed within adjacent groups. Logically, you could write 1999 as MIM, but this isn't allowed. The symbol I can only be used to subtract from V and X, X can only subtract from L and C, and C can only subtract from D and M.
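The six legal subtractive pairs can be summarised as a small lookup table; this sketch merely restates the rules above and isn't part of the kata's solution:

```haskell
-- The only legal subtractive pairs: I subtracts from V and X,
-- X from L and C, and C from D and M.
subtractivePairs :: [(String, Int)]
subtractivePairs =
  [ ("IV",   4), ("IX",   9)
  , ("XL",  40), ("XC",  90)
  , ("CD", 400), ("CM", 900) ]
```

A pair such as "IL" doesn't appear in the table, because I may not subtract from L.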

Within these constraints, you have to describe the property of subtractive notation. Here's one way to do it:

type OptionBuilder() =
    member this.Bind(v, f) = Option.bind f v
    member this.Return v = Some v
 
let opt = OptionBuilder()
 
[<Property(QuietOnSuccess = true)>]
let ``certain symbols in ascending order are subtractive`` () =
    let subtractive (subtrahend : char) (minuends : string) = gen {
        let! minuend = Gen.elements minuends
        return subtrahend, minuend }
    let symbols = 
        Gen.oneof [
            subtractive 'I' "VX"
            subtractive 'X' "LC"
            subtractive 'C' "DM" ]
        |> Arb.fromGen
    Prop.forAll symbols <| fun (subtrahend, minuend) ->
        let originalRoman = String [| subtrahend; minuend |]
 
        let actual = Numeral.tryParseRoman originalRoman
        let roundTrippedRoman = actual |> Option.bind Numeral.toRoman
 
        let expected = opt {
            let! m = Numeral.tryParseRoman (string minuend)
            let! s = Numeral.tryParseRoman (string subtrahend)
            return m - s }
        expected =! actual
        Some originalRoman =! roundTrippedRoman

Like the previous property, this looks like a mouthful, but isn't too bad. I'll walk you through it.

Initially, ignore the OptionBuilder type and the opt value; we'll return to them shortly. The property itself starts by defining a function called subtractive, which has the type char -> string -> Gen<char * char>. The first argument is a symbol representing the subtrahend; that is: the number being subtracted. The next argument is a sequence of minuends; that is: numbers from which the subtrahend will be subtracted.

The subtractive function is implemented with a gen computation expression. It first uses a let! binding to define that a singular minuend is a random value drawn from minuends. As usual, Gen.elements is the workhorse: it defines a generator that randomly draws from a sequence of values, and because minuends is a string, and the string type implements char seq, it can be used with Gen.elements to define a generator of char values. While Gen.elements minuends returns a Gen<char> value, the use of a let! binding within a computation expression causes minuend to have the type char.

The second line of code in subtractive returns a tuple of two char values: the subtrahend first, and the minuend second. Normally, when subtracting with the arithmetic minus operator, you'd write a difference as minuend - subtrahend; the minuend comes first, followed by the subtrahend. The Roman subtractive notation, however, is written with the subtrahend before the minuend, which is the reason that the subtractive function returns the symbols in that order. It's easier to think about that way.

The subtractive function enables you to define generators of Roman numerals using subtractive notation. Since I can only be used before V and X, you can define a generator using subtractive 'I' "VX". This is a Gen<char * char> value. Likewise, you can define subtractive 'X' "LC" and subtractive 'C' "DM" and use Gen.oneof to define a generator that randomly selects one of these generators, and uses the selected generator to produce a value. As always, the last step is to convert the generator into an Arbitrary with Arb.fromGen, so that symbols has the type Arbitrary<char * char>.

Equipped with an Arbitrary, you can again use Prop.forAll to express the desired property. First, originalRoman is created from the subtrahend and minuend. Due to the way symbols is defined, originalRoman will have values like "IV", "XC", and so on.

The property then proceeds to invoke tryParseRoman. It also uses the actual value to produce a round-tripped value. Not only should the parser correctly understand subtractive notation, but the integer-to-Roman conversion should also prefer this notation.

The last part of the property is the assertion. Here, you need opt, which is a computation builder for option values.

In the assertion, you need to calculate the expected value. Both minuend and subtrahend are char values; in order to find their corresponding decimal values, you'll need to call tryParseRoman. The problem is that tryParseRoman returns an int option. For example, tryParseRoman "I" returns Some 1, so you may need to subtract Some 1 from Some 5. The most readable way I've found is to use a computation expression. Using let! bindings, both m and s are int values, which you can easily subtract using the normal - operator.

expected and actual are both int option values, so can be compared using Unquote's must equal operator.

Finally, the property also asserts that the original value must be equal to the round-tripped value. If not, you could have an implementation that correctly parses "IV" as 4, but converts 4 to "IIII".

Limited repetition

The previous property only ensures that subtractive notation is used in simple cases, like IX or CD. It doesn't verify that composite numerals are correctly written. As an example, the converter should convert 1893 to MDCCCXCIII, not MDCCCLXXXXIII. The second alternative is incorrect because it uses LXXXX to represent 90, instead of XC.

The underlying property is that any given symbol can only be repeated at most three times. A symbol can appear more than thrice in total, as demonstrated by the valid numeral MDCCCXCIII, in which C appears four times. For any group of repeated characters, however, a character must only be repeated once, twice, or thrice.

This also explains why the maximum Roman numeral is MMMCMXCIX: MMM (3,000) + CM (900) + XC (90) + IX (9) = 3,999.

In order to express this property in code, you first need a function to group characters. Here, I've chosen to reuse one of my own creation:

// seq<'a> -> 'a list list when 'a : equality
let group xs =
    let folder x = function
        | [] -> [[x]]
        | (h::t)::ta when h = x -> (x::h::t)::ta
        | acc -> [x]::acc
    Seq.foldBack folder xs []

This function will, for example, group "MDCCCXCIII" like this:

> "MDCCCXCIII" |> group;;
val it : char list list =
  [['M']; ['D']; ['C'; 'C'; 'C']; ['X']; ['C']; ['I'; 'I'; 'I']]
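As an aside, Haskell's built-in Data.List.group behaves the same way, which can serve as a sanity check of the expected grouping:

```haskell
import Data.List (group)

-- Data.List.group splits a list into runs of adjacent equal elements,
-- mirroring the F# group function above.
runs :: String -> [String]
runs = group
```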

All you need to do is to find the length of all such sub-lists, and assert that the maximum is at most 3:

[<Property(QuietOnSuccess = true)>]
let ``there can be no more than three identical symbols in a row`` () =
    Prop.forAll romanRange <| fun i ->
 
        let actual = Numeral.toRoman i
        
        test <@ actual
                |> Option.map (group >> List.map List.length >> List.max)
                |> Option.exists (fun x -> x <= 3) @>

Since actual is a string option, you need to express the assertion within the Maybe (Option) monad. First, you can use Option.map to map any value (should there be one) to find the maximum length of any repeated character group. This returns an int option.

Finally, you can pipe that int option into Option.exists, which will evaluate to false if there's no value, or if the boolean expression x <= 3 evaluates to false.

Input range

At this point, you're almost done. The only remaining properties you'll need to specify is that the maximum value is 3,999, and the minimum value is 1. Negative numbers, or zero, are not allowed:

[<Property(QuietOnSuccess = true)>]
let ``negative numbers and zero are not supported`` i =
    let i = -(abs i)
    let actual = Numeral.toRoman i
    test <@ Option.isNone actual @>

In this property, the function argument i can be any number, but calling abs ensures that it's positive (or zero), and the unary - operator then converts that positive value to a negative value.

Notice that the new i value shadows the input argument of the same name. This is a common trick when writing properties. It prevents me from accidentally using the input value provided by FsCheck. While the input argument is useful as a seed value, it isn't guaranteed to model the particular circumstances of this property. Here, you only care to assert what happens if the input is negative or zero. Specifically, you always want the return value to be None.

Likewise for too large input values:

[<Property(QuietOnSuccess = true)>]
let ``numbers too big are not supported`` () =
    Gen.choose (4000, Int32.MaxValue) |> Arb.fromGen |> Prop.forAll <| fun i ->
        let actual = Numeral.toRoman i
        test <@ Option.isNone actual @>

Here, Gen.choose is used to define an Arbitrary<int> that only produces numbers between 4000 and Int32.MaxValue (including both boundary values).

This test is similar to the one that exercises negative values, so you could combine them to a single function if you'd like. I'll leave this as an exercise, though.

Implementation

The interesting part of this exercise is, I think, how to define the properties. There are many ways you can implement the functions to pass all properties. Here's one of them:

open System
 
let tryParseRoman candidate = 
    let add x = Option.map ((+) x)
    let rec imp acc = function
        | 'I'::'X'::xs -> imp (acc |> add    9) xs
        | 'I'::'V'::xs -> imp (acc |> add    4) xs        
        | 'I'::xs      -> imp (acc |> add    1) xs
        | 'V'::xs      -> imp (acc |> add    5) xs
        | 'X'::'C'::xs -> imp (acc |> add   90) xs
        | 'X'::'L'::xs -> imp (acc |> add   40) xs        
        | 'X'::xs      -> imp (acc |> add   10) xs
        | 'L'::xs      -> imp (acc |> add   50) xs
        | 'C'::'M'::xs -> imp (acc |> add  900) xs
        | 'C'::'D'::xs -> imp (acc |> add  400) xs        
        | 'C'::xs      -> imp (acc |> add  100) xs
        | 'D'::xs      -> imp (acc |> add  500) xs
        | 'M'::xs      -> imp (acc |> add 1000) xs
        | []           -> acc
        | _            -> None
    candidate |> Seq.toList |> imp (Some 0)
 
let toRoman i =
    let rec imp acc = function
        | x when x >= 1000 -> imp ('M'::     acc) (x - 1000)
        | x when x >=  900 -> imp ('M'::'C'::acc) (x -  900)
        | x when x >=  500 -> imp ('D'::     acc) (x -  500)
        | x when x >=  400 -> imp ('D'::'C'::acc) (x -  400)
        | x when x >=  100 -> imp ('C'::     acc) (x -  100)
        | x when x >=   90 -> imp ('C'::'X'::acc) (x -   90)
        | x when x >=   50 -> imp ('L'::     acc) (x -   50)
        | x when x >=   40 -> imp ('L'::'X'::acc) (x -   40)
        | x when x >=   10 -> imp ('X'::     acc) (x -   10)
        | x when x >=    9 -> imp ('X'::'I'::acc) (x -    9)
        | x when x >=    5 -> imp ('V'::     acc) (x -    5)
        | x when x >=    4 -> imp ('V'::'I'::acc) (x -    4)
        | x when x >=    1 -> imp ('I'::     acc) (x -    1)
        | _                -> acc
    if 0 < i && i < 4000
    then imp [] i |> List.rev |> List.toArray |> String |> Some
    else None

Both functions use tail-recursive inner imp functions in order to accumulate the appropriate answer.

One of the nice properties (that I didn't test for) of this implementation is that the tryParseRoman function is a Tolerant Reader. While toRoman would never create such a numeral, tryParseRoman correctly understands some alternative renderings:

> "MDCCCLXXXXIII" |> tryParseRoman;;
val it : int option = Some 1893
> 1893 |> toRoman;;
val it : String option = Some "MDCCCXCIII"

In other words, the implementation follows Postel's law. tryParseRoman is liberal in what it accepts, while toRoman is conservative in what it returns.
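For readers who'd like to experiment outside F#, here's a rough Python sketch of the same two functions; it's an illustrative translation, not the original code, and the names are my own:

```python
# Subtractive two-character pairs, checked before single letters,
# mirroring the order of the F# pattern match.
PAIRS = {'IX': 9, 'IV': 4, 'XC': 90, 'XL': 40, 'CM': 900, 'CD': 400}
LETTERS = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def try_parse_roman(candidate):
    """Liberal parser: also accepts non-canonical renderings such as 'LXXXX'."""
    total, i = 0, 0
    while i < len(candidate):
        if candidate[i:i + 2] in PAIRS:
            total += PAIRS[candidate[i:i + 2]]
            i += 2
        elif candidate[i] in LETTERS:
            total += LETTERS[candidate[i]]
            i += 1
        else:
            return None  # unknown character: not a Roman numeral
    return total

def to_roman(i):
    """Conservative writer: only canonical numerals, only for 1-3999."""
    if not 0 < i < 4000:
        return None
    symbols = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
               (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
               (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]
    result = ''
    for value, symbol in symbols:
        while i >= value:  # greedy: take the largest symbol that still fits
            result += symbol
            i -= value
    return result
```

This sketch exhibits the same asymmetry: parsing "MDCCCLXXXXIII" yields 1893, while rendering 1893 produces only the canonical "MDCCCXCIII".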

Summary

Some problems look, at first glance, like obvious candidates for example-driven development. In my experience, this particularly happens when obvious examples abound. It's not difficult to come up with examples of Roman numerals, so it seems intuitive that you should just start writing some test cases with various examples. In my experience, though, that doesn't guarantee that you're led towards a good implementation.

The more a problem description is based on examples, the harder it can be to identify the underlying properties. Still, they're often there, once you start looking. As I've previously reported, using property-based test-driven development enables you to proceed in a more incremental fashion, because properties describe only parts of the desired solution.

If you're interested in learning more about Property-Based Testing, you can watch my introduction to Property-based Testing with F# Pluralsight course.


SUT Double

Wednesday, 15 June 2016 18:01:00 UTC

While it's possible to rely on Extract and Override for unit testing, Dependency Injection can make code and tests simpler and easier to maintain.

In object-oriented programming, many people still struggle with the intersection of design and testability. If a unit test is an automated test of a unit in isolation of its dependencies, then how do you isolate an object from its dependencies, most notably databases, web services, and the like?

One technique, described in The Art of Unit Testing, is called Extract and Override. The idea is that you write a class, but use a Template Method, Factory Method, or other sort of virtual class method to expose extensibility points that a unit test can use to achieve the desired isolation.

People sometimes ask me whether that isn't good enough, and why I advocate the (ostensibly) more complex technique of Dependency Injection.

It's easiest to understand the advantages and disadvantages if we have an example to discuss.

Example: Extract and Override

Imagine that you're creating a reservation system for a restaurant. Clients (web sites, smart phone apps, etcetera) display a user interface where you can fill in details about your reservation: your name, email address, the number of guests, and the date of your reservation. When you submit your reservation request, the client POSTs a JSON document to a web service.

In this example, the web service is implemented by a Controller class, and the Post method handles the incoming request:

public int Capacity { get; }
 
public IHttpActionResult Post(ReservationDto reservationDto)
{
    DateTime requestedDate;
    if(!DateTime.TryParse(reservationDto.Date, out requestedDate))
        return this.BadRequest("Invalid date.");
 
    var reservedSeats = this.ReadReservedSeats(requestedDate);
    if(this.Capacity < reservationDto.Quantity + reservedSeats)
        return this.StatusCode(HttpStatusCode.Forbidden);
 
    this.SaveReservation(requestedDate, reservationDto);
 
    return this.Ok();
}

The implementation is simple: It first attempts to validate the incoming document, and returns an error message if the document is invalid. Second, it reads the number of already reserved seats via a helper method, and rejects the request if the remaining capacity is insufficient. On the other hand, if the remaining capacity is sufficient, it saves the reservation and returns 200 OK.

The Post method relies on two helper methods that handle communication with the database:

public virtual int ReadReservedSeats(DateTime date)
{
    const string sql = @"
        SELECT COALESCE(SUM([Quantity]), 0) FROM [dbo].[Reservations]
        WHERE YEAR([Date]) = YEAR(@Date)
        AND MONTH([Date]) = MONTH(@Date)
        AND DAY([Date]) = DAY(@Date)";
 
    var connStr = ConfigurationManager.ConnectionStrings["booking"]
        .ConnectionString;
 
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.Add(new SqlParameter("@Date", date));
 
        conn.Open();
        return (int)cmd.ExecuteScalar();
    }
}
 
public virtual void SaveReservation(
    DateTime dateTime,
    ReservationDto reservationDto)
{
    const string sql = @"
        INSERT INTO [dbo].[Reservations] ([Date], [Name], [Email], [Quantity])
        VALUES (@Date, @Name, @Email, @Quantity)";
 
    var connStr = ConfigurationManager.ConnectionStrings["booking"]
        .ConnectionString;
 
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.Add(
            new SqlParameter("@Date", reservationDto.Date));
        cmd.Parameters.Add(
            new SqlParameter("@Name", reservationDto.Name));
        cmd.Parameters.Add(
            new SqlParameter("@Email", reservationDto.Email));
        cmd.Parameters.Add(
            new SqlParameter("@Quantity", reservationDto.Quantity));
 
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}

In this example, both helper methods are public, but they could have been protected without changing any conclusions; by being public, though, they're easier to override using Moq. The important detail is that both of these methods are overridable. In C#, you declare that with the virtual keyword; in Java, methods are overridable by default.

Both of these methods use elemental ADO.NET to communicate with the database. You can also use an ORM, or any other database access technique you prefer - it doesn't matter for this discussion. What matters is that the methods can be overridden by unit tests.

Here's a unit test of the happy path:

[Fact]
public void PostReturnsCorrectResultAndHasCorrectStateOnAcceptableRequest()
{
    var json =
            new ReservationDto
            {
                Date = "2016-05-31",
                Name = "Mark Seemann",
                Email = "mark@example.com",
                Quantity = 1
            };
    var sut = new Mock<ReservationsController> { CallBase = true };
    sut
        .Setup(s => s.ReadReservedSeats(new DateTime(2016, 5, 31)))
        .Returns(0);
    sut
        .Setup(s => s.SaveReservation(new DateTime(2016, 5, 31), json))
        .Verifiable();
            
    var actual = sut.Object.Post(json);
 
    Assert.IsAssignableFrom<OkResult>(actual);
    sut.Verify();
}

This example uses the Extract and Override technique, but instead of creating a test-specific class that derives from ReservationsController, it uses Moq to create a dynamic Test Double for the System Under Test (SUT) - a SUT Double.

Since both ReadReservedSeats and SaveReservation are virtual, Moq can override them with test-specific behaviour. In this test, it defines the behaviour of ReadReservedSeats in such a way that it returns 0 when the input is a particular date.
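If you're not using Moq, the equivalent is a hand-written test-specific subclass. As a language-neutral illustration, here's a minimal Python sketch of the Extract and Override technique; the class and method names are hypothetical, and date validation is elided for brevity:

```python
class ReservationsController:
    """Production class with overridable data-access extensibility points."""
    capacity = 12

    def post(self, reservation):
        reserved = self.read_reserved_seats(reservation["date"])
        if self.capacity < reservation["quantity"] + reserved:
            return "403 Forbidden"
        self.save_reservation(reservation["date"], reservation)
        return "200 OK"

    def read_reserved_seats(self, date):
        raise NotImplementedError("would query the database")

    def save_reservation(self, date, reservation):
        raise NotImplementedError("would write to the database")


class TestableReservationsController(ReservationsController):
    """Test-specific subclass that overrides the database interaction."""
    def __init__(self):
        self.saved = []

    def read_reserved_seats(self, date):
        return 0  # stub: pretend no seats are reserved yet

    def save_reservation(self, date, reservation):
        self.saved.append((date, reservation))  # record instead of saving
```

A test can then exercise post through the subclass, and inspect the saved list instead of a database.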

While the test is simple, it has a single blemish. It ought to follow the Arrange Act Assert pattern, so it shouldn't have to configure the SaveReservation method before it calls the Post method. After all, the SaveReservation method is a Command, and you should use Mocks for Commands. In other words, the test ought to verify the interaction with the SaveReservation method in the Assert phase; not configure it in the Arrange phase.

Unfortunately, if you don't configure the SaveReservation method before calling Post, Moq will use the base implementation, which will attempt to interact with the database. The database isn't available in the unit test context, so without that override, the base method will throw an exception, causing the test to fail.

Still, that's a minor issue. In general, the test is easy to follow, and the design of the ReservationsController class itself is also straightforward. Are there any downsides?

Example: shared connection

From a design perspective, the above version is fine, but you may find it inefficient that ReadReservedSeats and SaveReservation both open and close a connection to the database. Wouldn't it be more efficient if they could share a single connection?

If (by measuring) you decide that you'd like to refactor the ReservationsController class to use a shared connection, your first attempt might look like this:

public IHttpActionResult Post(ReservationDto reservationDto)
{
    DateTime requestedDate;
    if (!DateTime.TryParse(reservationDto.Date, out requestedDate))
        return this.BadRequest("Invalid date.");
 
    using (var conn = this.OpenDbConnection())
    {
        var reservedSeats = this.ReadReservedSeats(conn, requestedDate);
        if (this.Capacity < reservationDto.Quantity + reservedSeats)
            return this.StatusCode(HttpStatusCode.Forbidden);
 
        this.SaveReservation(conn, requestedDate, reservationDto);
 
        return this.Ok();
    }
}
 
public virtual SqlConnection OpenDbConnection()
{
    var connStr = ConfigurationManager.ConnectionStrings["booking"]
        .ConnectionString;
 
    var conn = new SqlConnection(connStr);
    try
    {
        conn.Open();
    }
    catch
    {
        conn.Dispose();
        throw;
    }
    return conn;
}
 
public virtual int ReadReservedSeats(SqlConnection conn, DateTime date)
{
    const string sql = @"
        SELECT COALESCE(SUM([Quantity]), 0) FROM [dbo].[Reservations]
        WHERE YEAR([Date]) = YEAR(@Date)
        AND MONTH([Date]) = MONTH(@Date)
        AND DAY([Date]) = DAY(@Date)";
 
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.Add(new SqlParameter("@Date", date));
 
        return (int)cmd.ExecuteScalar();
    }
}
 
public virtual void SaveReservation(
    SqlConnection conn,
    DateTime dateTime,
    ReservationDto reservationDto)
{
    const string sql = @"
        INSERT INTO [dbo].[Reservations] ([Date], [Name], [Email], [Quantity])
        VALUES (@Date, @Name, @Email, @Quantity)";
 
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.Add(
            new SqlParameter("@Date", reservationDto.Date));
        cmd.Parameters.Add(
            new SqlParameter("@Name", reservationDto.Name));
        cmd.Parameters.Add(
            new SqlParameter("@Email", reservationDto.Email));
        cmd.Parameters.Add(
            new SqlParameter("@Quantity", reservationDto.Quantity));
 
        cmd.ExecuteNonQuery();
    }
}

You've changed both ReadReservedSeats and SaveReservation to take an extra parameter: the connection to the database. That connection is created by the OpenDbConnection method, but you also have to make that method overridable, because otherwise, it'd attempt to connect to a database during unit testing, and thereby cause the tests to fail.

You can still unit test using the Extract and Override technique, but the test becomes more complicated:

[Fact]
public void PostReturnsCorrectResultAndHasCorrectStateOnAcceptableRequest()
{
    var json =
            new ReservationDto
            {
                Date = "2016-05-31",
                Name = "Mark Seemann",
                Email = "mark@example.com",
                Quantity = 1
            };
    var sut = new Mock<ReservationsController> { CallBase = true };
    sut.Setup(s => s.OpenDbConnection()).Returns((SqlConnection)null);
    sut
        .Setup(s => s.ReadReservedSeats(
            It.IsAny<SqlConnection>(),
            new DateTime(2016, 5, 31)))
        .Returns(0);
    sut
        .Setup(s => s.SaveReservation(
            It.IsAny<SqlConnection>(),
            new DateTime(2016, 5, 31), json))
        .Verifiable();
            
    var actual = sut.Object.Post(json);
 
    Assert.IsAssignableFrom<OkResult>(actual);
    sut.Verify();
}

Not only must you override ReadReservedSeats and SaveReservation, but you must also supply a Dummy Object for the connection object, as well as override OpenDbConnection. Still manageable, perhaps, but the design indisputably deteriorated.

You can summarise the flaw by a single design smell: Feature Envy. Both the ReadReservedSeats and the SaveReservation methods take an argument of the type SqlConnection. On the other hand, they don't use any instance members of the ReservationsController class that currently hosts them. These methods seem like they ought to belong to SqlConnection instead of ReservationsController. That's not possible, however, since SqlConnection is a class from the Base Class Library, but you can, instead, create a new Repository class.

Example: Repository

A common design pattern is the Repository pattern, although the way it's commonly implemented today has diverged somewhat from the original description in PoEAA. Here, I'm going to apply it like people often do. You start by defining a new class:

public class SqlReservationsRepository : IDisposable
{
    private readonly Lazy<SqlConnection> lazyConn;
 
    public SqlReservationsRepository()
    {
        this.lazyConn = new Lazy<SqlConnection>(this.OpenSqlConnection);
    }
 
    private SqlConnection OpenSqlConnection()
    {
        var connStr = ConfigurationManager.ConnectionStrings["booking"]
            .ConnectionString;
 
        var conn = new SqlConnection(connStr);
        try
        {
            conn.Open();
        }
        catch
        {
            conn.Dispose();
            throw;
        }
        return conn;
    }
 
    public virtual int ReadReservedSeats(DateTime date)
    {
        const string sql = @"
            SELECT COALESCE(SUM([Quantity]), 0) FROM [dbo].[Reservations]
            WHERE YEAR([Date]) = YEAR(@Date)
            AND MONTH([Date]) = MONTH(@Date)
            AND DAY([Date]) = DAY(@Date)";
 
        using (var cmd = new SqlCommand(sql, this.lazyConn.Value))
        {
            cmd.Parameters.Add(new SqlParameter("@Date", date));
 
            return (int)cmd.ExecuteScalar();
        }
    }
 
    public virtual void SaveReservation(
        DateTime dateTime,
        ReservationDto reservationDto)
    {
        const string sql = @"
            INSERT INTO [dbo].[Reservations] ([Date], [Name], [Email], [Quantity])
            VALUES (@Date, @Name, @Email, @Quantity)";
 
        using (var cmd = new SqlCommand(sql, this.lazyConn.Value))
        {
            cmd.Parameters.Add(
                new SqlParameter("@Date", reservationDto.Date));
            cmd.Parameters.Add(
                new SqlParameter("@Name", reservationDto.Name));
            cmd.Parameters.Add(
                new SqlParameter("@Email", reservationDto.Email));
            cmd.Parameters.Add(
                new SqlParameter("@Quantity", reservationDto.Quantity));
 
            cmd.ExecuteNonQuery();
        }
    }
 
    public void Dispose()
    {
        this.Dispose(true);
        GC.SuppressFinalize(this);
    }
 
    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
            this.lazyConn.Value.Dispose();
    }
}

The new SqlReservationsRepository class contains the two ReadReservedSeats and SaveReservation methods, and because you've now moved them to a class that contains a shared database connection, the methods don't need the connection as a parameter.

The ReservationsController can use the new SqlReservationsRepository class to do its work, while keeping connection management efficient. In order to make it testable, however, you must make it overridable. The SqlReservationsRepository class' methods are already virtual, but that's not the class you're testing. The System Under Test is the ReservationsController class, and you have to make its use of SqlReservationsRepository overridable as well.

If you wish to avoid Dependency Injection, you can use a Factory Method:

public IHttpActionResult Post(ReservationDto reservationDto)
{
    DateTime requestedDate;
    if (!DateTime.TryParse(reservationDto.Date, out requestedDate))
        return this.BadRequest("Invalid date.");
 
    using (var repo = this.CreateRepository())
    {
        var reservedSeats = repo.ReadReservedSeats(requestedDate);
        if (this.Capacity < reservationDto.Quantity + reservedSeats)
            return this.StatusCode(HttpStatusCode.Forbidden);
 
        repo.SaveReservation(requestedDate, reservationDto);
 
        return this.Ok();
    }
}
        
public virtual SqlReservationsRepository CreateRepository()
{
    return new SqlReservationsRepository();
}

The Factory Method in the above example is the CreateRepository method, which is virtual, and thereby overridable. You can override it in a unit test like this:

[Fact]
public void PostReturnsCorrectResultAndHasCorrectStateOnAcceptableRequest()
{
    var json =
            new ReservationDto
            {
                Date = "2016-05-31",
                Name = "Mark Seemann",
                Email = "mark@example.com",
                Quantity = 1
            };
    var repo = new Mock<SqlReservationsRepository>();
    repo
        .Setup(r => r.ReadReservedSeats(new DateTime(2016, 5, 31)))
        .Returns(0);
    repo
        .Setup(r => r.SaveReservation(new DateTime(2016, 5, 31), json))
        .Verifiable();
    var sut = new Mock<ReservationsController> { CallBase = true };
    sut.Setup(s => s.CreateRepository()).Returns(repo.Object);
 
    var actual = sut.Object.Post(json);
 
    Assert.IsAssignableFrom<OkResult>(actual);
    repo.Verify();
}

You'll notice that not only did the complexity of the System Under Test increase, but the test itself became more complicated as well. In the previous version, at least you only needed to create a single Mock&lt;T&gt;, but now you have to create two different test doubles and connect them. This is a typical example demonstrating the shortcomings of the Extract and Override technique. It doesn't scale well as complexity increases.
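To make the two-double arrangement explicit, here's a minimal Python sketch of the Factory Method design; the names are hypothetical, and date validation is elided. Notice that the test must subclass the controller for no other reason than to substitute the repository:

```python
class InMemoryRepository:
    """Hypothetical stand-in for SqlReservationsRepository."""
    def __init__(self):
        self.reservations = []

    def read_reserved_seats(self, date):
        return sum(r["quantity"] for d, r in self.reservations if d == date)

    def save_reservation(self, date, reservation):
        self.reservations.append((date, reservation))


class ReservationsController:
    capacity = 12

    def create_repository(self):
        # Factory Method: the production implementation would return a
        # repository holding a real database connection.
        raise NotImplementedError("would open a database connection")

    def post(self, reservation):
        repo = self.create_repository()
        reserved = repo.read_reserved_seats(reservation["date"])
        if self.capacity < reservation["quantity"] + reserved:
            return "403 Forbidden"
        repo.save_reservation(reservation["date"], reservation)
        return "200 OK"


class TestableReservationsController(ReservationsController):
    """One double for the SUT, connected to a second double for the repository."""
    def __init__(self, repository):
        self.repository = repository

    def create_repository(self):
        return self.repository
```

The repository double does the real verification work; the controller subclass exists only to wire it in.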

Example: Dependency Injection

In 1994 we were taught to favor object composition over class inheritance. You can do that, and still keep your code loosely coupled, with Dependency Injection. Instead of relying on virtual methods, inject a polymorphic object into the class:

public class ReservationsController : ApiController
{
    public ReservationsController(IReservationsRepository repository)
    {
        if (repository == null)
            throw new ArgumentNullException(nameof(repository));
 
        this.Capacity = 12;
        this.Repository = repository;
    }
 
    public int Capacity { get; }
 
    public IReservationsRepository Repository { get; }
 
    public IHttpActionResult Post(ReservationDto reservationDto)
    {
        DateTime requestedDate;
        if (!DateTime.TryParse(reservationDto.Date, out requestedDate))
            return this.BadRequest("Invalid date.");
 
        var reservedSeats =
            this.Repository.ReadReservedSeats(requestedDate);
        if (this.Capacity < reservationDto.Quantity + reservedSeats)
            return this.StatusCode(HttpStatusCode.Forbidden);
 
        this.Repository.SaveReservation(requestedDate, reservationDto);
 
        return this.Ok();
    }
}

In this version of ReservationsController, an IReservationsRepository object is injected into the object via the constructor and saved in a class field for later use. When the Post method executes, it calls Repository.ReadReservedSeats and Repository.SaveReservation without further ado.

The IReservationsRepository interface is defined like this:

public interface IReservationsRepository
{
    int ReadReservedSeats(DateTime date);
    void SaveReservation(DateTime dateTime, ReservationDto reservationDto);
}

Perhaps you're surprised to see that it merely defines the two ReadReservedSeats and SaveReservation methods, but makes no attempt at making the interface disposable.

Not only is IDisposable an implementation detail, but it also keeps the implementation of ReservationsController simple. Notice how it doesn't attempt to control the lifetime of the injected repository, which may or may not be disposable. In a few paragraphs, we'll return to this matter, but first, witness how the unit test became simpler as well:

[Fact]
public void PostReturnsCorrectResultAndHasCorrectStateOnAcceptableRequest()
{
    var json =
            new ReservationDto
            {
                Date = "2016-05-31",
                Name = "Mark Seemann",
                Email = "mark@example.com",
                Quantity = 1
            };
    var repo = new Mock<IReservationsRepository>();
    repo
        .Setup(r => r.ReadReservedSeats(new DateTime(2016, 5, 31)))
        .Returns(0);
    var sut = new ReservationsController(repo.Object);
 
    var actual = sut.Post(json);
 
    Assert.IsAssignableFrom<OkResult>(actual);
    repo.Verify(
        r => r.SaveReservation(new DateTime(2016, 5, 31), json));
}

With this test, you can finally use the Arrange Act Assert structure, instead of having to configure the SaveReservation method call in the Arrange phase. This test arranges the Test Fixture by creating a Test Double for the IReservationsRepository interface and injecting it into the ReservationsController. You only need to configure the ReadReservedSeats method, because there's no default behaviour that you need to suppress.
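Stripped of framework details, the same design can be sketched in Python; the names are hypothetical, and date validation is elided. No subclassing of the SUT is required; the test simply injects a fake:

```python
class ReservationsController:
    """Constructor injection: the controller never creates its own repository."""
    def __init__(self, repository):
        if repository is None:
            raise ValueError("repository")
        self.capacity = 12
        self.repository = repository

    def post(self, reservation):
        reserved = self.repository.read_reserved_seats(reservation["date"])
        if self.capacity < reservation["quantity"] + reserved:
            return "403 Forbidden"
        self.repository.save_reservation(reservation["date"], reservation)
        return "200 OK"


class FakeRepository:
    """Test Double implementing the same protocol as the SQL repository."""
    def __init__(self):
        self.saved = []

    def read_reserved_seats(self, date):
        return 0  # stub: no seats reserved yet

    def save_reservation(self, date, reservation):
        self.saved.append((date, reservation))  # record the interaction
```

Compared to the Factory Method version, only one test double is needed, and the SUT itself is a plain object.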

You may be wondering about the potential memory leak when the SqlReservationsRepository is in use. After all, ReservationsController doesn't dispose of the injected repository.

You address this concern when you compose the dependency graph. This example uses ASP.NET Web API, which has an extensibility point for this exact purpose:

public class PureCompositionRoot : IHttpControllerActivator
{
    public IHttpController Create(
        HttpRequestMessage request,
        HttpControllerDescriptor controllerDescriptor,
        Type controllerType)
    {
        if(controllerType == typeof(ReservationsController))
        {
            var repo = new SqlReservationsRepository();
            request.RegisterForDispose(repo);
            return new ReservationsController(repo);
        }
 
        throw new ArgumentException(
            "Unexpected controller type: " + controllerType,
            nameof(controllerType));
    }
}

The call to request.RegisterForDispose tells the ASP.NET Web API framework to dispose of the concrete repo object once it has handled the request and served the response.

Conclusion

At first glance, it seems like the overall design would be easier if you make your SUT testable via inheritance, but once things become more complex, favouring composition over inheritance gives you more freedom to design according to well-known design patterns.

In case you want to dive into the details of the code presented here, it's available on GitHub. You can follow the progression of the code in the repository's commit history.

If you're interested in learning more about advanced unit testing techniques, you can watch my popular Pluralsight course.


TIE fighter FsCheck properties

Tuesday, 17 May 2016 10:55:00 UTC

Use the F# TIE fighter operator combination to eliminate a hard-to-name intermediary value.

A doctrine of Clean Code is the Boy Scout Rule: leave the code cleaner than you found it. Attempting to live by that rule, I'm always looking for ways to improve my code.

Writing properties with FsCheck is enjoyable, but I've been struggling with expressing ad-hoc Arbitraries in a clean style. Although I've already twice written about this, I've recently found another improvement.

Cleaner, but not clean enough

Previously, I've described how you can use the backward pipe operator to avoid enclosing a multi-line expression in brackets:

[<Property(QuietOnSuccess = true)>]
let ``Any live cell with > 3 live neighbors dies`` (cell : int * int) =
    let nc = Gen.elements [4..8] |> Arb.fromGen
    Prop.forAll nc <| fun neighborCount ->
        let liveNeighbors =
            cell
            |> findNeighbors
            |> shuffle
            |> Seq.take neighborCount
            |> Seq.toList
        
        let actual : State =
            calculateNextState (cell :: liveNeighbors |> shuffle |> set) cell
 
        Dead =! actual

This property uses the custom Arbitrary nc to express a property about Conway's Game of Life. The value nc is an Arbitrary&lt;int&gt;, which is a generator of integer values between 4 and 8 (both included).

There's a problem with that code, though: I don't care about nc; I care about the values it creates. That's the important value in my property, and this is the reason I reserved the descriptive name neighborCount for that role. The property cares about the neighbour count: for any neighbour count in the range 4-8, it should hold that blah blah blah, and so on...

Unfortunately, I was still left with the need to pass an Arbitrary<'a> to Prop.forAll, so I named it the best I could: nc (short for Neighbour Count). Following Clean Code's heuristics for short symbol scope, I considered it an acceptable name, albeit a bit cryptic.

TIE Fighters to the rescue!

One day, I was looking at a property like the one above, and I thought: if you consider the expression Prop.forAll nc in isolation, you could also write it nc |> Prop.forAll. Does it work in this context? Yes it does:

[<Property(QuietOnSuccess = true)>]
let ``Any live cell with > 3 live neighbors dies`` (cell : int * int) =
    let nc = Gen.elements [4..8] |> Arb.fromGen
    (nc |> Prop.forAll) <| fun neighborCount ->
        let liveNeighbors =
            cell
            |> findNeighbors
            |> shuffle
            |> Seq.take neighborCount
            |> Seq.toList
        
        let actual : State =
            calculateNextState (cell :: liveNeighbors |> shuffle |> set) cell
 
        Dead =! actual

This code compiles, and is equivalent to the first example. I deliberately put the expression nc |> Prop.forAll in brackets to be sure that I'd made a correct replacement. Pipes are left-associative, though, so in this case, the brackets are redundant:

[<Property(QuietOnSuccess = true)>]
let ``Any live cell with > 3 live neighbors dies`` (cell : int * int) =
    let nc = Gen.elements [4..8] |> Arb.fromGen
    nc |> Prop.forAll <| fun neighborCount ->
        let liveNeighbors =
            cell
            |> findNeighbors
            |> shuffle
            |> Seq.take neighborCount
            |> Seq.toList
        
        let actual : State =
            calculateNextState (cell :: liveNeighbors |> shuffle |> set) cell
 
        Dead =! actual

This still works (and fails if the system under test has a defect).

Due to referential transparency, though, the value nc is equal to the expression Gen.elements [4..8] |> Arb.fromGen, so you can replace it:

[<Property(QuietOnSuccess = true)>]
let ``Any live cell with > 3 live neighbors dies`` (cell : int * int) =
    Gen.elements [4..8]
    |> Arb.fromGen
    |> Prop.forAll <| fun neighborCount ->
        let liveNeighbors =
            cell
            |> findNeighbors
            |> shuffle
            |> Seq.take neighborCount
            |> Seq.toList
        
        let actual : State =
            calculateNextState (cell :: liveNeighbors |> shuffle |> set) cell
 
        Dead =! actual

In the above example, I've also slightly reformatted the expression, so that each expression composed with a forward pipe is on a separate line. That's not required, but I find it more readable.

Notice how Prop.forAll is now surrounded by pipes: |> Prop.forAll <|. This is humorously called TIE fighter infix.

Summary

Sometimes, giving an intermediary value a descriptive name can improve code readability, but in other cases, such a value is only in the way. This was the case with the nc value in the first example above. Using TIE fighter infix notation enables you to get rid of a redundant, hard-to-name symbol. Because nc didn't add any information to the code, I find the refactored version easier to read. I've left the code base cleaner than I found it.


CQS and server-generated Entity IDs

Friday, 06 May 2016 17:36:00 UTC

Is it a violation of Command Query Separation to update an ID property when saving an Entity to a database?

In my Encapsulation and SOLID course on Pluralsight, I explain how the elusive object-oriented quality encapsulation can be approximated by the actionable principles of Command Query Separation (CQS) and Postel's law.

One of the questions that invariably arise when people first learn about CQS is how to deal with (database) server-generated IDs. While I've already covered that question, I recently came upon a variation of the question:

"The Create method is a command, then if the object passed to this method have some changes in their property, does it violate any rule? I mean that we can always get the new id from the object itself, so we don't need to return another integer. Is it good or bad practice?"
In this article, I'll attempt to answer this question.

Returning an ID by mutating input

I interpret the question like this: an Entity (as described in DDD) can have a mutable Id property. A Create method could save the Entity in a database, and then update the input value's Id property with the newly created record's ID.

As an example, consider this User class:

public class User
{
    public int Id { get; set; }

    public string FirstName { get; set; }

    public string LastName { get; set; }
}

In order to create a new User in your database, you could define an API like this:

public interface IUserRepository
{
    void Create(User user);
}

An implementation of IUserRepository based on a relational database could perform an INSERT into the appropriate table, get the ID of the created record, and update the User object's Id property.

This test snippet demonstrates that behaviour:

var u = new User { FirstName = "Jane", LastName = "Doe" };
Assert.Equal(0, u.Id);
 
repository.Create(u);
Assert.NotEqual(0, u.Id);

When you create the u object without assigning a value to the Id property, that property has the default value of 0. Only after the Create method returns does Id hold a proper value.

Evaluation

Does this design adhere to CQS? Yes, it does. The Create method doesn't return a value, but rather changes the state of the system. Not only does it create a record in your database, but it also changes the state of the u object. Nowhere does CQS state that an operation can't change more than a single part of the system.

Is it good design, then? I don't think that it is.

First, the design violates another part of encapsulation: protection of invariants. An invariant of any Entity is that it has an ID; the single defining feature of Entities is that they have enduring and stable identities. When you change the identity of an Entity, it's no longer the same Entity. Yet this design allows exactly that:

u.Id = 42;
// ...
u.Id = 1337;

Second, such a design puts considerable trust in the implicit protocol between any client and the implementation of IUserRepository. Not only must the Create method save the User object in a data store, but it must also update the Id.

What happens, though, if you replay the method call:

repository.Create(new User { FirstName = "Ada", LastName = "Poe" });
repository.Create(new User { FirstName = "Ada", LastName = "Poe" });

This will result in duplicated entries, because the repository can't detect whether this is a replay, or simply two new users with the same name. You may find this example contrived, but in these days of cloud-based storage, it's common for clients to apply retry strategies.

If you use one of the alternatives I previously outlined, you will not have this problem.
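One of those alternatives is to let the client supply the identity up front, for example as a GUID. The following is a hypothetical sketch (the in-memory implementation and its names are mine, not from the original articles); because a replayed command carries the same GUID, the repository can detect and ignore the duplicate:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: the client supplies a GUID, so a replayed
// Create command can be detected and ignored (idempotence).
public interface IUserRepository
{
    void Create(Guid id, string firstName, string lastName);
}

public class InMemoryUserRepository : IUserRepository
{
    private readonly Dictionary<Guid, string[]> users =
        new Dictionary<Guid, string[]>();

    public void Create(Guid id, string firstName, string lastName)
    {
        // A replay of the same command is a no-op.
        if (this.users.ContainsKey(id))
            return;
        this.users.Add(id, new[] { firstName, lastName });
    }

    public int Count
    {
        get { return this.users.Count; }
    }
}
```

Calling Create twice with the same GUID leaves a single record, whereas two calls with fresh GUIDs create two distinct users, even if the names happen to coincide.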

Invariants

Even if you don't care about replays or duplicates, you should still consider the invariants of Entities. As a minimum, you shouldn't be able to change the ID of an Entity. For a class like User, it'll also make client developers' job easier if it can provide some useful guarantees. This is part of Postel's law applied: be conservative in what you send. In this case, at least guarantee that no values will be null. Thus, a better (but by no means perfect) User class could be:

public class User
{
    public User(int id)
    {
        if (id <= 0)
            throw new ArgumentOutOfRangeException(
                nameof(id),
                "The ID must be a (unique) positive value.");
 
        this.Id = id;
        this.firstName = "";
        this.lastName = "";
    }
 
    public int Id { get; }
 
    private string firstName;        
    public string FirstName
    {
        get { return this.firstName; }
        set
        {
            if (value == null)
                throw new ArgumentNullException(nameof(value));
            this.firstName = value;
        }
    }
 
    private string lastName;
    public string LastName
    {
        get { return this.lastName; }
        set
        {
            if (value == null)
                throw new ArgumentNullException(nameof(value));
            this.lastName = value;
        }
    }
}

The constructor initialises all class fields, and ensures that Id is a positive integer. It can't ensure that the ID is unique, though - at least, not without querying the database, which would introduce other problems.

How do you create a new User object and save it in the database, then?

One option is this:

public interface IUserRepository
{
    void Create(string firstName, string lastName);
 
    User Read(int id);
 
    void Update(User user);
 
    void Delete(int id);
}

The Create method doesn't use the User class at all, because at this time, the Entity doesn't yet exist. It only exists once it has an ID, and in this scenario, that doesn't happen until the database has stored it and assigned it an ID.

Other methods can still use the User class as either input or output, because once the Entity has an ID, its invariants are satisfied.
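As an illustration (an in-memory sketch of my own, not a production implementation), an implementation of this interface could assign the ID internally, so that a User object only ever comes into existence with a valid, positive ID:

```csharp
using System;
using System.Collections.Generic;

// Minimal version of the User Entity from above: the ID is assigned
// at construction and can't change afterwards.
public class User
{
    public User(int id)
    {
        if (id <= 0)
            throw new ArgumentOutOfRangeException(nameof(id));
        this.Id = id;
        this.FirstName = "";
        this.LastName = "";
    }

    public int Id { get; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Illustrative in-memory repository: Create stores raw values, and
// only Read materialises a User, which therefore always satisfies
// its invariants.
public class InMemoryUserRepository
{
    private readonly Dictionary<int, string[]> rows =
        new Dictionary<int, string[]>();
    private int nextId = 1;

    public void Create(string firstName, string lastName)
    {
        this.rows.Add(this.nextId++, new[] { firstName, lastName });
    }

    public User Read(int id)
    {
        var row = this.rows[id];
        return new User(id) { FirstName = row[0], LastName = row[1] };
    }
}
```

Notice that no code path can produce a User with an invalid ID: the Entity simply doesn't exist until the repository has assigned one.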

There are other problems with this design, though, so I still prefer the alternatives I originally sketched.

Conclusion

Proper object-oriented design should, as a bare minimum, consider encapsulation. This includes protecting the invariants of objects. For Entities, it means that an Entity must always have a stable and enduring identity. Well-designed Entities guarantee that identity values are always present and immutable. This requirement tends to be at odds with most popular Object-Relational Mappers (ORM), which require that ID fields are externally assignable. In order to use ORMs, most programmers compromise on encapsulation, ending up with systems that are relational in nature, but far from object-oriented.


Comments

"This includes protecting the invariants of objects. For Entities, it means that an Entity must always have a stable and enduring identity."

Great points.

However, how would you get from void Create(string firstName, string lastName) to User Read(int id) especially if an entity doesn't have a natural key?

2016-05-06 18:26 UTC

Vladimir, thank you for writing. Exactly how you get the ID depends on the exact requirements of your application. As an example, if you're developing a web application, you may simply be creating the user, and then that's the end of the HTTP request. Meanwhile, asynchronously, a background process is performing the actual database insertion and subsequently sends an email to the user with a link to click in order to verify the email. In that link is the generated ID, so when your server receives the next request, you already know the ID.

That may not always be a practical strategy, but I started to describe this technique, because my experience with CQRS and REST tells me that you have lots of options for communicating state and identity. Still, if you need an ID straight away, you can always pass a GUID as a correlation ID, so that you can later find the Entity you created. That's the technique I originally described.

2016-05-06 19:04 UTC

Thank you for the reply.

BTW, ORMs don't require you to make Ids externally assignable. You can have ORMs assign Ids for you internally, and that's a preferred design in most cases. Also, you can make the Id setters non-public and thus protect entities' encapsulation. There still are issues, of course. Namely, entities don't have established identities until they are saved to the DB or added to the Context/Session. But this issue isn't that big comparing to what you described in the post.

2016-05-06 19:25 UTC
Ethan Nelson

I was skimming through the comments and had this nagging suspicion that the entire topic of, "how do I get my ID with CQS", was a false dichotomy. The premise that the "ID" stored in the database is the "identity" of the object or entity I think can be challenged.

I prefer to look at the "ID" as purely database implementation details. Assuming the repository is SQL, the key type preferred by db admins will be the smallest-reasonable monotonically increasing ID... namely INT. But switch to Amazon DynamoDB and that conclusion flies out the window. With their totally managed and distributed indexes and hashing algorithms... GUID's would be fantastic.

What I WISH developers would do is take the time to think about their Domain and form their own understanding of what truly defines the identity of the entity. Whatever decision they make, it translates to a "natural key" for persistence.

Natural keys are great because a developer has promised, "these combinations of properties uniquely identify entities of type T". The result is unique indexes that can have dramatic performance benefits... outside the application benefit of being able to construct your repository like this:

public interface IRepository<T>
{
    void Create(T item);
}

As for the "ID" field? Who cares?... that's up to the database. The developer already knows exactly how to find this T. I wouldn't even include it on the domain model object.

This would give db admins options. We can look at the natural key size, evaluate how it is queried and order it appropriately. We can decide if the nature of typical joins merit clustering (as is common with Header -> Detail style records). If needed, we can optimize lookup of large keys through stored procedures and computed persisted checksum columns. Nullability and default values become part of design discussions on the front end. Finally, if we feel like it... we can assign a surrogate key, or use a GUID. The developer doesn't care, our dba's do.

Thinking back to my last 6 or so development projects, I've never had to "get the id". The next operation is always a matter of query or command on unique criteria I was responsible for creating in the first place. This approach makes my dba's happy.
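The natural-key idea could be sketched like this (a hypothetical example in which the email address is assumed to be the unique natural key; the implementation is illustrative only):

```csharp
using System;
using System.Collections.Generic;

public interface IRepository<T>
{
    void Create(T item);
}

public class User
{
    public string Email { get; set; }
    public string Name { get; set; }
}

// Hypothetical natural-key repository: the client already knows how
// to find the Entity it created, so no generated ID ever surfaces.
public class UserRepository : IRepository<User>
{
    private readonly Dictionary<string, User> byEmail =
        new Dictionary<string, User>();

    public void Create(User item)
    {
        // Add throws on a duplicate key, so the natural key
        // enforces uniqueness.
        this.byEmail.Add(item.Email, item);
    }

    public User Find(string email)
    {
        return this.byEmail[email];
    }
}
```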

2016-05-10 15:06 UTC

Ethan, thank you for your comment; I agree with 90 percent of what you wrote, which is also the reason I prefer the options I previously outlined.

Entities don't always have 'natural IDs', though, and even when they have, experience has taught me not to use them. Is a person's email address a natural ID? It is, but people change email addresses.

Here in Denmark, for many years many public software systems have used the Danish personal identification numbers as natural keys, as they were assumed to be nationally unique and stable. These numbers encode various information about a person, such as birth-date and sex. A few inhabitants, however, get sex change operations, and then desire a new number that corresponds to their new sex. Increasingly, these requests are being granted. New ID, same person.

I've seen such changes happen so many times during my career that I've become distrustful of 'natural IDs'. If I need to identify an Entity, I usually attach a GUID to it.

2016-05-10 20:04 UTC
Ethan Nelson

With regard to entity identification, I'm not sure stability matters. Who cares if the email address changes... as long as it is unique? Uniqueness is sufficient for the application to identify the entity. I agree from a strictly database standpoint; I'm not thrilled about assigning a primary key to a natural key due to the stability issue across relationships. But that does not invalidate the key for use as an identity in my mind. Both indexes can exist, both be unique, and each with their own purpose.

Also, it is difficult for me to envision a scenario where there is no natural key. If there are two rows where the only difference is an auto-incrementing integer, or different guids... what is the meaning of one row versus the next? That meaning cannot, by definition, be contained in the surrogate keys... they add nothing to the business meaning of the entity.

I feel I've derailed the discussion into database modeling theory, and we are talking about CQS with "create the entity" as a stimulating use case. Suffice it to say, IMHO, if (a, b) is unique today according to business rule, then it should be my unique identity from a developer perspective. By leaving row identifiers up to DBA's and dropping them from my domain model entirely, I minimize impact to those technical implementations when the business rules change such that (a, b, c) becomes the new unique, and, preserve CQS in the process.

2016-05-10 23:41 UTC
