Repeatable execution in C#

Monday, 06 April 2020 07:46:00 UTC

A C# example of Goldilogs.

This article is part of a series of articles about repeatable execution. The introductory article argued that if you've logged the impure actions that a system made, you have enough information to reproduce what happened. The previous article verified that for the example scenario, the impure actions were limited to reading the current time and interacting with the application database.

This article shows how to implement equivalent functionality in C#. You should be able to extrapolate from this to other object-oriented programming languages.

The code is available on GitHub.

Impure actions #

In the previous article I modelled impure actions as free monads. In C#, it'd be more idiomatic to use Dependency Injection. Model each impure interaction as an interface.

public interface IClock
{
    DateTime GetCurrentDateTime();
}

The demo code demonstrates a single feature of a REST API, and it only requires a single method on this interface to work. Following the Dependency Inversion Principle:

"clients [...] own the abstract interfaces"

This interface only defines a single method, because that's all the client code requires.

Likewise, the client code only needs two methods to interact with the database:

public interface IReservationsRepository
{
    IEnumerable<Reservation> ReadReservations(DateTime date);
    void Create(Reservation reservation);
}

In the Haskell example code base, I also implemented GET for /reservations, but I forgot to do that here. There are only two methods on the interface: one to query the database, and one to create a new row.

Receive a reservation #

The central feature of the service is to receive and handle an HTTP POST request, as described in the introductory article. When a document arrives, it triggers a series of non-trivial steps:

  1. The service validates the input data. Among other things, it checks that the reservation is in the future. It uses GetCurrentDateTime for this.
  2. It queries the database for existing reservations. It uses ReadReservations for this.
  3. It uses complex business logic to determine whether to accept the reservation. This essentially implements the Maître d' kata.
  4. If it accepts the reservation, it stores it. It uses Create for this.
These steps manifest as this Controller method:

public ActionResult Post(ReservationDto dto)
{
    if (!DateTime.TryParse(dto.Date, out var _))
        return BadRequest($"Invalid date: {dto.Date}.");
 
    Reservation reservation = Mapper.Map(dto);
 
    if (reservation.Date < Clock.GetCurrentDateTime())
        return BadRequest($"Invalid date: {reservation.Date}.");
 
    var reservations = Repository.ReadReservations(reservation.Date);
    bool accepted = maîtreD.CanAccept(reservations, reservation);
    if (!accepted)
        return StatusCode(StatusCodes.Status500InternalServerError, "Couldn't accept.");
 
    Repository.Create(reservation);
    return Ok();
}

Clock and Repository are injected dependencies, and maîtreD is an object that implements the decision logic as the pure CanAccept function.
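
The MaîtreD class isn't shown in this article. To give a sense of its shape, here's a minimal sketch of what such a pure decision function could look like. It's not the implementation from the Git repository (which addresses the full Maître d' kata); it assumes that Table exposes a Seats property and only checks summed quantities against total capacity for overlapping reservations:

public class MaîtreD
{
    public MaîtreD(TimeSpan seatingDuration, IReadOnlyCollection<Table> tables)
    {
        SeatingDuration = seatingDuration;
        Tables = tables;
    }
 
    public TimeSpan SeatingDuration { get; }
    public IReadOnlyCollection<Table> Tables { get; }
 
    // Pure: the result depends only on the arguments and immutable state.
    public bool CanAccept(IEnumerable<Reservation> existingReservations, Reservation candidate)
    {
        var overlappingQuantity = existingReservations
            .Where(r => Overlaps(r, candidate))
            .Sum(r => r.Quantity);
        var capacity = Tables.Sum(t => t.Seats); // Assumes a Seats property.
        return overlappingQuantity + candidate.Quantity <= capacity;
    }
 
    private bool Overlaps(Reservation existing, Reservation candidate)
    {
        return existing.Date < candidate.Date + SeatingDuration
            && candidate.Date < existing.Date + SeatingDuration;
    }
}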

Composition #

The Post method is defined on a class called ReservationsController with these dependencies:

public ReservationsController(
    TimeSpan seatingDuration,
    IReadOnlyCollection<Table> tables,
    IReservationsRepository repository,
    IClock clock)

The seatingDuration and tables arguments are primitive dependencies used to configure the maîtreD object. I could also have injected maîtreD as a concrete dependency, but I decided against that for no particular reason.

There's no logging dependency, but the system still logs. Like in the previous example, logging is a cross-cutting concern and exclusively addressed through composition:

if (controllerType == typeof(ReservationsController))
{
    var l = new ScopedLog(new FileLog(LogFile));
    var controller = new ReservationsController(
        SeatingDuration,
        Tables,
        new LogReservationsRepository(
            new SqlReservationsRepository(ConnectionString),
            l),
        new LogClock(
            new SystemClock(),
            l));
    Logs.AddOrUpdate(controller, l, (_, x) => x);
    return controller;
}

Each dependency is wrapped by a logger. We'll return to that in a minute, but consider first the actual implementations.

Using the system clock #

Using the system clock is easy:

public class SystemClock : IClock
{
    public DateTime GetCurrentDateTime()
    {
        return DateTime.Now;
    }
}

This implementation of IClock simply delegates to DateTime.Now. Again, no logging service is injected.

Using the database #

Using the database isn't much harder. I don't find that ORMs offer any benefits, so I prefer to implement database functionality using basic database APIs:

public void Create(Reservation reservation)
{
    using (var conn = new SqlConnection(ConnectionString))
    using (var cmd = new SqlCommand(createReservationSql, conn))
    {
        cmd.Parameters.Add(
            new SqlParameter("@Guid", reservation.Id));
        cmd.Parameters.Add(
            new SqlParameter("@Date", reservation.Date));
        cmd.Parameters.Add(
            new SqlParameter("@Name", reservation.Name));
        cmd.Parameters.Add(
            new SqlParameter("@Email", reservation.Email));
        cmd.Parameters.Add(
            new SqlParameter("@Quantity", reservation.Quantity));
 
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}
 
private const string createReservationSql = @"
    INSERT INTO [dbo].[Reservations] ([Guid], [Date], [Name], [Email], [Quantity])
    OUTPUT INSERTED.Id
    VALUES (@Guid, @Date, @Name, @Email, @Quantity)";

The above code snippet implements the Create method of the IReservationsRepository interface. Please refer to the Git repository for the full code if you need more details.
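
For symmetry, here's a sketch of how the ReadReservations query could look with the same basic ADO.NET API. It's not copied from the repository; in particular, it assumes that Reservation has settable properties, whereas the actual type may use a constructor:

public IEnumerable<Reservation> ReadReservations(DateTime date)
{
    var result = new List<Reservation>();
    using (var conn = new SqlConnection(ConnectionString))
    using (var cmd = new SqlCommand(readByDateSql, conn))
    {
        cmd.Parameters.Add(new SqlParameter("@Date", date.Date));
 
        conn.Open();
        using (var rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
            {
                // Assumes settable properties; the repository's Reservation type may differ.
                result.Add(new Reservation
                {
                    Id = (Guid)rdr["Guid"],
                    Date = (DateTime)rdr["Date"],
                    Name = (string)rdr["Name"],
                    Email = (string)rdr["Email"],
                    Quantity = (int)rdr["Quantity"]
                });
            }
        }
    }
    return result;
}
 
private const string readByDateSql = @"
    SELECT [Guid], [Date], [Name], [Email], [Quantity]
    FROM [dbo].[Reservations]
    WHERE CONVERT(DATE, [Date]) = @Date";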

If you prefer to implement your database functionality with an ORM, or in another way, you can do that. It doesn't change the architecture of the system. No logging service is required to interact with the database.

Compose with logging #

As the above composition code snippet suggests, logging is implemented with Decorators. The ultimate implementation of IClock is SystemClock, but the Composition Root decorates it with LogClock:

public class LogClock : IClock
{
    public LogClock(IClock inner, ScopedLog log)
    {
        Inner = inner;
        Log = log;
    }
 
    public IClock Inner { get; }
    public ScopedLog Log { get; }
 
    public DateTime GetCurrentDateTime()
    {
        var currentDateTime = Inner.GetCurrentDateTime();
        Log.Observe(
            new Interaction
            {
                Operation = nameof(GetCurrentDateTime),
                Output = currentDateTime
            });
        return currentDateTime;
    }
}

ScopedLog is a Concrete Dependency that, among other members, affords the Observe method. Notice that LogClock implements IClock by decorating another polymorphic IClock instance. It delegates functionality to inner, logs the currentDateTime and returns it.
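
Neither ScopedLog nor Interaction is shown in this article; the repository has the real definitions. The following is only a rough sketch of the shape implied by the code above, omitting the constructor that wraps the FileLog from the composition snippet, as well as the code that serialises and writes the collected entry when the request completes:

public class Interaction
{
    public string Operation { get; set; }
    public object Input { get; set; }
    public object Output { get; set; }
}
 
public class ScopedLog
{
    private readonly object syncRoot = new object();
    private readonly List<Interaction> interactions = new List<Interaction>();
 
    // Collects every impure interaction observed within a single HTTP request.
    public void Observe(Interaction interaction)
    {
        lock (syncRoot)
            interactions.Add(interaction);
    }
}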

The LogReservationsRepository class implements the same pattern:

public class LogReservationsRepository : IReservationsRepository
{
    public LogReservationsRepository(IReservationsRepository inner, ScopedLog log)
    {
        Inner = inner;
        Log = log;
    }
 
    public IReservationsRepository Inner { get; }
    public ScopedLog Log { get; }
 
    public void Create(Reservation reservation)
    {
        Log.Observe(
            new Interaction
            {
                Operation = nameof(Create),
                Input = new { reservation }
            });
        Inner.Create(reservation);
    }
 
    public IEnumerable<Reservation> ReadReservations(DateTime date)
    {
        var reservations = Inner.ReadReservations(date);
        Log.Observe(
            new Interaction
            {
                Operation = nameof(ReadReservations),
                Input = new { date },
                Output = reservations
            });
        return reservations;
    }
}

This architecture not only implements the desired functionality, but also Goldilogs: not too little, not too much, but just what you need. Notice that I didn't have to change any of my Domain Model or HTTP-specific code to enable logging. This cross-cutting concern is enabled entirely via composition.

Repeatability #

An HTTP request like this:

POST /reservations/ HTTP/1.1
Content-Type: application/json

{
  "id": "7bc3fc93-a777-4138-8630-a805e7246335",
  "date": "2020-03-20 18:45:00",
  "name": "Kozue Kaburagi",
  "email": "ninjette@example.net",
  "quantity": 4
}

produces a log entry like this:

{
  "entry": {
    "time": "2020-01-02T09:50:34.2678703+01:00",
    "operation": "Post",
    "input": {
      "dto": {
        "id": "7bc3fc93-a777-4138-8630-a805e7246335",
        "date": "2020-03-20 18:45:00",
        "email": "ninjette@example.net",
        "name": "Kozue Kaburagi",
        "quantity": 4
      }
    },
    "output": null
  },
  "interactions": [
    {
      "time": "2020-01-02T09:50:34.2726143+01:00",
      "operation": "GetCurrentDateTime",
      "input": null,
      "output": "2020-01-02T09:50:34.2724012+01:00"
    },
    {
      "time": "2020-01-02T09:50:34.3571224+01:00",
      "operation": "ReadReservations",
      "input": { "date": "2020-03-20T18:45:00" },
      "output": [
        {
          "id": "c3cbfbc7-6d64-4ead-84ef-7f89de5b7e1c",
          "date": "2020-03-20T19:00:00",
          "email": "emp@example.com",
          "name": "Elissa Megan Powers",
          "quantity": 3
        }
      ]
    },
    {
      "time": "2020-01-02T09:50:34.3587586+01:00",
      "operation": "Create",
      "input": {
        "reservation": {
          "id": "7bc3fc93-a777-4138-8630-a805e7246335",
          "date": "2020-03-20T18:45:00",
          "email": "ninjette@example.net",
          "name": "Kozue Kaburagi",
          "quantity": 4
        }
      },
      "output": null
    }
  ],
  "exit": {
    "time": "2020-01-02T09:50:34.3645105+01:00",
    "operation": null,
    "input": null,
    "output": { "statusCode": 200 }
  }
}

I chose to gather all information regarding a single HTTP request into a single log entry and format it as JSON. I once worked with an organisation that used the ELK stack in that way, and it made it easy to identify and troubleshoot issues in production.

You can use such a log file to reproduce the observed behaviour, for example in a unit test:

[Fact]
public void NinjetteRepro()
{
    string log = Log.LoadFile("Ninjette.json");
    (ReservationsController sut, ReservationDto dto) =
        Log.LoadReservationsControllerPostScenario(log);
 
    var actual = sut.Post(dto);
 
    Assert.IsAssignableFrom<OkResult>(actual);
}

This test reproduces the behaviour that was recorded in the above JSON log. While there was already one existing reservation (returned from ReadReservations), the system had enough remaining capacity to accept the new reservation. Therefore, the expected result is an OkResult.

Replay #

You probably noticed the helper methods Log.LoadFile and Log.LoadReservationsControllerPostScenario. This API is just a prototype to get the point across. There's little to say about LoadFile, since it just reads the file. The LoadReservationsControllerPostScenario method performs the bulk of the work. It parses the JSON string into a collection of observations. It then feeds these observations to test-specific implementations of the dependencies required by ReservationsController.
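
To give an idea of the kind of parsing involved, the observations for the clock could be pulled out of the log's "interactions" array along these lines. This is a hypothetical sketch using Json.NET, not the repository's actual helper code:

// Hypothetical sketch; requires System.Linq and Newtonsoft.Json.Linq.
private static IReadOnlyList<DateTime> ReadClockObservations(string log)
{
    var json = JObject.Parse(log);
    return json["interactions"]
        .Where(i => (string)i["operation"] == "GetCurrentDateTime")
        .Select(i => (DateTime)i["output"])
        .ToList();
}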

For example, here's the test-specific implementation of IClock:

public class ReplayClock : IClock
{
    private readonly Queue<DateTime> times;
 
    public ReplayClock(IEnumerable<DateTime> times)
    {
        this.times = new Queue<DateTime>(times);
    }
 
    public DateTime GetCurrentDateTime()
    {
        return times.Dequeue();
    }
}

The above JSON log example only contains a single observation of GetCurrentDateTime, but an arbitrary log may contain zero, one, or several observations. The idea is to replay them, starting with the earliest. ReplayClock just loads them into a Queue and dequeues one every time GetCurrentDateTime executes.
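
As a usage example, replaying the single clock observation from the log above could look like this:

var clock = new ReplayClock(new[]
{
    DateTime.Parse("2020-01-02T09:50:34.2724012+01:00")
});
var now = clock.GetCurrentDateTime(); // Returns the observation from the log.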

The test-specific ReplayReservationsRepository class works the same way:

public class ReplayReservationsRepository : IReservationsRepository
{
    private readonly IDictionary<DateTime, Queue<IEnumerable<Reservation>>> reads;
 
    public ReplayReservationsRepository(
        IDictionary<DateTime, Queue<IEnumerable<Reservation>>> reads)
    {
        this.reads = reads;
    }
 
    public void Create(Reservation reservation)
    {
    }
 
    public IEnumerable<Reservation> ReadReservations(DateTime date)
    {
        return reads[date].Dequeue();
    }
}

You'll notice that in order to implement ReadReservations, the ReplayReservationsRepository class needs a dictionary of queues. The ReplayClock class didn't need a dictionary, because GetCurrentDateTime takes no input. The ReadReservations method, on the other hand, takes a date as a method argument. You might have observations of ReadReservations for different dates, and multiple observations for each date. That's the reason that ReplayReservationsRepository needs a dictionary of queues.
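
To make that data structure concrete, here's a hypothetical construction of the reads argument for the scenario in the JSON log above, with a single date and a single observed result set. The existingReservations variable stands for the reservations parsed from the logged ReadReservations output:

var reads = new Dictionary<DateTime, Queue<IEnumerable<Reservation>>>
{
    [new DateTime(2020, 3, 20, 18, 45, 0)] =
        new Queue<IEnumerable<Reservation>>(new[] { existingReservations })
};
var repository = new ReplayReservationsRepository(reads);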

The Create method doesn't return anything, so I decided that this method should do nothing.

The LoadReservationsControllerPostScenario function parses the JSON log and creates instances of these Test Doubles.

var repository = new ReplayReservationsRepository(reads);
var clock = new ReplayClock(times);
var controller = new ReservationsController(seatingDuration, tables, repository, clock);

And that, together with the parsed HTTP input, is what LoadReservationsControllerPostScenario returns:

return (controller, dto);

This is only a prototype to illustrate the point that you can reproduce an interaction if you have all the impure inputs and outputs. The details are available in the source code repository.

Summary #

This article demonstrated how making the distinction between pure and impure code is useful in many situations. For logging purposes, you only need to log the impure inputs and outputs. That's neither too little logging, nor too much, but just right: Goldilogs.

Model any (potentially) impure interaction as a dependency and use Dependency Injection. This enables you to reproduce observed behaviour from logs. Don't inject logging services into your Controllers or Domain Models.


Repeatable execution in Haskell

Monday, 30 March 2020 08:02:00 UTC

A way to figure out what to log, and what not to log, using Haskell.

This article is part of a series of articles about repeatable execution. The previous article argued that if you've logged the impure actions that a system made, you have enough information to reproduce what happened.

In most languages, it's difficult to discriminate between pure functions and impure actions, but Haskell explicitly makes that distinction. I often use it for proof of concepts for that reason. I'll do that here as well.

This proof of concept is mostly to verify what a decade of functional programming has already taught me. For the functionality that the previous article introduced, the impure actions involve a database and the system clock.

The code shown in this article is available on GitHub.

Pure interactions #

I'll use free monads to model impure interactions as pure functions. For this particular example code base, an impureim sandwich would have been sufficient. I do, however, get the impression that many readers find it hard to extrapolate from impureim sandwiches to a general architecture. For the benefit of those readers, the example uses free monads.

The system clock interaction is the simplest:

newtype ClockInstruction next = CurrentTime (LocalTime -> next) deriving Functor

There's only one instruction. It takes no input, but returns the current time and date.

For database interactions, I went through a few iterations and arrived at this set of instructions:

data ReservationsInstruction next =
    ReadReservation UUID (Maybe Reservation -> next)
  | ReadReservations LocalTime ([Reservation] -> next)
  | CreateReservation Reservation next
  deriving Functor

There are two queries and a command. The intent with the CreateReservation command is to create a new reservation row in the database. The two queries fetch a single reservation based on ID, or a set of reservations based on a date. A central type for this instruction set is Reservation:

data Reservation = Reservation
  { reservationId :: UUID
  , reservationDate :: LocalTime
  , reservationName :: String
  , reservationEmail :: String
  , reservationQuantity :: Int
  } deriving (Eq, Show, Read, Generic)

The program has to interact both with the system clock and the database, so ultimately it turned out to be useful to combine these two instruction sets into one:

type ReservationsProgram = Free (Sum ReservationsInstruction ClockInstruction)

I used the Sum functor to combine the two instruction sets, and then turned them into a Free monad.

With free monads, I find that my code becomes more readable if I define helper functions for each instruction:

readReservation :: UUID -> ReservationsProgram (Maybe Reservation)
readReservation rid = liftF $ InL $ ReadReservation rid id
 
readReservations :: LocalTime -> ReservationsProgram [Reservation]
readReservations t = liftF $ InL $ ReadReservations t id
 
createReservation :: Reservation -> ReservationsProgram ()
createReservation r = liftF $ InL $ CreateReservation r ()
 
currentTime :: ReservationsProgram LocalTime
currentTime = liftF $ InR $ CurrentTime id

There's much else going on in the code base, but that's how I model feature-specific impure actions.

Receive a reservation #

The central feature of the service is to receive and handle an HTTP POST request, as described in the introductory article. When a document arrives, it triggers a series of non-trivial steps:

  1. The service validates the input data. Among other things, it checks that the reservation is in the future. It uses currentTime for this.
  2. It queries the database for existing reservations. It uses readReservations for this.
  3. It uses complex business logic to determine whether to accept the reservation. This essentially implements the Maître d' kata.
  4. If it accepts the reservation, it stores it. It uses createReservation for this.
These steps manifest as this function:

tryAccept :: NominalDiffTime
          -> [Table]
          -> Reservation
          -> ExceptT (APIError ByteString) ReservationsProgram ()
tryAccept seatingDuration tables r = do
  now <- lift currentTime
  _ <- liftEither $ validateReservation now r
  reservations <-
    fmap (removeNonOverlappingReservations seatingDuration r) <$>
    lift $ readReservations $ reservationDate r
 
  _ <- liftEither $ canAccommodateReservation tables reservations r
 
  lift $ createReservation r

If you're interested in details, the code is available on GitHub. I may later write other articles about interesting details.

In the context of repeatable execution and logging, the key is that this is a pure function. It does, however, return a ReservationsProgram (free monad), so it's not going to do anything until interpreted. The interpreters are impure, so this is where logging has to take place.

HTTP API #

The above tryAccept function is decoupled from boundary concerns. It has little HTTP-specific functionality.

I've written the actual HTTP API using Servant. The following function translates the above Domain Model to an HTTP API:

type ReservationsProgramT = FreeT (Sum ReservationsInstruction ClockInstruction)
 
reservationServer :: NominalDiffTime
                  -> [Table]
                  -> ServerT ReservationAPI (ReservationsProgramT Handler)
reservationServer seatingDuration tables = getReservation :<|> postReservation
  where
    getReservation rid = do
      mr <- toFreeT $ readReservation rid
      case mr of
        Just r -> return r
        Nothing -> throwError err404
    postReservation r = do
      e <- toFreeT $ runExceptT $ tryAccept seatingDuration tables r
      case e of
        Right () -> return ()
        Left (ValidationError err) -> throwError $ err400 { errBody = err }
        Left  (ExecutionError err) -> throwError $ err500 { errBody = err }

This API also exposes a reservation as a resource you can query with a GET request, but I'm not going to comment much on that. It uses the above readReservation helper function, but there's little logic involved in the implementation.

The above reservationServer function implements, by the way, only a partial API. It defines the /reservations resource, as explained in the overview article. Its type is defined as:

type ReservationAPI =
       Capture "reservationId" UUID :> Get '[JSON] Reservation
  :<|> ReqBody '[JSON] Reservation :> Post '[JSON] ()

That's just one resource. Servant enables you to define many resources and combine them into a larger API. For this example, the /reservations resource is all there is, so I define the entire API like this:

type API = "reservations" :> ReservationAPI

You can also define your complete server from several partial services, but in this example, I only have one:

server = reservationServer

Had I had more resources, I could have combined several values with a combinator, but now that I have only reservationServer it seems redundant, I admit.

Hosting the API #

The reservationServer function, and thereby also server, returns a ServerT value. Servant ultimately demands a Server value to serve it. We need to transform the ServerT value into a Server value, which we can do with hoistServer:

runApp :: String -> Int -> IO ()
runApp connStr port = do
  putStrLn $ "Starting server on port " ++ show port ++ "."
  putStrLn "Press Ctrl + C to stop the server."
  ls <- loggerSet
  let logLn s = pushLogStrLn ls $ toLogStr s
  let hoistSQL = hoistServer api $ runInSQLServerAndOnSystemClock logLn $ pack connStr
  (seatingDuration, tables) <- readConfig
  logHttp <- logHttpMiddleware ls
  run port $ logHttp $ serve api $ hoistSQL $ server seatingDuration tables

The hoistServer function enables you to translate a ServerT api m into a ServerT api n value. Since Server is a type alias for ServerT api Handler, we need to translate the complicated monad returned from server into a Handler. The runInSQLServerAndOnSystemClock function does most of the heavy lifting.

You'll also notice that the runApp function configures some logging. Apart from some HTTP-level middleware, the logLn function logs a line to a text file. The runApp function passes it as an argument to the runInSQLServerAndOnSystemClock function. We'll return to logging later in this article, but first I find it instructive to outline what happens in runInSQLServerAndOnSystemClock.

As the name implies, two major actions take place. The function interprets database interactions by executing impure actions against SQL Server. It also interprets clock interactions by querying the system clock.

Using the system clock #

The system-clock-based interpreter is the simplest of the two interpreters. It interprets ClockInstruction values by querying the system clock for the current time:

runOnSystemClock :: MonadIO m => ClockInstruction (m a) -> m a
runOnSystemClock (CurrentTime next) = liftIO (zonedTimeToLocalTime <$> getZonedTime) >>= next

This function translates a ClockInstruction (m a) to an m a value by executing the impure getZonedTime function. From the returned ZonedTime value, it then extracts the local time, which it passes to next.

You may have two questions:

  • Why map ClockInstruction (m a) instead of ClockInstruction a?
  • Why MonadIO?
I'll address each in turn.

My ultimate goal with each of these interpreters is to compose them into runInSQLServerAndOnSystemClock. As described above, this function transforms ServerT API (ReservationsProgramT Handler) into a ServerT API Handler (also known as Server API). Another way to put this is that we need to collapse ReservationsProgramT Handler to Handler by, so to speak, removing ReservationsProgramT.

Recall that a type like ReservationsProgramT Handler is really in 'curried' form. This is actually the parametrically polymorphic type ReservationsProgramT Handler a. Likewise, Handler is also parametrically polymorphic: Handler a. What we need, then, is a function with the type ReservationsProgramT Handler a -> Handler a or, more generally, FreeT f m a -> m a. This follows because ReservationsProgramT is an alias for FreeT ..., and Handler is a container of a values.

There's a function for that in Control.Monad.Trans.Free called iterT:

iterT :: (Functor f, Monad m) => (f (m a) -> m a) -> FreeT f m a -> m a

This fits our need. For each of the functors in ReservationsProgramT, then, we need a function f (m a) -> m a. Specifically, for ClockInstruction, we need to define a function with the type ClockInstruction (Handler a) -> Handler a. Consider, however, the definition of Handler. It's a newtype over a newtype, so much wrapping is required. If I specifically wanted to return that explicit type, I'd have to take the IO value produced by getZonedTime and wrap it in Handler, which would require me to first wrap it in ExceptT, which again would require me to wrap it in Either. That's a lot of bother, but Handler is also a MonadIO instance, and that elegantly sidesteps the issue. By implementing runOnSystemClock with liftIO, it works for all MonadIO instances, including Handler.

Hopefully, that explains why runOnSystemClock has the type that it has.

Using the database #

The database interpreter is more complex than runOnSystemClock, but it follows the same principles. The reasoning outlined above also applies here.

runInSQLServer :: MonadIO m => Text -> ReservationsInstruction (m a) -> m a
runInSQLServer connStr (ReadReservation rid next) =
  liftIO (readReservation connStr rid) >>= next
runInSQLServer connStr (ReadReservations t next) =
  liftIO (readReservations connStr t) >>= next
runInSQLServer connStr (CreateReservation r next) =
  liftIO (insertReservation connStr r) >> next

Since ReservationsInstruction is a sum type with three cases, the runInSQLServer action has to handle all three. Each case calls a dedicated helper function. I'll only show one of these to give you a sense for how they look.

readReservations :: Text -> LocalTime -> IO [Reservation]
readReservations connStr (LocalTime d _) =
  let sql =
        "SELECT [Guid], [Date], [Name], [Email], [Quantity]\
        \ FROM [dbo].[Reservations]\
        \ WHERE CONVERT(DATE, [Date]) = " <> toSql d
  in withConnection connStr $ \conn -> fmap unDbReservation <$> query conn sql

You can see all the details about withConnection, unDbReservation, etcetera in the Git repository. The principal point is that these are just normal IO actions.

Basic composition #

The two interpreters are all we need to compose a working system:

runInSQLServerAndOnSystemClock :: MonadIO m => Text -> ReservationsProgramT m a -> m a
runInSQLServerAndOnSystemClock connStr = iterT go
  where go (InL rins) = DB.runInSQLServer connStr rins
        go (InR cins) = runOnSystemClock cins

The iterT function enables you to interpret a FreeT value, of which ReservationsProgramT is an alias. The go function just pattern-matches on the two cases of the Sum functor, and delegates to the corresponding interpreter.

This composition enables the system to run and do the intended work. You can start the server and make GET and POST requests against the /reservations resource, as outlined in the first article in this small series.

This verifies what I already hypothesized. This feature set requires two distinct sets of impure interactions:

  • Getting the current time
  • Querying and writing to a database
Once you've worked with Haskell for some time, you'll get good at predicting which actions are impure, and which functionality can be kept pure. The current result isn't surprising.

It does make it clear what ought to be logged. All the pure functionality can be reproduced if you have the inputs. You only need to log the impure interactions, and now you know what they are.

Compose with logging #

You need to log the impure operations, and you know that they're interacting with the system clock and the database. As usual, starting with the system clock is most accessible. You can write what's essentially a Decorator of any ClockInstruction interpreter:

logClock :: MonadIO m
         => (String -> IO ())
         -> (forall x. ClockInstruction (m x) -> m x)
         -> ClockInstruction (m a) -> m a
logClock logLn inner (CurrentTime next) = do
  output <- inner $ CurrentTime return
  liftIO $ writeLogEntry logLn "CurrentTime" () output
  next output

The logClock action decorates any inner interpreter with the logging action logLn. It returns an action of the same type as it decorates.

It relies on a helper function called writeLogEntry, which handles some of the formalities of formatting and time-stamping each log entry.

You can decorate any database interpreter in the same way:

logReservations :: MonadIO m
                => (String -> IO ())
                -> (forall x. ReservationsInstruction (m x) -> m x)
                -> ReservationsInstruction (m a) -> m a
logReservations logLn inner (ReadReservation rid next) = do
  output <- inner $ ReadReservation rid return
  liftIO $ writeLogEntry logLn "ReadReservation" rid output
  next output
logReservations logLn inner (ReadReservations t next) = do
  output <- inner $ ReadReservations t return
  liftIO $ writeLogEntry logLn "ReadReservations" t output
  next output
logReservations logLn inner (CreateReservation r next) = do
  output <- inner $ CreateReservation r (return ())
  liftIO $ writeLogEntry logLn "CreateReservation" r output
  next

The logReservations action follows the same template as logClock; only it has more lines of code because ReservationsInstruction is a discriminated union with three cases.

With these Decorator actions you can change the application composition so that it logs all impure inputs and outputs:

runInSQLServerAndOnSystemClock :: MonadIO m
                               => (String -> IO ())
                               -> Text
                               -> ReservationsProgramT m a -> m a
runInSQLServerAndOnSystemClock logLn connStr = iterT go
  where go (InL rins) = logReservations logLn (DB.runInSQLServer connStr) rins
        go (InR cins) = logClock logLn runOnSystemClock cins

This not only implements the desired functionality, but also Goldilogs: not too little, not too much, but just what you need. Notice that I didn't have to change any of my Domain Model or HTTP-specific code to enable logging. This cross-cutting concern is enabled entirely via composition.

Repeatability #

An HTTP request like this:

POST /reservations/ HTTP/1.1
Content-Type: application/json

{
  "id": "c3cbfbc7-6d64-4ead-84ef-7f89de5b7e1c",
  "date": "2020-03-20 19:00:00",
  "name": "Elissa Megan Powers",
  "email": "emp@example.com",
  "quantity": 3
}

produces a series of log entries like these:

LogEntry {logTime = 2019-12-29 20:21:53.0029235 UTC, logOperation = "CurrentTime", logInput = "()", logOutput = "2019-12-29 21:21:53.0029235"}
LogEntry {logTime = 2019-12-29 20:21:54.0532677 UTC, logOperation = "ReadReservations", logInput = "2020-03-20 19:00:00", logOutput = "[]"}
LogEntry {logTime = 2019-12-29 20:21:54.0809254 UTC, logOperation = "CreateReservation", logInput = "Reservation {reservationId = c3cbfbc7-6d64-4ead-84ef-7f89de5b7e1c, reservationDate = 2020-03-20 19:00:00, reservationName = \"Elissa Megan Powers\", reservationEmail = \"emp@example.com\", reservationQuantity = 3}", logOutput = "()"}
LogEntry {logTime = 2019-12-29 20:21:54 UTC, logOperation = "PostReservation", logInput = "\"{ \\\"id\\\": \\\"c3cbfbc7-6d64-4ead-84ef-7f89de5b7e1c\\\", \\\"date\\\": \\\"2020-03-20 19:00:00\\\", \\\"name\\\": \\\"Elissa Megan Powers\\\", \\\"email\\\": \\\"emp@example.com\\\", \\\"quantity\\\": 3 }\"", logOutput = "()"}

This is only a prototype to demonstrate what's possible. In an attempt to make things simple for myself, I decided to just log data by using the Show instance of each value being logged. In order to reproduce behaviour, I'll rely on the corresponding Read instance for the type. This was probably naive, and not a decision I would employ in a production system, but it's good enough for a prototype.

For example, the above log entry states that the CurrentTime instruction was evaluated and that the output was 2019-12-29 21:21:53.0029235. Second, the ReadReservations instruction was evaluated with the input 2020-03-20 19:00:00 and the output was the empty list ([]). The third line records that the CreateReservation instruction was evaluated with a particular input, and that the output was ().

The fourth and final record shows the actual values observed at the HTTP boundary.

You can load and parse the logged data into a unit test or an interactive session:

λ> l <- lines <$> readFile "the/path/to/the/log.txt"
λ> replayData = readReplayData l
λ> replayData
ReplayData {
  observationsOfPostReservation =
    [Reservation {
      reservationId = c3cbfbc7-6d64-4ead-84ef-7f89de5b7e1c,
      reservationDate = 2020-03-20 19:00:00,
      reservationName = "Elissa Megan Powers",
      reservationEmail = "emp@example.com",
      reservationQuantity = 3}],
  observationsOfRead = fromList [],
  observationsOfReads = fromList [(2020-03-20 19:00:00,[[]])],
  observationsOfCurrentTime = [2019-12-29 21:21:53.0029235]}
λ> r = head $ observationsOfPostReservation replayData
λ> r
Reservation {
  reservationId = c3cbfbc7-6d64-4ead-84ef-7f89de5b7e1c,
  reservationDate = 2020-03-20 19:00:00,
  reservationName = "Elissa Megan Powers",
  reservationEmail = "emp@example.com",
  reservationQuantity = 3}

(I've added line breaks and indentation to some of the output to make it more readable, compared to what GHCi produces.)

The most important thing to notice is the readReplayData function that parses the log file into Haskell data. I've also written a prototype of a function that can replay the actions as they happened:

λ> (seatingDuration, tables) <- readConfig
λ> replay replayData $ tryAccept seatingDuration tables r
Right ()

The original HTTP request returned 200 OK and that's exactly how reservationServer translates a Right () result. So the above interaction is a faithful reproduction of what actually happened.

Replay #

You may have noticed that I used a replay function above. This is only a prototype to get the point across. It's just another interpreter of ReservationsProgram (or rather, of an ExceptT wrapper of ReservationsProgram):

replay :: ReplayData -> ExceptT e ReservationsProgram a -> Either e a
replay d = replayImp d . runExceptT
  where
    replayImp :: ReplayData -> ReservationsProgram a -> a
    replayImp rd p = State.evalState (iterM go p) rd
    go (InL (ReadReservation rid next)) = replayReadReservation rid >>= next
    go (InL (ReadReservations t next)) = replayReadReservations t >>= next
    go (InL (CreateReservation _ next)) = next
    go (InR (CurrentTime next)) = replayCurrentTime >>= next

While this is compact Haskell code that I wrote, I still found it so abstruse that I decided to add a type annotation to a local function. It's not required, but I find that it helps me understand what replayImp does. It uses iterM (a cousin to iterT) to interpret the ReservationsProgram. The entire interpretation is stateful, so it runs in the State monad. Here's an example:

replayCurrentTime :: State ReplayData LocalTime
replayCurrentTime = do
  xs <- State.gets observationsOfCurrentTime
  let (observation:rest) = xs
  State.modify (\s -> s { observationsOfCurrentTime = rest })
  return observation

The replayCurrentTime function replays log observations of CurrentTime instructions. The observationsOfCurrentTime field is a list of observed values, parsed from a log. A ReservationsProgram might query the CurrentTime multiple times, so there could conceivably be several such observations. The idea is to replay them, starting with the earliest.

Each time the function replays an observation, it should remove it from the log. It does that by first retrieving all observations from the state. It then pattern-matches the observation from the rest of the observations. I execute my code with the -Wall option, so I'm puzzled that I don't get a warning from the compiler about that line. After all, the xs list could be empty. This is, however, prototype code, so I decided to ignore that issue.

Before the function returns the observation it updates the replay data by effectively removing the observation, but without touching anything else.

The replayReadReservation and replayReadReservations functions follow the same template. You can consult the source code repository if you're curious about the details. You may also notice that the go function doesn't do anything when it encounters a CreateReservation instruction. This is because that instruction has no return value, so there's no reason to consult a log to figure out what to return.

Summary #

The point of this article was to flesh out a fully functional feature (a vertical slice, if you're so inclined) in Haskell, in order to verify that the only impure actions involved are:

  • Getting the current time
  • Interacting with the application database
This turns out to be the case.

Furthermore, prototype code demonstrates that based on a log of impure interactions, you can repeat the logged execution.

Now that we know what is impure and what can be pure, we can reproduce the same architecture in C# (or another mainstream programming language).

Next: Repeatable execution in C#.


Comments

The Free Monad, like any monad, enforces sequential operations.

How would you deal with having to send multiple transactions (let's say to the db and via http), while also retrying n times if it fails?

2020-04-06 9:38 UTC

Jiehong, thank you for writing. I'm not sure that I can give you a complete answer, as this is something that I haven't experimented with in Haskell.

In C#, on the other hand, you can implement stability patterns like Circuit Breaker and retries with Decorators. I don't see why you can't do that in Haskell as well.

2020-04-10 10:15 UTC

Repeatable execution

Monday, 23 March 2020 08:17:00 UTC

What to log, and how to log it.

When I visit software organisations to help them make their code more maintainable, I often see code like this:

public ILog Log { get; }
 
public ActionResult Post(ReservationDto dto)
{
    Log.Debug($"Entering {nameof(Post)} method...");
    if (!DateTime.TryParse(dto.Date, out var _))
    {
        Log.Warning("Invalid reservation date.");
        return BadRequest($"Invalid date: {dto.Date}.");
    }
 
    Log.Debug("Mapping DTO to Domain Model.");
    Reservation reservation = Mapper.Map(dto);
 
    if (reservation.Date < DateTime.Now)
    {
        Log.Warning("Invalid reservation date.");
        return BadRequest($"Invalid date: {reservation.Date}.");
    }
 
    Log.Debug("Reading existing reservations from database.");
    var reservations = Repository.ReadReservations(reservation.Date);
    bool accepted = maîtreD.CanAccept(reservations, reservation);
    if (!accepted)
    {
        Log.Warning("Not enough capacity");
        return StatusCode(
            StatusCodes.Status500InternalServerError,
            "Couldn't accept.");
    }
 
    Log.Info("Adding reservation to database.");
    Repository.Create(reservation);
    Log.Debug($"Leaving {nameof(Post)} method...");
    return Ok();
}

Logging like this annoys me. It adds avoidable noise to the code, making it harder to read, and thus, more difficult to maintain.

Ideal #

The above code ought to look like this:

public ActionResult Post(ReservationDto dto)
{
    if (!DateTime.TryParse(dto.Date, out var _))
        return BadRequest($"Invalid date: {dto.Date}.");
 
    Reservation reservation = Mapper.Map(dto);
 
    if (reservation.Date < Clock.GetCurrentDateTime())
        return BadRequest($"Invalid date: {reservation.Date}.");
 
    var reservations = Repository.ReadReservations(reservation.Date);
    bool accepted = maîtreD.CanAccept(reservations, reservation);
    if (!accepted)
    {
        return StatusCode(
            StatusCodes.Status500InternalServerError,
            "Couldn't accept.");
    }
 
    Repository.Create(reservation);
    return Ok();
}

This is more readable. The logging statements are gone from the code, thereby amplifying the essential behaviour of the Post method. The noise is gone.

Wait a minute! you might say, You can't just remove logging! Logging is important.

Yes, I agree, and I didn't. This code still logs. It logs just what you need to log. No more, no less.

Types of logging #

Before we talk about the technical details, I think it's important to establish some vocabulary and context. In this article, I use the term logging broadly to describe any sort of action of recording what happened while software executed. There's more than one reason an application might have to do that:

  • Instrumentation. You may log to support your own work. The first code listing in this article is a typical example of this style of logging. If you've ever had the responsibility of having to support an application that runs in production, you know that you need insight into what happens. When people report strange behaviours, you need those logs to support troubleshooting.
  • Telemetry. You may log to support other people's work. You can write status updates, warnings, and errors to support operations. You can record Key Performance Indicators (KPIs) to support 'the business'.
  • Auditing. You may log because you're legally obliged to do so.
  • Metering. You may log who does what so that you can bill users based on consumption.
Regardless of motivation, I still consider these to be types of logging. All are cross-cutting concerns in that they're independent of application features. Logging records what happened, when it happened, who or what triggered the event, and so on. The difference between instrumentation, telemetry, auditing, and metering is only what you choose to persist.

Particularly when it comes to instrumentation, I often see examples of 'overlogging'. When logging is done to support future troubleshooting, you can't predict what you're going to need, so it's better to log too much data than too little.

It'd be even better to log only what you need. Not too little, not too much, but just the right amount of logging. Obviously, we should call this Goldilogs.

Repeatability #

How do you know what to log? How do you know that you've logged everything that you'll need, when you don't know your future needs?

The key is repeatability. Just like you should be able to reproduce builds and repeat deployments, you should also be able to reproduce execution.

If you can replay what happened when a problem manifested itself, you can troubleshoot it. You need to log just enough data to enable you to repeat execution. How do you identify that data?

Consider a line of code like this:

int z = x + y;

Would you log that?

It might make sense to log what x and y are, particularly if these values are run-time values (e.g. entered by a user, the result of a web service call, etc.):

Log.Debug($"Adding {x} and {y}.");
int z = x + y;

Would you ever log the result, though?

Log.Debug($"Adding {x} and {y}.");
int z = x + y;
Log.Debug($"Result of addition: {z}");

There's no reason to log the result of the calculation. Addition is a pure function; it's deterministic. If you know the inputs, you can always repeat the calculation to get the output. Two plus two is always four.

The more your code is composed from pure functions, the less you need to log.

Log only impure actions #

In principle, all code bases interleave pure functions with impure actions. In most procedural or object-oriented code, no attempt is made to separate the two:

A box of mostly impure (red) code with vertical stripes of green symbolising pure code.

I've here illustrated impure actions with red and pure functions with green. Imagine that this is a conceptual block of code, with execution flowing from top to bottom. When you write normal procedural or object-oriented code, most of the code will have some sort of local side effect in the form of a state change, a more system-wide side effect, or be non-deterministic. Occasionally, arithmetic calculation or similar will form small pure islands.

While you don't need to log the output of those pure functions, it hardly makes a difference, since most of the code is impure. It would be a busy log, in any case.

Once you shift towards functional-first programming, your code may begin to look like this instead:

A box of mostly pure (green) code with a few vertical stripes of red symbolising impure code.

You may still have some code that occasionally executes impure actions, but largely, most of the code is pure. If you know the inputs to all the pure code, you can reproduce that part of the code. This means that you only need to log the non-deterministic parts: the impure actions. Particularly, you need to log the outputs from the impure actions, because these impure output values become the inputs to the next pure block of code.

This style of architecture is what you'll often get with a well-designed F# code base, but you can also replicate it in C# or another object-oriented programming language. I'd also draw a diagram like this to illustrate how Haskell code works if you model interactions with free monads.

This is the most generally applicable example, so articles in this short series show a Haskell code base with free monads, as well as a C# code base.

In reality, you can often get away with an impureim sandwich:

A box with a thin red slice on top, a thick green middle, and a thin red slice at the bottom.

This architecture makes things simpler, including logging. You only need to log the initial and the concluding impure actions. The rest, you can always recompute.
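
As a sketch, the Post method from the Ideal section above can be grouped as such a sandwich. This isn't new functionality, only a reorganisation that makes the layers explicit: impure reads first, a pure decision in the middle, and an impure write at the end. Only the first and last blocks need logging.

public ActionResult Post(ReservationDto dto)
{
    if (!DateTime.TryParse(dto.Date, out var _))
        return BadRequest($"Invalid date: {dto.Date}.");
    Reservation reservation = Mapper.Map(dto);
 
    // Impure: gather all non-deterministic input up front. This is what you log.
    // (The trade-off is that the database is queried even if validation fails below.)
    var now = Clock.GetCurrentDateTime();
    var reservations = Repository.ReadReservations(reservation.Date);
 
    // Pure: given the values above, everything here is deterministic,
    // so none of it needs to be logged.
    if (reservation.Date < now)
        return BadRequest($"Invalid date: {reservation.Date}.");
    if (!maîtreD.CanAccept(reservations, reservation))
    {
        return StatusCode(
            StatusCodes.Status500InternalServerError,
            "Couldn't accept.");
    }
 
    // Impure: the concluding side effect. Log this, too.
    Repository.Create(reservation);
    return Ok();
}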

I could have implemented the comprehensive example code shown in the next articles as impureim sandwiches, but I chose to use free monads in the Haskell example, and Dependency Injection in the C# example. I did this in order to offer examples from which you can extrapolate a more complex architecture for your production code.

Examples #

I've produced two equivalent example code bases to show how to log just enough data. The first is in Haskell because it's the best way to be sure that pure and impure code is properly separated.

Both example applications have the same externally visible behaviour. They showcase a focused vertical slice of a restaurant reservation system. The only feature they support is the creation of a reservation.

Clients make reservations by making an HTTP POST request to the reservation system:

POST /reservations HTTP/1.1
Content-Type: application/json

{
  "id": "84cef648-1e5f-467a-9d13-1b81db7f6df3",
  "date": "2021-12-21 19:00:00",
  "email": "mark@example.com",
  "name": "Mark Seemann",
  "quantity": 4
}

This is an attempt to make a reservation for four people on December 21, 2021, at seven in the evening. Both code bases support this HTTP API.

If the web service accepts the reservation, it'll write the reservation as a record in a SQL Server database. The table is defined as:

CREATE TABLE [dbo].[Reservations] (
    [Id]         INT                NOT NULL IDENTITY,
    [Guid]       UNIQUEIDENTIFIER   NOT NULL UNIQUE,
    [Date]       DATETIME2          NOT NULL,
    [Name]       NVARCHAR (50)      NOT NULL,
    [Email]      NVARCHAR (50)      NOT NULL,
    [Quantity]   INT                NOT NULL,
    PRIMARY KEY CLUSTERED ([Id] ASC)
)

Both implementations of the service can run on the same database.

The examples follow in separate articles:

  • Repeatable execution in Haskell
  • Repeatable execution in C#

Readers not comfortable with Haskell are welcome to skip directly to the C# article.

Log metadata #

In this article series, I focus on run-time data. The point is that there's a formal method to identify what to log: Log the inputs to and outputs from impure actions.

I don't focus on metadata, but apart from run-time data, each log entry should be accompanied by metadata. As a minimum, each entry should come with information about the time it was observed, but here's a list of metadata to consider:

  • Date and time of the log entry. Make sure to include the time zone, or alternatively, log exclusively in UTC.
  • The version of the software that produced the entry. This is particularly important if you deploy new versions of the software several times a day.
  • The user account or security context in which the application runs.
  • The machine ID, if you consolidate server farm logs in one place.
  • Correlation IDs, if present.
I don't claim that this list is complete, but I hope it can inspire you to add the metadata that you need.

Conclusion #

You only need to log what happens in impure actions. In a normal imperative or object-oriented code base, this is almost a useless selection criterion, because most of what happens is impure. Thus, you need to log almost everything.

There are many benefits to be had from moving towards a functional architecture. One of them is that it simplifies logging. Even a functional-first approach, as is often seen in idiomatic F# code bases, can simplify your logging efforts. The good news is that you can adopt a similar architecture in object-oriented code. You don't even have to compromise the design.

I've worked on big C# code bases where we logged all the impure actions. It was typically less than a dozen impure actions per HTTP request. When there was a problem in production, I could usually reproduce what caused it based on the logs.

You don't have to overlog to be able to troubleshoot your production code. Log the data that matters, and only that. Log the impure inputs and outputs.

Next: Repeatable execution in Haskell.


Comments

I like the simplicity of "log the impure inputs and outputs", and logging to ensure repeatability. But consider a common workflow: Load a (DDD) aggregate from the DB, call pure domain logic on it, and store the result.

The aggregate may be very complex, e.g. an order with not just many properties itself, but also many sub-entities (line items, etc.) and value objects. In order to get repeatability, you need to log the entire aggregate that was loaded. This is hard/verbose (don't get too tempted by F#'s nice stringification of records and unions – you'll want to avoid logging sensitive information), and you'll still end up with huge multi-line dumps in your log (say, 10 lines for the order and 10 lines per line item = 100 lines for an order with 9 line items, for a single operation / log statement).

Intuitively to me, that seems like a silly amount of logging, but I can't see any alternative if you want to get full repeatability in order to debug problems. Thoughts?

2020-04-02 10:21 UTC
...you'll want to avoid logging sensitive information...

If your application caches sensitive information, then it would be reasonable to only cache an encrypted version. That way, logging the information is not a security issue and neither is holding that information in memory (where malware could read it via memory-scraping).

...you'll still end up with huge multi-line dumps in your log...

Not all logging is meant to be directly consumed by humans. Structured logging makes it easy for computers to consume logging events. For an event sourcing architecture (such as the Elm architecture), one can record all events and then create tooling to allow playback. I hope Elmish.WPF gets something like this if/when some of the Fable debugging tools are ported to F#.

2020-04-04 19:49 UTC

Christer, thank you for writing. There are two different concerns, as far as I can see. I'd like to start with the size issue.

The size of software data is unintuitive to the human brain. A data structure that looks overwhelmingly vast to us is nothing but a few hundred kilobytes in raw data. I often have that discussion regarding API design, but the same arguments could apply to logging. How much would 100 lines of structured JSON entail?

Let's assume that some JSON properties are numbers (prices, quantities, IDs, etcetera) while others could be unbounded strings. Let's assume that the numbers all take up four bytes, while Unicode strings are each 100 bytes on average. The average byte size of a 'line' could be around 50 bytes, depending on the proportion of numbers to strings. 100 lines, then, would be 5,000 bytes, or around 5 kB.

Even if data structures were a few orders of magnitude larger, that in itself wouldn't keep me up at night.

Of course, that's not the whole story. The volume of data is also important. If you log a hundred such entries every second, it obviously adds up. It could be prohibitive.

That scenario doesn't fit my experience, but for the sake of argument, let's assume that that's the case. What's the alternative to logging the impure operations?

You can decide to log less, but that has to be an explicit architectural decision, because if you do that, there's going to be behaviour that you can't reproduce. Logging all impure operations is the minimum amount of logging that'll guarantee that you can fully reproduce all behaviour. You may decide that there's behaviour that you don't need to be able to reconstruct, but I think that this ought to be an explicit decision.

There may be a better alternative. It also addresses the issue regarding sensitive data.

Write pure functions that take as little input as possible, and produce as little output as possible. In the realm of security design there's the concept of datensparsamkeit (data frugality). Only deal with the data you really need.

Does that pure function really need to take as input an entire aggregate, or would a smaller projection do? If the latter, implement the impure operation so that it returns the projection. That's a smaller data set, so less to log.

The same goes for sensitive input. Perhaps instead of a CPR number the function only needs an age value. If so, then you only need to log the age.
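
As a (hypothetical) illustration, such a projection-returning operation could be as narrow as this; the small return value is then all you'd have to log:

public interface ISeatingProjection
{
    // Hypothetical projection: instead of returning entire aggregates,
    // the impure query returns only the number that the pure function needs.
    int ReadReservedSeats(DateTime date);
}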

These are deliberate design decisions you must take. You don't get such solutions for free, but you can if you will.

2020-04-09 15:06 UTC

Thank you for the reply. It makes sense that this should be a deliberate design decision.

I'm working full-time with F# myself, and would be very interested in seeing how you think this could be solved in F#. In this series, you are demonstrating solutions for Haskell (free monads) and C# (dependency injection), but as you have alluded to previously on your blog, neither of those are idiomatic solutions in F# (free monads are cumbersome without higher-kinded types, and DI primarily fits an object-oriented architecture).

I realize you may not choose to write another blog post in this series tackling F# (though it would nicely "fill the gap" between Haskell and C#, and you're definitely the one to write it!), but even a few keywords/pointers would be helpful.

2020-04-10 20:54 UTC

Christer, thank you for writing. I am, indeed, not planning to add another article to this small series. Not that writing the article itself would be too much trouble, but in order to stay on par with the two other articles, I'd have to develop the corresponding F# code base. That currently doesn't look like it'd be the best use of my time.

In F# you can use partial application for dependency injection. I hope that nothing I wrote gave the impression that this isn't idiomatic F#. What I've demonstrated is that it isn't functional, but since F# is a multiparadigmatic language, that's still fine.

The C# example in this article series shows what's essentially an impureim sandwich, so it shouldn't be too hard to translate that architecture to F#. It's already quite functional.

2020-04-14 8:03 UTC

Conway's Law: latency versus throughput

Monday, 16 March 2020 06:46:00 UTC

Organising work in one way optimises for low latency; in another for throughput.

It's a cliché that the software industry is desperate for talent. I also believe that it's a myth. As I've previously observed, the industry seems desperate for talent within commute range. The implication is that although we perform some of the most intangible and digitised work imaginable, we're expected to be physically present in an office.

Since 2014 I've increasingly been working from home, and I love it. I also believe that it's an efficient way to develop software, but not only for the reasons usually given.

I believe that distributed, asynchronous software development optimises throughput, but may sacrifice reaction time (i.e. increase latency).

The advantages of working in an office #

It's easy to criticise office work, but if it's so unpopular, why is it still the mainstream?

I think that there's a multitude of answers to that question. One is that this may be the only way that management can imagine. Since programming is so intangible, it's impossible to measure productivity. What a manager can do, though, is to watch who arrives early, who's the last to leave, and who seems to be always in front of his or her computer, or in a meeting, and so on.

Another answer to the question is that people actually like working together. I currently advise IDQ on software development principles and architecture. They have a tight-knit development team. The first day I visited them, I could feel a warm and friendly vibe. I've been visiting them regularly for about a year, now, and the first impression has proven correct. As we Danes say, that work place is positively hyggelig.

Some people also prefer to go to the office to have a formal boundary between their professional and private lives.

Finally, if you're into agile software development, you've probably heard about the benefits of team co-location.

When the team is located in the same room, working towards the same goals, communication is efficient - or is it?

You can certainly get answers to your questions quickly. All you have to do is to interrupt the person who can answer. If you don't know who that is, you just interrupt everybody until you've figured it out. While offices are interruption factories (as DHH puts it), this style of work can reduce latency.

If you explicitly follow e.g. lean software development and implement something like one-piece flow, you can reduce your cycle time. The less delay between activities, the faster you can deliver value. Once you've delivered one piece (e.g. a feature), you move on to the next.

Alternating blocks of activities and delays going from left to right.

If this is truly the goal, then putting all team members in the same office makes sense. You don't get higher communications bandwidth than when you're physically together. All the subtle intonations of the way your colleagues speak, the non-verbal cues, etcetera are there if you know how to interpret them.

Consequences of team co-location #

I've seen team co-location work for small teams. People can pair program or even mob program. You can easily draw on the expertise of your co-workers. It does require, however, that everyone respects boundaries.

It's a balancing act. You may get your answer sooner, but your interruption could break your colleague's concentration. The net result could be negative productivity.

While I've seen team co-location work, I've seen it fail more frequently. There are many reasons for this.

First, there's all the interruptions. Most programmers don't like being interrupted.

Second, the opportunity for ad-hoc communication easily leads to poor software architecture. This follows from Conway's law, which argues that

"Any organization that designs a system [...] will inevitably produce a design whose structure is a copy of the organization's communication structure."

I know that it's not a law in any rigid sense of the word, but it can often be fruitful to keep an eye out for this sort of interaction. Based on my experience, it seems to happen often.

Ad-hoc office communication leads to ad-hoc communication structures in the code. There's typically little explicit architecture. Knowledge is in the heads of people.

Such an organisation tends to have an oral culture. There's no permanence of knowledge, no emphasis on readability of code (because you can always ask someone if there's code you don't understand), and meetings all the time.

I once worked as a consultant for a company where there was only one old-timer around. He spent most of his time in meetings, because he knew all the intricate details of how everything worked and talked together, and other people needed to know.

After I'd been involved with that (otherwise wonderful) company on and off for a few years, I accumulated some knowledge as well, and people wanted to have meetings with me.

In the beginning, I obliged. Then it turned out that a week after I'd had a meeting, I'd be called to what would essentially be the same meeting again. Why? Because some other stakeholder heard about the first meeting and decided that he or she also required that information. The solution? Call another meeting.

My counter-move was to begin to write things down. When people would call a meeting, I'd ask for an agenda. That alone filtered away more than half of the meetings. When I did receive an agenda, I could often reply:

"Based on the agenda, I believe you'll find everything you need to know here. If not, please let me know what's missing so that I can update the document"

I'd attach said document. By doing that, I eliminated ninety percent of my meetings.

Notice what I did. I changed the communication structure - at least locally around me. Soon after, I went remote with that client, and had a few successful years doing that.

I hope that the previous section outlined that working in an office can be effective, but as I've now outlined, it can also be dysfunctional.

If you truly must deliver as soon as possible, because if you don't the organisation isn't going to be around in five years, then office work, with its low communications latency, may be the best option.

Remote office work #

I often see companies advertise for programmers. When remote work is an option, it often comes with the qualification that it must be within a particular country, or a particular time zone.

There can be legal or bureaucratic reasons why a company only wants to hire within a country. I get that, but I consider a time zone requirement a danger sign. The same goes for "we use Slack" or whatever other 'team room' instant messaging technology is cool these days.

That tells me that while the company allows people to be physically not in the office, they must still obey office hours. This indicates to me that communication remains ad-hoc and transient. Again, code quality suffers.

These days, because of the Corona virus, many organisations deeply entrenched in the oral culture of co-location find that they must now work as a distributed team. They try to address the situation by setting up day-long video conference calls.

It may work in an office, but it's not the best fit for a distributed team.

Distributed asynchronous software development #

Decades of open-source development has shown another way. Successful open-source software (OSS) projects are distributed and use asynchronous communication channels (mailing lists, issue trackers). It's worth considering the causation. I don't think anyone sat down and decided to do it this way in order to be successful. I think that the OSS projects that became successful became successful exactly because they organised work that way.

When contributions are voluntary, you have to cast a wide net. A successful OSS project should accept contributions from around the world. If an excellent contribution from Japan falters because the project team is based in the US, and immediate, real-time communication is required, then that project has odds against it.

An OSS project that works asynchronously can receive contributions from any time zone. The disadvantage can be significant communication lag.

If you get a contribution from Australia, but you're in Europe, you may send a reply asking for clarifications or improvements. At the time you do that, the contributor may have already gone to bed. He or she is only going to reply later, at which time you've gone to bed.

It can take days to get anything done. That doesn't sound efficient, and if you're in a one-piece flow mindset it isn't. You need to enable parallel development. If you do that, you can work on something else while you wait for your asynchronous collaborator to respond.

A diagram juxtaposing two pieces finished one after the other, against two pieces finished in parallel.

In this diagram, the wait-times in the production of one piece (e.g. one feature) can be used to move forward with another feature. The result is that you may actually be able to finish both tasks sooner than if you stick strictly to one-piece flow.

Before you protest: in reality, delay times are much longer than implied by the diagram. An activity could be something as brief as responding to a request for more information. You may be able to finish this activity in 30 minutes, whereafter the delay time is another twenty hours. Thus, in order to keep throughput comparable, you need to juggle not two, but dozens of parallel processes.

You may also feel the urge to protest that the diagram postulates a false dichotomy. That's not my intention. Even with co-location, you could do parallel development.

There's also the argument that parallel development requires context switching. That's true, and it comes with overhead.

My argument is only this: if you decide to shift to an asynchronous process, then I consider parallel development essential. Even with parallel development, you can't get the same (low) latency as is possible in the office, but you may be able to get better throughput.

This again has implications for software architecture. Parallel development works when features can be developed independently of each other - when there are only minimal dependencies between the various areas of the code.

Conway's law is relevant here as well. If you decouple the communication between various parts of the system, you can also decouple the development of said parts. Ultimately, the best fit for a distributed, asynchronous software development process may be a distributed, asynchronous system.

Quadrants #

This is the point where, if this was a Gartner report, it'd include a 2x2 table with four quadrants. It's not, but I'll supply it anyway:

              Synchronous        Asynchronous
Distributed   Virtual office     OSS-like parallel development
Co-located    Scrum, XP, etc.    Emailing the person next to you

The current situation where the oral, co-located teams find themselves forced to work remotely is what I call the virtual office. If the Corona outbreak is over in weeks, it's probably best to just carry on in that way. I don't, however, think it's a sustainable process model. There's still too much friction involved in having to be connected to a video conference call for 8 hours each day. Long-term, I think that a migration towards a distributed, asynchronous process would be beneficial.

I've yet to discuss the fourth quadrant. This is where people sit next to each other, yet still email each other. That's just weird. Like the virtual office, I don't think it's a long-term sustainable process. The advantages of just talking to each other are too great. If you're co-located, ad-hoc communication is possible, so that's where the software architecture will gravitate as well. Again, Conway's law applies.

If you want to move towards a sustainable distributed process, you should consider adjusting everything accordingly. A major endeavour in that shift involves migrating from an oral to a written culture. Basecamp has a good guide to get you started.

Your writer reveals himself #

I intend this to be an opinion piece. It's based on a combination of observations made by others, mixed with my personal experiences, but I also admit that it's coloured by my personal preferences. I strongly prefer distributed, asynchronous processes with an emphasis on written communication. Since this blog contains more than 500 articles, it should hardly come as a surprise to anyone that I'm a prolific writer.

I've had great experiences with distributed, asynchronous software development. One such experience was the decade I led the AutoFixture open-source project. Other experiences include a handful of commercial, closed-source projects where I did the bulk of the work remotely.

This style of work benefits my employer. By working asynchronously, I have to document what I do, and why I do it. I leave behind a trail of text artefacts other people can consult when I'm not available.

I like asynchronous processes because they liberate me to work when I want to, where I want to. I take advantage of this to go for a run during daylight hours (otherwise an issue during Scandinavian winters), to go grocery shopping outside of rush hour, to be with my son when he comes home from school, etcetera. I compensate by working at other hours (evenings, weekends). This isn't a lifestyle that suits everyone, but it suits me.

This preference produces a bias in the way that I see the world. I don't think I can avoid that. Like DHH I view offices as interruption factories. I self-identify as an introvert. I like being alone.

Still, I've tried to describe some forces that affect how work is done. I've tried to be fair to co-location, even though I don't like it.

Conclusion #

Software development with a co-located team can be efficient. It offers the benefits of high-bandwidth communication, pair programming, and low-latency decision making. It also implies an oral tradition. Knowledge has little permanence and the team is vulnerable to key team members going missing.

While such a team organisation can work well when team members are physically close to each other, I believe that this model comes under pressure when team members work remotely. I haven't seen the oral, ad-hoc team process work well in a distributed setting.

Successful distributed software development is asynchronous and based on a literate culture. It only works if the software architecture allows it. Code has to be decoupled and independently deployable. If it is, though, you can perform work in parallel. Conway's law still applies.


Polymorphic Builder

Monday, 09 March 2020 06:47:00 UTC

Keeping illegal states unrepresentable with the Builder pattern.

As a reaction to my article on Builder isomorphisms Tyson Williams asked:

"If a GET or DELETE request had a body or if a POST request did not have a body, then I would suspect that such behavior was a bug.

"For the sake of a question that I would like to ask, let's suppose that a body must be added if and only if the method is POST. Under this assumption, HttpRequestMessageBuilder can create invalid messages. For example, it can create a GET request with a body, and it can create a POST request without a body. Under this assumption, how would you modify your design so that only valid messages can be created?"

I'm happy to receive that question, because I struggled to find a compelling example of a Builder where polymorphism seems warranted. Here, it does.

Valid combinations #

Before showing code, I think a few comments are in order. As far as I'm aware, the HTTP specification doesn't prohibit weird combinations like a GET request with a body. Still, such a combination is so odd that it seems fair to design an API to prevent this.

On the other hand I think that a POST request without a body should still be allowed. It's not too common, but there are edge cases where this combination is valid. If you want to cause a side effect to happen, a GET is inappropriate, but sometimes all you want to do is to produce an effect. In the Restful Web Services Cookbook Subbu Allamaraju gives this example of a fire-and-forget bulk task:

POST /address-correction?before=2010-01-01 HTTP/1.1

As he puts it, "in essence, the client is "flipping a switch" to start the work."

I'll design the following API to allow this combination, also because it showcases how that sort of flexibility can still be included. On the other hand, I'll prohibit the combination of a request body in a GET request, as Tyson Williams suggested.

Expanded API #

I'll expand on the HttpRequestMessageBuilder example shown in the previous article. As outlined in another article, apart from the Build method the Builder really only has two capabilities:

  • Change the HTTP method
  • Add (or update) a JSON body
These are the two combinations we now need to model differently. If I do that now, there'd be no other affordances offered by the Builder API. In order to make the example more explicit, I'll first add a pair of new capabilities:
  • Add or change the Accept header
  • Add or change a Bearer token
When I do that, the HttpRequestMessageBuilder class now looks like this:

public class HttpRequestMessageBuilder
{
    private readonly Uri url;
    private readonly object? jsonBody;
    private readonly string? acceptHeader;
    private readonly string? bearerToken;
 
    public HttpRequestMessageBuilder(string url) : this(new Uri(url)) { }
 
    public HttpRequestMessageBuilder(Uri url) :
        this(url, HttpMethod.Get, null, null, null) { }
 
    private HttpRequestMessageBuilder(
        Uri url,
        HttpMethod method,
        object? jsonBody,
        string? acceptHeader,
        string? bearerToken)
    {
        this.url = url;
        Method = method;
        this.jsonBody = jsonBody;
        this.acceptHeader = acceptHeader;
        this.bearerToken = bearerToken;
    }
 
    public HttpMethod Method { get; }
 
    public HttpRequestMessageBuilder WithMethod(HttpMethod newMethod)
    {
        return new HttpRequestMessageBuilder(
            url,
            newMethod,
            jsonBody,
            acceptHeader,
            bearerToken);
    }
 
    public HttpRequestMessageBuilder AddJsonBody(object jsonBody)
    {
        return new HttpRequestMessageBuilder(
            url,
            Method,
            jsonBody,
            acceptHeader,
            bearerToken);
    }
 
    public HttpRequestMessageBuilder WithAcceptHeader(string newAcceptHeader)
    {
        return new HttpRequestMessageBuilder(
            url,
            Method,
            jsonBody,
            newAcceptHeader,
            bearerToken);
    }
 
    public HttpRequestMessageBuilder WithBearerToken(string newBearerToken)
    {
        return new HttpRequestMessageBuilder(
            url,
            Method,
            jsonBody,
            acceptHeader,
            newBearerToken);
    }
 
    public HttpRequestMessage Build()
    {
        var message = new HttpRequestMessage(Method, url);
        BuildBody(message);
        AddAcceptHeader(message);
        AddBearerToken(message);
        return message;
    }
 
    private void BuildBody(HttpRequestMessage message)
    {
        if (jsonBody is null)
            return;
 
        string json = JsonConvert.SerializeObject(jsonBody);
        message.Content = new StringContent(json);
        message.Content.Headers.ContentType.MediaType = "application/json";
    }
 
    private void AddAcceptHeader(HttpRequestMessage message)
    {
        if (acceptHeader is null)
            return;
 
        message.Headers.Accept.ParseAdd(acceptHeader);
    }
 
    private void AddBearerToken(HttpRequestMessage message)
    {
        if (bearerToken is null)
            return;
 
        message.Headers.Authorization =
            new AuthenticationHeaderValue("Bearer", bearerToken);
    }
}

Notice that I've added the methods WithAcceptHeader and WithBearerToken, with supporting implementation. So far, those are the only changes.

It enables you to build HTTP request messages like this:

HttpRequestMessage msg = new HttpRequestMessageBuilder(url)
    .WithBearerToken("cGxvZWg=")
    .Build();

Or this:

HttpRequestMessage msg = new HttpRequestMessageBuilder(url)
    .WithMethod(HttpMethod.Post)
    .AddJsonBody(new
    {
        id = Guid.NewGuid(),
        date = "2021-02-09 19:15:00",
        name = "Hervor",
        email = "hervor@example.com",
        quantity = 2
    })
    .WithAcceptHeader("application/vnd.foo.bar+json")
    .WithBearerToken("cGxvZWg=")
    .Build();

It still doesn't address Tyson Williams' requirement, because you can build an HTTP request like this:

HttpRequestMessage msg = new HttpRequestMessageBuilder(url)
    .AddJsonBody(new {
        id = Guid.NewGuid(),
        date = "2020-03-22 19:30:00",
        name = "Ælfgifu",
        email = "ælfgifu@example.net",
        quantity = 1 })
    .Build();

Recall that the default HTTP method is GET. Since the above code doesn't specify a method, it creates a GET request with a message body. That's what shouldn't be possible. Let's make illegal states unrepresentable.

Builder interface #

Making illegal states unrepresentable is a catch phrase coined by Yaron Minsky to describe advantages of statically typed functional programming. Unintentionally, it also describes a fundamental tenet of object-oriented programming. In Object-Oriented Software Construction Bertrand Meyer describes object-oriented programming as the discipline of guaranteeing that an object can never be in an invalid state.

In the present example, we can't allow an arbitrary HTTP Builder object to afford an operation to add a body, because that Builder object might produce a GET request. On the other hand, there are operations that are always legal: adding an Accept header or a Bearer token. Because these operations are always legal, they constitute a shared API. Extract those to an interface:

public interface IHttpRequestMessageBuilder
{
    IHttpRequestMessageBuilder WithAcceptHeader(string newAcceptHeader);
    IHttpRequestMessageBuilder WithBearerToken(string newBearerToken);
    HttpRequestMessage Build();
}

Notice that both the With[...] methods return the new interface. Any IHttpRequestMessageBuilder must implement the interface, but is free to support other operations not part of the interface.

HTTP GET Builder #

You can now implement the interface to build HTTP GET requests:

public class HttpGetMessageBuilder : IHttpRequestMessageBuilder
{
    private readonly Uri url;
    private readonly string? acceptHeader;
    private readonly string? bearerToken;
 
    public HttpGetMessageBuilder(string url) : this(new Uri(url)) { }
 
    public HttpGetMessageBuilder(Uri url) : this(url, null, null) { }
 
    private HttpGetMessageBuilder(
        Uri url,
        string? acceptHeader,
        string? bearerToken)
    {
        this.url = url;
        this.acceptHeader = acceptHeader;
        this.bearerToken = bearerToken;
    }
 
    public IHttpRequestMessageBuilder WithAcceptHeader(string newAcceptHeader)
    {
        return new HttpGetMessageBuilder(url, newAcceptHeader, bearerToken);
    }
 
    public IHttpRequestMessageBuilder WithBearerToken(string newBearerToken)
    {
        return new HttpGetMessageBuilder(url, acceptHeader, newBearerToken);
    }
 
    public HttpRequestMessage Build()
    {
        var message = new HttpRequestMessage(HttpMethod.Get, url);
        AddAcceptHeader(message);
        AddBearerToken(message);
        return message;
    }
 
    private void AddAcceptHeader(HttpRequestMessage message)
    {
        if (acceptHeader is null)
            return;
 
        message.Headers.Accept.ParseAdd(acceptHeader);
    }
 
    private void AddBearerToken(HttpRequestMessage message)
    {
        if (bearerToken is null)
            return;
 
        message.Headers.Authorization =
            new AuthenticationHeaderValue("Bearer", bearerToken);
    }
}

Notice that the Build method hard-codes HttpMethod.Get. When you're using an HttpGetMessageBuilder object, you can't modify the HTTP method. You also can't add a request body, because there's no API that affords that operation.

What you can do, for example, is to create an HTTP request with an Accept header:

HttpRequestMessage msg = new HttpGetMessageBuilder(url)
    .WithAcceptHeader("application/vnd.foo.bar+json")
    .Build();

This creates a request with an Accept header, but no Bearer token.

HTTP POST Builder #

As a peer to HttpGetMessageBuilder you can implement the IHttpRequestMessageBuilder interface to support POST requests:

public class HttpPostMessageBuilder : IHttpRequestMessageBuilder
{
    private readonly Uri url;
    private readonly object? jsonBody;
    private readonly string? acceptHeader;
    private readonly string? bearerToken;
 
    public HttpPostMessageBuilder(string url) : this(new Uri(url)) { }
 
    public HttpPostMessageBuilder(Uri url) : this(url, null, null, null) { }
 
    public HttpPostMessageBuilder(string url, object jsonBody) :
        this(new Uri(url), jsonBody)
    { }
 
    public HttpPostMessageBuilder(Uri url, object jsonBody) :
        this(url, jsonBody, null, null)
    { }
 
    private HttpPostMessageBuilder(
        Uri url,
        object? jsonBody,
        string? acceptHeader,
        string? bearerToken)
    {
        this.url = url;
        this.jsonBody = jsonBody;
        this.acceptHeader = acceptHeader;
        this.bearerToken = bearerToken;
    }
 
    public IHttpRequestMessageBuilder WithAcceptHeader(string newAcceptHeader)
    {
        return new HttpPostMessageBuilder(
            url,
            jsonBody,
            newAcceptHeader,
            bearerToken);
    }
 
    public IHttpRequestMessageBuilder WithBearerToken(string newBearerToken)
    {
        return new HttpPostMessageBuilder(
            url,
            jsonBody,
            acceptHeader,
            newBearerToken);
    }
 
    public HttpRequestMessage Build()
    {
        var message = new HttpRequestMessage(HttpMethod.Post, url);
        BuildBody(message);
        AddAcceptHeader(message);
        AddBearerToken(message);
        return message;
    }
 
    private void BuildBody(HttpRequestMessage message)
    {
        if (jsonBody is null)
            return;
 
        string json = JsonConvert.SerializeObject(jsonBody);
        message.Content = new StringContent(json);
        message.Content.Headers.ContentType.MediaType = "application/json";
    }
 
    private void AddAcceptHeader(HttpRequestMessage message)
    {
        if (acceptHeader is null)
            return;
 
        message.Headers.Accept.ParseAdd(acceptHeader);
    }
 
    private void AddBearerToken(HttpRequestMessage message)
    {
        if (bearerToken is null)
            return;
 
        message.Headers.Authorization =
            new AuthenticationHeaderValue("Bearer", bearerToken);
    }
}

This class affords various constructor overloads. Two of them don't take a JSON body, and two of them do. This supports both the case where you do want to supply a request body, and the edge case where you don't.

I didn't add an explicit WithJsonBody method to the class, so you can't change your mind once you've created an instance of HttpPostMessageBuilder. The only reason I didn't, though, was to save some space. You can add such a method if you'd like to. As long as it's not part of the interface, but only part of the concrete HttpPostMessageBuilder class, illegal states are still unrepresentable. You can represent a POST request with or without a body, but you can't represent a GET request with a body.
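
Such a method could, as a sketch, look like the following. It isn't part of the code base shown above; it returns the concrete HttpPostMessageBuilder rather than the interface, and reuses the private copy constructor:

// Not part of IHttpRequestMessageBuilder; only the POST Builder affords a body.
public HttpPostMessageBuilder WithJsonBody(object newJsonBody)
{
    return new HttpPostMessageBuilder(
        url,
        newJsonBody,
        acceptHeader,
        bearerToken);
}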

You can now build requests like this:

HttpRequestMessage msg =
    new HttpPostMessageBuilder(
        url,
        new
        {
            id = Guid.NewGuid(),
            date = "2021-02-09 19:15:00",
            name = "Hervor",
            email = "hervor@example.com",
            quantity = 2
        })
    .WithAcceptHeader("application/vnd.foo.bar+json")
    .WithBearerToken("cGxvZWg=")
    .Build();

This builds a POST request with a JSON body, an Accept header, and a Bearer token.

Is polymorphism required? #

In my previous Builder article, I struggled to produce a compelling example of a polymorphic Builder. It seems that I've now mended the situation. Or have I?

Is the IHttpRequestMessageBuilder interface really required?

Perhaps. It depends on your usage scenarios. I can actually delete the interface, and none of the usage examples I've shown here need change.

On the other hand, had I written helper methods against the interface, obviously I couldn't just delete it.
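
Such a helper could be as simple as this sketch. The name and the hard-coded Accept header are arbitrary; the point is only that because it's programmed against IHttpRequestMessageBuilder, it works with both concrete Builders, and the interface could then no longer be deleted:

// Hypothetical helper written against the interface rather than a
// concrete Builder class.
public static HttpRequestMessage BuildAuthorizedRequest(
    IHttpRequestMessageBuilder builder,
    string bearerToken)
{
    return builder
        .WithAcceptHeader("application/vnd.foo.bar+json")
        .WithBearerToken(bearerToken)
        .Build();
}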

The bottom line is that polymorphism can be helpful, but it still strikes me as being non-essential to the Builder pattern.

Conclusion #

In this article, I've shown how to guarantee that Builders never get into invalid states (according to the rules we've arbitrarily established). I used the common trick of using constructors for object initialisation. If a constructor completes without throwing an exception, we should expect the object to be in a valid state. The price I've paid for this design is some code duplication.

You may have noticed that there's duplicated code between HttpGetMessageBuilder and HttpPostMessageBuilder. There are ways to address that concern, but I'll leave that as an exercise.

For the sake of brevity, I've only shown examples written as Immutable Fluent Builders. You can refactor all the examples to mutable Fluent Builders or to the original Gang-of-Four Builder pattern. This, too, will remain an exercise for the interested reader.


Comments

I'm happy to receive that question, because I struggled to find a compelling example of a Builder where polymorphism seems warranted. Here, it does.

I know of essentially one occurrence in .NET. Starting with IEnumerable<T>, calling either of the extension methods OrderBy or OrderByDescending returns IOrderedEnumerable<T>, which has the additional extension methods ThenBy and ThenByDescending.
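
A minimal sketch of that shape, assuming a Reservation type with Date and Quantity properties and an in-scope reservations sequence, could be:

// OrderBy widens IEnumerable<T> to IOrderedEnumerable<T>,
// which is what makes ThenBy (and ThenByDescending) available.
IOrderedEnumerable<Reservation> sorted = reservations
    .OrderBy(r => r.Date)
    .ThenBy(r => r.Quantity);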

Quoting your recent Builder isomorphisms post.

The Builder pattern isn't useful only because it enables you to "separate the construction of a complex object from its representation." It's useful because it enables you to present an API that comes with good default behaviour, but which can be tweaked into multiple configurations.

I also find the builder pattern useful because its methods typically accept one argument at a time. The builders in your recent posts are like this. The OrderBy and ThenBy methods and their Descending alternatives in .NET are also examples of this.

However, some of the builders in your recent posts have some constructors that take multiple arguments. That is the situation that I was trying to address when I asked

Have you ever written a builder that accepted multiple arguments one at a time none of which have reasonable defaults?

This could be a kata variation: all public functions accept at most one argument. So Foo(a, b) would not be allowed but Foo.WithA(a).WithB(b) would. In an issue on this blog's GitHub, jaco0646 nicely summarized the reasoning that could lead to applying this design philosophy to production code by saying

Popular advice for a builder with required parameters is to put those in a constructor; but with more than a handful of required parameters, we return to the original problem: too much complexity in a constructor.

That comment by jaco0646 also supplied names by which this type of design is known. Those names (with the same links from the comment) are Builder with a twist or Step Builder. This is great, because I didn't have any good names. (I vaguely recall once thinking that another name was "progressive API" or "progressive fluent API", but now when I search for anything with "progressive", all I get are false positives for progressive web app.)

When replacing a multi-argument constructor with a sequence of function calls that each accept one argument, care must be taken to ensure that illegal state remains unrepresentable. My general impression is that many libraries have designed such APIs well. The two that I have enough experience with to recommend as good examples of this design are the fluent configuration API in Entity Framework and Fluent Assertions. As I said before, the most formal treatment I have seen about this type of API design was in this blog post.

2020-03-11 17:57 UTC

Tyson, apart from as a kata constraint, is there any reason to prefer such a design?

I'll be happy to give it a more formal treatment if there's a reasonable scenario. Can you think of one?

I don't find the motivation given by jaco0646 convincing. If you have more than a handful of required parameters, I agree that that's an issue with complexity, but I don't agree that the solution is to add more complexity on top of it. Builders add complexity.

At a glance, though, with something like Foo.WithA(a).WithB(b) it seems to me that you're essentially reinventing currying the hard way around.
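
In C#, a curried version of such a two-step construction is little more than this sketch, here reusing the HttpPostMessageBuilder from the article:

// Currying: each argument is supplied one at a time, without a
// Builder class for each intermediate step.
Func<Uri, Func<object, HttpPostMessageBuilder>> create =
    url => jsonBody => new HttpPostMessageBuilder(url, jsonBody);
 
HttpPostMessageBuilder builder =
    create(new Uri("https://example.com/"))(new { quantity = 2 });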

Related to the overall Builder discussion (but not to currying) you may also find this article and this Stack Overflow answer interesting.

2020-03-13 6:19 UTC

...is there any reason to prefer such a design?

Yes. Just like you, I want to write small functions. In that post, you suggest an arbitrary maximum of 24 lines. One thing I find fascinating about functional programming is how useful the common functions are (such as map) and how they are implemented in only a few lines (often just one line). There is a correlation between the number of function arguments and the length of the function. So to help control the length of a function, it helps to control the number of arguments to the function. I think Robert Martin has a similar argument. When talking about functions in chapter 3 of Clean Code, his first section is about writing small functions, and a later section about function arguments opens by saying

The ideal number of arguments for a function is zero (niladic). Next comes one (monadic), followed closely by two (dyadic). Three arguments (triadic) should be avoided where possible. More than three (polyadic) requires very special justification--and then shouldn't be used anyway.

In the C# code a.Foo(b), Foo is an instance method that "feels" like it only has one argument. In reality, its two inputs are a and b, and that code uses infix notation. The situation is similar in the F# code a |> List.map f. The function List.map (as well as the operator |>) has two arguments and is applied using infix notation. I try to avoid creating functions that have more than two arguments.

I don't find the motivation given by jaco0646 convincing. If you have more than a handful of required parameters, I agree that that's an issue with complexity, but I don't agree that the solution is to add more complexity on top of it. Builders add complexity.

I am not sure how you are measuring complexity. I like to think that there are two types of complexity: local and global. For the sake of argument, let's suppose

  1. that local complexity is only defined for a function and is the number of arguments of that function and
  2. that global complexity is only defined for an entire program and is the number of lines in the program.

I would argue that having many required arguments is an issue of local complexity. We can decrease the local complexity at the expense of increasing the global complexity by replacing one function accepting many arguments with several functions accepting fewer arguments. I like to express this idea with the phrase "minimize the maximum (local) complexity".

...you may also find this article [titled The Builder pattern is a finite state machine]...interesting.

Indeed, that is a nice article. Finite state machines/automata (both deterministic and nondeterministic) have the same expressiveness as regular expressions.

At a glance, though, with something like Foo.WithA(a).WithB(b) it seems to me that you're essentially reinventing currying the hard way around.

It is. As a regular expression, it would be something like AB. I was just trying to give a simple example. The point of the article you shared is that the builder pattern is much more expressive. I have previously shared a similar article, but I like yours better. Thanks :)

...you may also find...this Stack Overflow answer interesting.

Wow. That is extremely clever! I would never have thought of that. Thank you very much for sharing.

I'll be happy to give it a more formal treatment if there's reasonable scenario. Can you think of one?

As I said above, I often try to find ways to minimize the maximum complexity of the code that I write. In this case, the reason that I originally asked you about the builder pattern is that I was trying to improve the API for creating a binding in Elmish.WPF. The tutorial has a great section about bindings. There are many binding types, and each has multiple ways to create it. Most arguments are required and some are optional.

Here is a closed issue that was created during the transition to the current binding API, which uses method overloading. In an attempt to come up with a better API, I suggested that we could use your suggestion to replace overloading with discriminated unions, but my co-maintainer wasn't convinced that it would be better.

Three days later, I increased the expressiveness of our bindings in this pull request. Conceptually it was a small change; I added a single optional argument. For a regular expression, such a change is trivial. However, in my case it was a delta of +300 lines of mind-numbingly boring code.

I agree with my co-maintainer that the current binding API is pretty good for the user. On the implementation side though, I am not satisfied. I want to find something better without sacrificing (and maybe even improving) the user's experience.

2020-03-16 04:03 UTC

Impureim sandwich

Monday, 02 March 2020 07:03:00 UTC

Pronounced 'impurium sandwich'.

Since January 2017 I've been singing the praise of the impure/pure/impure sandwich, but I've never published an article that defines the term. I intend this article to remedy the situation.

Functional architecture #

In a functional architecture pure functions can't call impure actions. On the other hand, as Simon Peyton Jones observed in a lecture, observing the result of pure computation is a side-effect. In practical terms, executing a pure function is also impure, because it happens non-deterministically. Thus, even for a piece of software written in a functional style, the entry point must be impure.

While pure functions can't call impure actions, there's no rule to prevent the obverse. Impure actions can call pure functions.

Therefore, the best we can ever hope to achieve is an impure entry point that calls pure code and impurely reports the result from the pure function.

A box with a thin red slice on top, a thick green middle, and a thin red slice at the bottom.

The flow of code here goes from top to bottom:

  1. Gather data from impure sources.
  2. Call a pure function with that data.
  3. Change state (including user interface) based on return value from pure function.
This is the impure/pure/impure sandwich.

Metaphor #

The reason I call this a sandwich is that I think that it looks like a sandwich, albeit, perhaps, a rather tall one. According to the myth of the sandwich, the 4th Earl of Sandwich was a notorious gambler. While playing cards, he'd order two slices of bread with meat in between. This enabled him to keep playing without greasing the cards. His compatriots would order the same as Sandwich, or simply a Sandwich, and the name stuck.

I like the sandwich as a metaphor. The bread is an affordance, in the spirit of Donald A. Norman. It enables you to handle the meat without getting your fingers greased. In the same way, I think, impure actions enable you to handle a pure function. They let you invoke and observe the result of it.

Examples #

One of the cleanest examples of an impureim sandwich remains my original article:

tryAcceptComposition :: Reservation -> IO (Maybe Int)
tryAcceptComposition reservation = runMaybeT $
  liftIO (DB.readReservations connectionString $ date reservation)
  >>= MaybeT . return . flip (tryAccept 10) reservation
  >>= liftIO . DB.createReservation connectionString

I've here repeated the code, but coloured the background of the impure, pure, and impure parts of the sandwich.

I've shown plenty of other examples of this sandwich architecture, recently, for example, while refactoring a registration flow in F#:

let sut pid r = async {
    let! validityOfProof = AsyncOption.traverse (twoFA.VerifyProof r.Mobile) pid
    let decision = completeRegistrationWorkflow r validityOfProof
    return!
        decision
        |> AsyncResult.traverseBoth db.CompleteRegistration twoFA.CreateProof
        |> AsyncResult.cata (fun () -> RegistrationCompleted) ProofRequired
    }

This last example looks as though the bottom part of the sandwich is larger than the rest of the composition. This can sometimes happen (and, in fact, the last line of code is also pure). On the other hand, the pure part in the middle will typically look like just a single line of code, even when the invoked function performs work of significant complexity.

The sandwich is a pattern independent of language. You can also apply it in C#:

public async Task<IActionResult> Post(Reservation reservation)
{
    return await Repository.ReadReservations(reservation.Date)
        .Select(rs => maîtreD.TryAccept(rs, reservation))
        .SelectMany(m => m.Traverse(Repository.Create))
        .Match(InternalServerError("Table unavailable"), Ok);
}

Like in the previous F# example, the final Match is most likely pure. In practice, you may not know, because a method like InternalServerError or Ok is an inherited base class method. Regardless, I don't think that it's architecturally important, because what's going on there is rather trivial.

Naming #

Since the metaphor occurred to me, I've been looking for a better name. The term impure/pure/impure sandwich seems too inconvenient, but nevertheless, people seem to have picked it up.

I want a more distinct name, but have had trouble coming up with one. I've been toying with various abbreviations of impure and pure, but have finally settled on impureim sandwich. It's a contraction of impure/pure/impure.

Why this particular contraction?

I've played with lots of alternatives:

  • impureim: impure/pure/impure
  • ipi: impure/pure/impure
  • impi: impure/pure/impure
  • impim: impure/pure/impure
and so on...

I like impureim because the only anagram that I'm aware of is imperium. I therefore suggest that you pronounce it impurium sandwich. That'll work as a neologic shibboleth.

Summary #

Functional architecture prohibits pure functions from invoking impure actions. On the other hand, a pure function is useless if you can't observe its result. A functional architecture, thus, must have an impure entry point that invokes a pure function and uses another impure action to act on the result.

I suggest that we call such an impure/pure/impure interaction an impureim sandwich, and that we pronounce it an impurium sandwich.


Comments

I find this example slightly simplistic. What happens when the logic has to do cascading reads/validations, as is typically done? Then you get impureimpureim...? Or do you fetch all data upfront even though it might be...irrelevant? For example, you want to send a comment to a blog post, but that post has forbidden new comments. Wouldn't you want to validate first and then fetch the blog post if necessary?

2020-03-02 07:45 UTC

Toni, thank you for writing. As I write in another article,

"It's my experience that it's conspicuously often possible to implement an impure/pure/impure sandwich."

On the other hand, I never claimed that you can always do this. The impureim sandwich is a design pattern. It gives a name to a general, reusable solution to a commonly occurring problem within a given context.

In cases where you can't apply the impureim sandwich pattern, other patterns are available.

2020-03-02 8:54 UTC
Flechto #

I like this idea and it gives a word to the pattern I have been trying to use, but I do have some questions. In the C# example you have a field `maîtreD`. I am assuming that the value comes from dependency injection. Is that the case? And if so, can it really be called a pure function? Is that tested in isolation, and does the test for the function in the example verify that the results from ReadReservations are passed to `maîtreD.TryAccept`? Or is there something else I am missing?

2021-04-23 21:41 UTC

Flechto, thank you for writing. You don't have to assume anything about the code. If you follow the links in the article, you should be able to find the source code.

Conceptually, yes, the maîtreD class field is initialised via Constructor Injection. What makes you think that that makes it impure?

2021-04-25 15:47 UTC

Discerning and maintaining purity

Monday, 24 February 2020 07:31:00 UTC

Functional programming depends on referential transparency, but identifying and keeping functions pure requires deliberate attention.

Referential transparency is the essence of functional programming. Most other traits that people associate with functional programming emerge from it: immutability, recursion, higher-order functions, functors and monads, etcetera.

To summarise, a pure function has to obey two rules:

  • The same input always produces the same output.
  • Calling it causes no side effects.
While those rules are easy to understand and remember, in practice they're harder to follow than most people realise.

Lack of abstraction #

Mainstream programming languages don't distinguish between pure functions and impure actions. I'll use C# for examples, but you can draw the same conclusions for Java, C, C++, Visual Basic .NET and so on - even for F# and Clojure.

Consider this line of code:

string validationMsg = Validator.Validate(dto);

Is Validate a pure function?

You might want to look at the method signature before you answer:

public static string Validate(ReservationDto dto)

This is, unfortunately, not helpful. Will Validate always return the same string for the same dto? Can we guarantee that there's no side effects?

You can't answer these questions only by examining the method signature. You'll have to go and read the code.

This breaks encapsulation. It ruins abstraction. It makes code harder to maintain.

I can't stress this enough. This is what I've attempted to describe in my Humane Code video. We waste significant time reading existing code. Mostly because it's difficult to understand. It doesn't fit in our brains.

Agile Principles, Patterns, and Practices defines an abstraction as

"the amplification of the essential and the elimination of the irrelevant"

Robert C. Martin

This fits with the definition of encapsulation from Object-Oriented Software Construction. You should be able to interact with an object without knowledge of its implementation details.

When you have to read the code of a method, it indicates a lack of abstraction and encapsulation. Unfortunately, that's the state of affairs when it comes to referential transparency in mainstream programming languages.

Manual analysis #

If you read the source code of the Validate method, however, it's easy to figure out whether it's pure:

public static string Validate(ReservationDto dto)
{
    if (!DateTime.TryParse(dto.Date, out var _))
        return $"Invalid date: {dto.Date}.";
    return "";
}

Is the method deterministic? It seems like it. In fact, in order to answer that question, you need to know if DateTime.TryParse is deterministic. Assume that it is. Apart from the TryParse call, you can easily reason about the rest of this method. There's no randomness or other sources of non-deterministic behaviour in the method, so it seems reasonable to conclude that it's deterministic.

Does the method produce side effects? Again, you have to know about the behaviour of DateTime.TryParse, but I think it's safe to conclude that there's no side effects.

In other words, Validate is a pure function.

Testability #

Pure functions are intrinsically testable because they depend exclusively on their input.

[Fact]
public void ValidDate()
{
    var dto = new ReservationDto { Date = "2021-12-21 19:00", Quantity = 2 };
    var actual = Validator.Validate(dto);
    Assert.Empty(actual);
}

This unit test creates a reservation Data Transfer Object (DTO) with a valid date string and a positive quantity. There's no error message to produce for a valid DTO. The test asserts that the error message is empty. It passes.

You can with similar ease write a test that verifies what happens if you supply an invalid Date string.

Maintaining purity #

The problem with manual analysis of purity is that any conclusion you reach only lasts until someone edits the code. Every time the code changes, you must re-evaluate.

Imagine that you need to add a new validation rule. The system shouldn't accept reservations in the past, so you edit the Validate method:

public static string Validate(ReservationDto dto)
{
    if (!DateTime.TryParse(dto.Date, out var date))
        return $"Invalid date: {dto.Date}.";

    if (date < DateTime.Now)
        return $"Invalid date: {dto.Date}.";

    return "";
}

Is the method still pure? No, it's not. It's now non-deterministic. One way to observe this is to let time pass. Assume that you wrote the above unit test well before December 21, 2021. That test still passes when you make the change, but months go by. One day (on December 21, 2021 at 19:00) the test starts failing. No code changed, but now you have a failing test.

I've made sure that the examples in this article are simple, so that they're easy to follow. This could mislead you to think that the shift from referential transparency to impurity isn't such a big deal. After all, the test is easy to read, and it's clear why it starts failing.

Imagine, however, that the code is as complex as the code base you work with professionally. A subtle change to a method deep in the bowels of a system can have profound impact on the entire architecture. You thought that you had a functional architecture, but you probably don't.

Notice that no types changed. The method signature remains the same. It's surprisingly difficult to maintain purity in a code base, even if you explicitly set out to do so. There's no poka-yoke here; constant vigilance is required.

Automation attempts #

When I explain these issues, people typically suggest some sort of annotation mechanism. Couldn't we use attributes to identify pure functions? Perhaps like this:

[Pure]
public static string Validate(ReservationDto dto)

This doesn't solve the problem, though, because this still compiles:

[Pure]
public static string Validate(ReservationDto dto)
{
    if (!DateTime.TryParse(dto.Date, out var date))
        return $"Invalid date: {dto.Date}.";
            
    if (date < DateTime.Now)
        return $"Invalid date: {dto.Date}.";
            
    return "";
}

That's an impure action annotated with the [Pure] attribute. It still compiles and passes all tests (if you run them before December 21, 2021). The annotation is a lie.

As I've already implied, you also have the compound problem that you need to know the purity (or lack thereof) of all APIs from the base library or third-party libraries. Can you be sure that no pure function becomes impure when you update a library from version 2.3.1 to 2.3.2?

I'm not aware of any robust automated way to verify referential transparency in mainstream programming languages.

Language support #

While no mainstream languages distinguish between pure functions and impure actions, there are languages that do. The most famous of these is Haskell, but other examples include PureScript and Idris.

I find Haskell useful for exactly that reason. The compiler enforces the functional interaction law. You can't call impure actions from pure functions. Thus, you wouldn't be able to make a change to a function like Validate without changing its type. That would break most consuming code, which is a good thing.

You could write an equivalent to the original, pure version of Validate in Haskell like this:

validateReservation :: ReservationDTO -> Either String ReservationDTO
validateReservation r@(ReservationDTO _ d _ _ _) =
  case readMaybe d of
    Nothing -> Left $ "Invalid date: " ++ d ++ "."
    Just (_ :: LocalTime) -> Right r

This is a pure function, because all Haskell functions are pure by default.

You can change it to also check for reservations in the past, but only if you also change the type:

validateReservation :: ReservationDTO -> IO (Either String ReservationDTO)
validateReservation r@(ReservationDTO _ d _ _ _) =
  case readMaybe d of
    Nothing -> return $ Left $ "Invalid date: " ++ d ++ "."
    Just date -> do
      utcNow <- getCurrentTime
      tz <- getCurrentTimeZone
      let now = utcToLocalTime tz utcNow
      if date < now
        then return $ Left $ "Invalid date: " ++ d ++ "."
        else return $ Right r

Notice that I had to change the return type from Either String ReservationDTO to IO (Either String ReservationDTO). The presence of IO marks the 'function' as impure. If I hadn't changed the type, the code simply wouldn't have compiled, because getCurrentTime and getCurrentTimeZone are impure actions. These types ripple through entire code bases, enforcing the functional interaction law at every level of the code base.

Pure date validation #

How would you validate, then, that a reservation is in the future? In Haskell, like this:

validateReservation :: LocalTime -> ReservationDTO -> Either String ReservationDTO
validateReservation now r@(ReservationDTO _ d _ _ _) =
  case readMaybe d of
    Nothing -> Left $ "Invalid date: " ++ d ++ "."
    Just date ->
      if date < now
        then Left $ "Invalid date: " ++ d ++ "."
        else Right r

This function remains pure, although it still changes type. It now takes an additional now argument that represents the current time. You can retrieve the current time as an impure action before you call validateReservation. Impure actions can always call pure functions. This enables you to keep your complex domain model pure, which makes it simpler, and easier to test.

Translated to C#, that corresponds to this version of Validate:

public static string Validate(DateTime now, ReservationDto dto)
{
    if (!DateTime.TryParse(dto.Date, out var date))
        return $"Invalid date: {dto.Date}.";
 
    if (date < now)
        return $"Invalid date: {dto.Date}.";
 
    return "";
}

This version takes an additional now input parameter, but remains deterministic and free of side effects. Since it's pure, it's trivial to unit test.

[Theory]
[InlineData("2010-01-01 00:01""2011-09-11 18:30", 3)]
[InlineData("2019-11-26 13:59""2019-11-26 19:00", 2)]
[InlineData("2030-10-02 23:33""2030-10-03 00:00", 2)]
public void ValidDate(string nowstring reservationDateint quantity)
{
    var dto = new ReservationDto { Date = reservationDate, Quantity = quantity };
    var actual = Validator.Validate(DateTime.Parse(now), dto);
    Assert.Empty(actual);
}

Notice that while the now parameter plays the role of the current time, the fact that it's just a value makes it trivial to run simulations of what would have happened if you ran this function in 2010, or what will happen when you run it in 2030. A test is really just a simulation by another name.

Summary #

Most programming languages don't explicitly distinguish between pure and impure code. This doesn't make it impossible to do functional programming, but it makes it arduous. Since the language doesn't help you, you must constantly review changes to the code and its dependencies to evaluate whether code that's supposed to be pure remains pure.

Tests can help, particularly if you employ property-based testing, but vigilance is still required.

While Haskell isn't a mainstream programming language, I find that it helps me flush out my wrong assumptions about functional programming. I write many prototypes and proofs of concept in Haskell for that reason.

Once you get the hang of it, it becomes easier to spot sources of impurity in other languages as well.

  • Anything with the void return type must be assumed to induce side effects.
  • Everything that involves random numbers is non-deterministic.
  • Everything that relies on the system clock is non-deterministic.
  • Generating a GUID is non-deterministic.
  • Everything that involves input/output is non-deterministic. That includes the file system and everything that involves network communication. In C# this implies that all asynchronous APIs should be considered highly suspect.
If you want to harvest the benefits of functional programming in a mainstream language, you must look out for such pitfalls. There's no tooling to assist you.
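
As a few concrete examples from that list, each of the following expressions produces a new value every time it's evaluated, so any 'function' that calls one of them can't be referentially transparent:

DateTime now = DateTime.Now;         // depends on the system clock
Guid id = Guid.NewGuid();            // a new GUID every time
int roll = new Random().Next(1, 7);  // random, hence non-deterministic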


Comments

You might be interested in taking a look at PurityAnalyzer; an open source Roslyn-based analyzer for C# that I started developing to help maintain pure C# code.

Unfortunately, it is still not production-ready yet and I didn't have time to work on it in the last year. I was hoping contributors would help.

2020-02-24 08:16 UTC

Yacoub, thank you for writing. I wasn't aware of PurityAnalyzer. Do I understand it correctly that it's based mostly on a table of methods known (or assumed) to be pure? It also seems to look for certain attributes, under the assumption that if a [Pure] attribute is present, then one can trust it. Did I understand it correctly?

The fundamental problems with such an approach aside, I can't think of a better solution for the current .NET platform. If you want contributors, though, you should edit the repository's readme-file so that it explains how the tool works, and how contributors could get involved.

2020-02-26 7:12 UTC

Here are the answers to your questions:

1. it's based mostly on a table of methods known (or assumed) to be pure?

This is true for compiled methods, e.g., methods in the .NET frameworks. There are lists maintained for .NET methods that are pure. The lists of course are still incomplete.

For methods in the source code, the analyzer checks if they call impure methods, but it also checks other things like whether they access mutable state. The list of other things is not trivial. If you are interested in the details, see this article. It shows some of the details.

2. It also seems to look for certain attributes, under the assumption that if a [Pure] attribute is present, then one can trust it. Did I understand it correctly?

I don't use the [Pure] attribute because I think that the definition of pure used by Microsoft with this attribute is different than what I consider to be pure. I used a special [IsPure] attribute. There are also other attributes like [IsPureExceptLocally], [IsPureExceptReadLocally], [ReturnsNewObject], etc. The article I mentioned above explains some differences between these.

I agree with you that I should work on the readme file to explain details and ask for contributors.

2020-02-26 09:51 UTC

I love this post and enthusiastically agree with all the points you made.

Is the method deterministic? It seems like it. In fact, in order to answer that question, you need to know if DateTime.TryParse is deterministic. Assume that it is.

For what it's worth, that overload of DateTime.TryParse is impure because it depends on DateTimeFormatInfo.CurrentInfo, which depends on System.Threading.Thread.CurrentThread.CurrentCulture, which is mutable.

There are lists maintained for .NET methods that are pure.

Yacoub, could you share some links to such lists?

2020-02-26 20:14 UTC

Tyson, I actually knew that, but in order to keep the example simple and compelling, I chose to omit that fact. That's why I phrased the sentence "Assume that it is" (my emphasis) 😉

2020-02-26 21:56 UTC

Tyson, I meant lists maintained as part of the PurityAnalyzer project. You can find them here.

2020-02-27 07:48 UTC
The [Haskell] compiler enforces the functional interaction law. You can't call impure actions from pure functions.

And in contrast, the C# compiler does not enforce the functional interaction law, right?

For example, suppose Foo and Bar are pure functions such that Foo calls Bar and the code compiles. Then change only the implementation of Bar in such a way that it is now impure and the code still compiles, which is possible. So Foo is now impure as well, but its implementation didn't change. Therefore, the C# compiler does not enforce the functional interaction law.

Is this consistent with what you mean by the functional interaction law?

2020-03-07 12:59 UTC

Tyson, thank you for writing. The C# compiler doesn't help protect your intent, if your intent is to apply a functional architecture.

In your example, Foo starts out pure, but becomes impure. That's a result of the law. The law itself isn't broken, but the relationships change. That's often not what you want, so you can say that the compiler doesn't help you maintain a functional architecture.

A compiler like Haskell protects the intent of the law. If foo (Haskell functions must start with a lower-case letter) and bar both start out pure, foo can call bar. When bar later becomes impure, its type changes and foo can no longer invoke it.

I can try to express the main assertion of the functional interaction law like this: a pure function can't call an impure action. This has different implications in different compiler contexts. In Haskell, functions can be statically declared to be either pure or impure. This means that the Haskell compiler can prevent pure functions from calling impure actions. In C#, there's no such distinction at the type level. The implication is therefore different: that if Foo calls Bar and Bar is impure, then Foo must also be impure. This follows by elimination, because a pure function can't call an impure action. Therefore, since Foo can call Bar, and Bar is impure, then Foo must also be impure.
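A minimal C# sketch of that implication, using Foo and Bar merely as the placeholder names from your example:

public static DateTime Bar()
{
    // Impure: reads the system clock.
    return DateTime.Now;
}
 
public static int Foo()
{
    // Nothing in Foo's signature changes, yet Foo is now impure,
    // because it calls the impure Bar.
    return Bar().Year;
}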

The causation is reversed, so to speak.

Does that answer your question?

2020-03-08 11:32 UTC

Yes, that was a good answer. Thank you.

...a pure function can't call an impure action.

We definitely want this to be true, but let's try to make sure it is. What do you think about the C# function void Foo() => DateTime.Now;? It has lots of good properties: it always returns the same value (something isomorphic to Unit), and it does not mutate anything. However, it calls the impure property DateTime.Now. I think a reasonable person could argue that this function is pure. My guess is that you would say that it is impure. Am I right? I am willing to accept that.

...a pure function has to obey two rules:
  • The same input always produces the same output.
  • Calling it causes no side effects.

Is it possible for a function to violate the first rule but not violate the second rule?

2020-03-09 04:12 UTC

Tyson, I'm going to assume that you mean something like void Foo() { var _ = DateTime.Now; }, since the code you ask about doesn't compile 😉

That function is, indeed, pure, because it has no observable side effects, and it always returns unit. Purity is mostly a question of what we can observe if we consider the function a black box.

Obviously, based on that criterion, we can refactor the function to void Foo() { } and we wouldn't be able to tell the difference. This version of Foo is clearly pure, although degenerate.

Is it possible for a function to violate the first rule but not violate the second rule?
Yes, the following method is non-deterministic, but has no side effects: DateTime Foo() => DateTime.Now; The input is always unit, but the return value can change.

2020-03-10 9:03 UTC

I think I need to practice test driven comment writing ;) Thanks for seeing through my syntax errors again.

Oh, you think that that function is pure. Interesting. It follows then that the functional interaction law (pure functions cannot call impure actions) does not follow from the definition of a pure function. It is possible, in theory and in practice, for a pure function to call an impure action. Instead, the functional interaction law is "just" a goal to aspire to when designing a programming language. Haskell achieved that goal while C# and F# did not. Do you agree with this? (This is really what I was driving towards in this comment above, but I was trying to approach this "blasphemous" claim slowly.)

Just as you helped me distinguish between function purity and totality in this comment, I think it would be helpful for us to consider separately the two defining properties of a pure function. The first property is "the same input always produces the same output". Let's call this weak determinism. Determinism could be defined as "the same input always produces the same sequence of states", which includes the state of the output, so determinism is indeed stronger than weak determinism. The second property is "causes no side effect". It seems to me that there is either a lack of consensus or a lack of clarity about what constitutes a side effect. One definition I like is mutation of state outside of the current stack frame.

One reason the functional interaction law is false in general is that the corresponding interaction law for weak determinism is also false in general. The function I gave above (that called DateTime.Now and then returned unit) is a trivial example of that. A nontrivial example is quicksort.

At this point, I wanted to claim that the side effect interaction law is true in general, but it is not. This law says that a function that is side-effect free cannot call a function that causes a side effect. A counterexample is void Foo() { int i = 0; Bar(ref i); } with void Bar(ref int i) => i++;. That is, Bar mutates state outside of its stack frame, namely in the stack frame of Foo, so it is not side-effect free, but Foo is. (And I promise that I tested that code for compiler errors.)

I need to think more about that. Is there a better definition of side effect, one for which the side effect interaction law is true?

I just realized something that I think is interesting. Purely functional programming languages enforce a property of functions stronger than purity. With respect to the first defining property of a pure function (aka weak determinism), purely functional programming languages enforce the stronger notion of determinism. Otherwise, the compiler would need to realize that functions like quicksort should be allowed (because it is weakly deterministic). This reminds me of the debate between static and dynamic programming languages. In the process of forbidding certain unsafe code, static languages end up forbidding some safe code as well.

2020-03-10 14:05 UTC

Tyson, I disagree with your basic premise:

"It follows then that the functional interaction law (pure functions cannot call impure actions) does not follow from the definition of a pure function."
I don't think that this follows.

The key is that your example is degenerate. The Foo function is only pure because DateTime.Now isn't used. The actual, underlying property that we're aiming for is referential transparency. Can you replace Foo with its value? Yes, you can.

Perhaps you think this is a hand-wavy attempt to dodge a bullet, but I don't think that it is. You can write the equivalent function in Haskell like this:

foo :: () -> ()
foo () =
  let _ = getCurrentTime
  in ()

I don't recall if you're familiar with Haskell, but for the benefit of any reader who comes by and wishes to follow this discussion, here are the important points:

  • The function calls getCurrentTime, which is an impure action. Its type is IO UTCTime. The IO container marks the action as impure.
  • The underscore is a wildcard that tells Haskell to discard the value.
  • The type of foo is () -> (). It takes unit as input and returns unit. There's no IO container involved, so the function is pure.
This works because Haskell is a strictly functional language. Every expression is referentially transparent. The implication is that something like IO UTCTime is an opaque container of UTCTime values. A pure caller can see the container, but not its contents. A common interpretation of this is that IO represents the superposition of all possible values, just like Schrödinger's box. Also, since Haskell is a lazily evaluated language, actions are only evaluated when their values are needed for something. Since the value of getCurrentTime is discarded, the impure action never runs (the box is never opened). This may be clearer with this example:

bar :: () -> ()
bar () =
  let _ = putStrLn "Bar!"
  in ()

Like foo, bar calls an impure action: putStrLn, which corresponds to Console.WriteLine. Having the type String -> IO (), it's impure. It works like this:

> putStrLn "Example"
Example

None the less, because bar discards the IO () return value after it calls putStrLn, it never evaluates:

> bar ()
()

Perhaps a subtle rephrasing of the functional interaction law would be more precise. Perhaps it should say that a pure function can't evaluate an impure action.

Bringing this back to C#, we have to keep in mind that C# doesn't enforce the functional interaction law in any way. Thus, in C# the law works ex-post, whereas in Haskell it works ex-ante. Is the Foo C# code pure? Yes, it is, because it's referentially transparent.

Regarding the purity of QuickSort, you may find this discussion interesting.

2020-03-12 7:40 UTC
...Haskell is a strictly functional language. Every expression is referentially transparent. ... Is the Foo C# code pure? Yes, it is, because it's referentially transparent.

So every function in Haskell is referentially transparent, and if a function in C# is referentially transparent, then it is pure. Is C# necessary there? Does referential transparency imply purity regardless of language? Do you consider purity and referential transparency to be concepts that imply each other regardless of language? I think a function is referentially transparent if and only if it is pure, and I think this is independent of the language.

If C# is not necessary, then it follows that every function in Haskell is pure. This seems like a contradiction with this statement.

The function calls getCurrentTime, which is an impure action. Its [return] type is IO UTCTime. The IO container marks the action as impure.

You cited Bartosz Milewski there. He also says that every function in Haskell is pure. He calls Haskell functions returning IO a pure action. I agree with Milewski; I think every function in Haskell is pure.

Perhaps a subtle rephrasing of the functional interaction law would be more precise. Perhaps it should say that a pure function can't evaluate an impure action.

How does this rephrasing help? In the example from my previous comment, bar is impure while foo is pure even though foo evaluates bar, which can be verified by putting a breakpoint in bar when evaluating foo or by observing that i has value 1 when foo returns. If Haskell contained impure functions, then replacing "calls" with "evaluates" helps because everything is lazy in Haskell, but I don't see how it helps in an eager language like C#.

Regarding the purity of QuickSort, you may find this discussion interesting.

Oh, sorry. I now see that my reference to quicksort was unclear. I meant the randomized version of quicksort where the pivot is selected uniformly at random from all elements being sorted. That rephrasing of the functional interaction law doesn't address the issue I am trying to point out with quicksort. To elaborate, consider this randomized version of quicksort that has no side effects. I think this function is pure even though it uses randomness, which is necessarily obtained from an impure function.

2020-07-06 13:57 UTC

Tyson, my apologies that I've been so dense. I think that I'm beginning to understand where you're going with this. Calling out randomised pivot selection in quicksort helped, I think.

I would consider a quicksort function referentially transparent, even if it were to choose the pivot at random. Even if it does that, you can replace a given function call with its output. The only difference you might observe across multiple function calls would be varying execution time, due to lucky versus unlucky random pivot selection. Execution time is, however, not a property that impacts whether or not we consider a function pure.

Safe Haskell can't do that, though, so you're correct when you say:

"In the process of forbidding certain unsafe code, static languages end up forbidding some safe code as well."
(Actually, you can implement quicksort like that in Haskell as well. In order to not muddy the waters, I've so far ignored that the language has an escape hatch for (among other purposes) this sort of scenario: unsafePerformIO. In Safe Haskell, however, you can't use it, and I've never myself had to use it.)

I'm going to skip the discussion about whether or not all of Haskell is pure, because I think it's a red herring. We can discuss it later, if you're interested.

I think that you're right, though, that the functional interaction law has to come with a disclaimer. I'm not sure exactly how to formulate it, but I need to take a detour around side effects, and then perhaps you can help me with that.

Functional programmers know that every execution has side effects. In the extreme, running any calculation on a computer produces heat. There could be other side effects as well, such as CPU registers changing values, data moving in and out of processor caches, and so on. The question is: when do side effects become significant?

We don't consider the generation of heat a significant side effect. What about a debug trace? If it doesn't affect the state of the system, does it count? If not, then how about logging or auditing?

We usually draw the line somewhere and say that anything on one side counts, and things on the other side don't. The bottom line is, though, that we consider some side effects insignificant.

I think that you have now demonstrated that there's symmetry. Not only are there insignificant side effects, but insignificant randomness also exists. The randomness involved in choosing a pivot in quicksort has no significant impact on the output.

Was that what you meant by weak determinism?

2020-07-07 19:53 UTC

Builder as a monoid

Monday, 17 February 2020 07:18:00 UTC

Builder, particularly Fluent Builder, is one of the more useful design patterns. Here's why.

This article is part of a series of articles about design patterns and their universal abstraction counterparts.

The Builder design pattern is an occasionally useful pattern, but mostly in its Fluent Builder variation. I've already described that Builder, Fluent Builder, and Immutable Fluent Builder are isomorphic. The Immutable Fluent Builder variation is a set of pure functions, so among the three variations, it best fits the set of universal abstractions that I've so far discussed in this article series.

Design Patterns describes 23 patterns. Some of these are more useful than others. I first read the book in 2003, and while I initially used many of the patterns, after some years I settled into a routine where I'd reach for the same handful of patterns and ignore the rest.

What makes some design patterns more universally useful than others? There's probably components of both subjectivity and chance, but I also believe that there's some correlation to universal abstractions. I consider abstractions universal when they are derived from universal truths (i.e. mathematics) instead of language features or 'just' experience. That's what the overall article series is about. In this article, you'll learn how the Builder pattern is an instance of a universal abstraction. Hopefully, this goes a long way towards explaining why it seems to be so universally useful.

Builder API, isolated #

I'll start with the HttpRequestMessageBuilder from the article about Builder isomorphisms, particularly its Immutable Fluent Builder incarnation. Start by isolating those methods that manipulate the Builder. These are the functions that had void return types in the original Builder incarnation. Imagine, for example, that you extract an interface of only those methods. What would such an interface look like?

public interface IHttpRequestMessageBuilder
{
    HttpRequestMessageBuilder AddJsonBody(object jsonBody);
    HttpRequestMessageBuilder WithMethod(HttpMethod newMethod);
}

Keep in mind that on all instance methods, the instance itself can be viewed as 'argument 0'. In that light, each of these methods take two arguments: a Builder and the formal argument (jsonBody and newMethod, respectively). Each method returns a Builder. I've already described how this is equivalent to an endomorphism. An endomorphism is a function that returns the same type of output as its input, and it forms a monoid.

This can be difficult to see, so I'll make it explicit. The code that follows only exists to illustrate the point. In no way do I endorse that you write code in this way.

Explicit endomorphism #

You can define a formal interface for an endomorphism:

public interface IEndomorphism<T>
{
    T Run(T x);
}

Notice that it's completely generic. The Run method takes a value of the generic type T and returns a value of the type T. The identity of the monoid, you may recall, is the eponymously named identity function which returns its input without modification. You can also define the monoidal combination of two endomorphisms:

public class AppendEndomorphism<T> : IEndomorphism<T>
{
    private readonly IEndomorphism<T> morphism1;
    private readonly IEndomorphism<T> morphism2;
 
    public AppendEndomorphism(IEndomorphism<T> morphism1, IEndomorphism<T> morphism2)
    {
        this.morphism1 = morphism1;
        this.morphism2 = morphism2;
    }
 
    public T Run(T x)
    {
        return morphism2.Run(morphism1.Run(x));
    }
}

This implementation of IEndomorphism<T> composes two other IEndomorphism<T> objects. When its Run method is called, it first calls Run on morphism1 and then uses the return value of that method call (still a T object) as input for Run on morphism2.

If you need to combine more than two endomorphisms then that's also possible, because monoids accumulate.
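To make the monoid structure explicit, here's a sketch of the identity endomorphism alluded to above, together with one way to accumulate an arbitrary sequence of endomorphisms with the identity as the seed. These helpers are my own illustration, not part of the example code base; assume using directives for System.Collections.Generic and System.Linq.

public class IdentityEndomorphism<T> : IEndomorphism<T>
{
    // The identity of the endomorphism monoid: returns its input unchanged.
    public T Run(T x)
    {
        return x;
    }
}
 
public static class Endomorphism
{
    public static IEndomorphism<T> Accumulate<T>(
        IEnumerable<IEndomorphism<T>> morphisms)
    {
        // Fold the sequence, starting from the identity. An empty sequence
        // yields the identity endomorphism, as a monoid requires.
        return morphisms.Aggregate(
            (IEndomorphism<T>)new IdentityEndomorphism<T>(),
            (acc, m) => new AppendEndomorphism<T>(acc, m));
    }
}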

Explicit endomorphism to change HTTP method #

You can adapt the WithMethod method to the IEndomorphism<HttpRequestMessageBuilder> interface:

public class ChangeMethodEndomorphism : IEndomorphism<HttpRequestMessageBuilder>
{
    private readonly HttpMethod newMethod;
 
    public ChangeMethodEndomorphism(HttpMethod newMethod)
    {
        this.newMethod = newMethod;
    }
 
    public HttpRequestMessageBuilder Run(HttpRequestMessageBuilder x)
    {
        if (x is null)
            throw new ArgumentNullException(nameof(x));
 
        return x.WithMethod(newMethod);
    }
}

In itself, this is simple code, but it does turn things on their head. The newMethod argument is now a class field (and constructor argument), while the HttpRequestMessageBuilder has been turned into a method argument. Keep in mind that I'm not doing this because I endorse this style of API design; I do it to demonstrate how the Immutable Fluent Builder pattern is an endomorphism.

Since ChangeMethodEndomorphism is an Adapter between IEndomorphism<HttpRequestMessageBuilder> and the WithMethod method, I hope that this is becoming apparent. I'll show one more Adapter.

Explicit endomorphism to add a JSON body #

In the example code, there's one more method that modifies an HttpRequestMessageBuilder object, and that's the AddJsonBody method. You can also create an Adapter over that method:

public class AddJsonBodyEndomorphism : IEndomorphism<HttpRequestMessageBuilder>
{
    private readonly object jsonBody;
 
    public AddJsonBodyEndomorphism(object jsonBody)
    {
        this.jsonBody = jsonBody;
    }
 
    public HttpRequestMessageBuilder Run(HttpRequestMessageBuilder x)
    {
        if (x is null)
            throw new ArgumentNullException(nameof(x));
 
        return x.AddJsonBody(jsonBody);
    }
}

While the AddJsonBody method itself is more complicated than WithMethod, the Adapter is strikingly similar.

Running an explicit endomorphism #

You can use the IEndomorphism<T> API to compose a pipeline of operations that will, for example, make an HttpRequestMessageBuilder build an HTTP POST request with a JSON body:

IEndomorphism<HttpRequestMessageBuilder> morphism = new AppendEndomorphism<HttpRequestMessageBuilder>(
    new ChangeMethodEndomorphism(HttpMethod.Post),
    new AddJsonBodyEndomorphism(new
    {
        id = Guid.NewGuid(),
        date = "2020-03-22 19:30:00",
        name = "Ælfgifu",
        email = "ælfgifu@example.net",
        quantity = 1
    }));

You can then Run the endomorphism over a new HttpRequestMessageBuilder object to produce an HTTP request:

HttpRequestMessage msg = morphism.Run(new HttpRequestMessageBuilder(url)).Build();

The msg object represents an HTTP POST request with the supplied JSON body.

Once again, I stress that the purpose of this little exercise is only to demonstrate how an Immutable Fluent Builder is an endomorphism, which is a monoid.

Test Data Builder endomorphism #

You can give Test Data Builders the same treatment, again only to demonstrate that the reason they compose so well is because they're monoids. I'll use an immutable variation of the AddressBuilder from this article.

For example, to modify a city, you can introduce an endomorphism like this:

public class CityEndomorphism : IEndomorphism<AddressBuilder>
{
    private readonly string city;
 
    public CityEndomorphism(string city)
    {
        this.city = city;
    }
 
    public AddressBuilder Run(AddressBuilder x)
    {
        return x.WithCity(city);
    }
}

You can use it to create an address in Paris like this:

IEndomorphism<AddressBuilder> morphism = new CityEndomorphism("Paris");
Address address = morphism.Run(new AddressBuilder()).Build();

The address is fully populated with Street, PostCode, and so on, but apart from City, you know none of the values.

Sweet spot #

Let's return to the question from the introduction to the article. What makes some design patterns useful? I don't think that there's a single answer to that question, but I find it intriguing that so many of the useful patterns turn out to be equivalent to universal abstractions. The Builder pattern is a monoid. From a programming perspective, the most useful characteristic of semigroups and monoids is that they enable you to treat many objects as one object. Monoids compose.

Of the three Builder variations, the Immutable Fluent Builder is the most useful. It's also the variation that most clearly corresponds to the endomorphism monoid. Viewing it as an endomorphism reveals its strengths. When or where is a Builder most useful?

Don't be misled by Design Patterns, which states the intent of the Builder pattern like this:

"Separate the construction of a complex object from its representation so that the same construction process can create different representations."

Gamma et al, Design Patterns, 1994
This may still be the case, but I don't find that this is the primary advantage offered by the pattern. We've learned much about the utility of each design pattern since 1994, so I don't blame the Gang of Four for not seeing this. I do think, however, that it's important to emphasise that the benefit you can derive from a pattern may differ from the original motivation.

An endomorphism represents a modification of a value. You need a value to get started, and you get a modified value (of the same type) as output.

An object (a little circle) to the left, going into a horizontally oriented pipe, and coming out to the right in a different colour.

Sometimes, all you need is the initial object.

An object represented as a little circle.

And sometimes, you need to compose several changes.

An object (a little circle) to the left, going into two horizontally oriented pipes, one after the other, and coming out to the right in a different colour.

To me, this makes the sweet spot for the pattern clear. Use an (Immutable) Fluent Builder when you have a basic object that's useful in itself, but where you want to give client code the option to make changes to the defaults.

Sometimes, the initial object has self-contained default values. Test Data Builders are good examples of that:

public AddressBuilder()
{
    this.street = "";
    this.city = "";
    this.postCode = new PostCodeBuilder().Build();
}

The AddressBuilder constructor fully initialises the object. You can use its WithNoPostcode, WithStreet, etcetera methods to make changes to it, but you can also use it as is.

In other cases, client code must initialise the object to be built. The HttpRequestMessageBuilder is an example of that:

public HttpRequestMessageBuilder(string url) : this(new Uri(url)) { }
 
public HttpRequestMessageBuilder(Uri url) : this(url, HttpMethod.Get, null) { }
 
private HttpRequestMessageBuilder(Uri url, HttpMethod method, object? jsonBody)
{
    this.url = url;
    Method = method;
    this.jsonBody = jsonBody;
}

While there's more than one constructor overload, client code must supply a url in one form or other. That's the precondition of this class. Given a valid url, though, an HttpRequestMessageBuilder object can be useful without further modification, but you can also modify it by calling its methods.

You often see the Builder pattern used for configuration APIs. The ASP.NET Core IApplicationBuilder is a prominent example of the Fluent Builder pattern. The NServiceBus endpoint configuration API, on the other hand, is based on the classic Builder pattern. It makes sense to use an endomorphic design for framework configuration. Framework designers want to make it as easy to get started with their framework as possible. For this reason, it's important to provide a useful default configuration, so that you can get started with as little ceremony as possible. On the other hand, a framework must be flexible. You need a way to tweak the configuration to support your particular needs. The Builder pattern supports both scenarios.

Other examples include Test Data Builders, as well as specialised Builders such as UriBuilder and SqlConnectionStringBuilder.
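As a small illustration of that last category, UriBuilder starts from useful defaults that client code can selectively override; the address below is just an example:

var builder = new UriBuilder("https://example.com");
builder.Path = "/reservations";
builder.Query = "date=2020-03-22";
// The Uri property builds the finished address:
// https://example.com/reservations?date=2020-03-22
Uri address = builder.Uri;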

It's also worth noting that F# copy-and-update expressions are endomorphisms. That's the reason that when you have immutable records, you need no Test Data Builders.

Summary #

The Builder pattern comes in (at least) three variations: the Gang-of-Four Builder pattern, Fluent Builder, and Immutable Fluent Builder. All are isomorphic to each other, and are equivalent to the endomorphism monoid.

Viewing Builders as endomorphisms may mostly be an academic exercise, but I think it highlights the sweet spot for the pattern. It's particularly useful when you wish to expose an API that offers simple defaults, while at the same time enabling client code to make changes to those defaults. When those changes involve several steps (as e.g. AddJsonBody) you can view each modifier method as a Facade.

Next: Visitor as a sum type.


Builder isomorphisms

Monday, 10 February 2020 07:06:00 UTC

The Builder pattern is equivalent to the Fluent Builder pattern.

This article is part of a series of articles about software design isomorphisms. An isomorphism is when a bi-directional lossless translation exists between two representations. Such translations exist between the Builder pattern and two variations of the Fluent Builder pattern. Since the names sound similar, this is hardly surprising.

Isomorphism between Builder, Fluent Builder, and Immutable Fluent Builder.

Given an implementation that uses one of those three patterns, you can translate your design into one of the other options. This doesn't imply that each is of equal value. When it comes to composability, both versions of Fluent Builder are superior to the classic Builder pattern.

A critique of the Maze Builder example #

In these articles, I usually first introduce the form presented in Design Patterns. The code example given by the Gang of Four is, however, problematic. I'll start by pointing out the problems and then proceed to present a simpler, more useful example.

The book presents an example centred on a MazeBuilder abstract class. The original example is in C++, but I here present my C# interpretation:

public abstract class MazeBuilder
{
    public virtual void BuildMaze() { }
 
    public virtual void BuildRoom(int room) { }
 
    public virtual void BuildDoor(int roomFrom, int roomTo) { }
 
    public virtual Maze GetMaze()
    {
        return null;
    }
}

As the book states, "the maze-building operations of MazeBuilder do nothing by default. They're not declared pure virtual to let derived classes override only those methods in which they're interested." This means that you could technically write a derived class that overrides only BuildRoom. That's unlikely to be useful, since GetMaze still returns null.

Moreover, the presence of the BuildMaze method indicates sequential coupling. A client (a Director, in the pattern language of Design Patterns) is supposed to first call BuildMaze before calling any of the other methods. What happens if a client forgets to call BuildMaze? What happens if client code calls the method after some of the other methods? What happens if it calls it multiple times?

Another issue with the sample code is that it's unclear how it accomplishes its stated goal of separating "the construction of a complex object from its representation." The StandardMazeBuilder presented seems tightly coupled to the Maze class to a degree where it's hard to see how to untangle the two. The book fails to make a compelling example by instead presenting a CountingMazeBuilder that never implements GetMaze. It never constructs the desired complex object.

Don't interpret this critique as a sweeping dismissal of the pattern, or the book in general. As this article series implies, I've invested significant energy in it. I consider the book seminal, but we've learned much since its publication in 1994. A common experience is that not all of the patterns in the book are equally useful, and of those that are, some are useful for different reasons than the book gives. The Builder pattern is an example of that.

The Builder pattern isn't useful only because it enables you to "separate the construction of a complex object from its representation." It's useful because it enables you to present an API that comes with good default behaviour, but which can be tweaked into multiple configurations. The pattern is useful even without polymorphism.

HTTP request Builder #

The HttpRequestMessage class is a versatile API with good default behaviour, but it can be a bit awkward if you want to make an HTTP request with a body and particular headers. You can often get around the problem by using methods like PostAsync on HttpClient, but sometimes you need to drop down to SendAsync. When that happens, you need to build your own HttpRequestMessage objects. A Builder can encapsulate some of that work.

public class HttpRequestMessageBuilder
{
    private readonly Uri url;
    private object? jsonBody;
 
    public HttpRequestMessageBuilder(string url) : this(new Uri(url)) { }
 
    public HttpRequestMessageBuilder(Uri url)
    {
        this.url = url;
        Method = HttpMethod.Get;
    }
 
    public HttpMethod Method { get; set; }
 
    public void AddJsonBody(object jsonBody)
    {
        this.jsonBody = jsonBody;
    }
 
    public HttpRequestMessage Build()
    {
        var message = new HttpRequestMessage(Method, url);
        BuildBody(message);
        return message;
    }
 
    private void BuildBody(HttpRequestMessage message)
    {
        if (jsonBody is null)
            return;
 
        string json = JsonConvert.SerializeObject(jsonBody);
        message.Content = new StringContent(json);
        message.Content.Headers.ContentType.MediaType = "application/json";
    }
}

Compared to Design Patterns' example, HttpRequestMessageBuilder isn't polymorphic. It doesn't inherit from a base class or implement an interface. As I pointed out in my critique of the MazeBuilder example, polymorphism doesn't seem to be the crux of the matter. You could easily introduce a base class or interface that defines the Method, AddJsonBody, and Build members, but what would be the point? Just like the MazeBuilder example fails to present a compelling second implementation, I can't think of another useful implementation of a hypothetical IHttpRequestMessageBuilder interface.

Notice that I dropped the Build prefix from most of the Builder's members. Instead, I reserved the word Build for the method that actually creates the desired object. This is consistent with most modern Builder examples I've encountered.

The HttpRequestMessageBuilder comes with a reasonable set of default behaviours. If you just want to make a GET request, you can easily do that:

var builder = new HttpRequestMessageBuilder(url);
HttpRequestMessage msg = builder.Build();
 
HttpClient client = GetClient();
var response = await client.SendAsync(msg);

Since you only call the builder's Build method, but never any of the other members, you get the default behaviour. A GET request with no body.

Notice that the HttpRequestMessageBuilder protects its invariants. It follows the maxim that you should never be able to put an object into an invalid state. Contrary to Design Patterns' StandardMazeBuilder, it uses its constructors to enforce an invariant. Regardless of what sort of HttpRequestMessage you want to build, it must have a URL. Both constructor overloads require all clients to supply one. (In order to keep the code example as simple as possible, I've omitted all sorts of precondition checks, like checking that url isn't null, that it's a valid URL, and so on.)

If you need to make a POST request with a JSON body, you can change the defaults:

var builder = new HttpRequestMessageBuilder(url);
builder.Method = HttpMethod.Post;
builder.AddJsonBody(new {
    id = Guid.NewGuid(),
    date = "2020-03-22 19:30:00",
    name = "Ælfgifu",
    email = "ælfgifu@example.net",
    quantity = 1 });
HttpRequestMessage msg = builder.Build();
 
HttpClient client = GetClient();
var response = await client.SendAsync(msg);

Other combinations of Method and AddJsonBody are also possible. You could, for example, make a DELETE request without a body by only changing the Method.
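A sketch of that scenario, mirroring the examples above:

var builder = new HttpRequestMessageBuilder(url);
builder.Method = HttpMethod.Delete;
HttpRequestMessage msg = builder.Build();
 
HttpClient client = GetClient();
var response = await client.SendAsync(msg);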

This incarnation of HttpRequestMessageBuilder is cumbersome to use. You must first create a builder object and then mutate it. Once you've invoked its Build method, you rarely need the object any longer, but the builder variable is still in scope. You can address those usage issues by refactoring a Builder to a Fluent Builder.

HTTP request Fluent Builder #

In the Gang of Four Builder pattern, no methods return anything, except the method that creates the object you're building (GetMaze in the MazeBuilder example, Build in the HttpRequestMessageBuilder example). It's always possible to refactor such a Builder so that the void methods return something. They can always return the object itself:

public HttpMethod Method { get; private set; }
 
public HttpRequestMessageBuilder WithMethod(HttpMethod newMethod)
{
    Method = newMethod;
    return this;
}
 
public HttpRequestMessageBuilder AddJsonBody(object jsonBody)
{
    this.jsonBody = jsonBody;
    return this;
}

Changing AddJsonBody is as easy as changing its return type and returning this. Refactoring the Method property is a bit more involved. It's a language feature of C# (and a few other languages) that classes can have properties, so this concern isn't general. In languages without properties, things are simpler. In C#, however, I chose to make the property setter private and instead add a method that returns HttpRequestMessageBuilder. Perhaps it's a little confusing that the name of the method includes the word method, but keep in mind that the method in question is an HTTP method.

You can now create a GET request with a one-liner:

HttpRequestMessage msg = new HttpRequestMessageBuilder(url).Build();

You don't have to declare any builder variable to mutate. Even when you need to change the defaults, you can just start with a builder and keep on chaining method calls:

HttpRequestMessage msg = new HttpRequestMessageBuilder(url)
    .WithMethod(HttpMethod.Post)
    .AddJsonBody(new {
        id = Guid.NewGuid(),
        date = "2020-03-22 19:30:00",
        name = "Ælfgifu",
        email = "ælfgifu@example.net",
        quantity = 1 })
    .Build();

This creates a POST request with a JSON message body.

We can call this pattern Fluent Builder because this version of the Builder pattern has a Fluent Interface.

This usually works well enough in practice, but is vulnerable to aliasing. What happens if you reuse an HttpRequestMessageBuilder object?

var builder = new HttpRequestMessageBuilder(url);
var deleteMsg = builder.WithMethod(HttpMethod.Delete).Build();
var getMsg = builder.Build();

As the variable names imply, the programmer responsible for these three lines of code incorrectly believed that without the call to WithMethod, the builder will use its default behaviour when Build is called. The previous line of code, however, mutated the builder object. Its Method property remains HttpMethod.Delete until another line of code changes it!
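In other words, both messages end up as DELETE requests. A hypothetical pair of assertions (not part of the original example) makes the surprise explicit:

Assert.Equal(HttpMethod.Delete, deleteMsg.Method);
Assert.Equal(HttpMethod.Delete, getMsg.Method); // Surprise: not a GET request.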

HTTP request Immutable Fluent Builder #

You can disarm the aliasing booby trap by making the Fluent Builder immutable. A good first step in that refactoring is making sure that all class fields are readonly:

private readonly Uri url;
private readonly object? jsonBody;

The url field was already marked readonly, so the change only applies to the jsonBody field. In addition to the class fields, don't forget any automatic properties:

public HttpMethod Method { get; }

The HttpMethod property previously had a private setter, but this is now gone. It's also strictly read only.

Now that all data is read only, the only way you can 'change' values is via a constructor. Add a constructor overload that receives all data and chain the other constructors into it:

public HttpRequestMessageBuilder(string url) : this(new Uri(url)) { }
 
public HttpRequestMessageBuilder(Uri url) : this(url, HttpMethod.Get, null) { }
 
private HttpRequestMessageBuilder(Uri url, HttpMethod method, object? jsonBody)
{
    this.url = url;
    Method = method;
    this.jsonBody = jsonBody;
}

I'm usually not keen on allowing null arguments, but I made the all-encompassing constructor private. In that way, at least no client code gets the wrong idea.

The optional modification methods can now only do one thing: return a new object:

public HttpRequestMessageBuilder WithMethod(HttpMethod newMethod)
{
    return new HttpRequestMessageBuilder(url, newMethod, jsonBody);
}
 
public HttpRequestMessageBuilder AddJsonBody(object jsonBody)
{
    return new HttpRequestMessageBuilder(url, Method, jsonBody);
}

The client code looks the same as before, but now you no longer have an aliasing problem:

var builder = new HttpRequestMessageBuilder(url);
var deleteMsg = builder.WithMethod(HttpMethod.Delete).Build();
var getMsg = builder.Build();

Now deleteMsg represents a DELETE request, and getMsg truly represents a GET request.

Since this variation of the Fluent Builder pattern is immutable, it's natural to call it an Immutable Fluent Builder.

You've now seen how to refactor from Builder via Fluent Builder to Immutable Fluent Builder. If these three pattern variations are truly isomorphic, it should also be possible to move in the other direction. I'll leave it as an exercise for the reader to do this with the HTTP request Builder example. Instead, I will briefly discuss another example that starts at the Fluent Builder pattern.

Test Data Fluent Builder #

A prominent example of the Fluent Builder pattern would be the set of all Test Data Builders. I'm going to use the example I've already covered. You can visit the previous article for all details, but in summary, you can, for example, write code like this:

Address address = new AddressBuilder().WithCity("Paris").Build();

This creates an Address object with the City property set to "Paris". The Address class comes with other properties. You can trust that the AddressBuilder gave them values, but you don't know what they are. You can use this pattern in unit tests when you need an Address in Paris, but you don't care about any of the other data.

In my previous article, I implemented AddressBuilder as a Fluent Builder. I did that in order to stay as true to Nat Pryce's original example as possible. Whenever I use the Test Data Builder pattern in earnest, however, I use the immutable variation so that I avoid the aliasing issue.

Test Data Builder as a Gang-of-Four Builder #

You can easily refactor a typical Test Data Builder like AddressBuilder to a shape more reminiscent of the Builder pattern presented in Design Patterns. Apart from the Build method that produces the object being built, change all other methods to void methods:

public class AddressBuilder
{
    private string street;
    private string city;
    private PostCode postCode;
 
    public AddressBuilder()
    {
        this.street = "";
        this.city = "";
        this.postCode = new PostCodeBuilder().Build();
    }
 
    public void WithStreet(string newStreet)
    {
        this.street = newStreet;
    }
 
    public void WithCity(string newCity)
    {
        this.city = newCity;
    }
 
    public void WithPostCode(PostCode newPostCode)
    {
        this.postCode = newPostCode;
    }
 
    public void WithNoPostcode()
    {
        this.postCode = new PostCode();
    }
 
    public Address Build()
    {
        return new Address(this.street, this.city, this.postCode);
    }
}

You can still build a test address in Paris, but it's now more inconvenient.

var addressBuilder = new AddressBuilder();
addressBuilder.WithCity("Paris");
 
Address address = addressBuilder.Build();

You can still use multiple Test Data Builders to build more complex test data, but the classic Builder pattern doesn't compose well.

var invoiceBuilder = new InvoiceBuilder();
var recipientBuilder = new RecipientBuilder();
var addressBuilder = new AddressBuilder();
addressBuilder.WithNoPostcode();
recipientBuilder.WithAddress(addressBuilder.Build());
invoiceBuilder.WithRecipient(recipientBuilder.Build());
Invoice invoice = invoiceBuilder.Build();

These seven lines of code create an Invoice object with an address without a post code. Compare that with the Fluent Builder example in the previous article. This is a clear example that while the variations are isomorphic, they aren't equally useful. The classic Builder pattern isn't as practical as one of the Fluent variations.
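For comparison, here's a sketch of how the same object graph composes with Immutable Fluent Builders, assuming WithAddress and WithRecipient modifiers analogous to the ones shown in the previous article:

Invoice invoice = new InvoiceBuilder()
    .WithRecipient(new RecipientBuilder()
        .WithAddress(new AddressBuilder()
            .WithNoPostcode()
            .Build())
        .Build())
    .Build();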

You might protest that this variation of AddressBuilder, InvoiceBuilder, etcetera isn't equivalent to the Builder pattern. After all, the Builder shown in Design Patterns is polymorphic. That's really not an issue, though. Just extract an interface from the concrete builder:

public interface IAddressBuilder
{
    Address Build();
    void WithCity(string newCity);
    void WithNoPostcode();
    void WithPostCode(PostCode newPostCode);
    void WithStreet(string newStreet);
}

Make the concrete class implement the interface:

public class AddressBuilder : IAddressBuilder

You could argue that this adds no value. You'd be right. This goes contrary to the Reused Abstractions Principle. I think that the same criticism applies to Design Patterns' original description of the pattern, as I've already pointed out. The utility in the pattern comes from how it gives client code good defaults that it can then tweak as necessary.

Summary #

The Builder pattern was originally described in Design Patterns. Later, smart people like Nat Pryce figured out that by letting each mutating operation return the (mutated) Builder, such a Fluent API offered superior composability. A further improvement to the Fluent Builder pattern makes the Builder immutable in order to avoid aliasing issues.

All three variations are isomorphic. Work that one of these variations affords is also afforded by the other variations.

On the other hand, the variations aren't equally useful. Fluent APIs offer superior composability.

Next: Church encoding.


Comments

You can now[, with the fluent builder implementation,] create a GET request with a one-liner:

HttpRequestMessage msg = new HttpRequestMessageBuilder(url).Build();

It is also possible to write that one-liner with the original (non-fluent) builder implementation. Did you mean to show how it is possible with the fluent builder implementation to create a DELETE request with a one-liner? You have such an example two code blocks later.

2020-02-25 02:18 UTC

Tyson, you are, of course, right. The default behaviour could also have been a one-liner with the non-fluent design. Every other configuration, however, can't be a one-liner with the Gang-of-Four pattern, while it can in the Fluent guise.

2020-02-25 6:44 UTC

Among the example uses of your HttpRequestMessageBuilder, I see three HTTP verbs used: GET, DELETE, and POST. Furthermore, a body is added if and only if the method is POST. This matches my expectations gained from my limited experience doing web programming. If a GET or DELETE request had a body or if a POST request did not have a body, then I would suspect that such behavior was a bug.

For the sake of a question that I would like to ask, let's suppose that a body must be added if and only if the method is POST. Under this assumption, HttpRequestMessageBuilder can create invalid messages. For example, it can create a GET request with a body, and it can create a POST request without a body. Under this assumption, how would you modify your design so that only valid messages can be created?

2020-02-25 14:34 UTC

Tyson, thank you for another inspiring question! It gives me a good motivation to write about polymorphic Builders. I'll try to address this question in a future article.

2020-03-02 8:40 UTC

Tyson, I've now attempted to answer your question in a new article.

2020-03-09 6:53 UTC

Non-exceptional averages

Monday, 03 February 2020 06:38:00 UTC

How do you code without exceptions? Here's one example.

Encouraging object-oriented programmers to avoid throwing exceptions is as fun as telling them to renounce null references. To be fair, exception-throwing is such an ingrained feature of C#, Java, C++, etcetera that it can be hard to see how to do without it.

To be clear, I don't insist that you pretend that exceptions don't exist in languages that have them. I'm also not advocating that you catch all exceptions in order to resurface them as railway-oriented programming. On the other hand, I do endorse the generally good advice that you shouldn't use exceptions for control flow.

What can you do instead? Despite all the warnings against railway-oriented programming, Either is still a good choice for a certain kind of control flow. Exceptions are for exceptional situations, such as network partitions, running out of memory, disk failures, and so on. Many run-time errors are both foreseeable and preventable. Prefer code that prevents errors.

There's a few ways you can do that. One of them is to protect invariants by enforcing pre-conditions. If you have a static type system, you can use the type system to prevent errors.

Average duration #

How would you calculate the average of a set of durations? You might, for example, need to calculate average duration of message handling for a polling consumer. C# offers many built-in overloads of the Average extension method, but none that calculates the average of TimeSpan values.

How would you write that method yourself?

It's not a trick question.

Based on my experience coaching development teams, this is a representative example:

public static TimeSpan Average(this IEnumerable<TimeSpan> timeSpans)
{
    var sum = TimeSpan.Zero;
    var count = 0;
    foreach (var ts in timeSpans)
    {
        sum += ts;
        count++;
    }
    return sum / count;
}

This gets the job done in most situations, but it has two error modes. It doesn't work if timeSpans is empty, and it doesn't work if it's infinite.

When the input collection is empty, you'll be trying to divide by zero, which isn't allowed. How do you deal with that? Most programmers I've met just shrug and say: don't call the method with an empty collection. Apparently, it's your responsibility as the caller. You have to memorise that this particular Average method has that particular precondition.

I don't think that's a professional position. This puts the burden on client developers. In a world like that, you have to learn by rote the preconditions of thousands of APIs.

What can you do? You could add a Guard Clause to the method.

Guard Clause #

Adding a Guard Clause doesn't really make the method much easier to reason about for client developers, but at least it protects an invariant.

public static TimeSpan Average(this IEnumerable<TimeSpan> timeSpans)
{
    if (!timeSpans.Any())
        throw new ArgumentOutOfRangeException(
            nameof(timeSpans),
            "Can't calculate the average of an empty collection.");
 
    var sum = TimeSpan.Zero;
    var count = 0;
    foreach (var ts in timeSpans)
    {
        sum += ts;
        count++;
    }
    return sum / count;
}

Don't get me wrong. I often write code like this because it makes it easier for me as a library developer to reason about the rest of the method body. On the other hand, it basically just replaces one run-time exception with another. Before I added the Guard Clause, calling Average with an empty collection would cause it to throw an OverflowException; now it throws an ArgumentOutOfRangeException.

From client developers' perspective, this is only a marginal improvement. You're still getting no help from the type system, but at least the run-time error is a bit more informative. Sometimes, that's the best you can do.

Finite collections #

The Average method has two preconditions, but we've only addressed one. The other precondition is that the input timeSpans must be finite. Unfortunately, this compiles:

static IEnumerable<T> InfinitelyRepeat<T>(T x)
{
    while (true) yield return x;
}
var ts = new TimeSpan(1, 2, 3, 4);
var tss = InfinitelyRepeat(ts);
 
var avg = tss.Average();

Since tss infinitely repeats ts, the Average method call (theoretically) loops forever; in fact it quickly overflows because it keeps adding TimeSpan values together.

Infinite collections aren't allowed. Can you make that precondition explicit?

I don't know of a way to test that timeSpans is finite at run time, but I can change the input type:

public static TimeSpan Average(this IReadOnlyCollection<TimeSpan> timeSpans)
{
    if (!timeSpans.Any())
        throw new ArgumentOutOfRangeException(
            nameof(timeSpans),
            "Can't calculate the average of an empty collection.");
 
    var sum = TimeSpan.Zero;
    foreach (var ts in timeSpans)
        sum += ts;
    return sum / timeSpans.Count;
}

Instead of accepting any IEnumerable<TimeSpan> as an input argument, I've now constrained timeSpans to an IReadOnlyCollection<TimeSpan>. This interface has been in .NET since .NET 4.5 (I think), but it lives a quiet existence. Few people know of it.

It's just IEnumerable<T> with an extra constraint:

public interface IReadOnlyCollection<T> : IEnumerable<T>
{
    int Count { get; }
}

The Count property strongly implies that the IEnumerable<T> is finite. Also, that the value is an int implies that the maximum size of the collection is 2,147,483,647. That's probably going to be enough for most day-to-day use.

You can no longer pass an infinite stream of values to the Average method. It's simply not going to compile. That both communicates and protects the invariant that infinite collections aren't allowed. It also makes the implementation code simpler, since the method doesn't have to count the elements. That information is already available from timeSpans.Count.

If a type can address one invariant, can it also protect the other?

Non-empty collection #

You can change the input type again. Here I've used this NotEmptyCollection<T> implementation:

public static TimeSpan Average(this NotEmptyCollection<TimeSpan> timeSpans)
{
    var sum = timeSpans.Head;
    foreach (var ts in timeSpans.Tail)
        sum += ts;
    return sum / timeSpans.Count;
}

Now client code can no longer call the Average method with an empty collection. That's also not going to compile.

You've replaced a run-time check with a compile-time check. It's now clear to client developers who want to call the method that they must supply a NotEmptyCollection<TimeSpan>, instead of just any IReadOnlyCollection<TimeSpan>.
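Calling it might look like this; a sketch that assumes the NotEmptyCollection<TimeSpan> class from the linked article, including its Count property:

var durations = new NotEmptyCollection<TimeSpan>(
    TimeSpan.FromMinutes(1),
    TimeSpan.FromMinutes(2),
    TimeSpan.FromMinutes(3));
// The head guarantees at least one element, so this can't divide by zero.
TimeSpan average = durations.Average(); // 00:02:00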

You can also simplify the implementation code:

public static TimeSpan Average(this NotEmptyCollection<TimeSpan> timeSpans)
{
    var sum = timeSpans.Aggregate((x, y) => x + y);
    return sum / timeSpans.Count;
}

How do we know that NotEmptyCollection<T> contains at least one element? The constructor enforces that constraint:

public NotEmptyCollection(T head, params T[] tail)
{
    if (head == null)
        throw new ArgumentNullException(nameof(head));
 
    this.Head = head;
    this.Tail = tail;
}

But wait, there's a Guard Clause and a throw there! Have we even accomplished anything, or did we just move the throw around?

Parse, don't validate #

A Guard Clause is a kind of validation. It validates that input fulfils preconditions. The problem with validation is that you have to repeat it in various different places. Every time you receive some data as an input argument, it may or may not have been validated. A receiving method can't tell. There's no flag on a string, or a number, or a collection, which is set when data has been validated.

Every method that receives such an input will have to perform validation, just to be sure that the preconditions hold. This leads to validation code being duplicated over a code base. When you duplicate code, you later update it in most of the places it appears, but forget to update it in a few places. Even if you're meticulous, a colleague may not know about the proper way of validating a piece of data. This leads to bugs.

As Alexis King explains in her Parse, don’t validate article, 'parsing' is the process of validating and transforming input of a weaker type into a value of a stronger type. The stronger type indicates that validation has happened. It's like a Boolean flag that indicates that, yes, the data contained in the type has been through validation, and found to be valid.

This is also the case for NotEmptyCollection<T>. If you have an object of that type, you know that it has already been validated. You know that the collection isn't empty. Even if it looks like we've just replaced one exception with another, that's not the point. The point is that we've replaced scattered and unsystematic validation code with a single verification step.
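
To make that concrete, here's a hypothetical boundary method. It's my own sketch; the name TryParseNonEmpty and the use of LINQ are assumptions, not part of the example code base. It performs the emptiness check exactly once, at the edge of the system; from then on, the NotEmptyCollection<TimeSpan> type itself carries the evidence:

// Hypothetical sketch: parse a plain collection into the stronger type once.
public static bool TryParseNonEmpty(
    this IReadOnlyCollection<TimeSpan> candidates,
    out NotEmptyCollection<TimeSpan> parsed)
{
    if (candidates.Any())
    {
        parsed = new NotEmptyCollection<TimeSpan>(
            candidates.First(),
            candidates.Skip(1).ToArray());
        return true;
    }
 
    parsed = null;
    return false;
}

Code downstream of such a method can call Average without repeating the check; the type has already done the talking.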

You may still be left with the nagging doubt that I didn't really avoid throwing an exception. I think that the NotEmptyCollection<T> constructor strikes a pragmatic balance. If you look only at the information revealed by the type (i.e. what an IDE would display), you'll see this when you program against the class:

public NotEmptyCollection(T head, params T[] tail)

While you could, technically, pass null as the head parameter, it should be clear to you that you're trying to do something you're not supposed to do: head is not an optional argument. Had it been optional, the API designer should have provided an overload that you could call without any value. Such a constructor overload isn't available here, so if you try to cheat the compiler by passing null, don't be surprised to get a run-time exception.

For what it's worth, I believe that you can only be pragmatic if you know how to be dogmatic. Is it possible to protect NotEmptyCollection<T>'s invariants without throwing exceptions?

Yes, you could do that by making the constructor private and instead afford a static factory method that returns a Maybe or Either value. In Haskell, this is typically called a smart constructor. It's only a few lines of code, so I could easily show it here. I chose not to, though, because I'm concerned that readers will interpret this article the wrong way. I like Maybe and Either a lot, but I agree with the above critics that it may not be idiomatic in object-oriented languages.

Summary #

Encapsulation is central to object-oriented design. It's the notion that it's an object's own responsibility to protect its invariants. In statically typed object-oriented programming languages, objects are instances of classes. Classes are types. Types encapsulate invariants; they carry with them guarantees.

You can sometimes model invariants by using types. Instead of performing a run-time check on input arguments, you can declare constructors and methods in such a way that they only take arguments that are already guaranteed to be valid.

That's one way to reduce the amount of exceptions that your code throws.


Comments

Great post. I too prefer to avoid exceptions by strengthening preconditions using types.

"Since tss infinitely repeats ts, the Average method call (theoretically) loops forever; in fact it quickly overflows because it keeps adding TimeSpan values together."

I am not sure what you mean here.  My best guess is that you are saying that this code would execute forever except that it will overflow, which will halt the execution.  However, I think the situation is ambiguous.  This code is impure because, as the Checked and Unchecked documentation says, its behavior depends on whether or not the -checked compiler option is given.  This dependency on the compiler option can be removed by wrapping this code in a checked or unchecked block, which would either result in a thrown exception or an infinite loop respectively.

"This gets the job done in most situations, but it has two error modes. It doesn't work if timeSpans is empty, and it doesn't work if it's infinite."

There is a third error mode, and it exists in every implementation you gave.  The issue of overflow is not restricted to the case of infinitely many TimeSpans.  It only takes two.  I know this bug as "the last binary search bug".  That article shows how to correctly compute the average of two integers without overflowing.  A correct implementation for computing the average of more than two integers is to map each element to a mixed fraction with the count as the divisor and then appropriately aggregate those values.  The implementation given in this Quora answer seems correct to me.
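
(For reference, a minimal sketch of the usual fix for the two-value case, assuming non-negative operands such as array indices:)

// (low + high) / 2 can overflow; low + (high - low) / 2 can't,
// as long as 0 <= low <= high.
static int Midpoint(int low, int high) => low + (high - low) / 2;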

I know all this is unrelated to the topic of your post, but I also know how much you prefer to use examples that avoid this kind of accidental complexity.  Me too!  However, I still like your example and can't think of a better one at the moment.

2020-02-05 14:13 UTC

Tyson, thank you for writing. Given an infinite stream of values, the method throws an OverflowException. This is because TimeSpan addition explicitly does that:

> TimeSpan.MaxValue + new TimeSpan(1)
System.OverflowException: TimeSpan overflowed because the duration is too long.
  + System.TimeSpan.Add(System.TimeSpan)
  + System.TimeSpan.op_Addition(System.TimeSpan, System.TimeSpan)

This little snippet from C# Interactive also illustrates the third error mode that I hadn't considered. Good point, that.

2020-02-06 6:47 UTC

Ah, yes. You are correct. Thanks for pointing out my mistake. Another way to verify this is inspecting TimeSpan.Add in Microsoft's reference source. I should have done those checks before posting. Thanks again!

2020-02-06 13:33 UTC
