Do ORMs reduce the need for mapping?

Monday, 18 September 2023 14:40:00 UTC

With some Entity Framework examples in C#.

In a recent comment, a reader asked me to expand on my position on object-relational mappers (ORMs), which is that I'm not a fan:

I consider ORMs a waste of time: they create more problems than they solve.

While I acknowledge that only a Sith deals in absolutes, I favour clear assertions over guarded language. I don't really mean it that categorically, but I do stand by the general sentiment. In this article I'll attempt to describe why I don't reach for ORMs when querying or writing to a relational database.

As always, any exploration of such a kind is made in a context, and this article is no exception. Before proceeding, allow me to delineate the scope. If your context differs from mine, what I write may not apply to your situation.

Scope #

It's been decades since I last worked on a system where the database 'came first'. The last time that happened, the database was hidden behind an XML-based RPC API that tunnelled through HTTP. Not a REST API by a long shot.

Since then, I've worked on various systems. Some used relational databases, some document databases, some worked with CSV, or really old legacy APIs, etc. Common to these systems was that they were not designed around a database. Rather, they were developed with an eye to the Dependency Inversion Principle, keeping storage details out of the Domain Model. Many were developed with test-driven development (TDD).

When I evaluate whether or not to use an ORM in situations like these, the core application logic is my main design driver. As I describe in Code That Fits in Your Head, I usually develop (vertical) feature slices one at a time, utilising an outside-in TDD process, during which I also figure out how to save or retrieve data from persistent storage.

Thus, in systems like these, storage implementation is an artefact of the software architecture. If a relational database is involved, the schema must adhere to the needs of the code; not the other way around.

To be clear, then, this article doesn't discuss typical CRUD-heavy applications that are mostly forms over relational data, with little or no application logic. If you're working with such a code base, an ORM might be useful. I can't really tell, since I last worked with such systems at a time when ORMs didn't exist.

The usual suspects #

The most common criticism of ORMs (that I've come across) is typically related to the queries they generate. People who are skilled in writing SQL by hand, or who are concerned about performance, may look at the SQL that an ORM generates and dislike it for that reason.

It's my impression that ORMs have come a long way over the decades, but frankly, the generated SQL is not really what concerns me. It never was.

In the abstract, Ted Neward already outlined the problems in the seminal article The Vietnam of Computer Science. That problem description may, however, be too theoretical to connect with most programmers, so I'll try a more example-driven angle.

Database operations without an ORM #

Once more I turn to the trusty example code base that accompanies Code That Fits in Your Head. In it, I used SQL Server as the example database, and ADO.NET as the data access technology.

I considered this more than adequate for saving and reading restaurant reservations. Here, for example, is the code that creates a new reservation row in the database:

public async Task Create(int restaurantId, Reservation reservation)
{
    if (reservation is null)
        throw new ArgumentNullException(nameof(reservation));
 
    using var conn = new SqlConnection(ConnectionString);
    using var cmd = new SqlCommand(createReservationSql, conn);
    cmd.Parameters.AddWithValue("@Id", reservation.Id);
    cmd.Parameters.AddWithValue("@RestaurantId", restaurantId);
    cmd.Parameters.AddWithValue("@At", reservation.At);
    cmd.Parameters.AddWithValue("@Name", reservation.Name.ToString());
    cmd.Parameters.AddWithValue("@Email", reservation.Email.ToString());
    cmd.Parameters.AddWithValue("@Quantity", reservation.Quantity);
 
    await conn.OpenAsync().ConfigureAwait(false);
    await cmd.ExecuteNonQueryAsync().ConfigureAwait(false);
}
 
private const string createReservationSql = @"
    INSERT INTO [dbo].[Reservations] (
        [PublicId], [RestaurantId], [At], [Name], [Email], [Quantity])
    VALUES (@Id, @RestaurantId, @At, @Name, @Email, @Quantity)";

Yes, there's mapping, even if it's 'only' from a Domain Object to command parameter strings. As I'll argue later, if there's a way to escape such mapping, I'm not aware of it. ORMs don't seem to solve that problem.

This, however, seems to be the reader's main concern:

"I can work with raw SQL ofcourse... but the mapping... oh the mapping..."

It's not a concern that I share, but again I'll remind you that if your context differs substantially from mine, what doesn't concern me could reasonably concern you.

You may argue that the above example isn't representative, since it only involves a single table. No foreign key relationships are involved, so perhaps the example is artificially easy.

In order to work with a slightly more complex schema, I decided to port the read-only in-memory restaurant database (the one that keeps track of the restaurants - the tenants - of the system) to SQL Server.

Restaurants schema #

In the book's sample code base, I'd only stored restaurant configurations as JSON config files, since I considered it out of scope to include an online tenant management system. Converting to a relational model wasn't hard, though. Here's the database schema:

CREATE TABLE [dbo].[Restaurants] (
    [Id]               INT            NOT NULL,
    [Name]             NVARCHAR (50)  NOT NULL UNIQUE,
    [OpensAt]          TIME           NOT NULL,
    [LastSeating]      TIME           NOT NULL,
    [SeatingDuration]  TIME           NOT NULL
    PRIMARY KEY CLUSTERED ([Id] ASC)
)
 
CREATE TABLE [dbo].[Tables] (
    [Id]               INT            NOT NULL IDENTITY,
    [RestaurantId]     INT            NOT NULL REFERENCES [dbo].[Restaurants](Id),
    [Capacity]         INT            NOT NULL,
    [IsCommunal]       BIT            NOT NULL
    PRIMARY KEY CLUSTERED ([Id] ASC)
)

This little subsystem requires two database tables: One that keeps track of the overall restaurant configuration, such as name, opening and closing times, and another that lists a restaurant's physical tables.

You may argue that this is still too simple to realistically capture the intricacies of existing database systems, but conversely I'll remind you that the scope of this article is the sort of system where you develop and design the application first; not a system where you're given a relational database upon which you must create an application.

Had I been given this assignment in a realistic setting, a relational database probably wouldn't have been my first choice. Some kind of document database, or even blob storage, strikes me as a better fit. Still, this article is about ORMs, so I'll pretend that there are external circumstances that dictate a relational database.

To test the system, I also created a script to populate these tables. Here's part of it:

INSERT INTO [dbo].[Restaurants] ([Id], [Name], [OpensAt], [LastSeating], [SeatingDuration])
VALUES (1, N'Hipgnosta', '18:00', '21:00', '6:00')
 
INSERT INTO [dbo].[Tables] ([RestaurantId], [Capacity], [IsCommunal])
VALUES (1, 10, 1)
 
INSERT INTO [dbo].[Restaurants] ([Id], [Name], [OpensAt], [LastSeating], [SeatingDuration])
VALUES (2112, N'Nono', '18:00', '21:00', '6:00')
 
INSERT INTO [dbo].[Tables] ([RestaurantId], [Capacity], [IsCommunal])
VALUES (2112, 6, 1)
 
INSERT INTO [dbo].[Tables] ([RestaurantId], [Capacity], [IsCommunal])
VALUES (2112, 4, 1)
 
INSERT INTO [dbo].[Tables] ([RestaurantId], [Capacity], [IsCommunal])
VALUES (2112, 2, 0)
 
INSERT INTO [dbo].[Tables] ([RestaurantId], [Capacity], [IsCommunal])
VALUES (2112, 2, 0)
 
INSERT INTO [dbo].[Tables] ([RestaurantId], [Capacity], [IsCommunal])
VALUES (2112, 4, 0)
 
INSERT INTO [dbo].[Tables] ([RestaurantId], [Capacity], [IsCommunal])
VALUES (2112, 4, 0)

There are more rows than this, but this should give you an idea of what the data looks like.

Reading restaurant data without an ORM #

Due to the foreign key relationship, reading restaurant data from the database is a little more involved than reading from a single table.

public async Task<Restaurant?> GetRestaurant(string name)
{
    using var cmd = new SqlCommand(readByNameSql);
    cmd.Parameters.AddWithValue("@Name", name);
    
    var restaurants = await ReadRestaurants(cmd);
    return restaurants.SingleOrDefault();
}
 
private const string readByNameSql = @"
    SELECT [Id], [Name], [OpensAt], [LastSeating], [SeatingDuration]
    FROM [dbo].[Restaurants]
    WHERE [Name] = @Name
 
    SELECT [RestaurantId], [Capacity], [IsCommunal]
    FROM [dbo].[Tables]
    JOIN [dbo].[Restaurants]
    ON [dbo].[Tables].[RestaurantId] = [dbo].[Restaurants].[Id]
    WHERE [Name] = @Name";

There's more than one way to construct the query. You could make a single query with a join, in which case you'd get rows with repeated restaurant data, and you'd then need to detect duplicates, or you could do as I've done here: Query each table to get multiple result sets.

I'm not claiming that this is better in any way. I only chose this option because I found the code that I had to write less offensive.
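
For comparison, here's a sketch of what the join-based alternative could look like. This isn't code from the sample code base; the constant name and the exact column list are my own guesses, and the reading code would still have to collapse the duplicated restaurant columns:

// Hypothetical alternative, not the option I chose: a single query with a
// join. The restaurant columns repeat for every table row.
private const string readByNameJoinSql = @"
    SELECT [dbo].[Restaurants].[Id], [Name], [OpensAt], [LastSeating], [SeatingDuration],
        [Capacity], [IsCommunal]
    FROM [dbo].[Restaurants]
    LEFT JOIN [dbo].[Tables]
    ON [dbo].[Tables].[RestaurantId] = [dbo].[Restaurants].[Id]
    WHERE [Name] = @Name";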

Since the IRestaurantDatabase interface defines three different kinds of queries (GetAll(), GetRestaurant(int id), and GetRestaurant(string name)), I invoked the rule of three and extracted a helper method:

private async Task<IEnumerable<Restaurant>> ReadRestaurants(SqlCommand cmd)
{
    using var conn = new SqlConnection(ConnectionString);
    cmd.Connection = conn;
 
    await conn.OpenAsync();
    using var rdr = await cmd.ExecuteReaderAsync();
 
    var restaurants = Enumerable.Empty<Restaurant>();
    while (await rdr.ReadAsync())
        restaurants = restaurants.Append(ReadRestaurantRow(rdr));
 
    if (await rdr.NextResultAsync())
        while (await rdr.ReadAsync())
            restaurants = ReadTableRow(rdr, restaurants);
 
    return restaurants;
}

The ReadRestaurants method does the overall work of opening the database connection, executing the query, and moving through rows and result sets. Again, we'll find mapping code hidden in helper methods:

private static Restaurant ReadRestaurantRow(SqlDataReader rdr)
{
    return new Restaurant(
        (int)rdr["Id"],
        (string)rdr["Name"],
        new MaitreD(
            new TimeOfDay((TimeSpan)rdr["OpensAt"]),
            new TimeOfDay((TimeSpan)rdr["LastSeating"]),
            (TimeSpan)rdr["SeatingDuration"]));
}

As the name suggests, ReadRestaurantRow reads a row from the Restaurants table and converts it into a Restaurant object. At this time, however, it creates each MaitreD object without any tables. This is possible because one of the MaitreD constructors takes a params array as the last parameter:

public MaitreD(
    TimeOfDay opensAt,
    TimeOfDay lastSeating,
    TimeSpan seatingDuration,
    params Table[] tables) :
    this(opensAt, lastSeating, seatingDuration, tables.AsEnumerable())
{
}

Only when the ReadRestaurants method moves on to the next result set can it add tables to each restaurant:

private static IEnumerable<Restaurant> ReadTableRow(
    SqlDataReader rdr,
    IEnumerable<Restaurant> restaurants)
{
    var restaurantId = (int)rdr["RestaurantId"];
    var capacity = (int)rdr["Capacity"];
    var isCommunal = (bool)rdr["IsCommunal"];
    var table = isCommunal ? Table.Communal(capacity) : Table.Standard(capacity);
 
    return restaurants.Select(r => r.Id == restaurantId ? AddTable(r, table) : r);
}

As was also the case in ReadRestaurantRow, this method uses string-based indexers on the rdr to extract the data. I'm no fan of stringly-typed code, but at least I have automated tests that exercise these methods.

Could an ORM help by creating strongly-typed classes that model database tables? To a degree; I'll discuss that later.

In any case, since the code base follows the Functional Core, Imperative Shell architecture, the entire Domain Model is made of immutable data types with pure functions. Thus, ReadTableRow has to iterate over all restaurants and add the table when the Id matches. AddTable does that:

private static Restaurant AddTable(Restaurant restaurant, Table table)
{
    return restaurant.Select(m => m.WithTables(m.Tables.Append(table).ToArray()));
}

I can think of other ways to solve the overall mapping task when using ADO.NET, but this was what made most sense to me.

Reading restaurants with Entity Framework #

Does an ORM like Entity Framework (EF) improve things? To a degree, but not enough to outweigh the disadvantages it also brings.

In order to investigate, I followed the EF documentation to scaffold code from a database I'd set up for only that purpose. For the Tables table it created the following Table class and a similar Restaurant class.

public partial class Table
{
    public int Id { get; set; }
 
    public int RestaurantId { get; set; }
 
    public int Capacity { get; set; }
 
    public bool IsCommunal { get; set; }
 
    public virtual Restaurant Restaurant { get; set; } = null!;
}

Hardly surprising. Also, hardly object-oriented, but more about that later, too.

Entity Framework didn't, by itself, add a Tables collection to the Restaurant class, so I had to do that by hand, as well as modify the DbContext-derived class to tell it about this relationship:

entity.OwnsMany(r => r.Tables, b =>
{
    b.Property<int>(t => t.Id).ValueGeneratedOnAdd();
    b.HasKey(t => t.Id);
});

I thought that such a simple foreign key relationship would be something an ORM would help with, but apparently not.
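
For context, here's roughly what those hand-written additions might look like. This is a sketch; the exact shape in my code may differ slightly, and the OnModelCreating body is shown only to indicate where the configuration above lives:

// Hand-added collection navigation. The scaffolded part of the partial
// Restaurant class lives in another file.
public partial class Restaurant
{
    public virtual ICollection<Table> Tables { get; set; } = new List<Table>();
}
 
// In the DbContext-derived class:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Restaurant>(entity =>
    {
        entity.OwnsMany(r => r.Tables, b =>
        {
            b.Property<int>(t => t.Id).ValueGeneratedOnAdd();
            b.HasKey(t => t.Id);
        });
    });
}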

With that in place, I could now rewrite the above GetRestaurant method to use Entity Framework instead of ADO.NET:

public async Task<Restaurants.Restaurant?> GetRestaurant(string name)
{
    using var db = new RestaurantsContext(ConnectionString);
    var dbRestaurant = await db.Restaurants.FirstOrDefaultAsync(r => r.Name == name);
    if (dbRestaurant == null)
        return null;
 
    return ToDomainModel(dbRestaurant);
}

The method now queries the database, and EF automatically returns a populated object. This would be nice if it were the right kind of object, but alas, it isn't. GetRestaurant still has to call a helper method to convert to the correct Domain Object:

private static Restaurants.Restaurant ToDomainModel(Restaurant restaurant)
{
    return new Restaurants.Restaurant(
        restaurant.Id,
        restaurant.Name,
        new MaitreD(
            new TimeOfDay(restaurant.OpensAt),
            new TimeOfDay(restaurant.LastSeating),
            restaurant.SeatingDuration,
            restaurant.Tables.Select(ToDomainModel).ToList()));
}

While this helper method converts an EF Restaurant object to a proper Domain Object (Restaurants.Restaurant), it also needs another helper to convert the table objects:

private static Restaurants.Table ToDomainModel(Table table)
{
    if (table.IsCommunal)
        return Restaurants.Table.Communal(table.Capacity);
    else
        return Restaurants.Table.Standard(table.Capacity);
}

As should be clear by now, using vanilla EF doesn't reduce the need for mapping.

Granted, the mapping code is a bit simpler, but you still need to remember to map restaurant.Name to the right constructor parameter, restaurant.OpensAt and restaurant.LastSeating to their correct places, table.Capacity to a constructor argument, and so on. If you make changes to the database schema or the Domain Model, you'll need to edit this code.

Encapsulation #

This is the point where more than one reader wonders: Can't you just..?

In short, no, I can't just.

The most common reaction is most likely that I'm doing this all wrong. I'm supposed to use the EF classes as my Domain Model.

But I can't, and I won't. I can't because I already have classes in place that serve that purpose. I also will not, because it would violate the Dependency Inversion Principle. As I recently described, the architecture is Ports and Adapters, or, if you will, Clean Architecture. The database Adapter should depend on the Domain Model; the Domain Model shouldn't depend on the database implementation.
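
To make the direction of that dependency concrete, the IRestaurantDatabase interface mentioned earlier is the port that the SQL code in this article implements. A sketch of it, where the exact return types are my guess apart from the GetRestaurant(string name) signature already shown:

public interface IRestaurantDatabase
{
    Task<IEnumerable<Restaurant>> GetAll();
 
    Task<Restaurant?> GetRestaurant(int id);
 
    Task<Restaurant?> GetRestaurant(string name);
}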

Okay, but couldn't I have generated the EF classes in the Domain Model? After all, a class like the above Table is just a POCO Entity. It doesn't depend on the Entity Framework. I could have those classes in my Domain Model, put my DbContext in the data access layer, and have the best of both worlds. Right?

The code shown so far hints at a particular API afforded by the Domain Model. If you've read my book, you already know what comes next. Here's the Table Domain Model's API:

public sealed class Table
{ 
    public static Table Standard(int seats)
 
    public static Table Communal(int seats)
 
    public int Capacity { get; }
 
    public int RemainingSeats { get; }
 
    public Table Reserve(Reservation reservation)
 
    public T Accept<T>(ITableVisitor<T> visitor)
}

A couple of qualities of this design should be striking: There's no visible constructor - not even one that takes parameters. Instead, the type affords two static creation functions. One creates a standard table, the other a communal table. My book describes the difference between these types, and so does the Maître d' kata.

This isn't some frivolous design choice of mine, but rather quite deliberate. That Table class is a Visitor-encoded sum type. You can debate whether I should have modelled a table as a sum type or a polymorphic object, but now that I've chosen a sum type, it should be explicit in the API design.

"Explicit is better than implicit."

When we program, we make many mistakes. It's important to discover the mistakes as soon as possible. With a compiled language, the first feedback you get is from the compiler. I favour leveraging the compiler, and its type system, to prevent as many mistakes as possible. That's what Hillel Wayne calls constructive data. Make illegal states unrepresentable.

I could, had I thought of it at the time, have introduced a predicative natural-number wrapper of integers, in which case I could have strengthened the contract of Table even further:

public sealed class Table
{ 
    public static Table Standard(NaturalNumber capacity)
 
    public static Table Communal(NaturalNumber capacity)
 
    public NaturalNumber Capacity { get; }
 
    public int RemainingSeats { get; }
 
    public Table Reserve(Reservation reservation)
 
    public T Accept<T>(ITableVisitor<T> visitor)
}
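
Such a NaturalNumber type isn't part of the sample code base, but a minimal sketch of a predicative wrapper could look like this:

public sealed class NaturalNumber
{
    private readonly int value;
 
    public NaturalNumber(int value)
    {
        if (value < 1)
            throw new ArgumentOutOfRangeException(
                nameof(value),
                "A natural number must be positive.");
        this.value = value;
    }
 
    // Any NaturalNumber that exists is guaranteed to be at least 1.
    public static implicit operator int(NaturalNumber n) => n.value;
}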

The point is that I take encapsulation seriously, and my interpretation of the concept is heavily inspired by Bertrand Meyer's Object-Oriented Software Construction. That view of encapsulation emphasises contracts (preconditions, invariants, postconditions) rather than information hiding.

As I described in a previous article, you can't model all preconditions and invariants with types, but you can still let the type system do much heavy lifting.

This principle applies to all classes that are part of the Domain Model; not only Table, but also Restaurant:

public sealed class Restaurant
{
    public Restaurant(int id, string name, MaitreD maitreD)
 
    public int Id { get; }
    public string Name { get; }
    public MaitreD MaitreD { get; }
 
    public Restaurant WithId(int newId)
 
    public Restaurant WithName(string newName)
 
    public Restaurant WithMaitreD(MaitreD newMaitreD)
 
    public Restaurant Select(Func<MaitreD, MaitreD> selector)
}

While this class does have a public constructor, it makes use of another design choice that Entity Framework doesn't support: It nests one rich object (MaitreD) inside another. Why does it do that?

Again, this is far from a frivolous design choice I made just to be difficult. Rather, it's a result of a need-to-know principle (which strikes me as closely related to the Single Responsibility Principle): A class should only contain the information it needs in order to perform its job.

The MaitreD class does all the heavy lifting when it comes to deciding whether or not to accept reservations, how to allocate tables, etc. It doesn't, however, need to know the id or name of the restaurant in order to do that. Keeping that information out of MaitreD, and instead in the Restaurant wrapper, makes the code simpler and easier to use.
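
For reference, here's an outline of the MaitreD surface implied by the code in this article, in the same style as the other listings. Apart from the constructor, Tables, and WithTables used above, the member list is my guess; the actual class also has the members that do the heavy lifting, such as deciding whether to accept a reservation:

public sealed class MaitreD
{
    public MaitreD(
        TimeOfDay opensAt,
        TimeOfDay lastSeating,
        TimeSpan seatingDuration,
        params Table[] tables)
 
    public TimeOfDay OpensAt { get; }
    public TimeOfDay LastSeating { get; }
    public TimeSpan SeatingDuration { get; }
    public IEnumerable<Table> Tables { get; }
 
    public MaitreD WithTables(params Table[] newTables)
}
 
Notice what's absent: no restaurant id, no name.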

The bottom line of all this is that I value encapsulation over 'easy' database mapping.

Limitations of Entity Framework #

The promise of an object-relational mapper is that it automates mapping between objects and the database. Is that promise realised?

In its current incarnation, it doesn't look as though Entity Framework supports mapping to and from the Domain Model. With the above tweaks, it supports the database schema that I've described, but only via 'Entity classes'. I still have to map to and from the 'Entity objects' and the actual Domain Model. Not much is gained.

One should, of course, be careful not to draw too strong inferences from this example. First, proving anything impossible is generally difficult. Just because I can't find a way to do what I want, I can't conclude that it's impossible. That a few other people have told me it's impossible doesn't constitute strong evidence either.

Second, even if it's impossible today, it doesn't follow that it will be impossible forever. Perhaps Entity Framework will support my Domain Model in the future.

Third, we can't conclude that just because Entity Framework (currently) doesn't support my Domain Model it follows that no object-relational mapper (ORM) does. There might be another ORM out there that perfectly supports my design, but I'm just not aware of it.

Based on my experience and what I see, read, and hear, I don't think any of that is likely. Things might change, though.

Net benefit or drawback? #

Perhaps, despite all of this, you still prefer ORMs. You may compare my ADO.NET code to my Entity Framework code and conclude that the EF code still looks simpler. After all, when using ADO.NET I have to jump through some hoops to load the correct tables associated with each restaurant, whereas EF automatically handles that for me. The EF version requires fewer lines of code.

In isolation, the fewer lines of code the better. This seems like an argument for using an ORM after all, even if the original promise remains elusive. Take what you can get.

On the other hand, when you take on a dependency, there's usually a cost that comes along. A library like Entity Framework isn't free. While you don't pay a licence fee for it, it comes with other costs. You have to learn how to use it, and so do your colleagues. You also have to keep up to date with changes.

Every time some exotic requirement comes up, you'll have to spend time investigating how to address it with that ORM's API. This may lead to a game of Whac-A-Mole where every tweak to the ORM leads you further down the rabbit hole, and couples your code tighter with it.

You can only keep up with so many things. What's the best investment of your time, and the time of your team mates? Learning and knowing SQL, or learning and keeping up to date with a particular ORM?

I learned SQL decades ago, and that knowledge is still useful. On the other hand, I've lost count of the library and framework APIs that I've learned and since forgotten.

As things currently stand, it looks to me as though the net benefit of using a library like Entity Framework is negative. Yes, it might save me a few lines of code, but I'm not ready to pay the costs just outlined.

This balance could tip in the future, or my context may change.

Conclusion #

For the kind of applications that I tend to become involved with, I don't find object-relational mappers particularly useful. When you have a rich Domain Model where the first design priority is encapsulation, assisted by the type system, it looks as though mapping is unavoidable.

While you can ask automated tools to generate code that mirrors a database schema (or the other way around), only classes with poor encapsulation are supported. As soon as you do something out of the ordinary like static factory methods or nested objects, apparently Entity Framework gives up.

Can we extrapolate from Entity Framework to other ORMs? Can we infer that Entity Framework will never be able to support objects with proper encapsulation, just because it currently doesn't?

I can't say, but I'd be surprised if things change soon, if at all. If, on the other hand, it eventually turns out that I can have my cake and eat it too, then why shouldn't I?

Until then, however, I don't find that the benefits of ORMs trump the costs of using them.


Comments

Vlad #

One project I worked on was (among other things) mapping data from the database to rich domain objects in a way similar to what is described in this article. These objects knew how to do a lot of things but were dependent on related objects, so everything needed to be loaded in advance from the database in order to ensure correctness. So having an Order, OrderLine, Person, Address and City, all the rows needed to be loaded in advance, mapped to objects and references set to create the object graph to be able to, say, display shipping costs based on the person's address.

The mapping step involved cumbersome manual coding and was error-prone because it was easy to forget to load some list or set some reference. Reflecting on that experience, it seems to me that sacrificing a bit of purity with regard to domain modelling and leaning on an ORM to lazily load the references would have been much more efficient and correct.

But I guess it all depends on the context..?

2023-09-19 13:17 UTC
qfilip #

Thanks. I've been following recent posts, but I was too lazy to go through the whole PRing thing to reply. Maybe that's a good thing, since it forces you to think about how to reply, instead of throwing a bunch of words together quickly. Anyways, back to business.

I'm not trying to sell one way or the other, because I'm seriously conflicted about both. Since most people on the web tend to fall into the ORM category (in the .NET world at least), I was simply looking for another perspective, from someone more knowledgeable than me.

The following is just my thinking out loud...

You've used the DB-first approach, scaffolding classes from the DB schema. With EF Core, the usual thing to do is the opposite: write classes and scaffold the DB schema from them. Now, this doesn't save us from writing those "relational properties", but it allows us to generate DB update scripts. So if you have a class like:

class SomeTable
{
    public int Id;
    public string Name;
}
                    
and you add a field:
class SomeTable
{
    public int Id;
    public string Name;
    public DateTime Birthday;
}
                    
you can run
add-migration MyMigration   // generate migration file
update-database             // execute it
                    

This gives you a nice way to track DB changes via Git, but it can also introduce conflicts. Two devs cannot edit the same class/table. You have to be really careful when making commits. Another painful thing to do this way is creating DB views and stored procedures. I've honestly never seen a good solution for it. Maybe trying to do these things is a futile effort in the first place.

The whole

readByNameSql = @"SELECT [Id], [Name], [OpensAt], [LastSeating], [SeatingDuration]...
                    
is giving me heebie jeebies. It is easy to change some column name, and introduce a bug. It might be possible to do stuff with string interpolation, but at that point, I'm thinking about creating my own framework...

The most common reaction is most likely that I'm doing this all wrong. I'm supposed to use the EF classes as my Domain Model. - Mark Seemann
One of the first things that I was taught on my first job was to never expose my domain model to the outside world. The domain model being EF Core classes... These days, I'm thinking quite the opposite. EF Core classes are DTOs for the database (with some boilerplate in order for the framework to do its magic). I also want to expose my domain model to the outside world. Why not? That's the contract after all. But the problem with this is that it adds another layer of mapping. Since my domain model validation is done in the class constructor, deserialization of requests becomes a problem. Ideally, it should sit in a static method. But in that case I have: jsonDto -> domainModel -> dbDto. The no-ORM approach also still requires me to map domainModel to command parameters manually. All of this is a tedious and very error-prone process. Especially in a case like Vlad mentioned above.

Minor observation on your code: people rarely map from DB data to domain models when using EF Core, and skipping that mapping is a horrible thing to do. Anyone can run a script against a DB and corrupt the data. It is something I intend to enforce in future projects, if possible. Thank you F# community.

I can't think of anything more to say at the moment. Thanks again for a fully-fledged-article reply :). I also recommend this video. I haven't had the time to try things he is talking about yet.

2023-09-21 19:27 UTC

Vlad, qfilip, thank you for writing.

I think your comments warrant another article. I'll post an update here later.

2023-09-24 15:57 UTC
qfilip #

Quick update from me again. I've been thinking about and experimenting with several approaches to solve the issues I've written about above: how the idealized world works, and where we make compromises. Forgive me for the sample types; I couldn't think of anything else. Let's assume we have this table:

type Sheikh = {
    // db entity data
    Id: Guid
    CreatedAt: DateTime
    ModifiedAt: DateTime
    Deleted: bool

    // domain data
    Name: string
    Email: string // unique constraint here
    
    // relational data
    Wives: Wife list
    Supercars: Supercar list
}
                

I've named the first 3 fields "entity data". Why would my domain model contain an ID? It shouldn't care about persistence. I may save it to the DB, write it to a text file, or print it on a piece of paper. Don't care. We put IDs in because data usually ends up in a DB. I could have used Email here to serve as an ID, because it should be unique, but we also like to standardize this stuff. All IDs shall be uuids.

There are also these "CreatedAt", "ModifiedAt" and "Deleted" columns. This is something I usually do when I want soft-delete functionality. Denormalize the data to gain performance. Otherwise, I would need to make... say... an EntityStatus table to keep that data, forcing me to do a JOIN for every read operation and an additional UPDATE EntityStatus for every write operation. So I kinda sidestep "the good practices" to avoid very real complications.

Domain data part is what it is, so I can safely skip that part.

The relational data part is the most interesting bit. I think this is what keeps me falling back to Entity Framework and why "relational properties" are unavoidable. Either that, or I'm missing something.

Focusing attention on the Sheikh table here, with just 2 relations, there are 4 potential scenarios. I don't want to load stuff from the DB unless it's required, so the scenarios are:

  • Load Sheikh without relational data
  • Load Sheikh with Wives
  • Load Sheikh with Supercars
  • Load Sheikh with Wives and Supercars

2^NRelations, I guess? I'm three beers in on this, with only six hours left until the corporate clock starts ticking, so my math is probably off.

God forbid if any of these relations have their own "inner relations" you may or may not need to load. This is where the (magic mapping/to SQL translations) really start getting useful. There will be some code repetition, but you'll just need to add ThenInclude(x => ...) and you're done.

Now the flip side. Focusing attention on the Supercar table:

type Supercar = {
    // db entity data
    ...

    // domain data
    Vendor: string
    Model: string
    HorsePower: int
    
    // relational data
    Owner: Sheikh
    OwnerId: Guid
}
                

Pretty much the same as before. Sometimes I'll need Sheikh info, sometimes I won't. One of the F#-specific problems I'm having is that records require all fields to be populated. What if I need just SheikhID to perform some domain logic?

let tuneSheikhCars (sheikhId) (hpIncrement) (cars) =
    cars
    |> List.filter (fun x -> x.Owner.Id = sheikhId)
    |> List.map (fun x -> { x with HorsePower = x.HorsePower + hpIncrement })
                

The same goes for inserting a new Supercar. I want to query-check first if the Owner/Sheikh exists, before attempting insertion. You can pass it as a separate parameter, but the code gets messier and messier.

No matter how I twist and turn things around, in the real world I'm not only concerned with the current steps I need to take to complete a task, but also with possible future steps. Now, I could define a record that only contains the relevant data for each request. But, as seen above, I'd eventually be forced to make ~2^NRelations such records, instead of one reusable one that serves as a bucket for a preferred persistence mechanism, allowing me to load relations later on, because nothing lives in memory long term.

I strayed away slightly here from the ORM vs no-ORM discussion that I started earlier. Because now I realize that this problem isn't just about mapping things from type A to type B.

2023-10-08 23:24 UTC
opcoder #
I wonder if EF not having all the features we want isn't a false problem. I feel like we try to use the domain entities as DTOs and vice versa, breaking the SRP. But if we start writing DTOs and use them with EF, we would need a layer to map between the DTOs and the entities (AutoMapper might help with this?). I'm sure this has been discussed before.
2023-10-09 6:56 UTC
qfilip #
opcoder not really, no... Automapper(s) should only be used for mapping between two "dumb objects" (DTOs). I wouldn't drag in a library even for that, however, as it's relatively simple to write this thing yourself (with tests) and have full control / zero configuration when you come to the point of creating some special projections. As for storing domain models in proper OOP objects, with both data and behaviour, I don't like that either. The single reason for that is: constructors. This is where you pass the data to be validated into a domain model, and this is where OOP has a fatal flaw for me. Constructors can only throw exceptions, giving me no room to maneuver. You can use static methods with ValidationResult as a return type, but now we're entering territory for which C#, as a language, is totally unprepared.
Iker #

Just my two cents:

Yes, it is possible to map the NaturalNumber object to an E.F class property using ValueConverters. Here are a couple of articles talking about this:

But even though you can use this, you may still encounter other use cases that you cannot tackle. E.F is just a tool with its limitations, and there will be things you can do with simple C# that you cannot do with E.F.

I think you need to consider why you want to use E.F, understand its strengths and weaknesses, and then decide if it suits your project.

Do you want to use EF solely as a data access layer, or do you want it to be your domain layer? Maybe for a big project you can use E.F only as a data access layer and use plain old C# classes for the domain layer. In a [small | medium | quick & dirty] project, use it as your domain layer.

There are bad things we already know:

  • Increased complexity.
  • There will be things you cannot do. So you must be careful that you will not need something E.F cannot give you.
  • You need to know how it works. For example, know that accessing myRestaurant.MaitreD implies a new database access (if you have not loaded it previously).

But sometimes E.F shines, for example:

  • You are programming against the E.F model, not against a specific database, so it is easier to migrate to another database.
  • Maybe migrating to another database is rare, but it is very convenient to run tests against an in-memory SQLite database. Tests against a real database can be run in the CD/CI environment, for example.
  • Having a centralized point to process changes (SaveChanges) allows you to easily do interesting things: save "CreatedBy," "CreatedDate," "ModifiedBy," and "ModifiedDate" fields for all tables or create historical tables (if you do not have access to the SQL Server temporal tables).
  • Global query filters allow you to make your application multi-tenant with very little code: all tables implement IByClient, a global filter for these tables... and voilà, your application becomes multi-client with just a few lines.

I am not an E.F defender; in fact, I have a love & hate relationship with it. But I believe it is a powerful tool for certain projects. As always, the important thing is to think whether it is the right tool for your specific project :)

2023-10-15 16:43 UTC

Thank you, all, for writing. There's more content in your comments than I can address in one piece, but I've written a follow-up article that engages with some of your points: Domain Model first.

Specifically regarding the point of having to hand-write a lot of code to deal with multiple tables joined in various fashions, I grant that while typing isn't a bottleneck, the more code you add, the greater the risk of bugs. I'm not trying to be dismissive of ORMs as a general tool. If you truly, inescapably, have a relational model, then an ORM seems like a good choice. If so, however, I don't see that you can get good encapsulation at the same time.

And indeed, an important responsibility of a software architect is to consider trade-offs to find a good solution for a particular problem. Sometimes such a solution involves an ORM, but sometimes, it doesn't. In my world, it usually doesn't.

Do I breathe rarefied air, dealing with esoteric problems that mere mortals can never hope to encounter? I don't think so. Rather, I offer the interpretation that I sometimes approach problems in a different way. All I really try to do with these articles is to present to the public the ways I think about problems. I hope, then, that it may inspire other people to consider problems from more than one angle.

Finally, from my various customer engagements I get the impression that people also like ORMs because 'entity classes' look strongly typed. As a counter-argument, I suggest that this may be an illusion.

2023-10-23 06:45 UTC

A first stab at the Brainfuck kata

Monday, 11 September 2023 08:07:00 UTC

I almost gave up, but persevered and managed to produce something that works.

As I've previously mentioned, a customer hired me to swing by to demonstrate test-driven development and tactical Git. To make things interesting, we agreed that they'd give me a kata at the beginning of the session. I didn't know which problem they'd give me, so I thought it'd be a good idea to come prepared. I decided to seek out katas that I hadn't done before.

The demonstration session was supposed to be two hours in front of a participating audience. In order to make my preparation aligned to that situation, I decided to impose a two-hour time limit to see how far I could get. At the same time, I'd also keep an eye on didactics, so preferably proceeding in an order that would be explainable to an audience.

Some katas are more complicated than others, so I'm under no illusion that I can complete any, to me unknown, kata in under two hours. My success criterion for the time limit is that I'd like to reach a point that would satisfy an audience. Even if, after two hours, I don't reach a complete solution, I should leave a creative and intelligent audience with a good idea of how to proceed.

After a few other katas, I ran into the Brainfuck kata one Thursday. In this article, I'll describe some of the most interesting things that happened along the way. If you want all the details, the code is available on GitHub.

Understanding the problem #

I had heard about Brainfuck before, but never tried to write an interpreter (or a program, for that matter).

The kata description lacks examples, so I decided to search for them elsewhere. The wikipedia article comes with some examples of small programs (including Hello, World), so ultimately I used that for reference instead of the kata page.

I'm happy I wasn't making my first pass on this problem in front of an audience. I spent the first 45 minutes just trying to understand the examples.

You might find me slow, since the rules of the language aren't that complicated. I was, however, confused by the way the examples were presented.

As the wikipedia article explains, in order to add two numbers together, one can use this idiom:

[->+<]

The article then proceeds to list a small, complete program that adds two numbers. This program adds numbers this way:

[        Start your loops with your cell pointer on the loop counter (c1 in our case)
< +      Add 1 to c0
> -      Subtract 1 from c1
]        End your loops with the cell pointer on the loop counter

I couldn't understand why this annotated 'walkthrough' explained the idiom in reverse. Several times, I was on the verge of giving up, feeling that I made absolutely no progress. Finally, it dawned on me that the second example is not an explanation of the first example, but rather a separate example that makes use of the same idea, but expresses it in a different way.

Most programming languages have more than one way to do things, and this is also the case here. [->+<] adds two numbers together, but so does [<+>-].

Once you understand something, it can be difficult to recall why you ever found it confusing. Now that I get this, I'm having trouble explaining what I was originally thinking, and why it confused me.

This experience does, however, drive home a point for educators: When you introduce a concept and then provide examples, the first example should be a direct continuation of the introduction, and not some variation. Variations are fine, too, but should follow later and be clearly labelled.

After 45 minutes I had finally cracked the code and was ready to get programming.

Getting started #

The kata description suggests starting with the +, -, >, and < instructions to manage memory. I briefly considered that, but on the other hand, I wanted to have some test coverage. Usually, I take advantage of test-driven development, and while I wasn't sure how to proceed, I wanted to have some tests.

If I were to exclusively start with memory management, I would need some way to inspect the memory in order to write assertions. This struck me as violating encapsulation.

Instead, I thought that I'd write the simplest program that would produce some output, because if I had output, I would have something to verify.

That, on the other hand, meant that I had to consider how to model input and output. The Wikipedia article describes these as

"two streams of bytes for input and output (most often connected to a keyboard and a monitor respectively, and using the ASCII character encoding)."

Knowing that you can model the console's input and output streams as polymorphic objects, I decided to model the output as a TextWriter. The lowest-valued printable ASCII character is space, which has the byte value 32, so I wrote this test:

[Theory]
[InlineData("++++++++++++++++++++++++++++++++."" ")] // 32 increments; ASCII 32 is space
public void Run(string programstring expected)
{
    using var output = new StringWriter();
    var sut = new BrainfuckInterpreter(output);
 
    sut.Run(program);
    var actual = output.ToString();
 
    Assert.Equal(expected, actual);
}

As you can see, I wrote the test as a [Theory] (parametrised test) from the outset, since I predicted that I'd add more test cases soon. Strictly speaking, when following the red-green-refactor checklist, you shouldn't write more code than absolutely necessary. According to YAGNI, you should avoid speculative generality.

Sometimes, however, you've gone through a process so many times that you know, with near certainty, what happens next. I've done test-driven development for decades, so I occasionally allow my experience to trump the rules.

The Brainfuck program in the [InlineData] attribute increments the same data cell 32 times (you can count the plusses) and then outputs its value. The expected output is the space character, since it has the ASCII code 32.

What's the simplest thing that could possibly work? Something like this:

public sealed class BrainfuckInterpreter
{
    private readonly StringWriter output;
 
    public BrainfuckInterpreter(StringWriter output)
    {
        this.output = output;
    }
 
    public void Run(string program)
    {
        output.Write(' ');
    }
}

As is typical with test-driven development (TDD), the first few tests help you design the API, but not the implementation, which, here, is deliberately naive.

Since I felt pressed for time, having already spent 45 minutes of my two-hour time limit getting to grips with the problem, I suppose I lingered less on the refactoring phase than perhaps I should have. You'll notice, at least, that the BrainfuckInterpreter class depends on StringWriter rather than its abstract parent class TextWriter, which was the original plan.

It's not a disastrous mistake, so when I later discovered it, I easily rectified it.

Implementation outline #

To move on, I added another test case:

[Theory]
[InlineData("++++++++++++++++++++++++++++++++."" ")] // 32 increments; ASCII 32 is space
[InlineData("+++++++++++++++++++++++++++++++++.""!")] // 33 increments; ASCII 32 is !
public void Run(string programstring expected)
{
    using var output = new StringWriter();
    var sut = new BrainfuckInterpreter(output);
 
    sut.Run(program);
    var actual = output.ToString();
 
    Assert.Equal(expected, actual);
}

The only change is the addition of the second [InlineData] attribute, which supplies a slightly different Brainfuck program. This one has 33 increments, which corresponds to the ASCII character code for an exclamation mark.

Notice that I clearly copied and pasted the comment, but forgot to change the last 32 to 33.

In my eagerness to pass both tests, and because I felt the clock ticking, I made another classic TDD mistake: I took too big a step. At this point, it would have been enough to iterate over the program's characters, count the number of plusses, and convert that number to a character. What I did instead was this:

public sealed class BrainfuckInterpreter
{
    private readonly StringWriter output;
 
    public BrainfuckInterpreter(StringWriter output)
    {
        this.output = output;
    }
 
    public void Run(string program)
    {
        var imp = new InterpreterImp(program, output);
        imp.Run();
    }
 
    private sealed class InterpreterImp
    {
        private int programPointer;
        private readonly byte[] data;
        private readonly string program;
        private readonly StringWriter output;
 
        internal InterpreterImp(string program, StringWriter output)
        {
            data = new byte[30_000];
            this.program = program;
            this.output = output;
        }
 
        internal void Run()
        {
            while (!IsDone)
                InterpretInstruction();
        }
 
        private bool IsDone => program.Length <= programPointer;
 
        private void InterpretInstruction()
        {
            var instruction = program[programPointer];
            switch (instruction)
            {
                case '+':
                    data[0]++;
                    programPointer++;
                    break;
                case '.':
                    output.Write((char)data[0]);
                    programPointer++;
                    break;
                default:
                    programPointer++;
                    break;
            }
        }
    }
}

With only two test cases, all that code isn't warranted, but I was more focused on implementing an interpreter than on moving in small steps. Even with decades of TDD experience, discipline sometimes slips. Or maybe exactly because of it.

Once again, I was fortunate enough that this implementation structure turned out to work all the way, but the point of the TDD process is that you can't always know that.
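
For the record, the smaller step mentioned above could have looked something like this sketch; it's not code I actually wrote, but it would have passed both tests:

public void Run(string program)
{
    // Count the '+' instructions and write the corresponding ASCII character.
    var count = 0;
    foreach (var instruction in program)
        if (instruction == '+')
            count++;
    output.Write((char)count);
}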

You may wonder why I decided to delegate the work to an inner class. I did that because I expected to have to maintain a programPointer over the actual program, and having a class that interprets one program has better encapsulation. I'll remind the reader that when I use the word encapsulation, I don't necessarily mean information hiding. Usually, I think in terms of contracts: Invariants, pre-, and postconditions.

With this design, the program is guaranteed to be present as a class field, since it's readonly and assigned upon initialisation. No defensive coding is required.

Remaining memory-management instructions #

While I wasn't planning on making use of the Devil's advocate technique, I did leave one little deliberate mistake in the above implementation: I'd hardcoded the data pointer as 0.

This made it easy to choose the next test case, and the next one after that, and so on.

At the two-hour mark, I had these test cases:

[Theory]
[InlineData("++++++++++++++++++++++++++++++++."" ")] // 32 increments; ASCII 32 is space
[InlineData("+++++++++++++++++++++++++++++++++.""!")] // 33 increments; ASCII 32 is !
[InlineData("+>++++++++++++++++++++++++++++++++."" ")] // 32 increments after >; ASCII 32 is space
[InlineData("+++++++++++++++++++++++++++++++++-."" ")] // 33 increments and 1 decrement; ASCII 32
[InlineData(">+<++++++++++++++++++++++++++++++++."" ")] // 32 increments after movement; ASCII 32
public void Run(string programstring expected)
{
    using var output = new StringWriter();
    var sut = new BrainfuckInterpreter(output);
 
    sut.Run(program);
    var actual = output.ToString();
 
    Assert.Equal(expected, actual);
}

And this implementation:

private sealed class InterpreterImp
{
    private int programPointer;
    private int dataPointer;
    private readonly byte[] data;
    private readonly string program;
    private readonly StringWriter output;
 
    internal InterpreterImp(string program, StringWriter output)
    {
        data = new byte[30_000];
        this.program = program;
        this.output = output;
    }
 
    internal void Run()
    {
        while (!IsDone)
            InterpretInstruction();
    }
 
    private bool IsDone => program.Length <= programPointer;
 
    private void InterpretInstruction()
    {
        var instruction = program[programPointer];
        switch (instruction)
        {
            case '>':
                dataPointer++;
                programPointer++;
                break;
            case '<':
                dataPointer--;
                programPointer++;
                break;
            case '+':
                data[dataPointer]++;
                programPointer++;
                break;
            case '-':
                data[dataPointer]--;
                programPointer++;
                break;
            case '.':
                output.Write((char)data[dataPointer]);
                programPointer++;
                break;
            default:
                programPointer++;
                break;
        }
    }
}

I'm only showing the inner InterpreterImp class, since I didn't change the outer BrainfuckInterpreter class.

At this point, I had used my two hours, but I think that I managed to leave my imaginary audience with a sketch of a possible solution.

Jumps #

What remained was the jumping instructions [ and ], as well as input.

Perhaps I could have kept adding small [InlineData] test cases to my single test method, but I thought I was ready to take on some of the small example programs on the Wikipedia page. I started with the addition example in this manner:

    // Copied from https://en.wikipedia.org/wiki/Brainfuck
    const string addTwoProgram = @"
++       Cell c0 = 2
> +++++  Cell c1 = 5
 
[        Start your loops with your cell pointer on the loop counter (c1 in our case)
< +      Add 1 to c0
> -      Subtract 1 from c1
]        End your loops with the cell pointer on the loop counter
 
At this point our program has added 5 to 2 leaving 7 in c0 and 0 in c1
but we cannot output this value to the terminal since it is not ASCII encoded
 
To display the ASCII character ""7"" we must add 48 to the value 7
We use a loop to compute 48 = 6 * 8
 
++++ ++++  c1 = 8 and this will be our loop counter again
[
< +++ +++  Add 6 to c0
> -        Subtract 1 from c1
]
< .        Print out c0 which has the value 55 which translates to ""7""!";
 
    [Fact]
    public void AddTwoValues()
    {
        using var input = new StringReader("");
        using var output = new StringWriter();
        var sut = new BrainfuckInterpreter(input, output);
 
        sut.Run(addTwoProgram);
        var actual = output.ToString();
 
        Assert.Equal("7", actual);
    }

I got that test passing, added the next example, got that passing, and so on. My final implementation looks like this:

public sealed class BrainfuckInterpreter
{
    private readonly TextReader input;
    private readonly TextWriter output;
 
    public BrainfuckInterpreter(TextReader input, TextWriter output)
    {
        this.input = input;
        this.output = output;
    }
 
    public void Run(string program)
    {
        var imp = new InterpreterImp(program, input, output);
        imp.Run();
    }
 
    private sealed class InterpreterImp
    {
        private int instructionPointer;
        private int dataPointer;
        private readonly byte[] data;
        private readonly string program;
        private readonly TextReader input;
        private readonly TextWriter output;
 
        internal InterpreterImp(string program, TextReader input, TextWriter output)
        {
            data = new byte[30_000];
            this.program = program;
            this.input = input;
            this.output = output;
        }
 
        internal void Run()
        {
            while (!IsDone)
                InterpretInstruction();
        }
 
        private bool IsDone => program.Length <= instructionPointer;
 
        private void InterpretInstruction()
        {
            WrapDataPointer();
 
            var instruction = program[instructionPointer];
            switch (instruction)
            {
                case '>':
                    dataPointer++;
                    instructionPointer++;
                    break;
                case '<':
                    dataPointer--;
                    instructionPointer++;
                    break;
                case '+':
                    data[dataPointer]++;
                    instructionPointer++;
                    break;
                case '-':
                    data[dataPointer]--;
                    instructionPointer++;
                    break;
                case '.':
                    output.Write((char)data[dataPointer]);
                    instructionPointer++;
                    break;
                case ',':
                    data[dataPointer] = (byte)input.Read();
                    instructionPointer++;
                    break;
                case '[':
                    if (data[dataPointer] == 0)
                        MoveToMatchingClose();
                    else
                        instructionPointer++;
                    break;
                case ']':
                    if (data[dataPointer] != 0)
                        MoveToMatchingOpen();
                    else
                        instructionPointer++;
                    break;
                default:
                    instructionPointer++;
                    break;
            }
        }
 
        private void WrapDataPointer()
        {
            if (dataPointer == -1)
                dataPointer = data.Length - 1;
            if (dataPointer == data.Length)
                dataPointer = 0;
        }
 
        private void MoveToMatchingClose()
        {
            var nestingLevel = 1;
            while (0 < nestingLevel)
            {
                instructionPointer++;
                if (program[instructionPointer] == '[')
                    nestingLevel++;
                if (program[instructionPointer] == ']')
                    nestingLevel--;
            }
            instructionPointer++;
        }
 
        private void MoveToMatchingOpen()
        {
            var nestingLevel = 1;
            while (0 < nestingLevel)
            {
                instructionPointer--;
                if (program[instructionPointer] == ']')
                    nestingLevel++;
                if (program[instructionPointer] == '[')
                    nestingLevel--;
            }
            instructionPointer++;
        }
    }
}

As you can see, I finally discovered that I'd been too concrete when using StringWriter. Now, input is defined as a TextReader, and output as a TextWriter.

When TextReader.Read encounters the end of the input stream, it returns -1, and when you cast that to byte, it becomes 255. I admit that I haven't read through the Wikipedia article's ROT13 example code to a degree that I understand how it decides to stop processing, but the test passes.

I also realised that the Wikipedia article used the term instruction pointer, so I renamed programPointer to instructionPointer.

Assessment #

Due to the switch/case structure, the InterpretInstruction method has a cyclomatic complexity of 12, which is more than I recommend in my book Code That Fits in Your Head.

It's not uncommon that switch/case code has high cyclomatic complexity, and this is also a common criticism of the measure. When each case block is as simple as it is here, or delegates to helper methods such as MoveToMatchingClose, you could reasonably argue that the code is still maintainable.

Refactoring lists switch statements as a code smell and suggests better alternatives. Had I followed the kata description's additional constraints to the letter, I should also have made it easy to add new instructions, or to rename existing ones. That suggests that one of Martin Fowler's refactorings might be in order.
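
For illustration, here's a minimal sketch of what such a refactoring could look like: a table that maps each instruction character to a handler. This isn't code I wrote as part of the kata; it only indicates the direction such a change might take.

// A sketch only: these members would go inside InterpreterImp, replacing the
// switch statement. Each handler is responsible for its own pointer bookkeeping.
private readonly IReadOnlyDictionary<char, Action> handlers;
 
internal InterpreterImp(string program, TextReader input, TextWriter output)
{
    data = new byte[30_000];
    this.program = program;
    this.input = input;
    this.output = output;
    handlers = new Dictionary<char, Action>
    {
        { '>', () => { dataPointer++; instructionPointer++; } },
        { '<', () => { dataPointer--; instructionPointer++; } },
        { '+', () => { data[dataPointer]++; instructionPointer++; } },
        { '-', () => { data[dataPointer]--; instructionPointer++; } },
        { '.', () => { output.Write((char)data[dataPointer]); instructionPointer++; } },
        { ',', () => { data[dataPointer] = (byte)input.Read(); instructionPointer++; } },
        { '[', () => { if (data[dataPointer] == 0) MoveToMatchingClose(); else instructionPointer++; } },
        { ']', () => { if (data[dataPointer] != 0) MoveToMatchingOpen(); else instructionPointer++; } }
    };
}
 
private void InterpretInstruction()
{
    WrapDataPointer();
    if (handlers.TryGetValue(program[instructionPointer], out var handle))
        handle();
    else
        instructionPointer++; // Any other character is a comment; skip it.
}

With a table like that, adding or renaming an instruction becomes a one-line change.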

That is, however, an entirely different kind of exercise, and I thought that I'd already gotten what I wanted out of the kata.

Conclusion #

At first glance, the Brainfuck language isn't difficult to understand (but onerous to read). Even so, it took me so long to understand the example code that I almost gave up more than once. Still, once I understood how it worked, the interpreter actually wasn't that hard to write.

In retrospect, perhaps I should have structured my code differently. Perhaps I should have used polymorphism instead of a switch statement. Perhaps I should have written the code in a more functional style. Regular readers will at least recognise that the code shown here is uncharacteristically imperative for me. I do, however, try to vary my approach to fit the problem at hand (use the right tool for the job, as the old saw goes), and the Brainfuck language is described in such imperative terms that imperative code seemed like the most fitting style.

Now that I understand how Brainfuck works, I might later try to do the kata with some other constraints. It might prove interesting.


Decomposing CTFiYH's sample code base

Monday, 04 September 2023 06:00:00 UTC

An experience report.

In my book Code That Fits in Your Head (CTFiYH) I write in the last chapter:

If you've looked at the book's sample code base, you may have noticed that it looks disconcertingly monolithic. If you consider the full code base that includes the integration tests, as [the following figure] illustrates, there are all of three packages[...]. Of those, only one is production code.

Three boxes labelled unit tests, integration tests, and REST API.

[Figure caption:] The packages that make up the sample code base. With only a single production package, it reeks of a monolith.

Later, after discussing dependency cycles, I conclude:

I've been writing F# and Haskell for enough years that I naturally follow the beneficial rules that they enforce. I'm confident that the sample code is nicely decoupled, even though it's packaged as a monolith. But unless you have a similar experience, I recommend that you separate your code base into multiple packages.

Usually, you can't write something that cocksure without it backfiring, but I was really, really confident that this was the case. Still, it's always nagged me, because I believe that I should walk the walk rather than talk the talk. And I do admit that this was one of the few claims in the book that I actually didn't have code to back up.

So I decided to spend part of a weekend to verify that what I wrote was true. You won't believe what happened next.

Decomposition #

Reader, I was right all along.

I stopped my experiment when my package graph looked like this:

Ports-and-adapters architecture diagram.

Does that look familiar? It should; it's a poster example of Ports and Adapters, or, if you will, Clean Architecture. Notice how all dependencies point inward, following the Dependency Inversion Principle.

The Domain Model has no dependencies, while the HTTP Model (Application Layer) only depends on the Domain Model. The outer layer contains the ports and adapters, as well as the Composition Root. The Web Host is a small web project that composes everything else. In order to do that, it must reference everything else, either directly (SMTP, SQL, HTTP Model) or transitively (Domain Model).

The Adapters, on the other hand, depend on the HTTP Model and (not shown) the SDKs that they adapt. The SqlReservationsRepository class, for example, is implemented in the SQL library, adapting the System.Data.SqlClient SDK to look like an IReservationsRepository, which is defined in the HTTP Model.

The SMTP library is similar. It contains a concrete implementation called SmtpPostOffice that adapts the System.Net.Mail API to look like an IPostOffice object. Once again, the IPostOffice interface is defined in the HTTP Model.

The above figure is not to scale. In reality, the outer ring is quite thin. The SQL library contains only SqlReservationsRepository and some supporting text files with SQL DDL definitions. The SMTP library contains only the SmtpPostOffice class. And the Web Host contains Program, Startup, and a few configuration file DTOs (options, in ASP.NET parlance).

Application layer #

The majority of code, at least if I use a rough proxy measure like number of files, is in the HTTP Model. I often think of this as the application layer, because it's all the logic that's specific to the application, in contrast to the Domain Model, which ought to contain code that can be used in a variety of application contexts (REST API, web site, batch job, etc.).

In this particular case, the application is a REST API, and it turns out that while the Domain Model isn't trivial, more code goes into making sure that the REST API behaves correctly: That it returns correctly formatted data, that it validates input, that it detects attempts at tampering with URLs, that it redirects legacy URLs, etc.

This layer also contains the interfaces for the application's real dependencies: IReservationsRepository, IPostOffice, IRestaurantDatabase, and IClock. This explains why the SQL and SMTP packages need to reference the HTTP Model.

If you have bought the book, you have access to its example code base, and while it's a Git repository, this particular experiment isn't included. After all, I just made it, two years after finishing the book. Thus, if you want to compare with the code that comes with the book, here's a list of all the files I moved to the HTTP Model package:

  • AccessControlList.cs
  • CalendarController.cs
  • CalendarDto.cs
  • Day.cs
  • DayDto.cs
  • DtoConversions.cs
  • EmailingReservationsRepository.cs
  • Grandfather.cs
  • HomeController.cs
  • HomeDto.cs
  • Hypertext.cs
  • IClock.cs
  • InMemoryRestaurantDatabase.cs
  • IPeriod.cs
  • IPeriodVisitor.cs
  • IPostOffice.cs
  • IReservationsRepository.cs
  • IRestaurantDatabase.cs
  • Iso8601.cs
  • LinkDto.cs
  • LinksFilter.cs
  • LoggingClock.cs
  • LoggingPostOffice.cs
  • LoggingReservationsRepository.cs
  • Month.cs
  • NullPostOffice.cs
  • Period.cs
  • ReservationDto.cs
  • ReservationsController.cs
  • ReservationsRepository.cs
  • RestaurantDto.cs
  • RestaurantsController.cs
  • ScheduleController.cs
  • SigningUrlHelper.cs
  • SigningUrlHelperFactory.cs
  • SystemClock.cs
  • TimeDto.cs
  • UrlBuilder.cs
  • UrlIntegrityFilter.cs
  • Year.cs

As you can see, this package contains the Controllers, the DTOs, the interfaces, and some REST- and HTTP-specific code such as SigningUrlHelper, UrlIntegrityFilter, LinksFilter, security, ISO 8601 formatters, etc.

Domain Model #

The Domain Model is small, but not insignificant. Perhaps the most striking quality of it is that (with a single, inconsequential exception) it contains no interfaces. There are no polymorphic types that model application dependencies such as databases, web services, messaging systems, or the system clock. Those are all the purview of the application layer.

As the book describes, the architecture is functional core, imperative shell, and since dependencies make everything impure, you can't have those in your functional core. While, with a language like C#, you can never be sure that a function truly is pure, I believe that the entire Domain Model is referentially transparent.

For those readers who have the book's sample code base, here's a list of the files I moved to the Domain Model:

  • Email.cs
  • ITableVisitor.cs
  • MaitreD.cs
  • Name.cs
  • Reservation.cs
  • ReservationsVisitor.cs
  • Restaurant.cs
  • Seating.cs
  • Table.cs
  • TimeOfDay.cs
  • TimeSlot.cs

If the entire Domain Model consists of immutable values and pure functions, and if impure dependencies make everything impure, what's the ITableVisitor interface doing there?

This interface doesn't model any external application dependency, but rather represents a sum type with the Visitor pattern. The interface looks like this:

public interface ITableVisitor<T>
{
    T VisitStandard(int seats, Reservation? reservation);
    T VisitCommunal(int seats, IReadOnlyCollection<Reservation> reservations);
}

Restaurant tables are modelled this way because the Domain Model distinguishes between two fundamentally different kinds of tables: Normal restaurant tables, and communal or shared tables. In F# or Haskell such a sum type would be a one-liner, but in C# you need to use either the Visitor pattern or a Church encoding. For the book, I chose the Visitor pattern in order to keep the code base as object-oriented as possible.
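
To illustrate how client code consumes such a sum type, here's a hypothetical visitor (it's not part of the book's code base) that computes how many seats are left at a table. It assumes that Reservation exposes a Quantity property, as the sample code's reservations do, and that Sum from System.Linq is in scope.

public sealed class RemainingSeatsVisitor : ITableVisitor<int>
{
    public int VisitStandard(int seats, Reservation? reservation)
    {
        // A standard table is either entirely free or entirely taken.
        return reservation is null ? seats : 0;
    }
 
    public int VisitCommunal(int seats, IReadOnlyCollection<Reservation> reservations)
    {
        // Guests share a communal table, so subtract every reserved seat.
        return seats - reservations.Sum(r => r.Quantity);
    }
}

Because ITableVisitor<T> forces you to handle both cases, the compiler gives you some of the exhaustiveness guarantee that a pattern match over a sum type gives you in F# or Haskell.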

Circular dependencies #

In the book I wrote:

The passive prevention of cycles [that comes from separate packages] is worth the extra complexity. Unless team members have extensive experience with a language that prevents cycles, I recommend this style of architecture.

Such languages do exist, though. F# famously prevents cycles. In it, you can't use a piece of code unless it's already defined above. Newcomers to the language see this as a terrible flaw, but it's actually one of its best features.

Haskell takes a different approach, but ultimately, its explicit treatment of side effects at the type level steers you towards a ports-and-adapters-style architecture. Your code simply doesn't compile otherwise!

I've been writing F# and Haskell for enough years that I naturally follow the beneficial rules that they enforce. I'm confident that the sample code is nicely decoupled, even though it's packaged as a monolith. But unless you have a similar experience, I recommend that you separate your code base into multiple packages.

As so far demonstrated in this article, I'm now sufficiently conditioned to be aware of side effects and non-determinism that I know to avoid them and push them to the boundaries of the application. Even so, it turns out that it's insidiously easy to introduce small cycles when the language doesn't stop you.

This wasn't much of a problem in the Domain Model, but one small example may still illustrate how easy it is to let your guard down. In the Domain Model, I'd added a class called TimeOfDay (since this code base predates TimeOnly), but without thinking much of it, I'd added this method:

public string ToIso8601TimeString()
{
    return durationSinceMidnight.ToIso8601TimeString();
}

While this doesn't look like much, formatting a time value as an ISO 8601 value isn't a Domain concern. It's an application boundary concern, so it belongs in the HTTP Model. And sure enough, once I moved the file that contained the ISO 8601 conversions to the HTTP Model, the TimeOfDay class no longer compiled.

In this case, the fix was easy. I removed the method from the TimeOfDay class, but added an extension method to the other ISO 8601 conversions:

public static string ToIso8601TimeString(this TimeOfDay timeOfDay)
{
    return ((TimeSpan)timeOfDay).ToIso8601TimeString();
}

While I had little trouble moving the files to the Domain Model one-by-one, once I started moving files to the HTTP Model, it turned out that this part of the code base contained more coupling.

Since I had made many classes and interfaces internal by default, once I started moving types to the HTTP Model, I was faced with either making them public or moving them en bloc. Ultimately, I decided to move eighteen files that were transitively linked to each other in one go. I could have moved them in smaller chunks, but that would have made the internal types invisible to the code that (temporarily) stayed behind. After all, while I prefer to move in small, controlled steps, even moving eighteen files isn't that big an operation.

In the end, I still had to make LinksFilter, UrlIntegrityFilter, SigningUrlHelperFactory, and AccessControlList.FromUser public, because I needed to reference them from the Web Host.

Test dependencies #

You may have noticed that in the above diagram, it doesn't look as though I separated the two test packages into more packages, and that is, indeed, the case. I've recently described how I think about distinguishing kinds of tests, and I don't really care too much whether an automated test exercises only a single function, or a whole bundle of objects. What I do care about is whether a test is simple or complex, fast or slow. That kind of thing.

The package labelled "Integration tests" on the diagram is really a small test library that exercises some SQL Server-specific behaviour. Some of the tests in it verify that certain race conditions don't occur. They do that by repeatedly trying to provoke the race condition until they time out. Since the timeout is 30 seconds per test, this test suite is slow. That's the reason it's a separate library, even though it contains only eight tests. The book contains more details.

The "Unit tests" package contains the bulk of the tests: 176 tests, some of which are FsCheck properties that each run a hundred test cases.

Some tests are self-hosted integration tests that rely on the Web Host, and some of them are more 'traditional' unit tests. Dependencies are transitive, so I drew an arrow from the "Unit tests" package to the Web Host. Some unit tests exercise objects in the HTTP Model, and some exercise the Domain Model.

You may have another question: If the Integration tests reference the SQL package, then why not the SMTP package? Why is it okay that the unit tests reference the SMTP package?

Again, I want to reiterate that the reason I distinguished between these two test packages was related to execution speed rather than architecture. The few SMTP tests are fast enough, so there's no reason to keep them in a separate package.

In fact, the SMTP tests don't exercise that the SmtpPostOffice can send email. Rather, I treat that class as a Humble Object. The few tests that I do have only verify that the system can parse configuration settings:

[Theory]
[InlineData("m.example.net", 587, "grault", "garply", "g@example.org")]
[InlineData("n.example.net", 465, "corge", "waldo", "c@example.org")]
public void ToPostOfficeReturnsSmtpOffice(
    string host,
    int port,
    string userName,
    string password,
    string from)
{
    var sut = Create.SmtpOptions(host, port, userName, password, from);
 
    var db = new InMemoryRestaurantDatabase();
    var actual = sut.ToPostOffice(db);
 
    var expected = new SmtpPostOffice(host, port, userName, password, from, db);
    Assert.Equal(expected, actual);
}

Notice, by the way, the use of structural equality on a service. Consider doing that more often.
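
If you're wondering what that looks like, here's a sketch of the kind of members involved. The field names mirror the constructor arguments shown above; the actual SmtpPostOffice class may differ in the details.

public override bool Equals(object? obj)
{
    return obj is SmtpPostOffice other &&
        host == other.host &&
        port == other.port &&
        userName == other.userName &&
        password == other.password &&
        from == other.from &&
        Equals(database, other.database);
}
 
public override int GetHashCode()
{
    return HashCode.Combine(host, port, userName, password, from, database);
}

Value-based equality like that is what makes the single Assert.Equal statement above possible.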

In any case, the separation of automated tests into two packages may not be the final iteration. It's worked well so far, in this context, but it's possible that had things been different, I would have chosen to have more test packages.

Conclusion #

In the book, I made a bold claim: Although I had developed the example code as a monolith, I asserted that I'd been so careful that I could easily tease it apart into multiple packages should I choose to do so.

This sounded so much like hubris that I was trepidatious writing it. I wrote it anyway, because, while I hadn't tried, I was convinced that I was right.

Still, it nagged me ever since. What if, after all, I was wrong? I've been wrong before.

So I decided to finally make the experiment, and to my relief it turned out that I'd been right.

Don't try this at home, kids.


A first crack at the Args kata

Monday, 28 August 2023 07:28:00 UTC

Test-driven development in C#.

A customer hired me to swing by to demonstrate test-driven development and tactical Git. To make things interesting, we agreed that they'd give me a kata at the beginning of the session. I didn't know which problem they'd give me, so I thought it'd be a good idea to come prepared. I decided to seek out katas that I hadn't done before.

The demonstration session was supposed to be two hours in front of a participating audience. In order to make my preparation aligned to that situation, I decided to impose a two-hour time limit to see how far I could get. At the same time, I'd also keep an eye on didactics, so preferably proceeding in an order that would be explainable to an audience.

Some katas are more complicated than others, so I'm under no illusion that I can complete an arbitrary, unfamiliar kata in under two hours. My success criterion for the time limit is that I'd like to reach a point that would satisfy an audience. Even if, after two hours, I don't reach a complete solution, I should leave a creative and intelligent audience with a good idea of how to proceed.

The first kata I decided to try was the Args kata. In this article, I'll describe some of the most interesting things that happened along the way. If you want all the details, the code is available on GitHub.

Boolean parser #

In short, the goal of the Args kata is to develop an API for parsing command-line arguments.

When you encounter a new problem, it's common to have a few false starts until you develop a promising plan. This happened to me as well, but after a few attempts that I quickly stashed, I realised that this is really a validation problem - as in parse, don't validate.

The first thing I did after that realisation was to copy verbatim the Validated code from An applicative reservation validation example in C#. I consider it fair game to reuse general-purpose code like this for a kata.
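
If you haven't read that article, the following condensed sketch indicates the shape of the type as it's used in the rest of this article. It's not the verbatim code; among other differences, the real class also implements structural equality, which the test assertions below rely on.

public sealed class Validated<F, S>
{
    private readonly F failure;
    private readonly S success;
    private readonly bool isSuccess;
 
    private Validated(F failure, S success, bool isSuccess)
    {
        this.failure = failure;
        this.success = success;
        this.isSuccess = isSuccess;
    }
 
    internal static Validated<F, S> Succeed(S success) =>
        new Validated<F, S>(default!, success, true);
 
    internal static Validated<F, S> Fail(F failure) =>
        new Validated<F, S>(failure, default!, false);
 
    // Collapse the two cases to a single value.
    public T Match<T>(Func<F, T> onFailure, Func<S, T> onSuccess) =>
        isSuccess ? onSuccess(success) : onFailure(failure);
 
    // Map the failure value while leaving a success untouched.
    public Validated<F1, S> SelectFailure<F1>(Func<F, F1> selector) =>
        Match(
            onFailure: f => Validated.Fail<F1, S>(selector(f)),
            onSuccess: s => Validated.Succeed<F1, S>(s));
}
 
public static class Validated
{
    public static Validated<F, S> Succeed<F, S>(S success) =>
        Validated<F, S>.Succeed(success);
 
    public static Validated<F, S> Fail<F, S>(F failure) =>
        Validated<F, S>.Fail(failure);
}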

With that basic building block available, I decided to start with a parser that would handle Boolean flags. My reasoning was that this might be the simplest parser, since it doesn't have many failure modes. If the flag is present, the value should be interpreted to be true; otherwise, false.

Over a series of iterations, I developed this parametrised xUnit.net test:

[Theory]
[InlineData('l', "-l", true)]
[InlineData('l', " -l ", true)]
[InlineData('l', "-l -p 8080 -d /usr/logs", true)]
[InlineData('l', "-p 8080 -l -d /usr/logs", true)]
[InlineData('l', "-p 8080 -d /usr/logs", false)]
[InlineData('l', "-l true", true)]
[InlineData('l', "-l false", false)]
[InlineData('l', "nonsense", false)]
[InlineData('f', "-f", true)]
[InlineData('f', "foo", false)]
[InlineData('f', "", false)]
public void TryParseSuccess(char flagName, string candidate, bool expected)
{
    var sut = new BoolParser(flagName);
    var actual = sut.TryParse(candidate);
    Assert.Equal(Validated.Succeed<string, bool>(expected), actual);
}

To be clear, this test started as a [Fact] (single, non-parametrised test) that I subsequently converted to a parametrised test, and then added more and more test cases to.

The final implementation of BoolParser looks like this:

public sealed class BoolParser : IParser<bool>
{
    private readonly char flagName;
 
    public BoolParser(char flagName)
    {
        this.flagName = flagName;
    }
 
    public Validated<string, bool> TryParse(string candidate)
    {
        var idx = candidate.IndexOf($"-{flagName}");
        if (idx < 0)
            return Validated.Succeed<string, bool>(false);
 
        var nextFlagIdx = candidate[(idx + 2)..].IndexOf('-');
        var bFlag = nextFlagIdx < 0
            ? candidate[(idx + 2)..]
            : candidate.Substring(idx + 2, nextFlagIdx);
        if (bool.TryParse(bFlag, out var b))
            return Validated.Succeed<string, bool>(b);
 
        return Validated.Succeed<string, bool>(true);
    }
}

This may not be the most elegant solution, but it passes all tests. Since I was under time pressure, I didn't want to spend too much time polishing the implementation details. As long as I'm comfortable with the API design and the test cases, I can always refactor later. (I usually say that later is never, which also turned out to be true this time. On the other hand, it's not that the implementation code is awful in any way. It has a cyclomatic complexity of 4 and fits within an 80 x 20 box. It could be much worse.)

The IParser interface came afterwards. It wasn't driven by the above test, but by later developments.
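
It's a minimal interface. As its use throughout this article implies, it amounts to no more than this:

public interface IParser<T>
{
    Validated<string, T> TryParse(string candidate);
}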

Rough proof of concept #

Once I had a passable implementation of BoolParser, I developed a similar IntParser to a degree where it supported a happy path. With two parsers, I had enough building blocks to demonstrate how to combine them. At that point, I also had some 40 minutes left, so it was time to produce something that might look useful.

At first, I wanted to demonstrate that it's possible to combine the two parsers, so I wrote this test:

[Fact]
public void ParseBoolAndIntProofOfConceptRaw()
{
    var args = "-l -p 8080";
    var l = new BoolParser('l').TryParse(args).SelectFailure(s => new[] { s });
    var p = new IntParser('p').TryParse(args).SelectFailure(s => new[] { s });
    Func<bool, int, (bool, int)> createTuple = (b, i) => (b, i);
    static string[] combineErrors(string[] s1, string[] s2) => s1.Concat(s2).ToArray();
 
    var actual = createTuple.Apply(l, combineErrors).Apply(p, combineErrors);
 
    Assert.Equal(Validated.Succeed<string[], (bool, int)>((true, 8080)), actual);
}

That's really not pretty, and I wouldn't expect an unsuspecting audience to understand what's going on. It doesn't help that C# is inadequate for applicative functors. While it's possible to implement applicative validation, the C# API is awkward. (There are ways to make it better than what's on display here, but keep in mind that I came into this exercise unprepared, and had to grab what was closest at hand.)
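
In case you're curious about the Apply methods used above, here are sketches of the two overloads involved. The actual code from the applicative validation articles may differ in its details, but the idea is the same: lift a multi-parameter function into Validated one argument at a time, and accumulate failures with the supplied combine function. Analogous overloads exist for functions of higher arities.

public static Validated<F, Func<T2, TResult>> Apply<F, T1, T2, TResult>(
    this Func<T1, T2, TResult> selector,
    Validated<F, T1> source,
    Func<F, F, F> combineErrors)
{
    // With only one Validated value in play there's nothing to combine yet;
    // the parameter is kept so that all the overloads look alike.
    return source.Match(
        onFailure: f => Validated.Fail<F, Func<T2, TResult>>(f),
        onSuccess: x1 => Validated.Succeed<F, Func<T2, TResult>>(x2 => selector(x1, x2)));
}
 
public static Validated<F, TResult> Apply<F, T, TResult>(
    this Validated<F, Func<T, TResult>> selector,
    Validated<F, T> source,
    Func<F, F, F> combineErrors)
{
    // Both values must be successes; if both are failures, the errors accumulate.
    return selector.Match(
        onFailure: f1 => source.Match(
            onFailure: f2 => Validated.Fail<F, TResult>(combineErrors(f1, f2)),
            onSuccess: _ => Validated.Fail<F, TResult>(f1)),
        onSuccess: map => source.Match(
            onFailure: f2 => Validated.Fail<F, TResult>(f2),
            onSuccess: x => Validated.Succeed<F, TResult>(map(x))));
}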

The main point of the above test was only to demonstrate that it's possible to combine two parsers into one. That took me roughly 15 minutes.

Armed with that knowledge, I then proceeded to define this base class:

public abstract class ArgsParser<T1, T2, T>
{
    private readonly IParser<T1> parser1;
    private readonly IParser<T2> parser2;
 
    public ArgsParser(IParser<T1> parser1, IParser<T2> parser2)
    {
        this.parser1 = parser1;
        this.parser2 = parser2;
    }
 
    public Validated<string[], T> TryParse(string candidate)
    {
        var l = parser1.TryParse(candidate).SelectFailure(s => new[] { s });
        var p = parser2.TryParse(candidate).SelectFailure(s => new[] { s });
 
        Func<T1, T2, T> create = Create;
        return create.Apply(l, CombineErrors).Apply(p, CombineErrors);
    }
 
    protected abstract T Create(T1 x1, T2 x2);
 
    private static string[] CombineErrors(string[] s1, string[] s2)
    {
        return s1.Concat(s2).ToArray();
    }
}

While I'm not a fan of inheritance, this seemed the fastest way to expand on the proof of concept. The class encapsulates the ugly details of the ParseBoolAndIntProofOfConceptRaw test, while leaving just enough room for a derived class:

internal sealed class ProofOfConceptParser : ArgsParser<bool, int, (bool, int)>
{
    public ProofOfConceptParser() : base(new BoolParser('l'), new IntParser('p'))
    {
    }
 
    protected override (bool, int) Create(bool x1, int x2)
    {
        return (x1, x2);
    }
}

This class only defines which parsers to use and how to translate successful results to a single object. Here, because this is still a proof of concept, the resulting object is just a tuple.

The corresponding test looks like this:

[Fact]
public void ParseBoolAndIntProofOfConcept()
{
    var sut = new ProofOfConceptParser();
 
    var actual = sut.TryParse("-l -p 8080");
 
    Assert.Equal(Validated.Succeed<string[], (bool, int)>((true, 8080)), actual);
}

At this point, I hit the two-hour mark, but I think I managed to produce enough code to convince a hypothetical audience that a complete solution is within grasp.

What remained was to

  • add proper error handling to IntParser
  • add a corresponding StringParser
  • improve the ArgsParser API
  • add better demo examples of the improved ArgsParser API

While I could leave this as an exercise to the reader, I couldn't just leave the code like that.

Finishing the kata #

For my own satisfaction, I decided to complete the kata, which I did in another hour.

Although I had started with an abstract base class, I knew how to refactor it to a sealed class with an injected Strategy. I did that for the existing class, and also added one that supports three parsers instead of two:

public sealed class ArgsParser<T1, T2, T3, T>
{
    private readonly IParser<T1> parser1;
    private readonly IParser<T2> parser2;
    private readonly IParser<T3> parser3;
    private readonly Func<T1, T2, T3, T> create;
 
    public ArgsParser(
        IParser<T1> parser1,
        IParser<T2> parser2,
        IParser<T3> parser3,
        Func<T1, T2, T3, T> create)
    {
        this.parser1 = parser1;
        this.parser2 = parser2;
        this.parser3 = parser3;
        this.create = create;
    }
 
    public Validated<string[], T> TryParse(string candidate)
    {
        var x1 = parser1.TryParse(candidate).SelectFailure(s => new[] { s });
        var x2 = parser2.TryParse(candidate).SelectFailure(s => new[] { s });
        var x3 = parser3.TryParse(candidate).SelectFailure(s => new[] { s });
        return create
            .Apply(x1, CombineErrors)
            .Apply(x2, CombineErrors)
            .Apply(x3, CombineErrors);
    }
 
    private static string[] CombineErrors(string[] s1, string[] s2)
    {
        return s1.Concat(s2).ToArray();
    }
}

Granted, that's a bit of boilerplate, but if you imagine this as supplied by a reusable library, you only have to write this once.

I was now ready to parse the kata's central example, "-l -p 8080 -d /usr/logs", to a strongly typed value:

private sealed record TestConfig(bool DoLog, int Port, string Directory);
 
[Theory]
[InlineData("-l -p 8080 -d /usr/logs")]
[InlineData("-p 8080 -l -d /usr/logs")]
[InlineData("-d /usr/logs -l -p 8080")]
[InlineData(" -d  /usr/logs  -l  -p 8080  ")]
public void ParseConfig(string args)
{
    var sut = new ArgsParser<bool, int, string, TestConfig>(
        new BoolParser('l'),
        new IntParser('p'),
        new StringParser('d'),
        (b, i, s) => new TestConfig(b, i, s));
 
    var actual = sut.TryParse(args);
 
    Assert.Equal(
        Validated.Succeed<string[], TestConfig>(
            new TestConfig(true, 8080, "/usr/logs")),
        actual);
}

This test parses some variations of the example input into an immutable record.

What happens if the input is malformed? Here's an example of that:

[Fact]
public void FailToParseConfig()
{
    var sut = new ArgsParser<bool, int, string, TestConfig>(
        new BoolParser('l'),
        new IntParser('p'),
        new StringParser('d'),
        (b, i, s) => new TestConfig(b, i, s));
 
    var actual = sut.TryParse("-p aityaity");
 
    Assert.True(actual.Match(
        onFailure: ss => ss.Contains("Expected integer for flag '-p', but got \"aityaity\"."),
        onSuccess: _ => false));
    Assert.True(actual.Match(
        onFailure: ss => ss.Contains("Missing value for flag '-d'."),
        onSuccess: _ => false));
}

Of particular interest is that, as promised by applicative validation, parsing failures don't short-circuit. The input value "-p aityaity" has two problems, and both are reported by TryParse.

At this point I was happy that I had sufficiently demonstrated the viability of the design. I decided to call it a day.

Conclusion #

As I did the Args kata, I found it interesting enough to warrant an article. Once I realised that I could use applicative parsing as the basis for the API, the rest followed.

There's room for improvement, but while doing katas is valuable, there are marginal returns in perfecting the code. Get the main functionality working, learn from it, and move on to another exercise.


Compile-time type-checked truth tables

Monday, 21 August 2023 08:07:00 UTC

With simple and easy-to-understand examples in F# and Haskell.

Eve Ragins recently published an article called Why you should use truth tables in your job. It's a good article. You should read it.

In it, she outlines how creating a Truth Table can help you smoke out edge cases or unclear requirements.

I agree, and it also beautifully explains why I find algebraic data types so useful.

With languages like F# or Haskell, this kind of modelling is part of the language, and you even get statically-typed compile-time checking that tells you whether you've handled all combinations.

Eve Ragins points out that there are other, socio-technical benefits from drawing up a truth table that you can, perhaps, print out, or otherwise share with non-technical stakeholders. Thus, the following is in no way meant as a full replacement, but rather as examples of how certain languages have affordances that enable you to think like this while programming.

F# #

I'm not going to go through Eve Ragins' blow-by-blow walkthrough, explaining how you construct a truth table. Rather, I'm just briefly going to show how simple it is to do the same in F#.

Most of the inputs in her example are Boolean values, which already exist in the language, but we need a type for the item status:

type ItemStatus = NotAvailable | Available | InUse

As is typical in F#, a type declaration is just a one-liner.

Now for something a little more interesting. In Eve Ragins' final table, there's a footnote that says that the dash/minus symbol indicates that the value is irrelevant. If you look a little closer, it turns out that the should_field_be_editable value is irrelevant whenever the should_field_show value is FALSE.

So instead of a bool * bool tuple, you really have a three-state type like this:

type FieldState = Hidden | ReadOnly | ReadWrite

It would probably have taken a few iterations to learn this if you'd jumped straight into pattern matching in F#, but since F# requires you to define types and functions before you can use them, I list the type now.

That's all you need to produce a truth table in F#:

let decide requiresApproval canUserApprove itemStatus =
    match requiresApproval, canUserApprove, itemStatus with
    |  true,  true, NotAvailable -> Hidden
    | false,  true, NotAvailable -> Hidden
    |  true, false, NotAvailable -> Hidden
    | false, false, NotAvailable -> Hidden
    |  true,  true,    Available -> ReadWrite
    | false,  true,    Available -> Hidden
    |  true, false,    Available -> ReadOnly
    | false, false,    Available -> Hidden
    |  true,  true,        InUse -> ReadOnly
    | false,  true,        InUse -> Hidden
    |  true, false,        InUse -> ReadOnly
    | false, false,        InUse -> Hidden

I've called the function decide because it wasn't clear to me what else to call it.

What's so nice about F# pattern matching is that the compiler can tell if you've missed a combination. If you forget a combination, you get a helpful Incomplete pattern match compiler warning that points out the combination that you missed.

And as I argue in my book Code That Fits in Your Head, you should turn warnings into errors. This would also be helpful in a case like this, since you'd be prevented from forgetting an edge case.

Haskell #

You can do the same exercise in Haskell, and the result is strikingly similar:

data ItemStatus = NotAvailable | Available | InUse deriving (Eq, Show)
 
data FieldState = Hidden | ReadOnly | ReadWrite deriving (Eq, Show)
 
decide :: Bool -> Bool -> ItemStatus -> FieldState
decide  True  True NotAvailable = Hidden
decide False  True NotAvailable = Hidden
decide  True False NotAvailable = Hidden
decide False False NotAvailable = Hidden
decide  True  True    Available = ReadWrite
decide False  True    Available = Hidden
decide  True False    Available = ReadOnly
decide False False    Available = Hidden
decide  True  True        InUse = ReadOnly
decide False  True        InUse = Hidden
decide  True False        InUse = ReadOnly
decide False False        InUse = Hidden

Just like in F#, if you forget a combination, the compiler will tell you:

LibrarySystem.hs:8:1: warning: [-Wincomplete-patterns]
    Pattern match(es) are non-exhaustive
    In an equation for `decide':
        Patterns of type `Bool', `Bool', `ItemStatus' not matched:
            False False NotAvailable
  |
8 | decide  True  True NotAvailable = Hidden
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^...

To be clear, that combination is not missing from the above code example. This compiler warning was one I subsequently caused by commenting out a line.

It's also possible to turn warnings into errors in Haskell.

Conclusion #

I love languages with algebraic data types because they don't just enable modelling like this, they encourage it. This makes it much easier to write code that handles various special cases that I'd easily overlook in other languages. In languages like F# and Haskell, the compiler will tell you if you forgot to deal with a combination.


Replacing Mock and Stub with a Fake

Monday, 14 August 2023 07:23:00 UTC

A simple C# example.

A reader recently wrote me about my 2013 article Mocks for Commands, Stubs for Queries, commenting that the 'final' code looks suspect. Since it looks like the following, that's hardly an overstatement.

public User GetUser(int userId)
{
    var u = this.userRepository.Read(userId);
    if (u.Id == 0)
        this.userRepository.Create(1234);
    return u;
}

Can you spot what's wrong?

Missing test cases #

You might point out that this example seems to violate Command Query Separation, and probably other design principles as well. I agree that the example is a bit odd, but that's not what I have in mind.

The problem with the above example is that while it correctly calls the Read method with the userId parameter, it calls Create with the hardcoded constant 1234. It really ought to call Create with userId.

Does this mean that the technique that I described in 2013 is wrong? I don't think so. Rather, I left the code in a rather unhelpful state. What I had in mind with that article was the technique I called data flow verification. As soon as I had delivered that message, I was, according to my own goals, done. I wrapped up the article, leaving the code as shown above.

As the reader remarked, it's noteworthy that an article about better unit testing leaves the System Under Test (SUT) in an obviously defective state.

The short response is that at least one test case is missing. Since this was only demo code to show an example, the entire test suite is this:

public class SomeControllerTests
{
    [Theory]
    [InlineData(1234)]
    [InlineData(9876)]
    public void GetUserReturnsCorrectValue(int userId)
    {
        var expected = new User();
        var td = new Mock<IUserRepository>();
        td.Setup(r => r.Read(userId)).Returns(expected);
        var sut = new SomeController(td.Object);
 
        var actual = sut.GetUser(userId);
 
        Assert.Equal(expected, actual);
    }
 
    [Fact]
    public void UserIsSavedIfItDoesNotExist()
    {
        var td = new Mock<IUserRepository>();
        td.Setup(r => r.Read(1234)).Returns(new User { Id = 0 });
        var sut = new SomeController(td.Object);
 
        sut.GetUser(1234);
 
        td.Verify(r => r.Create(1234));
    }
}

There are three test cases: Two for the parametrised GetUserReturnsCorrectValue method and one for the UserIsSavedIfItDoesNotExist test. Since the latter only verifies the hardcoded value 1234, the Devil's Advocate can get by with using that hardcoded value as well.

Adding a test case #

The solution to that problem is simple enough. Add another test case by converting UserIsSavedIfItDoesNotExist to a parametrised test:

[Theory]
[InlineData(1234)]
[InlineData(9876)]
public void UserIsSavedIfItDoesNotExist(int userId)
{
    var td = new Mock<IUserRepository>();
    td.Setup(r => r.Read(userId)).Returns(new User { Id = 0 });
    var sut = new SomeController(td.Object);
 
    sut.GetUser(userId);
 
    td.Verify(r => r.Create(userId));
}

There's no reason to edit the other test method; this should be enough to elicit a change to the SUT:

public User GetUser(int userId)
{
    var u = this.userRepository.Read(userId);
    if (u.Id == 0)
        this.userRepository.Create(userId);
    return u;
}

When you use Mocks (or, rather, Spies) and Stubs, the Data Flow Verification technique is useful.

On the other hand, I no longer use Spies or Stubs since they tend to break encapsulation.

Fake #

These days, I tend to only model real application dependencies as Test Doubles, and when I do, I use Fakes.

Dos Equis meme with the text: I don't always use Test Doubles, but when I do, I use Fakes.

While the article series From interaction-based to state-based testing goes into more details, I think that this small example is a good opportunity to demonstrate the technique.

The IUserRepository interface is defined like this:

public interface IUserRepository
{
    User Read(int userId);
 
    void Create(int userId);
}

A typical Fake is an in-memory collection:

public sealed class FakeUserRepository : Collection<User>, IUserRepository
{
    public void Create(int userId)
    {
        Add(new User { Id = userId });
    }
 
    public User Read(int userId)
    {
        var user = this.SingleOrDefault(u => u.Id == userId);
        if (user == null)
            return new User { Id = 0 };
        return user;
    }
}

In my experience, they're typically easy to implement by inheriting from a collection base class. Such an object exhibits typical traits of a Fake object: It fulfils the implied contract, but it lacks some of the 'ilities'.

The contract of a Repository is typically that if you add an Entity, you'd expect to be able to retrieve it later. If the Repository offers a Delete method (this one doesn't), you'd expect the deleted Entity to be gone, so that you can't retrieve it. And so on. The FakeUserRepository class fulfils such a contract.
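
If you want to make that contract explicit, you can even cover the Fake itself with a small test. The following sketch isn't part of the original example, but it documents the expectation that what you Create you can subsequently Read:

[Theory]
[InlineData(1234)]
[InlineData(9876)]
public void ReadReturnsCreatedUser(int userId)
{
    var sut = new FakeUserRepository();
 
    sut.Create(userId);
    var actual = sut.Read(userId);
 
    Assert.Equal(userId, actual.Id);
}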

On the other hand, you'd also expect a proper Repository implementation to support more than that:

  • You'd expect a proper implementation to persist data so that you can reboot or change computers without losing data.
  • You'd expect a proper implementation to correctly handle multiple threads.
  • You may expect a proper implementation to support ACID transactions.

The FakeUserRepository does none of that, but in the context of a unit test, it doesn't matter. The data exists as long as the object exists, and that's until it goes out of scope. As long as a test needs the Repository, it remains in scope, and the data is there.

Likewise, each test runs in a single thread. Even when tests run in parallel, each test has its own Fake object, so there's no shared state. Therefore, even though FakeUserRepository isn't thread-safe, it doesn't have to be.

Testing with the Fake #

You can now rewrite the tests to use FakeUserRepository:

[Theory]
[InlineData(1234)]
[InlineData(9876)]
public void GetUserReturnsCorrectValue(int userId)
{
    var expected = new User { Id = userId };
    var db = new FakeUserRepository { expected };
    var sut = new SomeController(db);
 
    var actual = sut.GetUser(userId);
 
    Assert.Equal(expected, actual);
}
 
[Theory]
[InlineData(1234)]
[InlineData(9876)]
public void UserIsSavedIfItDoesNotExist(int userId)
{
    var db = new FakeUserRepository();
    var sut = new SomeController(db);
 
    sut.GetUser(userId);
 
    Assert.Single(db, u => u.Id == userId);
}

Instead of asking a Spy whether or not a particular method was called (which is an implementation detail), the UserIsSavedIfItDoesNotExist test verifies the posterior state of the database.

Conclusion #

In my experience, using Fakes simplifies unit tests. While you may have to edit the Fake implementation from time to time, you edit that code in a single place. The alternative is to edit all affected tests, every time you change something about a dependency. This is also known as Shotgun Surgery and considered an antipattern.

The code base that accompanies my book Code That Fits in Your Head has more realistic examples of this technique, and much else.


Comments

Hi Mark,

Firstly, thank you for another insightful article.

I'm curious about using Fakes and testing exceptions. In scenarios where dynamic mocks (like Moq) are employed, we can mock a method to throw an exception, allowing us to test the expected behavior of the System Under Test (SUT). In your example, if we were using Moq, we could create a test to mock the UserRepository's Read method to throw a specific exception (e.g., SqlException). This way, we could ensure that the controller responds appropriately, perhaps with an internal server response. However, I'm unsure about how to achieve a similar test using Fakes. Is this type of test suitable for Fakes, or do such tests not align with the intended use of Fakes? Personally, I avoid using try-catch blocks in repositories or controllers and prefer handling exceptions in middleware (e.g., ErrorHandler). In such cases, I write separate unit tests for the middleware. Could this be a more fitting approach? Your guidance would be much appreciated.

(And yes, I remember your advice about framing questions - it's in your 'Code that Fits in Your Head' book! :D )

Thanks

2024-01-15 03:10 UTC

Thank you for writing. That's a question that warrants an article or two. I've now published an article titled Error categories and category errors. It's not a direct answer to your question, but I found it useful to first outline my thinking on errors in general.

I'll post an update here when I also have an answer to your specific question.

2024-01-30 7:13 UTC

AmirB, once again, thank you for writing. I've now published an article titled Testing exceptions that attempts to answer your question.

2024-02-26 6:57 UTC

NonEmpty catamorphism

Monday, 07 August 2023 11:40:00 UTC

The universal API for generic non-empty collections, with examples in C# and Haskell.

This article is part of an article series about catamorphisms. A catamorphism is a universal abstraction that describes how to digest a data structure into a potentially more compact value.

I was recently doing some work that required a data structure like a collection, but with the additional constraint that it should be guaranteed to have at least one element. I've known about Haskell's NonEmpty type, and how to port it to C# for years. This time I needed to implement it in a third language, and since I had a little extra time available, I thought it'd be interesting to pursue a conjecture of mine: It seems as though you can implement most (all?) of a generic data structure's API based on its catamorphism.

While I could make a guess as to how a catamorphism might look for a non-empty collection, I wasn't sure. A quick web search revealed nothing conclusive, so I decided to deduce it from first principles. As this article series demonstrates, you can derive the catamorphism from a type's isomorphic F-algebra.

The beginning of this article presents the catamorphism in C#, with an example. The rest of the article describes how to deduce the catamorphism. This part of the article presents my work in Haskell. Readers not comfortable with Haskell can just read the first part, and consider the rest of the article as an optional appendix.

C# catamorphism #

This article will use a custom C# class called NonEmptyCollection<T>, which is near-identical to the NotEmptyCollection<T> originally introduced in the article Semigroups accumulate.

I don't know why I originally chose to name the class NotEmptyCollection instead of NonEmptyCollection, but it's annoyed me ever since. I've finally decided to rectify that mistake, so from now on, the name is NonEmptyCollection.
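
For reference, here's an abbreviated sketch of the renamed class; the earlier article has the full version. The essential point is that a Head is always present, while the Tail may be empty.

public class NonEmptyCollection<T> : IReadOnlyCollection<T>
{
    public NonEmptyCollection(T head, params T[] tail)
    {
        if (head == null)
            throw new ArgumentNullException(nameof(head));
 
        Head = head;
        Tail = tail;
    }
 
    public T Head { get; }
 
    public IReadOnlyCollection<T> Tail { get; }
 
    public int Count => Tail.Count + 1;
 
    // Enumeration yields the head first, then the tail (Prepend comes from System.Linq).
    public IEnumerator<T> GetEnumerator() => Tail.Prepend(Head).GetEnumerator();
 
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}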

The catamorphism for NonEmptyCollection is this instance method:

public TResult Aggregate<TResult>(Func<T, IReadOnlyCollection<T>, TResult> algebra)
{
    return algebra(Head, Tail);
}

Because the NonEmptyCollection class is really just a glorified tuple, the algebra is any function which produces a single value from the two constituent values.

It's easy to fall into the trap of thinking of the catamorphism as 'reducing' the data structure to a more compact form. While this is a common kind of operation, loss of data is not inevitable. You can, for example, return a new collection, essentially doing nothing:

var nec = new NonEmptyCollection<int>(42, 1337, 2112, 666);
var same = nec.Aggregate((x, xs) => new NonEmptyCollection<int>(x, xs.ToArray()));

This Aggregate method enables you to safely find a maximum value:

var nec = new NonEmptyCollection<int>(42, 1337, 2112, 666);
var max = nec.Aggregate((x, xs) => xs.Aggregate(x, Math.Max));

or to safely calculate an average:

var nec = new NonEmptyCollection<int>(42, 1337, 2112, 666);
var average = nec.Aggregate((x, xs) => xs.Aggregate(x, (a, b) => a + b) / (xs.Count + 1.0));

Both of these two last examples use the built-in Aggregate function to accumulate the xs. It uses the overload that takes a seed, for which it supplies x. This means that there's guaranteed to be at least that one value.

The catamorphism given here is not unique. You can create a trivial variation by swapping the two function arguments, so that x comes after xs.

NonEmpty F-algebra #

As in the previous article, I'll use Fix and cata as explained in Bartosz Milewski's excellent article on F-algebras.

As always, start with the underlying endofunctor:

data NonEmptyF a c = NonEmptyF { head :: a, tail :: ListFix a }
                     deriving (Eq, Show, Read)
 
instance Functor (NonEmptyF a) where
  fmap _ (NonEmptyF x xs) = NonEmptyF x xs

Instead of using Haskell's standard list ([]) for the tail, I've used ListFix from the article on list catamorphism. This should, hopefully, demonstrate how you can build on already established definitions derived from first principles.

Since a non-empty collection is really just a glorified tuple of head and tail, there's no recursion, and thus, the carrier type c is not used. You could argue that going through all of these motions is overkill, but it still provides some insights. This is similar to the Boolean catamorphism and Maybe catamorphism.

The fmap function ignores the mapping argument (often called f), since the Functor instance maps NonEmptyF a c to NonEmptyF a c1, but the c or c1 type is not used.

As was the case when deducing the recent catamorphisms, Haskell isn't too happy about defining instances for a type like Fix (NonEmptyF a). To address that problem, you can introduce a newtype wrapper:

newtype NonEmptyFix a =
  NonEmptyFix { unNonEmptyFix :: Fix (NonEmptyF a) } deriving (Eq, Show, Read)

You can define Functor, Applicative, Monad, etc. instances for this type without resorting to any funky GHC extensions. Keep in mind that, ultimately, the purpose of all this code is just to figure out what the catamorphism looks like. This code isn't intended for actual use.

A helper function makes it easier to define NonEmptyFix values:

createNonEmptyF :: a -> ListFix a -> NonEmptyFix a
createNonEmptyF x xs = NonEmptyFix $ Fix $ NonEmptyF x xs

Here's how to use it:

ghci> createNonEmptyF 42 $ consF 1337 $ consF 2112 nilF
NonEmptyFix {
  unNonEmptyFix = Fix (NonEmptyF 42 (ListFix (Fix (ConsF 1337 (Fix (ConsF 2112 (Fix NilF)))))))}

While this is quite verbose, keep in mind that the code shown here isn't meant to be used in practice. The goal is only to deduce catamorphisms from more basic universal abstractions, and you now have all you need to do that.

Haskell catamorphism #

At this point, you have two out of three elements of an F-Algebra. You have an endofunctor (NonEmptyF a), and an object c, but you still need to find a morphism NonEmptyF a c -> c. Notice that the algebra you have to find is the function that reduces the functor to its carrier type c, not the 'data type' a. This takes some time to get used to, but that's how catamorphisms work. This doesn't mean, however, that you get to ignore a, as you'll see.

As in the previous articles, start by writing a function that will become the catamorphism, based on cata:

nonEmptyF = cata alg . unNonEmptyFix
  where alg (NonEmptyF x xs) = undefined

While this compiles, with its undefined implementation of alg, it obviously doesn't do anything useful. I find, however, that it helps me think. How can you return a value of the type c from alg? You could pass a function argument to the nonEmptyF function and use it with x and xs:

nonEmptyF :: (a -> ListFix a -> c) -> NonEmptyFix a -> c
nonEmptyF f = cata alg . unNonEmptyFix
  where alg (NonEmptyF x xs) = f x xs

This works. Since cata has the type Functor f => (f a -> a) -> Fix f -> a, that means that alg has the type f a -> a. In the case of NonEmptyF, the compiler infers that the alg function has the type NonEmptyF a c -> c1, which fits the bill, since c may be the same type as c1.

This, then, is the catamorphism for a non-empty collection. This one is just a single function. It's still not the only possible catamorphism, since you could trivially flip the arguments to f.

I've chosen this representation because the arguments x and xs are defined in the same order as the order of head before tail. Notice how this is the same order as the above C# Aggregate method.

Basis #

You can implement most other useful functionality with nonEmptyF. Here's the Semigroup instance and a useful helper function:

toListFix :: NonEmptyFix a -> ListFix a
toListFix = nonEmptyF consF
 
instance Semigroup (NonEmptyFix a) where
  xs <> ys =
    nonEmptyF (\x xs' -> createNonEmptyF x $ xs' <> toListFix ys) xs

The implementation uses nonEmptyF to operate on xs. Inside the lambda expression, it converts ys to a list, and uses the ListFix Semigroup instance to concatenate xs with it.

Here's the Functor instance:

instance Functor NonEmptyFix where
  fmap f = nonEmptyF (\x xs -> createNonEmptyF (f x) $ fmap f xs)

Like the Semigroup instance, this fmap implementation uses fmap on xs, which is the ListFix Functor instance.

The Applicative instance is much harder to write from scratch (or, at least, I couldn't come up with a simpler way):

instance Applicative NonEmptyFix where
  pure x = createNonEmptyF x nilF
  liftA2 f xs ys =
    nonEmptyF
      (\x xs' ->
        nonEmptyF
          (\y ys' ->
            createNonEmptyF
              (f x y)
              (liftA2 f (consF x nilF) ys' <> liftA2 f xs' (consF y ys')))
          ys)
      xs

While that looks complicated, it's not that bad. It uses nonEmptyF to 'loop' over the xs, and then a nested call to nonEmptyF to 'loop' over the ys. The inner lambda expression uses f x y to calculate the head, but it also needs to calculate all other combinations of values in xs and ys.

Boxes labelled x, x1, x2, x3 over other boxes labelled y, y1, y2, y3. The x and y box are connected by an arrow labelled f.

First, it keeps x fixed and 'loops' through all the remaining ys'; that's the liftA2 f (consF x nilF) ys' part:

Boxes labelled x, x1, x2, x3 over other boxes labelled y, y1, y2, y3. The x and y1, y2, y3 boxes are connected by three arrows labelled with a single f.

Then it 'loops' over all the remaining xs' and all the ys; that is, liftA2 f xs' (consF y ys').

Boxes labelled x, x1, x2, x3 over other boxes labelled y, y1, y2, y3. The x1, x2, x3 boxes are connected to the y, y1, y2, y3 boxes by arrows labelled with a single f.

The two liftA2 functions apply to the ListFix Applicative instance.

You'll be happy to see, I think, that the Monad instance is simpler:

instance Monad NonEmptyFix where
  xs >>= f =
    nonEmptyF (\x xs' ->
      nonEmptyF
        (\y ys -> createNonEmptyF y $ ys <> (xs' >>= toListFix . f)) (f x)) xs

And fortunately, Foldable and Traversable are even simpler:

instance Foldable NonEmptyFix where
  foldr f seed = nonEmptyF (\x xs -> f x $ foldr f seed xs)
 
instance Traversable NonEmptyFix where
  traverse f = nonEmptyF (\x xs -> liftA2 createNonEmptyF (f x) (traverse f xs))

Finally, you can implement conversions to and from the NonEmpty type from Data.List.NonEmpty:

toNonEmpty :: NonEmptyFix a -> NonEmpty a
toNonEmpty = nonEmptyF (\x xs -> x :| toList xs)
 
fromNonEmpty :: NonEmpty a -> NonEmptyFix a
fromNonEmpty (x :| xs) = createNonEmptyF x $ fromList xs

This demonstrates that NonEmptyFix is isomorphic to NonEmpty.

Conclusion #

The catamorphism for a non-empty collection is a single function that produces a single value from the head and the tail of the collection. While it's possible to implement a 'standard fold' (foldr in Haskell), the non-empty catamorphism doesn't require a seed to get started. The data structure guarantees that there's always at least one value available, and this value can then be used to 'kick off' a fold.

In C# one can define the catamorphism as the above Aggregate method. You could then define all other instance functions based on Aggregate.
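
As a sketch of what that might look like, here's a map function (Select, in LINQ vocabulary) defined entirely in terms of Aggregate, written as an extension method for illustration:

public static NonEmptyCollection<TResult> Select<T, TResult>(
    this NonEmptyCollection<T> source,
    Func<T, TResult> selector)
{
    // The algebra rebuilds the collection, mapping the head and every element
    // of the tail, so the result is guaranteed to be non-empty as well.
    return source.Aggregate((head, tail) =>
        new NonEmptyCollection<TResult>(
            selector(head),
            tail.Select(selector).ToArray()));
}

Functions like SelectMany, or a safe Max, could be defined in the same way.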

Next: Either catamorphism.


Test-driving the pyramid's top

Monday, 31 July 2023 07:00:00 UTC

Some thoughts on TDD related to integration and systems testing.

My recent article Works on most machines elicited some responses. Upon reflection, it seems that most of the responses relate to the top of the Test Pyramid.

While I don't have a one-shot solution that addresses all concerns, I hope that I can nonetheless suggest some ideas and inspire a reader or two. That's all. I intend nothing of the following to be prescriptive. I describe my own professional experience: What has worked for me. Perhaps it could also work for you. Use the ideas if they inspire you. Ignore them if you find them impractical.

The Test Pyramid #

The Test Pyramid is often depicted like this:

Standard Test Pyramid, which is really a triangle with three layers: Unit tests, integration tests, and UI tests.

This seems to indicate that while the majority of tests should be unit tests, you should also have a substantial number of integration tests, and quite a few UI tests.

Perhaps the following is obvious, but the Test Pyramid is an idea; it's a way to communicate a concept in a compelling way. What one should take away from it, I think, is only this: The number of tests in each category should form a total order, where the unit test category is the maximum. In other words, you should have more unit tests than you have tests in the next category, and so on.

No-one says that you can only have three levels, or that they have to have the same height. Finally, the above figure isn't even a pyramid, but rather a triangle.

I sometimes think of the Test Pyramid like this:

Test pyramid in perspective.

To be honest, it's not so much whether or not the pyramid is shown in perspective, but rather that the unit test base is significantly more voluminous than the other levels, and that the top is quite small.

Levels #

In order to keep the above discussion as recognisable as possible, I've used the labels unit tests, integration tests, and UI tests. It's easy to get caught up in a discussion about how these terms are defined. Exactly what is a unit test? How does it differ from an integration test?

There's no universally accepted definition of a unit test, so it tends to be counter-productive to spend too much time debating the finer points of what to call the tests in each layer.

Instead, I find the following criteria useful:

  1. In-process tests
  2. Tests that involve more than one process
  3. Tests that can only be performed in production

I'll describe each in a little more detail. Along the way, I'll address some of the reactions to Works on most machines.

In-process tests #

The in-process category corresponds roughly to the Test Pyramid's unit test level. It includes 'traditional' unit tests such as tests of stand-alone functions or methods on objects, but also Facade Tests. The latter may involve multiple modules or objects, perhaps even from multiple libraries. Many people may call them integration tests because they integrate more than one module.

As long as an automated test runs in a single process, in memory, it tends to be fast and leave no persistent state behind. This is almost exclusively the kind of test I tend to test-drive. I often follow an outside-in TDD process, an example of which is shown in my book Code That Fits in Your Head.

Consider an example from the source code that accompanies the book:

[Fact]
public async Task ReserveTableAtTheVaticanCellar()
{
    using var api = new SelfHostedApi();
    var client = api.CreateClient();
    var timeOfDayLaterThanLastSeatingAtTheOtherRestaurants =
        TimeSpan.FromHours(21.5);
 
    var at = DateTime.Today.AddDays(433).Add(
        timeOfDayLaterThanLastSeatingAtTheOtherRestaurants);
    var dto = Some.Reservation.WithDate(at).ToDto();
    var response =
        await client.PostReservation("The Vatican Cellar", dto);
 
    response.EnsureSuccessStatusCode();
}

I think of a test like this as an automated acceptance test. It uses an internal test-specific domain-specific language (test utilities) to exercise the REST service's API. It uses ASP.NET self-hosting to run both the service and the HTTP client in the same process.

Even though this may, at first glance, look like an integration test, it's an artefact of test-driven development. Since it cuts across both the HTTP layer and the domain model, some readers may think of it as an integration test. It does, however, use a stateful in-memory data store, so it doesn't involve more than a single process.

Tests that span processes #

There are aspects of software that you can't easily drive with tests. I'll return to some really gnarly examples in the third category, but in between, we find concerns that are hard, but still possible to test. The reason that they are hard is often because they involve more than one process.

The most common example is data access. Many software systems save or retrieve data. With test-driven development, you're supposed to let the tests inform your API design decisions in such a way that everything that involves difficult, error-prone code is factored out of the data access layer, and into another part of the code that can be tested in process. This development technique ought to drain the hard-to-test components of logic, leaving behind a Humble Object.

One reaction to Works on most machines concerns exactly that idea:

"As a developer, you need to test HumbleObject's behavior."

It's almost tautologically part of the definition of a Humble Object that you're not supposed to test it. Still, realistically, ladeak has a point.

When I wrote the example code to Code That Fits in Your Head, I applied the Humble Object pattern to the data access component. For a good while, I had a SqlReservationsRepository class that was so simple, so drained of logic, that it couldn't possibly fail.

Until, of course, the inevitable happened: There was a bug in the SqlReservationsRepository code. Not to make a long story out of it, but even with a really low cyclomatic complexity, I'd accidentally swapped two database columns when reading from a table.

Whenever possible, when I discover a bug, I first write an automated test that exposes that bug, and only then do I fix the problem. This is congruent with my lean bias. If a defect can occur once, it can occur again in the future, so it's better to have a regression test.

The problem with this bug is that it was in a Humble Object. So, ladeak is right. Sooner or later, you'll have to test the Humble Object, too.

That's when I had to bite the bullet and add a test library that tests against the database.

One such test looks like this:

[Theory]
[InlineData(Grandfather.Id, "2022-06-29 12:00""e@example.gov""Enigma", 1)]
[InlineData(Grandfather.Id, "2022-07-27 11:40""c@example.com""Carlie", 2)]
[InlineData(2, "2021-09-03 14:32""bon@example.edu""Jovi", 4)]
public async Task CreateAndReadRoundTrip(
    int restaurantId,
    string at,
    string email,
    string name,
    int quantity)
{
    var expected = new Reservation(
        Guid.NewGuid(),
        DateTime.Parse(at, CultureInfo.InvariantCulture),
        new Email(email),
        new Name(name),
        quantity);
    var connectionString = ConnectionStrings.Reservations;
    var sut = new SqlReservationsRepository(connectionString);
 
    await sut.Create(restaurantId, expected);
    var actual = await sut.ReadReservation(restaurantId, expected.Id);
 
    Assert.Equal(expected, actual);
}

The entire test runs in a special context where a database is automatically created before the test runs, and torn down once the test has completed.
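One way to build such a context is with xUnit.net's BeforeAfterTestAttribute. The following is only a minimal sketch of the idea, not the book's actual code; the attribute name, connection string, and SQL are my own placeholder assumptions:

using System.Data.SqlClient;
using System.Reflection;
using Xunit.Sdk;
 
public sealed class UseDatabaseAttribute : BeforeAfterTestAttribute
{
    private const string ConnectionString =
        "Server=(LocalDB)\\MSSQLLocalDB;Integrated Security=true";
 
    public override void Before(MethodInfo methodUnderTest)
    {
        // Create a throwaway database; schema scripts would run here too.
        Execute("CREATE DATABASE [RestaurantIntegrationTest]");
    }
 
    public override void After(MethodInfo methodUnderTest)
    {
        // Tear the database down so the next test starts from a clean slate.
        Execute(
            "ALTER DATABASE [RestaurantIntegrationTest] " +
            "SET SINGLE_USER WITH ROLLBACK IMMEDIATE; " +
            "DROP DATABASE [RestaurantIntegrationTest]");
    }
 
    private static void Execute(string sql)
    {
        using var conn = new SqlConnection(ConnectionString);
        conn.Open();
        using var cmd = new SqlCommand(sql, conn);
        cmd.ExecuteNonQuery();
    }
}

Decorating a test like CreateAndReadRoundTrip with such an attribute gives each test its own pristine database, which is what makes the round-trip assertion trustworthy.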

"When building such behavior, you can test against a shared instance of the service in your dev team or run that service on your dev machine in a container."

Yes, those are two options. A third, in the spirit of GOOS, is to strongly favour technologies that support automation. Believe it or not, you can automate SQL Server. You don't need a Docker container for it. That's what I did in the above test.

I can see how a Docker container with an external dependency can be useful too, so I'm not trying to dismiss that technology. The point is, however, that simpler alternatives may exist. I, and others, did test-driven development for more than a decade before Docker existed.

Tests that can only be performed in production #

The last category of tests are those that you can only perform on a production system. What might be examples of that?

I've run into a few over the years. One such test is what I call a Smoke Test: Metaphorically speaking, turn it on and see if it develops smoke. These kinds of tests are good at catching configuration errors. Does the web server have the right connection string to the database? A test can verify whether that's the case, but it makes no sense to run such a test on a development machine, or against a test system, or a staging environment. You want to verify that the production system is correctly configured. Only a test against the production system can do that.

For every configuration value, you may want to consider a Smoke Test.
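What might such a Smoke Test look like? Here's a minimal sketch; the URL is a placeholder, and in reality you'd point it at the production system's address:

[Fact]
public async Task ProductionSystemRespondsToHomeRequest()
{
    using var client = new HttpClient();
 
    var response = await client.GetAsync(new Uri("https://example.com/"));
 
    // Assert as little as possible about a live system.
    response.EnsureSuccessStatusCode();
}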

There are other kinds of tests you can only perform in production. Sometimes, it's not technical concerns, but rather legal or financial constraints, that dictate circumstances.

A few years ago I worked with a software organisation that, among other things, integrated with the Danish personal identification number system (CPR). Things may have changed since, but back then, an organisation had to have a legal agreement with CPR before being granted access to its integration services. It's an old system (originally from 1968) with a proprietary data integration protocol.

We test-drove a parser of the data format, but that still left behind a Humble Object that would actually perform the data transfers. How do we test that Humble Object?

Back then, at least, there was no test system for the CPR service, and it was illegal to query the live system unless you had a business reason. And software testing did not constitute a legal reason.

The only legal option was to make the Humble Object as simple and foolproof as possible, and then observe how it worked in actual production situations. Containers wouldn't help in such a situation.

It's possible to write automated tests against production systems, but unless you're careful, they're difficult to write and maintain. At the very least, go easy on the assertions, since you can't assume much about the run-time data and behaviour of a live system. Smoke tests are mostly just 'pings', so they can be written to be fairly maintenance-free, but you shouldn't need many of them.

Other kinds of tests against production are likely to be fragile, so it pays to minimise their number. That's the top of the pyramid.

User interfaces #

I no longer develop user interfaces, so take the following with a pinch of salt.

The 'original' Test Pyramid that I've depicted above has UI tests at the pyramid's top. That doesn't necessarily match the categories I've outlined here; don't assume parity.

UI tests may or may not involve more than one process, but they are often difficult to maintain for other reasons. Perhaps this is where the pyramid metaphor starts to break down. All models are wrong, but some are useful.

Back when I still programmed user interfaces, I'd usually test-drive them via a subcutaneous API, and rely on some variation of MVC to keep the rendered controls in sync. Still, once in a while, you need to verify that the user interface looks as it's supposed to. Often, the best tool for that job is the good old Mark I Eyeball.

This still means that you need to run the application from time to time.

"Docker is also very useful for enabling others to run your software on their machines. Recently, we've been exploring some apps that consisted of ~4 services (web servers) and a database. All of them written in different technologies (PHP, Java, C#). You don't have to setup environment variables. You don't need to have relevant SDKs to build projects etc. Just run docker command, and spin them instantly on your PC."

That sounds like a reasonable use case. I've never found myself in such circumstances, but I can imagine the utility that containers offer in a situation like that. Here's how I envision the scenario:

A box with arrows to three other boxes, which again have arrows to a database symbol.

The boxes with rounded corners symbolise containers.

Again, my original goal with the previous article wasn't to convince you that container technologies are unequivocally bad. Rather, it was to suggest that test-driven development (TDD) solves many of the problems that people seem to think can only be solved with containers. Since TDD has many other beneficial side effects, it's worth considering instead of mindlessly reaching for containers, which may represent only a local maximum.

How could TDD address qfilip's concern?

When I test-drive software, I favour real dependencies, and I favour Fake objects over Mocks and Stubs. Were I to return to user-interface programming today, I'd define its external dependencies as one or more interfaces, and implement a Fake Object for each.

Not only will this enable me to simulate the external dependencies with the Fakes. If I implement the Fakes as part of the production code, I'd even be able to spin up the system, using the Fakes instead of the real system.

App box with arrows pointing to itself.

A Fake is an implementation that 'almost works'. A common example is an in-memory collection instead of a database. It's neither persistent nor thread-safe, but it's internally consistent. What you add, you can retrieve, until you delete it again. For the purposes of starting the app in order to verify that the user interface looks correct, that should be good enough.
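As a concrete sketch, a Fake data store can be little more than a dictionary behind a repository interface. The interface shown here is a simplified stand-in for the one in the book's example code, not a faithful copy:

public interface IReservationsRepository
{
    Task Create(int restaurantId, Reservation reservation);
    Task<Reservation?> ReadReservation(int restaurantId, Guid id);
}
 
public sealed class FakeReservationsRepository : IReservationsRepository
{
    // In-memory state: neither persistent nor thread-safe, but consistent.
    private readonly Dictionary<Guid, Reservation> reservations = new();
 
    public Task Create(int restaurantId, Reservation reservation)
    {
        reservations[reservation.Id] = reservation;
        return Task.CompletedTask;
    }
 
    public Task<Reservation?> ReadReservation(int restaurantId, Guid id)
    {
        reservations.TryGetValue(id, out var reservation);
        return Task.FromResult(reservation);
    }
}

Wiring up this implementation instead of SqlReservationsRepository is enough to start the whole application without a database.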

Another related example is NServiceBus, which comes with a file transport that is clearly labelled as not for production use. While it's called the Learning Transport, it's also useful for exploratory testing on a development machine. Although this example clearly makes use of an external resource (the file system), it illustrates how a Fake implementation can alleviate the need for a container.

Uses for containers #

Ultimately, it's still useful to be able to stand up an entire system, as qfilip suggests, and if containers are a good way to do that, it doesn't bother me. At the risk of sounding like a broken record, I never intended to say that containers are useless.

When I worked as a Software Development Engineer at Microsoft, I had two computers: A laptop and a rather beefy tower PC. I've always done all programming on laptops, so I repurposed the tower as a virtual server with all my system's components on separate virtual machines (VMs). The database ran in one VM, the application server in another, and so on. I no longer remember what all the components were, but I seem to recall that I had four VMs running on that one box.

While I didn't use it much, I found it valuable to occasionally verify that all components could talk to each other on a realistic network topology. This was in 2008, and Docker wasn't around then, but I could imagine it would have made that task easier.

I don't dispute that Docker and Kubernetes are useful, but the job of a software architect is to carefully identify the technologies on which a system should be based. The more technology dependencies you take on, the more rigid the design.

After a few decades of programming, my experience is that as a programmer and architect, I can find better alternatives than depending on container technologies. If testers and IT operators find containers useful to do their jobs, then that's fine by me. Since my code works on most machines, it works in containers, too.

Truly Humble Objects #

One last response, and I'll wrap this up.

"As a developer, you need to test HumbleObject's behavior. What if a DatabaseConnection or a TCP conn to a message queue is down?"

How should such situations be handled? There may always be special cases, but in general, I can think of two reactions:

  • Log the error
  • Retry the operation

Assuming that the Humble Object is a polymorphic type (i.e. inherits a base class or implements an interface), you should be able to extract each of these behaviours to general-purpose components.

In order to log errors, you can either use a Decorator or a global exception handler. Most frameworks provide a way to catch (otherwise) unhandled exceptions, exactly for this purpose, so you don't have to add such functionality to a Humble Object.
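As an illustration, a logging Decorator around the repository interface sketched above could look like the following. This is my own sketch; the ILogger plumbing (from Microsoft.Extensions.Logging) is an assumption, not part of the book's example code:

public sealed class LoggingReservationsRepository : IReservationsRepository
{
    private readonly IReservationsRepository inner;
    private readonly ILogger<LoggingReservationsRepository> logger;
 
    public LoggingReservationsRepository(
        IReservationsRepository inner,
        ILogger<LoggingReservationsRepository> logger)
    {
        this.inner = inner;
        this.logger = logger;
    }
 
    public async Task Create(int restaurantId, Reservation reservation)
    {
        try
        {
            await inner.Create(restaurantId, reservation);
        }
        catch (Exception e)
        {
            // Log and rethrow; the Decorator adds no behaviour of its own.
            logger.LogError(e, "Create failed.");
            throw;
        }
    }
 
    public async Task<Reservation?> ReadReservation(int restaurantId, Guid id)
    {
        try
        {
            return await inner.ReadReservation(restaurantId, id);
        }
        catch (Exception e)
        {
            logger.LogError(e, "ReadReservation failed.");
            throw;
        }
    }
}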

Retry logic can also be delegated to a third-party component. For .NET I'd start looking at Polly, but I'd be surprised if other platforms don't have similar libraries that implement the stability patterns from Release It.
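A sketch of what that might look like with Polly, wrapping a call to the Humble Object; the exception type, retry count, and back-off are arbitrary examples:

// Assumes a 'repository', 'restaurantId', and 'reservation' in scope.
var policy = Policy
    .Handle<SqlException>()
    .WaitAndRetryAsync(
        retryCount: 3,
        sleepDurationProvider: attempt =>
            TimeSpan.FromSeconds(Math.Pow(2, attempt)));
 
await policy.ExecuteAsync(() => repository.Create(restaurantId, reservation));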

Something more specialised, like a fail-over mechanism, sounds like a good reason to wheel out the Chain of Responsibility pattern.
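One way to interpret fail-over as a Chain of Responsibility, again only as a sketch against the repository interface from before: try a primary implementation, and pass the request on to the next link when it fails.

public sealed class FailOverReservationsRepository : IReservationsRepository
{
    private readonly IReservationsRepository primary;
    private readonly IReservationsRepository next;
 
    public FailOverReservationsRepository(
        IReservationsRepository primary,
        IReservationsRepository next)
    {
        this.primary = primary;
        this.next = next;
    }
 
    public async Task Create(int restaurantId, Reservation reservation)
    {
        try { await primary.Create(restaurantId, reservation); }
        catch (Exception) { await next.Create(restaurantId, reservation); }
    }
 
    public async Task<Reservation?> ReadReservation(int restaurantId, Guid id)
    {
        try { return await primary.ReadReservation(restaurantId, id); }
        catch (Exception) { return await next.ReadReservation(restaurantId, id); }
    }
}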

All of these can be tested independently of any Humble Object.

Conclusion #

In a recent article I reflected on my experience with TDD and speculated that a side effect of that process is code flexible enough to work on most machines. Thus, I've never encountered the need for containers.

Readers responded with comments that struck me as mostly related to the upper levels of the Test Pyramid. In this article, I've attempted to address some of those concerns. I still get by without containers.


Is software getting worse?

Monday, 24 July 2023 06:02:00 UTC

A rant, with some examples.

I've been a software user for thirty years.

My first PC was DOS-based. In my first job, I used OS/2, in the next, Windows 3.11, NT, and later incarnations of Windows.

I wrote my first web sites in Arachnophilia, and my first professional software in Visual Basic, Visual C++, and Visual InterDev.

I used Terminate with my first modem. If I recall correctly, it had a built-in email downloader and offline reader. Later, I switched to Outlook for email. I've used Netscape Navigator, Internet Explorer, Firefox, and Chrome to surf the web.

I've written theses, articles, reports, etc. in WordPerfect for DOS and MS Word for Windows. I wrote my new book Code That Fits in Your Head in TexStudio. Yes, it was written entirely in LaTeX.

Updates #

For the first fifteen years, new software versions were rare. You'd get a version of AutoCAD, Windows, or Visual C++, and you'd use it for years. After a few years, a new version would come out, and that would be a big deal.

Interim service releases were rare, too, since there was no network-based delivery mechanism. Software came on floppy disks, and later on CDs.

Even if a bug fix was easy to make, it was difficult for a software vendor to distribute it, so most software releases were well-tested. Granted, software had bugs back then, and some of them you learned to work around.

When a new version came out, the same forces were at work. The new version had to be as solid and stable as the previous one. Again, I grant that once in a while, even in those days, this wasn't always the case, but a bad release could spell the demise of a company, because release cycles were so long that competitors could take advantage of the misstep.

Usually, however, software updates were improvements, and you looked forward to them.

Decay #

I no longer look forward to updates. These days, software is delivered over the internet, and some applications update automatically.

From a security perspective it can be a good idea to stay up-to-date, and for years, I diligently did that. Lately, however, I've become more conservative. Particularly when it comes to Windows, I ignore all suggestions to update it until it literally forces the update on me.

Just like even-numbered Star Trek movies don't suck, a similar pattern seems to hold for Windows: Windows XP was good, Windows 7 was good, and Windows 10 wasn't bad either. I kept putting off Windows 11 for as long as possible, but now I use it, and I can't say that I'm surprised that I don't like it.

This article, however, isn't a rant about Windows in particular. This seems to be a general trend, and it's been noticeable for years.

Examples #

I think that the first time I noticed a particular application degrading was Vivino. It started out as a local company here in Copenhagen, and I was a fairly early adopter. Initially, it was great: If you like wine, but don't know that much about it, you could photograph a bottle's label, and it'd automatically recognise the wine and register it in your 'wine library'. I found it useful that I could look up my notes about a wine I'd had a year ago to remind me what I thought of it. As time went on, however, I started to notice errors in my wine library. It might be double entries, or wines that were silently changed to another vintage, etc. Eventually it got so bad that I lost trust in the application and uninstalled it.

Another example is Sublime Text, which I used for writing articles for this blog. I even bought a licence for it. Version 3 was great, but version 4 was weird from the outset. One thing was that they changed how they indicated which parts of a file I'd edited after opening it, and I never understood the idea behind the visuals. Worse was that auto-closing of HTML stopped working. Since I'm that weird dude who writes raw HTML, such a feature is quite important to me. If I write an HTML tag, I expect the editor to automatically add the closing tag, and place my cursor between the two. Sublime Text stopped doing that consistently, and eventually it became annoying enough that I thought: Why bother? Now I write in Visual Studio Code.

Microsoft is almost a chapter in itself, but to be fair, I don't consider Microsoft products worse than others. There's just so much of it, and since I've always been working in the Microsoft tech stack, I use a lot of it. Thus, selection bias clearly is at work here. Still, while I don't think Microsoft is worse than the competition, it seems to be part of the trend.

For years, my login screen was stuck on the same mountain lake, even though I tried every remedy suggested on the internet. Eventually, however, a new version of Windows fixed the issue. So, granted, sometimes new versions improve things.

Now, however, I have another problem with Windows Spotlight. It shows nice pictures, and there used to be an option to see where a picture was taken. Since I repaved my machine, this option is gone. Again, I've scoured the internet for resolutions to this problem, but so far neither rebooting nor registry edits have solved it.

Those may sound like small problems, so let's consider something more serious. Half a year ago, Outlook was able to detect whether I was writing an email in English or Danish. It could even handle the hybrid scenario where parts of an email were in English, and parts in Danish. Since I repaved my machine, this feature no longer works. Outlook doesn't recognise Danish when I write it. One thing is the red squiggly lines under most words, but that's not even the worst part. The worst part is that even though I'm writing in Danish, Outlook thinks I'm writing in English, so it silently auto-corrects Danish words to whatever looks closest in English.

Screen shot of Outlook language bug.

This became so annoying that I contacted Microsoft support about it, but while they had me try a number of things, nothing worked. They eventually had to give up and suggested that I reinstall my machine - which, at that point, I'd already done two weeks earlier.

This used to work, but now it doesn't.

It's not all bad #

I could go on with other examples, but I think that this suffices. After all, I don't think it makes for a compelling read.

Of course, not everything is bad. While it looks as though I'm particularly harping on Microsoft, I rarely detect problems with Visual Studio or Code, and I usually install updates as soon as they are available. The same is true for much other software I use. Paint.NET is awesome, MusicBee is solid, and even the Sonos Windows app, while horrific, is at least consistently so.

Conclusion #

It seems to me that some software is actually getting worse, and that this is a more recent trend.

The point isn't that some software is bad. This has always been the case. What seems new to me is that software that used to be good deteriorates. While this wasn't entirely unheard of in the nineties (I'm looking at you, WordPerfect), this is becoming much more noticeable.

Perhaps it's just frequency illusion, or perhaps it's because I use software much more than I did in the nineties. Still, I can't shake the feeling that some software is deteriorating.

Why does this happen? I don't know, but my own bias suggests that it's because there's less focus on regression testing. Many of the problems I see look like regression bugs to me. A good engineering team could have caught them with automated regression tests, but these days, it seems as though many teams rely on releasing often and then letting users do the testing.

The problem with that approach, however, is that if you don't have good automated tests, fixing one regression may resurrect another.


Comments

I don't think your perception that software is getting worse is wrong, Mark. I've been an Evernote user since 2011. And, for a good portion of those years, I've been a paid customer.

I'm certainly not alone in my perception that the application is becoming worse with each new release, to the point of, at times, becoming unusable. Syncing problems with the mobile version, weird changes in the UI that don't accomplish anything, general sluggishness, and, above all, not listening to the users regarding long-needed features.

2023-07-29 18:05 UTC

Works on most machines

Monday, 17 July 2023 08:01:00 UTC

TDD encourages deployment flexibility. Functional programming also helps.

Recently several of the podcasts I subscribe to have had episodes about various container technologies, of which Kubernetes dominates. I tune out of such content, since it has nothing to do with me.

I've never found containerisation relevant. I remember being fascinated when I first heard of Docker, and for a while, I awaited a reason to use it. It never materialised.

I'd test-drive whatever system I was working on, and deploy it to production. Usually, it'd just work.

Since my process already produced good results, why make it more complicated?

Occasionally, I would become briefly aware of the lack of containers in my life, but then I'd forget about it again. Until now, I haven't thought much about it, and it's probably only the random coincidence of a few podcast episodes back-to-back that made me think more about it.

Be liberal with what system you run on #

When I was a beginner programmer a few decades ago, things were different. I'd write code that worked on my machine, but not always on the test server.

As I gained experience, this tended to happen less often. This doubtless has multiple causes, and increased experience is likely one of them, but I also think that my interest in loose coupling and test-driven development plays a role.

Increasingly I developed an ethos of writing software that would work on most machines, instead of only my own. It seems reminiscent of Postel's law: Be liberal with what system you run on.

Test-driven development helps in that regard, because you write code that must be able to execute in at least two contexts: The test context, and the actual system context. These two contexts both exist on your machine.

A colleague once taught me: The most difficult generalisation step is going from one to two. Once you've generalised to two cases, it's much easier to generalise to three, four, or n cases.

It seems to me that such from-one-to-two-cases generalisation is an inadvertent by-product of test-driven development. Once your code already matches two different contexts, making it even more flexible isn't that much extra work. It's not even speculative generality because you also need to make it work on the production system and (one hopes) on a build server or continuous delivery pipeline. That's 3-4 contexts. Odds are that software that runs successfully in four separate contexts runs successfully on many more systems.

General-purpose modules #

In A Philosophy of Software Design John Ousterhout argues that one should aim for designing general-purpose objects or modules, rather than specialised APIs. He calls them deep modules and their counterparts shallow modules. On the surface, this seems to go against the grain of YAGNI, but the way I understand the book, the point is rather that general-purpose solutions also solve special cases, and, when done right, the code doesn't have to be more complicated than the one that handles the special case.

As I write in my review of the book, I think that there's a connection with test-driven development. General-purpose code is code that works in more than one situation, including automated testing environments. This is almost tautological. If it doesn't work in an automated test, an argument could be made that it's insufficiently general.

Likewise, general-purpose software should be able to work when deployed to more than one machine. It should even work on machines where other versions of that software already exist.

When you have general-purpose software, though, do you really need containers?

Isolation #

While I've routinely made use of test-driven development since 2003, I started my shift towards functional programming around ten years later. I think that this has amplified my code's flexibility.

As Jessica Kerr pointed out years ago, a corollary of referential transparency is that pure functions are isolated from their environment. Only input arguments affect the output of a pure function.

Ultimately, you may need to query the environment about various things, but in functional programming, querying the environment is impure, so you push it to the boundary of the system. Functional programming encourages you to explicitly consider and separate impure actions from pure functions. This implies that the environment-specific code is small, cohesive, and easy to review.
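As a minimal, self-contained illustration (with made-up names), the pure decision sits in the middle, while the impure reads and writes stay at the edges:

using System;
 
public static class Greeter
{
    // Pure: the output depends only on the input argument.
    public static string Greeting(DateTime now) =>
        now.Hour < 12 ? "Good morning" : "Good afternoon";
}
 
public static class Program
{
    public static void Main()
    {
        var now = DateTime.Now;               // Impure: queries the environment.
        var message = Greeter.Greeting(now);  // Pure: trivial to test in isolation.
        Console.WriteLine(message);           // Impure: writes to the environment.
    }
}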

Conclusion #

For a while, when Docker was new, I expected it to be a technology that I'd eventually pick up and make part of my tool belt. As the years went by, that never happened. As a programmer, I've never had the need.

I think that a major contributor to that is that since I mostly develop software with test-driven development, the resulting software is already robust or flexible enough to run in multiple environments. Adding functional programming to the mix helps to achieve isolation from the run-time environment.

All of this seems to combine to enable code to work not just on my machine, but on most machines. Including containers.

Perhaps there are other reasons to use containers and Kubernetes. In a devops context, I could imagine that it makes deployment and operations easier. I don't know much about that, but I also don't mind. If someone wants to take the code I've written and run it in a container, that's fine. It's going to run there too.


Comments

qfilip #

Commenting for the first time. I hope I made these changes in proper manner. Anyway...

Kubernetes usually also means the usage of cloud infrastructure, and as such, it can be automated (and change-tracked) in various interesting ways. Is it worth it? Well, that depends as always... Docker isn't the only container technology supported by k8s, but since it's the most popular one... they go hand in hand.

Docker is also very useful for enabling others to run your software on their machines. Recently, we've been exploring some apps that consisted of ~4 services (web servers) and a database. All of them written in different technologies (PHP, Java, C#). You don't have to setup environment variables. You don't need to have relevant SDKs to build projects etc. Just run docker command, and spin them instantly on your PC.

So there's that...

Unrelated to the topic above, I'd like to ask you, if you could write an article on the specific subject. Or, if the answer is short, comment me back. As an F# enthusiast, I find yours and Scott's blog very valuable. One thing I've failed to find here is why you don't like ORMs. I think the words were they solve a problem that we shouldn't have in the first place. Since F# doesn't play too well with Entity Framework, and I pretty much can't live without it... I'm curious if I'm missing something. A different approach, way of thinking. I can work with raw SQL ofcourse... but the mapping... oh the mapping...

2023-07-18 22:30 UTC

I'm contemplating turning my response into a new article, but it may take some time before I get to it. I'll post here once I have a more thorough response.

2023-07-23 13:56 UTC

qfilip, thank you for writing. I've now published the article that, among many other things, respond to your comment about containers.

I'll get back to your question about ORMs as soon as possible.

2023-07-31 07:01 UTC

I'm still considering how to best address the question about ORMs, but in the meanwhile, I'd like to point interested readers to Ted Neward's famous article The Vietnam of Computer Science.

2023-08-14 20:01 UTC

Finally, I'm happy to announce that I've written an article trying to explain my position: Do ORMs reduce the need for mapping?.

2023-09-18 14:50 UTC
