Statically and dynamically typed scripts

Monday, 05 February 2024 07:53:00 UTC

Extracting and analysing data in Haskell and Python.

I was recently following a course in mathematical analysis and probability for computer scientists. One assignment asked us to analyse a small CSV file with data collected in a student survey. The course contained a mix of pure maths and practical application, and the official programming language to be used was Python. It was understood that one was to do the work in Python, but it wasn't an explicit requirement, and I was so tired that I didn't have the energy for it.

I can get by in Python, but it's not a language I'm actually comfortable with. For small experiments, ad-hoc scripting, etc. I reach for Haskell, so that's what I did.

This was a few months ago, and I've since followed another course that required more intense use of Python. With a few more months of Python programming under my belt, I decided to revisit that old problem and do it in Python with the explicit purpose of comparing and contrasting the two.

Static or dynamic types for scripting #

I'd like to make one point with these articles, and that is that dynamically typed languages aren't inherently better suited for scripting than statically typed languages. From this, it does not, however, follow that statically typed languages are better, either. Rather, I increasingly believe that whether you find one or the other more productive is a question of personality, past experiences, programming background, etc. I've been over this ground before. Many of my heroes seem to favour dynamically typed languages, while I keep returning to statically typed languages.

For more than a decade I've preferred F# or Haskell for ad-hoc scripting. Note that while these languages are statically typed, they are low on ceremony. Types are inferred rather than declared. This means that for scripts, you can experiment with small code blocks, iteratively move closer to what you need, just as you would with a language like Python. Change a line of code, and the inferred type changes with it; there are no type declarations that you also need to fix.

When I talk about writing scripts in statically typed languages, I have such languages in mind. I wouldn't write a script in C#, C, or Java.

"Let me stop you right there: I don't think there is a real dynamic typing versus static typing debate.

"What such debates normally are is language X vs language Y debates (where X happens to be dynamic and Y happens to be static)."

The present articles compare Haskell and Python, so be careful that you don't extrapolate and draw any conclusions about, say, C++ versus Erlang.

When writing an ad-hoc script to extract data from a file, it's important to be able to experiment and iterate. Load the file, inspect the data, figure out how to extract subsets of it (particular columns, for example), calculate totals, averages, etc. A REPL is indispensable in such situations. The Haskell REPL (called Glasgow Haskell Compiler interactive, or just GHCi) is the best one I've encountered.
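
To illustrate the kind of iteration I have in mind, here's roughly what the start of such a GHCi session might look like. The file name and the output are made up for the occasion; the rows echoed back are just the first two rows of the sample quoted below.

ghci> rows <- lines <$> readFile "survey.csv"
ghci> take 2 rows
["No,3,2,6,6","No,4,2,3,7"]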

I imagine that a Python expert would start by reading the data to slice and dice it various ways. We may label this a data-first approach, but be careful not to read too much into this, as I don't really know what I'm talking about. That's not how my mind works. Instead, I tend to take a types-first approach. I'll look at the data and start with the types.

The assignment #

The actual task is the following. At the beginning of the course, the professors asked students to fill out a survey. Among the questions asked was which grade the student expected to receive, and how much experience with programming he or she already had.

Grades are given according to the Danish academic scale: -3, 00, 02, 4, 7, 10, and 12, and experience level on a simple numeric scale from 1 to 7, with 1 indicating no experience and 7 indicating expert-level experience.

Here's a small sample of the data:

No,3,2,6,6
No,4,2,3,7
No,1,12,6,2
No,4,10,4,3
No,3,4,4,6

The expected grade is in the third column (i.e. 2, 2, 12, 10, 4) and the experience level is in the fourth column (6, 3, 6, 4, 4). The other columns are answers to different survey questions. The full data set contains 38 rows.

The assignment poses the following questions: Two rows from the survey data are randomly selected. What is the probability mass function (PMF) of the sum of their expected grades, and what is the PMF of the absolute difference between their programming experience levels?

In both cases I was also asked to plot the PMFs.
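
To make the first question more concrete, here's a minimal Haskell sketch of how one might compute such a PMF. It is not the script discussed in these articles; it only uses the five expected grades from the sample above, and it assumes that the two rows are picked without replacement.

import qualified Data.Map.Strict as Map

-- Expected grades from the five sample rows shown above.
sampleGrades :: [Int]
sampleGrades = [2, 2, 12, 10, 4]

-- PMF of the sum of the expected grades of two distinct, randomly selected rows.
-- Assumes at least two rows.
pmfOfSums :: [Int] -> Map.Map Int Rational
pmfOfSums xs =
  let sums   = [x + y | (i, x) <- zip [0 :: Int ..] xs
                      , (j, y) <- zip [0 ..] xs
                      , i < j]
      total  = toRational (length sums)
      counts = Map.fromListWith (+) [(s, 1 :: Integer) | s <- sums]
  in fmap (\c -> toRational c / total) counts

Evaluating pmfOfSums sampleGrades in GHCi lists each possible sum together with its probability; the PMF of the absolute difference between experience levels follows the same pattern, with abs (x - y) in place of x + y.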

Comparisons #

As outlined above, I originally wrote a Haskell script to answer the questions, and only months later returned to the problem to give it a go in Python. When reading my detailed walkthroughs, keep in mind that I have 8-9 years of Haskell experience, and that I tend to 'think in Haskell', while I have only about a year of experience with Python. I don't consider myself proficient with Python, so the competition is rigged from the outset.

For this small task, I don't think that there's a clear winner. I still like my Haskell code the best, but I'm sure someone better at Python could write a much cleaner script. I also have to admit that Matplotlib makes it a breeze to produce nice-looking plots with Python, whereas I don't even know where to start with that with Haskell.

Recently I've done some more advanced data analysis with Python, such as random forest classification, principal component analysis, KNN-classification, etc. While I understand that I'm only scratching the surface of data science and machine learning, it's obvious that there's a rich Python ecosystem for that kind of work.

Conclusion #

This lays the foundations for comparing a small Haskell script with an equivalent Python script. There's no scientific method to the comparison; it's just me doing the same exercise twice, a bit like I'd do katas with multiple variations in order to learn.

While I still like Haskell better than Python, that's only a personal preference. I'm deliberately not declaring a winner.

One point I'd like to make, however, is that there's nothing inherently better about a dynamically typed language when it comes to ad-hoc scripting. Languages with strong type inference work well, too.

Next: Extracting data from a small CSV file with Haskell.


Error categories and category errors

Monday, 29 January 2024 16:05:00 UTC

How I currently think about errors in programming.

A reader recently asked a question that caused me to reflect on the way I think about errors in software. While my approach to error handling has remained largely the same for years, I don't think I've described it in an organized way. I'll try to present those thoughts here.

This article is, for lack of a better term, a think piece. I don't pretend that it represents any fundamental truth, or that this is the only way to tackle problems. Rather, I write this article for two reasons.

  • Writing things down often helps clarify your thoughts. While I already feel that my thinking on the topic of error handling is fairly clear, I've written enough articles to know that by writing this one, I'll learn something new.
  • Publishing this article enables the exchange of ideas. By sharing my thoughts, I enable readers to point out errors in my thinking, or to improve on my work. Again, I may learn something. Perhaps others will, too.

Although I don't claim that the following is universal, I've found it useful for years.

Error categories #

Almost all software is at risk of failing for a myriad of reasons: User input, malformed data, network partitions, cosmic rays, race conditions, bugs, etc. Even so, we may categorize errors like this:

  • Predictable errors we can handle
  • Predictable errors we can't handle
  • Errors we've failed to predict

This distinction is hardly original. I believe I've picked it up from Michael Feathers, but although I've searched, I can't find the source, so perhaps I'm remembering it wrong.

You may find these three error categories underwhelming, but I find it useful to first consider what may be done about an error. Plenty of error situations are predictable. For example, all input should be considered suspect. This includes user input, but also data you receive from other systems. This kind of potential error you can typically solve with input validation, which I believe is a solved problem. Another predictable kind of error is unavailable services. Many systems store data in databases. You can easily predict that the database will, sooner or later, be unreachable. Potential causes include network partitions, a misconfigured connection string, logs running full, a crashed server, denial-of-service attacks, etc.

With some experience in software development, it's not that hard to produce a list of things that could go wrong. The next step is to decide what to do about them.

There are scenarios that are so likely to happen, and where the solution is so well-known, that they fall into the category of predictable errors that you can handle. User input belongs here. You examine the input and inform the user if it's invalid.

Even with input, however, other scenarios may lead you down different paths. What if, instead of a system with a user interface, you're developing a batch job that receives a big data file every night? How do you deal with invalid input in that scenario? Do you reject the entire data set, or do you filter it so that you only handle the valid input? Do you raise a notification to asynchronously inform the sender that input was malformed?

Notice how categorization is context-dependent. It would be a (category?) error to interpret the above model as fixed and universal. Rather, it's an analysis framework that helps you identify how to categorize various fault scenarios in a particular application context.

Another example may be in order. If your system depends on a database, a predictable error is that the database will be unavailable. Can you handle that situation?

A common reaction is that there's really not a lot one can do about that. You may retry the operation, log the problem, or notify an on-call engineer, but ultimately the system depends on the database. If the database is unreachable, the system can't work. You can't handle that problem, so this falls in the category of predictable errors that you can't handle.

Or does it?

Trade-offs of error handling #

The example of an unreachable database is useful to explore in order to demonstrate that error handling isn't writ in stone, but rather an architectural design decision. Consider a common API design like this:

public interface IRepository<T>
{
    int Create(T item);
 
    // other members
}

What happens if client code calls Create but the database is unreachable? This is C# code, but the problem generalizes. With most implementations, the Create method will throw an exception.

Can you handle that situation? You may retry a couple of times, but if you have a user waiting for a response, you can't retry for too long. Once time is up, you'll have to accept that the operation failed. In a language like C#, the most robust implementation is to not handle the specific exception, but instead let it bubble up to be handled by a global exception handler that usually can't do much else than showing the user a generic error message, and then log the exception.

This isn't your only option, though. You may find yourself in a context where this kind of attitude towards errors is unacceptable. If you're working with BLOBAs it's probably fine, but if you're working with medical life-support systems, or deep-space probes, or in other high-value contexts, the overall error-tolerance may be lower. Then what do you do?

You may try to address the concern with IT operations: configuring failover systems for the database, installing two network cards in every machine, and so on. This may (also) be a way to address the problem, but it isn't your only option. You may also consider changing the software architecture.

One option may be to switch to an asynchronous message-based system where messages are transmitted via durable queues. Granted, durable queues may fail as well (everything may fail), but when done right, they tend to be more robust. Even a machine that has lost all network connectivity may queue messages on its local disk until the network returns. Yes, the disk may run full, etc., but that's less likely to happen than a network partition or an unreachable database.

Notice that an unreachable database now goes into the category of errors that you've predicted, and that you can handle. On the other hand, failing to send an asynchronous message is now a new kind of error in your system: One that you can predict, but can't handle.

Making this change, however, impacts your software architecture. You can no longer have an interface method like the above Create method, because you can't rely on it returning an int in reasonable time. During error scenarios, messages may sit in queues for hours, if not days, so you can't block on such code.

As I've explained elsewhere you can instead model a Create method like this:

public interface IRepository<T>
{
    void Create(Guid id, T item);
 
    // other members
}

Not only does this follow the Command Query Separation principle, it also makes it easier for you to adopt an asynchronous message-based architecture. Done consistently, however, this requires that you approach application design in a way different from a design where you assume that the database is reachable.

It may even impact a user interface, because it'd be a good idea to design user experience in such a way that it helps the user have a congruent mental model of how the system works. This may include making the concept of an outbox explicit in the user interface, as it may help users realize that writes happen asynchronously. Most users understand that email works that way, so it's not inconceivable that they may be able to adopt a similar mental model of other applications.

The point is that this is an option that you may consider as an architect. Should you always design systems that way? I wouldn't. There's much extra complexity that you have to deal with in order to make asynchronous messaging work: UX, out-of-order messages, dead-letter queues, message versioning, etc. Getting to five nines is expensive, and often not warranted.

The point is rather that what goes in the predictable errors we can't handle category isn't fixed, but context-dependent. Perhaps we should rather name the category predictable errors we've decided not to handle.

Bugs #

How about the third category of errors, those we've failed to predict? We also call these bugs or defects. By definition, we only learn about them when they manifest. As soon as they become apparent, however, they fall into one of the other categories. If an error occurs once, it may occur again. It is now up to you to decide what to do about it.

I usually consider errors as stop-the-line issues, so I'd be inclined to address them immediately. On the other hand, if you don't do that, you've implicitly decided to put them in the category of predictable errors that you've decided not to handle.

We don't intentionally write bugs, but there will always be some of them around. On the other hand, various practices help reduce them: Test-driven development, code reviews, property-based testing, but also up-front design.

Error-free code #

Do consider explicitly how code may fail.

Despite the title of this section, there's no such thing as error-free code. Still, you can explicitly think about edge cases. For example, how might the following function fail?

public static TimeSpan Average(this IEnumerable<TimeSpan> timeSpans)
{
    var sum = TimeSpan.Zero;
    var count = 0;
    foreach (var ts in timeSpans)
    {
        sum += ts;
        count++;
    }
    return sum / count;
}

In at least two ways: The input collection may be empty or infinite. I've already suggested a few ways to address those problems. Some of them are similar to what Michael Feathers calls unconditional code, in that we may change the domain. Another option, that I didn't cover in the linked article, is to expand the codomain:

public static TimeSpan? Average(this IReadOnlyCollection<TimeSpan> timeSpans)
{
    if (!timeSpans.Any())
        return null;
 
    var sum = TimeSpan.Zero;
    foreach (var ts in timeSpans)
        sum += ts;
    return sum / timeSpans.Count;
}

Now, instead of diminishing the domain, we expand the codomain by allowing the return value to be null. (Interestingly, this is the inverse of my profunctor description of the Liskov Substitution Principle. I don't yet know what to make of that. See: Just by writing things down, I learn something I hadn't realized before.)

This is beneficial in a statically typed language, because such a change makes hidden knowledge explicit. It makes it so explicit that a type checker can point out when we make mistakes. Make illegal states unrepresentable. Poka-yoke. A potential run-time exception is now a compile-time error, and it's firmly in the category of errors that we've predicted and decided to handle.

In the above example, we could use the built-in .NET Nullable<T> (with the ? syntactic-sugar alias). In other cases, you may resort to returning a Maybe (AKA option).
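
For comparison, here's a minimal sketch of the same codomain-expanding move in Haskell, where Maybe plays the role that the nullable TimeSpan plays above. The Fractional constraint is just a stand-in for a duration type; this isn't code from any of the linked articles.

-- Expanding the codomain: the empty-input case is now visible in the type,
-- and the compiler obliges callers to deal with it.
average :: Fractional a => [a] -> Maybe a
average [] = Nothing
average xs = Just (sum xs / fromIntegral (length xs))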

Modelling errors #

Explicitly expanding the codomain of functions to signal potential errors is beneficial if you expect the caller to be able to handle the problem. If callers can't handle an error, forcing them to deal with it is just going to make things more difficult. I've never done any professional Java programming, but I've heard plenty of Java developers complain about checked exceptions. As far as I can tell, the problem in Java isn't so much with the language feature per se, but rather with the exception types that APIs force you to handle.

As an example, imagine that every time you call a database API, the compiler forces you to handle an IOException. Unless you explicitly architect around it (as outlined above), this is likely to be one of the errors you can predict, but decide not to handle. But if the compiler forces you to handle it, then what do you do? You probably find some workaround that involves re-throwing the exception, or, as I understand that some Java developers do, declare that their own APIs may throw any exception, and by that means just pass the buck. Not helpful.

As far as I can tell, (checked) exceptions are equivalent to the Either container, also known as Result. We may imagine that instead of throwing exceptions, a function may return an Either value: Right for a right result (explicit mnemonic, there!), and Left for an error.

It might be tempting to model all error-producing operations as Either-returning, but you're often better off using exceptions. Throw exceptions in those situations that you expect most clients can't recover from. Return left (or error) cases in those situations that you expect that a typical client would want to handle.
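
As a sketch of what that guidance can look like in practice, here's a hypothetical Haskell validation function. The error type and its cases are invented for the example; the point is the shape: the Left case is an ordinary value that the caller can branch on.

-- A hypothetical validation error; typical callers are expected to handle it,
-- so it's modelled as data rather than as an exception.
data QuantityError = TooSmall Int | TooLarge Int deriving (Eq, Show)

validateQuantity :: Int -> Either QuantityError Int
validateQuantity q
  | q < 1     = Left (TooSmall q)
  | q > 12    = Left (TooLarge q)
  | otherwise = Right q

-- Handling the error is just another branch in the code.
describe :: Int -> String
describe q = case validateQuantity q of
  Left  e -> "Invalid quantity: " ++ show e
  Right n -> "Quantity: " ++ show n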

Again, it's context-specific, so if you're developing a reusable library, there's a balance to strike in API design (or overloads to supply).

Most errors are just branches #

In many languages, errors are somehow special. Most modern languages include a facility to model errors as exceptions, and special syntax to throw or catch them. (The odd man out may be C, with its reliance on error codes as return values, but that is incredibly awkward for other reasons. You may also reasonably argue that C is hardly a modern language.)

Even Haskell has exceptions, even though it also has deep language support for Maybe and Either. Fortunately, Haskell APIs tend to only throw exceptions in those cases where average clients are unlikely to handle them: Timeouts, I/O failures, and so on.

It's unfortunate that languages treat errors as something exceptional, because this nudges us to make a proper category error: That errors are somehow special, and that we can't use normal coding constructs or API design practices to model them.

But you can. That's what Michael Feathers' presentation is about, and that's what you can do by making illegal states unrepresentable, or by returning Maybe or Either values.

Most errors are just branches in your code: places where it diverges from the happy path in order to do something else.

Conclusion #

This article presents a framework for thinking about software errors. There are those you can predict may happen, and you choose to handle; those you predict may happen, but you choose to ignore; and those that you have not yet predicted: bugs.

A little up-front thinking will often help you predict some errors, but I'm not advocating that you foresee all errors. Some errors are programmer errors, and we make those errors because we're human, exactly because we're failing to predict the behaviour of a particular state of the code. Once you discover a bug, however, you have a choice: Do you address it or ignore it?

There are error conditions that you may deliberately choose to ignore. This doesn't necessarily make you an irresponsible programmer, but may rather be the result of a deliberate feasibility study. For example, every network operation may fail. How important is it that your application can keep running without the network? Is it worthwhile to make the code so robust that it can handle that situation? Or can you rather live with a few hours of downtime per quarter? If the latter, it may be best to let a human deal with network partitions when they occur.

The three error categories I suggest here are context-dependent. You decide which problems to deal with, and which ones to ignore, but apart from that, error-handling doesn't have to be difficult.


A Range kata implementation in C#

Monday, 22 January 2024 07:05:00 UTC

A port of the corresponding F# code.

This article is an instalment in a short series of articles on the Range kata. In the previous article I made a pass at the kata in F#, using property-based testing with Hedgehog to generate test data.

In the conclusion I mused about the properties I was able to come up with. Is it possible to describe open, closed, and mixed ranges in a way that's less coupled to the implementation? To be honest, I still don't have an answer to that question. Instead, in this article, I describe a straight port of the F# code to C#. There's value in that, too, for people who wonder how to reap the benefits of F# in C#.

The code is available on GitHub.

First property #

Both F# and C# are .NET languages. They run in the same substrate, and are interoperable. While Hedgehog is written in F#, it's possible to consume F# libraries from C#, and vice versa. I've done this multiple times with FsCheck, but I admit to never having tried it with Hedgehog.

If you want to try property-based testing in C#, a third alternative is available: CsCheck. It's written in C# and is more idiomatic in that context. While I sometimes still use FsCheck from C#, I often choose CsCheck for didactic reasons.

The first property I wrote was a direct port of the idea of the first property I wrote in F#:

[Fact]
public void ClosedRangeContainsList()
{
    (from xs in Gen.Short.Enumerable.Nonempty
     let min = xs.Min()
     let max = xs.Max()
     select (xs, min, max))
    .Sample(t =>
    {
        var sut = new Range<short>(
            new ClosedEndpoint<short>(t.min),
            new ClosedEndpoint<short>(t.max));
 
        var actual = sut.Contains(t.xs);
 
        Assert.True(actual, $"Expected {t.xs} to be contained in {sut}.");
    });
}

This test (or property, if you will) uses a technique that I often use with property-based testing. I'm still searching for a catchy name for this, but here we may call it something like reverse test-case assembly. My goal is to test a predicate, and this particular property should verify that for a given Equivalence Class, the predicate is always true.

While we may think of an Equivalence Class as a set from which we pick test cases, I don't actually have a full enumeration of such a set. I can't have that, since that set is infinitely big. Instead of randomly picking values from a set that I can't fully populate, I instead carefully pick test case values in such a way that they would all belong to the same set partition (Equivalence Class).

The test name suggests the test case: I'd like to verify that given I have a closed range, when I ask it whether a list within that range is contained, then the answer is true. How do I pick such a test case?

I do it in reverse. You can say that the sampling is the dual of the test. I start with a list (xs) and only then do I create a range that contains it. Since the first test case is for a closed range, the min and max values are sufficient to define such a range.

How do I pass that property?

Degenerately, as is often the case with TDD beginnings:

public bool Contains(IEnumerable<T> candidates)
{
    return true;
}

Even though the ClosedRangeContainsList property effectively executes a hundred test cases, the Devil's Advocate can easily ignore that and instead return hard-coded true.

Endpoint sum type #

I'm not going to bore you with the remaining properties. The repository is available on GitHub if you're interested in those details.

If you've programmed in F# for some time, you typically miss algebraic data types when forced to return to C#. A language like C# does have product types, but lacks native sum types. Even so, not all is lost. I've previously demonstrated that you can employ the Visitor pattern to encode a sum type. Another option is to use Church encoding, which I've decided to do here.

When choosing between Church encoding and the Visitor pattern, Visitor is more object-oriented (after all, it's an original GoF design pattern), but Church encoding has fewer moving parts. Since I was just doing an exercise, I went for the simpler implementation.

An Endpoint object should allow one of two cases: Open or Closed. To avoid primitive obsession I gave the class a private constructor:

public sealed class Endpoint<T>
{
    private readonly T value;
    private readonly bool isClosed;
 
    private Endpoint(T value, bool isClosed)
    {
        this.value = value;
        this.isClosed = isClosed;
    }

Since the constructor is private you need another way to create Endpoint objects. Two factory methods, defined on a non-generic companion Endpoint class, provide that affordance by delegating to corresponding static factories on Endpoint<T>:

public static Endpoint<T> Closed<T>(T value)
{
    return Endpoint<T>.Closed(value);
}
 
public static Endpoint<T> Open<T>(T value)
{
    return Endpoint<T>.Open(value);
}

The heart of the Church encoding is the Match method:

public TResult Match<TResult>(
    Func<T, TResult> whenClosed,
    Func<T, TResult> whenOpen)
{
    if (isClosed)
        return whenClosed(value);
    else
        return whenOpen(value);
}

Such an API is an example of poka-yoke because it obliges you to deal with both cases. The compiler will keep you honest: Did you remember to deal with both the open and the closed case? When calling the Match method, you must supply both arguments, or your code doesn't compile. Make illegal states unrepresentable.

Containment #

With the Endpoint class in place, you can implement a Range class.

public sealed class Range<T> where T : IComparable<T>

It made sense to me to constrain the T type argument to IComparable<T>, although it's possible that I could have deferred that constraint to the actual Contains method, like I did with my Haskell implementation.

A Range holds two Endpoint values:

public Range(Endpoint<T> min, Endpoint<T> max)
{
    this.min = min;
    this.max = max;
}

The Contains method makes use of the built-in All method, using a private helper function as the predicate:

private bool IsInRange(T candidate)
{
    return min.Match(
        whenClosed: l => max.Match(
            whenClosed: h => l.CompareTo(candidate) <= 0 && candidate.CompareTo(h) <= 0,
            whenOpen:   h => l.CompareTo(candidate) <= 0 && candidate.CompareTo(h) <  0),
        whenOpen: l => max.Match(
            whenClosed: h => l.CompareTo(candidate) <  0 && candidate.CompareTo(h) <= 0,
            whenOpen:   h => l.CompareTo(candidate) <  0 && candidate.CompareTo(h) <  0));
}

This implementation performs a nested Match to arrive at the appropriate answer. The code isn't as elegant or readable as its F# counterpart, but it comes with comparable compile-time safety. You can't forget a combination, because if you do, your code isn't going to compile.

Still, you can't deny that C# involves more ceremony.

Conclusion #

Once you know how, it's not that difficult to port a functional design from F# or Haskell to a language like C#. The resulting code tends to be more complicated, but to a large degree, it's possible to retain the type safety.

In this article you saw a sketch of how to make that transition, using the Range kata as an example. The resulting C# API is perfectly serviceable, as the test code demonstrates.

Now that we have covered the fundamentals of the Range kata we have learned enough about it to go beyond the exercise and examine some more abstract properties.

Next: Range as a functor.


A Range kata implementation in F#

Monday, 15 January 2024 07:20:00 UTC

This time with some property-based testing.

This article is an instalment in a short series of articles on the Range kata. In the previous article I described my first attempt at the kata, and also complained that I had to think of test cases myself. When I find it tedious coming up with new test cases, I usually start to wonder if it'd be easier to use property-based testing.

Thus, when I decided to revisit the kata, the variation that I was most interested in pursuing was to explore whether it would make sense to use property-based testing instead of a set of existing examples.

Since I also wanted to do the second attempt in F#, I had a choice between FsCheck and Hedgehog. Each has its strengths and weaknesses, but since I already know FsCheck so well, I decided to go with Hedgehog.

I also soon discovered that I had no interest in developing the full suite of capabilities implied by the kata. Instead, I decided to focus on just the data structure itself, as well as the contains function. As in the previous article, this function can also be used to cover the kata's ContainsRange feature.

Getting started #

There's no rule that you can't combine property-based testing with test-driven development (TDD). On the contrary, that's how I often do it. In this exercise, I first wrote this test:

[<Fact>]
let ``Closed range contains list`` () = Property.check <| property {
    let! xs = Gen.int16 (Range.linearBounded ()) |> Gen.list (Range.linear 1 99)
    let min = List.min xs
    let max = List.max xs
 
    let actual = (Closed min, Closed max) |> Range.contains xs
 
    Assert.True (actual, sprintf "Range [%i, %i] expected to contain list." min max) }

We have to be careful when reading and understanding this code: There are two Range modules in action here!

Hedgehog comes with a Range module that you must use to define how it samples values from domains. Examples of that here are Range.linearBounded and Range.linear.

On the other hand, I've defined my contains function in a Range module, too. As long as there's no ambiguity, the F# compiler doesn't have a problem with that. Since there's no contains function in the Hedgehog Range module, the F# compiler isn't confused.

We humans, on the other hand, might be confused, and had this been a code base that I had to maintain for years, I might seriously consider whether I should rename my own Range module to something else, like Interval, perhaps.

In any case, the first test (or property, if you will) uses a technique that I often use with property-based testing. I'm still searching for a catchy name for this, but here we may call it something like reverse test-case assembly. My goal is to test a predicate, and this particular property should verify that for a given Equivalence Class, the predicate is always true.

While we may think of an Equivalence Class as a set from which we pick test cases, I don't actually have a full enumeration of such a set. I can't have that, since that set is infinitely big. Instead of randomly picking values from a set that I can't fully populate, I instead carefully pick test case values in such a way that they would all belong to the same set partition (Equivalence Class).

The test name suggests the test case: I'd like to verify that given I have a closed range, when I ask it whether a list within that range is contained, then the answer is true. How do I pick such a test case?

I do it in reverse. You can say that the sampling is the dual of the test. I start with a list (xs) and only then do I create a range that contains it. Since the first test case is for a closed range, the min and max values are sufficient to define such a range.

How do I pass that property?

Degenerately, as is often the case with TDD beginnings:

module Range =
    let contains _ _ = true

Even though the Closed range contains list property effectively executes a hundred test cases, the Devil's Advocate can easily ignore that and instead return hard-coded true.

More properties are required to flesh out the behaviour of the function.

Open range #

While I do keep the transformation priority premise in mind when picking the next test (or, here, property), I'm rarely particularly analytic about it. Since the first property tests that a closed range barely contains a list of values from its minimum to its maximum, it seemed like a promising next step to consider the case where the range consisted of open endpoints. That was the second test I wrote, then:

[<Fact>]
let ``Open range doesn't contain endpoints`` () = Property.check <| property {
    let! min = Gen.int32 (Range.linearBounded ())
    let! max = Gen.int32 (Range.linearBounded ())
 
    let actual = (Open min, Open max) |> Range.contains [min; max]
 
    Assert.False (actual, sprintf "Range (%i, %i) expected not to contain list." min max) }

This property simply states that if you query the contains predicate about a list that only contains the endpoints of an open range, then the answer is false because the endpoints are Open.

One implementation that passes both tests is this one:

module Range =
    let contains _ endpoints =
        match endpoints with
        | Open _, Open _ -> false
        | _ -> true

This implementation is obviously still incorrect, but we have reason to believe that we're moving closer to something that will eventually work.

Tick-tock #

In the spirit of the transformation priority premise, I've often found that when test-driving a predicate, I seem to fall into a tick-tock pattern where I alternate between tests for a true return value, followed by a test for a false return value, or the other way around. This was also the case here. The previous test was for a false value, so the third test requires true to be returned:

[<Fact>]
let ``Open range contains list`` () = Property.check <| property {
    let! xs = Gen.int64 (Range.linearBounded ()) |> Gen.list (Range.linear 1 99)
    let min = List.min xs - 1L
    let max = List.max xs + 1L
 
    let actual = (Open min, Open max) |> Range.contains xs
 
    Assert.True (actual, sprintf "Range (%i, %i) expected to contain list." min max) }

This then led to this implementation of the contains function:

module Range =
    let contains ys endpoints =
        match endpoints with
        | Open x, Open z ->
            ys |> List.forall (fun y -> x < y && y < z)
        | _ -> true

Following up on the above true-demanding test, I added one that tested a false scenario:

[<Fact>]
let ``Open-closed range doesn't contain endpoints`` () = Property.check <| property {
    let! min = Gen.int16 (Range.linearBounded ())
    let! max = Gen.int16 (Range.linearBounded ())
 
    let actual = (Open min, Closed max) |> Range.contains [min; max]
 
    Assert.False (actual, sprintf "Range (%i, %i] expected not to contain list." min max) }

This again led to this implementation:

module Range =
    let contains ys endpoints =
        match endpoints with
        | Open x, Open z ->
            ys |> List.forall (fun y -> x < y && y < z)
        | Open x, Closed z -> false
        | _ -> true

I had to add four more tests before I felt confident that I had the right implementation. I'm not going to show them all here, but you can look at the repository on GitHub if you're interested in the interim steps.

Types and functionality #

So far I had treated a range as a pair (two-tuple), just as I had done with the code in my first attempt. I did, however, have a few other things planned for this code base, so I introduced a set of explicit types:

type Endpoint<'a> = Open of 'a | Closed of 'a
 
type Range<'a> = { LowerBound : Endpoint<'a>; UpperBound : Endpoint<'a> }

The Range record type is isomorphic to a pair of Endpoint values, so it's not strictly required, but does make things more explicit.

To support the new type, I added an ofEndpoints function, and finalized the implementation of contains:

module Range =
    let ofEndpoints (lowerBound, upperBound) =
        { LowerBound = lowerBound; UpperBound = upperBound }
 
    let contains ys r =
        match r.LowerBound, r.UpperBound with
        |   Open x,   Open z -> ys |> List.forall (fun y -> x  < y && y  < z)
        |   Open x, Closed z -> ys |> List.forall (fun y -> x  < y && y <= z)
        | Closed x,   Open z -> ys |> List.forall (fun y -> x <= y && y  < z)
        | Closed x, Closed z -> ys |> List.forall (fun y -> x <= y && y <= z)

As is so often the case in F#, pattern matching makes such functions a pleasure to implement.

Conclusion #

I was curious whether using property-based testing would make the development process of the Range kata simpler. While each property was simple, I still had to write eight of them before I felt I'd fully described the problem. This doesn't seem like much of an improvement over the example-driven approach I took the first time around. It seems to be a comparable amount of code, and on one hand a property is more abstract than an example, but on the other hand it usually also covers more ground. I feel more confident that this implementation works, because I know that it's being exercised more rigorously.

When I find myself writing a property per branch, so to speak, I always feel that I missed a better way to describe the problem. As an example, for years I would demonstrate how to test the FizzBuzz kata with property-based testing by dividing the problem into Equivalence Classes and then writing a property for each partition. Just as I've done here. This is usually possible, but smells of being too coupled to the implementation.

Sometimes, if you think about the problem long enough, you may be able to produce an alternative set of properties that describe the problem in a way that's entirely decoupled from the implementation. After years, I finally managed to do that with the FizzBuzz kata.

I didn't succeed doing that with the Range kata this time around, but maybe later.

Next: A Range kata implementation in C#.


A Range kata implementation in Haskell

Monday, 08 January 2024 07:06:00 UTC

A first crack at the exercise.

This article is an instalment in a short series of articles on the Range kata. Here I describe my first attempt at the exercise. As I usually advise people on doing katas, the first time you try your hand at a kata, use the language with which you're most comfortable. To be honest, I may be most habituated to C#, having programmed in it since 2002, but on the other hand, I currently 'think in Haskell', and am often frustrated with C#'s lack of structural equality, higher-order abstractions, and support for functional expressions.

Thus, I usually start with Haskell even though I always find myself struggling with the ecosystem. If you do, too, the source code is available on GitHub.

I took my own advice by setting out with the explicit intent to follow the Range kata description as closely as possible. This kata doesn't beat about the bush, but instead just dumps a set of test cases on you. It wasn't clear if this is the most useful set of tests, or whether the order in which they're presented is the one most conducive to a good experience of test-driven development, but there was only one way to find out.

I quickly learned, however, that the suggested test cases were insufficient to describe the behaviour in enough detail.

Containment #

I started by adding the first two test cases as inlined HUnit test lists:

"integer range contains" ~: do
  (r, candidate, expected) <-
    [
      ((Closed 2, Open 6), [2,4], True),
      ((Closed 2, Open 6), [-1,1,6,10], False)
    ]
  let actual = r `contains` candidate
  return $ expected ~=? actual

I wasn't particularly keen on going full Devil's Advocate on the exercise. I could, on the other hand, trivially pass both tests with this obviously degenerate implementation:

import Data.List
 
data Endpoint a = Open a | Closed a deriving (Eq, Show)
 
contains _ candidate = [2] `isPrefixOf` candidate

Reluctantly, I had to invent some additional test cases:

"integer range contains" ~: do
  (r, candidate, expected) <-
    [
      ((Closed   2 ,   Open  6),       [2,4],  True),
      ((Closed   2 ,   Open  6), [-1,1,6,10], False),
      ((Closed (-1), Closed 10), [-1,1,6,10],  True),
      ((Closed (-1),   Open 10), [-1,1,6,10], False),
      ((Closed (-1),   Open 10),  [-1,1,6,9],  True),
      ((  Open   2,  Closed  6),     [3,5,6],  True),
      ((  Open   2,    Open  6),       [2,5], False),
      ((  Open   2,    Open  6),          [],  True),
      ((Closed   2,  Closed  6),     [3,7,4], False)
    ]
  let actual = r `contains` candidate
  return $ expected ~=? actual

This was when I began to wonder whether it would have been easier to use property-based testing. That would entail, however, a departure from the kata's suggested test cases, so I decided to stick to the plan and then perhaps return to property-based testing when repeating the exercise.

Ultimately I implemented the contains function this way:

contains :: (Foldable t, Ord a) => (Endpoint a, Endpoint a) -> t a -> Bool
contains (lowerBound, upperBound) =
  let isHighEnough = case lowerBound of
        Closed x -> (x <=)
        Open   x -> (x <)
      isLowEnough = case upperBound of
        Closed y -> (<= y)
        Open   y ->  (< y)
      isContained x = isHighEnough x && isLowEnough x
  in all isContained

In some ways it seems a bit verbose to me, but I couldn't easily think of a simpler implementation.

One of the features I find so fascinating about Haskell is how general it enables me to be. While the tests use integers for concision, the contains function works with any Ord instance; not only Integer, but also Double, Word, Day, TimeOfDay, or some new type I can't even predict.
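
A quick GHCi session illustrates the point. The results shown here are worked out by hand from the implementation above, rather than copied from an actual session:

ghci> (Closed 2.5, Open 6.1) `contains` [2.5, 3.14, 6.0]
True
ghci> (Open 2.5, Open 6.1) `contains` [2.5, 3.14, 6.0]
False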

All points #

The next function suggested by the kata is a function to enumerate all points in a range. There's only a single test case, so again I added some more:

"getAllPoints" ~: do
  (r, expected) <-
    [
      ((Closed 2,   Open 6), [2..5]),
      ((Closed 4,   Open 8), [4..7]),
      ((Closed 2, Closed 6), [2..6]),
      ((Closed 4, Closed 8), [4..8]),
      ((  Open 2, Closed 6), [3..6]),
      ((  Open 4, Closed 8), [5..8]),
      ((  Open 2,   Open 6), [3..5]),
      ((  Open 4,   Open 8), [5..7])
    ]
  let actual = allPoints r
  return $ expected ~=? actual

Ultimately, after I'd implemented the next feature, I refactored the allPoints function to make use of it, and it became a simple one-liner:

allPoints :: (Enum a, Num a) => (Endpoint a, Endpoint a) -> [a]
allPoints = uncurry enumFromTo . endpoints

The allPoints function also enabled me to express the kata's ContainsRange test cases without introducing a new API:

"ContainsRange" ~: do
  (r, candidate, expected) <-
    [
      ((Closed 2,   Open  5), allPoints (Closed 7, Open   10), False),
      ((Closed 2,   Open  5), allPoints (Closed 3, Open   10), False),
      ((Closed 3,   Open  5), allPoints (Closed 2, Open   10), False),
      ((Closed 2,   Open 10), allPoints (Closed 3, Closed  5),  True),
      ((Closed 3, Closed  5), allPoints (Closed 3, Open    5),  True)
    ]
  let actual = r `contains` candidate
  return $ expected ~=? actual

As I've already mentioned, the above implementation of allPoints is based on the next feature, endpoints.

Endpoints #

The kata also suggests a function to return the two endpoints of a range, as well as some test cases to describe it. Once more, I had to add more test cases to adequately describe the desired functionality:

"endPoints" ~: do
  (r, expected) <-
    [
      ((Closed 2,   Open 6), (2, 5)),
      ((Closed 1,   Open 7), (1, 6)),
      ((Closed 2, Closed 6), (2, 6)),
      ((Closed 1, Closed 7), (1, 7)),
      ((  Open 2,   Open 6), (3, 5)),
      ((  Open 1,   Open 7), (2, 6)),
      ((  Open 2, Closed 6), (3, 6)),
      ((  Open 1, Closed 7), (2, 7))
    ]
  let actual = endpoints r
  return $ expected ~=? actual

The implementation is fairly trivial:

endpoints :: (Num a1, Num a2) => (Endpoint a2, Endpoint a1) -> (a2, a1)
endpoints (Closed x, Closed y) = (x  , y)
endpoints (Closed x,   Open y) = (x  , y-1)
endpoints (  Open x, Closed y) = (x+1, y)
endpoints (  Open x,   Open y) = (x+1, y-1)

One attractive quality of algebraic data types is that the 'algebra' of the type(s) tell you how many cases you need to pattern-match against. Since I'm treating a range as a pair of Endpoint values, and since each Endpoint can be one of two cases (Open or Closed), there's exactly 2 * 2 = 4 possible combinations (since a tuple is a product type).

That fits with the number of pattern-matches required to implement the function.

Overlapping ranges #

The final interesting feature is a predicate to determine whether one range overlaps another. As has become a refrain by now, I didn't find the suggested test cases sufficient to describe the desired behaviour, so I had to add a few more:

"overlapsRange" ~: do
  (r, candidate, expected) <-
    [
      ((Closed 2, Open  5), (Closed 7, Open 10), False),
      ((Closed 2, Open 10), (Closed 3, Open  5),  True),
      ((Closed 3, Open  5), (Closed 3, Open  5),  True),
      ((Closed 2, Open  5), (Closed 3, Open 10),  True),
      ((Closed 3, Open  5), (Closed 2, Open 10),  True),
      ((Closed 3, Open  5), (Closed 1, Open  3), False),
      ((Closed 3, Open  5), (Closed 5, Open  7), False)
    ]
  let actual = r `overlaps` candidate
  return $ expected ~=? actual

I'm not entirely happy with the implementation:

overlaps :: (Ord a1, Ord a2) =>
            (Endpoint a1, Endpoint a2) -> (Endpoint a2, Endpoint a1) -> Bool
overlaps (l1, h1) (l2, h2) =
  let less (Closed x) (Closed y) = x <= y
      less (Closed x)   (Open y) = x <  y
      less   (Open x) (Closed y) = x <  y
      less   (Open x)   (Open y) = x <  y
  in l1 `less` h2 && l2 `less` h1

Not that the code presented here is problematic in isolation, but if you compare it to the above contains function, there seems to be some repetition going on. It's not quite the same, but the code looks similar enough that it bothers me. I feel that some kind of abstraction is sitting there, right before my nose, mocking me because I can't see it. Still, the code isn't completely duplicated, and even if it was, I can always invoke the rule of three and let it remain as it is.

Which is ultimately what I did.

Equality #

The kata also suggests some test cases to verify that it's possible to compare two ranges for equality. Dutifully I added those test cases to the code base, even though I knew that they'd automatically pass.

"Equals" ~: do
  (x, y, expected) <-
    [
      ((Closed 3, Open  5), (Closed 3, Open  5),  True),
      ((Closed 2, Open 10), (Closed 3, Open  5), False),
      ((Closed 2, Open  5), (Closed 3, Open 10), False),
      ((Closed 3, Open  5), (Closed 2, Open 10), False)
    ]
  let actual = x == y
  return $ expected ~=? actual

In the beginning of this article, I called attention to C#'s regrettable lack of structural equality. Here's an example of what I mean. In Haskell, these tests automatically pass because Endpoint is an Eq instance (by declaration), and all pairs of Eq instances are themselves Eq instances. Simple, elegant, powerful.

Conclusion #

As a first pass at the (admittedly uncomplicated) Range kata, I tried to follow the 'plan' implied by the kata description's test cases. I quickly became frustrated with how incomplete they were. They were adequate in indicating to a human (me) what the desired behaviour should be, but insufficient to describe it satisfactorily.

I could, of course, have stuck with only those test cases, and instead of employing the Devil's Advocate technique (which I actively tried to avoid) made an honest effort to implement the functionality.

The thing is, however, that I don't trust myself. At its essence, the Range kata is all about edge cases, which are where most bugs tend to lurk. Thus, these are exactly the cases that should be covered by tests.

Having made enough 'dumb' programming mistakes during my career, I didn't trust myself to be able to write correct implementations without more test coverage than originally suggested. That's the reason I added more tests.

On the other hand, I more than once speculated whether property-based testing would make this work easier. I decided to pursue that idea during my second pass at the kata.

Next: A Range kata implementation in F#.


Comments

I’d have another test case for the Equality function: ((Open 2, Open 6), (Closed 3, Closed 5), True). While it is nice Haskell provides (automatic) structural equality, I don’t think we want to say that the (2, 6) range (on integers!) is something else than the [3, 5] range.

But yes, this opens a can of worms: While (2, 6) = [3, 5] on integers, (2.0, 6.0) is obviously different than [3.0, 5.0] (on reals/Doubles/…). I have no idea: In Haskell, could you write an implementation of a function which would behave differently depending on whether the type argument belongs to a typeclass or not?

2024-01-09 13:38 UTC

Petr, thank you for writing. I don't think I'd add that (or similar) test cases, but it's a judgment call, and it's partly language-specific. What you're suggesting is to consider things that are equivalent equal. I agree that for integers this would be the case, but it wouldn't be for rational numbers, or floating points (or real numbers, if we had those in programming).

In Haskell it wouldn't really be idiomatic, because equality is defined by the Eq type class, and most types just go with the default implementation. What you suggest requires writing an explicit Eq instance for Endpoint. It'd be possible, but then you'd have to deal explicitly with the various integer representations separately from other representations that use floating points.

The distinction between equivalence and equality is largely artificial or a convenient hand wave. To explain what I mean, consider mathematical expressions. Obviously, 3 + 1 is equal to 2 + 2 when evaluated, but they're different expressions. Thus, on an expression level, we don't consider those two expressions equal. I think of the integer ranges (2, 6) and [3, 5] the same way. They evaluate to the same set, but they aren't equal.

I don't think that this is a strong argument, mind. In other programming languages, I might arrive at a different decision. It also matters what client code needs to do with the API. In any case, the decision to not consider equivalence the same as equality is congruent with how Haskell works.

The existence of floating points and rational numbers, however, opens another can of worms that I happily glossed over, since I had a completely different goal with the kata than producing a reusable library.

Haskell actually supports rational numbers with the % operator:

ghci> 1%2
1 % 2

This value represents ½, to be explicit.

Unfortunately, according to the specification (or, at least, the documentation) of the Enum type class, the two 'movement' operations succ and pred jump by increments of 1:

ghci> succ $ 1%2
3 % 2
ghci> succ $ succ $ 1%2
5 % 2

The same is the case with floating points:

ghci> succ 1.5
2.5
ghci> succ $ succ 1.5
3.5

This is unfortunate when it comes to floating points, since it would be possible to enumerate all floating points in a range. (For example, if a single-precision floating point occupies 32 bits, there's a finite number of them, and you can enumerate them.)

As Sonat Süer points out, this means that the allPoints function is fundamentally broken for floating points and rational numbers (and possibly other types as well).
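
A GHCi example makes the problem concrete; the output is worked out by hand from the enumFromTo-based implementation above:

ghci> import Data.Ratio ((%))
ghci> allPoints (Closed (1%2), Closed (5%2))
[1 % 2,3 % 2,5 % 2]

All the rational numbers strictly between those three points are silently skipped, even though the range clearly contains them.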

One way around that in Haskell would be to introduce a new type class for the purpose of truly enumerating ranges, and either implement it correctly for floating points, or explicitly avoid making Float and Double instances of that new type class. This, on the other hand, would have the downside that all of a sudden, the allPoints function wouldn't support any custom type of which I, as the implementer, am unaware.

If this was a library that I'd actually have to ship as a reusable API, I think I'd start by not including the allPoints function, and then see if anyone asks for it. If or when that happens, I'd begin a process to chart why people need it, and what could be done to serve those needs in a useful and mathematically consistent manner.

2024-01-13 19:51 UTC

Variations of the Range kata

Monday, 01 January 2024 17:00:00 UTC

In the languages I usually employ.

The Range kata is succinct, bordering on the spartan in both description and requirements. To be honest, it's hardly the most inspiring kata available, and yet it may help showcase a few interesting points about software design in general. It's what it demonstrates about functors that makes it marginally interesting.

In this short article series I first cover a few incarnations of the kata in my usual programming languages, and then conclude by looking at range as a functor.

The article series contains the following articles:

  • A Range kata implementation in Haskell
  • A Range kata implementation in F#
  • A Range kata implementation in C#
  • Range as a functor

I didn't take the same approaches through all three exercises. An important point about doing katas is to learn something, and when you've done the kata once, you've already gained some knowledge that can't easily be unlearned. Thus, on the second, or third time through, it's only natural to apply that knowledge, but then try different tactics to solve the problem in a different way. That's what I did here, starting with Haskell, proceeding with F#, and concluding with C#.

Next: A Range kata implementation in Haskell.


Serializing restaurant tables in C#

Monday, 25 December 2023 11:42:00 UTC

Using System.Text.Json, with and without Reflection.

This article is part of a short series of articles about serialization with and without Reflection. In this instalment I'll explore some options for serializing JSON with C# using the API built into .NET: System.Text.Json. I'm not going to use Json.NET in this article, but I've done similar things with that library in the past, so what's here is, at least, somewhat generalizable.

Since the API is the same, the only difference from the previous article is the language syntax.

Natural numbers #

Before we start investigating how to serialize to and from JSON, we must have something to serialize. As described in the introductory article we'd like to parse and write restaurant table configurations like this:

{
  "singleTable": {
    "capacity": 16,
    "minimalReservation": 10
  }
}

On the other hand, I'd like to represent the Domain Model in a way that encapsulates the rules governing the model, making illegal states unrepresentable. Even though that's a catchphrase associated with functional programming, it applies equally well to a statically typed object-oriented language like C#.

As the first step, we observe that the numbers involved are all natural numbers. In C# it's rarer to define predicative data types than in a language like F#, but people should do it more.

public readonly struct NaturalNumber : IEquatable<NaturalNumber>
{
    private readonly int value;
 
    public NaturalNumber(int value)
    {
        if (value < 1)
            throw new ArgumentOutOfRangeException(
                nameof(value),
                "Value must be a natural number greater than zero.");
        this.value = value;
    }
 
    public static NaturalNumber? TryCreate(int candidate)
    {
        if (candidate < 1)
            return null;
        return new NaturalNumber(candidate);
    }
 
    public static bool operator <(NaturalNumber left, NaturalNumber right)
    {
        return left.value < right.value;
    }
 
    public static bool operator >(NaturalNumber left, NaturalNumber right)
    {
        return left.value > right.value;
    }
 
    public static bool operator <=(NaturalNumber left, NaturalNumber right)
    {
        return left.value <= right.value;
    }
 
    public static bool operator >=(NaturalNumber left, NaturalNumber right)
    {
        return left.value >= right.value;
    }
 
    public static bool operator ==(NaturalNumber left, NaturalNumber right)
    {
        return left.value == right.value;
    }
 
    public static bool operator !=(NaturalNumber left, NaturalNumber right)
    {
        return left.value != right.value;
    }
 
    public static explicit operator int(NaturalNumber number)
    {
        return number.value;
    }
 
    public override bool Equals(object? obj)
    {
        return obj is NaturalNumber number && Equals(number);
    }
 
    public bool Equals(NaturalNumber other)
    {
        return value == other.value;
    }
 
    public override int GetHashCode()
    {
        return HashCode.Combine(value);
    }
}

When comparing all that boilerplate code to the three lines required to achieve the same result in F#, it seems, at first glance, understandable that C# developers rarely reach for that option. Still, typing is not a programming bottleneck, and most of that code was generated by a combination of Visual Studio and GitHub Copilot.

The TryCreate method may not be strictly necessary, but I consider it good practice to give client code a way to perform a fault-prone operation in a safe manner, without having to resort to a try/catch construct.
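For example, a caller that receives an integer from an untrusted source might use it like this. This is only an illustrative sketch; the candidate value and the calling code aren't part of the code base shown in this article:

NaturalNumber? n = NaturalNumber.TryCreate(0);
if (n is null)
    Console.WriteLine("Not a natural number."); // This branch runs for 0.
else
    Console.WriteLine($"Got {(int)n.Value}.");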

That's it for natural numbers. 72 lines of code. Compare that to the F# implementation, which required three lines of code. Syntax does matter.

Domain Model #

Modelling a restaurant table follows in the same vein. One invariant I would like to enforce is that for a 'single' table, the minimal reservation should be a NaturalNumber less than or equal to the table's capacity. It doesn't make sense to configure a table for four with a minimum reservation of six.

In the same spirit as above, then, define this type:

public readonly struct Table
{
    private readonly NaturalNumber capacity;
    private readonly NaturalNumber? minimalReservation;
 
    private Table(NaturalNumber capacity, NaturalNumber? minimalReservation)
    {
        this.capacity = capacity;
        this.minimalReservation = minimalReservation;
    }
 
    public static Table? TryCreateSingle(int capacity, int minimalReservation)
    {
        var cap = NaturalNumber.TryCreate(capacity);
        if (cap is null)
            return null;
        var min = NaturalNumber.TryCreate(minimalReservation);
        if (min is null)
            return null;
 
        if (cap < min)
            return null;
 
        return new Table(cap.Value, min.Value);
    }
 
    public static Table? TryCreateCommunal(int capacity)
    {
        var cap = NaturalNumber.TryCreate(capacity);
        if (cap is null)
            return null;
 
        return new Table(cap.Value, null);
    }
 
    public T Accept<T>(ITableVisitor<T> visitor)
    {
        if (minimalReservation is null)
            return visitor.VisitCommunal(capacity);
        else
            return visitor.VisitSingle(capacity, minimalReservation.Value);
    }
}

Here I've Visitor-encoded the sum type that Table is. It can either be a 'single' table or a communal table.

Notice that TryCreateSingle checks the invariant that the capacity must be greater than or equal to the minimalReservation.
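To illustrate the effect of that check, consider these two hypothetical calls (the values are chosen only for the example):

var t1 = Table.TryCreateSingle(4, 6); // null: the minimal reservation exceeds the capacity
var t2 = Table.TryCreateSingle(6, 4); // a valid 'single' table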

The point of this little exercise, so far, is that it encapsulates the contract implied by the Domain Model. It does this by using the static type system to its advantage.

JSON serialization by hand #

At the boundaries of applications, however, there are no static types. Is the static type system still useful in that situation?

For a long time, the most popular .NET library for JSON serialization was Json.NET, but these days I find the built-in API offered in the System.Text.Json namespace adequate. This is also the case here.

The original rationale for this article series was to demonstrate how serialization can be done without Reflection, so I'll start there and return to Reflection later.

In this article series, I consider the JSON format fixed. A single table should be rendered as shown above, and a communal table should be rendered like this:

"communalTable": { "capacity": 42 } }

Often in the real world you'll have to conform to a particular protocol format, or, even if that's not the case, being able to control the shape of the wire format is important to deal with backwards compatibility.

As I outlined in the introduction article you can usually find a more weakly typed API to get the job done. For serializing Table to JSON it looks like this:

public static string Serialize(this Table table)
{
    return table.Accept(new TableVisitor());
}
 
private sealed class TableVisitor : ITableVisitor<string>
{
    public string VisitCommunal(NaturalNumber capacity)
    {
        var j = new JsonObject
        {
            ["communalTable"] = new JsonObject
            {
                ["capacity"] = (int)capacity
            }
        };
        return j.ToJsonString();
    }
 
    public string VisitSingle(NaturalNumber capacity, NaturalNumber value)
    {
        var j = new JsonObject
        {
            ["singleTable"] = new JsonObject
            {
                ["capacity"] = (int)capacity,
                ["minimalReservation"] = (int)value
            }
        };
        return j.ToJsonString();
    }
}

In order to separate concerns, I've defined this functionality in a new static class that references the Domain Model. The Serialize extension method uses a private Visitor to write two different JsonObject objects, using the JSON API's underlying Document Object Model (DOM).
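As a quick usage sketch (the value 42 is only an example), serializing a communal table reproduces the JSON format shown earlier:

var json = Table.TryCreateCommunal(42)?.Serialize();
// json is now {"communalTable":{"capacity":42}}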

JSON deserialization by hand #

You can also go the other way, and when it looks more complicated, it's because it is. When serializing an encapsulated value, not a lot can go wrong because the value is already valid. When deserializing a JSON string, on the other hand, all sorts of things can go wrong: It might not even be a valid string, or the string may not be valid JSON, or the JSON may not be a valid Table representation, or the values may be illegal, etc.

Since there are several values that explicitly must be integers, it makes sense to define a helper method to try to parse an integer:

private static int? TryInt(this JsonNode? node)
{
    if (node is null)
        return null;
    
    if (node.GetValueKind() != JsonValueKind.Number)
        return null;
 
    try
    {
        return (int)node;
    }
    catch (FormatException)
    {
        return null;
    }
}

I'm surprised that there's no built-in way to do that, but if there is, I couldn't find it.

With a helper method like that you can now implement the Deserialize method:

public static Table? Deserialize(string json)
{
    try
    {
        var node = JsonNode.Parse(json);
 
        var cnode = node?["communalTable"];
        if (cnode is { })
        {
            var capacity = cnode["capacity"].TryInt();
            if (capacity is null)
                return null;
            return Table.TryCreateCommunal(capacity.Value);
        }
 
        var snode = node?["singleTable"];
        if (snode is { })
        {
            var capacity = snode["capacity"].TryInt();
            var minimalReservation = snode["minimalReservation"].TryInt();
            if (capacity is null || minimalReservation is null)
                return null;
            return Table.TryCreateSingle(
                capacity.Value,
                minimalReservation.Value);
        }
 
        return null;
    }
    catch (JsonException)
    {
        return null;
    }
}

Since both serialisation and deserialization are based on string values, you should write automated tests that verify that the code works, and in fact, I did. Here are a few examples:

[Fact]
public void DeserializeSingleTableFor4()
{
    var json = """{"singleTable":{"capacity":4,"minimalReservation":3}}""";
    var actual = TableJson.Deserialize(json);
    Assert.Equal(Table.TryCreateSingle(4, 3), actual);
}
 
[Fact]
public void DeserializeNonTable()
{
    var json = """{"foo":42}""";
    var actual = TableJson.Deserialize(json);
    Assert.Null(actual);
}

Apart from using directives and namespace declaration this hand-written JSON capability requires 87 lines of code, although, to be fair, TryInt is a general-purpose method that ought to be part of the System.Text.Json API. Can we do better with static types and Reflection?

JSON serialisation based on types #

The static JsonSerializer class comes with Serialize<T> and Deserialize<T> methods that use Reflection to convert a statically typed object to and from JSON. You can define a type (a Data Transfer Object (DTO) if you will) and let Reflection do the hard work.

In Code That Fits in Your Head I explain how you're usually better off separating the role of serialization from the role of Domain Model. One way to do that is exactly by defining a DTO for serialisation, and let the Domain Model remain exclusively to model the rules of the application. The above Table type plays the latter role, so we need new DTO types:

public sealed class TableDto
{
    public CommunalTableDto? CommunalTable { get; set; }
    public SingleTableDto? SingleTable { get; set; }
}

public sealed class CommunalTableDto
{
    public int Capacity { get; set; }
}

public sealed class SingleTableDto
{
    public int Capacity { get; set; }
    public int MinimalReservation { get; set; }
}

One way to model a sum type with a DTO is to declare both cases as nullable fields. While it does allow illegal states to be representable (i.e. both kinds of tables defined at the same time, or none of them present) this is only par for the course at the application boundary.

While you can serialize values of that type, by default the generated JSON doesn't have the right format. Instead, a serialized communal table looks like this:

{
  "CommunalTable": { "Capacity": 42 },
  "SingleTable"null
}

There are two problems with the generated JSON document:

  • The casing is wrong
  • The null value shouldn't be there

None of those are too hard to address, but it does make the API a bit more awkward to use, as this test demonstrates:

[Fact]
public void SerializeCommunalTableViaReflection()
{
    var dto = new TableDto
    {
        CommunalTable = new CommunalTableDto { Capacity = 42 }
    };
 
    var actual = JsonSerializer.Serialize(
        dto,
        new JsonSerializerOptions
        {
            PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
            DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull
        });
 
    Assert.Equal("""{"communalTable":{"capacity":42}}""", actual);
}

You can, of course, define this particular serialization behaviour as a reusable method, so it's not a problem that you can't address. I just wanted to include this, since it's part of the overall work that you have to do in order to make this work.
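For example, such a reusable method could bundle the options with the call. The placement and the names below are my own sketch, not code from the code base discussed in this article:

// Hypothetical helper: keeps the naming policy and null-handling in one place.
private static readonly JsonSerializerOptions writeOptions = new()
{
    PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
    DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull
};
 
public static string Serialize(this TableDto dto)
{
    // Reusing the shared options ensures that every call site produces the same format.
    return JsonSerializer.Serialize(dto, writeOptions);
}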

JSON deserialisation based on types #

To allow parsing of JSON into the above DTO the Reflection-based Deserialize method pretty much works out of the box, although again, it needs to be configured. Here's a passing test that demonstrates how that works:

[Fact]
public void DeserializeSingleTableViaReflection()
{
    var json = """{"singleTable":{"capacity":4,"minimalReservation":2}}""";
 
    var actual = JsonSerializer.Deserialize<TableDto>(
        json,
        new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase });
 
    Assert.Null(actual?.CommunalTable);
    Assert.Equal(4, actual?.SingleTable?.Capacity);
    Assert.Equal(2, actual?.SingleTable?.MinimalReservation);
}

The only difference is the casing, so you'd expect the Deserialize method to be a Tolerant Reader, but no. It's very particular about that, so the JsonNamingPolicy.CamelCase configuration is necessary. Perhaps the API designers found that explicit is better than implicit.

In any case, you could package that in a reusable Deserialize function that has all the options that are appropriate in a particular code context, so not a big deal. That takes care of actually writing and parsing JSON, but that's only half the battle. This only gives you a way to parse and serialize the DTO. What you ultimately want is to persist or dehydrate Table data.
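Before moving on, here's a sketch of what such a reusable function could look like; the name DeserializeDto is mine, and not part of the code shown elsewhere in this article:

public static TableDto? DeserializeDto(string json)
{
    // Centralises the camel-case configuration so that callers don't have to repeat it.
    return JsonSerializer.Deserialize<TableDto>(
        json,
        new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase });
}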

Converting DTO to Domain Model, and vice versa #

As usual, converting a nice, encapsulated value to a more relaxed format is safe and trivial:

public static TableDto ToDto(this Table table)
{
    return table.Accept(new TableDtoVisitor());
}
 
private sealed class TableDtoVisitor : ITableVisitor<TableDto>
{
    public TableDto VisitCommunal(NaturalNumber capacity)
    {
        return new TableDto
        {
            CommunalTable = new CommunalTableDto
            {
                Capacity = (int)capacity
            }
        };
    }
 
    public TableDto VisitSingle(
        NaturalNumber capacity,
        NaturalNumber value)
    {
        return new TableDto
        {
            SingleTable = new SingleTableDto
            {
                Capacity = (int)capacity,
                MinimalReservation = (int)value
            }
        };
    }
}

Going the other way is fundamentally a parsing exercise:

public Table? TryParse()
{
    if (CommunalTable is { })
        return Table.TryCreateCommunal(CommunalTable.Capacity);
    if (SingleTable is { })
        return Table.TryCreateSingle(
            SingleTable.Capacity,
            SingleTable.MinimalReservation);
 
    return null;
}

Here, like in Code That Fits in Your Head, I've made that conversion an instance method on TableDto.

Such an operation may fail, so the result is a nullable Table object.
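Putting the pieces together, a full round trip from a raw JSON string to a Table can compose the Reflection-based deserializer with TryParse. The following is a hypothetical composition of the building blocks above, not code from this article's code base:

public static Table? DeserializeTable(string json)
{
    try
    {
        // Reflection-based parsing into the DTO, followed by Domain Model validation.
        var dto = JsonSerializer.Deserialize<TableDto>(
            json,
            new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase });
        return dto?.TryParse();
    }
    catch (JsonException)
    {
        // Invalid JSON is treated the same way as an invalid table: no result.
        return null;
    }
}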

Let's take stock of the type-based alternative. It requires 58 lines of code, distributed over three DTO types and the two conversions ToDto and TryParse, but here I haven't counted configuration of Serialize and Deserialize, since I left that to each test case that I wrote. Since all of this code generally stays within 80 characters in line width, that would realistically add another 10 lines of code, for a total of around 68 lines.

This is smaller than the DOM-based code, but not by much.

Conclusion #

In this article I've explored two alternatives for converting a well-encapsulated Domain Model to and from JSON. One option is to directly manipulate the DOM. Another option is to take a more declarative approach and define types that model the shape of the JSON data, and then leverage type-based automation (here, Reflection) to automatically parse and write the JSON.

I've deliberately chosen a Domain Model with some constraints, in order to demonstrate how persisting a non-trivial data model might work. With that setup, writing 'loosely coupled' code directly against the DOM requires 87 lines of code, while taking advantage of type-based automation requires 68 lines of code. Again, Reflection seems 'easier' if you count lines of code, but the difference is marginal.


Comments

Great piece as ever Mark. Always enjoy reading about alternatives to methods that have become unquestioned convention.

I generally try to avoid reflection, especially within business code, and mainly use it for application bootstrapping, such as to discover services for dependency injection by convention. I also don't like attributes muddying model definitions, even on DTOs, so I would happily take an alternative to System.Text.Json. It is however increasingly integrated into other System libraries in ways that make it almost too useful to pass up. For example, the System.Net.Http.HttpContent class has the ReadFromJsonAsync extension method, which makes it trivial to deserialize a response body. Analogous methods exist for BinaryData. I'm not normally a sucker for convenience, but it is difficult to turn down strong integration like this.

2024-01-05 21:13 UTC

Callum, thank you for writing. You are correct that the people who design and develop .NET put a lot of effort into making things convenient. Some of that convenience, however, comes with a price. You have to buy into a certain way of doing things, and that certain way can sometimes be at odds with other good software practices, such as the Dependency Inversion Principle or test-driven development.

My goal with this (and other) article(s) isn't, however, to say that you mustn't take advantage of convenient integrations, but rather to highlight that alternatives exist.

The many 'convenient' ways that a framework gives you to solve various problems come with the risk that you may paint yourself into a corner, if you aren't careful. You've invested heavily in the framework's way of doing things, but there's just this small edge case that you can't get right. So you write a bit of custom code, after having figured out the correct extensibility point to hook into. Until the framework changes 'how things are done' in the next iteration.

This is what I call Framework Whac-A-Mole - a syndrome that I'm becoming increasingly wary of the more experience I gain. Of the examples linked to in that article, ASP.NET validation revisited may be the most relevant to this discussion.

As a final note, I'd be remiss if I entered into a discussion about programmer convenience without drawing on Rich Hickey's excellent presentation Simple Made Easy, where he goes to great lengths distinguishing between what is easy (i.e. close at hand) and what is simple (i.e. not complex). The sweet spot, of course, is the intersection, where things are both simple and easy.

Most 'convenient' framework features do not, in my opinion, check that box.

2024-01-10 13:37 UTC

Serializing restaurant tables in F#

Monday, 18 December 2023 13:59:00 UTC

Using System.Text.Json, with and without Reflection.

This article is part of a short series of articles about serialization with and without Reflection. In this instalment I'll explore some options for serializing JSON with F# using the API built into .NET: System.Text.Json. I'm not going to use Json.NET in this article, but I've done similar things with that library in the past, so what's here is, at least, somewhat generalizable.

Natural numbers #

Before we start investigating how to serialize to and from JSON, we must have something to serialize. As described in the introductory article we'd like to parse and write restaurant table configurations like this:

{
  "singleTable": {
    "capacity": 16,
    "minimalReservation": 10
  }
}

On the other hand, I'd like to represent the Domain Model in a way that encapsulates the rules governing the model, making illegal states unrepresentable.

As the first step, we observe that the numbers involved are all natural numbers. In F# it's both idiomatic and easy to define a predicative data type:

type NaturalNumber = private NaturalNumber of int

Since it's defined with a private constructor we need to also supply a way to create valid values of the type:

module NaturalNumber =
    let tryCreate n = if n < 1 then None else Some (NaturalNumber n)

In this, as well as the other articles in this series, I've chosen to model the potential for errors with Option values. I could also have chosen to use Result if I wanted to communicate information along the 'error channel', but sticking with Option makes the code a bit simpler. Not so much in F# or Haskell, but once we reach C#, applicative validation becomes complicated.

There's no loss of generality in this decision, since both Option and Result are applicative functors.

> NaturalNumber.tryCreate -1;;
val it: NaturalNumber option = None

> let x = NaturalNumber.tryCreate 42;;
val x: NaturalNumber option = Some NaturalNumber 42

The tryCreate function enables client developers to create NaturalNumber values, and due to F#'s default equality and comparison implementation, you can even compare them:

> let y = NaturalNumber.tryCreate 2112;;
val y: NaturalNumber option = Some NaturalNumber 2112

> x < y;;
val it: bool = true

That's it for natural numbers. Three lines of code. Compare that to the Haskell implementation, which required eight lines of code. This is mostly due to F#'s private keyword, which Haskell doesn't have.

Domain Model #

Modelling a restaurant table follows in the same vein. One invariant I would like to enforce is that for a 'single' table, the minimal reservation should be a NaturalNumber less than or equal to the table's capacity. It doesn't make sense to configure a table for four with a minimum reservation of six.

In the same spirit as above, then, define this type:

type Table =
    private
    | SingleTable of NaturalNumber * NaturalNumber
    | CommunalTable of NaturalNumber

Once more the private keyword makes it impossible for client code to create instances directly, so we need a pair of functions to create values:

module Table =
    let trySingle capacity minimalReservation = option {
        let! cap = NaturalNumber.tryCreate capacity
        let! min = NaturalNumber.tryCreate minimalReservation
        if cap < min then return! None
        else return SingleTable (cap, min) }
 
    let tryCommunal = NaturalNumber.tryCreate >> Option.map CommunalTable

Notice that trySingle checks the invariant that the capacity must be greater than or equal to the minimalReservation.

Again, notice how much easier it is to define a predicative type in F#, compared to Haskell.

This isn't a competition between languages, and while F# certainly scores a couple of points here, Haskell has other advantages.

The point of this little exercise, so far, is that it encapsulates the contract implied by the Domain Model. It does this by using the static type system to its advantage.

JSON serialization by hand #

At the boundaries of applications, however, there are no static types. Is the static type system still useful in that situation?

For a long time, the most popular .NET library for JSON serialization was Json.NET, but these days I find the built-in API offered in the System.Text.Json namespace adequate. This is also the case here.

The original rationale for this article series was to demonstrate how serialization can be done without Reflection, so I'll start there and return to Reflection later.

In this article series, I consider the JSON format fixed. A single table should be rendered as shown above, and a communal table should be rendered like this:

"communalTable": { "capacity": 42 } }

Often in the real world you'll have to conform to a particular protocol format, or, even if that's not the case, being able to control the shape of the wire format is important to deal with backwards compatibility.

As I outlined in the introduction article you can usually find a more weakly typed API to get the job done. For serializing Table to JSON it looks like this:

let serializeTable = function
    | SingleTable (NaturalNumber capacity, NaturalNumber minimalReservation) ->
        let j = JsonObject ()
        j["singleTable"<- JsonObject ()
        j["singleTable"]["capacity"<- capacity
        j["singleTable"]["minimalReservation"<- minimalReservation
        j.ToJsonString ()
    | CommunalTable (NaturalNumber capacity) ->
        let j = JsonObject ()
        j["communalTable"<- JsonObject ()
        j["communalTable"]["capacity"<- capacity
        j.ToJsonString ()

In order to separate concerns, I've defined this functionality in a new module that references the module that defines the Domain Model. The serializeTable function pattern-matches on SingleTable and CommunalTable to write two different JsonObject objects, using the JSON API's underlying Document Object Model (DOM).

JSON deserialization by hand #

You can also go the other way, and when it looks more complicated, it's because it is. When serializing an encapsulated value, not a lot can go wrong because the value is already valid. When deserializing a JSON string, on the other hand, all sorts of things can go wrong: It might not even be a valid string, or the string may not be valid JSON, or the JSON may not be a valid Table representation, or the values may be illegal, etc.

Here I found it appropriate to first define a small API of parsing functions, mostly in order to make the object-oriented API more composable. First, I need some code that looks at the root JSON object to determine which kind of table it is (if it's a table at all). I found it appropriate to do that as a pair of active patterns:

let private (|Single|_|) (node : JsonNode) =
    match node["singleTable"with
    | null -> None
    | tn -> Some tn
 
let private (|Communal|_|) (node : JsonNode) =
    match node["communalTable"with
    | null -> None
    | tn -> Some tn

It turned out that I also needed a function to even check if a string is a valid JSON document:

let private tryParseJson (candidate : string) =
    try JsonNode.Parse candidate |> Some
    with | :? System.Text.Json.JsonException -> None

If there's a way to do that without a try/with expression, I couldn't find it. Likewise, trying to parse an integer turns out to be surprisingly complicated:

let private tryParseInt (node : JsonNode) =
    match node with
    | null -> None
    | _ ->
        if node.GetValueKind () = JsonValueKind.Number
        then
            try node |> int |> Some
            with | :? FormatException -> None // Thrown on decimal numbers
        else None

Both tryParseJson and tryParseInt are, however, general-purpose functions, so if you have a lot of JSON you need to parse, you can put them in a reusable library.

With those building blocks you can now define a function to parse a Table:

let tryDeserializeTable (candidate : string) =
    match tryParseJson candidate with
    | Some (Single node) -> option {
        let! capacity = node["capacity"] |> tryParseInt
        let! minimalReservation = node["minimalReservation"] |> tryParseInt
        return! Table.trySingle capacity minimalReservation }
    | Some (Communal node) -> option {
        let! capacity = node["capacity"] |> tryParseInt
        return! Table.tryCommunal capacity }
    | _ -> None

Since both serialisation and deserialization are based on string values, you should write automated tests that verify that the code works, and in fact, I did. Here are a few examples:

[<Fact>]
let ``Deserialize single table for 4`` () =
    let json = """{"singleTable":{"capacity":4,"minimalReservation":3}}"""
    let actual = tryDeserializeTable json
    Table.trySingle 4 3 =! actual
 
[<Fact>]
let ``Deserialize non-table`` () =
    let json = """{"foo":42}"""
    let actual = tryDeserializeTable json
    None =! actual

Apart from module declaration, imports, etc. this hand-written JSON capability requires 46 lines of code, although, to be fair, some of that code (tryParseJson and tryParseInt) consists of general-purpose functions that belong in a reusable library. Can we do better with static types and Reflection?

JSON serialisation based on types #

The static JsonSerializer class comes with Serialize<T> and Deserialize<T> methods that use Reflection to convert a statically typed object to and from JSON. You can define a type (a Data Transfer Object (DTO) if you will) and let Reflection do the hard work.

In Code That Fits in Your Head I explain how you're usually better off separating the role of serialization from the role of Domain Model. One way to do that is exactly by defining a DTO for serialisation, and let the Domain Model remain exclusively to model the rules of the application. The above Table type plays the latter role, so we need new DTO types:

type CommunalTableDto = { Capacity : int }
type SingleTableDto = { Capacity : int; MinimalReservation : int }
type TableDto = {
    CommunalTable : CommunalTableDto option
    SingleTable : SingleTableDto option }

One way to model a sum type with a DTO is to declare both cases as option fields. While it does allow illegal states to be representable (i.e. both kinds of tables defined at the same time, or none of them present) this is only par for the course at the application boundary.

While you can serialize values of that type, by default the generated JSON doesn't have the right format:

> let dto = { CommunalTable = Some { Capacity = 42 }; SingleTable = None };;
val dto: TableDto = { CommunalTable = Some { Capacity = 42 }
                      SingleTable = None }

> JsonSerializer.Serialize dto;;
val it: string = "{"CommunalTable":{"Capacity":42},"SingleTable":null}"

There are two problems with the generated JSON document:

  • The casing is wrong
  • The null value shouldn't be there

None of those are too hard to address, but it does make the API a bit more awkward to use, as this test demonstrates:

[<Fact>]
let ``Serialize communal table via reflection`` () =
    let dto = { CommunalTable = Some { Capacity = 42 }; SingleTable = None }
    let actual =
        JsonSerializer.Serialize (
            dto,
            JsonSerializerOptions (
                PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
                IgnoreNullValues = true ))
    """{"communalTable":{"capacity":42}}""" =! actual

You can, of course, define this particular serialization behaviour as a reusable function, so it's not a problem that you can't address. I just wanted to include this, since it's part of the overall work that you have to do in order to make this work.

JSON deserialisation based on types #

To allow parsing of JSON into the above DTO the Reflection-based Deserialize method pretty much works out of the box, although again, it needs to be configured. Here's a passing test that demonstrates how that works:

[<Fact>]
let ``Deserialize single table via reflection`` () =
    let json = """{"singleTable":{"capacity":4,"minimalReservation":2}}"""
    let actual =
        JsonSerializer.Deserialize<TableDto> (
            json,
            JsonSerializerOptions ( PropertyNamingPolicy = JsonNamingPolicy.CamelCase ))
    {
        CommunalTable = None
        SingleTable = Some { Capacity = 4; MinimalReservation = 2 }
    } =! actual

The only difference is the casing, so you'd expect the Deserialize method to be a Tolerant Reader, but no. It's very particular about that, so the JsonNamingPolicy.CamelCase configuration is necessary. Perhaps the API designers found that explicit is better than implicit.

In any case, you could package that in a reusable Deserialize function that has all the options that are appropriate in a particular code context, so not a big deal. That takes care of actually writing and parsing JSON, but that's only half the battle. This only gives you a way to parse and serialize the DTO. What you ultimately want is to persist or dehydrate Table data.

Converting DTO to Domain Model, and vice versa #

As usual, converting a nice, encapsulated value to a more relaxed format is safe and trivial:

let toTableDto = function
    | SingleTable (NaturalNumber capacity, NaturalNumber minimalReservation) ->
        {
            CommunalTable = None
            SingleTable =
                Some
                    {
                        Capacity = capacity
                        MinimalReservation = minimalReservation
                    }
        }
    | CommunalTable (NaturalNumber capacity) ->
        { CommunalTable = Some { Capacity = capacity }; SingleTable = None }

Going the other way is fundamentally a parsing exercise:

let tryParseTableDto candidate =
    match candidate.CommunalTable, candidate.SingleTable with
    | Some { Capacity = capacity }, None -> Table.tryCommunal capacity
    | None, Some { Capacity = capacity; MinimalReservation = minimalReservation } ->
        Table.trySingle capacity minimalReservation
    | _ -> None

Such an operation may fail, so the result is a Table option. It could also have been a Result<Table, 'something>, if you wanted to return information about errors when things go wrong. It makes the code marginally more complex, but doesn't change the overall thrust of this exploration.

Ironically, while tryParseTableDto is actually more complex than toTableDto, it looks smaller, or at least denser.

Let's take stock of the type-based alternative. It requires 26 lines of code, distributed over three DTO types and the two conversions tryParseTableDto and toTableDto, but here I haven't counted configuration of Serialize and Deserialize, since I left that to each test case that I wrote. Since all of this code generally stays within 80 characters in line width, that would realistically add another 10 lines of code, for a total of around 36 lines.

This is smaller than the DOM-based code, although at the same magnitude.

Conclusion #

In this article I've explored two alternatives for converting a well-encapsulated Domain Model to and from JSON. One option is to directly manipulate the DOM. Another option is to take a more declarative approach and define types that model the shape of the JSON data, and then leverage type-based automation (here, Reflection) to automatically parse and write the JSON.

I've deliberately chosen a Domain Model with some constraints, in order to demonstrate how persisting a non-trivial data model might work. With that setup, writing 'loosely coupled' code directly against the DOM requires 46 lines of code, while taking advantage of type-based automation requires 36 lines of code. Contrary to the Haskell example, Reflection does seem to edge out a win in this round.

Next: Serializing restaurant tables in C#.


Serializing restaurant tables in Haskell

Monday, 11 December 2023 07:35:00 UTC

Using Aeson, with and without generics.

This article is part of a short series of articles about serialization with and without Reflection. In this instalment I'll explore some options for serializing JSON using Aeson.

The source code is available on GitHub.

Natural numbers #

Before we start investigating how to serialize to and from JSON, we must have something to serialize. As described in the introductory article we'd like to parse and write restaurant table configurations like this:

{
  "singleTable": {
    "capacity": 16,
    "minimalReservation": 10
  }
}

On the other hand, I'd like to represent the Domain Model in a way that encapsulates the rules governing the model, making illegal states unrepresentable.

As the first step, we observe that the numbers involved are all natural numbers. While I'm aware that Haskell has a built-in Nat type, I choose not to use it here, for a couple of reasons. One is that Nat is intended for type-level programming, and while this might be useful here, I don't want to pull in more exotic language features than are required. Another reason is that, in this domain, I want to model natural numbers as excluding zero (and I honestly don't remember if Nat allows zero, but I think that it does...?).

Another option is to use Peano numbers, but again, for didactic reasons, I'll stick with something a bit more idiomatic.

You can easily introduce a wrapper over, say, Integer, to model natural numbers:

newtype Natural = Natural Integer deriving (Eq, Ord, Show)

This, however, doesn't prevent you from writing Natural (-1), so we need to make this a predicative data type. The first step is to only export the type, but not its data constructor:

module Restaurants (
  Natural,
  -- More exports here...
  ) where

But this makes it impossible for client code to create values of the type, so we need to supply a smart constructor:

tryNatural :: Integer -> Maybe Natural
tryNatural n
  | n < 1 = Nothing
  | otherwise = Just (Natural n)

In this, as well as the other articles in this series, I've chosen to model the potential for errors with Maybe values. I could also have chosen to use Either if I wanted to communicate information along the 'error channel', but sticking with Maybe makes the code a bit simpler. Not so much in Haskell or F#, but once we reach C#, applicative validation becomes complicated.

There's no loss of generality in this decision, since both Maybe and Either are Applicative instances.

With the tryNatural function you can now (attempt to) create Natural values:

ghci> tryNatural (-1)
Nothing
ghci> x = tryNatural 42
ghci> x
Just (Natural 42)

This enables client developers to create Natural values, and due to the type's Ord instance, you can even compare them:

ghci> y = tryNatural 2112
ghci> x < y
True

Even so, there will be cases when you need to extract the underlying Integer from a Natural value. You could supply a normal function for that purpose, but in order to make some of the following code a little more elegant, I chose to do it with pattern synonyms:

{-# COMPLETE N #-}
pattern N :: Integer -> Natural
pattern N i <- Natural i

That needs to be exported as well.

So, eight lines of code to declare a predicative type that models a natural number. Incidentally, this'll be 2-3 lines of code in F#.

Domain Model #

Modelling a restaurant table follows in the same vein. One invariant I would like to enforce is that for a 'single' table, the minimal reservation should be a Natural number less than or equal to the table's capacity. It doesn't make sense to configure a table for four with a minimum reservation of six.

In the same spirit as above, then, define this type:

data SingleTable = SingleTable
  { singleCapacity :: Natural
  , minimalReservation :: Natural
  } deriving (Eq, Ord, Show)

Again, only export the type, but not its data constructor. In order to extract values, then, supply another pattern synonym:

{-# COMPLETE SingleT #-}
pattern SingleT :: Natural -> Natural -> SingleTable
pattern SingleT c m <- SingleTable c m

Finally, define a Table type and two smart constructors:

data Table = Single SingleTable | Communal Natural deriving (Eq, Show)
 
trySingleTable :: Integer -> Integer -> Maybe Table
trySingleTable capacity minimal = do
  c <- tryNatural capacity
  m <- tryNatural minimal
  if c < m then Nothing else Just (Single (SingleTable c m))
 
tryCommunalTable :: Integer -> Maybe Table
tryCommunalTable = fmap Communal . tryNatural

Notice that trySingleTable checks the invariant that the capacity must be greater than or equal to the minimal reservation.

The point of this little exercise, so far, is that it encapsulates the contract implied by the Domain Model. It does this by using the static type system to its advantage.

JSON serialization by hand #

At the boundaries of applications, however, there are no static types. Is the static type system still useful in that situation?

For Haskell, the most common JSON library is Aeson, and I admit that I'm no expert. Thus, it's possible that there's an easier way to serialize to and deserialize from JSON. If so, please leave a comment explaining the alternative.

The original rationale for this article series was to demonstrate how serialization can be done without Reflection, or, in the case of Haskell, Generics (not to be confused with .NET generics, which in Haskell usually is called parametric polymorphism). We'll return to Generics later in this article.

In this article series, I consider the JSON format fixed. A single table should be rendered as shown above, and a communal table should be rendered like this:

"communalTable": { "capacity": 42 } }

Often in the real world you'll have to conform to a particular protocol format, or, even if that's not the case, being able to control the shape of the wire format is important to deal with backwards compatibility.

As I outlined in the introduction article you can usually find a more weakly typed API to get the job done. For serializing Table to JSON it looks like this:

newtype JSONTable = JSONTable Table deriving (Eq, Show)
 
instance ToJSON JSONTable where
  toJSON (JSONTable (Single (SingleT (N c) (N m)))) =
    object ["singleTable" .= object [
      "capacity" .= c,
      "minimalReservation" .= m]]
  toJSON (JSONTable (Communal (N c))) =
    object ["communalTable" .= object ["capacity" .= c]]

In order to separate concerns, I've defined this functionality in a new module that references the module that defines the Domain Model. Thus, to avoid orphan instances, I've defined a JSONTable newtype wrapper that I then make a ToJSON instance.

The toJSON function pattern-matches on Single and Communal to write two different Values, using Aeson's underlying Document Object Model (DOM).

JSON deserialization by hand #

You can also go the other way, and when it looks more complicated, it's because it is. When serializing an encapsulated value, not a lot can go wrong because the value is already valid. When deserializing a JSON string, on the other hand, all sorts of things can go wrong: It might not even be a valid string, or the string may not be valid JSON, or the JSON may not be a valid Table representation, or the values may be illegal, etc.

It's no surprise, then, that the FromJSON instance is bigger:

instance FromJSON JSONTable where
  parseJSON (Object v) = do
    single <- v .:? "singleTable"
    communal <- v .:? "communalTable"
    case (single, communal) of
      (Just s, Nothing) -> do
        capacity <- s .: "capacity"
        minimal <- s .: "minimalReservation"
        case trySingleTable capacity minimal of
          Nothing -> fail "Expected natural numbers."
          Just t -> return $ JSONTable t
      (Nothing, Just c) -> do
        capacity <- c .: "capacity"
        case tryCommunalTable capacity of
          Nothing -> fail "Expected a natural number."
          Just t -> return $ JSONTable t
      _ -> fail "Expected exactly one of singleTable or communalTable."
  parseJSON _ = fail "Expected an object."

I could probably have done this more succinctly if I'd spent even more time on it than I already did, but it gets the job done and demonstrates the point. Instead of relying on run-time Reflection, the FromJSON instance is, unsurprisingly, a parser, composed from Aeson's specialised parser combinator API.

Since both serialisation and deserialization are based on string values, you should write automated tests that verify that the code works.

Apart from module declaration and imports etc. this hand-written JSON capability requires 27 lines of code. Can we do better with static types and Generics?

JSON serialisation based on types #

The intent with the Aeson library is that you define a type (a Data Transfer Object (DTO) if you will), and then let 'compiler magic' do the rest. In Haskell, it's not run-time Reflection, but a compilation technology called Generics. As I understand it, it automatically 'writes' the serialization and parsing code and turns it into machine code as part of normal compilation.

You're supposed to first turn on the

{-# LANGUAGE DeriveGeneric #-}

language pragma and then tell the compiler to automatically derive Generic for the DTO in question. You'll see an example of that shortly.

It's a fairly flexible system that you can tweak in various ways, but if it's possible to do it directly with the above Table type, please leave a comment explaining how. I tried, but couldn't make it work. To be clear, I could make it serializable, but not to the above JSON format. After enough Aeson Whac-A-Mole I decided to change tactics.

In Code That Fits in Your Head I explain how you're usually better off separating the role of serialization from the role of Domain Model. The way to do that is exactly by defining a DTO for serialisation, and let the Domain Model remain exclusively to model the rules of the application. The above Table type plays the latter role, so we need new DTO types.

We may start with the building blocks:

newtype CommunalDTO = CommunalDTO
  { communalCapacity :: Integer
  } deriving (Eq, Show, Generic)

Notice how it declaratively derives Generic, which works because of the DeriveGeneric language pragma.

From here, in principle, all that you need is just a single declaration to make it serializable:

instance ToJSON CommunalDTO

While it does serialize to JSON, it doesn't have the right format:

"communalCapacity": 42 }

The property name should be capacity, not communalCapacity. Why did I call the record field communalCapacity instead of capacity? Can't I just fix my CommunalDTO record?

Unfortunately, I can't just do that, because I also need a capacity JSON property for the single-table case, and Haskell isn't happy about duplicated field names in the same module. (This language feature truly is one of the weak points of Haskell.)

Instead, I can tweak the Aeson rules by supplying an Options value to the instance definition:

communalJSONOptions :: Options
communalJSONOptions =
  defaultOptions {
    fieldLabelModifier = \s -> case s of
      "communalCapacity" -> "capacity"
      _ -> s }
 
instance ToJSON CommunalDTO where
  toJSON = genericToJSON communalJSONOptions
  toEncoding = genericToEncoding communalJSONOptions

This instructs the compiler to modify how it generates the serialization code, and the generated JSON fragment is now correct.

We can do the same with the single-table case:

data SingleDTO = SingleDTO
  { singleCapacity :: Integer
  , minimalReservation :: Integer
  } deriving (Eq, Show, Generic)
 
singleJSONOptions :: Options
singleJSONOptions =
  defaultOptions {
    fieldLabelModifier = \s -> case s of
      "singleCapacity" -> "capacity"
      "minimalReservation" -> "minimalReservation"
      _ -> s }
 
instance ToJSON SingleDTO where
  toJSON = genericToJSON singleJSONOptions
  toEncoding = genericToEncoding singleJSONOptions

This takes care of that case, but we still need a container type that will hold either one or the other:

data TableDTO = TableDTO
  { singleTable :: Maybe SingleDTO
  , communalTable :: Maybe CommunalDTO
  } deriving (Eq, Show, Generic)
 
tableJSONOptions :: Options
tableJSONOptions =
  defaultOptions { omitNothingFields = True }
 
instance ToJSON TableDTO where
  toJSON = genericToJSON tableJSONOptions
  toEncoding = genericToEncoding tableJSONOptions

One way to model a sum type with a DTO is to declare both cases as Maybe fields. While it does allow illegal states to be representable (i.e. both kinds of tables defined at the same time, or none of them present) this is only par for the course at the application boundary.

That's quite a bit of infrastructure to stand up, but at least most of it can be reused for parsing.

JSON deserialisation based on types #

To allow parsing of JSON into the above DTO we can make them all FromJSON instances, e.g.:

instance FromJSON CommunalDTO where
  parseJSON = genericParseJSON communalJSONOptions

Notice that you can reuse the same communalJSONOptions used for the ToJSON instance. Repeat that exercise for the two other record types.

That's only half the battle, though, since this only gives you a way to parse and serialize the DTO. What you ultimately want is to persist or dehydrate Table data.

Converting DTO to Domain Model, and vice versa #

As usual, converting a nice, encapsulated value to a more relaxed format is safe and trivial:

toTableDTO :: Table -> TableDTO
toTableDTO (Single (SingleT (N c) (N m))) = TableDTO (Just (SingleDTO c m)) Nothing
toTableDTO (Communal (N c)) = TableDTO Nothing (Just (CommunalDTO c))

Going the other way is fundamentally a parsing exercise:

tryParseTable :: TableDTO -> Maybe Table
tryParseTable (TableDTO (Just (SingleDTO c m)) Nothing) = trySingleTable c m
tryParseTable (TableDTO Nothing (Just (CommunalDTO c))) = tryCommunalTable c
tryParseTable _ = Nothing

Such an operation may fail, so the result is a Maybe Table. It could also have been an Either something Table, if you wanted to return information about errors when things go wrong. It makes the code marginally more complex, but doesn't change the overall thrust of this exploration.

Let's take stock of the type-based alternative. It requires 62 lines of code, distributed over three DTO types, their Options, their ToJSON and FromJSON instances, and finally the two conversions tryParseTable and toTableDTO.

Conclusion #

In this article I've explored two alternatives for converting a well-encapsulated Domain Model to and from JSON. One option is to directly manipulate the DOM. Another option is to take a more declarative approach and define types that model the shape of the JSON data, and then leverage type-based automation (here, Generics) to automatically produce the code that parses and writes the JSON.

I've deliberately chosen a Domain Model with some constraints, in order to demonstrate how persisting a non-trivial data model might work. With that setup, writing 'loosely coupled' code directly against the DOM requires 27 lines of code, while 'taking advantage' of type-based automation requires 62 lines of code.

To be fair, the dice don't always land that way. You can't infer a general rule from a single example, and it's possible that I could have done something clever with Aeson to reduce the code. Even so, I think that there's a conclusion to be drawn, and it's this:

Type-based automation (Generics, or run-time Reflection) may seem simple at first glance. Just declare a type and let some automation library do the rest. It may happen, however, that you need to tweak the defaults so much that it would be easier skipping the type-based approach and instead directly manipulating the DOM.

I love static type systems, but I'm also watchful of their limitations. There's likely to be an inflection point where, on the one side, a type-based declarative API is best, while on the other side of that point, a more 'lightweight' approach is better.

The position of such an inflection point will vary from context to context. Just be aware of the possibility, and explore alternatives if things begin to feel awkward.

Next: Serializing restaurant tables in F#.


Serialization with and without Reflection

Monday, 04 December 2023 20:53:00 UTC

An investigation of alternatives.

I recently wrote a tweet that caused more responses than usual:

"A decade ago, I used .NET Reflection so often that I know most the the API by heart.

"Since then, I've learned better ways to solve my problems. I can't remember when was the last time I used .NET Reflection. I never need it.

"Do you?"

Most people who read my tweets are programmers, and some are, perhaps, not entirely neurotypical, but I intended the last paragraph to be a rhetorical question. My point, really, was that if I tell you it's possible to do without Reflection, one or two readers might keep that in mind and at least explore options the next time the urge to use Reflection arises.

A common response was that Reflection is useful for (de)serialization of data. These days, the most common case is going to and from JSON, but the problem is similar if the format is XML, CSV, or another format. In a sense, even reading to and from a database is a kind of serialization.

In this little series of articles, I'm going to explore some alternatives to Reflection. I'll use the same example throughout, and I'll stick to JSON, but you can easily extrapolate to other serialization formats.

Table layouts #

As always, I find the example domain of online restaurant reservation systems to be so rich as to furnish a useful example. Imagine a multi-tenant service that enables restaurants to take and manage reservations.

When a new reservation request arrives, the system has to make a decision on whether to accept or reject the request. The layout, or configuration, of tables plays a role in that decision.

Such a multi-tenant system may have an API for configuring the restaurant; essentially, entering data into the system about the size and policies regarding tables in a particular restaurant.

Most restaurants have 'normal' tables where, if you reserve a table for three, you'll have the entire table for the duration. Some restaurants also have one or more communal tables, typically bar seating where you may get a view of the kitchen. Quite a few high-end restaurants have tables like these, because it enables them to cater to single diners without reserving an entire table that could instead have served two paying customers.

Bar seating at Ernst, Berlin.

In Copenhagen, on the other hand, it's also not uncommon to have a special room for larger parties. I think this has something to do with the general age of the buildings in the city. Most establishments are situated in older buildings, with all the trappings, including load-bearing walls, cellars, etc. As part of a restaurant's location, there may be a big cellar room, second-story room, or other room that's not practical for the daily operation of the place, but which works for parties of, say, 15-30 people. Such 'private dining' rooms can be used for private occasions or company outings.

A maître d'hôtel may wish to configure the system with a variety of tables, including communal tables, and private dining tables as described above.

One way to model such requirements is to distinguish between two kinds of tables: Communal tables, and 'single' tables, and where single tables come with an additional property that models the minimal reservation required to reserve that table. A JSON representation might look like this:

{
  "singleTable": {
    "capacity": 16,
    "minimalReservation": 10
  }
}

This may represent a private dining table that seats up to sixteen people, and where the maître d'hôtel has decided to only accept reservations for at least ten guests.

A singleTable can also be used to model 'normal' tables without special limits. If the restaurant has a table for four, but is ready to accept a reservation for one person, you can configure a table for four, with a minimum reservation of one.

Communal tables are different, though:

"communalTable": { "capacity": 10 } }

Why not just model that as ten single tables that each seat one?

You don't want to do that because you want to make sure that parties can eat together. Some restaurants have more than one communal table. Imagine that you only have two communal tables of ten seats each. What happens if you model this as twenty single-person tables?

If you do that, you may accept reservations for parties of six, six, and six, because 6 + 6 + 6 = 18 < 20. When those three groups arrive, however, you discover that you have to split one of the parties! The party getting separated may not like that at all, and you are, after all, in the hospitality business.

Exploration #

In each article in this short series, I'll explore serialization with and without Reflection in a few languages. I'll start with Haskell, since that language doesn't have run-time Reflection. It does have a related facility called generics, not to be confused with .NET or Java generics, which in Haskell are called parametric polymorphism. It's confusing, I know.

Haskell generics look a bit like .NET Reflection, and there's some overlap, but it's not quite the same. The main difference is that Haskell generic programming all 'resolves' at compile time, so there's no run-time Reflection in Haskell.

If you don't care about Haskell, you can skip that article.

As you can see, the next article repeats the exercise in F#, and if you also don't care about that language, you can skip that article as well.

The C# article, on the other hand, should be readable to not only C# programmers, but also developers who work in sufficiently equivalent languages.

Descriptive, not prescriptive #

The purpose of this article series is only to showcase alternatives. Based on the reactions my tweet elicited I take it that some people can't imagine how serialisation might look without Reflection.

It is not my intent that you should eschew the Reflection-based APIs available in your languages. In .NET, for example, a framework like ASP.NET MVC expects you to model JSON or XML as Data Transfer Objects. This gives you an illusion of static types at the boundary.

Even a Haskell web library like Servant expects you to model web APIs with static types.

When working with such a framework, it doesn't always pay to fight against its paradigm. When I work with ASP.NET, I define DTOs just like everyone else. On the other hand, if communicating with a backend system, I sometimes choose to skip static types and instead work directly with a JSON Document Object Model (DOM).

I occasionally find that it better fits my use case, but it's not the majority of times.

Conclusion #

While some sort of Reflection or metadata-driven mechanism is often used to implement serialisation, it often turns out that such convenient language capabilities are programmed on top of an ordinary object model. Even isolated to .NET, I think I'm on my third JSON library, and most (all?) turned out to have an underlying DOM that you can manipulate.

In this article I've set the stage for exploring how serialisation can work, with or (mostly) without Reflection.

If you're interested in the philosophy of science and epistemology, you may have noticed a recurring discussion in academia: A wider society benefits not only from learning what works, but also from learning what doesn't work. It would be useful if researchers published their failures along with their successes, yet few do (for fairly obvious reasons).

Well, I depend neither on research grants nor salary, so I'm free to publish negative results, such as they are.

Not that I want to go so far as to categorise what I present in these articles as useless, but the techniques are probably best applied in special circumstances. On the other hand, I don't know your context, and perhaps you're doing something I can't even imagine, and what I present here is just what you need.

Next: Serializing restaurant tables in Haskell.


Comments

gdifolco #
"I'll start with Haskell, since that language doesn't have run-time Reflection."

Haskell (the language) does not provide primitives for accessing data representation per se; during compilation, GHC (the compiler) erases a lot of information (more or less, depending on the profiling flags) in order to provide the run-time system (RTS) with a minimal "bytecode".

That being said, Haskell provides three ways to deal structurally with values:

  • TemplateHaskell: gives the ability to rewrite the AST at compile time
  • Generics: gives the ability to have a type-level representation of a type's structure
  • Typeable: gives the ability to have a value-level representation of a type's structure

Template Haskell is low-level, as it implies dealing with the AST directly. It is also harder to debug because, in order to be evaluated, the main compilation phase is stopped, the Template Haskell code is run, and then the main compilation phase continues. It also causes compilation cache issues.

Generics take a type's structure and generate a representation at the type level. The main idea is to be able to go back and forth between the type and its representation, so you can define behaviour over the structure. The good thing is that, since the representation is known at compile time, many optimizations can be done. On complex types it tends to slow down compilation and produce larger-than-usual binaries, but it is generally the way libraries are implemented.

Typeable is a purely value-level type representation: you get only one type, whatever the type structure is. It is generally used when you have "dynamic" types, and it provides safe ways to do coercion.

Haskell tends to push as many things as possible to compile time, which may explain this tendency.

2023-12-05 21:47 UTC

Thank you for writing. I was already aware of Template Haskell and Generics, but Typeable is new to me. I don't consider the first two equivalent to Reflection, since they resolve at compile time. They're more akin to automated code generation, I think. Reflection, as I'm used to it from .NET, is a run-time feature where you can inspect and interact with types and values as the code is executing.

I admit that I haven't had the time to more than browse the documentation of Typeable, and it's so abstract that it's not clear to me what it does, how it works, or whether it's actually comparable to Reflection. The first question that comes to my mind regards the type class Typeable itself. It has no instances. Is it one of those special type classes (like Coercible) that one doesn't have to explicitly implement?

2023-12-08 17:11 UTC
gdifolco #
"I don't consider the first two equivalent to Reflection, since they resolve at compile time. They're more akin to automated code generation, I think. Reflection, as I'm used to it from .NET, is a run-time feature where you can inspect and interact with types and values as the code is executing."

I don't know the .NET ecosystem well, but I guess you can get at run time the information we have at compile time with Template Haskell and Generics, so I think you are right.

"I admit that I haven't had the time to more than browse the documentation of Typeable, and it's so abstract that it's not clear to me what it does, how it works, or whether it's actually comparable to Reflection. The first question that comes to my mind regards the type class Typeable itself. It has no instances. Is it one of those special type classes (like Coercible) that one doesn't have to explicitly implement?"

You can derive Typeable like any other type class:

{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE DerivingStrategies #-}

import Data.Typeable (Typeable)

-- GHC generates the Typeable instance automatically; the deriving clause merely mentions it.
data MyType = MyType
  { myString :: String,
    myInt :: Int
  }
  deriving stock (Eq, Show, Typeable)

It's pretty "low-level", typeRep gives you a TypeRep a (a being the type represented) which is a representation of the type with primitive elements (More details here).

Then, you'll be able to either pattern match on it, or cast it (which is not like casting in Java, for example, because you are just proving to the compiler that two types are equivalent).
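As a small, hypothetical example of that kind of value-level inspection (the describe function is only for illustration):

import Data.Typeable (Typeable, cast, typeOf)

-- typeOf reifies the type of a value, and cast succeeds only when the
-- run-time type representations match.
describe :: Typeable a => a -> String
describe x =
  case cast x :: Maybe Int of
    Just n  -> "an Int: " ++ show n
    Nothing -> "a value of type " ++ show (typeOf x)

-- ghci> describe (42 :: Int)
-- "an Int: 42"
-- ghci> describe False
-- "a value of type Bool"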

2023-12-11 17:11 UTC


"Our team wholeheartedly endorses Mark. His expert service provides tremendous value."
Hire me!