Yes silver bullet

Monday, 01 July 2019 07:38:00 UTC

Since Fred Brooks published his essay, I believe that we, contrary to his prediction, have witnessed several silver bullets.

I've been rereading Fred Brooks's 1986 essay No Silver Bullet because I've become increasingly concerned that people seem to draw the wrong conclusions from it. Semantic diffusion seems to have set in. These days, when people state something along the lines that there's no silver bullet in software development, I often get the impression that they mean that there's no panacea.

Indeed; I agree. There's no miracle cure that will magically make all problems in software development go away. That's not what the essay states, however. It is, fortunately, more subtle than that.

No silver bullet reread #

It's a great essay. It's not my intent to dispute the central argument of the essay, but I think that Brooks made one particular assumption that I disagree with. That doesn't make me smarter in any way. He wrote the essay in 1986. I'm writing this in 2019, with the benefit of the experience of all the years in-between. Hindsight is 20-20, so anyone could make the observations that I do here.

Before we get to that, though, a brief summary of the essence of the essay is in order. In short, the conclusion is this:

"There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity."

Fred Brooks, No Silver Bullet, 1986
The beginning of the essay is a brilliant analysis of the reasons why software development is inherently difficult. If you read this together with Jack Reeves' What Is Software Design? (available in various places on the internet, or as an appendix in APPP), you'll probably agree that there's an inherent complexity to software development that no invention is likely to dispel.

Ostensibly in the tradition of Aristotle, Brooks distinguishes between essential and accidental complexity. This distinction is central to his argument, so it's worth discussing for a minute.

Software development problems are complex, i.e. made up of many interacting sub-problems. Some of that complexity is accidental. This doesn't imply randomness or sloppiness, but only that the complexity isn't inherent to the problem; that it's only the result of our (human) failure to achieve perfection.

If you imagine that you could whittle away all the accidental complexity, you'd ultimately reach a point where, in the words of Saint Exupéry, there is nothing more to remove. What's left is the essential complexity.

Brooks' conjecture is that a typical software development project comes with both essential and accidental complexity. In his 1995 reflections "No Silver Bullet" Refired (available in The Mythical Man-Month), he clarifies what he already implied in 1986:

"It is my opinion, and that is all, that the accidental or representational part of the work is now down to about half or less of the total."

Fred Brooks, "No Silver Bullet" Refired, 1995
This I fundamentally disagree with, but more on that later. It makes sense to me to graphically represent the argument like this:

Some, but not much, accidental complexity as a shell around essential complexity.

The way that I think of Brooks' argument is that any software project contains some essential and some accidental complexity. For a given project, the size of the essential complexity is fixed.

Brooks believes that less than half of the overall complexity is accidental:

Essential and accidental complexity pie chart.

While a pie chart better illustrates the supposed ratio between the two types of complexity, I prefer to view Brooks' arguments as the first diagram, above. In that visualisation, the essential complexity is a core of fixed size, while accidental complexity is something you can work at removing. If you keep improving your process and technology, you may, conceptually, be able to remove (almost) all of it.

Essential complexity with a very thin shell of accidental complexity.

Brooks' point, with which I agree, is that if the essential complexity is inherent, then you can't reduce the size of it. The only way to decrease the overall complexity is to reduce the accidental complexity.

If you agree with the assessment that less than half of the overall complexity in modern software development is accidental, then it follows that no dramatic improvements are available. Even if you remove all accidental complexity, you've only reduced overall complexity by, say, forty percent.

Accidental complexity abounds #

I find Brooks' arguments compelling. I do not, however, accept the premise that there's only little accidental complexity left. Instead of the above diagrams, I believe that the situation looks more like this (not to scale):

Accidental complexity with a tiny core of essential complexity.

I think that most of the complexity in software development is accidental. I'm not sure about today, but I believe that I have compelling evidence that this was the case in 1986, so I don't see why it shouldn't still be the case.

To be clear, this is all anecdotal, since I don't believe that software development is quantifiable. In the essay, Brooks explicitly talks about the invisibility of software. Software is pure thought stuff; you can't measure it. I discuss this in my Humane Code video, but I also recommend that you read The Leprechauns of Software Engineering if you have any illusions that we, as an industry, have any reliable measurements of productivity.

Brooks predicts that, within the decade (from 1986 to 1996), there would be no single development that would increase productivity by an order of magnitude, i.e. by a factor of at least ten. Ironically, when he wrote "No Silver Bullet" Refired in 1995, at least two such developments were already in motion.

We can't blame Brooks for not identifying those developments, because in 1995, their impact was not yet apparent. Again, hindsight is 20-20.

Neither of these two developments is purely technological, although technology plays a role. Notice, though, that Brooks' prediction included technology or management technique. It's in the interaction between technology and humans that the orders-of-magnitude developments emerged.

World Wide Web #

I have a dirty little secret. In the beginning of my programming career, I became quite the expert on a programming framework called Microsoft Commerce Server. In fact, I co-authored a chapter of Professional Commerce Server 2000 Programming, and in 2003 I received an MVP award as an acknowledgement of my work in the Commerce Server community (such as it was; it was mostly on Usenet).

The Commerce Server framework was a black box. This was long before Microsoft embraced open source, and while there was a bit of official documentation, it was superficial; it was mostly of the getting-started kind.

Over several years, I managed to figure out how the framework really worked, and thus, how one could extend it. This was a painstaking process. Since it was a black box, I couldn't just go and read the code to figure out how it worked. The framework was written in C++ and Visual Basic, so there wasn't even IL code to decompile.

I had one window into the framework. It relied on SQL Server, and I could attach the profiler tool to spy on its interaction with the database. Painstakingly, over several years, I managed to wrest the framework's secrets from it.

I wasted much time doing detective work like that.

In general, programming in the late nineties and early two-thousands was less productive, not because the languages or tools were orders-of-magnitude worse than today, but because when you hit a snag, you were in trouble.

These days, if you run into a problem beyond your abilities, you can ask for help on the World Wide Web. Usually, you'll find an existing answer on Stack Overflow, and you'll be able to proceed without too much delay.

Compared to twenty years ago, I believe that the World Wide Web has increased my productivity more than ten-fold. While it also existed in 1995, there wasn't much content. It's not the technology itself that provides the productivity increase, but rather the synergy of technology and human knowledge.

I think that Brooks vastly underestimated how much time one can waste when one is stuck. That's a sort of accidental complexity, although in the development process rather than in the technology itself.

Automated testing #

In the late nineties, I was developing web sites (with Commerce Server). When I wanted to run my code to see if it worked, I'd launch the web site on my laptop, log in, click around and enter data until I was convinced that the functionality was working as it should. Most of the time, however, it wasn't, so I'd change a bit of the code, and go through the same process again.

I think that's a common way to 'test' software; at least, it was back then.

While you could get good at going through these motions quickly, verifying a single functionality, or a handful of related ones, could easily take at least a couple of seconds, and usually more like half a minute.

If you had dozens, or even hundreds, of different scenarios to address, you obviously wouldn't run through them all every time you changed the code. At the very best, you'd click your way through three or four usage scenarios that you thought were relevant to the change you'd made. Other functionality, earlier declared done, you just considered to be unaffected.

Needless to say, regressions were regular occurrences.

In 2003 I discovered test-driven development, and through that, automated testing. While you can't directly compare unit tests with whole usage scenarios, I think it's fair to compare something like automated integration tests or user-scenario tests (whatever you want to call them) with manually clicking through an application.

Even an integration test, if written properly, can verify a scenario at least ten times faster than you can do it by hand. A more realistic estimate is probably a hundred times faster, or more.

Granted, you have to write the automated test as well, and I know that it's not always trivial. Still, once you have an automated test suite in place, you can run it all the time.

I never ran through all usage scenarios when I manually 'tested' my software. With automated tests, I do. This saves me from most regressions.

This improvement is, in my opinion, a no-brainer. It's easily a factor-of-ten improvement. All the time wasted manually 'testing' the software, plus the time wasted fixing regressions, can be put to better use.

At the time Brooks was writing his own retrospective (in 1995), Kent Beck was beginning to talk to other people about test-driven development. As is a common theme in this article, hindsight is 20-20.

Honourable mentions #

There have been other improvements in software development since 1986. I considered including several other improvements as bona fide orders-of-magnitude improvements, but I think that's probably going too far. Each of the following developments has, however, offered significant improvements:

  • Git. It's surprising how much more productive Git can make you. While it's somewhat better than centralised source control systems at the functionality they have in common, the productivity increase comes from all the new, unanticipated workflows it enables. Before I started using DVCS, I'd have lots of code that was commented out, so that I could experiment with various alternatives. With Git, I just create a new branch, or stash my changes, and experiment with abandon. While it's probably not a ten-fold increase in productivity, I believe it's the simplest technology change you can make to dramatically increase your productivity.
  • Garbage collection. Since I've admitted that I worked with Microsoft Commerce Server, I've probably lost all credibility with my reader already, but let's see if I can win back a little. While Commerce Server programming involved VBScript programming, it also often involved COM programming, and I did quite a bit of that in C++. Having to make sure that you've cleaned up all memory after use is a bother. Garbage collection just makes this work go away. It's hardly a ten-fold improvement in productivity, but I do find it significant.
  • Agile software development. The methodology of decreasing the feedback time between implementation and deployment has made me much more productive. I'm not interested in peddling any particular methodology like Scrum as much as just the general concept of getting rapid feedback. Particularly if you combine continuous delivery with Git, you have a powerful combination. Brooks already talked about incremental software development, and had some hopes attached to this as well. My personal experience can only agree with his sentiment. Again, probably not in itself a ten-fold increase in productivity, but enough that I wouldn't want to work on a project where rapid feedback and incremental development wasn't valued.
I'm probably forgetting lots of other improvements that have happened in the last decades. That's fine. The purpose of this article isn't to produce an exhaustive list, but rather to make the argument that significant improvements have been made since Brooks wrote his essay. I think it'd be folly, then, to believe that we've seen the last of such improvements.

Personally, I'm inclined to believe another order-of-magnitude improvement is right at our feet.

Statically typed functional programming #

This section is conjecture on my part. The improvements I've so far covered are already realised (at least for those who choose to take advantage of them). The improvement I'll cover here is more speculative.

I believe that statically typed functional programming offers another order-of-magnitude improvement over existing software development. Twenty years ago, I believed that object-oriented programming was a good idea. I now believe that I was wrong about that, so it's possible that in another twenty years, I'll also believe that I was wrong about functional programming. Take the following for what it is.

When I carefully reread No Silver Bullet, I got the distinct impression that Brooks considered low-level details of programming part of its essential complexity:

"Much of the complexity in a software construct is, however, not due to conformity to the external world but rather to the implementation itself - its data structures, its algorithms, its connectivity."

Fred Brooks, "No Silver Bullet" Refired, 1995
It's unreasonable to blame anyone writing in 1986, or 1995 for that matter, for thinking that for loops, variables, program state, and other such programming staples were anything but essential parts of the complexity of developing software.

Someone, unfortunately I forget who, once made the point that all mainstream programming languages are layers of abstractions of how a CPU works. Assembly language is basically just mnemonics on top of a CPU instruction set, then C can be thought of as an abstraction over assembly language, C++ as the next step in abstraction, Java and C# as sort of abstractions of C++, and so on. The origin of the design is the physical CPU. You could say that these languages are designed in a bottom-up fashion.

Imperative languages depicted as designed bottom-up, and functional languages as designed top-down.

Some functional languages (perhaps most famously Haskell, but also APL, and, possibly, Lisp) are designed in a much more top-down fashion. You start with mathematical abstractions like category theory and then figure out how to crystallise the theory into a programming language, and then again, via more layers of abstractions, how to turn the abstract language into machine code.

The more you learn about the pure functional alternative to programming, the more you begin to see mutable program state, variables, for loops, and similar language constructs merely as artefacts of the underlying model. Brooks, I think, thought of these as part of the essential complexity of programming. I don't think that that's the case. You can get by just fine with other abstractions instead.

Besides, Brooks writes, under the heading of Complexity:

"From the complexity comes the difficulty of enumerating, much less understanding, all the possible states of the program, and from that comes the unreliability. From the complexity of the functions comes the difficulty of invoking those functions, which makes programs hard to use."

Fred Brooks, No Silver Bullet, 1986
When he writes functions, I don't think that he means functions in the Haskell sense. I think that he means operations, procedures, or methods.

Indeed, when you look at a C# method signature like the following, it's hard to enumerate, understand, or remember, all that it does:

int? TryAccept(Reservation reservation);

If this is a high-level function, many things could happen when you call that method. It could change the state of a database. It could send an email. It could mutate a variable. Not only that, but the behaviour could depend on non-deterministic factors, such as the date, time of day, or just raw randomness. Finally, how should you handle the return value? What does it mean if the return value is null? What if it's not? Is 0 a valid value? Are negative numbers valid? Are they different from positive values?

It is, indeed, difficult to enumerate all the possible states of such a function.

Consider, instead, a Haskell function with a type like this:

tryAccept :: Int -> Reservation -> MaybeT ReservationsProgram Int

What happens if you invoke this function? It returns a value. Does it send any emails? Does it mutate any state? No, it can't, because the static type informs us that this is a pure function. If any programmer, anywhere inside of the function, or the functions it calls, or functions they call, etc. tried to do something impure, it wouldn't have compiled.
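
To illustrate the difference with a minimal sketch of my own (not code from the repository), compare a pure function with an effectful action:

double :: Int -> Int
double x = x * 2

-- Any effect must surface in the type; you can't hide I/O behind Int -> Int.
readAndDouble :: IO Int
readAndDouble = do
  line <- getLine
  return (2 * read line)

If double tried to call getLine, the type checker would reject the program.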

Can we enumerate the states of the program? Certainly. We just have to figure out what ReservationsProgram is. After following a few types, we find this statically typed enumeration:

data ReservationsInstruction next =
    IsReservationInFuture Reservation (Bool -> next)
  | ReadReservations UTCTime ([Reservation] -> next)
  | Create Reservation (Int -> next)
  deriving Functor

Essentially, there are three 'actions' that this type enables. The tryAccept function returns the ReservationsProgram inside of a MaybeT container, so there's a fourth option: that something short-circuits along the way.
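
For reference, MaybeT isn't specific to this code base; it's the standard monad transformer from the transformers package, and its definition shows where the short-circuiting comes from:

newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

A Nothing produced anywhere in the computation aborts the rest of it.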

You don't even have to keep track of this yourself. The compiler keeps you honest. Whenever you invoke the tryAccept function, the compiler will insist that you write code that can handle all possible outcomes. If you turn on the right compiler flags, the code is not going to compile if you don't.

(Both code examples are taken from the same repository.)
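
As a minimal illustration of that exhaustiveness checking, here's a toy example of my own (not taken from that repository), compiled with the relevant warning promoted to an error:

{-# OPTIONS_GHC -Wall -Werror=incomplete-patterns #-}

data Outcome = Accepted Int | Rejected

describe :: Outcome -> String
describe (Accepted i) = "Accepted, ID " ++ show i
describe Rejected     = "Rejected"

Delete either equation of describe, and the module no longer compiles.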

Haskellers jokingly declare that if Haskell code compiles, it works. While humorous, there's a kernel of truth in that. An advanced type system can carry much information about the behaviour of a program. Some people, particularly programmers who come from a dynamically typed background, find Haskell's type system rigid. That's not an unreasonable criticism, but often, in dynamically typed languages, you have to write many automated tests to ensure that your program behaves as desired, and that it correctly handles various edge cases. A type system like Haskell's, on the other hand, embeds those rules in types instead of in tests.
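
Here's a simple sketch of my own of such a rule embedded in a type: where a dynamically typed code base would need a test verifying that a function rejects empty input, Haskell can make empty input unrepresentable:

import Data.List.NonEmpty (NonEmpty)

average :: NonEmpty Double -> Double
average xs = sum xs / fromIntegral (length xs)

No test for the empty case is needed, because no caller can construct an empty NonEmpty list.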

While you should still write automated tests for Haskell programs, fewer are needed. How many fewer? Compared to C-based languages, a factor of ten isn't an unreasonable guess.

After a few false starts, in 2014 I finally decided that F# would be my default choice of language on .NET. The reason for that decision was that I felt so much more productive in F# compared to C#. While F#'s type system doesn't embed information about pure versus impure functions, it does support sum types, which is what enables the sort of compile-time enumeration that Brooks discusses.

F# is still my .NET language of choice, but I find that I mostly 'think in' Haskell these days. My conjecture is that a sufficiently advanced type system (like Haskell's) could easily represent another order-of-magnitude improvement over mainstream imperative languages.

Improvements for those who want them #

The essay No Silver Bullet is a perspicacious work. I think more people should read at least the first part, where Brooks explains why software development is hard. I find that analysis brilliant, and I agree: software development presupposes essential complexity. It's inherently hard.

There's no reason to make it harder than it has to be, though.

More than once, I've discussed productivity improvements with people, only to be met with the dismissal that 'there's no silver bullet'.

Granted, there's no magical solution that will solve all problems with software development, but that doesn't mean that improvements can't be had.

Consider the improvements I've argued for here. Everyone now uses the World Wide Web and sites like Stack Overflow for research; that particular improvement is firmly embedded in all organisations. On the other hand, I still regularly talk to organisations that don't routinely use automated testing.

People still use centralised version control (like TFS or SVN). If there was ever a low-hanging fruit, changing to Git is one. Git is free, and there are plenty of tools you can use to migrate your version history to it. There's also plenty of training and help to be had. Yes, it'll require a small investment to make the change, but the productivity increase is significant.

"The future is already here — it's just not very evenly distributed."

William Gibson
So it is with technology improvements. Automated testing is available, but not ubiquitous. Git is free, but still organisations stick to suboptimal version control. Haskell and F# are mature languages, yet programmers still program in C# or Java.

Summary #

The essay No Silver Bullet was written in 1986, but seems to me to be increasingly misunderstood. When people today talk about it at all, it's mostly as an excuse to stay where they are. "There's no silver bullets," they'll say.

The essay, however, doesn't argue that no improvements can be had. It only argues that no more order-of-magnitude improvements can be had.

In the present essay I argue that, since Brooks wrote No Silver Bullet, more than one such improvement happened. Once the World Wide Web truly began furnishing information at your fingertips, you could be more productive because you wouldn't be stuck for days or weeks. Automated testing reduces the work that manual testers used to perform, as well as limiting regressions.

If you accept my argument, that order-of-magnitude improvements appeared after 1986, this implies that Brooks' premise was wrong. In that case, there's no reason to believe that we've seen the last significant improvement to software development.

I think that more such improvements await us. I suggest that statically typed functional programming offers such an advance, but if history teaches us anything, it seems that breakthroughs tend to be unpredictable.


Comments

As always I enjoy reading your blog, even though I don't understand half of it most of the time. Or is that most of it half of the time? Allow me to put a few observations forward.

First I should confess that I have actually not read the whole of Brooks's essay. When I initially tried, I got about halfway through; it sounds like I should make another go at it. That of course will not stop me from commenting on the above.

Brooks talks about complexity. To me, designing and implementing a software system is not complex. Quantum physics is complex. Flying an airplane is difficult. Software development may be difficult depending on the task at hand (and unfortunately the qualifications of the team), but I would argue that it at most falls into the same category as flying an airplane.

I would probably also state that there are no silver bullets. But like you I feel that people understand it incorrectly, and there is definitely no reason for making things harder than they are. I think the examples of technology that help are excellent and exactly describe that things do move forward.

That being said, it does not take away the creativity of the right decomposition, the responsibility for getting the use cases right, and especially the liability for getting it wrong. Sadly, especially the last is overlooked. People should be reminded of where the phrase 'live under the bridge' comes from.

To end my ramblings, I would also look a little into the future. As you know I am somewhat sceptical about machine learning and AI. However, looking at the recent breakthroughs and use cases in these areas, I would not be surprised by a future where software development is done by 'an AI' assembling pre-defined 'entities' to create the software we need. Just as an F16 cannot be flown without a computer, future software cannot be created by a human.

2019-07-04 18:29:00 UTC

Karsten, thank you for writing. I'm not inclined to agree that software development falls into the same category of complexity as flying a plane. It seems to me to be orders of magnitude more complex.

Just look at error rates.

Would you ever board an airplane if flying had error rates similar to those observed in software development? Would you fly even if only one percent of all flights ended with a plane crash?

In reality, flying is extremely safe. Would you claim that software development is as safe, predictable, and manageable as flying?

I see no evidence of that.

Are pilots significantly more capable human beings than software developers, or does something else explain the discrepancy in failure rates?

2019-07-05 15:47 UTC

Hi Mark. The fact that error rates are higher in software development is more a statement about the bad state our industry is in, and has been in for a millennium or more.

Why do we accept that we produce crappy systems, or in your words software that is not safe, predictable, and manageable? The list of excuses is very long and the list of results is very short. We as an industry are simply doing it wrong, but most people prefer hand waving and marketing over simple and plausible heuristics.

To use your analogy about planes, I could ask if you would fly in a plane that had (only) been unit tested? Probably not, as it is never the unit that fails, but always the integration. Should we test all integrations then? Yes, why not?

The use of planes or pilots (or whatever) as an analogy may have been bad. My point was that I do not see software development as complex.

2019-07-05 20:12 UTC

Karsten, if we, as an industry, are doing it wrong, then why are we doing that?

And what should we be doing instead?

2019-07-06 16:00 UTC

Full binary tree catamorphism

Monday, 24 June 2019 06:00:00 UTC

The catamorphism for a full binary tree is a pair of functions.

This article is part of an article series about catamorphisms. A catamorphism is a universal abstraction that describes how to digest a data structure into a potentially more compact value.

This article presents the catamorphism for a full binary tree, as well as how to identify it. The beginning of this article presents the catamorphism in C#, with examples. The rest of the article describes how to deduce the catamorphism. This part of the article presents my work in Haskell. Readers not comfortable with Haskell can just read the first part, and consider the rest of the article as an optional appendix.

A full binary tree (also known as a proper or plane binary tree) is a tree in which each node has either two or no branches.

A full binary tree example diagram, with each node containing integers.

The diagram shows an example of a tree of integers. The left branch contains two children, of which the right branch again contains two sub-branches. The rest of the nodes are leaf-nodes with no sub-branches.

C# catamorphism #

As a C# representation of a full binary tree, I'll start with the IBinaryTree<T> API from A Visitor functor. The catamorphism is the Accept method:

TResult Accept<TResult>(IBinaryTreeVisitor<T, TResult> visitor);

So far in this article series, you've mostly seen Church-encoded catamorphisms, so a catamorphism represented as a Visitor may be too big of a cognitive leap. We know, however, from Visitor as a sum type that a Visitor representation is isomorphic to a Church encoding. Since these are isomorphic, it's possible to refactor IBinaryTree<T> to a Church encoding. The GitHub repository contains a series of commits that demonstrates how that refactoring works. Once you're done, you arrive at this Match method, which is the refactored Accept method:

TResult Match<TResult>(Func<TResult, T, TResult, TResult> node, Func<T, TResult> leaf);

This method takes a pair of functions as arguments. The node function deals with an internal node in the tree (the blue nodes in the above diagram), whereas the leaf function deals with the leaf nodes (the green nodes in the diagram).

The leaf function may be the easiest one to understand. A leaf node only contains a value of the type T, so the only operation the function has to support is translating the T value to a TResult value. This is also the premise of the Leaf class' implementation of the method:

private readonly T item;
 
public TResult Match<TResult>(Func<TResult, T, TResult, TResult> node, Func<T, TResult> leaf)
{
    return leaf(item);
}

The node function is more tricky. It takes three input arguments, of the types TResult, T, and TResult. The roles of these are respectively left, item, and right. This is a typical representation of a binary node. Since there's always a left and a right branch, you put the node's value in the middle. As was the case with the tree catamorphism, the catamorphism function receives the branches as already-translated values; that is, both the left and right branch have already been translated to TResult when node is called. While it looks like magic, as always it's just the result of recursion:

private readonly IBinaryTree<T> left;
private readonly T item;
private readonly IBinaryTree<T> right;
 
public TResult Match<TResult>(Func<TResult, T, TResult, TResult> node, Func<T, TResult> leaf)
{
    return node(left.Match(node, leaf), item, right.Match(node, leaf));
}

This is the Node<T> class implementation of the Match method. It calls node and returns whatever it returns, but notice that as the left and right arguments, it first, recursively, calls left.Match and right.Match. This is how it can call node with the translated branches, as well as with the basic item.

The recursion stops and unwinds on left and right whenever one of those are Leaf instances.

Examples #

You can use Match to implement most other behaviour you'd like IBinaryTree<T> to have. In the original article on the full binary tree functor you saw how to implement Select with a Visitor, but now that the API is Church-encoded, you can derive Select from Match:

public static IBinaryTree<TResult> Select<TResult, T>(
    this IBinaryTree<T> tree,
    Func<T, TResult> selector)
{
    if (tree == null)
        throw new ArgumentNullException(nameof(tree));
    if (selector == null)
        throw new ArgumentNullException(nameof(selector));
 
    return tree.Match(
        node: (l, x, r) => Create(l, selector(x), r),
        leaf: x => Leaf(selector(x)));
}

In the leaf case, the Select method simply calls selector with the x value it receives, and puts the resulting TResult object into a new Leaf object.

In the node case, the lambda expression receives three arguments: l and r are the already-translated left and right branches, so you only need to call selector on x and call the Create helper method to produce a new Node object.

You can also implement more specialised functionality, like calculating the sum of nodes, measuring the depth of the tree, and similar functions. You saw equivalent examples in the previous article.

For the examples in this article, I'll use the tree shown in the above diagram. Using static helper methods, you can write it like this:

var tree =
    BinaryTree.Create(
        BinaryTree.Create(
            BinaryTree.Leaf(42),
            1337,
            BinaryTree.Create(
                BinaryTree.Leaf(2112),
                5040,
                BinaryTree.Leaf(1984))),
        2,
        BinaryTree.Leaf(90125));

To calculate the sum of all nodes, you can write a function like this:

public static int Sum(this IBinaryTree<int> tree)
{
    return tree.Match((l, x, r) => l + x + r, x => x);
}

The leaf function just returns the value of the node, while the node function adds the numbers together. It works for the above tree:

> tree.Sum()
100642

To find the maximum value, you can write another extension method:

public static int Max(this IBinaryTree<int> tree)
{
    return tree.Match((l, x, r) => Math.Max(Math.Max(l, r), x), x => x);
}

Again, the leaf function just returns the value of the node. The node function receives the value of the current node x, as well as the already-found maximum value of the left branch and the right branch; it then returns the maximum of these three values:

> tree.Max()
90125

As was also the case for trees, both of these operations are part of the standard repertoire available via a data structure's fold. That's not the case for the next two functions, which can't be implemented using a fold, but which can be defined with the catamorphism. The first is a function to count the leaves of a tree:

public static int CountLeaves<T>(this IBinaryTree<T> tree)
{
    return tree.Match((l, _, r) => l + r, _ => 1);
}

Since the leaf function handles a leaf node, the number of leaf nodes in a leaf node is, by definition, one. Thus, that function can ignore the value of the node and always return 1. The node function, on the other hand, receives the number of leaf nodes on the left-hand side (l), the value of the current node, and the number of leaf nodes on the right-hand side (r). Notice that since an internal node is never a leaf node, it doesn't count; instead, just add l and r together. Notice that, again, the value of the node itself is irrelevant.

How many leaf nodes does the above tree have?

> tree.CountLeaves()
4

You can also measure the maximum depth of a tree:

public static int MeasureDepth<T>(this IBinaryTree<T> tree)
{
    return tree.Match((l, _, r) => 1 + Math.Max(l, r), _ => 0);
}

Like in the previous article, I've arbitrarily decided that the depth of a leaf node is zero; therefore, the leaf function always returns 0. The node function receives the depth of the left and right branches, and returns the maximum of those two values, plus one, since the current node adds one level of depth.

> tree.MeasureDepth()
3

You may not have much need for working with full binary trees in your normal, day-to-day C# work, but I found it worthwhile to include this example for a couple of reasons. First, because the original version of the API shows that a catamorphism may be hiding in a Visitor. Second, because binary trees are interesting, in that they're foldable functors, but not monads.

Where does the catamorphism come from, though? How can you trust that the Match method is the catamorphism?

Binary tree F-Algebra #

As in the previous article, I'll use Fix and cata as explained in Bartosz Milewski's excellent article on F-Algebras.

As always, start with the underlying endofunctor. You can think of this one as a specialisation of the rose tree from the previous article:

data FullBinaryTreeF a c = LeafF a | NodeF c a c deriving (Show, Eq, Read)
 
instance Functor (FullBinaryTreeF a) where
  fmap _     (LeafF x) = LeafF x
  fmap f (NodeF l x r) = NodeF (f l) x (f r)

As usual, I've called the 'data' type a and the carrier type c (for carrier). The Functor instance as usual translates the carrier type; the fmap function has the type (c -> c1) -> FullBinaryTreeF a c -> FullBinaryTreeF a c1.

As was the case when deducing the recent catamorphisms, Haskell isn't too happy about defining instances for a type like Fix (FullBinaryTreeF a). To address that problem, you can introduce a newtype wrapper:

newtype FullBinaryTreeFix a =
  FullBinaryTreeFix { unFullBinaryTreeFix :: Fix (FullBinaryTreeF a) }
  deriving (Show, Eq, Read)

You can define Functor, Foldable, and Traversable instances (but not Monad) for this type without resorting to any funky GHC extensions. Keep in mind that ultimately, the purpose of all this code is just to figure out what the catamorphism looks like. This code isn't intended for actual use.

A pair of helper functions make it easier to define FullBinaryTreeFix values:

fbtLeafF :: a -> FullBinaryTreeFix a
fbtLeafF = FullBinaryTreeFix . Fix . LeafF
 
fbtNodeF :: FullBinaryTreeFix a -> a -> FullBinaryTreeFix a -> FullBinaryTreeFix a
fbtNodeF (FullBinaryTreeFix l) x (FullBinaryTreeFix r) = FullBinaryTreeFix $ Fix $ NodeF l x r

In order to distinguish these helper functions from the ones that create TreeFix a values, I prefixed them with fbt (for Full Binary Tree). fbtLeafF creates a leaf node:

Prelude Fix FullBinaryTree> fbtLeafF "fnaah"
FullBinaryTreeFix {unFullBinaryTreeFix = Fix (LeafF "fnaah")}

fbtNodeF is a helper function to create an internal node:

Prelude Fix FullBinaryTree> fbtNodeF (fbtLeafF 1337) 42 (fbtLeafF 2112)
FullBinaryTreeFix {unFullBinaryTreeFix = Fix (NodeF (Fix (LeafF 1337)) 42 (Fix (LeafF 2112)))}

The FullBinaryTreeFix type, or rather the underlying FullBinaryTreeF a functor, is all you need to identify the catamorphism.

Haskell catamorphism #

At this point, you have two out of three elements of an F-Algebra. You have an endofunctor (FullBinaryTreeF a), and an object c, but you still need to find a morphism FullBinaryTreeF a c -> c. Notice that the algebra you have to find is the function that reduces the functor to its carrier type c, not the 'data type' a. This takes some time to get used to, but that's how catamorphisms work. This doesn't mean, however, that you get to ignore a, as you'll see.

As in the previous articles, start by writing a function that will become the catamorphism, based on cata:

fullBinaryTreeF = cata alg . unFullBinaryTreeFix
  where alg     (LeafF x) = undefined
        alg (NodeF l x r) = undefined

While this compiles, with its undefined implementation of alg, it obviously doesn't do anything useful. I find, however, that it helps me think. How can you return a value of the type c from alg? You could pass a function argument to the fullBinaryTreeF function and use it with x:

fullBinaryTreeF fl = cata alg . unFullBinaryTreeFix
  where alg     (LeafF x) = fl x
        alg (NodeF l x r) = undefined

I called the function fl for function, leaf, because we're also going to need a function for the NodeF case:

fullBinaryTreeF :: (c -> a -> c -> c) -> (a -> c) -> FullBinaryTreeFix a -> c
fullBinaryTreeF fn fl = cata alg . unFullBinaryTreeFix
  where alg     (LeafF x) = fl x
        alg (NodeF l x r) = fn l x r

This works. Since cata has the type Functor f => (f a -> a) -> Fix f -> a, that means that alg has the type f a -> a. In the case of FullBinaryTreeF, the compiler infers that the alg function has the type FullBinaryTreeF a c -> c, which is just what you need!

You can now see what the carrier type c is for. It's the type that the algebra extracts, and thus the type that the catamorphism returns.

This, then, is the catamorphism for a full binary tree. As always, it's not the only possible catamorphism, since you can easily reorder the arguments of fullBinaryTreeF, as well as those of fn and fl. These would all be isomorphic, though.
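
As a quick sanity check, you can already use fullBinaryTreeF directly; for example, to add up the values of a small tree:

Prelude Fix FullBinaryTree> fullBinaryTreeF (\l x r -> l + x + r) id (fbtNodeF (fbtLeafF 1337) 42 (fbtLeafF 2112))
3491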

Basis #

You can implement most other useful functionality with fullBinaryTreeF. Here's the Functor instance:

instance Functor FullBinaryTreeFix where
  fmap f = fullBinaryTreeF (\l x r -> fbtNodeF l (f x) r) (fbtLeafF . f)

The fl function first invokes f, followed by fbtLeafF. The fn function uses the fbtNodeF helper function to create a new internal node. l and r are already-translated branches, so you just need to call f with the node value x.

There's no Monad instance for binary trees, because you can't flatten a binary tree of binary trees. You can, on the other hand, define a Foldable instance:

instance Foldable FullBinaryTreeFix where
  foldMap f = fullBinaryTreeF (\l x r -> l <> f x <> r) f

The f function passed to foldMap has the type Monoid m => (a -> m), so the fl function that handles leaf nodes simply calls f with the contents of the node. The fn function receives two branches already translated to m, so it just has to call f with x and combine all the m values using the <> operator.
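
As a usage sketch, you can fold with the list monoid to collect the values in order:

Prelude Fix FullBinaryTree> foldMap (: []) (fbtNodeF (fbtLeafF 1337) 42 (fbtLeafF 2112))
[1337,42,2112]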

The Traversable instance follows right on the heels of Foldable:

instance Traversable FullBinaryTreeFix where
  sequenceA = fullBinaryTreeF (liftA3 fbtNodeF) (fmap fbtLeafF)
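
As a usage sketch, sequenceA turns a tree of Maybe values into a Maybe tree; a single Nothing anywhere collapses the entire result to Nothing:

Prelude Fix FullBinaryTree> sequenceA (fbtNodeF (fbtLeafF (Just 1)) (Just 2) (fbtLeafF (Just 3)))
Just (FullBinaryTreeFix {unFullBinaryTreeFix = Fix (NodeF (Fix (LeafF 1)) 2 (Fix (LeafF 3)))})
Prelude Fix FullBinaryTree> sequenceA (fbtNodeF (fbtLeafF (Just 1)) Nothing (fbtLeafF (Just 3)))
Nothing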

There are operations on binary trees that you can implement with a fold, but some that you can't. Consider the tree shown in the diagram at the beginning of the article. This is also the tree that the above C# examples use. In Haskell, using FullBinaryTreeFix, you can define that tree like this:

tree = 
  fbtNodeF
    (fbtNodeF
      (fbtLeafF 42)
      1337
      (fbtNodeF
        (fbtLeafF 2112)
        5040
        (fbtLeafF 1984)))
    2
    (fbtLeafF 90125)

Since FullBinaryTreeFix is Foldable, and that type class already comes with sum and maximum functions, no further work is required to repeat the first two of the above C# examples:

Prelude Fix FullBinaryTree> sum tree
100642
Prelude Fix FullBinaryTree> maximum tree
90125

Counting leaves, or measuring the depth of a tree, on the other hand, is impossible with the Foldable instance, but can be implemented using the catamorphism:

countLeaves :: Num n => FullBinaryTreeFix a -> n
countLeaves = fullBinaryTreeF (\l _ r -> l + r) (const 1)
 
treeDepth :: (Ord n, Num n) => FullBinaryTreeFix a -> n
treeDepth = fullBinaryTreeF (\l _ r -> 1 + max l r) (const 0)

The reasoning is the same as already explained in the above C# examples. The functions also produce the same results:

Prelude Fix FullBinaryTree> countLeaves tree
4
Prelude Fix FullBinaryTree> treeDepth tree
3

This, hopefully, illustrates that the catamorphism is more capable, and that the fold is just a (list-biased) specialisation.

Summary #

The catamorphism for a full binary tree is a pair of functions. One function handles internal nodes, while the other function handles leaf nodes.

I thought it was interesting to show this example for two reasons: First, the original example was a Visitor implementation, and I think it's worth realising that a Visitor's Accept method can also be viewed as a catamorphism. Second, a binary tree is an example of a data structure that has a fold, but isn't a monad.

All articles in the article series have, so far, covered data structures well-known from computer science. The next example will, on the other hand, demonstrate that even completely ad-hoc domain-specific data structures have catamorphisms.

Next: Payment types catamorphism.


Composition Root location

Monday, 17 June 2019 05:55:00 UTC

A Composition Root should be located near the point where user code first executes.

Prompted by a recent Internet discussion, my DIPPP co-author Steven van Deursen wrote to me in order to help clarify the Composition Root pattern.

In the email, Steven ponders whether it's defensible to use an API that looks like a Service Locator from within a unit test. He specifically calls out my article that describes the Auto-mocking Container design pattern.

In that article, I show how to use Castle Windsor's Resolve method from within a unit test:

[Fact]
public void SutIsController()
{
    var container = new WindsorContainer().Install(new ShopFixture());
    var sut = container.Resolve<BasketController>();
    Assert.IsAssignableFrom<IHttpController>(sut);
}

Is the test using a Service Locator? If so, why is that okay? If not, why isn't it a Service Locator?

This article argues that this use of Resolve isn't a Service Locator.

Entry points defined #

The original article about the Composition Root pattern defines a Composition Root as the place where you compose your object graph(s). It repeatedly describes how this ought to happen in, or as close as possible to, the application's entry point. I believe that this definition is compatible with the pattern description given in our book.

I do realise, however, that we may never have explicitly defined what an entry point is.

In order to do so, it may be helpful to establish a bit of terminology. In the following, I'll use the terms user code as opposed to framework code.

Much of the code you write probably runs within some sort of framework. If you're writing a web application, you're probably using a web framework. If you're writing a message-based application, you might be using some message bus, or actor, framework. If you're writing an app for a mobile device, you're probably using some sort of framework for that, too.

Even as a programmer, you're a user of frameworks.

As I usually do, I'll use Tomas Petricek's distinction between libraries and frameworks. A library is a collection of APIs that you can call. A framework is a software system that calls your code.

User code running in a framework.

The reality is often more complex, as illustrated by the figure. While a framework will call your code, you can also invoke APIs afforded by the framework.

The point, however, is that user code is code that you write, while framework code is code that someone else wrote to develop the framework. The framework starts up first, and at some point in its lifetime, it calls your code.

Definition: The entry point is the user code that the framework calls first.

As an example, in ASP.NET Core, the (conventional) Startup class is the first user code that the framework calls. (If you follow Tomas Petricek's definition to the letter, ASP.NET Core isn't a framework, but a library, because you have to write a Main method and call WebHost.CreateDefaultBuilder(args).UseStartup<Startup>().Build().Run(). In reality, though, you're supposed to configure the application from your Startup class, making it the de facto entry point.)

Unit testing endpoints #

Most .NET-based unit testing packages are frameworks. There's typically little explicit configuration. Instead, you just write a method and adorn it with an attribute:

[Fact]
public async Task ReservationSucceeds()
{
    var repo = new FakeReservationsRepository();
    var sut = new ReservationsController(10, repo);
 
    var reservation = new Reservation(
        date: new DateTimeOffset(2018, 8, 13, 16, 53, 0, TimeSpan.FromHours(2)),
        email: "mark@example.com",
        name: "Mark Seemann",
        quantity: 4);
    var actual = await sut.Post(reservation);
 
    Assert.True(repo.Contains(reservation.Accept()));
    var expectedId = repo.GetId(reservation.Accept());
    var ok = Assert.IsAssignableFrom<OkActionResult>(actual);
    Assert.Equal(expectedId, ok.Value);
}
 
[Fact]
public async Task ReservationFails()
{
    var repo = new FakeReservationsRepository();
    var sut = new ReservationsController(10, repo);
 
    var reservation = new Reservation(
        date: new DateTimeOffset(2018, 8, 13, 16, 53, 0, TimeSpan.FromHours(2)),
        email: "mark@example.com",
        name: "Mark Seemann",
        quantity: 11);
    var actual = await sut.Post(reservation);
 
    Assert.False(reservation.IsAccepted);
    Assert.False(repo.Contains(reservation));
    Assert.IsAssignableFrom<InternalServerErrorActionResult>(actual);
}

With xUnit.net, the attribute is called [Fact], but the principle is the same in NUnit and MSTest; only the names are different.

Where's the entry point?

Each test is its own entry point. The test is (typically) the first user code that the test runner executes. Furthermore, each test runs independently of any other.

For the sake of argument, you could write each test case in a new application, and run all your test applications in parallel. It would be impractical, but it oughtn't change the way you organise the tests. Each test method is, conceptually, a mini-application.

A test method is its own Composition Root; or, more generally, each test has its own Composition Root. In fact, xUnit.net has various extensibility points that enable you to hook into the framework before each test method executes. You can, for example, combine a [Theory] attribute with a custom AutoDataAttribute, or you can adorn your tests with a BeforeAfterTestAttribute. This doesn't change that the test runner will run each test case independently of all the other tests. Those pre-execution hooks play the same role as middleware in real applications.

You can, therefore, consider the Arrange phase the Composition Root for each test.

Thus, I don't consider the use of an Auto-mocking Container to be a Service Locator, since its role is to resolve object graphs at the entry point instead of locating services from arbitrary locations in the code base.

Summary #

A Composition Root is located at, or near, the entry point. An entry point is where user code is first executed by a framework. Each unit test method constitutes a separate, independent entry point. Therefore, it's consistent with these definitions to use an Auto-mocking Container in a unit test.


Tree catamorphism

Monday, 10 June 2019 09:10:00 UTC

The catamorphism for a tree is just a single function with a particular type.

This article is part of an article series about catamorphisms. A catamorphism is a universal abstraction that describes how to digest a data structure into a potentially more compact value.

This article presents the catamorphism for a tree, as well as how to identify it. The beginning of this article presents the catamorphism in C#, with examples. The rest of the article describes how to deduce the catamorphism. This part of the article presents my work in Haskell. Readers not comfortable with Haskell can just read the first part, and consider the rest of the article as an optional appendix.

A tree is a general-purpose data structure where each node in a tree has an associated value. Each node can have an arbitrary number of branches, including none.

A tree example diagram, with each node containing integers.

The diagram shows an example of a tree of integers. The left branch contains a sub-tree with only a single branch, whereas the right branch contains a sub-tree with three branches. Each of the leaf nodes are trees in their own right, but they all have zero branches.

In this example, each branch at the 'same level' has the same depth, but this isn't required.

C# catamorphism #

As a C# representation of a tree, I'll use the Tree<T> class from A Tree functor. The catamorphism is this instance method on Tree<T>:

public TResult Cata<TResult>(Func<T, IReadOnlyCollection<TResult>, TResult> func)
{
    return func(Item, children.Select(c => c.Cata(func)).ToArray());
}

Contrary to previous articles, I didn't call this method Match, but simply Cata (for catamorphism). The reason is that those other methods are called Match for a particular reason. The data structures for which they are catamorphisms are all Church-encoded sum types. For those types, the Match methods enable a syntax similar to pattern matching in F#. That's not the case for Tree<T>. It's not a sum type, and it isn't Church-encoded.

The method takes a single function as an input argument. This is the first catamorphism in this article series that isn't made up of a pair of some sort. The Boolean catamorphism is a pair of values, the Maybe catamorphism is a pair made up of a value and a function, and the Either catamorphism is a pair of functions. The tree catamorphism, in contrast, is just a single function.

The first argument to the function is a value of the type T. This will be an Item value. The second argument to the function is a finite collection of TResult values. This may take a little time getting used to, but it's a collection of already reduced sub-trees. When you supply such a function to Cata, that function must return a single value of the type TResult. Thus, the function must be able to digest a finite collection of TResult values, as well as a T value, to a single TResult value.

The Cata method accomplishes this by calling func with the current Item, as well as by recursively applying itself to each of the sub-trees. Eventually, Cata will recurse into leaf nodes, which means that children will be empty. When that happens, the lambda expression inside children.Select never runs, and recursion stops and unwinds.

Examples #

You can use Cata to implement most other behaviour you'd like Tree<T> to have. In the original article on the Tree functor you saw an ad-hoc implementation of Select, but instead, you can derive Select from Cata:

public Tree<TResult> Select<TResult>(Func<T, TResult> selector)
{
    return Cata<Tree<TResult>>((x, nodes) => new Tree<TResult>(selector(x), nodes));
}

The lambda expression receives x, an object of the type T, as well as nodes, which is a finite collection of already translated sub-trees. It simply translates x with selector and returns a new Tree<TResult> with the translated value and the already translated nodes.

This works just as well as the ad-hoc implementation; it passes all the same tests as shown in the previous article.

If you have a tree of numbers, you can add them all together:

public static int Sum(this Tree<int> tree)
{
    return tree.Cata<int>((x, xs) => x + xs.Sum());
}

This uses the built-in Sum method for IEnumerable<T> to add all the partly calculated sub-trees together, and then adds the value of the current node. In this and remaining examples, I'll use the tree shown in the above diagram:

Tree<int> tree =
    Tree.Create(42,
        Tree.Create(1337,
            Tree.Leaf(-3)),
        Tree.Create(7,
            Tree.Leaf(-99),
            Tree.Leaf(100),
            Tree.Leaf(0)));

You can now calculate the sum of all these nodes:

> tree.Sum()
1384

Another option is to find the maximum value anywhere in a tree:

public static int Max(this Tree<int> tree)
{
    return tree.Cata<int>((x, xs) => xs.Any() ? Math.Max(x, xs.Max()) : x);
}

This method again utilises one of the LINQ methods available via the .NET base class library: Max. It is, however, necessary to first check whether the partially reduced xs is empty or not, because the Max extension method on IEnumerable<int> doesn't know how to deal with an empty collection (it throws an exception). When xs is empty that implies a leaf node, in which case you can simply return x; otherwise, you'll first have to use the Max method on xs to find the maximum value there, and then use Math.Max to find the maximum of those two. (I'll here remind the attentive reader that finding the maximum number forms a semigroup and that semigroups accumulate when collections are non-empty. It all fits together. Isn't maths lovely?)
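
If you want to see that semigroup in action, here's a small Haskell sketch (an aside of my own, not part of this article's C# code) using the Max wrapper from Data.Semigroup:

Prelude> import Data.Semigroup
Prelude Data.Semigroup> import Data.List.NonEmpty
Prelude Data.Semigroup Data.List.NonEmpty> getMax (sconcat (Max 42 :| [Max 1337, Max 7]))
1337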

Using the same tree as before, you can see that this method, too, works as expected:

> tree.Max()
1337

So far, these two extension methods are just specialised folds. In Haskell, Foldable is a specific type class, and sum and max are available for all instances. As promised in the introduction to the series, though, there are some functions on trees that you can't implement using a fold. One of these is to count all the leaf nodes. You can still derive that functionality from the catamorphism, though:

public int CountLeaves()
{
    return Cata<int>((x, xs) => xs.Any() ? xs.Sum() : 1);
}

Like Max, the lambda expression used to implement CountLeaves uses Any to detect whether or not xs is empty, which is when Any is false. Empty xs indicates that you've found a leaf node, so return 1. When xs isn't empty, it contains the leaf counts of the sub-trees - each ultimately a sum of 1 values, one for each leaf node recursively found; add them together with Sum.

This also works for the same tree as before:

> tree.CountLeaves()
4

You can also measure the maximum depth of a tree:

public int MeasureDepth()
{
    return Cata<int>((x, xs) => xs.Any() ? 1 + xs.Max() : 0);
}

This implementation considers a leaf node to have no depth:

> Tree.Leaf("foo").MeasureDepth()
0

This is a discretionary definition; you could also argue that, by definition, a leaf node ought to have a depth of one. If you think so, you'll need to change the 0 to 1 in the above MeasureDepth implementation.

Once more, you can use Any to detect leaf nodes. Whenever you find a leaf node, you return its depth, which, by definition, is 0. Otherwise, you find the maximum depth already found among xs, and add 1, because xs contains the maximum depths of all immediate sub-trees.

Using the same tree again:

> tree.MeasureDepth()
2

The above tree has the same depth for all sub-trees, so here's an example of a tilted tree:

> Tree.Create(3,
.   Tree.Create(1,
.     Tree.Leaf(0),
.     Tree.Leaf(0)),
.   Tree.Leaf(0),
.   Tree.Leaf(0),
.   Tree.Create(2,
.     Tree.Create(1,
.       Tree.Leaf(0))))
. .MeasureDepth()
3

To make it easier to understand, I've labelled all the leaf nodes with 0, because that's their depth. I've then labelled the other nodes with the maximum number 'under' them, plus one. That's the algorithm used.

Tree F-Algebra #

As in the previous article, I'll use Fix and cata as explained in Bartosz Milewski's excellent article on F-Algebras.

As always, start with the underlying endofunctor. I've taken some inspiration from Tree a from Data.Tree, but changed some names:

data TreeF a c = NodeF { nodeValue :: a, nodes :: ListFix c } deriving (Show, Eq, Read)
 
instance Functor (TreeF a) where
  fmap f (NodeF x ns) = NodeF x $ fmap f ns

Instead of using Haskell's standard list ([]) for the sub-forest, I've used ListFix from the article on list catamorphism. This should, hopefully, demonstrate how you can build on already established definitions derived from first principles.

As usual, I've called the 'data' type a and the carrier type c (for carrier). The Functor instance as usual translates the carrier type; the fmap function has the type (c -> c1) -> TreeF a c -> TreeF a c1.

As was the case when deducing the recent catamorphisms, Haskell isn't too happy about defining instances for a type like Fix (TreeF a). To address that problem, you can introduce a newtype wrapper:

newtype TreeFix a = TreeFix { unTreeFix :: Fix (TreeF a) } deriving (Show, Eq, Read)

You can define Functor, Applicative, Monad, etc. instances for this type without resorting to any funky GHC extensions. Keep in mind that ultimately, the purpose of all this code is just to figure out what the catamorphism looks like. This code isn't intended for actual use.

A pair of helper functions make it easier to define TreeFix values:

leafF :: a -> TreeFix a
leafF x = TreeFix $ Fix $ NodeF x nilF
 
nodeF :: a -> ListFix (TreeFix a) -> TreeFix a
nodeF x = TreeFix . Fix . NodeF x . fmap unTreeFix

leafF creates a leaf node:

Prelude Fix List Tree> leafF "ploeh"
TreeFix {unTreeFix = Fix (NodeF "ploeh" (ListFix (Fix NilF)))}

nodeF is a helper function to create a non-leaf node:

Prelude Fix List Tree> nodeF 4 (consF (leafF 9) nilF)
TreeFix {unTreeFix =
  Fix (NodeF 4 (ListFix (Fix (ConsF (Fix (NodeF 9 (ListFix (Fix NilF)))) (Fix NilF)))))}

Even with helper functions, construction of TreeFix values is cumbersome, but keep in mind that the code shown here isn't meant to be used in practice. The goal is only to deduce catamorphisms from more basic universal abstractions, and you now have all you need to do that.

Haskell catamorphism #

At this point, you have two out of three elements of an F-Algebra. You have an endofunctor (TreeF a), and an object c, but you still need to find a morphism TreeF a c -> c. Notice that the algebra you have to find is the function that reduces the functor to its carrier type c, not the 'data type' a. This takes some time to get used to, but that's how catamorphisms work. This doesn't mean, however, that you get to ignore a, as you'll see.

As in the previous articles, start by writing a function that will become the catamorphism, based on cata:

treeF = cata alg . unTreeFix
  where alg (NodeF x ns) = undefined

While this compiles, with its undefined implementation of alg, it obviously doesn't do anything useful. I find, however, that it helps me think. How can you return a value of the type c from alg? You could pass a function argument to the treeF function and use it with x and ns:

treeF :: (a -> ListFix c -> c) -> TreeFix a -> c
treeF f = cata alg . unTreeFix
  where alg (NodeF x ns) = f x ns

This works. Since cata has the type Functor f => (f a -> a) -> Fix f -> a, that means that alg has the type f a -> a. In the case of TreeF, the compiler infers that the alg function has the type TreeF a c -> c, which is just what you need!

You can now see what the carrier type c is for. It's the type that the algebra extracts, and thus the type that the catamorphism returns.

This, then, is the catamorphism for a tree. So far in this article series, all previous catamorphisms have been pairs, but this one is just a single function. It's still not the only possible catamorphism, since you could trivially flip the arguments to f.

I've chosen the representation shown here because it's isomorphic to the foldTree function from Haskell's built-in Data.Tree module, which explicitly documents that the function "is also known as the catamorphism on trees." foldTree is defined using Haskell's standard list type ([]), so the type is simpler: (a -> [b] -> b) -> Tree a -> b. The two representations of trees, TreeFix and Tree, are, however, isomorphic, so foldTree is equivalent to treeF. Notice how both of these functions are also equivalent to the above C# Cata method.
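
To make the correspondence concrete, here's a small, self-contained sketch of mine that uses foldTree from the containers library's Data.Tree module to count leaves, mirroring the above C# CountLeaves method:

import Data.Tree (Tree (..), foldTree)

-- Count leaf nodes with foldTree: a leaf has no sub-results to sum.
leaves :: Tree a -> Int
leaves = foldTree (\_ xs -> if null xs then 1 else sum xs)

-- leaves (Node 42 [Node 1337 [Node (-3) []],
--                  Node 7 [Node (-99) [], Node 100 [], Node 0 []]])
-- evaluates to 4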

Basis #

You can implement most other useful functionality with treeF. Here's the Functor instance:

instance Functor TreeFix where
  fmap f = treeF (nodeF . f)

nodeF . f is just the point-free version of \x ns -> nodeF (f x) ns, which follows the exact same implementation logic as the above C# Select implementation.

The Applicative instance is, I'm afraid, the most complex code you've seen so far in this article series:

instance Applicative TreeFix where
  pure = leafF
  ft <*> xt = treeF (\f ts -> addNodes ts $ f <$> xt) ft
    where addNodes ns (TreeFix (Fix (NodeF x xs))) =
            TreeFix (Fix (NodeF x (xs <> (unTreeFix <$> ns))))

I'd be surprised if it's impossible to make this terser, but I thought that it was just complicated enough that I needed to make one of the steps explicit. The addNodes helper function has the type ListFix (TreeFix a) -> TreeFix a -> TreeFix a, and it adds a list of sub-trees to the top node of a tree. It looks worse than it is, but it really just peels off the wrappers (TreeFix, Fix, and NodeF) to access the data (x and xs) of the top node. It then concatenates xs with ns, and puts all the wrappers back on.

I have to admit, though, that the Applicative and Monad instances in general are mind-bending to me. The <*> operation, particularly, takes a tree of functions and has to combine it with a tree of values. What does that even mean? How does one do that?

Like the above, apparently. I took the Applicative behaviour from Data.Tree and made sure that my implementation is isomorphic. I even have a property to make 'sure' that's the case:

testProperty "Applicative behaves like Data.Tree" $ do
  xt :: TreeFix Integer <- fromTree <$> resize 10 arbitrary
  ft :: TreeFix (Integer -> String) <- fromTree <$> resize 5 arbitrary
 
  let actual = ft <*> xt
 
  let expected = toTree ft <*> toTree xt
  return $ expected === toTree actual
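
For a concrete sense of that behaviour, here's a small sketch of mine using Data.Tree directly; I've worked out the result by hand from the containers instance, so consider it illustrative:

import Data.Tree (Tree (..))

ft :: Tree (Int -> Int)
ft = Node (+ 1) [Node (* 10) []]

xt :: Tree Int
xt = Node 1 [Node 2 []]

-- ft <*> xt applies the root function to the entire value tree, and then
-- grafts on each function sub-tree applied to the value tree:
-- Node 2 [Node 3 [], Node 10 [Node 20 []]]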

The Monad instance looks similar to the Applicative instance:

instance Monad TreeFix where
  t >>= f = treeF (\x ns -> addNodes ns $ f x) t
    where addNodes ns (TreeFix (Fix (NodeF x xs))) =
            TreeFix (Fix (NodeF x (xs <> (unTreeFix <$> ns))))

The addNodes helper function is the same as for <*>, so you may wonder why I didn't extract that as a separate, reusable function. I decided, however, to apply the rule of three, and since, ultimately, addNodes appears only twice, I left it as the implementation detail it is.

Fortunately, the Foldable instance is easier on the eyes:

instance Foldable TreeFix where
  foldMap f = treeF (\x xs -> f x <> fold xs)

Since f is a function of the type a -> m, where m is a Monoid instance, you can use fold and <> to accumulate everything to a single m value.
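
As a quick illustration with the isomorphic Data.Tree type, you can fold a tree of numbers into the Sum monoid; this example is mine, not from the article's code base:

import Data.Monoid (Sum (..))
import Data.Tree (Tree (..))

-- foldMap translates each node value to Sum, and accumulates with <>.
total :: Tree Int -> Int
total = getSum . foldMap Sum

-- total (Node 42 [Node 1337 [Node (-3) []]]) evaluates to 1376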

The Traversable instance is similarly terse:

instance Traversable TreeFix where
  sequenceA = treeF (\x ns -> nodeF <$> x <*> sequenceA ns)

Finally, you can implement conversions to and from the Tree type from Data.Tree, using ana as the dual of cata:

toTree :: TreeFix a -> Tree a
toTree = treeF (\x ns -> Node x $ toList ns)
 
fromTree :: Tree a -> TreeFix a
fromTree = TreeFix . ana coalg
  where coalg (Node x ns) = NodeF x (fromList ns)

This demonstrates that TreeFix is isomorphic to Tree, which again establishes that treeF and foldTree are equivalent.

Relationships #

In this series, you've seen various examples of catamorphisms of structures that have no folds, catamorphisms that coincide with folds, and catamorphisms that are more general than the fold. The introduction to the series included this diagram:

Catamorphisms and folds as sets, for various sum types.

The Either catamorphism is another example of a catamorphism that is more general than the fold, but that one turned out to be identical to the bifold. That's not the case here, because TreeFix isn't a Bifoldable instance at all.

There are operations on trees that you can implement with a fold, but some that you can't. Consider the tree shown in the diagram at the beginning of the article. This is also the tree that the above C# examples use. In Haskell, using TreeFix, you can define that tree like this:

tree =
  nodeF 42
    (consF (nodeF 1337
      (consF (leafF (-3)) nilF)) $
     consF (nodeF 7
      (consF (leafF (-99)) $
       consF (leafF 100) $
       consF (leafF 0) nilF))
     nilF)

Yes, that almost looks like some Lisp dialect...

Since TreeFix is Foldable, and that type class already comes with sum and maximum functions, no further work is required to repeat the first two of the above C# examples:

*Tree Fix List Tree> sum tree
1384
*Tree Fix List Tree> maximum tree
1337

Counting leaves, or measuring the depth of a tree, on the other hand, is impossible with the Foldable instance, but can be implemented using the catamorphism:

countLeaves :: Num n => TreeFix a -> n
countLeaves = treeF (\_ xs -> if null xs then 1 else sum xs)
 
treeDepth :: (Ord n, Num n) => TreeFix a -> n
treeDepth = treeF (\_ xs -> if null xs then 0 else 1 + maximum xs)

The reasoning is the same as already explained in the above C# examples. The functions also produce the same results:

*Tree Fix List Tree> countLeaves tree
4
*Tree Fix List Tree> treeDepth tree
2

This, hopefully, illustrates that the catamorphism is more capable, and that the fold is just a (list-biased) specialisation.

Summary #

The catamorphism for a tree is just a single function, which is recursively evaluated. It enables you to translate, traverse, and reduce trees in many interesting ways.

You can use the catamorphism to implement a (list-biased) fold, including enumerating all nodes as a flat list, but there are operations (such as counting leaves) that you can implement with the catamorphism, but not with the fold.

Next: Rose tree catamorphism.


Comments

This [Max] method again utilises one of the LINQ methods available via the .NET base class library: Max. It is, however, necessary to first check whether the partially reduced xs is empty or not, because the Max extension method on IEnumerable<int> doesn't know how to deal with an empty collection (it throws an exception). When xs is empty that implies a leaf node, in which case you can simply return x; otherwise, you'll first have to use the Max method on xs to find the maximum value there, and then use Math.Max to find the maximum of those two. (I'll here remind the attentive reader that finding the maximum number forms a semigroup and that semigroups accumulate when collections are non-empty. It all fits together. Isn't maths lovely?)

...

So far, these two [Sum and Max] extension methods are just specialised folds.

Indeed, and fold is implemented for IEnumerable<> via Aggregate. As such, you can replace the case analysis in...

(x, xs) => xs.Any() ? Math.Max(x, xs.Max()) : x

...with a simple call to Aggregate:

(x, xs) => xs.Aggregate(x, Math.Max)
2020-08-03 16:13 UTC

Yes.

2020-08-03 17:12 UTC
I have to admit, though, that the Applicative and Monad instances in general are mind-bending to me. The <*> operation, particularly, takes a tree of functions and has to combine it with a tree of values. What does that even mean? How does one do that?

It is also unclear to me. Thankfully Nikos Baxevanis shared this excellent post by Edsko de Vries. That post helped me implement this in F# and gain some understanding of what is happening by considering the many examples.

2022-09-17 16:59 UTC

Tyson, thank you for the link. It's a great article, and it does help with establishing an intuition for applicative trees.

2022-09-18 19:12 UTC

Either catamorphism

Monday, 03 June 2019 06:05:00 UTC

The catamorphism for Either is a generalisation of its fold. The catamorphism enables operations not available via fold.

This article is part of an article series about catamorphisms. A catamorphism is a universal abstraction that describes how to digest a data structure into a potentially more compact value.

This article presents the catamorphism for Either (also known as Result), as well as how to identify it. The beginning of this article presents the catamorphism in C#, with examples. The rest of the article describes how to deduce the catamorphism. This part of the article presents my work in Haskell. Readers not comfortable with Haskell can just read the first part, and consider the rest of the article as an optional appendix.

Either is a data container that models two mutually exclusive results. It's often used to return values that may be either correct (right), or incorrect (left). In statically typed functional programming with a preference for total functions, Either offers a saner, more reasonable way to model success and error results than throwing exceptions.

C# catamorphism #

This article uses the Church encoding of Either. The catamorphism for Either is the Match method:

T Match<T>(Func<L, T> onLeft, Func<R, T> onRight);

Until this article, all previous catamorphisms have been a pair made from an initial value and a function. The Either catamorphism is a generalisation, since it's a pair of functions. One function handles the case where there's a left value, and the other function handles the case where there's a right value. Both functions must return the same, unifying type, which is often a string or something similar that can be shown in a user interface:

> IEither<TimeSpan, int> e = new Left<TimeSpan, int>(TimeSpan.FromMinutes(3));
> e.Match(ts => ts.ToString(), i => i.ToString())
"00:03:00"
> IEither<TimeSpan, int> e = new Right<TimeSpan, int>(42);
> e.Match(ts => ts.ToString(), i => i.ToString())
"42"

You'll often see examples like the above that turn both left and right cases into strings or something that can be represented as a unifying response type, but this is in no way required. If you can come up with a unifying type, you can convert both cases to that type:

> IEither<Guid, string> e = new Left<Guid, string>(Guid.NewGuid());
> e.Match(g => g.ToString().Count(c => 'a' <= c), s => s.Length)
12
> IEither<Guid, string> e = new Right<Guid, string>("foo");
> e.Match(g => g.ToString().Count(c => 'a' <= c), s => s.Length)
3

In the two above examples, you use two different functions that both reduce respectively Guid and string values to a number. The function that turns Guid values into a number counts how many of the hexadecimal digits are greater than or equal to A (10). The other function simply returns the length of the string, if it's there. That example makes little sense, but the Match method doesn't care about that.

In practical use, Either is often used for error handling. The article on the Church encoding of Either contains an example.

Either F-Algebra #

As in the previous article, I'll use Fix and cata as explained in Bartosz Milewski's excellent article on F-Algebras.

While F-Algebras and fixed points are mostly used for recursive data structures, you can also define an F-Algebra for a non-recursive data structure. You already saw an example of that in the articles about Boolean catamorphism and Maybe catamorphism. The difference between e.g. Maybe values and Either is that both cases of Either carry a value. You can model this as a Functor with both a carrier type and two type arguments for the data that Either may contain:

data EitherF l r c = LeftF l | RightF r deriving (Show, Eq, Read)
 
instance Functor (EitherF l r) where
  fmap _  (LeftF l) = LeftF l
  fmap _ (RightF r) = RightF r

I chose to call the 'data types' l (for left) and r (for right), and the carrier type c (for carrier). As was also the case with BoolF and MaybeF, the Functor instance ignores the map function because the carrier type is missing from both the LeftF case and the RightF case. Like the Functor instances for BoolF and MaybeF, it'd seem that nothing happens, but at the type level, this is still a translation from EitherF l r c to EitherF l r c1. Not much of a function, perhaps, but definitely an endofunctor.

As was also the case when deducing the Maybe and List catamorphisms, Haskell isn't too happy about defining instances for a type like Fix (EitherF l r). To address that problem, you can introduce a newtype wrapper:

newtype EitherFix l r = EitherFix { unEitherFix :: Fix (EitherF l r) } deriving (Show, Eq, Read)

You can define Functor, Applicative, Monad, etc. instances for this type without resorting to any funky GHC extensions. Keep in mind that ultimately, the purpose of all this code is just to figure out what the catamorphism looks like. This code isn't intended for actual use.

A pair of helper functions make it easier to define EitherFix values:

leftF :: l -> EitherFix l r
leftF = EitherFix . Fix . LeftF
 
rightF :: r -> EitherFix l r
rightF = EitherFix . Fix . RightF

With those functions, you can create EitherFix values:

Prelude Data.UUID Data.UUID.V4 Fix Either> leftF <$> nextRandom
EitherFix {unEitherFix = Fix (LeftF e65378c2-0d6e-47e0-8bcb-7cc29d185fad)}
Prelude Data.UUID Data.UUID.V4 Fix Either> rightF "foo"
EitherFix {unEitherFix = Fix (RightF "foo")}

That's all you need to identify the catamorphism.

Haskell catamorphism #

At this point, you have two out of three elements of an F-Algebra. You have an endofunctor (EitherF l r), and an object c, but you still need to find a morphism EitherF l r c -> c. Notice that the algebra you have to find is the function that reduces the functor to its carrier type c, not the 'data types' l and r. This takes some time to get used to, but that's how catamorphisms work. This doesn't mean, however, that you get to ignore l and r, as you'll see.

As in the previous articles, start by writing a function that will become the catamorphism, based on cata:

eitherF = cata alg . unEitherFix
  where alg  (LeftF l) = undefined
        alg (RightF r) = undefined

While this compiles, with its undefined implementations, it obviously doesn't do anything useful. I find, however, that it helps me think. How can you return a value of the type c from the LeftF case? You could pass an argument to the eitherF function:

eitherF fl = cata alg . unEitherFix
  where alg  (LeftF l) = fl l
        alg (RightF r) = undefined

While you could, technically, pass an argument of the type c to eitherF and then return that value from the LeftF case, that would mean that you would ignore the l value. This would be incorrect, so instead, make the argument a function and call it with l. Likewise, you can deal with the RightF case in the same way:

eitherF :: (l -> c) -> (r -> c) -> EitherFix l r -> c
eitherF fl fr = cata alg . unEitherFix
  where alg  (LeftF l) = fl l
        alg (RightF r) = fr r

This works. Since cata has the type Functor f => (f a -> a) -> Fix f -> a, that means that alg has the type f a -> a. In the case of EitherF, the compiler infers that the alg function has the type EitherF l r c -> c, which is just what you need!

You can now see what the carrier type c is for. It's the type that the algebra extracts, and thus the type that the catamorphism returns.

This, then, is the catamorphism for Either. As has been consistent so far, it's a pair, but now made from two functions. As you've seen repeatedly, this isn't the only possible catamorphism, since you can, for example, trivially flip the arguments to eitherF.

I've chosen the representation shown here because it's isomorphic to the either function from Haskell's built-in Data.Either module, which calls the function the "Case analysis for the Either type". Both of these functions (eitherF and either) are equivalent to the above C# Match method.
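
Here, as a quick demonstration of mine, is the built-in either function at work, analogous to the above C# Match examples:

Prelude> either show (++ "!") (Left 42 :: Either Integer String)
"42"
Prelude> either show (++ "!") (Right "foo" :: Either Integer String)
"foo!"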

Basis #

You can implement most other useful functionality with eitherF. Here's the Bifunctor instance:

instance Bifunctor EitherFix where
  bimap f s = eitherF (leftF . f) (rightF . s)

From that instance, the Functor instance trivially follows:

instance Functor (EitherFix l) where
  fmap = second

On top of Functor you can add Applicative:

instance Applicative (EitherFix l) where
  pure = rightF
  f <*> x = eitherF leftF (<$> x) f

Notice that the <*> implementation is similar to the <*> implementation for MaybeFix. The same is the case for the Monad instance:

instance Monad (EitherFix l) where
  x >>= f = eitherF leftF f x

Not only is EitherFix Foldable, it's Bifoldable:

instance Bifoldable EitherFix where
  bifoldMap = eitherF

Notice, interestingly, that bifoldMap is identical to eitherF.

The Bifoldable instance enables you to trivially implement the Foldable instance:

instance Foldable (EitherFix l) where
  foldMap = bifoldMap mempty

You may find the presence of mempty puzzling, since bifoldMap (or eitherF; they're identical) takes as arguments two functions. Is mempty a function?

Yes, mempty can be a function. Here, it is. There's a Monoid instance for any function a -> m, where m is a Monoid instance, and mempty is the identity for that monoid. That's the instance in use here.
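
A brief GHCi session of mine shows that instance at work:

Prelude> (mempty :: Integer -> String) 42
""
Prelude> ((++ "!") <> show) "foo"
"foo!\"foo\""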

Just as EitherFix is Bifoldable, it's also Bitraversable:

instance Bitraversable EitherFix where
  bitraverse fl fr = eitherF (fmap leftF . fl) (fmap rightF . fr)

You can comfortably implement the Traversable instance based on the Bitraversable instance:

instance Traversable (EitherFix l) where
  sequenceA = bisequenceA . first pure

Finally, you can implement conversions to and from the standard Either type, using ana as the dual of cata:

toEither :: EitherFix l r -> Either l r
toEither = eitherF Left Right
 
fromEither :: Either a b -> EitherFix a b
fromEither = EitherFix . ana coalg
  where coalg  (Left l) =  LeftF l
        coalg (Right r) = RightF r

This demonstrates that EitherFix is isomorphic to Either, which again establishes that eitherF and either are equivalent.

Relationships #

In this series, you've seen various examples of catamorphisms of structures that have no folds, catamorphisms that coincide with folds, and now a catamorphism that is more general than the fold. The introduction to the series included this diagram:

Catamorphisms and folds as sets, for various sum types.

This shows that Boolean values and Peano numbers have catamorphisms, but no folds, whereas for Maybe and List, the fold and the catamorphism is the same. For Either, however, the fold is a special case of the catamorphism. The fold for Either 'pretends' that the left side doesn't exist. Instead, the left value is interpreted as a missing right value. Thus, in order to fold Either values, you must supply a 'fallback' value that will be used in case an Either value isn't a right value:

Prelude Fix Either> e = rightF LT :: EitherFix Integer Ordering
Prelude Fix Either> foldr (const . show) "" e
"LT"
Prelude Fix Either> e = leftF 42 :: EitherFix Integer Ordering
Prelude Fix Either> foldr (const . show) "" e
""

In a GHCi session like the above, you can create two Either values of the same type. The right case is an Ordering value, while the left case is an Integer value.

With foldr, there's no way to access the left case. While you can access and transform the right Ordering value, the number 42 is simply ignored during the fold. Instead, the default value "" is returned.

Contrast this with the catamorphism, which can access both cases:

Prelude Fix Either> e = rightF LT :: EitherFix Integer Ordering
Prelude Fix Either> eitherF show show e
"LT"
Prelude Fix Either> e = leftF 42 :: EitherFix Integer Ordering
Prelude Fix Either> eitherF show show e
"42"

In a session like this, you recreate the same values, but using the catamorphism eitherF, you can now access and transform both the left and the right cases. In other words, the catamorphism enables you to perform operations not possible with the fold.

It's interesting, however, to note that while the fold is a specialisation of the catamorphism, the bifold is identical to the catamorphism.
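
The standard Either type exhibits the same identity; here's a GHCi session of mine using Data.Bifoldable:

Prelude Data.Bifoldable> bifoldMap show show (Left 42 :: Either Integer Ordering)
"42"
Prelude Data.Bifoldable> bifoldMap show show (Right LT :: Either Integer Ordering)
"LT"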

Summary #

The catamorphism for Either is a pair of functions. One function transforms the left case, while the other function transforms the right case. For any Either value, only one of those functions will be used.

When I originally encountered the concept of a catamorphism, I found it difficult to distinguish between catamorphism and fold. My problem was, I think, that the tutorials I ran into mostly used linked lists to demonstrate how, in that case, the fold is the catamorphism. It turns out, however, that this isn't always the case. A catamorphism is a general abstraction. A fold, on the other hand, seems to me to be mostly related to collections.

In this article you saw the first example of a catamorphism that can do more than the fold. For Either, the fold is just a special case of the catamorphism. You also saw, however, how the catamorphism was identical to the bifold. Thus, it's still not entirely clear how these concepts relate. Therefore, in the next article, you'll get an example of a container where there's no bifold, and where the catamorphism is, indeed, a generalisation of the fold.

Next: Tree catamorphism.


List catamorphism

Monday, 27 May 2019 06:10:00 UTC

The catamorphism for a list is the same as its fold.

This article is part of an article series about catamorphisms. A catamorphism is a universal abstraction that describes how to digest a data structure into a potentially more compact value.

This article presents the catamorphism for (linked) lists, and other collections in general. It also shows how to identify it. The beginning of this article presents the catamorphism in C#, with an example. The rest of the article describes how to deduce the catamorphism. This part of the article presents my work in Haskell. Readers not comfortable with Haskell can just read the first part, and consider the rest of the article as an optional appendix.

The C# part of the article will discuss IEnumerable<T>, while the Haskell part will deal specifically with linked lists. Since C# is a less strict language anyway, we have to make some concessions when we consider how concepts translate. In my experience, the functionality of IEnumerable<T> closely mirrors that of Haskell lists.

C# catamorphism #

The .NET base class library defines this Aggregate overload:

public static TAccumulate Aggregate<TSource, TAccumulate>(
    this IEnumerable<TSource> source,
    TAccumulate seed,
    Func<TAccumulate, TSource, TAccumulate> func);

This is the catamorphism for linked lists (and, I'll conjecture, for IEnumerable<T> in general). The introductory article already used it to show several motivating examples, of which I'll only repeat the last:

> new[] { 42, 1337, 2112, 90125, 5040, 7, 1984 }
. .Aggregate(Angle.Identity, (a, i) => a.Add(Angle.FromDegrees(i)))
[{ Angle = 207° }]

In short, the catamorphism is, similar to the previous catamorphisms covered in this article series, a pair made from an initial value and a function. This has been true for both the Peano catamorphism and the Maybe catamorphism. An initial value is just a value in all three cases, but you may notice that the function in question becomes increasingly elaborate. For IEnumerable<T>, it's a function that takes two values. One of the values is of the type of the input list; i.e. for IEnumerable<TSource> it would be TSource. By elimination you can deduce that this value must come from the input list. The other value is of the type TAccumulate, which implies that it could be the seed, or the result from a previous call to func.
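
If you're more familiar with Haskell, it may help to note that this overload has the same shape as foldl; the session below is my own illustration, not part of the article:

-- foldl :: (b -> a -> b) -> b -> [a] -> b
-- Like Aggregate, the function takes the accumulator first, then an element.
Prelude> foldl (\acc x -> acc + x) 0 [42, 1337, 2112]
3491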

List F-Algebra #

As in the previous article, I'll use Fix and cata as explained in Bartosz Milewski's excellent article on F-Algebras. The ListF type comes from his article as well, although I've renamed the type arguments:

data ListF a c = NilF | ConsF a c deriving (Show, Eq, Read)
 
instance Functor (ListF a) where
  fmap _       NilF  = NilF
  fmap f (ConsF a c) = ConsF a $ f c

Like I did with MaybeF, I've named the 'data' type argument a, and the carrier type c (for carrier). Once again, notice that the Functor instance maps over the carrier type c; not over the 'data' type a.

As was also the case when deducing the Maybe catamorphism, Haskell isn't too happy about defining instances for a type like Fix (ListF a). To address that problem, you can introduce a newtype wrapper:

newtype ListFix a = ListFix { unListFix :: Fix (ListF a) } deriving (Show, Eq, Read)

You can define Functor, Applicative, Monad, etc. instances for this type without resorting to any funky GHC extensions. Keep in mind that ultimately, the purpose of all this code is just to figure out what the catamorphism looks like. This code isn't intended for actual use.

A few helper functions make it easier to define ListFix values:

nilF :: ListFix a
nilF = ListFix $ Fix NilF
 
consF :: a -> ListFix a -> ListFix a
consF x = ListFix . Fix . ConsF x . unListFix

With those functions, you can create ListFix linked lists:

Prelude Fix List> nilF
ListFix {unListFix = Fix NilF}
Prelude Fix List> consF 42 $ consF 1337 $ consF 2112 nilF
ListFix {unListFix = Fix (ConsF 42 (Fix (ConsF 1337 (Fix (ConsF 2112 (Fix NilF))))))}

The first example creates an empty list, whereas the second creates a list of three integers, corresponding to [42,1337,2112].

That's all you need to identify the catamorphism.

Haskell catamorphism #

At this point, you have two out of three elements of an F-Algebra. You have an endofunctor (ListF a), and an object c, but you still need to find a morphism ListF a c -> c. Notice that the algebra you have to find is the function that reduces the functor to its carrier type c, not the 'data type' a. This takes some time to get used to, but that's how catamorphisms work. This doesn't mean, however, that you get to ignore a, as you'll see.

As in the previous article, start by writing a function that will become the catamorphism, based on cata:

listF = cata alg . unListFix
  where alg       NilF  = undefined
        alg (ConsF h t) = undefined

While this compiles, with its undefined implementations, it obviously doesn't do anything useful. I find, however, that it helps me think. How can you return a value of the type c from the NilF case? You could pass an argument to the listF function:

listF n = cata alg . unListFix
  where alg       NilF  = n
        alg (ConsF h t) = undefined

The ConsF case, contrary to NilF, contains both a head h (of type a) and a tail t (of type c). While you could make the code compile by simply returning t, it'd be incorrect to ignore h. In order to deal with both, you'll need a function that turns both h and t into a value of the type c. You can do this by passing a function to listF and using it:

listF :: (a -> c -> c) -> c -> ListFix a -> c
listF f n = cata alg . unListFix
  where alg       NilF  = n
        alg (ConsF h t) = f h t

This works. Since cata has the type Functor f => (f a -> a) -> Fix f -> a, that means that alg has the type f a -> a. In the case of ListF, the compiler infers that the alg function has the type ListF a c -> c, which is just what you need!

You can now see what the carrier type c is for. It's the type that the algebra extracts, and thus the type that the catamorphism returns.

This, then, is the catamorphism for lists. As has been consistent so far, it's a pair made from an initial value and a function. Once more, this isn't the only possible catamorphism, since you can, for example, trivially flip the arguments to listF:

listF :: c -> (a -> c -> c) -> ListFix a -> c
listF n f = cata alg . unListFix
  where alg       NilF  = n
        alg (ConsF h t) = f h t

You can also flip the arguments of f:

listF :: c -> (c -> a -> c) -> ListFix a -> c
listF n f = cata alg . unListFix
  where alg       NilF  = n
        alg (ConsF h t) = f t h

These representations are all isomorphic to each other, but notice that the last variation is similar to the above C# Aggregate overload. The initial n value is the seed, and the function f has the same shape as func. Thus, I consider it reasonable to conjecture that that Aggregate overload is the catamorphism for IEnumerable<T>.

Basis #

You can implement most other useful functionality with listF. The rest of this article uses the first of the variations shown above, with the type (a -> c -> c) -> c -> ListFix a -> c. Here's the Semigroup instance:

instance Semigroup (ListFix a) where
  xs <> ys = listF consF ys xs

The initial value passed to listF is ys, and the function to apply is simply the consF function, thus 'consing' the two lists together. Here's an example of the operation in action:

Prelude Fix List> consF 42 $ consF 1337 nilF <> (consF 2112 $ consF 1 nilF)
ListFix {unListFix = Fix (ConsF 42 (Fix (ConsF 1337 (Fix (ConsF 2112 (Fix (ConsF 1 (Fix NilF))))))))}

With a Semigroup instance, it's trivial to also implement the Monoid instance:

instance Monoid (ListFix a) where
  mempty = nilF

While you could implement mempty with listF (mempty = listF (const id) nilF nilF), that'd be overcomplicated. Just because you can implement all functionality using listF, it doesn't mean that you should, if a simpler alternative exists.

You can, on the other hand, use listF for the Functor instance:

instance Functor ListFix where
  fmap f = listF (\h l -> consF (f h) l) nilF

You could write the function you pass to listF in a point-free fashion as consF . f, but I thought it'd be easier to follow what happens when it's written as an explicit lambda expression. The function receives a 'current value' h, as well as l, the part of the list that has already been translated. Use f to translate h, and consF the result onto l.
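
The same implementation logic works for Haskell's standard lists, where foldr plays the role of listF; a minimal sketch of mine:

-- Map over a standard list using only foldr: translate the head with f,
-- then cons the result onto the already-translated tail.
mapViaFoldr :: (a -> b) -> [a] -> [b]
mapViaFoldr f = foldr (\h l -> f h : l) []

-- mapViaFoldr (* 2) [42, 1337] evaluates to [84, 2674]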

You can add Applicative and Monad instances in a similar fashion:

instance Applicative ListFix where
  pure x = consF x nilF
  fs <*> xs = listF (\f acc -> (f <$> xs) <> acc) nilF fs
 
instance Monad ListFix where
  xs >>= f = listF (\x acc -> f x <> acc) nilF xs

What may be more interesting, however, is the Foldable instance:

instance Foldable ListFix where
  foldr = listF

This demonstrates that listF and foldr are the same.

Next, you can also add a Traversable instance:

instance Traversable ListFix where
  sequenceA = listF (\x acc -> consF <$> x <*> acc) (pure nilF)

Finally, you can implement conversions to and from the standard list [] type, using ana as the dual of cata:

toList :: ListFix a -> [a]
toList = listF (:) []
 
fromList :: [a] -> ListFix a
fromList = ListFix . ana coalg
  where coalg   []  = NilF
        coalg (h:t) = ConsF h t

This demonstrates that ListFix is isomorphic to [], which again establishes that listF and foldr are equivalent.

Summary #

The catamorphism for lists is a pair made from an initial value and a function. One variation is equal to foldr. Like Maybe, the catamorphism is the same as the fold.

In C#, this function corresponds to the Aggregate extension method identified above.

You've now seen two examples where the catamorphism coincides with the fold. You've also seen examples (Boolean catamorphism and Peano catamorphism) where there's a catamorphism, but no fold at all. In a subsequent article, you'll see an example of a container that has both catamorphism and fold, but where the catamorphism is more general than the fold.

Next: NonEmpty catamorphism.


Maybe catamorphism

Monday, 20 May 2019 06:04:00 UTC

The catamorphism for Maybe is just a simplification of its fold.

This article is part of an article series about catamorphisms. A catamorphism is a universal abstraction that describes how to digest a data structure into a potentially more compact value.

This article presents the catamorphism for Maybe, as well as how to identify it. The beginning of this article presents the catamorphism in C#, with examples. The rest of the article describes how to deduce the catamorphism. This part of the article presents my work in Haskell. Readers not comfortable with Haskell can just read the first part, and consider the rest of the article as an optional appendix.

Maybe is a data container that models the absence or presence of a value. Contrary to null, Maybe has a type, so offers a sane and reasonable way to model that situation.

C# catamorphism #

This article uses Church-encoded Maybe. Other, alternative implementations of Maybe are possible. The catamorphism for Maybe is the Match method:

TResult Match<TResult>(TResult nothing, Func<T, TResult> just);

Like the Peano catamorphism, the Maybe catamorphism is a pair of a value and a function. The nothing value corresponds to the absence of data, whereas the just function handles the presence of data.

Given, for example, a Maybe containing a number, you can use Match to get the value out of the Maybe:

> IMaybe<int> maybe = new Just<int>(42);
> maybe.Match(0, x => x)
42
> IMaybe<int> maybe = new Nothing<int>();
> maybe.Match(0, x => x)
0

The functionality is, however, more useful than a simple get-value-or-default operation. Often, you don't have a good default value for the type potentially wrapped in a Maybe object. In the core of your application architecture, it may not be clear how to deal with, say, the absence of a Reservation object, whereas at the boundary of your system, it's evident how to convert both absence and presence of data into a unifying type, such as an HTTP response:

Maybe<Reservation> maybe = // ...
return maybe
    .Select(r => Repository.Create(r))
    .Match<IHttpActionResult>(
        nothing: Content(
            HttpStatusCode.InternalServerError,
            new HttpError("Couldn't accept.")),
        just: id => Ok(id));

This enables you to avoid special cases, such as null Reservation objects, or magic numbers like -1 to indicate the absence of id values. At the boundary of an HTTP-based application, you know that you must return an HTTP response. That's the unifying type, so you can return 200 OK with a reservation ID in the response body when data is present, and 500 Internal Server Error when data is absent.

Maybe F-Algebra #

As in the previous article, I'll use Fix and cata as explained in Bartosz Milewski's excellent article on F-Algebras.

While F-Algebras and fixed points are mostly used for recursive data structures, you can also define an F-Algebra for a non-recursive data structure. You already saw an example of that in the article about Boolean catamorphism. The difference between Boolean values and Maybe is that the just case of Maybe carries a value. You can model this as a Functor with both a carrier type and a type argument for the data that Maybe may contain:

data MaybeF a c = NothingF | JustF a deriving (Show, Eq, Read)
 
instance Functor (MaybeF a) where
  fmap _  NothingF = NothingF
  fmap _ (JustF x) = JustF x

I chose to call the 'data type' a and the carrier type c (for carrier). As was also the case with BoolF, the Functor instance ignores the map function because the carrier type is missing from both the NothingF case and the JustF case. Like the Functor instance for BoolF, it'd seem that nothing happens, but at the type level, this is still a translation from MaybeF a c to MaybeF a c1. Not much of a function, perhaps, but definitely an endofunctor.

In the previous articles, it was possible to work directly with the fixed points of both functors; i.e. Fix BoolF and Fix NatF. Haskell isn't happy about attempts to define various instances for Fix (MaybeF a), so in order to make this easier, you can define a newtype wrapper:

newtype MaybeFix a =
MaybeFix { unMaybeFix :: Fix (MaybeF a) } deriving (Show, Eq, Read)

In order to make it easier to work with MaybeFix you can add helper functions to create values:

nothingF :: MaybeFix a
nothingF = MaybeFix $ Fix NothingF
 
justF :: a -> MaybeFix a
justF = MaybeFix . Fix . JustF

You can now create MaybeFix values to your heart's content:

Prelude Fix Maybe> justF 42
MaybeFix {unMaybeFix = Fix (JustF 42)}
Prelude Fix Maybe> nothingF
MaybeFix {unMaybeFix = Fix NothingF}

That's all you need to identify the catamorphism.

Haskell catamorphism #

At this point, you have two out of three elements of an F-Algebra. You have an endofunctor (MaybeF), and an object a, but you still need to find a morphism MaybeF a c -> c. Notice that the algebra you have to find is the function that reduces the functor to its carrier type c, not the 'data type' a. This takes some time to get used to, but that's how catamorphisms work. This doesn't mean, however, that you get to ignore a, as you'll see.

As in the previous article, start by writing a function that will become the catamorphism, based on cata:

maybeF = cata alg . unMaybeFix
  where alg  NothingF = undefined
        alg (JustF x) = undefined

While this compiles, with its undefined implementations, it obviously doesn't do anything useful. I find, however, that it helps me think. How can you return a value of the type c from the NothingF case? You could pass an argument to the maybeF function:

maybeF n = cata alg . unMaybeFix
  where alg  NothingF = n
        alg (JustF x) = undefined

The JustF case, contrary to NothingF, already contains a value, and it'd be incorrect to ignore it. On the other hand, x is a value of type a, and you need to return a value of type c. You'll need a function to perform the conversion, so pass such a function as an argument to maybeF as well:

maybeF :: c -> (a -> c) -> MaybeFix a -> c
maybeF n f = cata alg . unMaybeFix
  where alg  NothingF = n
        alg (JustF x) = f x

This works. Since cata has the type Functor f => (f a -> a) -> Fix f -> a, that means that alg has the type f a -> a. In the case of MaybeF, the compiler infers that the alg function has the type MaybeF a c -> c, which is just what you need!

You can now see what the carrier type c is for. It's the type that the algebra extracts, and thus the type that the catamorphism returns.

Notice that maybeF, like the above C# Match method, takes as arguments a pair of a value and a function (together with the Maybe value itself). Those are two representations of the same idea. Furthermore, this is nearly identical to the maybe function in Haskell's Data.Maybe module. I found it fitting, therefore, to name the function maybeF.

Basis #

You can implement most other useful functionality with maybeF. Here's the Functor instance:

instance Functor MaybeFix where
  fmap f = maybeF nothingF (justF . f)

Since fmap should be a structure-preserving map, you'll have to map the nothing case to the nothing case, and just to just. When calling maybeF, you must supply a value for the nothing case and a function to deal with the just case. The nothing case is easy to handle: just use nothingF.

In the just case, first apply the function f to map from a to b, and then use justF to wrap the b value in a new MaybeFix container to get MaybeFix b.

Applicative is a little harder, but not much:

instance Applicative MaybeFix where
  pure = justF
  f <*> x = maybeF nothingF (<$> x) f

The pure function is just justF (pun intended). The apply operator <*> is more complex.

Both f and x surrounding <*> are MaybeFix values: f is MaybeFix (a -> b), and x is MaybeFix a. While it's becoming increasingly clear that you can use a catamorphism like maybeF to implement most other functionality, to which MaybeFix value should you apply it? To f or x?

Both are possible, but the code looks (in my opinion) more readable if you apply it to f. Again, when f is nothing, return nothingF. When f is just, use the functor instance to map x (using the infix fmap alias <$>).
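
You can convince yourself of this with the standard Maybe type, where the built-in maybe function plays the role of maybeF; a minimal sketch of mine:

-- Apply for the standard Maybe type, implemented with maybe alone.
-- The catamorphism is applied to the function value f, as discussed above.
apViaMaybe :: Maybe (a -> b) -> Maybe a -> Maybe b
apViaMaybe f x = maybe Nothing (<$> x) f

-- apViaMaybe (Just (+ 1)) (Just 41) evaluates to Just 42
-- apViaMaybe Nothing (Just 41) evaluates to Nothing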

The Monad instance, on the other hand, is almost trivial:

instance Monad MaybeFix where
  x >>= f = maybeF nothingF f x

As usual, map nothing to nothing by supplying nothingF. f is already a function that returns a MaybeFix b value, so just use that.

The Foldable instance is likewise trivial (although, as you'll see below, you can make it even more trivial):

instance Foldable MaybeFix where
  foldMap = maybeF mempty

The foldMap function must return a Monoid instance, so for the nothing case, simply return the identity, mempty. Furthermore, foldMap takes a function a -> m, but since the foldMap implementation is point-free, you can't 'see' that function as an argument.
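
The standard Maybe type's Foldable instance behaves the same way; a GHCi session of mine:

Prelude> foldMap show (Just 42)
"42"
Prelude> foldMap show (Nothing :: Maybe Integer)
""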

Finally, for the sake of completeness, here's the Traversable instance:

instance Traversable MaybeFix where
  sequenceA = maybeF (pure nothingF) (justF <$>)

In the nothing case, you can put nothingF into the desired Applicative with pure. In the just case you can take advantage of the desired Applicative being also a Functor by simply mapping the inner value(s) with justF.

Since the Applicative instance for MaybeFix defines pure as justF, you could alternatively write the Traversable instance like this:

instance Traversable MaybeFix where
  sequenceA = maybeF (pure nothingF) (pure <$>)

I like this alternative less, since I find it confusing. The two appearances of pure relate to two different types. The pure in pure nothingF has the type MaybeFix a -> f (MaybeFix a), while the pure in pure <$> has the type a -> MaybeFix a!

Both implementations work the same, though:

Prelude Fix Maybe> sequenceA (justF ("foo", 42))
("foo",MaybeFix {unMaybeFix = Fix (JustF 42)})

Here, I'm using the Applicative instance of (,) String.

Finally, you can implement conversions to and from the standard Maybe type, using ana as the dual of cata:

toMaybe :: MaybeFix a -> Maybe a
toMaybe = maybeF Nothing return
 
fromMaybe :: Maybe a -> MaybeFix a
fromMaybe = MaybeFix . ana coalg
  where coalg  Nothing = NothingF
        coalg (Just x) = JustF x

This demonstrates that MaybeFix is isomorphic to Maybe, which again establishes that maybeF and maybe are equivalent.

Alternatives #

As usual, the above maybeF isn't the only possible catamorphism. A trivial variation is to flip its arguments, but other variations exist.

It's a recurring observation that a catamorphism is just a generalisation of a fold. In the above code, the Foldable instance already looked as simple as anyone could desire, but another variation of a catamorphism for Maybe is this gratuitously embellished definition:

maybeF :: (a -> c -> c) -> c -> MaybeFix a -> c
maybeF f n = cata alg . unMaybeFix
  where alg  NothingF = n
        alg (JustF x) = f x n

This variation redundantly passes n as an argument to f, thereby changing the type of f to a -> c -> c. There's no particular motivation for doing this, apart from establishing that this catamorphism is exactly the same as the fold:

instance Foldable MaybeFix where
  foldr = maybeF
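
The standard Maybe type's foldr has exactly that shape, as this GHCi session of mine shows:

Prelude> foldr (\x _ -> x * 10) 1 (Just 42)
420
Prelude> foldr (\x _ -> x * 10) 1 Nothing
1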

You can still implement the other instances as well, but the rest of the code suffers in clarity. Here's a few examples:

instance Functor MaybeFix where
  fmap f = maybeF (const . justF . f) nothingF
 
instance Applicative MaybeFix where
  pure = justF
  f <*> x = maybeF (const . (<$> x)) nothingF f
 
instance Monad MaybeFix where
  x >>= f = maybeF (const . f) nothingF x

I find that the need to compose with const does nothing to improve the readability of the code, so this variation is mostly, I think, of academic interest. It does show, though, that the catamorphism of Maybe is isomorphic to its fold, as the diagram in the overview article indicated:

Catamorphisms and folds as sets, for various sum types.

You can demonstrate that this variation, too, is isomorphic to Maybe with a set of conversions:

toMaybe :: MaybeFix a -> Maybe a
toMaybe = maybeF (const . return) Nothing
 
fromMaybe :: Maybe a -> MaybeFix a
fromMaybe = MaybeFix . ana coalg
  where coalg  Nothing = NothingF
        coalg (Just x) = JustF x

Only toMaybe has changed, compared to above; the fromMaybe function remains the same. The only change to toMaybe is that the arguments have been flipped, and return is now composed with const.

Since (according to Conceptual Mathematics) isomorphisms are transitive, this means that the two variations of maybeF are isomorphic. The latter, more complex, variation of maybeF is identical to foldr, so we can consider the simpler, more frequently encountered variation a simplification of fold.

Summary #

The catamorphism for Maybe is the same as its Church encoding: a pair made from a default value and a function. In Haskell's base library, this is simply called maybe. In the above C# code, it's called Match.

This function is total, and you can implement any other functionality you need with it. I therefore consider it the canonical representation of Maybe, which is also why it annoys me that most Maybe implementations come equipped with partial functions like fromJust, or F#'s Option.get. Those functions shouldn't be part of a sane and reasonable Maybe API. You shouldn't need them.

In this series of articles about catamorphisms, you've now seen the first example of catamorphism and fold coinciding. In the next article, you'll see another such example - probably the most well-known catamorphism example of them all.

Next: List catamorphism.


Peano catamorphism

Monday, 13 May 2019 05:10:00 UTC

The catamorphism for Peano numbers involves a base value and a successor function.

This article is part of an article series about catamorphisms. A catamorphism is a universal abstraction that describes how to digest a data structure into a potentially more compact value.

This article presents the catamorphism for natural numbers, as well as how to identify it. The beginning of the article presents the catamorphism in C#, with examples. The rest of the article describes how I deduced the catamorphism. This part of the article presents my work in Haskell. Readers not comfortable with Haskell can just read the first part, and consider the rest of the article as an optional appendix.

C# catamorphism #

In this article, I model natural numbers using Peano's model, and I'll reuse the Church-encoded implementation you've seen before. The catamorphism for INaturalNumber is:

public static T Cata<T>(this INaturalNumber n, T zero, Func<T, T> succ)
{
    return n.Match(zero, p => p.Cata(succ(zero), succ));
}

Notice that this is an extension method on INaturalNumber, taking two other arguments: a zero argument which will be returned when the number is zero, and a successor function to return the 'next' value based on a previous value.

The zero argument is the easiest to understand. It's simply passed to Match so that this is the value that Cata returns when n is zero.

The other argument to Match must be a Func<INaturalNumber, T>; that is, a function that takes an INaturalNumber as input and returns a value of the type T. You can supply such a function by using a lambda expression. This expression receives a predecessor p as input, and has to return a value of the type T. The only function available in this context, however, is succ, which has the type Func<T, T>. How can you make that work?

As is often the case when programming with generics, it pays to follow the types. A Func<T, T> requires an input of the type T. Do you have any T objects around?

The only available T object is zero, so you could call succ(zero) to produce another T value. While you could return that immediately, that'd ignore the predecessor p, so that's not going to work. Another option, which is the one that works, is to recursively call Cata with succ(zero) as the zero value, and succ as the second argument.

What this accomplishes is that Cata keeps recursively calling itself until n is zero. The zero object, however, will be the result of repeated applications of succ(zero). In other words, succ will be called as many times as the natural number. If n is 7, then succ will be called seven times, the first time with the original zero value, the next time with the result of succ(zero), the third time with the result of succ(succ(zero)), and so on. If the number is 42, succ will be called 42 times.
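
If the recursion is hard to follow, here's the same scheme as a self-contained Haskell sketch of mine, using Int as a stand-in for the Peano numbers; the name natCata is my own:

-- 'Match' on zero; otherwise recurse with next applied to the accumulated
-- zero value, exactly like the C# Cata method above.
natCata :: a -> (a -> a) -> Int -> a
natCata zero next n = if n <= 0 then zero else natCata (next zero) next (n - 1)

-- natCata 0 (+ 1) 7 evaluates to 7, because (+ 1) is applied seven times.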

Arithmetic #

You can implement all the functionality you saw in the article on Church-encoded natural numbers. You can start gently by converting a Peano number into a normal C# int:

public static int Count(this INaturalNumber n)
{
    return n.Cata(0, x => 1 + x);
}

You can play with the functionality in C# Interactive to get a feel for how it works:

> NaturalNumber.Eight.Count()
8
> NaturalNumber.Five.Count()
5

The Count extension method uses Cata to count the level of recursion. The zero value is, not surprisingly, 0, and the successor function simply adds one to the previous number. Since the successor function runs as many times as encoded by the Peano number, and since the initial value is 0, you get the integer value of the number when Cata exits.

A useful building block you can write using Cata is a function to increment a natural number by one:

public static INaturalNumber Increment(this INaturalNumber n)
{
    return n.Cata(One, p => new Successor(p));
}

This, again, works as you'd expect:

> NaturalNumber.Zero.Increment().Count()
1
> NaturalNumber.Eight.Increment().Count()
9

With the Increment method and Cata, you can easily implement addition:

public static INaturalNumber Add(this INaturalNumber x, INaturalNumber y)
{
    return x.Cata(y, p => p.Increment());
}

The trick here is to use y as the zero case for x. In other words, if x is zero, then Add should return y. If x isn't zero, then Increment it as many times as the number encodes, but starting at y. In other words, start with y and Increment x times.
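
Expressed with ordinary numbers, the trick reduces to a one-liner; iterate and (!!) are standard Haskell, the rest is my own illustration:

-- Start at y, and take the successor x times, mirroring the Add method.
add :: Int -> Int -> Int
add x y = iterate (+ 1) y !! x

-- add 3 4 evaluates to 7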

The catamorphism makes it much easier to implement arithmetic operations. Just consider multiplication, which wasn't the simplest implementation in the previous article. Now, it's as simple as this:

public static INaturalNumber Multiply(this INaturalNumber x, INaturalNumber y)
{
    return x.Cata(Zero, p => p.Add(y));
}

Start at 0 and simply Add(y) x times.

> NaturalNumber.Nine.Multiply(NaturalNumber.Four).Count()
36

Finally, you can also implement some common predicates:

public static IChurchBoolean IsZero(this INaturalNumber n)
{
    return n.Cata<IChurchBoolean>(new ChurchTrue(), _ => new ChurchFalse());
}
 
public static IChurchBoolean IsEven(this INaturalNumber n)
{
    return n.Cata<IChurchBoolean>(new ChurchTrue(), b => new ChurchNot(b));
}
 
public static IChurchBoolean IsOdd(this INaturalNumber n)
{
    return new ChurchNot(n.IsEven());
}

Particularly IsEven is elegant: It considers zero even, so simply uses a new ChurchTrue() object for that case. In all other cases, it alternates between false and true by negating the predecessor.

> NaturalNumber.Three.IsEven().ToBool()
false

It seems convincing that we can use Cata to implement all the other functionality we need. That seems to be a characteristic of a catamorphism. Still, how do we know that Cata is, in fact, the catamorphism for natural numbers?

Peano F-Algebra #

As in the previous article, I'll use Fix and cata as explained in Bartosz Milewski's excellent article on F-Algebras. The NatF type comes from his article as well:

data NatF a = ZeroF | SuccF a deriving (Show, Eq, Read)
 
instance Functor NatF where
  fmap _ ZeroF = ZeroF
  fmap f (SuccF x) = SuccF $ f x

You can use the fixed point of this functor to define numbers with a shared type. Here's just the first ten:

zeroF, oneF, twoF, threeF, fourF, fiveF, sixF, sevenF, eightF, nineF :: Fix NatF
zeroF  = Fix ZeroF
oneF   = Fix $ SuccF zeroF
twoF   = Fix $ SuccF oneF
threeF = Fix $ SuccF twoF
fourF  = Fix $ SuccF threeF
fiveF  = Fix $ SuccF fourF
sixF   = Fix $ SuccF fiveF
sevenF = Fix $ SuccF sixF
eightF = Fix $ SuccF sevenF
nineF  = Fix $ SuccF eightF

That's all you need to identify the catamorphism.

Haskell catamorphism #

At this point, you have two out of three elements of an F-Algebra. You have an endofunctor (NatF), and an object a, but you still need to find a morphism NatF a -> a.

As in the previous article, start by writing a function that will become the catamorphism, based on cata:

natF = cata alg
  where alg ZeroF = undefined
        alg (SuccF predecessor) = undefined

While this compiles, with its undefined implementations, it obviously doesn't do anything useful. I find, however, that it helps me think. How can you return a value of the type a from the ZeroF case? You could pass an argument to the natF function:

natF z = cata alg
  where alg ZeroF = z
        alg (SuccF predecessor) = undefined

In the SuccF case, predecessor is already of the polymorphic type a, so instead of returning a constant value, you can supply a function as an argument to natF and use it in that case:

natF :: a -> (a -> a) -> Fix NatF -> a
natF z next = cata alg
  where alg ZeroF = z
        alg (SuccF predecessor) = next predecessor

This works. Since cata has the type Functor f => (f a -> a) -> Fix f -> a, that means that alg has the type f a -> a. In the case of NatF, the compiler infers that the alg function has the type NatF a -> a, which is just what you need!

For good measure, I should point out that, as usual, the above natF function isn't the only possible catamorphism. Trivially, you can flip the order of the arguments, and this would also be a catamorphism. These two alternatives are isomorphic.

The natF function identifies the Peano number catamorphism, which is equivalent to the C# representation in the beginning of the article. I called the function natF, because there's a tendency in Haskell to name the 'case analysis' or catamorphism after the type, just with a lower-case initial letter.

Basis #

A catamorphism can be used to implement most (if not all) other useful functionality, like all of the above C# functionality. In fact, I wrote the Haskell code first, and then translated my implementations into the above C# extension methods. This means that the following functions apply the same reasoning:

evenF :: Fix NatF -> Fix BoolF
evenF = natF trueF notF
 
oddF :: Fix NatF -> Fix BoolF
oddF = notF . evenF
 
incF :: Fix NatF -> Fix NatF
incF = natF oneF $ Fix . SuccF
 
addF :: Fix NatF -> Fix NatF -> Fix NatF
addF x y = natF y incF x
 
multiplyF :: Fix NatF -> Fix NatF -> Fix NatF
multiplyF x y = natF zeroF (addF y) x

Here are some GHCi usage examples:

Prelude Boolean Nat> evenF eightF
Fix TrueF
Prelude Boolean Nat> toNum $ multiplyF sevenF sixF
42

The toNum function corresponds to the above Count C# method. It is, again, based on cata. You can use ana to convert the other way:

toNum :: Num a => Fix NatF -> a
toNum = natF 0 (+ 1)
 
fromNum :: (Eq a, Num a) => a -> Fix NatF
fromNum = ana coalg
  where coalg 0 = ZeroF
        coalg x = SuccF $ x - 1

This demonstrates that Fix NatF is isomorphic to the non-negative values of Num instances, such as Integer.

Summary #

The catamorphism for Peano numbers is a pair consisting of a zero value and a successor function. The most common descriptions of catamorphisms that I've found emphasise how a catamorphism is like a fold; an operation you can use to reduce a data structure like a list or a tree to a single value. This is what happens here, but even so, the Fix NatF type isn't a Foldable instance. The reason is that while NatF is a polymorphic type, its fixed point Fix NatF isn't. Haskell's Foldable type class requires foldable containers to be polymorphic (what C# programmers would call 'generic').

When I first ran into the concept of a catamorphism, it was invariably described as a 'generalisation of fold'. The examples shown were always how the catamorphism for linked list is the same as its fold. I found such explanations unhelpful, because I couldn't understand how those two concepts differ.

The purpose of this article series is to show just how much more general the abstraction of a catamorphism is. In this article you saw how an infinitely recursive data structure like Peano numbers has a catamorphism, even though it isn't a parametrically polymorphic type. In the next article, though, you'll see the first example of a polymorphic type where the catamorphism coincides with the fold.

Next: Maybe catamorphism.


Boolean catamorphism

Monday, 06 May 2019 12:30:00 UTC

The catamorphism for Boolean values is just the common ternary operator.

This article is part of an article series about catamorphisms. A catamorphism is a universal abstraction that describes how to digest a data structure into a potentially more compact value.

This article presents the catamorphism for Boolean values, as well as how you identify it. The beginning of this article presents the catamorphism in C#, with a simple example. The rest of the article describes how I deduced the catamorphism. That part of the article presents my work in Haskell. Readers not comfortable with Haskell can just read the first part, and consider the rest of the article as an optional appendix.

C# catamorphism #

The catamorphism for Boolean values is the familiar ternary conditional operator:

> DateTime.Now.Day % 2 == 0 ? "Even date" : "Odd date"
"Odd date"

Given a Boolean expression, you basically provide two values: one to use in case the Boolean expression is true, and one to use in case it's false.

For Church-encoded Boolean values, the catamorphism looks like this:

T Match<T>(T trueCase, T falseCase);

This is an instance method where you must, again, supply two alternatives. When the instance represents true, you'll get the left-most value trueCase; otherwise, when the instance represents false, you'll get the right-most value falseCase.

The catamorphism turns out to be the same as the Church encoding. This seems to be a recurring pattern.
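
For example, here's a quick sketch of Match in use, assuming the ChurchTrue and ChurchFalse classes from the Church-encoded Boolean article (the same classes that appear in the Peano predicates above):

> new ChurchTrue().Match("Even date", "Odd date")
"Even date"
> new ChurchFalse().Match("Even date", "Odd date")
"Odd date"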

Alternatives #

To be accurate, there's more than one catamorphism for Boolean values. It's only by convention that the value corresponding to true goes on the left, and the false value goes to the right. You could flip the arguments, and it would still be a catamorphism. This is, in fact, what Haskell's Data.Bool module does:

Prelude Data.Bool> bool "Odd date" "Even date" $ even date
"Odd date"

The module documentation calls this the "Case analysis for the Bool type", instead of a catamorphism, but the two representations are isomorphic:

"This is equivalent to if p then y else x; that is, one can think of it as an if-then-else construct with its arguments reordered."
This is another recurring result. There's typically more than one catamorphism, but the alternatives are isomorphic. In this article series, I'll mostly present the alternative that strikes me as the one you'll encounter most frequently.

Fix #

In this and future articles, I'll derive the catamorphism from an F-Algebra. For an introduction to F-Algebras and fixed points, I'll refer you to Bartosz Milewski's excellent article on the topic. In it, he presents a generic data type for a fixed point, as well as polymorphic functions for catamorphisms and anamorphisms. While they're available in his article, I'll repeat them here for good measure:

newtype Fix f = Fix { unFix :: f (Fix f) }
 
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix
 
ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg = Fix . fmap (ana coalg) . coalg

This should be recognisable from Bartosz Milewski's article. With one small exception, this is just a copy of the code shown there.

Boolean F-Algebra #

While F-Algebras and fixed points are mostly used for recursive data structures, you can also define an F-Algebra for a non-recursive data structure. As data types go, they don't get much simpler than Boolean values, which are just two mutually exclusive cases. In order to make a Functor out of the definition, though, you can equip it with a carrier type:

data BoolF a = TrueF | FalseF deriving (Show, Eq, Read)
 
instance Functor BoolF where
  fmap _  TrueF =  TrueF
  fmap _ FalseF = FalseF

The Functor instance simply ignores the carrier type and just returns TrueF and FalseF, respectively. It'd seem that nothing happens, but at the type level, this is still a translation from BoolF a to BoolF b. Not much of a function, perhaps, but definitely an endofunctor.

One more note, which applies to all future articles in this series as well: you'll notice that most types and custom functions come with an F suffix. This is simply a suffix I've added to avoid conflicts with built-in types, values, and functions, such as Bool, True, the and function, and so on. The F is for F-Algebra.

You can lift these values into Fix in order to make it fit with the cata function:

trueF, falseF :: Fix BoolF
trueF  = Fix  TrueF
falseF = Fix FalseF

That's all you need to identify the catamorphism.

Haskell catamorphism #

At this point, you have two out of three elements of an F-Algebra. You have an endofunctor (BoolF), and an object a, but you still need to find a morphism BoolF a -> a. At first glance, this seems impossible, because neither TrueF nor FalseF actually contain a value of the type a. How, then, can you conjure an a value out of thin air?

The cata function has the answer.

What you can do is to start writing the function that will become the catamorphism, basing it on cata:

boolF = cata alg
  where alg  TrueF = undefined
        alg FalseF = undefined

While this compiles, with its undefined implementations, it obviously doesn't do anything useful. I find, however, that it helps me think. How can you return a value of the type a from the TrueF case? You could pass an argument to the boolF function:

boolF x = cata alg
  where alg  TrueF = x
        alg FalseF = undefined

That seems promising, so do that for the FalseF case as well:

boolF :: a -> a -> Fix BoolF -> a
boolF x y = cata alg
  where alg  TrueF = x
        alg FalseF = y

This works. Since cata has the type Functor f => (f a -> a) -> Fix f -> a, that means that alg has the type f a -> a. In the case of BoolF, the compiler infers that the alg function has the type BoolF a -> a, which is just what you need!

The boolF function identifies the Boolean catamorphism, which is equivalent to the representations in the beginning of the article. I called the function boolF, because there's a tendency in Haskell to name the 'case analysis' or catamorphism after the type, just with a lower-case initial letter.

You can use the boolF function just like the above ternary operator:

Prelude Boolean Nat> boolF "Even date" "Odd date" $ evenF dateF
"Odd date"

Here, I've also used evenF from the Nat module shown in the next article in the series.

From the above definition of boolF, it should also be clear that you can arrive at the alternative catamorphism defined by Data.Bool.bool by simply flipping x and y.

Basis #

A catamorphism can be used to implement most (if not all) other useful functionality. For Boolean values, that would be the standard Boolean operations and, or, and not:

andF :: Fix BoolF -> Fix BoolF -> Fix BoolF
andF x y = boolF y falseF x
 
orF :: Fix BoolF -> Fix BoolF -> Fix BoolF
orF x y = boolF trueF y x
 
notF :: Fix BoolF -> Fix BoolF
notF = boolF falseF trueF

They work as you'd expect them to work:

Prelude Boolean> andF trueF falseF
Fix FalseF
Prelude Boolean> orF trueF falseF
Fix TrueF
Prelude Boolean> orF (notF trueF) falseF
Fix FalseF

You can also implement conversion to and from the built-in Bool type:

toBool :: Fix BoolF -> Bool
toBool = boolF True False
 
fromBool :: Bool -> Fix BoolF
fromBool = ana coalg
  where coalg  True = TrueF
        coalg False = FalseF

This demonstrates that Fix BoolF is isomorphic to Bool.

Summary #

The catamorphism for Boolean values is a function, method, or operator akin to the familiar ternary conditional operator. The most common descriptions of catamorphisms that I've found emphasise how a catamorphism is like a fold; an operation you can use to reduce a data structure like a list or a tree to a single value. In that light, it may be surprising that something as simple as Boolean values has an associated catamorphism.

Since Fix BoolF is isomorphic to Bool, you may wonder what the point is. Why define this data type, and implement functions like andF, orF, and notF?

The code presented here is nothing but an analysis tool. It's a way to identify the catamorphism for Boolean values.

Next: Peano catamorphism.


Catamorphisms

Monday, 29 April 2019 18:31:00 UTC

A catamorphism is a general abstraction that enables you to handle multiple values, for example in order to reduce them to a single value.

This article series is part of an even larger series of articles about the relationship between design patterns and category theory. In another article series in this big series of articles, you learned about functors, applicatives, and other types of data containers.

You may have heard about map-reduce architectures. Much software can be designed around two general types of operations: those that map data, and those that reduce data. A functor is a container of data that supports structure-preserving maps. Thus, you can think of functors as the general abstraction for map operations (also sometimes called projections). Does a similar universal abstraction exist for operations that reduce data?

Yes, that abstraction is called a catamorphism.
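
In LINQ vocabulary, a minimal sketch of that division of labour could look like this: Select maps, while Sum (one of the reductions discussed below) reduces:

> new[] { 1, 2, 3 }.Select(i => i * 2).Sum()
12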

Aggregation #

Catamorphism is an intimidating word, so let's start with an example. You often have a collection of values that you'd like to reduce to a single value. Such a collection can contain arbitrarily complex objects, but I'll keep it simple and start with a collection of numbers:

new[] { 42, 1337, 2112, 90125, 5040, 7, 1984 };

This particular list of numbers is an array, but that's not important. What comes next works for any IEnumerable<T>, including arrays. I only chose an array because the C# syntax for array creation is more compact than for other collection types.

How do you reduce those seven numbers to a single number? That depends on what you want that number to tell you. One option is to add the numbers together. There's a specific, built-in function for that:

> new[] { 42, 1337, 2112, 90125, 5040, 7, 1984 }.Sum()
100647
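
Since Sum is an extension method on IEnumerable<int>, the concrete collection type makes no difference; for example, a List<int> gives the same result:

> new List<int> { 42, 1337, 2112, 90125, 5040, 7, 1984 }.Sum()
100647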

The Sum extension method is one of many built-in functions that enable you to reduce a list of numbers to a single number: Average, Max, Count, and so on.

What do you do, though, if you need to reduce many values to one, and there's no existing function for that? What if, for example, you need to add all the numbers using modulo 360 addition?

In that case, you use Aggregate:

> new[] { 42, 1337, 2112, 90125, 5040, 7, 1984 }.Aggregate((x, y) => (x + y) % 360)
207

The way to interpret this result is that the initial array represents a sequence of rotations (measured in degrees), and the result is the final angle after all the rotations have completed.

In other (functional) languages, such a 'reduce' operation is called a fold. The metaphor, I suppose, is that you fold multiple values together, two by two.

A fold is a catamorphism, but a catamorphism is a more general abstraction. For some data structures, the catamorphism is more powerful than the fold, but for collections, there's no difference.

There's one edge case we need to be aware of, though. What if the collection is empty?

Aggregation of empty containers #

What happens if you attempt to aggregate an empty collection?

> new int[0].Aggregate((x, y) => (x + y) % 360)
Sequence contains no elements
  + System.Linq.Enumerable.Aggregate<TSource>(IEnumerable<TSource>, Func<TSource, TSource, TSource>)

The Aggregate method throws an exception because it doesn't know how to deal with empty collections. The lambda expression you supply tells the Aggregate method how to combine two values into one. This is, for instance, how semigroups accumulate.
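
As an aside, any semigroup works with this seedless overload, as long as the collection is non-empty; string concatenation, for example:

> new[] { "foo", "bar", "baz" }.Aggregate((x, y) => x + y)
"foobarbaz"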

The lambda expression handles all cases where you have two or more values. If you have only a single value, then that's no problem either:

> new[] { 1337 }.Aggregate((x, y) => (x + y) % 360)
1337

In that case, the lambda expression isn't involved at all, because the single value is simply returned without modification. In this example, this could even be interpreted as being incorrect, since you'd expect the result to be 257 (1337 % 360).

It's safer to use the Aggregate overload that takes a seed value:

> new int[0].Aggregate(0, (x, y) => (x + y) % 360)
0

Not only does that gracefully handle empty collections, it also gives you a 'better' result for a single value:

> new[] { 1337 }.Aggregate(0, (x, y) => (x + y) % 360)
257

This works better because the method always starts with the seed value, which means that even if there's only a single value (1337), the lambda expression still runs ((0 + 1337) % 360).

This overload of Aggregate has a different type, though:

public static TAccumulate Aggregate<TSource, TAccumulate>(
    this IEnumerable<TSource> source,
    TAccumulate seed,
    Func<TAccumulate, TSource, TAccumulate> func);

Notice that the func doesn't require the accumulator to have the same type as elements from the source collection. This enables you to translate on the fly, so to speak. You can still use binary operations like the above modulo 360 addition, because that just implies that both TSource and TAccumulate are int.
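
To see the two type parameters diverge, consider this contrived sketch, which reduces numbers to a string, so that TSource is int while TAccumulate is string:

> new[] { 42, 1337, 2112 }.Aggregate("rotations:", (acc, i) => acc + " " + i)
"rotations: 42 1337 2112"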

With this overload, you could, for example, use the Angle class to perform the work:

> new[] { 42, 1337, 2112, 90125, 5040, 7, 1984 }
. .Aggregate(Angle.Identity, (a, i) => a.Add(Angle.FromDegrees(i)))
[{ Angle = 207° }]

Now the seed argument is Angle.Identity, which implies that TAccumulate is Angle. The source is still a collection of numbers, so TSource is int. Hence, I called the angle a and the integer i in the lambda expression. The output is an Angle object that represents 207°.

That Aggregate overload is the catamorphism for collections. It reduces a collection to an object.

Catamorphisms and folds #

Is catamorphism just an intimidating word for aggregate, accumulate, fold, or reduce?

It took me a long time to be able to tell the difference, because in many cases, it seems that there's no difference. The purpose of this article series is to make the distinction clearer. In short, a catamorphism is a more general concept.

Catamorphisms and folds as sets, for various sum types.

For some data structures, such as Boolean values, or Peano numbers, the catamorphism is all there is; no fold exists. For other data structures, such as Maybe or collections, the catamorphism and the fold coincide. Still other data structures, such as Either and trees, support folding, but the fold is based on the catamorphism. For those types, there are operations you can do with the catamorphism that are impossible to implement with the fold function. One example is that a tree's catamorphism enables you to count its leaves; you can't do that with its fold function.

You'll see plenty of examples in the articles of this series.

Each of these articles will contain a fair amount of Haskell code, but even if you're an object-oriented programmer who doesn't read Haskell, you should still scan them, as I'll start each with some C# examples. The Haskell code, by the way, is available on GitHub.

Greek #

When encountering a word like catamorphism, your reaction might be:

"Catamorphism?! What does that even mean? It's all Greek to me."
Indeed, it's Greek, as is so much of mathematical terminology. The cata prefix means 'down'; lots of words start with cata, like catastrophe, catalogue, catatonia, catacomb, etc.

The morph suffix generally means 'shape'. While the cata prefix appears in common words like catastrophe, the morph suffix mostly appears in more academic contexts. Programmers will probably have encountered polymorphism and skeuomorphism, not to mention isomorphism. While morphism is heavily used in mathematics, other sciences use the suffix too, like dimorphism in biology.

In category theory, a morphism is basically just an arrow that points from one object to another. Think of it as a function.

If a morphism is just a function, why don't we just call it that, then? Is this intimidating terminology really necessary? Yes and no.

If someone had originally figured all of this out in the context of mainstream programming, he or she would probably have used friendlier names, like condense, reduce, fold, and so on. This would have been more encouraging, although I'm not sure it would have been better.

In software architecture we use many overloaded terms. For example, what's a service, or a client? What does tier mean? Is it the same as a layer, or is it something different? What's the difference between a library and a framework?

At least a word like catamorphism is concise. It's not in common use, so it isn't overloaded and vague.

Another, more pragmatic, concern is that whether you like it or not, the terminology is already established. Mathematicians decided to name the concept catamorphism. While the name may seem intimidating, I prefer to teach concepts like these using established terminology. This means that if my articles are unclear, you can do further research with other resources. That's the benefit of established terminology, whether you like the specific words or not.

Summary #

You can compose entire applications based on the abstractions of map and reduce. You can see one example of such a system in my A Functional Architecture with F# Pluralsight course.

The terms map and reduce may, however, not be helpful, because it may not be clear exactly what types of data you can map, and what types you can reduce. One of the most important goals of this overall article series about universal abstractions is to help you identify when such software architectures apply. This is more often than you think.

What sort of data can you map? You can map functors. While the set of all functors is hardly finite, there's a catalogue of well-known functors, of which I've covered some, but not all. That catalogue contains data containers like Maybe, Tree, lazy computations, tasks, and perhaps a score more. The catalogue of (actually useful) functors has, in my experience, a manageable size.

Likewise you could ask: What sort of data can you reduce? How do you implement that reduction? Again, there's a compact set of well-known catamorphisms. How do you reduce a collection? You use its catamorphism (which is equal to a fold). How do you reduce a tree? You use its catamorphism. How do you reduce an Either object? You use its catamorphism.

When we learn new programming languages, new libraries, new frameworks, we gladly invest time in learning hundreds, if not thousands, of keywords, APIs, extensibility points, and so on. May I offer, for your consideration, that your mental resources are better spent learning only a handful of universal abstractions?

Next: Boolean catamorphism.


Comments

"In other (functional) languages, such a 'reduce' operation is called a fold. The metaphor, I suppose, is that you fold multiple values together, two by two."
"...the Aggregate overload that takes a seed value..."

My impression is that part of the functional programming style is to avoid function overloading. Consistent with that is the naming used by Language Ext for these concepts. In Language Ext, the function with type (in F# notation) seq<'a> -> ('a -> 'a -> 'a) -> 'a is called Reduce and the function with type (in F# notation) seq<'a> -> 'b -> ('b -> 'a -> 'b) -> 'b is called Fold.

I don't know the origin of these two names, but I remember the difference by thinking about preparing food. In cooking, reduction increases the concentration of a liquid by boiling away some of its water. I think of the returned 'a as being a highly concentrated form of the input sequence since every sequence element (and only those elements) was used to create that return value. In baking, folding is a technique that carefully combines two mixtures into one. This reminds me of how the seed value 'b and the sequence of 'a are (typically) two different types and are combined by the given function. They are not perfect analogies, but they work for me.

On a related note, I dislike that Reduce returns 'a instead of Option<'a>.

2019-07-12 12:20 UTC

Tyson, thank you for writing. As you may know, my book liberally employs cooking analogies, but I admit that I've never thought about reduction and fold in that light before. Good analogies, although perhaps a bit strained (pun intended).

They do work well, though, for the reasons you give.

"the functional programming style is to avoid function overloading"
As far as I can tell, this has more to do with the combination of Hindley–Milner type inference and currying you encounter in Haskell and ML-derived languages than it has to do with functional programming in itself. If I recall correctly, Clojure makes copious use of overloading.

The problem with overloading in a language like F# is that if you imagine that the function you refer to as fold was also called reduce, a partially applied expression like this would be ambiguous:

let foo = reduce xs bar

What is bar, here? If reduce is overloaded, is it a function, or is it a 'seed value'?

As far as I can tell, the compiler can't infer that, so instead of compromising on type inference, the languages in question disallow function overloading.

Notice, additionally, that F# does allow method overloading, for the part of the language that enables interoperability with the rest of .NET. In that part of the language, type inference rarely works anyway. I'm not an expert in how the F# compiler works, but I've always understood this to indicate that the interop part of F# isn't based on Hindley-Milner. I don't see how it could be, since the .NET/IL type system isn't a Hindley-Milner type system.

The reduce function you refer to is, by the way, based on a semigroup instance. More specifically, it's simply how semigroups accumulate. I agree that reduce is partial, and therefore not as pretty as one could wish, but I think a more appropriate solution is to define it on NotEmptyCollection<T>, instead of on IEnumerable<T>, as shown in that article.

In other words, I don't think reduce belongs on IEnumerable<T> at all. I know it's in both F# and Haskell, but my personal opinion is that it shouldn't be, just like Haskell's head function ought not to exist either...

2019-07-14 16:29 UTC
