ploeh blog danish software design
Tester-Doer isomorphisms
The Tester-Doer pattern is equivalent to the Try-Parse idiom; both are equivalent to Maybe.
This article is part of a series of articles about software design isomorphisms. An isomorphism is when a bi-directional lossless translation exists between two representations. Such translations exist between the Tester-Doer pattern and the Try-Parse idiom. Both can also be translated into operations that return Maybe.
Given an implementation that uses one of those three idioms or abstractions, you can translate your design into one of the other options. This doesn't imply that each is of equal value. When it comes to composability, Maybe is superior to the two other alternatives, and Tester-Doer isn't thread-safe.
Tester-Doer #
The first time I explicitly encountered the Tester-Doer pattern was in the Framework Design Guidelines, which is from where I've taken the name. The pattern is, however, older. The idea that you can query an object about whether a given operation would be possible, and then you only perform it if the answer is affirmative, is almost a leitmotif in Object-Oriented Software Construction. Bertrand Meyer often uses linked lists and stacks as examples, but I'll instead use the example that Krzysztof Cwalina and Brad Abrams use:
ICollection<int> numbers = // ...
if (!numbers.IsReadOnly)
    numbers.Add(1);
The idea with the Tester-Doer pattern is that you test whether an intended operation is legal, and only perform it if the answer is affirmative. In the example, you only add to the numbers collection if IsReadOnly is false. Here, IsReadOnly is the Tester, and Add is the Doer.
As Jeffrey Richter points out in the book, this is a dangerous pattern:
"The potential problem occurs when you have multiple threads accessing the object at the same time. For example, one thread could execute the test method, which reports that all is OK, and before the doer method executes, another thread could change the object, causing the doer to fail."

In other words, the pattern isn't thread-safe. While multi-threaded programming was always supported in .NET, this was less of a concern when the guidelines were first published (2006) than it is today. The guidelines were in internal use at Microsoft years before they were published, and there weren't many multi-core processors in use back then.
Another problem with the Tester-Doer pattern is discoverability. If you're looking for a way to add an element to a collection, you'd usually consider your search over once you find the Add method. Even if you wonder Is this operation safe? Can I always add an element to a collection? you might consider looking for a CanAdd method, but not an IsReadOnly property. Most people don't even ask the question in the first place, though.
From Tester-Doer to Try-Parse #
You could refactor such a Tester-Doer API to a single method, which is both thread-safe and discoverable. One option is a variation of the Try-Parse idiom (discussed in detail below). Using it could look like this:
ICollection<int> numbers = // ...
bool wasAdded = numbers.TryAdd(1);
In this special case, you may not need the wasAdded variable, because the original Add operation never returned a value. If, on the other hand, you do care whether or not the element was added to the collection, you'd have to figure out what to do in the case where the return value is true and false, respectively.
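To make the idea concrete, here's a sketch of how such a TryAdd method could be implemented as an extension method. It isn't part of the base class library, and note that merely merging the tester and the doer into one method doesn't, by itself, make the operation thread-safe; the implementation would also have to synchronise access:

```csharp
// A sketch only; TryAdd isn't a BCL extension on ICollection<T>.
// Without additional synchronisation, the test and the do are still
// two distinct operations.
public static bool TryAdd<T>(this ICollection<T> collection, T item)
{
    if (collection.IsReadOnly)
        return false;

    collection.Add(item);
    return true;
}
```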
Compared to the more idiomatic example of the Try-Parse idiom below, you may have noticed that the TryAdd method shown here takes no out parameter. This is because the original Add method returns void; there's nothing to return. From unit isomorphisms, however, we know that unit is isomorphic to void, so we could, more explicitly, have defined a TryAdd method with this signature:
public bool TryAdd(T item, out Unit unit)
There's no point in doing this, however, apart from demonstrating that the isomorphism holds.
From Tester-Doer to Maybe #
You can also refactor the add-to-collection example to return a Maybe value, although in this degenerate case, it makes little sense. If you automate the refactoring process, you'd arrive at an API like this:
public Maybe<Unit> TryAdd(T item)
Using it would look like this:
ICollection<int> numbers = // ...
Maybe<Unit> m = numbers.TryAdd(1);
The contract is consistent with what Maybe implies: You'd get an empty Maybe<Unit> object if the add operation 'failed', and a populated Maybe<Unit> object if the add operation succeeded. Even in the populated case, though, the value contained in the Maybe object would be unit, which carries no further information than its existence.
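A sketch of such a method could look like this, assuming a Maybe<T> type with constructor overloads for the empty and the populated case (a hypothetical type here; the BCL has no Maybe):

```csharp
// Hypothetical; Maybe<T> is not a BCL type. An empty Maybe<Unit>
// signals that the element couldn't be added.
public static Maybe<Unit> TryAdd<T>(this ICollection<T> collection, T item)
{
    if (collection.IsReadOnly)
        return new Maybe<Unit>();

    collection.Add(item);
    return new Maybe<Unit>(Unit.Value);
}
```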
To be clear, this isn't close to a proper functional design because all the interesting action happens as a side effect. Does the design have to be functional? No, it clearly isn't in this case, but Maybe is a concept that originated in functional programming, so you could be misled to believe that I'm trying to pass this particular design off as functional. It's not.
A functional version of this API could look like this:
public Maybe<ICollection<T>> TryAdd(T item)
An implementation wouldn't mutate the object itself, but rather return a new collection with the added item, in case that was possible. This is, however, always possible, because you can always concatenate item to the front of the collection. In other words, this particular line of inquiry is increasingly veering into the territory of the absurd. This isn't, however, a counter-example of my proposition that the isomorphism exists; it's just a result of the initial example being degenerate.
Try-Parse #
Another idiom described in the Framework Design Guidelines is the Try-Parse idiom. This seems to be a coding idiom more specific to the .NET framework, which is the reason I call it an idiom instead of a pattern. (Perhaps it is, after all, a pattern... I'm sure many of my readers are better informed about how problems like these are solved in other languages, and can enlighten me.)
A better name might be Try-Do, since the idiom doesn't have to be constrained to parsing. The example that Cwalina and Abrams supply, however, relates to parsing a string into a DateTime value. Such an API is already available in the base class library. Using it looks like this:
bool couldParse = DateTime.TryParse(candidate, out DateTime dateTime);
Since DateTime is a value type, the out parameter will never be null, even if parsing fails. You can, however, examine the return value couldParse to determine whether the candidate could be parsed.
In the running commentary in the book, Jeffrey Richter likes this much better:
"I like this guideline a lot. It solves the race-condition problem and the performance problem."

I agree that it's better than Tester-Doer, but that doesn't mean that you can't refactor such a design to that pattern.
From Try-Parse to Tester-Doer #
While I see no compelling reason to design parsing attempts with the Tester-Doer pattern, it's possible. You could create an API that enables interaction like this:
DateTime dateTime = default(DateTime);
bool canParse = DateTimeEnvy.CanParse(candidate);
if (canParse)
    dateTime = DateTime.Parse(candidate);
You'd need to add a new CanParse method with this signature:
public static bool CanParse(string candidate)
In this particular example, you don't have to add a Parse method, because it already exists in the base class library, but in other examples, you'd have to add such a method as well.
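One way to implement such a CanParse method would be to delegate to the existing TryParse method and discard the parsed value. This is a sketch, not a recommendation:

```csharp
// Delegates to the BCL's DateTime.TryParse and throws the value away.
// Note that a subsequent DateTime.Parse call repeats the parsing work.
public static bool CanParse(string candidate)
{
    DateTime dummy;
    return DateTime.TryParse(candidate, out dummy);
}
```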
This example doesn't suffer from issues with thread safety, since strings are immutable, but in general, that problem is always a concern with the Tester-Doer anti-pattern. Discoverability still suffers in this example.
From Try-Parse to Maybe #
While the Try-Parse idiom is thread-safe, it isn't composable. Every time you run into an API modelled over this template, you have to stop what you're doing and check the return value. Did the operation succeed? What should the code do if it didn't?
Maybe, on the other hand, is composable, so it's a much better way to model problems such as parsing. Typically, methods or functions that return Maybe values are still prefixed with Try, but there's no longer any out parameter. A Maybe-based TryParse function could look like this:
public static Maybe<DateTime> TryParse(string candidate)
You could use it like this:
Maybe<DateTime> m = DateTimeEnvy.TryParse(candidate);
If the candidate was successfully parsed, you get a populated Maybe<DateTime>; if the string was invalid, you get an empty Maybe<DateTime>.
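Such a function is easy to implement as a thin adapter over the BCL method. The following sketch again assumes the hypothetical Maybe<T> type with constructors for the empty and populated cases:

```csharp
// A sketch; adapts the BCL's Try-Parse method to a Maybe-based design.
public static Maybe<DateTime> TryParse(string candidate)
{
    DateTime dateTime;
    if (DateTime.TryParse(candidate, out dateTime))
        return new Maybe<DateTime>(dateTime);

    return new Maybe<DateTime>();
}
```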
A Maybe object composes much better with other computations. Contrary to the Try-Parse idiom, you don't have to stop and examine a Boolean return value. You don't even have to deal with empty cases at the point where you parse. Instead, you can defer the decision about what to do in case of failure until a later time, when it may be more obvious what to do in that case.
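As an example of that composability, assuming that the Maybe<T> type offers a Select method (a functor map) and a GetValueOrFallback method (both names are illustrative, not BCL members), you could keep working with the parsed value and only handle the empty case at the boundary:

```csharp
// Select and GetValueOrFallback are hypothetical members, used here
// for illustration only. No Boolean branching is required until the
// very end.
Maybe<string> formatted = DateTimeEnvy
    .TryParse(candidate)
    .Select(dt => dt.ToString("yyyy-MM-dd"));

// Only here, at the boundary, do you decide what 'empty' means:
string message = formatted.GetValueOrFallback("(no date)");
```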
Maybe #
In my Encapsulation and SOLID Pluralsight course, you get a walk-through of all three options for dealing with an operation that could potentially fail. Like in this article, the course starts with Tester-Doer, progresses over Try-Parse, and arrives at a Maybe-based implementation. In that course, the example involves reading a (previously stored) message from a text file. The final API looks like this:
public Maybe<string> Read(int id)
The protocol implied by such a signature is that you supply an ID, and if a message with that ID exists on disc, you receive a populated Maybe<string>; otherwise, an empty object. This is not only composable, but also thread-safe. For anyone who understands the universal abstraction of Maybe, it's clear that this is an operation that could fail. Ultimately, client code will have to deal with empty Maybe values, but this doesn't have to happen immediately. Such a decision can be deferred until a proper context exists for that purpose.
From Maybe to Tester-Doer #
Since Tester-Doer is the least useful of the patterns discussed in this article, it makes little sense to refactor a Maybe-based API to a Tester-Doer implementation. Nonetheless, it's still possible. The API could look like this:
public bool Exists(int id)
public string Read(int id)
Not only is this design not thread-safe, but it's another example of poor discoverability. While the doer is called Read, the tester isn't called CanRead, but rather Exists. If the class has other members, these could be listed interleaved between Exists and Read. It wouldn't be obvious that these two members were designed to be used together.
Again, the intended usage is code like this:
string message;
if (fileStore.Exists(49))
    message = fileStore.Read(49);
This is still problematic, because you need to decide what to do in the else case as well, although you don't see that case here.
The point is, still, that you can translate from one representation to another without loss of information; not that you should.
From Maybe to Try-Parse #
Of the three representations discussed in this article, I firmly believe that a Maybe-based API is superior. Unfortunately, the .NET base class library doesn't (yet) come with a built-in Maybe object, so if you're developing an API as part of a reusable library, you have two options:
- Export the library's Maybe<T> type together with the methods that return it.
- Use Try-Parse for interoperability reasons.

For the FileStore example from my Pluralsight course, the second option would imply not a TryParse method, but a TryRead method:
public bool TryRead(int id, out string message)
This would enable you to expose the method in a reusable library. Client code could interact with it like this:
string message;
if (!fileStore.TryRead(50, out message))
    message = "";
This has all the problems associated with the Try-Parse idiom already discussed in this article, but it does, at least, have a basic use case.
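Such a TryRead method can be implemented as a thin Adapter over the Maybe-based Read method. This sketch assumes a Church-encoded Maybe with a Match method, similar to the one shown later in this article:

```csharp
// A hypothetical Adapter; Match folds the Maybe value into the
// Try-Parse shape.
public bool TryRead(int id, out string message)
{
    var m = this.Read(id);
    message = m.Match(nothing: "", just: s => s);
    return m.Match(nothing: false, just: _ => true);
}
```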
Isomorphism with Either #
At this point, I hope that you find it reasonable to believe that the three representations, Tester-Doer, Try-Parse, and Maybe, are isomorphic. You can translate any of these representations into any of the others without loss of information. This also means that you can translate back again.
While I've only argued with a series of examples, it's my experience that these three representations are truly isomorphic. You can always translate any of these representations into another. Mostly, though, I translate into Maybe. If you disagree with my proposition, all you have to do is to provide a counter-example.
There's a fourth isomorphism that's already well-known, and that's between Maybe and Either. Specifically, Maybe<T> is isomorphic to Either<Unit, T>. In Haskell, this is easily demonstrated with this set of functions:
toMaybe :: Either () a -> Maybe a
toMaybe (Left ()) = Nothing
toMaybe (Right x) = Just x

fromMaybe :: Maybe a -> Either () a
fromMaybe Nothing = Left ()
fromMaybe (Just x) = Right x
Translated to C#, using the Church-encoded Maybe together with the Church-encoded Either, these two functions could look like the following, starting with the conversion from Maybe to Either:
// On Maybe:
public static IEither<Unit, T> ToEither<T>(this IMaybe<T> source)
{
    return source.Match<IEither<Unit, T>>(
        nothing: new Left<Unit, T>(Unit.Value),
        just: x => new Right<Unit, T>(x));
}
Likewise, the conversion from Either to Maybe:
// On Either:
public static IMaybe<T> ToMaybe<T>(this IEither<Unit, T> source)
{
    return source.Match<IMaybe<T>>(
        onLeft: _ => new Nothing<T>(),
        onRight: x => new Just<T>(x));
}
You can convert back and forth to your heart's content, as this parametrised xUnit.net 2.3.1 test shows:
[Theory]
[InlineData(42)]
[InlineData(1337)]
[InlineData(2112)]
[InlineData(90125)]
public void IsomorphicWithPopulatedMaybe(int i)
{
    var expected = new Right<Unit, int>(i);
    var actual = expected.ToMaybe().ToEither();
    Assert.Equal(expected, actual);
}
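A corresponding test could cover the empty case. This sketch assumes that the Church-encoded types implement structural equality, as the above Theory already implies:

```csharp
[Fact]
public void IsomorphicWithEmptyMaybe()
{
    // Left<Unit, int> corresponds to an empty Maybe<int>.
    var expected = new Left<Unit, int>(Unit.Value);
    var actual = expected.ToMaybe().ToEither();
    Assert.Equal(expected, actual);
}
```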
I decided to exclude IEither<Unit, T> from the overall theme of this article in order to better contrast three alternatives that may not otherwise look equivalent. That IEither<Unit, T> is isomorphic to IMaybe<T> is a well-known result. Besides, I think that both of these two representations already inhabit the same conceptual space. Either and Maybe are both well-known in statically typed functional programming.
Summary #
The Tester-Doer pattern is a decades-old design pattern that attempts to model how to perform operations that can potentially fail, without relying on exceptions for flow control. It predates mainstream multi-core processors by decades, which can explain why it even exists as a pattern in the first place. At the time people arrived at the pattern, thread-safety wasn't a big concern.
The Try-Parse idiom is a thread-safe alternative to the Tester-Doer pattern. It combines the tester and doer methods into a single method with an out parameter. While thread-safe, it's not composable.
Maybe offers the best of both worlds. It's both thread-safe and composable. It's also as discoverable as any Try-Parse method.
These three alternatives are all, however, isomorphic. This means that you can refactor any of the three designs into one of the other designs, without loss of information. It also means that you can implement Adapters between particular implementations, should you so desire. You see this frequently in F# code, where functions that return 'a option adapt Try-Parse methods from the .NET base class library.
While all three designs are equivalent in the sense that you can translate one into another, it doesn't imply that they're equally useful. Maybe is the superior design, and Tester-Doer clearly inferior.
Next: Church encoding.
Payment types catamorphism
You can find the catamorphism for a custom sum type. Here's an example.
This article is part of an article series about catamorphisms. A catamorphism is a universal abstraction that describes how to digest a data structure into a potentially more compact value.
This article presents the catamorphism for a domain-specific sum type, as well as how to identify it. The beginning of this article presents the catamorphism in C#, with a few examples. The rest of the article describes how to deduce the catamorphism. This part of the article presents my work in Haskell. Readers not comfortable with Haskell can just read the first part, and consider the rest of the article as an optional appendix.
In all previous articles in the series, you've seen catamorphisms for well-known data structures: Boolean values, Peano numbers, Maybe, trees, and so on. These are all general-purpose data structures, so you might be left with the impression that catamorphisms are only related to such general types. That's not the case. The point of this article is to demonstrate that you can find the catamorphism for your own custom, domain-specific sum type as well.
C# catamorphism #
The custom type we'll examine in this article is the Church-encoded payment types I've previously written about. It's just an example of a custom data type, but it serves the purpose of illustration because I've already shown it as a Church encoding in C#, as a Visitor in C#, and as a discriminated union in F#.
The catamorphism for the IPaymentType interface is the Match method:
T Match<T>(
    Func<PaymentService, T> individual,
    Func<PaymentService, T> parent,
    Func<ChildPaymentService, T> child);
As has turned out to be a common trait, the catamorphism is identical to the Church encoding.
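For reference, each class implementing IPaymentType calls exactly one of the three functions. A sketch of the Individual class, consistent with the Match signature above (see the Church-encoding article for the full implementation), could look like this:

```csharp
// A sketch; Individual selects the first of the three continuations.
public class Individual : IPaymentType
{
    private readonly PaymentService paymentService;

    public Individual(PaymentService paymentService)
    {
        this.paymentService = paymentService;
    }

    public T Match<T>(
        Func<PaymentService, T> individual,
        Func<PaymentService, T> parent,
        Func<ChildPaymentService, T> child)
    {
        return individual(this.paymentService);
    }
}
```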
I'm not going to show more than a few examples of using the Match method, because you can find other examples in the previous articles:
> IPaymentType p = new Individual(new PaymentService("Visa", "Pay"));
> p.Match(ps => ps.Name, ps => ps.Name, cps => cps.PaymentService.Name)
"Visa"
> IPaymentType p = new Parent(new PaymentService("Visa", "Pay"));
> p.Match(ps => ps.Name, ps => ps.Name, cps => cps.PaymentService.Name)
"Visa"
> IPaymentType p = new Child(new ChildPaymentService("1234", new PaymentService("Visa", "Pay")));
> p.Match(ps => ps.Name, ps => ps.Name, cps => cps.PaymentService.Name)
"Visa"
These three examples from a C# Interactive session demonstrate that no matter which payment method you use, you can use the same Match method call to extract the payment name from the p object.
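Match isn't limited to extracting names. As another sketch, you could use it to decide whether a payment starts a recurrent transaction, which only the parent case does:

```csharp
// Each case maps to a Boolean; only the Parent case starts a
// recurrent transaction.
bool startRecurrent = p.Match(
    individual: _ => false,
    parent: _ => true,
    child: _ => false);
```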
Payment types F-Algebra #
As in the previous article, I'll use Fix and cata as explained in Bartosz Milewski's excellent article on F-Algebras.
First, you'll have to define the auxiliary types involved in this API:
data PaymentService = PaymentService {
    paymentServiceName :: String
  , paymentServiceAction :: String
  } deriving (Show, Eq, Read)

data ChildPaymentService = ChildPaymentService {
    originalTransactionKey :: String
  , parentPaymentService :: PaymentService
  } deriving (Show, Eq, Read)
While F-Algebras and fixed points are mostly used for recursive data structures, you can also define an F-Algebra for a non-recursive data structure. You already saw examples of that in the articles about Boolean catamorphism, Maybe catamorphism, and Either catamorphism. While each of the three payment types has associated data, none of it is parametrically polymorphic, so a single type argument for the carrier type suffices:
data PaymentTypeF c =
    IndividualF PaymentService
  | ParentF PaymentService
  | ChildF ChildPaymentService
  deriving (Show, Eq, Read)

instance Functor PaymentTypeF where
  fmap _ (IndividualF ps) = IndividualF ps
  fmap _ (ParentF ps) = ParentF ps
  fmap _ (ChildF cps) = ChildF cps
I chose to call the carrier type c (for carrier). As was also the case with BoolF, MaybeF, and EitherF, the Functor instance ignores the map function because the carrier type is missing from all three cases. Like the Functor instances for BoolF, MaybeF, and EitherF, it'd seem that nothing happens, but at the type level, this is still a translation from PaymentTypeF c to PaymentTypeF c1. Not much of a function, perhaps, but definitely an endofunctor.
Some helper functions make it a little easier to create Fix PaymentTypeF values, but there's really not much to them:
individualF :: PaymentService -> Fix PaymentTypeF
individualF = Fix . IndividualF

parentF :: PaymentService -> Fix PaymentTypeF
parentF = Fix . ParentF

childF :: ChildPaymentService -> Fix PaymentTypeF
childF = Fix . ChildF
That's all you need to identify the catamorphism.
Haskell catamorphism #
At this point, you have two out of three elements of an F-Algebra. You have an endofunctor (PaymentTypeF), and an object c, but you still need to find a morphism PaymentTypeF c -> c.
As in the previous articles, start by writing a function that will become the catamorphism, based on cata:
paymentF = cata alg
  where alg (IndividualF ps) = undefined
        alg (ParentF ps) = undefined
        alg (ChildF cps) = undefined
While this compiles, with its undefined implementations, it obviously doesn't do anything useful. I find, however, that it helps me think. How can you return a value of the type c from the IndividualF case? You could pass an argument to the paymentF function, but you shouldn't ignore the data ps contained in the case, so it has to be a function:
paymentF fi = cata alg
  where alg (IndividualF ps) = fi ps
        alg (ParentF ps) = undefined
        alg (ChildF cps) = undefined
I chose to call the argument fi, for function, individual. You can pass a similar argument to deal with the ParentF case:
paymentF fi fp = cata alg
  where alg (IndividualF ps) = fi ps
        alg (ParentF ps) = fp ps
        alg (ChildF cps) = undefined
And of course with the remaining ChildF case as well:
paymentF :: (PaymentService -> c)
         -> (PaymentService -> c)
         -> (ChildPaymentService -> c)
         -> Fix PaymentTypeF
         -> c
paymentF fi fp fc = cata alg
  where alg (IndividualF ps) = fi ps
        alg (ParentF ps) = fp ps
        alg (ChildF cps) = fc cps
This works. Since cata has the type Functor f => (f a -> a) -> Fix f -> a, that means that alg has the type f a -> a. In the case of PaymentTypeF, the compiler infers that the alg function has the type PaymentTypeF c -> c, which is just what you need!
You can now see what the carrier type c is for. It's the type that the algebra extracts, and thus the type that the catamorphism returns.
This, then, is the catamorphism for the payment types. Except for the tree catamorphism, all catamorphisms so far have been pairs, but this one is a triplet of functions. This is because the sum type has three cases instead of two.
As you've seen repeatedly, this isn't the only possible catamorphism, since you can, for example, trivially reorder the arguments to paymentF. The version shown here is, however, equivalent to the above C# Match method.
Usage #
You can use the catamorphism as a basis for other functionality. If, for example, you want to convert a Fix PaymentTypeF value to JSON, you can first define an Aeson record type for that purpose:
data PaymentJson = PaymentJson {
    name :: String
  , action :: String
  , startRecurrent :: Bool
  , transactionKey :: Maybe String
  } deriving (Show, Eq, Generic)

instance ToJSON PaymentJson
Subsequently, you can use paymentF to implement a conversion from Fix PaymentTypeF to PaymentJson, as in the previous articles:
toJson :: Fix PaymentTypeF -> PaymentJson
toJson =
  paymentF
    (\(PaymentService n a) -> PaymentJson n a False Nothing)
    (\(PaymentService n a) -> PaymentJson n a True Nothing)
    (\(ChildPaymentService k (PaymentService n a)) -> PaymentJson n a False $ Just k)
Testing it in GHCi, it works as it's supposed to:
Prelude Data.Aeson B Payment> B.putStrLn $ encode $ toJson $ parentF $ PaymentService "Visa" "Pay"
{"transactionKey":null,"startRecurrent":true,"action":"Pay","name":"Visa"}
Clearly, it would have been easier to define the payment types shown here as a regular Haskell sum type and just use standard pattern matching, but the purpose of this article isn't to present useful code; the only purpose of the code here is to demonstrate how to identify the catamorphism for a custom domain-specific sum type.
Summary #
Even custom, domain-specific sum types have catamorphisms. This article presented the catamorphism for a custom payment sum type. Because this particular sum type has three cases, the catamorphism is a triplet, instead of a pair, which has otherwise been the most common shape of catamorphisms in previous articles.
Yes silver bullet
Since Fred Brooks published his essay, I believe that we, contrary to his prediction, have witnessed several silver bullets.
I've been rereading Fred Brooks's 1986 essay No Silver Bullet because I've become increasingly concerned that people seem to draw the wrong conclusions from it. Semantic diffusion seems to have set in. These days, when people state something along the lines that there's no silver bullet in software development, I often get the impression that they mean that there's no panacea.
Indeed; I agree. There's no miracle cure that will magically make all problems in software development go away. That's not what the essay states, however. It is, fortunately, more subtle than that.
No silver bullet reread #
It's a great essay. It's not my intent to dispute the central argument of the essay, but I think that Brooks made one particular assumption that I disagree with. That doesn't make me smarter in any way. He wrote the essay in 1986. I'm writing this in 2019, with the benefit of the experience of all the years in-between. Hindsight is 20-20, so anyone could make the observations that I do here.
Before we get to that, though, a brief summary of the essence of the essay is in order. In short, the conclusion is this:

"There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity."

The beginning of the essay is a brilliant analysis of the reasons why software development is inherently difficult. If you read this together with Jack Reeves's What Is Software Design? (available various places on the internet, or as an appendix in APPP), you'll probably agree that there's an inherent complexity to software development that no invention is likely to dispel.
Ostensibly in the tradition of Aristotle, Brooks distinguishes between essential and accidental complexity. This distinction is central to his argument, so it's worth discussing for a minute.
Software development problems are complex, i.e. made up of many interacting sub-problems. Some of that complexity is accidental. This doesn't imply randomness or sloppiness, but only that the complexity isn't inherent to the problem; that it's only the result of our (human) failure to achieve perfection.
If you imagine that you could whittle away all the accidental complexity, you'd ultimately reach a point where, in the words of Saint ExupĂ©ry, there is nothing more to remove. What's left is the essential complexity.
Brooks' conjecture is that a typical software development project comes with both essential and accidental complexity. In his 1995 reflections "No Silver Bullet" Refired (available in The Mythical Man-Month), he clarifies what he already implied in 1986:

"It is my opinion, and that is all, that the accidental or representational part of the work is now down to about half or less of the total."

This I fundamentally disagree with, but more on that later. It makes sense to me to graphically represent the argument like this:
The way that I think of Brooks' argument is that any software project contains some essential and some accidental complexity. For a given project, the size of the essential complexity is fixed.
Brooks believes that less than half of the overall complexity is accidental:
While a pie chart better illustrates the supposed ratio between the two types of complexity, I prefer to view Brooks' arguments as the first diagram, above. In that visualisation, the essential complexity is a core of fixed size, while accidental complexity is something you can work at removing. If you keep improving your process and technology, you may, conceptually, be able to remove (almost) all of it.
Brooks' point, with which I agree, is that if the essential complexity is inherent, then you can't reduce the size of it. The only way to decrease the overall complexity is to reduce the accidental complexity.
If you agree with the assessment that less than half of the overall complexity in modern software development is accidental, then it follows that no dramatic improvements are available. Even if you remove all accidental complexity, you've only reduced overall complexity by, say, forty percent.
Accidental complexity abounds #
I find Brooks' arguments compelling. I do not, however, accept the premise that there's only little accidental complexity left. Instead of the above diagrams, I believe that the situation looks more like this (not to scale):
I think that most of the complexity in software development is accidental. I'm not sure about today, but I believe that I have compelling evidence that this was the case in 1986, so I don't see why it shouldn't still be the case.
To be clear, this is all anecdotal, since I don't believe that software development is quantifiable. In the essay, Brooks explicitly talks about the invisibility of software. Software is pure thought stuff; you can't measure it. I discuss this in my Humane Code video, but I also recommend that you read The Leprechauns of Software Engineering if you have any illusions that we, as an industry, have any reliable measurements of productivity.
Brooks predicts that, within the decade (from 1986 to 1996), there would be no single development that would increase productivity by an order of magnitude, i.e. by a factor of at least ten. Ironically, when he wrote "No Silver Bullet" Refired in 1995, at least two such developments were already in motion.
We can't blame Brooks for not identifying those developments, because in 1995, their impact was not yet apparent. Again, hindsight is 20-20.
Neither of these two developments is purely technological, although technology plays a role. Notice, though, that Brooks' prediction included technology or management technique. It's in the interaction between technology and the human that the orders-of-magnitude developments emerged.
World Wide Web #
I have a dirty little secret. In the beginning of my programming career, I became quite the expert on a programming framework called Microsoft Commerce Server. In fact, I co-authored a chapter of Professional Commerce Server 2000 Programming, and in 2003 I received an MVP award as an acknowledgement of my work in the Commerce Server community (such as it were; it was mostly on Usenet).
The Commerce Server framework was a black box. This was long before Microsoft embraced open source, and while there was a bit of official documentation, it was superficial; it was mostly of the getting-started kind.
Over several years, I managed to figure out how the framework really worked, and thus, how one could extend it. This was a painstaking process. Since it was a black box, I couldn't just go and read the code to figure out how it worked. The framework was written in C++ and Visual Basic, so there wasn't even IL code to decompile.
I had one window into the framework. It relied on SQL Server, and I could attach the profiler tool to spy on its interaction with the database. Painstakingly, over several years, I managed to wrest the framework's secrets from it.
I wasted much time doing detective work like that.
In general, programming in the late nineties and early two-thousands was less productive, not because the languages or tools were orders-of-magnitude worse than today, but because when you hit a snag, you were in trouble.
These days, if you run into a problem beyond your abilities, you can ask for help on the World Wide Web. Usually, you'll find an existing answer on Stack Overflow, and you'll be able to proceed without too much delay.
Compared to twenty years ago, I believe that the World Wide Web has increased my productivity more than ten-fold. While it also existed in 1995, there wasn't much content. It's not the technology itself that provides the productivity increase, but rather the synergy of technology and human knowledge.
I think that Brooks vastly underestimated how much time one can waste when one is stuck. That's a sort of accidental complexity, although in the development process rather than in the technology itself.
Automated testing #
In the late nineties, I was developing web sites (with Commerce Server). When I wanted to run my code to see if it worked, I'd launch the web site on my laptop, log in, click around and enter data until I was convinced that the functionality was working as it should. Most of the time, however, it wasn't, so I'd change a bit of the code, and go through the same process again.
I think that's a common way to 'test' software; at least, it was back then.
While you could get good at going through these motions quickly, verifying a single, or a handful of related functionalities, could easily take at least a couple of seconds, and usually more like half a minute.
If you had dozens, or even hundreds, of different scenarios to address, you obviously wouldn't run through them all every time you changed the code. At the very best, you'd click your way through three or four usage scenarios that you thought were relevant to the change you'd made. Other functionality, declared done earlier, you simply considered unaffected.
Needless to say, regressions were regular occurrences.
In 2003 I discovered test-driven development, and through that, automated testing. While you can't directly compare unit tests with whole usage scenarios, I think it's fair to compare something like automated integration tests or user-scenario tests (whatever you want to call them) with manually clicking through an application.
Even an integration test, if written properly, can verify a scenario at least ten times faster than you can do it by hand. A more realistic estimate is probably a hundred times faster, or more.
Granted, you have to write the automated test as well, and I know that it's not always trivial. Still, once you have an automated test suite in place, you can run it all the time.
I never ran through all usage scenarios when I manually 'tested' my software. With automated tests, I do. This saves me from most regressions.
This improvement is, in my opinion, a no-brainer. It's easily a factor ten improvement. All the time wasted manually 'testing' the software, plus the time wasted fixing regressions, can be put to better use.
At the time Brooks was writing his own retrospective (in 1995), Kent Beck was beginning to talk to other people about test-driven development. As is a common theme in this article, hindsight is 20-20.
Honourable mentions #
There have been other improvements in software development since 1986. I considered including several of them as bona fide orders-of-magnitude improvements, but I think that would probably be going too far. Each of the following developments has, however, offered significant improvements:
- Git. It's surprising how much more productive Git can make you. While it's somewhat better than centralised source control systems at the functionality also available with those other systems, the productivity increase comes from all the new, unanticipated workflows it enables. Before I started using DVCS, I'd have lots of code that was commented out, so that I could experiment with various alternatives. With Git, I just create a new branch, or stash my changes, and experiment with abandon. While it's probably not a ten-fold increase in productivity, I believe it's the simplest technology change you can make to dramatically increase your productivity.
- Garbage collection. Since I've admitted that I worked with Microsoft Commerce Server, I've probably lost all credibility with my reader already, but let's see if I can win back a little. While Commerce Server programming involved VBScript programming, it also often involved COM programming, and I did quite a bit of that in C++. Having to make sure that you've cleaned up all memory after use is a bother. Garbage collection just makes this work go away. It's hardly a ten-fold improvement in productivity, but I do find it significant.
- Agile software development. The methodology of decreasing the feedback time between implementation and deployment has made me much more productive. I'm not interested in peddling any particular methodology like Scrum as much as just the general concept of getting rapid feedback. Particularly if you combine continuous delivery with Git, you have a powerful combination. Brooks already talked about incremental software development, and had some hopes attached to this as well. My personal experience can only agree with his sentiment. Again, probably not in itself a ten-fold increase in productivity, but enough that I wouldn't want to work on a project where rapid feedback and incremental development wasn't valued.
Personally, I'm inclined to believe another order-of-magnitude improvement is right at our feet.
Statically typed functional programming #
This section is conjecture on my part. The improvements I've so far covered are already realised (at least for those who choose to take advantage of them). The improvement I'll cover here is more speculative.
I believe that statically typed functional programming offers another order-of-magnitude improvement over existing software development. Twenty years ago, I believed that object-oriented programming was a good idea. I now believe that I was wrong about that, so it's possible that in another twenty years, I'll also believe that I was wrong about functional programming. Take the following for what it is.
When I carefully reread No Silver Bullet, I got the distinct impression that Brooks considered low-level details of programming part of its essential complexity:
"Much of the complexity in a software construct is, however, not due to conformity to the external world but rather to the implementation itself - its data structures, its algorithms, its connectivity."
It's unreasonable to blame anyone writing in 1986, or 1995 for that matter, for thinking that for
loops, variables, program state, and other such programming staples were anything but essential parts of the complexity of developing software.
Someone, unfortunately I forget who, once made the point that all mainstream programming languages are layers of abstractions of how a CPU works. Assembly language is basically just mnemonics on top of a CPU instruction set, then C can be thought of as an abstraction over assembly language, C++ as the next step in abstraction, Java and C# as sort of abstractions of C++, and so on. The origin of the design is the physical CPU. You could say that these languages are designed in a bottom-up fashion.
Some functional languages (perhaps most famously Haskell, but also APL, and, possibly, Lisp) are designed in a much more top-down fashion. You start with mathematical abstractions like category theory and then figure out how to crystallise the theory into a programming language, and then again, via more layers of abstractions, how to turn the abstract language into machine code.
The more you learn about the pure functional alternative to programming, the more you begin to see mutable program state, variables, for
loops, and similar language constructs merely as artefacts of the underlying model. Brooks, I think, thought of these as part of the essential complexity of programming. I don't think that that's the case. You can get by just fine with other abstractions instead.
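As a simple illustration (my example, not Brooks'), consider a running total. An imperative language would express it with a for loop and a mutable accumulator; a fold expresses the same computation with neither:

```haskell
-- A running total, written without a for loop or a mutable accumulator.
-- foldr digests the list; no program state changes anywhere.
total :: [Int] -> Int
total = foldr (+) 0

main :: IO ()
main = print (total [1, 2, 3, 4])  -- prints 10
```

The loop and the mutable variable turn out to be artefacts of one particular underlying model, not essential parts of the problem of summing numbers.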
Besides, Brooks writes, under the heading of Complexity:
"From the complexity comes the difficulty of enumerating, much less understanding, all the possible states of the program, and from that comes the unreliability. From the complexity of the functions comes the difficulty of invoking those functions, which makes programs hard to use."
When he writes functions, I don't think that he means functions in the Haskell sense. I think that he means operations, procedures, or methods.
Indeed, when you look at a C# method signature like the following, it's hard to enumerate, understand, or remember, all that it does:
int? TryAccept(Reservation reservation);
If this is a high-level function, many things could happen when you call that method. It could change the state of a database. It could send an email. It could mutate a variable. Not only that, but the behaviour could depend on non-deterministic factors, such as the date, time of day, or just raw randomness. Finally, how should you handle the return value? What does it mean if the return value is null? What if it's not? Is 0
a valid value? Are negative numbers valid? Are they different from positive values?
It is, indeed, difficult to enumerate all the possible states of such a function.
Consider, instead, a Haskell function with a type like this:
tryAccept :: Int -> Reservation -> MaybeT ReservationsProgram Int
What happens if you invoke this function? It returns a value. Does it send any emails? Does it mutate any state? No, it can't, because the static type informs us that this is a pure function. If any programmer, anywhere inside of the function, or the functions it calls, or functions they call, etc. tried to do something impure, it wouldn't have compiled.
Can we enumerate the states of the program? Certainly. We just have to figure out what ReservationsProgram
is. After following a few types, we find this statically typed enumeration:
data ReservationsInstruction next =
    IsReservationInFuture Reservation (Bool -> next)
  | ReadReservations UTCTime ([Reservation] -> next)
  | Create Reservation (Int -> next)
  deriving Functor
Essentially, there are three 'actions' that this type enables. The tryAccept
function returns the ReservationsProgram
inside of a MaybeT
container, so there's a fourth option that something short-circuits along the way.
You don't even have to keep track of this yourself. The compiler keeps you honest. Whenever you invoke the tryAccept
function, the compiler will insist that you write code that can handle all possible outcomes. If you turn on the right compiler flags, the code is not going to compile if you don't.
(Both code examples are taken from the same repository.)
Haskellers jokingly declare that if Haskell code compiles, it works. While humorous, there's a kernel of truth in that. An advanced type system can carry much information about the behaviour of a program. Some people, particularly programmers who come from a dynamically typed background, find Haskell's type system rigid. That's not an unreasonable criticism, but often, in dynamically typed languages, you have to write many automated tests to ensure that your program behaves as desired, and that it correctly handles various edge cases. A type system like Haskell's, on the other hand, embeds those rules in types instead of in tests.
While you should still write automated tests for Haskell programs, fewer are needed. How many fewer? Compared to C-based languages, a factor ten isn't an unreasonable guess.
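As a minimal sketch of what 'embedding rules in types' can look like (the Quantity type and mkQuantity function are hypothetical, not taken from any code base discussed here), consider a smart constructor that makes invalid values unrepresentable, so that downstream code needs no test for the invalid case:

```haskell
-- Hypothetical example: the rule 'a quantity must be positive' is encoded
-- in a type with a smart constructor, instead of in a unit test.
newtype Quantity = Quantity Int deriving (Show, Eq)

-- The only way to obtain a Quantity; invalid input is rejected up front,
-- so code receiving a Quantity never needs a test for non-positive values.
mkQuantity :: Int -> Maybe Quantity
mkQuantity n = if n > 0 then Just (Quantity n) else Nothing

main :: IO ()
main = do
  print (mkQuantity 4)     -- prints Just (Quantity 4)
  print (mkQuantity (-1))  -- prints Nothing
```

In a real module you'd refrain from exporting the Quantity data constructor, so that mkQuantity is the only way to produce a value of the type.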
After a few false starts, in 2014 I finally decided that F# would be my default choice of language on .NET. The reason for that decision was that I felt so much more productive in F# compared to C#. While F#'s type system doesn't embed information about pure versus impure functions, it does support sum types, which is what enables the sort of compile-time enumeration that Brooks discusses.
F# is still my .NET language of choice, but I find that I mostly 'think in' Haskell these days. My conjecture is that a sufficiently advanced type system (like Haskell's) could easily represent another order-of-magnitude improvement over mainstream imperative languages.
Improvements for those who want them #
The essay No Silver Bullet is a perspicacious work. I think more people should read at least the first part, where Brooks explains why software development is hard. I find that analysis brilliant, and I agree: software development presupposes essential complexity. It's inherently hard.
There's no reason to make it harder than it has to be, though.
More than once, I've discussed productivity improvements with people, only to be met with the dismissal that 'there's no silver bullet'.
Granted, there's no magical solution that will solve all problems with software development, but that doesn't mean that improvements can't be had.
Consider the improvements I've argued for here. Everyone now uses the World Wide Web and sites like Stack Overflow for research; that particular improvement is firmly embedded in all organisations. On the other hand, I still regularly talk to organisations that don't routinely use automated testing.
People still use centralised version control (like TFS or SVN). If there was ever a low-hanging fruit, changing to Git is one. Git is free, and there's plenty of tools you can use to migrate your version history to it. There's also plenty of training and help to be had. Yes, it'll require a small investment to make the change, but the productivity increase is significant.
"The future is already here — it's just not very evenly distributed."
So it is with technology improvements. Automated testing is available, but not ubiquitous. Git is free, but organisations still stick to suboptimal version control. Haskell and F# are mature languages, yet programmers still program in C# or Java.
Summary #
The essay No Silver Bullet was written in 1986, but seems to me to be increasingly misunderstood. When people today talk about it at all, it's mostly as an excuse to stay where they are. "There's no silver bullet," they'll say.
The essay, however, doesn't argue that no improvements can be had. It only argues that no more order-of-magnitude improvements can be had.
In the present essay I argue that, since Brooks wrote No Silver Bullet, more than one such improvement happened. Once the World Wide Web truly began furnishing information at your fingertips, you could be more productive because you wouldn't be stuck for days or weeks. Automated testing reduces the work that manual testers used to perform, as well as limiting regressions.
If you accept my argument, that order-of-magnitude improvements appeared after 1986, this implies that Brooks' premise was wrong. In that case, there's no reason to believe that we've seen the last significant improvement to software development.
I think that more such improvements await us. I suggest that statically typed functional programming offers such an advance, but if history teaches us anything, it seems that breakthroughs tend to be unpredictable.
Full binary tree catamorphism
The catamorphism for a full binary tree is a pair of functions.
This article is part of an article series about catamorphisms. A catamorphism is a universal abstraction that describes how to digest a data structure into a potentially more compact value.
This article presents the catamorphism for a full binary tree, as well as how to identify it. The beginning of this article presents the catamorphism in C#, with examples. The rest of the article describes how to deduce the catamorphism. This part of the article presents my work in Haskell. Readers not comfortable with Haskell can just read the first part, and consider the rest of the article as an optional appendix.
A full binary tree (also known as a proper or plane binary tree) is a tree in which each node has either two or no branches.
The diagram shows an example of a tree of integers. The left branch contains two children, of which the right branch again contains two sub-branches. The rest of the nodes are leaf-nodes with no sub-branches.
C# catamorphism #
As a C# representation of a full binary tree, I'll start with the IBinaryTree<T>
API from A Visitor functor. The catamorphism is the Accept
method:
TResult Accept<TResult>(IBinaryTreeVisitor<T, TResult> visitor);
So far in this article series, you've mostly seen Church-encoded catamorphisms, so a catamorphism represented as a Visitor may be too big of a cognitive leap. We know, however, from Visitor as a sum type that a Visitor representation is isomorphic to a Church encoding. Since these are isomorphic, it's possible to refactor IBinaryTree<T>
to a Church encoding. The GitHub repository contains a series of commits that demonstrates how that refactoring works. Once you're done, you arrive at this Match
method, which is the refactored Accept
method:
TResult Match<TResult>(Func<TResult, T, TResult, TResult> node, Func<T, TResult> leaf);
This method takes a pair of functions as arguments. The node
function deals with an internal node in the tree (the blue nodes in the above diagram), whereas the leaf
function deals with the leaf nodes (the green nodes in the diagram).
The leaf
function may be the easiest one to understand. A leaf node only contains a value of the type T
, so the only operation the function has to support is translating the T
value to a TResult
value. This is also the premise of the Leaf
class' implementation of the method:
private readonly T item;

public TResult Match<TResult>(Func<TResult, T, TResult, TResult> node, Func<T, TResult> leaf)
{
    return leaf(item);
}
The node
function is more tricky. It takes three input arguments, of the types TResult
, T
, and TResult
. The roles of these are respectively left, item, and right. This is a typical representation of a binary node. Since there's always a left and a right branch, you put the node's value in the middle. As was the case with the tree catamorphism, the catamorphism function receives the branches as already-translated values; that is, both the left and right branch have already been translated to TResult
when node
is called. While it looks like magic, as always it's just the result of recursion:
private readonly IBinaryTree<T> left;
private readonly T item;
private readonly IBinaryTree<T> right;

public TResult Match<TResult>(Func<TResult, T, TResult, TResult> node, Func<T, TResult> leaf)
{
    return node(left.Match(node, leaf), item, right.Match(node, leaf));
}
This is the Node<T>
class implementation of the Match
method. It calls node
and returns whatever it returns, but notice that as the left
and right
arguments, it first, recursively, calls left.Match
and right.Match
. This is how it can call node
with the translated branches, as well as with the basic item
.
The recursion stops and unwinds on left
and right
whenever one of them is a Leaf
instance.
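If you're more comfortable reading Haskell than C#, the same recursion scheme can be sketched as a stand-alone function (match and BinaryTree are ad-hoc names for this sketch, not part of the article's repository):

```haskell
-- Ad-hoc Haskell sketch of the same structure: match receives one function
-- per case, and recurses into both branches before applying node.
data BinaryTree a = Leaf a | Node (BinaryTree a) a (BinaryTree a)

match :: (r -> a -> r -> r) -> (a -> r) -> BinaryTree a -> r
match _    leaf (Leaf x)     = leaf x
match node leaf (Node l x r) =
  node (match node leaf l) x (match node leaf r)

main :: IO ()
main = print (match (\l x r -> l + x + r) id (Node (Leaf 1) 2 (Leaf 3)))
-- prints 6
```

The recursive calls on both branches happen before node is applied, which is why node always receives already-translated values.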
Examples #
You can use Match
to implement most other behaviour you'd like IBinaryTree<T>
to have. In the original article on the full binary tree functor you saw how to implement Select
with a Visitor, but now that the API is Church-encoded, you can derive Select
from Match
:
public static IBinaryTree<TResult> Select<TResult, T>(
    this IBinaryTree<T> tree,
    Func<T, TResult> selector)
{
    if (tree == null)
        throw new ArgumentNullException(nameof(tree));
    if (selector == null)
        throw new ArgumentNullException(nameof(selector));

    return tree.Match(
        node: (l, x, r) => Create(l, selector(x), r),
        leaf: x => Leaf(selector(x)));
}
In the leaf
case, the Select
method simply calls selector
with the x
value it receives, and puts the resulting TResult
object into a new Leaf
object.
In the node
case, the lambda expression receives three arguments: l
and r
are the already-translated left and right branches, so you only need to call selector
on x
and call the Create
helper method to produce a new Node
object.
You can also implement more specialised functionality, like calculating the sum of nodes, measuring the depth of the tree, and similar functions. You saw equivalent examples in the previous article.
For the examples in this article, I'll use the tree shown in the above diagram. Using static helper methods, you can write it like this:
var tree = BinaryTree.Create(
    BinaryTree.Create(
        BinaryTree.Leaf(42),
        1337,
        BinaryTree.Create(
            BinaryTree.Leaf(2112),
            5040,
            BinaryTree.Leaf(1984))),
    2,
    BinaryTree.Leaf(90125));
To calculate the sum of all nodes, you can write a function like this:
public static int Sum(this IBinaryTree<int> tree)
{
    return tree.Match((l, x, r) => l + x + r, x => x);
}
The leaf
function just returns the value of the node, while the node
function adds the numbers together. It works for the above tree
:
> tree.Sum()
100642
To find the maximum value, you can write another extension method:
public static int Max(this IBinaryTree<int> tree)
{
    return tree.Match((l, x, r) => Math.Max(Math.Max(l, r), x), x => x);
}
Again, the leaf
function just returns the value of the node. The node
function receives the value of the current node x
, as well as the already-found maximum value of the left branch and the right branch; it then returns the maximum of these three values:
> tree.Max()
90125
As was also the case for trees, both of these operations are part of the standard repertoire available via a data structure's fold. That's not the case for the next two functions, which can't be implemented using a fold, but which can be defined with the catamorphism. The first is a function to count the leaves of a tree:
public static int CountLeaves<T>(this IBinaryTree<T> tree)
{
    return tree.Match((l, _, r) => l + r, _ => 1);
}
Since the leaf
function handles a leaf node, the number of leaf nodes in a leaf node is, by definition, one. Thus, that function can ignore the value of the node and always return 1
. The node
function, on the other hand, receives the number of leaf nodes on the left-hand side (l
), the value of the current node, and the number of leaf nodes on the right-hand side (r
). Notice that since an internal node is never a leaf node, it doesn't count; instead, just add l
and r
together. Notice that, again, the value of the node itself is irrelevant.
How many leaf nodes does the above tree have?
> tree.CountLeaves()
4
You can also measure the maximum depth of a tree:
public static int MeasureDepth<T>(this IBinaryTree<T> tree)
{
    return tree.Match((l, _, r) => 1 + Math.Max(l, r), _ => 0);
}
Like in the previous article, I've arbitrarily decided that the depth of a leaf node is zero; therefore, the leaf
function always returns 0
. The node
function receives the depth of the left and right branches, and returns the maximum of those two values, plus one, since the current node adds one level of depth.
> tree.MeasureDepth()
3
You may not have much need for working with full binary trees in your normal, day-to-day C# work, but I found it worthwhile to include this example for a couple of reasons. First, because the origin of the API shows that a catamorphism may be hiding in a Visitor. Second, because binary trees are interesting, in that they're foldable functors, but not monads.
Where does the catamorphism come from, though? How can you trust that the Match
method is the catamorphism?
Binary tree F-Algebra #
As in the previous article, I'll use Fix
and cata
as explained in Bartosz Milewski's excellent article on F-Algebras.
As always, start with the underlying endofunctor. You can think of this one as a specialisation of the rose tree from the previous article:
data FullBinaryTreeF a c = LeafF a | NodeF c a c deriving (Show, Eq, Read)

instance Functor (FullBinaryTreeF a) where
  fmap _ (LeafF x) = LeafF x
  fmap f (NodeF l x r) = NodeF (f l) x (f r)
As usual, I've called the 'data' type a
and the carrier type c
(for carrier). The Functor
instance as usual translates the carrier type; the fmap
function has the type (c -> c1) -> FullBinaryTreeF a c -> FullBinaryTreeF a c1
.
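To see that fmap translates only the carrier-type positions, here's a small, self-contained demonstration (repeating the definitions from above so that it can run on its own):

```haskell
-- Self-contained demonstration (definitions repeated from the article):
-- fmap translates the carrier-type positions and leaves the node value alone.
data FullBinaryTreeF a c = LeafF a | NodeF c a c deriving (Show, Eq, Read)

instance Functor (FullBinaryTreeF a) where
  fmap _ (LeafF x) = LeafF x
  fmap f (NodeF l x r) = NodeF (f l) x (f r)

main :: IO ()
main = do
  print (fmap (+ 1) (NodeF 10 "x" 20))                          -- NodeF 11 "x" 21
  print (fmap (+ 1) (LeafF "y" :: FullBinaryTreeF String Int))  -- LeafF "y"
```

Notice that the leaf value and the internal node's own value are untouched; only the branches (the carrier positions) change.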
As was the case when deducing the recent catamorphisms, Haskell isn't too happy about defining instances for a type like Fix (FullBinaryTreeF a)
. To address that problem, you can introduce a newtype
wrapper:
newtype FullBinaryTreeFix a =
  FullBinaryTreeFix { unFullBinaryTreeFix :: Fix (FullBinaryTreeF a) }
  deriving (Show, Eq, Read)
You can define Functor
, Foldable
, and Traversable
instances (but not Monad
) for this type without resorting to any funky GHC extensions. Keep in mind that ultimately, the purpose of all this code is just to figure out what the catamorphism looks like. This code isn't intended for actual use.
A pair of helper functions make it easier to define FullBinaryTreeFix
values:
fbtLeafF :: a -> FullBinaryTreeFix a
fbtLeafF = FullBinaryTreeFix . Fix . LeafF

fbtNodeF :: FullBinaryTreeFix a -> a -> FullBinaryTreeFix a -> FullBinaryTreeFix a
fbtNodeF (FullBinaryTreeFix l) x (FullBinaryTreeFix r) =
  FullBinaryTreeFix $ Fix $ NodeF l x r
In order to distinguish these helper functions from the ones that create TreeFix a
values, I prefixed them with fbt
(for Full Binary Tree). fbtLeafF
creates a leaf node:
Prelude Fix FullBinaryTree> fbtLeafF "fnaah"
FullBinaryTreeFix {unFullBinaryTreeFix = Fix (LeafF "fnaah")}
fbtNodeF
is a helper function to create an internal node:
Prelude Fix FullBinaryTree> fbtNodeF (fbtLeafF 1337) 42 (fbtLeafF 2112)
FullBinaryTreeFix {unFullBinaryTreeFix = Fix (NodeF (Fix (LeafF 1337)) 42 (Fix (LeafF 2112)))}
The FullBinaryTreeFix
type, or rather the underlying FullBinaryTreeF a
functor, is all you need to identify the catamorphism.
Haskell catamorphism #
At this point, you have two out of three elements of an F-Algebra. You have an endofunctor (FullBinaryTreeF a
), and an object c
, but you still need to find a morphism FullBinaryTreeF a c -> c
. Notice that the algebra you have to find is the function that reduces the functor to its carrier type c
, not the 'data type' a
. This takes some time to get used to, but that's how catamorphisms work. This doesn't mean, however, that you get to ignore a
, as you'll see.
As in the previous articles, start by writing a function that will become the catamorphism, based on cata
:
fullBinaryTreeF = cata alg . unFullBinaryTreeFix
  where alg (LeafF x) = undefined
        alg (NodeF l x r) = undefined
While this compiles, with its undefined
implementation of alg
, it obviously doesn't do anything useful. I find, however, that it helps me think. How can you return a value of the type c
from alg
? You could pass a function argument to the fullBinaryTreeF
function and use it with x
:
fullBinaryTreeF fl = cata alg . unFullBinaryTreeFix
  where alg (LeafF x) = fl x
        alg (NodeF l x r) = undefined
I called the function fl
for function, leaf, because we're also going to need a function for the NodeF
case:
fullBinaryTreeF :: (c -> a -> c -> c) -> (a -> c) -> FullBinaryTreeFix a -> c
fullBinaryTreeF fn fl = cata alg . unFullBinaryTreeFix
  where alg (LeafF x) = fl x
        alg (NodeF l x r) = fn l x r
This works. Since cata
has the type Functor f => (f a -> a) -> Fix f -> a
, that means that alg
has the type f a -> a
. In the case of FullBinaryTreeF
, the compiler infers that the alg
function has the type FullBinaryTreeF a c -> c
, which is just what you need!
You can now see what the carrier type c
is for. It's the type that the algebra extracts, and thus the type that the catamorphism returns.
This, then, is the catamorphism for a full binary tree. As always, it's not the only possible catamorphism, since you can easily reorder the arguments to fullBinaryTreeF
, fn
, and fl
. These would all be isomorphic, though.
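To see the derived function at work, here's a self-contained usage sketch that repeats the article's definitions together with minimal Fix and cata implementations (simplified from Bartosz Milewski's article):

```haskell
-- Self-contained sketch: minimal Fix and cata (simplified from Bartosz
-- Milewski's F-Algebra article), the article's functor, and the derived
-- catamorphism, applied to a tiny tree.
newtype Fix f = Fix (f (Fix f))

cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (Fix x) = alg (fmap (cata alg) x)

data FullBinaryTreeF a c = LeafF a | NodeF c a c

instance Functor (FullBinaryTreeF a) where
  fmap _ (LeafF x) = LeafF x
  fmap f (NodeF l x r) = NodeF (f l) x (f r)

newtype FullBinaryTreeFix a =
  FullBinaryTreeFix { unFullBinaryTreeFix :: Fix (FullBinaryTreeF a) }

fbtLeafF :: a -> FullBinaryTreeFix a
fbtLeafF = FullBinaryTreeFix . Fix . LeafF

fbtNodeF :: FullBinaryTreeFix a -> a -> FullBinaryTreeFix a -> FullBinaryTreeFix a
fbtNodeF (FullBinaryTreeFix l) x (FullBinaryTreeFix r) =
  FullBinaryTreeFix (Fix (NodeF l x r))

fullBinaryTreeF :: (c -> a -> c -> c) -> (a -> c) -> FullBinaryTreeFix a -> c
fullBinaryTreeF fn fl = cata alg . unFullBinaryTreeFix
  where alg (LeafF x) = fl x
        alg (NodeF l x r) = fn l x r

main :: IO ()
main = print (fullBinaryTreeF (\l x r -> l + x + r) id
                (fbtNodeF (fbtLeafF 1) 2 (fbtLeafF 3)))  -- prints 6
```

Summing the nodes is just one algebra; any pair of functions with the right types will do.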
Basis #
You can implement most other useful functionality with fullBinaryTreeF
. Here's the Functor
instance:
instance Functor FullBinaryTreeFix where
  fmap f = fullBinaryTreeF (\l x r -> fbtNodeF l (f x) r) (fbtLeafF . f)
The fl
function first invokes f
, followed by fbtLeafF
. The fn
function uses the fbtNodeF
helper function to create a new internal node. l
and r
are already-translated branches, so you just need to call f
with the node value x
.
There's no Monad
instance for binary trees, because you can't flatten a binary tree of binary trees. You can, on the other hand, define a Foldable
instance:
instance Foldable FullBinaryTreeFix where
  foldMap f = fullBinaryTreeF (\l x r -> l <> f x <> r) f
The f
function passed to foldMap
has the type Monoid m => (a -> m)
, so the fl
function that handles leaf nodes simply calls f
with the contents of the node. The fn
function receives two branches already translated to m
, so it just has to call f
with x
and combine all the m
values using the <>
operator.
The Traversable
instance follows right on the heels of Foldable
:
instance Traversable FullBinaryTreeFix where
  sequenceA = fullBinaryTreeF (liftA3 fbtNodeF) (fmap fbtLeafF)
There are operations on binary trees that you can implement with a fold, but some that you can't. Consider the tree shown in the diagram at the beginning of the article. This is also the tree that the above C# examples use. In Haskell, using FullBinaryTreeFix
, you can define that tree like this:
tree =
  fbtNodeF
    (fbtNodeF
      (fbtLeafF 42)
      1337
      (fbtNodeF (fbtLeafF 2112) 5040 (fbtLeafF 1984)))
    2
    (fbtLeafF 90125)
Since FullBinaryTreeFix
is Foldable
, and that type class already comes with sum
and maximum
functions, no further work is required to repeat the first two of the above C# examples:
Prelude Fix FullBinaryTree> sum tree
100642
Prelude Fix FullBinaryTree> maximum tree
90125
Counting leaves, or measuring the depth of a tree, on the other hand, is impossible with the Foldable
instance, but can be implemented using the catamorphism:
countLeaves :: Num n => FullBinaryTreeFix a -> n
countLeaves = fullBinaryTreeF (\l _ r -> l + r) (const 1)

treeDepth :: (Ord n, Num n) => FullBinaryTreeFix a -> n
treeDepth = fullBinaryTreeF (\l _ r -> 1 + max l r) (const 0)
The reasoning is the same as already explained in the above C# examples. The functions also produce the same results:
Prelude Fix FullBinaryTree> countLeaves tree
4
Prelude Fix FullBinaryTree> treeDepth tree
3
This, hopefully, illustrates that the catamorphism is more capable, and that the fold is just a (list-biased) specialisation.
Summary #
The catamorphism for a full binary tree is a pair of functions. One function handles internal nodes, while the other function handles leaf nodes.
I thought it was interesting to show this example for two reasons: First, the original example was a Visitor implementation, and I think it's worth realising that a Visitor's Accept
method can also be viewed as a catamorphism. Second, a binary tree is an example of a data structure that has a fold, but isn't a monad.
All articles in the article series have, so far, covered data structures well-known from computer science. The next example will, on the other hand, demonstrate that even completely ad-hoc domain-specific data structures have catamorphisms.
Next: Payment types catamorphism.
Composition Root location
A Composition Root should be located near the point where user code first executes.
Prompted by a recent Internet discussion, my DIPPP co-author Steven van Deursen wrote to me in order to help clarify the Composition Root pattern.
In the email, Steven ponders whether it's defensible to use an API that looks like a Service Locator from within a unit test. He specifically calls out my article that describes the Auto-mocking Container design pattern.
In that article, I show how to use Castle Windsor's Resolve
method from within a unit test:
[Fact]
public void SutIsController()
{
    var container = new WindsorContainer().Install(new ShopFixture());

    var sut = container.Resolve<BasketController>();

    Assert.IsAssignableFrom<IHttpController>(sut);
}
Is the test using a Service Locator? If so, why is that okay? If not, why isn't it a Service Locator?
This article argues that this use of Resolve
isn't a Service Locator.
Entry points defined #
The original article about the Composition Root pattern defines a Composition Root as the place where you compose your object graph(s). It repeatedly describes how this ought to happen in, or as close as possible to, the application's entry point. I believe that this definition is compatible with the pattern description given in our book.
I do realise, however, that we may never have explicitly defined what an entry point is.
In order to do so, it may be helpful to establish a bit of terminology. In the following, I'll use the terms user code as opposed to framework code.
Much of the code you write probably runs within some sort of framework. If you're writing a web application, you're probably using a web framework. If you're writing a message-based application, you might be using some message bus, or actor, framework. If you're writing an app for a mobile device, you're probably using some sort of framework for that, too.
Even as a programmer, you're a user of frameworks.
As I usually do, I'll use Tomas Petricek's distinction between libraries and frameworks. A library is a collection of APIs that you can call. A framework is a software system that calls your code.
The reality is often more complex, as illustrated by the figure. While a framework will call your code, you can also invoke APIs afforded by the framework.
The point, however, is that user code is code that you write, while framework code is code that someone else wrote to develop the framework. The framework starts up first, and at some point in its lifetime, it calls your code.
Definition: The entry point is the user code that the framework calls first.
As an example, in ASP.NET Core, the (conventional) Startup
class is the first user code that the framework calls. (If you follow Tomas Petricek's definition to the letter, ASP.NET Core isn't a framework, but a library, because you have to write a Main
method and call WebHost.CreateDefaultBuilder(args).UseStartup<Startup>().Build().Run()
. In reality, though, you're supposed to configure the application from your Startup
class, making it the de facto entry point.)
Unit testing endpoints #
Most .NET-based unit testing packages are frameworks. There's typically little explicit configuration. Instead, you just write a method and adorn it with an attribute:
[Fact]
public async Task ReservationSucceeds()
{
    var repo = new FakeReservationsRepository();
    var sut = new ReservationsController(10, repo);
    var reservation = new Reservation(
        date: new DateTimeOffset(2018, 8, 13, 16, 53, 0, TimeSpan.FromHours(2)),
        email: "mark@example.com",
        name: "Mark Seemann",
        quantity: 4);

    var actual = await sut.Post(reservation);

    Assert.True(repo.Contains(reservation.Accept()));
    var expectedId = repo.GetId(reservation.Accept());
    var ok = Assert.IsAssignableFrom<OkActionResult>(actual);
    Assert.Equal(expectedId, ok.Value);
}

[Fact]
public async Task ReservationFails()
{
    var repo = new FakeReservationsRepository();
    var sut = new ReservationsController(10, repo);
    var reservation = new Reservation(
        date: new DateTimeOffset(2018, 8, 13, 16, 53, 0, TimeSpan.FromHours(2)),
        email: "mark@example.com",
        name: "Mark Seemann",
        quantity: 11);

    var actual = await sut.Post(reservation);

    Assert.False(reservation.IsAccepted);
    Assert.False(repo.Contains(reservation));
    Assert.IsAssignableFrom<InternalServerErrorActionResult>(actual);
}
With xUnit.net, the attribute is called [Fact]
, but the principle is the same in NUnit and MSTest; only the names are different.
Where's the entry point?
Each test is its own entry point. The test is (typically) the first user code that the test runner executes. Furthermore, each test runs independently of any other.
For the sake of argument, you could write each test case in a new application, and run all your test applications in parallel. It would be impractical, but it oughtn't change the way you organise the tests. Each test method is, conceptually, a mini-application.
A test method is its own Composition Root; or, more generally, each test has its own Composition Root. In fact, xUnit.net has various extensibility points that enable you to hook into the framework before each test method executes. You can, for example, combine a [Theory]
attribute with a custom AutoDataAttribute
, or you can adorn your tests with a BeforeAfterTestAttribute
. This doesn't change that the test runner will run each test case independently of all the other tests. Those pre-execution hooks play the same role as middleware in real applications.
You can, therefore, consider the Arrange phase the Composition Root for each test.
Thus, I don't consider the use of an Auto-mocking Container to be a Service Locator, since its role is to resolve object graphs at the entry point instead of locating services from arbitrary locations in the code base.
Summary #
A Composition Root is located at, or near, the entry point. An entry point is where user code is first executed by a framework. Each unit test method constitutes a separate, independent entry point. Therefore, it's consistent with these definitions to use an Auto-mocking Container in a unit test.
Tree catamorphism
The catamorphism for a tree is just a single function with a particular type.
This article is part of an article series about catamorphisms. A catamorphism is a universal abstraction that describes how to digest a data structure into a potentially more compact value.
This article presents the catamorphism for a tree, as well as how to identify it. The beginning of this article presents the catamorphism in C#, with examples. The rest of the article describes how to deduce the catamorphism. This part of the article presents my work in Haskell. Readers not comfortable with Haskell can just read the first part, and consider the rest of the article as an optional appendix.
A tree is a general-purpose data structure where each node in a tree has an associated value. Each node can have an arbitrary number of branches, including none.
The diagram shows an example of a tree of integers. The left branch contains a sub-tree with only a single branch, whereas the right branch contains a sub-tree with three branches. Each of the leaf nodes is a tree in its own right, but they all have zero branches.
In this example, each branch at the 'same level' has the same depth, but this isn't required.
C# catamorphism #
As a C# representation of a tree, I'll use the Tree<T>
class from A Tree functor. The catamorphism is this instance method on Tree<T>
:
public TResult Cata<TResult>(Func<T, IReadOnlyCollection<TResult>, TResult> func)
{
    return func(Item, children.Select(c => c.Cata(func)).ToArray());
}
Contrary to previous articles, I didn't call this method Match
, but simply Cata
(for catamorphism). The reason is that those other methods are called Match
for a particular reason. The data structures for which they are catamorphisms are all Church-encoded sum types. For those types, the Match
methods enable a syntax similar to pattern matching in F#. That's not the case for Tree<T>
. It's not a sum type, and it isn't Church-encoded.
The method takes a single function as an input argument. This is the first catamorphism in this article series that isn't made up of a pair of some sort. The Boolean catamorphism is a pair of values, the Maybe catamorphism is a pair made up of a value and a function, and the Either catamorphism is a pair of functions. The tree catamorphism, in contrast, is just a single function.
The first argument to the function is a value of the type T
. This will be an Item
value. The second argument to the function is a finite collection of TResult
values. This may take a little getting used to, but it's a collection of already-reduced sub-trees. When you supply such a function to Cata
, that function must return a single value of the type TResult
. Thus, the function must be able to digest a finite collection of TResult
values, as well as a T
value, to a single TResult
value.
The Cata
method accomplishes this by calling func
with the current Item
, as well as by recursively applying itself to each of the sub-trees. Eventually, Cata
will recurse into leaf nodes, which means that children
will be empty. When that happens, the lambda expression inside children.Select
never runs, and recursion stops and unwinds.
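If you want to play with this mechanism in isolation, here's a minimal, self-contained Haskell sketch of the same idea. The Rose type and names are mine, not the article's Tree&lt;T&gt; class:

```haskell
-- A plain rose tree: a node value plus any number of sub-trees.
data Rose a = Rose a [Rose a]

-- The catamorphism: a single function that digests the node value
-- together with the already-reduced sub-trees.
cataRose :: (a -> [r] -> r) -> Rose a -> r
cataRose f (Rose x ts) = f x (map (cataRose f) ts)

-- At a leaf the sub-tree list is empty, so the recursion stops there,
-- just like the empty 'children' in the C# implementation.
sumRose :: Rose Int -> Int
sumRose = cataRose (\x rs -> x + sum rs)

main :: IO ()
main = print (sumRose (Rose 42 [Rose 1337 [Rose (-3) []], Rose 7 []]))
```

Running main prints 1383: the recursion bottoms out at the leaves, where the list of reduced sub-trees is empty.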
Examples #
You can use Cata
to implement most other behaviour you'd like Tree<T>
to have. In the original article on the Tree functor you saw an ad-hoc implementation of Select
, but instead, you can derive Select
from Cata
:
public Tree<TResult> Select<TResult>(Func<T, TResult> selector)
{
    return Cata<Tree<TResult>>((x, nodes) => new Tree<TResult>(selector(x), nodes));
}
The lambda expression receives x
, an object of the type T
, as well as nodes
, which is a finite collection of already translated sub-trees. It simply translates x
with selector
and returns a new Tree<TResult>
with the translated value and the already translated nodes
.
This works just as well as the ad-hoc implementation; it passes all the same tests as shown in the previous article.
If you have a tree of numbers, you can add them all together:
public static int Sum(this Tree<int> tree)
{
    return tree.Cata<int>((x, xs) => x + xs.Sum());
}
This uses the built-in Sum method for IEnumerable<T>
to add all the partly calculated sub-trees together, and then adds the value of the current node. In this and remaining examples, I'll use the tree shown in the above diagram:
Tree<int> tree =
    Tree.Create(42,
        Tree.Create(1337,
            Tree.Leaf(-3)),
        Tree.Create(7,
            Tree.Leaf(-99),
            Tree.Leaf(100),
            Tree.Leaf(0)));
You can now calculate the sum of all these nodes:
> tree.Sum()
1384
Another option is to find the maximum value anywhere in a tree:
public static int Max(this Tree<int> tree)
{
    return tree.Cata<int>((x, xs) => xs.Any() ? Math.Max(x, xs.Max()) : x);
}
This method again utilises one of the LINQ methods available via the .NET base class library: Max. It is, however, necessary to first check whether the partially reduced xs
is empty or not, because the Max
extension method on IEnumerable<int>
doesn't know how to deal with an empty collection (it throws an exception). When xs
is empty that implies a leaf node, in which case you can simply return x
; otherwise, you'll first have to use the Max
method on xs
to find the maximum value there, and then use Math.Max
to find the maximum of those two. (I'll here remind the attentive reader that finding the maximum number forms a semigroup and that semigroups accumulate when collections are non-empty. It all fits together. Isn't maths lovely?)
Using the same tree
as before, you can see that this method, too, works as expected:
> tree.Max()
1337
So far, these two extension methods are just specialised folds. In Haskell, Foldable
is a specific type class, and sum
and max
are available for all instances. As promised in the introduction to the series, though, there are some functions on trees that you can't implement using a fold. One of these is to count all the leaf nodes. You can still derive that functionality from the catamorphism, though:
public int CountLeaves()
{
    return Cata<int>((x, xs) => xs.Any() ? xs.Sum() : 1);
}
Like Max
, the lambda expression used to implement CountLeaves
uses Any to detect whether or not xs
is empty, which is when Any
is false
. Empty xs
indicates that you've found a leaf node, so return 1
. When xs
isn't empty, it contains a collection of 1
values - one for each leaf node recursively found; add them together with Sum
.
This also works for the same tree
as before:
> tree.CountLeaves()
4
You can also measure the maximum depth of a tree:
public int MeasureDepth()
{
    return Cata<int>((x, xs) => xs.Any() ? 1 + xs.Max() : 0);
}
This implementation considers a leaf node to have no depth:
> Tree.Leaf("foo").MeasureDepth()
0
This is a discretionary definition; you could also argue that, by definition, a leaf node ought to have a depth of one. If you think so, you'll need to change the 0
to 1
in the above MeasureDepth
implementation.
Once more, you can use Any
to detect leaf nodes. Whenever you find a leaf node, you return its depth, which, by definition, is 0
. Otherwise, you find the maximum depth already found among xs
, and add 1
, because xs
contains the maximum depths of all immediate sub-trees.
Using the same tree
again:
> tree.MeasureDepth()
2
The above tree
has the same depth for all sub-trees, so here's an example of a tilted tree:
> Tree.Create(3,
.     Tree.Create(1,
.         Tree.Leaf(0),
.         Tree.Leaf(0)),
.     Tree.Leaf(0),
.     Tree.Leaf(0),
.     Tree.Create(2,
.         Tree.Create(1,
.             Tree.Leaf(0))))
.     .MeasureDepth()
3
To make it easier to understand, I've labelled all the leaf nodes with 0
, because that's their depth. I've then labelled the other nodes with the maximum number 'under' them, plus one. That's the algorithm used.
Tree F-Algebra #
As in the previous article, I'll use Fix
and cata
as explained in Bartosz Milewski's excellent article on F-Algebras.
As always, start with the underlying endofunctor. I've taken some inspiration from Tree a
from Data.Tree
, but changed some names:
data TreeF a c = NodeF { nodeValue :: a, nodes :: ListFix c } deriving (Show, Eq, Read)

instance Functor (TreeF a) where
  fmap f (NodeF x ns) = NodeF x $ fmap f ns
Instead of using Haskell's standard list ([]
) for the sub-forest, I've used ListFix
from the article on list catamorphism. This should, hopefully, demonstrate how you can build on already established definitions derived from first principles.
As usual, I've called the 'data' type a
and the carrier type c
(for carrier). The Functor
instance as usual translates the carrier type; the fmap
function has the type (c -> c1) -> TreeF a c -> TreeF a c1
.
As was the case when deducing the recent catamorphisms, Haskell isn't too happy about defining instances for a type like Fix (TreeF a)
. To address that problem, you can introduce a newtype
wrapper:
newtype TreeFix a = TreeFix { unTreeFix :: Fix (TreeF a) } deriving (Show, Eq, Read)
You can define Functor
, Applicative
, Monad
, etc. instances for this type without resorting to any funky GHC extensions. Keep in mind that ultimately, the purpose of all this code is just to figure out what the catamorphism looks like. This code isn't intended for actual use.
A pair of helper functions make it easier to define TreeFix
values:
leafF :: a -> TreeFix a
leafF x = TreeFix $ Fix $ NodeF x nilF

nodeF :: a -> ListFix (TreeFix a) -> TreeFix a
nodeF x = TreeFix . Fix . NodeF x . fmap unTreeFix
leafF
creates a leaf node:
Prelude Fix List Tree> leafF "ploeh"
TreeFix {unTreeFix = Fix (NodeF "ploeh" (ListFix (Fix NilF)))}
nodeF
is a helper function to create a non-leaf node:
Prelude Fix List Tree> nodeF 4 (consF (leafF 9) nilF)
TreeFix {unTreeFix = Fix (NodeF 4 (ListFix (Fix (ConsF (Fix (NodeF 9 (ListFix (Fix NilF)))) (Fix NilF)))))}
Even with helper functions, construction of TreeFix
values is cumbersome, but keep in mind that the code shown here isn't meant to be used in practice. The goal is only to deduce catamorphisms from more basic universal abstractions, and you now have all you need to do that.
Haskell catamorphism #
At this point, you have two out of three elements of an F-Algebra. You have an endofunctor (TreeF a
), and an object c
, but you still need to find a morphism TreeF a c -> c
. Notice that the algebra you have to find is the function that reduces the functor to its carrier type c
, not the 'data type' a
. This takes some time to get used to, but that's how catamorphisms work. This doesn't mean, however, that you get to ignore a
, as you'll see.
As in the previous articles, start by writing a function that will become the catamorphism, based on cata
:
treeF = cata alg . unTreeFix
  where alg (NodeF x ns) = undefined
While this compiles, with its undefined
implementation of alg
, it obviously doesn't do anything useful. I find, however, that it helps me think. How can you return a value of the type c
from alg
? You could pass a function argument to the treeF
function and use it with x
and ns
:
treeF :: (a -> ListFix c -> c) -> TreeFix a -> c
treeF f = cata alg . unTreeFix
  where alg (NodeF x ns) = f x ns
This works. Since cata
has the type Functor f => (f a -> a) -> Fix f -> a
, that means that alg
has the type f a -> a
. In the case of TreeF
, the compiler infers that the alg
function has the type TreeF a c -> c
, which is just what you need!
You can now see what the carrier type c
is for. It's the type that the algebra extracts, and thus the type that the catamorphism returns.
This, then, is the catamorphism for a tree. So far in this article series, all previous catamorphisms have been pairs, but this one is just a single function. It's still not the only possible catamorphism, since you could trivially flip the arguments to f
.
I've chosen the representation shown here because it's isomorphic to the foldTree
function from Haskell's built-in Data.Tree
module, which explicitly documents that the function "is also known as the catamorphism on trees." foldTree
is defined using Haskell's standard list type ([]
), so the type is simpler: (a -> [b] -> b) -> Tree a -> b
. The two representations of trees, TreeFix
and Tree
are, however, isomorphic, so foldTree
is equivalent to treeF
. Notice how both of these functions are also equivalent to the above C# Cata
method.
Basis #
You can implement most other useful functionality with treeF
. Here's the Functor
instance:
instance Functor TreeFix where
  fmap f = treeF (nodeF . f)
nodeF . f
is just the point-free version of \x ns -> nodeF (f x) ns
, which follows the exact same implementation logic as the above C# Select
implementation.
The Applicative
instance is, I'm afraid, the most complex code you've seen so far in this article series:
instance Applicative TreeFix where
  pure = leafF
  ft <*> xt = treeF (\f ts -> addNodes ts $ f <$> xt) ft
    where addNodes ns (TreeFix (Fix (NodeF x xs))) =
            TreeFix (Fix (NodeF x (xs <> (unTreeFix <$> ns))))
I'd be surprised if it's impossible to make this terser, but I thought that it was just complicated enough that I needed to make one of the steps explicit. The addNodes
helper function has the type ListFix (TreeFix a) -> TreeFix a -> TreeFix a
, and it adds a list of sub-trees to the top node of a tree. It looks worse than it is, but it really just peels off the wrappers (TreeFix
, Fix
, and NodeF
) to access the data (x
and xs
) of the top node. It then concatenates xs
with ns
, and puts all the wrappers back on.
I have to admit, though, that the Applicative
and Monad
instances are, in general, mind-bending to me. The <*>
operation, particularly, takes a tree of functions and has to combine it with a tree of values. What does that even mean? How does one do that?
Like the above, apparently. I took the Applicative
behaviour from Data.Tree
and made sure that my implementation is isomorphic. I even have a property to make 'sure' that's the case:
testProperty "Applicative behaves like Data.Tree" $ do
  xt :: TreeFix Integer <- fromTree <$> resize 10 arbitrary
  ft :: TreeFix (Integer -> String) <- fromTree <$> resize 5 arbitrary

  let actual = ft <*> xt

  let expected = toTree ft <*> toTree xt
  return $ expected === toTree actual
The Monad
instance looks similar to the Applicative
instance:
instance Monad TreeFix where
  t >>= f = treeF (\x ns -> addNodes ns $ f x) t
    where addNodes ns (TreeFix (Fix (NodeF x xs))) =
            TreeFix (Fix (NodeF x (xs <> (unTreeFix <$> ns))))
The addNodes
helper function is the same as for <*>
, so you may wonder why I didn't extract that as a separate, reusable function. I decided, however, to apply the rule of three, and since, ultimately, addNodes
appears only twice, I left it as the implementation detail it is.
Fortunately, the Foldable
instance is easier on the eyes:
instance Foldable TreeFix where
  foldMap f = treeF (\x xs -> f x <> fold xs)
Since f
is a function of the type a -> m
, where m
is a Monoid
instance, you can use fold
and <>
to accumulate everything to a single m
value.
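To see the Foldable shape without the Fix machinery, here's a sketch (mine) using a plain rose tree; with the list monoid, foldMap enumerates all node values:

```haskell
import Data.Foldable (toList)

-- A plain rose tree standing in for TreeFix; the Foldable instance
-- follows the same shape as the article's: combine the current value
-- with the folded sub-trees.
data Rose a = Rose a [Rose a]

instance Foldable Rose where
  foldMap f (Rose x ts) = f x <> foldMap (foldMap f) ts

main :: IO ()
main = print (toList (Rose 1 [Rose 2 [Rose 3 []], Rose 4 []]))
```

Here main prints [1,2,3,4]: f x comes first, then the folded sub-trees, in order.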
The Traversable
instance is similarly terse:
instance Traversable TreeFix where
  sequenceA = treeF (\x ns -> nodeF <$> x <*> sequenceA ns)
Finally, you can implement conversions to and from the Tree
type from Data.Tree
, using ana
as the dual of cata
:
toTree :: TreeFix a -> Tree a
toTree = treeF (\x ns -> Node x $ toList ns)

fromTree :: Tree a -> TreeFix a
fromTree = TreeFix . ana coalg
  where coalg (Node x ns) = NodeF x (fromList ns)
This demonstrates that TreeFix
is isomorphic to Tree
, which again establishes that treeF
and foldTree
are equivalent.
Relationships #
In this series, you've seen various examples of catamorphisms of structures that have no folds, catamorphisms that coincide with folds, and catamorphisms that are more general than the fold. The introduction to the series included this diagram:
The Either catamorphism is another example of a catamorphism that is more general than the fold, but that one turned out to be identical to the bifold. That's not the case here, because TreeFix
isn't a Bifoldable
instance at all.
There are operations on trees that you can implement with a fold, but some that you can't. Consider the tree shown in the diagram at the beginning of the article. This is also the tree that the above C# examples use. In Haskell, using TreeFix
, you can define that tree like this:
tree =
  nodeF 42
    (consF (nodeF 1337 (consF (leafF (-3)) nilF)) $
     consF (nodeF 7 (consF (leafF (-99)) $
                     consF (leafF 100) $
                     consF (leafF 0) nilF)) nilF)
Yes, that almost looks like some Lisp dialect...
Since TreeFix
is Foldable
, and that type class already comes with sum
and maximum
functions, no further work is required to repeat the first two of the above C# examples:
*Tree Fix List Tree> sum tree
1384
*Tree Fix List Tree> maximum tree
1337
Counting leaves, or measuring the depth of a tree, on the other hand, is impossible with the Foldable
instance, but can be implemented using the catamorphism:
countLeaves :: Num n => TreeFix a -> n
countLeaves = treeF (\_ xs -> if null xs then 1 else sum xs)

treeDepth :: (Ord n, Num n) => TreeFix a -> n
treeDepth = treeF (\_ xs -> if null xs then 0 else 1 + maximum xs)
The reasoning is the same as already explained in the above C# examples. The functions also produce the same results:
*Tree Fix List Tree> countLeaves tree
4
*Tree Fix List Tree> treeDepth tree
2
This, hopefully, illustrates that the catamorphism is more capable, and that the fold is just a (list-biased) specialisation.
Summary #
The catamorphism for a tree is just a single function, which is recursively evaluated. It enables you to translate, traverse, and reduce trees in many interesting ways.
You can use the catamorphism to implement a (list-biased) fold, including enumerating all nodes as a flat list, but there are operations (such as counting leaves) that you can implement with the catamorphism, but not with the fold.
This article series has so far covered progressively more complex data structures. The first examples (Boolean catamorphism and Peano catamorphism) were neither functors, applicatives, nor monads. All subsequent examples, on the other hand, are all of these, and more. The next example presents a functor that's neither applicative nor monad, yet still foldable. Obviously, what functionality it offers is still based on a catamorphism.
Either catamorphism
The catamorphism for Either is a generalisation of its fold. The catamorphism enables operations not available via fold.
This article is part of an article series about catamorphisms. A catamorphism is a universal abstraction that describes how to digest a data structure into a potentially more compact value.
This article presents the catamorphism for Either (also known as Result), as well as how to identify it. The beginning of this article presents the catamorphism in C#, with examples. The rest of the article describes how to deduce the catamorphism. This part of the article presents my work in Haskell. Readers not comfortable with Haskell can just read the first part, and consider the rest of the article as an optional appendix.
Either is a data container that models two mutually exclusive results. It's often used to return values that may be either correct (right), or incorrect (left). In statically typed functional programming with a preference for total functions, Either offers a saner, more reasonable way to model success and error results than throwing exceptions.
C# catamorphism #
This article uses the Church encoding of Either. The catamorphism for Either is the Match
method:
T Match<T>(Func<L, T> onLeft, Func<R, T> onRight);
Until this article, all previous catamorphisms have been a pair made from an initial value and a function. The Either catamorphism is a generalisation, since it's a pair of functions. One function handles the case where there's a left value, and the other function handles the case where there's a right value. Both functions must return the same, unifying type, which is often a string or something similar that can be shown in a user interface:
> IEither<TimeSpan, int> e = new Left<TimeSpan, int>(TimeSpan.FromMinutes(3));
> e.Match(ts => ts.ToString(), i => i.ToString())
"00:03:00"
> IEither<TimeSpan, int> e = new Right<TimeSpan, int>(42);
> e.Match(ts => ts.ToString(), i => i.ToString())
"42"
You'll often see examples like the above that turn both left and right cases into strings or something that can be represented as a unifying response type, but this is in no way required. If you can come up with a unifying type, you can convert both cases to that type:
> IEither<Guid, string> e = new Left<Guid, string>(Guid.NewGuid());
> e.Match(g => g.ToString().Count(c => 'a' <= c), s => s.Length)
12
> IEither<Guid, string> e = new Right<Guid, string>("foo");
> e.Match(g => g.ToString().Count(c => 'a' <= c), s => s.Length)
3
In the two examples above, you use two different functions that reduce, respectively, Guid
and string
values to a number. The function that turns Guid
values into a number counts how many of the hexadecimal digits that are greater than or equal to A (10). The other function simply returns the length of the string
, if it's there. That example makes little sense, but the Match
method doesn't care about that.
In practical use, Either is often used for error handling. The article on the Church encoding of Either contains an example.
Either F-Algebra #
As in the previous article, I'll use Fix
and cata
as explained in Bartosz Milewski's excellent article on F-Algebras.
While F-Algebras and fixed points are mostly used for recursive data structures, you can also define an F-Algebra for a non-recursive data structure. You already saw an example of that in the articles about Boolean catamorphism and Maybe catamorphism. The difference between e.g. Maybe values and Either is that both cases of Either carry a value. You can model this as a Functor
with both a carrier type and two type arguments for the data that Either may contain:
data EitherF l r c = LeftF l | RightF r deriving (Show, Eq, Read)

instance Functor (EitherF l r) where
  fmap _ (LeftF l) = LeftF l
  fmap _ (RightF r) = RightF r
I chose to call the 'data types' l
(for left) and r
(for right), and the carrier type c
(for carrier). As was also the case with BoolF
and MaybeF
, the Functor
instance ignores the map function because the carrier type is missing from both the LeftF
case and the RightF
case. Like the Functor
instances for BoolF
and MaybeF
, it'd seem that nothing happens, but at the type level, this is still a translation from EitherF l r c
to EitherF l r c1
. Not much of a function, perhaps, but definitely an endofunctor.
As was also the case when deducing the Maybe and List catamorphisms, Haskell isn't too happy about defining instances for a type like Fix (EitherF l r)
. To address that problem, you can introduce a newtype
wrapper:
newtype EitherFix l r = EitherFix { unEitherFix :: Fix (EitherF l r) } deriving (Show, Eq, Read)
You can define Functor
, Applicative
, Monad
, etc. instances for this type without resorting to any funky GHC extensions. Keep in mind that ultimately, the purpose of all this code is just to figure out what the catamorphism looks like. This code isn't intended for actual use.
A pair of helper functions make it easier to define EitherFix
values:
leftF :: l -> EitherFix l r
leftF = EitherFix . Fix . LeftF

rightF :: r -> EitherFix l r
rightF = EitherFix . Fix . RightF
With those functions, you can create EitherFix
values:
Prelude Data.UUID Data.UUID.V4 Fix Either> leftF <$> nextRandom
EitherFix {unEitherFix = Fix (LeftF e65378c2-0d6e-47e0-8bcb-7cc29d185fad)}
Prelude Data.UUID Data.UUID.V4 Fix Either> rightF "foo"
EitherFix {unEitherFix = Fix (RightF "foo")}
That's all you need to identify the catamorphism.
Haskell catamorphism #
At this point, you have two out of three elements of an F-Algebra. You have an endofunctor (EitherF l r
), and an object c
, but you still need to find a morphism EitherF l r c -> c
. Notice that the algebra you have to find is the function that reduces the functor to its carrier type c
, not the 'data types' l
and r
. This takes some time to get used to, but that's how catamorphisms work. This doesn't mean, however, that you get to ignore l
and r
, as you'll see.
As in the previous articles, start by writing a function that will become the catamorphism, based on cata
:
eitherF = cata alg . unEitherFix
  where alg  (LeftF l) = undefined
        alg (RightF r) = undefined
While this compiles, with its undefined
implementations, it obviously doesn't do anything useful. I find, however, that it helps me think. How can you return a value of the type c
from the LeftF
case? You could pass an argument to the eitherF
function:
eitherF fl = cata alg . unEitherFix
  where alg  (LeftF l) = fl l
        alg (RightF r) = undefined
While you could, technically, pass an argument of the type c
to eitherF
and then return that value from the LeftF
case, that would mean that you would ignore the l
value. This would be incorrect, so instead, make the argument a function and call it with l
. Likewise, you can deal with the RightF
case in the same way:
eitherF :: (l -> c) -> (r -> c) -> EitherFix l r -> c
eitherF fl fr = cata alg . unEitherFix
  where alg  (LeftF l) = fl l
        alg (RightF r) = fr r
This works. Since cata
has the type Functor f => (f a -> a) -> Fix f -> a
, that means that alg
has the type f a -> a
. In the case of EitherF
, the compiler infers that the alg
function has the type EitherF l r c -> c
, which is just what you need!
You can now see what the carrier type c
is for. It's the type that the algebra extracts, and thus the type that the catamorphism returns.
This, then, is the catamorphism for Either. As has been consistent so far, it's a pair, but now made from two functions. As you've seen repeatedly, this isn't the only possible catamorphism, since you can, for example, trivially flip the arguments to eitherF
.
I've chosen the representation shown here because it's isomorphic to the either
function from Haskell's built-in Data.Either
module, which calls the function the "Case analysis for the Either
type". Both of these functions (eitherF
and either
) are equivalent to the above C# Match
method.
Basis #
You can implement most other useful functionality with eitherF
. Here's the Bifunctor
instance:
instance Bifunctor EitherFix where
  bimap f s = eitherF (leftF . f) (rightF . s)
From that instance, the Functor
instance trivially follows:
instance Functor (EitherFix l) where
  fmap = second
On top of Functor
you can add Applicative
:
instance Applicative (EitherFix l) where
  pure = rightF
  f <*> x = eitherF leftF (<$> x) f
Notice that the <*>
implementation is similar to the <*>
implementation for MaybeFix
. The same is the case for the Monad
instance:
instance Monad (EitherFix l) where
  x >>= f = eitherF leftF f x
Not only is EitherFix
Foldable
, it's Bifoldable
:
instance Bifoldable EitherFix where
  bifoldMap = eitherF
Notice, interestingly, that bifoldMap
is identical to eitherF
.
The Bifoldable
instance enables you to trivially implement the Foldable
instance:
instance Foldable (EitherFix l) where
  foldMap = bifoldMap mempty
You may find the presence of mempty
puzzling, since bifoldMap
(or eitherF
; they're identical) takes as arguments two functions. Is mempty
a function?
Yes, mempty
can be a function. Here, it is. There's a Monoid
instance for any function a -> m
, where m
is a Monoid
instance, and mempty
is the identity for that monoid. That's the instance in use here.
Just as EitherFix
is Bifoldable
, it's also Bitraversable
:
instance Bitraversable EitherFix where
  bitraverse fl fr = eitherF (fmap leftF . fl) (fmap rightF . fr)
You can comfortably implement the Traversable
instance based on the Bitraversable
instance:
instance Traversable (EitherFix l) where
  sequenceA = bisequenceA . first pure
Finally, you can implement conversions to and from the standard Either
type, using ana
as the dual of cata
:
toEither :: EitherFix l r -> Either l r
toEither = eitherF Left Right

fromEither :: Either a b -> EitherFix a b
fromEither = EitherFix . ana coalg
  where coalg  (Left l) = LeftF l
        coalg (Right r) = RightF r
This demonstrates that EitherFix
is isomorphic to Either
, which again establishes that eitherF
and either
are equivalent.
Relationships #
In this series, you've seen various examples of catamorphisms of structures that have no folds, catamorphisms that coincide with folds, and now a catamorphism that is more general than the fold. The introduction to the series included this diagram:
This shows that Boolean values and Peano numbers have catamorphisms, but no folds, whereas for Maybe and List, the fold and the catamorphism is the same. For Either, however, the fold is a special case of the catamorphism. The fold for Either 'pretends' that the left side doesn't exist. Instead, the left value is interpreted as a missing right value. Thus, in order to fold Either values, you must supply a 'fallback' value that will be used in case an Either value isn't a right value:
Prelude Fix Either> e = rightF LT :: EitherFix Integer Ordering
Prelude Fix Either> foldr (const . show) "" e
"LT"
Prelude Fix Either> e = leftF 42 :: EitherFix Integer Ordering
Prelude Fix Either> foldr (const . show) "" e
""
In a GHCi session like the above, you can create two Either values of the same type. The right case is an Ordering
value, while the left case is an Integer
value.
With foldr
, there's no way to access the left case. While you can access and transform the right Ordering
value, the number 42
is simply ignored during the fold. Instead, the default value ""
is returned.
Contrast this with the catamorphism, which can access both cases:
Prelude Fix Either> e = rightF LT :: EitherFix Integer Ordering
Prelude Fix Either> eitherF show show e
"LT"
Prelude Fix Either> e = leftF 42 :: EitherFix Integer Ordering
Prelude Fix Either> eitherF show show e
"42"
In a session like this, you recreate the same values, but using the catamorphism eitherF
, you can now access and transform both the left and the right cases. In other words, the catamorphism enables you to perform operations not possible with the fold.
It's interesting, however, to note that while the fold is a specialisation of the catamorphism, the bifold is identical to the catamorphism.
Summary #
The catamorphism for Either is a pair of functions. One function transforms the left case, while the other function transforms the right case. For any Either value, only one of those functions will be used.
When I originally encountered the concept of a catamorphism, I found it difficult to distinguish between catamorphism and fold. My problem was, I think, that the tutorials I ran into mostly used linked lists to demonstrate how, in that case, the fold is the catamorphism. It turns out, however, that this isn't always the case. A catamorphism is a general abstraction. A fold, on the other hand, seems to me to be mostly related to collections.
In this article you saw the first example of a catamorphism that can do more than the fold. For Either, the fold is just a special case of the catamorphism. You also saw, however, how the catamorphism was identical to the bifold. Thus, it's still not entirely clear how these concepts relate. Therefore, in the next article, you'll get an example of a container where there's no bifold, and where the catamorphism is, indeed, a generalisation of the fold.
Next: Tree catamorphism.
List catamorphism
The catamorphism for a list is the same as its fold.
This article is part of an article series about catamorphisms. A catamorphism is a universal abstraction that describes how to digest a data structure into a potentially more compact value.
This article presents the catamorphism for (linked) lists, and other collections in general. It also shows how to identify it. The beginning of this article presents the catamorphism in C#, with an example. The rest of the article describes how to deduce the catamorphism. This part of the article presents my work in Haskell. Readers not comfortable with Haskell can just read the first part, and consider the rest of the article as an optional appendix.
The C# part of the article will discuss IEnumerable<T>
, while the Haskell part will deal specifically with linked lists. Since C# is a less strict language anyway, we have to make some concessions when we consider how concepts translate. In my experience, the functionality of IEnumerable<T>
closely mirrors that of Haskell lists.
C# catamorphism #
The .NET base class library defines this Aggregate overload:
public static TAccumulate Aggregate<TSource, TAccumulate>(
    this IEnumerable<TSource> source,
    TAccumulate seed,
    Func<TAccumulate, TSource, TAccumulate> func);
This is the catamorphism for linked lists (and, I'll conjecture, for IEnumerable<T>
in general). The introductory article already used it to show several motivating examples, of which I'll only repeat the last:
> new[] { 42, 1337, 2112, 90125, 5040, 7, 1984 }
. .Aggregate(Angle.Identity, (a, i) => a.Add(Angle.FromDegrees(i)))
[{ Angle = 207° }]
In short, the catamorphism is, similar to the previous catamorphisms covered in this article series, a pair made from an initial value and a function. This has been true for both the Peano catamorphism and the Maybe catamorphism. An initial value is just a value in all three cases, but you may notice that the function in question becomes increasingly elaborate. For IEnumerable<T>
, it's a function that takes two values. One of the values is of the type of the input list, i.e. for IEnumerable<TSource>
it would be TSource
. By elimination you can deduce that this value must come from the input list. The other value is of the type TAccumulate
, which implies that it could be the seed
, or the result from a previous call to func
.
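For comparison, Haskell's foldl has the same shape as Aggregate: a seed, and a function that takes the accumulator first and a list element second. A minimal sketch (the name total is hypothetical):

```haskell
-- foldl mirrors Aggregate: the accumulator corresponds to TAccumulate,
-- each list element to TSource, and 0 is the seed.
total :: Integer
total = foldl (\acc x -> acc + x) 0 [42, 1337, 2112]
```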
List F-Algebra #
As in the previous article, I'll use Fix
and cata
as explained in Bartosz Milewski's excellent article on F-Algebras. The ListF
type comes from his article as well, although I've renamed the type arguments:
data ListF a c = NilF | ConsF a c deriving (Show, Eq, Read)

instance Functor (ListF a) where
  fmap _        NilF = NilF
  fmap f (ConsF a c) = ConsF a $ f c
Like I did with MaybeF
, I've named the 'data' type argument a
, and the carrier type c
(for carrier). Once again, notice that the Functor
instance maps over the carrier type c
; not over the 'data' type a
.
As was also the case when deducing the Maybe catamorphism, Haskell isn't too happy about defining instances for a type like Fix (ListF a)
. To address that problem, you can introduce a newtype
wrapper:
newtype ListFix a = ListFix { unListFix :: Fix (ListF a) } deriving (Show, Eq, Read)
You can define Functor
, Applicative
, Monad
, etc. instances for this type without resorting to any funky GHC extensions. Keep in mind that ultimately, the purpose of all this code is just to figure out what the catamorphism looks like. This code isn't intended for actual use.
A few helper functions make it easier to define ListFix
values:
nilF :: ListFix a
nilF = ListFix $ Fix NilF

consF :: a -> ListFix a -> ListFix a
consF x = ListFix . Fix . ConsF x . unListFix
With those functions, you can create ListFix
linked lists:
Prelude Fix List> nilF
ListFix {unListFix = Fix NilF}
Prelude Fix List> consF 42 $ consF 1337 $ consF 2112 nilF
ListFix {unListFix = Fix (ConsF 42 (Fix (ConsF 1337 (Fix (ConsF 2112 (Fix NilF))))))}
The first example creates an empty list, whereas the second creates a list of three integers, corresponding to [42,1337,2112]
.
That's all you need to identify the catamorphism.
Haskell catamorphism #
At this point, you have two out of three elements of an F-Algebra. You have an endofunctor (ListF
), and an object a
, but you still need to find a morphism ListF a c -> c
. Notice that the algebra you have to find is the function that reduces the functor to its carrier type c
, not the 'data type' a
. This takes some time to get used to, but that's how catamorphisms work. This doesn't mean, however, that you get to ignore a
, as you'll see.
As in the previous article, start by writing a function that will become the catamorphism, based on cata
:
listF = cata alg . unListFix
  where alg NilF = undefined
        alg (ConsF h t) = undefined
While this compiles, with its undefined
implementations, it obviously doesn't do anything useful. I find, however, that it helps me think. How can you return a value of the type c
from the NilF
case? You could pass an argument to the listF
function:
listF n = cata alg . unListFix
  where alg NilF = n
        alg (ConsF h t) = undefined
The ConsF
case, contrary to NilF
, contains both a head h
(of type a
) and a tail t
(of type c
). While you could make the code compile by simply returning t
, it'd be incorrect to ignore h
. In order to deal with both, you'll need a function that turns both h
and t
into a value of the type c
. You can do this by passing a function to listF
and using it:
listF :: (a -> c -> c) -> c -> ListFix a -> c
listF f n = cata alg . unListFix
  where alg NilF = n
        alg (ConsF h t) = f h t
This works. Since cata
has the type Functor f => (f a -> a) -> Fix f -> a
, that means that alg
has the type f a -> a
. In the case of ListF
, the compiler infers that the alg
function has the type ListF a c -> c
, which is just what you need!
You can now see what the carrier type c
is for. It's the type that the algebra extracts, and thus the type that the catamorphism returns.
This, then, is the catamorphism for lists. As has been consistent so far, it's a pair made from an initial value and a function. Once more, this isn't the only possible catamorphism, since you can, for example, trivially flip the arguments to listF
:
listF :: c -> (a -> c -> c) -> ListFix a -> c
listF n f = cata alg . unListFix
  where alg NilF = n
        alg (ConsF h t) = f h t
You can also flip the arguments of f
:
listF :: c -> (c -> a -> c) -> ListFix a -> c
listF n f = cata alg . unListFix
  where alg NilF = n
        alg (ConsF h t) = f t h
These representations are all isomorphic to each other, but notice that the last variation is similar to the above C# Aggregate
overload. The initial n
value is the seed
, and the function f
has the same shape as func
. Thus, I consider it reasonable to conjecture that that Aggregate
overload is the catamorphism for IEnumerable<T>
.
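You can see the same interdefinability with the standard list type, where the first shape is exactly foldr's, and the Aggregate-like shape can be recovered from foldl (listFoldr and listFoldl are hypothetical names for this sketch):

```haskell
-- The (a -> c -> c) -> c -> ... shape is foldr's own; the flipped,
-- Aggregate-like c -> (c -> a -> c) -> ... shape is foldl with its
-- arguments rearranged.
listFoldr :: (a -> c -> c) -> c -> [a] -> c
listFoldr = foldr

listFoldl :: c -> (c -> a -> c) -> [a] -> c
listFoldl n f = foldl f n
```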
Basis #
You can implement most other useful functionality with listF
. The rest of this article uses the first of the variations shown above, with the type (a -> c -> c) -> c -> ListFix a -> c
. Here's the Semigroup
instance:
instance Semigroup (ListFix a) where xs <> ys = listF consF ys xs
The initial value passed to listF
is ys
, and the function to apply is simply the consF
function, thus 'consing' the two lists together. Here's an example of the operation in action:
Prelude Fix List> consF 42 $ consF 1337 nilF <> (consF 2112 $ consF 1 nilF)
ListFix {unListFix = Fix (ConsF 42 (Fix (ConsF 1337 (Fix (ConsF 2112 (Fix (ConsF 1 (Fix NilF))))))))}
With a Semigroup
instance, it's trivial to also implement the Monoid
instance:
instance Monoid (ListFix a) where mempty = nilF
While you could implement mempty
with listF
(mempty = listF (const id) nilF nilF
), that'd be overcomplicated. Just because you can implement all functionality using listF
, it doesn't mean that you should, if a simpler alternative exists.
You can, on the other hand, use listF
for the Functor
instance:
instance Functor ListFix where fmap f = listF (\h l -> consF (f h) l) nilF
You could write the function you pass to listF
in a point-free fashion as consF . f
, but I thought it'd be easier to follow what happens when written as an explicit lambda expression. The function receives a 'current value' h
, as well as the part of the list which has already been translated l
. Use f
to translate h
, and consF
the result unto l
.
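The same technique works with the standard list type, where foldr plays the role of listF; a sketch (mapViaFoldr is a hypothetical name):

```haskell
-- map implemented with foldr, mirroring the Functor instance above:
-- translate the head with f, then cons it onto the already-translated tail.
mapViaFoldr :: (a -> b) -> [a] -> [b]
mapViaFoldr f = foldr (\h l -> f h : l) []
```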
You can add Applicative
and Monad
instances in a similar fashion:
instance Applicative ListFix where
  pure x = consF x nilF
  fs <*> xs = listF (\f acc -> (f <$> xs) <> acc) nilF fs

instance Monad ListFix where
  xs >>= f = listF (\x acc -> f x <> acc) nilF xs
What may be more interesting, however, is the Foldable
instance:
instance Foldable ListFix where foldr = listF
This demonstrates that listF
and foldr
are the same.
Next, you can also add a Traversable
instance:
instance Traversable ListFix where sequenceA = listF (\x acc -> consF <$> x <*> acc) (pure nilF)
Finally, you can implement conversions to and from the standard list []
type, using ana
as the dual of cata
:
toList :: ListFix a -> [a]
toList = listF (:) []

fromList :: [a] -> ListFix a
fromList = ListFix . ana coalg
  where coalg [] = NilF
        coalg (h:t) = ConsF h t
This demonstrates that ListFix
is isomorphic to []
, which again establishes that listF
and foldr
are equivalent.
Summary #
The catamorphism for lists is a pair made from an initial value and a function. One variation is equal to foldr
. Like Maybe, the catamorphism is the same as the fold.
In C#, this function corresponds to the Aggregate
extension method identified above.
You've now seen two examples where the catamorphism coincides with the fold. You've also seen examples (Boolean catamorphism and Peano catamorphism) where there's a catamorphism, but no fold at all. In the next article, you'll see an example of a container that has both catamorphism and fold, but where the catamorphism is more general than the fold.
Next: Either catamorphism.
Maybe catamorphism
The catamorphism for Maybe is just a simplification of its fold.
This article is part of an article series about catamorphisms. A catamorphism is a universal abstraction that describes how to digest a data structure into a potentially more compact value.
This article presents the catamorphism for Maybe, as well as how to identify it. The beginning of this article presents the catamorphism in C#, with examples. The rest of the article describes how to deduce the catamorphism. This part of the article presents my work in Haskell. Readers not comfortable with Haskell can just read the first part, and consider the rest of the article as an optional appendix.
Maybe is a data container that models the absence or presence of a value. Contrary to null, Maybe has a type, so offers a sane and reasonable way to model that situation.
C# catamorphism #
This article uses Church-encoded Maybe. Other, alternative implementations of Maybe are possible. The catamorphism for Maybe is the Match
method:
TResult Match<TResult>(TResult nothing, Func<T, TResult> just);
Like the Peano catamorphism, the Maybe catamorphism is a pair of a value and a function. The nothing
value corresponds to the absence of data, whereas the just
function handles the presence of data.
Given, for example, a Maybe containing a number, you can use Match
to get the value out of the Maybe:
> IMaybe<int> maybe = new Just<int>(42);
> maybe.Match(0, x => x)
42
> IMaybe<int> maybe = new Nothing<int>();
> maybe.Match(0, x => x)
0
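Haskell's standard maybe function is the same pair of a value and a function, and the get-value-or-default example translates directly (getOrZero is a hypothetical name):

```haskell
-- maybe takes the 'nothing' value first and the 'just' function second.
getOrZero :: Maybe Int -> Int
getOrZero = maybe 0 id
```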
The functionality is, however, more useful than a simple get-value-or-default operation. Often, you don't have a good default value for the type potentially wrapped in a Maybe object. In the core of your application architecture, it may not be clear how to deal with, say, the absence of a Reservation
object, whereas at the boundary of your system, it's evident how to convert both absence and presence of data into a unifying type, such as an HTTP response:
Maybe<Reservation> maybe = // ...
return maybe
    .Select(r => Repository.Create(r))
    .Match<IHttpActionResult>(
        nothing: Content(
            HttpStatusCode.InternalServerError,
            new HttpError("Couldn't accept.")),
        just: id => Ok(id));
This enables you to avoid special cases, such as null Reservation
objects, or magic numbers like -1
to indicate the absence of id
values. At the boundary of an HTTP-based application, you know that you must return an HTTP response. That's the unifying type, so you can return 200 OK
with a reservation ID in the response body when data is present, and 500 Internal Server Error
when data is absent.
Maybe F-Algebra #
As in the previous article, I'll use Fix
and cata
as explained in Bartosz Milewski's excellent article on F-Algebras.
While F-Algebras and fixed points are mostly used for recursive data structures, you can also define an F-Algebra for a non-recursive data structure. You already saw an example of that in the article about Boolean catamorphism. The difference between Boolean values and Maybe is that the just case of Maybe carries a value. You can model this as a Functor
with both a carrier type and a type argument for the data that Maybe may contain:
data MaybeF a c = NothingF | JustF a deriving (Show, Eq, Read)

instance Functor (MaybeF a) where
  fmap _  NothingF = NothingF
  fmap _ (JustF x) = JustF x
I chose to call the 'data type' a
and the carrier type c
(for carrier). As was also the case with BoolF
, the Functor
instance ignores the map function because the carrier type is missing from both the NothingF
case and the JustF
case. Like the Functor
instance for BoolF
, it'd seem that nothing happens, but at the type level, this is still a translation from MaybeF a c
to MaybeF a c1
. Not much of a function, perhaps, but definitely an endofunctor.
In the previous articles, it was possible to work directly with the fixed points of both functors; i.e. Fix BoolF
and Fix NatF
. Haskell isn't happy about attempts to define various instances for Fix (MaybeF a)
, so in order to make this easier, you can define a newtype
wrapper:
newtype MaybeFix a = MaybeFix { unMaybeFix :: Fix (MaybeF a) } deriving (Show, Eq, Read)
In order to make it easier to work with MaybeFix
you can add helper functions to create values:
nothingF :: MaybeFix a
nothingF = MaybeFix $ Fix NothingF

justF :: a -> MaybeFix a
justF = MaybeFix . Fix . JustF
You can now create MaybeFix
values to your heart's content:
Prelude Fix Maybe> justF 42
MaybeFix {unMaybeFix = Fix (JustF 42)}
Prelude Fix Maybe> nothingF
MaybeFix {unMaybeFix = Fix NothingF}
That's all you need to identify the catamorphism.
Haskell catamorphism #
At this point, you have two out of three elements of an F-Algebra. You have an endofunctor (MaybeF
), and an object a
, but you still need to find a morphism MaybeF a c -> c
. Notice that the algebra you have to find is the function that reduces the functor to its carrier type c
, not the 'data type' a
. This takes some time to get used to, but that's how catamorphisms work. This doesn't mean, however, that you get to ignore a
, as you'll see.
As in the previous article, start by writing a function that will become the catamorphism, based on cata
:
maybeF = cata alg . unMaybeFix
  where alg  NothingF = undefined
        alg (JustF x) = undefined
While this compiles, with its undefined
implementations, it obviously doesn't do anything useful. I find, however, that it helps me think. How can you return a value of the type c
from the NothingF
case? You could pass an argument to the maybeF
function:
maybeF n = cata alg . unMaybeFix
  where alg  NothingF = n
        alg (JustF x) = undefined
The JustF
case, contrary to NothingF
, already contains a value, and it'd be incorrect to ignore it. On the other hand, x
is a value of type a
, and you need to return a value of type c
. You'll need a function to perform the conversion, so pass such a function as an argument to maybeF
as well:
maybeF :: c -> (a -> c) -> MaybeFix a -> c
maybeF n f = cata alg . unMaybeFix
  where alg  NothingF = n
        alg (JustF x) = f x
This works. Since cata
has the type Functor f => (f a -> a) -> Fix f -> a
, that means that alg
has the type f a -> a
. In the case of MaybeF
, the compiler infers that the alg
function has the type MaybeF a c -> c
, which is just what you need!
You can now see what the carrier type c
is for. It's the type that the algebra extracts, and thus the type that the catamorphism returns.
Notice that maybeF
, like the above C# Match
method, takes as arguments a pair of a value and a function (together with the Maybe value itself). Those are two representations of the same idea. Furthermore, this is nearly identical to the maybe
function in Haskell's Data.Maybe
module. I found it fitting, therefore, to name the function maybeF
.
Basis #
You can implement most other useful functionality with maybeF
. Here's the Functor
instance:
instance Functor MaybeFix where fmap f = maybeF nothingF (justF . f)
Since fmap
should be a structure-preserving map, you'll have to map the nothing case to the nothing case, and just to just. When calling maybeF
, you must supply a value for the nothing case and a function to deal with the just case. The nothing case is easy to handle: just use nothingF
.
In the just case, first apply the function f
to map from a
to b
, and then use justF
to wrap the b
value in a new MaybeFix
container to get MaybeFix b
.
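The standard Maybe type's Functor instance behaves the same way; a small check (incr is a hypothetical name):

```haskell
-- fmap translates the just case and preserves the nothing case.
incr :: Maybe Int -> Maybe Int
incr = fmap (+ 1)
```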
Applicative
is a little harder, but not much:
instance Applicative MaybeFix where
  pure = justF
  f <*> x = maybeF nothingF (<$> x) f
The pure
function is just justF (pun intended). The apply operator <*>
is more complex.
Both f
and x
surrounding <*>
are MaybeFix
values: f
is MaybeFix (a -> b)
, and x
is MaybeFix a
. While it's becoming increasingly clear that you can use a catamorphism like maybeF
to implement most other functionality, to which MaybeFix
value should you apply it? To f
or x
?
Both are possible, but the code looks (in my opinion) more readable if you apply it to f
. Again, when f
is nothing, return nothingF
. When f
is just, use the functor instance to map x
(using the infix fmap
alias <$>
).
The Monad
instance, on the other hand, is almost trivial:
instance Monad MaybeFix where x >>= f = maybeF nothingF f x
As usual, map nothing to nothing by supplying nothingF
. f
is already a function that returns a MaybeFix b
value, so just use that.
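With the standard Maybe type, the same behaviour is easy to observe. Here, safeRecip is a hypothetical function that makes a partial operation total:

```haskell
-- A hypothetical partial operation made total with Maybe:
-- taking the reciprocal fails for zero.
safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)
```

Binding with `>>=` applies safeRecip in the just case and short-circuits in the nothing case.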
The Foldable
instance is likewise trivial (although, as you'll see below, you can make it even more trivial):
instance Foldable MaybeFix where foldMap = maybeF mempty
The foldMap
function must return a Monoid
instance, so for the nothing case, simply return the identity, mempty. Furthermore, foldMap
takes a function a -> m
, but since the foldMap
implementation is point-free, you can't 'see' that function as an argument.
Finally, for the sake of completeness, here's the Traversable
instance:
instance Traversable MaybeFix where sequenceA = maybeF (pure nothingF) (justF <$>)
In the nothing case, you can put nothingF
into the desired Applicative
with pure
. In the just case you can take advantage of the desired Applicative
being also a Functor
by simply mapping the inner value(s) with justF
.
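You can see the same distribution with the standard Maybe type and the list Applicative (distributed is a hypothetical name):

```haskell
-- sequenceA distributes Maybe over another Applicative; with lists as
-- the Applicative, each element of the inner list is wrapped in Just.
distributed :: [Maybe Int]
distributed = sequenceA (Just [1, 2, 3])
```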
Since the Applicative
instance for MaybeFix
equals pure
to justF
, you could alternatively write the Traversable
instance like this:
instance Traversable MaybeFix where sequenceA = maybeF (pure nothingF) (pure <$>)
I like this alternative less, since I find it confusing. The two appearances of pure
relate to two different types. The pure
in pure nothingF
has the type MaybeFix a -> f (MaybeFix a)
, while the pure
in pure <$>
has the type a -> MaybeFix a
!
Both implementations work the same, though:
Prelude Fix Maybe> sequenceA (justF ("foo", 42))
("foo",MaybeFix {unMaybeFix = Fix (JustF 42)})
Here, I'm using the Applicative
instance of (,) String
.
Finally, you can implement conversions to and from the standard Maybe
type, using ana
as the dual of cata
:
toMaybe :: MaybeFix a -> Maybe a
toMaybe = maybeF Nothing return

fromMaybe :: Maybe a -> MaybeFix a
fromMaybe = MaybeFix . ana coalg
  where coalg  Nothing = NothingF
        coalg (Just x) = JustF x
This demonstrates that MaybeFix
is isomorphic to Maybe
, which again establishes that maybeF
and maybe
are equivalent.
Alternatives #
As usual, the above maybeF
isn't the only possible catamorphism. A trivial variation is to flip its arguments, but other variations exist.
It's a recurring observation that a catamorphism is just a generalisation of a fold. In the above code, the Foldable
instance already looked as simple as anyone could desire, but another variation of a catamorphism for Maybe is this gratuitously embellished definition:
maybeF :: (a -> c -> c) -> c -> MaybeFix a -> c
maybeF f n = cata alg . unMaybeFix
  where alg  NothingF = n
        alg (JustF x) = f x n
This variation redundantly passes n
as an argument to f
, thereby changing the type of f
to a -> c -> c
. There's no particular motivation for doing this, apart from establishing that this catamorphism is exactly the same as the fold:
instance Foldable MaybeFix where foldr = maybeF
You can still implement the other instances as well, but the rest of the code suffers in clarity. Here's a few examples:
instance Functor MaybeFix where
  fmap f = maybeF (const . justF . f) nothingF

instance Applicative MaybeFix where
  pure = justF
  f <*> x = maybeF (const . (<$> x)) nothingF f

instance Monad MaybeFix where
  x >>= f = maybeF (const . f) nothingF x
I find that the need to compose with const
does nothing to improve the readability of the code, so this variation is mostly, I think, of academic interest. It does show, though, that the catamorphism of Maybe is isomorphic to its fold, as the diagram in the overview article indicated:
You can demonstrate that this variation, too, is isomorphic to Maybe
with a set of conversions:
toMaybe :: MaybeFix a -> Maybe a
toMaybe = maybeF (const . return) Nothing

fromMaybe :: Maybe a -> MaybeFix a
fromMaybe = MaybeFix . ana coalg
  where coalg  Nothing = NothingF
        coalg (Just x) = JustF x
Only toMaybe
has changed, compared to above; the fromMaybe
function remains the same. The only change to toMaybe
is that the arguments have been flipped, and return
is now composed with const
.
Since (according to Conceptual Mathematics) isomorphisms are transitive this means that the two variations of maybeF
are isomorphic. The latter, more complex, variation of maybeF
is identical to foldr
, so we can consider the simpler, more frequently encountered variation a simplification of fold.
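You can observe the fold for the standard Maybe type directly, since base gives Maybe a Foldable instance with exactly this behaviour (sumMaybe is a hypothetical name):

```haskell
-- foldr on Maybe: Nothing folds to the initial value, while Just x
-- combines x with it, just like the embellished maybeF variation.
sumMaybe :: Maybe Int -> Int
sumMaybe = foldr (+) 0
```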
Summary #
The catamorphism for Maybe is the same as its Church encoding: a pair made from a default value and a function. In Haskell's base library, this is simply called maybe
. In the above C# code, it's called Match
.
This function is total, and you can implement any other functionality you need with it. I therefore consider it the canonical representation of Maybe, which is also why it annoys me that most Maybe implementations come equipped with partial functions like fromJust
, or F#'s Option.get
. Those functions shouldn't be part of a sane and reasonable Maybe API. You shouldn't need them.
In this series of articles about catamorphisms, you've now seen the first example of catamorphism and fold coinciding. In the next article, you'll see another such example - probably the most well-known catamorphism example of them all.
Next: List catamorphism.
Peano catamorphism
The catamorphism for Peano numbers involves a base value and a successor function.
This article is part of an article series about catamorphisms. A catamorphism is a universal abstraction that describes how to digest a data structure into a potentially more compact value.
This article presents the catamorphism for natural numbers, as well as how to identify it. The beginning of the article presents the catamorphism in C#, with examples. The rest of the article describes how I deduced the catamorphism. This part of the article presents my work in Haskell. Readers not comfortable with Haskell can just read the first part, and consider the rest of the article as an optional appendix.
C# catamorphism #
In this article, I model natural numbers using Peano's model, and I'll reuse the Church-encoded implementation you've seen before. The catamorphism for INaturalNumber
is:
public static T Cata<T>(this INaturalNumber n, T zero, Func<T, T> succ)
{
    return n.Match(zero, p => p.Cata(succ(zero), succ));
}
Notice that this is an extension method on INaturalNumber
, taking two other arguments: a zero
argument which will be returned when the number is zero, and a successor function to return the 'next' value based on a previous value.
The zero
argument is the easiest to understand. It's simply passed to Match
so that this is the value that Cata
returns when n
is zero.
The other argument to Match
must be a Func<INaturalNumber, T>
; that is, a function that takes an INaturalNumber
as input and returns a value of the type T
. You can supply such a function by using a lambda expression. This expression receives a predecessor p
as input, and has to return a value of the type T
. The only function available in this context, however, is succ
, which has the type Func<T, T>
. How can you make that work?
As is often the case when programming with generics, it pays to follow the types. A Func<T, T>
requires an input of the type T
. Do you have any T
objects around?
The only available T
object is zero
, so you could call succ(zero)
to produce another T
value. While you could return that immediately, that'd ignore the predecessor p
, so that's not going to work. Another option, which is the one that works, is to recursively call Cata
with succ(zero)
as the zero
value, and succ
as the second argument.
What this accomplishes is that Cata
keeps recursively calling itself until n
is zero. The zero
object, however, will be the result of repeated applications of succ(zero)
. In other words, succ
will be called as many times as the natural number. If n
is 7, then succ
will be called seven times, the first time with the original zero
value, the next time with the result of succ(zero)
, the third time with the result of succ(succ(zero))
, and so on. If the number is 42, succ
will be called 42 times.
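The recursion can be sketched in Haskell with a directly recursive Peano type. This is an illustration only: Nat, cataNat, and count are hypothetical names, and this is not the Church encoding the C# code uses:

```haskell
-- A directly recursive Peano representation of the natural numbers.
data Nat = Zero | Succ Nat

-- The catamorphism: a value for the zero case, and a successor
-- function applied once per Succ layer.
cataNat :: a -> (a -> a) -> Nat -> a
cataNat z _    Zero     = z
cataNat z next (Succ p) = next (cataNat z next p)

-- Counting the Succ layers shows that next runs as many times as
-- the number encodes.
count :: Nat -> Int
count = cataNat 0 (+ 1)
```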
Arithmetic #
You can implement all the functionality you saw in the article on Church-encoded natural numbers. You can start gently by converting a Peano number into a normal C# int
:
public static int Count(this INaturalNumber n)
{
    return n.Cata(0, x => 1 + x);
}
You can play with the functionality in C# Interactive to get a feel for how it works:
> NaturalNumber.Eight.Count()
8
> NaturalNumber.Five.Count()
5
The Count
extension method uses Cata
to count the level of recursion. The zero
value is, not surprisingly, 0
, and the successor function simply adds one to the previous number. Since the successor function runs as many times as encoded by the Peano number, and since the initial value is 0
, you get the integer value of the number when Cata
exits.
A useful building block you can write using Cata
is a function to increment a natural number by one:
public static INaturalNumber Increment(this INaturalNumber n)
{
    return n.Cata(One, p => new Successor(p));
}
This, again, works as you'd expect:
> NaturalNumber.Zero.Increment().Count()
1
> NaturalNumber.Eight.Increment().Count()
9
With the Increment
method and Cata
, you can easily implement addition:
public static INaturalNumber Add(this INaturalNumber x, INaturalNumber y)
{
    return x.Cata(y, p => p.Increment());
}
The trick here is to use y
as the zero
case for x
. In other words, if x
is zero, then Add
should return y
. If x
isn't zero, then Increment
it as many times as the number encodes, but starting at y
. In other words, start with y
and Increment
x
times.
The catamorphism makes it much easier to implement arithmetic operations. Just consider multiplication, which wasn't the simplest implementation in the previous article. Now, it's as simple as this:
public static INaturalNumber Multiply(this INaturalNumber x, INaturalNumber y)
{
    return x.Cata(Zero, p => p.Add(y));
}
Start at 0
and simply Add(y)
x
times.
> NaturalNumber.Nine.Multiply(NaturalNumber.Four).Count()
36
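The same arithmetic can be sketched in Haskell with a directly recursive Peano type. Again, Nat, cataNat, add, multiply, and count are hypothetical names for this illustration, not code from the article:

```haskell
data Nat = Zero | Succ Nat

-- The Peano catamorphism: a zero value and a successor function.
cataNat :: a -> (a -> a) -> Nat -> a
cataNat z _    Zero     = z
cataNat z next (Succ p) = next (cataNat z next p)

-- Addition: start from y and apply Succ x times.
add :: Nat -> Nat -> Nat
add x y = cataNat y Succ x

-- Multiplication: start from Zero and add y, x times.
multiply :: Nat -> Nat -> Nat
multiply x y = cataNat Zero (`add` y) x

-- Convert back to Int by counting Succ layers.
count :: Nat -> Int
count = cataNat 0 (+ 1)
```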
Finally, you can also implement some common predicates:
public static IChurchBoolean IsZero(this INaturalNumber n)
{
    return n.Cata<IChurchBoolean>(new ChurchTrue(), _ => new ChurchFalse());
}

public static IChurchBoolean IsEven(this INaturalNumber n)
{
    return n.Cata<IChurchBoolean>(new ChurchTrue(), b => new ChurchNot(b));
}

public static IChurchBoolean IsOdd(this INaturalNumber n)
{
    return new ChurchNot(n.IsEven());
}
Particularly IsEven
is elegant: It considers zero
even, so simply uses a new ChurchTrue()
object for that case. In all other cases, it alternates between false and true by negating the predecessor.
> NaturalNumber.Three.IsEven().ToBool()
false
It seems convincing that we can use Cata to implement all the other functionality we need. That seems to be a characteristic of a catamorphism. Still, how do we know that Cata is, in fact, the catamorphism for natural numbers?
Peano F-Algebra #
As in the previous article, I'll use Fix and cata as explained in Bartosz Milewski's excellent article on F-Algebras. The NatF type comes from his article as well:
data NatF a = ZeroF | SuccF a deriving (Show, Eq, Read)

instance Functor NatF where
  fmap _ ZeroF = ZeroF
  fmap f (SuccF x) = SuccF $ f x
You can use the fixed point of this functor to define numbers with a shared type. Here's just the first ten:
zeroF, oneF, twoF, threeF, fourF, fiveF, sixF, sevenF, eightF, nineF :: Fix NatF
zeroF = Fix ZeroF
oneF = Fix $ SuccF zeroF
twoF = Fix $ SuccF oneF
threeF = Fix $ SuccF twoF
fourF = Fix $ SuccF threeF
fiveF = Fix $ SuccF fourF
sixF = Fix $ SuccF fiveF
sevenF = Fix $ SuccF sixF
eightF = Fix $ SuccF sevenF
nineF = Fix $ SuccF eightF
That's all you need to identify the catamorphism.
Haskell catamorphism #
At this point, you have two out of three elements of an F-Algebra. You have an endofunctor (NatF), and an object a, but you still need to find a morphism NatF a -> a.
As in the previous article, start by writing a function that will become the catamorphism, based on cata:
natF = cata alg
  where alg ZeroF = undefined
        alg (SuccF predecessor) = undefined
While this compiles, with its undefined implementations, it obviously doesn't do anything useful. I find, however, that it helps me think. How can you return a value of the type a from the ZeroF case? You could pass an argument to the natF function:
natF z = cata alg
  where alg ZeroF = z
        alg (SuccF predecessor) = undefined
In the SuccF case, predecessor is already of the polymorphic type a, so instead of returning a constant value, you can supply a function as an argument to natF and use it in that case:
natF :: a -> (a -> a) -> Fix NatF -> a
natF z next = cata alg
  where alg ZeroF = z
        alg (SuccF predecessor) = next predecessor
This works. Since cata has the type Functor f => (f a -> a) -> Fix f -> a, that means that alg has the type f a -> a. In the case of NatF, the compiler infers that the alg function has the type NatF a -> a, which is just what you need!
For good measure, I should point out that, as usual, the above natF function isn't the only possible catamorphism. Trivially, you can flip the order of the arguments, and this would also be a catamorphism. These two alternatives are isomorphic.

The natF function identifies the Peano number catamorphism, which is equivalent to the C# representation in the beginning of the article. I called the function natF, because there's a tendency in Haskell to name the 'case analysis' or catamorphism after the type, just with a lower-case initial letter.
Basis #
A catamorphism can be used to implement most (if not all) other useful functionality, like all of the above C# functionality. In fact, I wrote the Haskell code first, and then translated my implementations into the above C# extension methods. This means that the following functions follow the same reasoning:
evenF :: Fix NatF -> Fix BoolF
evenF = natF trueF notF

oddF :: Fix NatF -> Fix BoolF
oddF = notF . evenF

incF :: Fix NatF -> Fix NatF
incF = natF oneF $ Fix . SuccF

addF :: Fix NatF -> Fix NatF -> Fix NatF
addF x y = natF y incF x

multiplyF :: Fix NatF -> Fix NatF -> Fix NatF
multiplyF x y = natF zeroF (addF y) x
Here are some GHCi usage examples:
Prelude Boolean Nat> evenF eightF
Fix TrueF
Prelude Boolean Nat> toNum $ multiplyF sevenF sixF
42
The toNum function corresponds to the above Count C# method. It is, again, based on cata. You can use ana to convert the other way:
toNum :: Num a => Fix NatF -> a
toNum = natF 0 (+ 1)

fromNum :: (Eq a, Num a) => a -> Fix NatF
fromNum = ana coalg
  where coalg 0 = ZeroF
        coalg x = SuccF $ x - 1
This demonstrates that Fix NatF is isomorphic to Num instances, such as Integer.
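The round trip between the two representations can also be sketched in Python. The names are illustrative; the unfold is written as plain recursion rather than a formal anamorphism:

```python
from dataclasses import dataclass
from typing import Callable, TypeVar, Union

A = TypeVar("A")

@dataclass(frozen=True)
class Zero:
    """The zero case of a Peano number."""

@dataclass(frozen=True)
class Succ:
    """The successor case; wraps its predecessor."""
    predecessor: "Nat"

Nat = Union[Zero, Succ]

def cata(n: Nat, zero: A, succ: Callable[[A], A]) -> A:
    # Fold the number: start at `zero`, apply `succ` once per successor.
    return zero if isinstance(n, Zero) else succ(cata(n.predecessor, zero, succ))

def to_num(n: Nat) -> int:
    # Like toNum: fold by starting at 0 and adding one per successor.
    return cata(n, 0, lambda p: p + 1)

def from_num(i: int) -> Nat:
    # Like fromNum's coalgebra: 0 becomes Zero, otherwise wrap Succ
    # around the representation of i - 1.
    return Zero() if i == 0 else Succ(from_num(i - 1))
```

The two functions are mutually inverse, which is what the isomorphism claim amounts to.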
Summary #
The catamorphism for Peano numbers is a pair consisting of a zero value and a successor function. The most common description of catamorphisms that I've found emphasises how a catamorphism is like a fold; an operation you can use to reduce a data structure like a list or a tree to a single value. This is what happens here, but even so, the Fix NatF type isn't a Foldable instance. The reason is that while NatF is a polymorphic type, its fixed point Fix NatF isn't. Haskell's Foldable type class requires foldable containers to be polymorphic (what C# programmers would call 'generic').
When I first ran into the concept of a catamorphism, it was invariably described as a 'generalisation of fold'. The examples shown were always how the catamorphism for a linked list is the same as its fold. I found such explanations unhelpful, because I couldn't understand how those two concepts differ.
The purpose of this article series is to show just how much more general the abstraction of a catamorphism is. In this article you saw how an infinitely recursive data structure like Peano numbers has a catamorphism, even though it isn't a parametrically polymorphic type. In the next article, though, you'll see the first example of a polymorphic type where the catamorphism coincides with the fold.
Next: Maybe catamorphism.
Comments
As always I enjoy reading your blog, even though I don't understand half of it most of the time. Or is that most of it half of the time? Allow me to put a few observations forward.
First I should confess that I have actually not read the whole of Brooks's essay. When I initially tried I got about half way through; it sounds like I should make another go at it. That of course will not stop me from commenting on the above.
Brooks talks about complexity. To me designing and implementing a software system is not complex. Quantum physics is complex. Flying an airplane is difficult. Software development may be difficult depending on the task at hand (and unfortunately the qualifications of the team), but I would argue that it at most falls into the same category as flying an airplane.
I would probably also state that there are no silver bullets. But like you I feel that people understand it incorrectly, and there is definitely no reason for making things harder than they are. I think the examples of technology that helps are excellent and exactly describe that things do move forward.
That being said, it does not take away the creativity of the right decomposition, the responsibility for getting the use cases right, and especially the liability for getting it wrong. Sadly, especially the last is overlooked. People should be reminded of where the phrase 'live under the bridge' comes from.
To end my ramblings, I would also look a little into the future. As you know I am somewhat sceptical about machine learning and AI. However, looking at the recent breakthroughs and use cases in these areas, I would not be surprised by a future where software development is done by 'an AI' assembling pre-defined 'entities' to create the software we need. Like an F16 cannot be flown without a computer, future software cannot be created by a human.
Karsten, thank you for writing. I'm not inclined to agree that software development falls into the same category of complexity as flying a plane. It seems to me to be orders of magnitudes more complex.
Just look at error rates.
Would you ever board an airplane if flying had error rates similar to those observed in software development? Would you fly even if only one percent of all flights ended with a plane crash?
In reality, flying is extremely safe. Would you claim that software development is as safe, predictable, and manageable as flying?
I see no evidence of that.
Are pilots significantly more capable human beings than software developers, or does something else explain the discrepancy in failure rates?
Hi Mark. The fact that error rates are higher in software development is more a statement about the bad state our industry is in and has been for a millennium or more.
Why do we accept that we produce crappy systems, or in your words software that is not safe, predictable, and manageable? The list of excuses is very long and the list of results is very short. We as an industry are simply doing it wrong, but most people prefer hand waving and marketing to simple and plausible heuristics.
To use your analogy about planes, I could ask if you would fly with a plane that had (only) been unit tested? Probably not, as it is never the unit that fails, but always the integration. Should we test all integrations then? Yes, why not?
The use of planes or pilots (or whatever) may have been a bad analogy. My point was that I do not see software development as complex.
Karsten, if we, as an industry, are doing it wrong, then why are we doing that?
And what should we be doing instead?