# Lazy monoids by Mark Seemann

*Lazy monoids are monoids. An article for object-oriented programmers.*

This article is part of a series about monoids. In short, a *monoid* is an associative binary operation with a neutral element (also known as *identity*). Previous articles have shown how more complex monoids arise from simpler monoids, such as tuple monoids, function monoids, and Maybe monoids. This article shows another such result: how lazy computations of monoids themselves form monoids.

You'll see how simple this is through a series of examples. Specifically, you'll revisit several of the examples you've already seen in this article series.

### Lazy addition #

Perhaps the most intuitive monoid is *addition*. Lazy addition forms a monoid as well. In C#, you can implement this with a simple extension method:

```csharp
public static Lazy<int> Add(this Lazy<int> x, Lazy<int> y)
{
    return new Lazy<int>(() => x.Value + y.Value);
}
```

This `Add` method simply adds two lazy integers together in a lazy computation. You use it like any other extension method:

```csharp
Lazy<int> x = // ...
Lazy<int> y = // ...
Lazy<int> sum = x.Add(y);
```
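Note that `Add` returns immediately; no addition takes place until you force the result. A small sketch (with made-up values) illustrates the deferred evaluation:

```csharp
var x = new Lazy<int>(() => { Console.WriteLine("Evaluating x"); return 40; });
var y = new Lazy<int>(() => { Console.WriteLine("Evaluating y"); return 2; });

Lazy<int> sum = x.Add(y); // Nothing is printed; no computation has run yet.

Console.WriteLine(sum.Value); // Prints "Evaluating x", "Evaluating y", then 42.
```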

I'll spare you the tedious listing of FsCheck-based properties that demonstrate that the monoid laws hold. We'll look at an example of such a set of properties later in this article, for one of the other monoids.

### Lazy multiplication #

Not surprisingly, I hope, you can implement multiplication over lazy numbers in the same way:

```csharp
public static Lazy<int> Multiply(this Lazy<int> x, Lazy<int> y)
{
    return new Lazy<int>(() => x.Value * y.Value);
}
```

Usage is similar to lazy addition:

```csharp
Lazy<int> x = // ...
Lazy<int> y = // ...
Lazy<int> product = x.Multiply(y);
```

As is the case with lazy addition, this `Multiply` method currently only works with lazy `int` values. If you also want it to work with `long`, `short`, or other types of numbers, you'll have to add method overloads.
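Such an overload would follow the exact same pattern; for example, for `long` (a sketch, not part of the original code base):

```csharp
public static Lazy<long> Multiply(this Lazy<long> x, Lazy<long> y)
{
    // Same shape as the int version; only the type changes.
    return new Lazy<long>(() => x.Value * y.Value);
}
```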

### Lazy Boolean monoids #

There are four monoids over Boolean values, although I've customarily only shown two of them: *and* and *or*. These also, trivially, work with lazy Boolean values:

```csharp
public static Lazy<bool> And(this Lazy<bool> x, Lazy<bool> y)
{
    return new Lazy<bool>(() => x.Value && y.Value);
}

public static Lazy<bool> Or(this Lazy<bool> x, Lazy<bool> y)
{
    return new Lazy<bool>(() => x.Value || y.Value);
}
```

Given the previous examples, you'll hardly be surprised to see how you can use one of these extension methods:

```csharp
Lazy<bool> x = // ...
Lazy<bool> y = // ...
Lazy<bool> b = x.And(y);
```

Have you noticed a pattern in how the lazy binary operations `Add`, `Multiply`, `And`, and `Or` are implemented? Could this be generalised?

### Lazy angular addition #

In a previous article you saw how angular addition forms a monoid. Lazy angular addition forms a monoid as well, which you can implement with another extension method:

```csharp
public static Lazy<Angle> Add(this Lazy<Angle> x, Lazy<Angle> y)
{
    return new Lazy<Angle>(() => x.Value.Add(y.Value));
}
```

By now, you may have noticed that all the extension methods seem to follow a common pattern that looks like this:

```
return new Lazy<Foo>(() => x.Value ⋄ y.Value);
```

I've here used the diamond operator `⋄` as a place-holder for any sort of binary operation. My choice of that particular character is strongly influenced by Haskell, where semigroups and monoids are polymorphically modelled with (among other options) the `<>` operator.

The lazy angular addition implementation looks a bit different, though. This is because the original example uses an instance method to model the binary operation, instead of an infix operator such as `+`, `&&`, and so on. Given that the implementation of a lazy binary operation can also look like this, can you still imagine a generalisation?

### Lazy string concatenation #

If we follow the rough ordering of examples introduced in this article series about monoids, we've now reached concatenation as a monoid. While various lists, arrays, and other sorts of collections also form a monoid over concatenation, in .NET, `IEnumerable<T>` already enables lazy evaluation, so I think it's more interesting to consider lazy string concatenation.

The implementation, however, holds few surprises:

```csharp
public static Lazy<string> Concat(this Lazy<string> x, Lazy<string> y)
{
    return new Lazy<string>(() => x.Value + y.Value);
}
```

The overall result, so far, seems encouraging. All the basic monoids we've covered are also monoids when lazily computed.

### Lazy money addition #

The portfolio example from Kent Beck's book *Test-Driven Development By Example* also forms a monoid. You can, again, implement an extension method that enables you to add lazy expressions together:

```csharp
public static Lazy<IExpression> Plus(
    this Lazy<IExpression> source,
    Lazy<IExpression> addend)
{
    return new Lazy<IExpression>(() => source.Value.Plus(addend.Value));
}
```

So far, you've seen several examples of implementations, but are they really monoids? All are clearly binary operations, but are they associative? Do identities exist? In other words, do the lazy binary operations obey the monoid laws?

As usual, I'm not going to prove that they do, but I do want to share a set of FsCheck properties that demonstrate that the monoid laws hold. As an example, I'll share the properties for this lazy `Plus` method, but you can write similar properties for all of the above methods as well.

You can verify the associativity law like this:

```csharp
public void LazyPlusIsAssociative(
    Lazy<IExpression> x,
    Lazy<IExpression> y,
    Lazy<IExpression> z)
{
    Assert.Equal(
        x.Plus(y).Plus(z),
        x.Plus(y.Plus(z)),
        Compare.UsingBank);
}
```

Here, `Compare.UsingBank` is just a test utility API to make the code more readable:

```csharp
public static class Compare
{
    public static ExpressionEqualityComparer UsingBank =
        new ExpressionEqualityComparer();
}
```

This takes advantage of the overloads for xUnit.net's `Assert` methods that take custom equality comparers as an extra, optional argument. `ExpressionEqualityComparer` is implemented in the test code base:

```csharp
public class ExpressionEqualityComparer :
    IEqualityComparer<IExpression>, IEqualityComparer<Lazy<IExpression>>
{
    private readonly Bank bank;

    public ExpressionEqualityComparer()
    {
        bank = new Bank();
        bank.AddRate("CHF", "USD", 2);
    }

    public bool Equals(IExpression x, IExpression y)
    {
        var xm = bank.Reduce(x, "USD");
        var ym = bank.Reduce(y, "USD");
        return object.Equals(xm, ym);
    }

    public int GetHashCode(IExpression obj)
    {
        return bank.Reduce(obj, "USD").GetHashCode();
    }

    public bool Equals(Lazy<IExpression> x, Lazy<IExpression> y)
    {
        return Equals(x.Value, y.Value);
    }

    public int GetHashCode(Lazy<IExpression> obj)
    {
        return GetHashCode(obj.Value);
    }
}
```

If you think that the exchange rate between Dollars and Swiss Francs looks ridiculous, it's because I'm using the rate that Kent Beck used in his book, which is from 2002 (but otherwise timeless).

The above property passes for hundreds of randomly generated input values, as is the case for this property, which verifies the left and right identity:

```csharp
public void LazyPlusHasIdentity(Lazy<IExpression> x)
{
    Assert.Equal(x, x.Plus(Plus.Identity.ToLazy()), Compare.UsingBank);
    Assert.Equal(x, Plus.Identity.ToLazy().Plus(x), Compare.UsingBank);
}
```

These properties are just examples, not proofs. Still, they give confidence that lazy computations of monoids are themselves monoids.
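The `ToLazy` helper used in the identity property isn't shown in this article; a minimal sketch could look like this:

```csharp
public static Lazy<T> ToLazy<T>(this T value)
{
    // Lift an already-computed value into a (trivially) lazy computation.
    return new Lazy<T>(() => value);
}
```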

### Lazy Roster combinations #

The last example you'll get in this article is the `Roster` example from the article on tuple monoids. Here's yet another extension method that enables you to combine two lazy rosters:

```csharp
public static Lazy<Roster> Combine(this Lazy<Roster> x, Lazy<Roster> y)
{
    return new Lazy<Roster>(() => x.Value.Combine(y.Value));
}
```

At this point it should be clear that there are essentially two variations in how the above extension methods are implemented. One variation is when the binary operation is implemented with an infix operator (like `+`, `||`, and so on), and another variation is when it's modelled as an instance method. How do these implementations generalise?

I'm sure you could come up with an ad-hoc higher-order function, abstract base class, or interface to model such a generalisation, but I'm not motivated by saving keystrokes. What I'm trying to uncover with this overall article series is how universal abstractions apply to programming.

Which universal abstraction is in play here?

### Lazy monoids as applicative operations #

`Lazy<T>` forms an applicative functor. Using appropriate `Apply` overloads, you can rewrite all the above implementations in applicative style. Here's lazy addition:

```csharp
public static Lazy<int> Add(this Lazy<int> x, Lazy<int> y)
{
    Func<int, int, int> f = (i, j) => i + j;
    return f.Apply(x).Apply(y);
}
```

This first declares a `Func` value `f` that invokes the non-lazy binary operation, in this case `+`. Next, you can leverage the applicative nature of `Lazy<T>` to apply `f` to the two lazy values `x` and `y`.
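The `Apply` overloads assumed here come from the article on the Lazy applicative functor; a sketch of what they could look like:

```csharp
// Partially apply a two-argument function to a lazy value,
// producing a lazy one-argument function.
public static Lazy<Func<T2, TResult>> Apply<T1, T2, TResult>(
    this Func<T1, T2, TResult> f,
    Lazy<T1> x)
{
    return new Lazy<Func<T2, TResult>>(() => y => f(x.Value, y));
}

// Apply a lazy function to a lazy value.
public static Lazy<TResult> Apply<T, TResult>(
    this Lazy<Func<T, TResult>> f,
    Lazy<T> x)
{
    return new Lazy<TResult>(() => f.Value(x.Value));
}
```

The first overload turns `f.Apply(x)` into a `Lazy<Func<int, int>>`, which the second overload then applies to `y`, so that `f.Apply(x).Apply(y)` type-checks as chained above.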

Multiplication, as well as the Boolean operations, follow the exact same template:

```csharp
public static Lazy<int> Multiply(this Lazy<int> x, Lazy<int> y)
{
    Func<int, int, int> f = (i, j) => i * j;
    return f.Apply(x).Apply(y);
}

public static Lazy<bool> And(this Lazy<bool> x, Lazy<bool> y)
{
    Func<bool, bool, bool> f = (b1, b2) => b1 && b2;
    return f.Apply(x).Apply(y);
}

public static Lazy<bool> Or(this Lazy<bool> x, Lazy<bool> y)
{
    Func<bool, bool, bool> f = (b1, b2) => b1 || b2;
    return f.Apply(x).Apply(y);
}
```

Notice that in all four implementations, the second line of code is verbatim the same: `return f.Apply(x).Apply(y);`

Does this generalisation also hold when the underlying, non-lazy binary operation is modelled as an instance method, as is the case of e.g. angular addition? Yes, indeed:

```csharp
public static Lazy<Angle> Add(this Lazy<Angle> x, Lazy<Angle> y)
{
    Func<Angle, Angle, Angle> f = (i, j) => i.Add(j);
    return f.Apply(x).Apply(y);
}
```

You can implement the `Combine` method for lazy `Roster` objects in the same way, as well as `Plus` for lazy monetary expressions. The latter is worth revisiting.

### Using the Lazy functor over portfolio expressions #

The lazy `Plus` implementation looks like all of the above `Apply`-based implementations:

```csharp
public static Lazy<IExpression> Plus(
    this Lazy<IExpression> source,
    Lazy<IExpression> addend)
{
    Func<IExpression, IExpression, IExpression> f = (x, y) => x.Plus(y);
    return f.Apply(source).Apply(addend);
}
```

In my article, however, you saw how, when `Plus` is a monoid, you can implement `Times` as an extension method. Can you implement a lazy version of `Times` as well? Must it be another extension method?

Yes, but instead of an ad-hoc implementation, you can take advantage of the functor nature of `Lazy<T>`:

```csharp
public static Lazy<IExpression> Times(this Lazy<IExpression> exp, int multiplier)
{
    return exp.Select(x => x.Times(multiplier));
}
```

Notice that instead of explicitly reaching into the lazy `Value`, you can simply call `Select` on `exp`. This lazily projects the `Times` operation, while preserving the invariants of `Lazy<T>` (i.e. that the computation is deferred until you ultimately access the `Value` property).
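The `Select` method for `Lazy<T>` comes from the article on the Lazy functor; a minimal sketch:

```csharp
public static Lazy<TResult> Select<T, TResult>(
    this Lazy<T> source,
    Func<T, TResult> selector)
{
    // Defer the projection until Value is accessed.
    return new Lazy<TResult>(() => selector(source.Value));
}
```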

You can implement a lazy version of `Reduce` in the same way:

```csharp
public static Lazy<Money> Reduce(this Lazy<IExpression> exp, Bank bank, string to)
{
    return exp.Select(x => x.Reduce(bank, to));
}
```

The question is, however, is it even worthwhile? Do you need to create all these overloads, or could you just leverage `Select` when you have a lazy value?

For example, if the above `Reduce` overload didn't exist, you'd still be able to work with the portfolio API like this:

```csharp
Lazy<IExpression> portfolio = // ...
Lazy<Money> result = portfolio.Select(x => x.Reduce(bank, "USD"));
```

If you only occasionally use `Reduce`, then perhaps this is good enough. If you frequently call `Reduce`, however, it might be worth adding the above overload, in which case you could then instead write:

```csharp
Lazy<IExpression> portfolio = // ...
Lazy<Money> result = portfolio.Reduce(bank, "USD");
```

In both cases, however, I think that you'd be putting the concept of an applicative functor to good use.

### Towards generalisation #

Is the applicative style better than the initial ad-hoc implementations? That depends on how you evaluate 'better'. If you count lines of code, then the applicative style is twice as verbose as the ad-hoc implementations. In other words, this:

```csharp
return new Lazy<int>(() => x.Value + y.Value);
```

seems simpler than this:

```csharp
Func<int, int, int> f = (i, j) => i + j;
return f.Apply(x).Apply(y);
```

This is, however, mostly because C# is too weak to express such abstractions in an elegant way. In F#, using the custom `<*>` operator from the article on the Lazy applicative functor, you could express the lazy addition as simply as:

```fsharp
lazy (+) <*> x <*> y
```

In Haskell (if we, once more, pretend that `Identity` is equivalent to `Lazy`), you can simplify even further to:

```haskell
(+) <$> x <*> y
```

Or rather, if you want it in point-free style, `liftA2 (+)`.

### Summary #

The point of this article series isn't to save keystrokes, but to identify universal laws of computing, even as they relate to object-oriented programming. The pattern seems clear enough that I dare propose the following:

*All monoids remain monoids under lazy computation.*

In a future article I'll offer further support for that proposition.

**Next:** Monoids accumulate.