Built-in alternatives to applicative assertions

Monday, 30 January 2023 08:08:00 UTC

Why make things so complicated?

Several readers reacted to my small article series on applicative assertions, pointing out that error-collecting assertions are already supported in more than one unit-testing framework.

"In the Java world this seems similar to the result gained by Soft Assertions in AssertJ. https://assertj.github.io/doc/#assertj-c... if you’re after a target for functionality (without the adventures through monad land)"

While I'm not familiar with the details of Java unit-testing frameworks, the situation is similar in .NET, it turns out.

"Did you know there is Assert.Multiple in NUnit and now also in xUnit .Net? It seems to have quite an overlap with what you're doing here.

"For a quick overview, I found this blogpost helpful: https://www.thomasbogholm.net/2021/11/25/xunit-2-4-2-pre-multiple-asserts-in-one-test/"

I'm not surprised to learn that something like this exists, but let's take a quick look.

NUnit Assert.Multiple #

Let's begin with NUnit, as this seems to be the first .NET unit-testing framework to support error-collecting assertions. For a start, the documentation example works as it's supposed to:

[Test]
public void ComplexNumberTest()
{
    ComplexNumber result = SomeCalculation();
 
    Assert.Multiple(() =>
    {
        Assert.AreEqual(5.2, result.RealPart, "Real part");
        Assert.AreEqual(3.9, result.ImaginaryPart, "Imaginary part");
    });
}

When you run the test, it fails (as expected) with this error message:

Message:
  Multiple failures or warnings in test:
    1)   Real part
    Expected: 5.2000000000000002d
    But was:  5.0999999999999996d

    2)   Imaginary part
    Expected: 3.8999999999999999d
    But was:  4.0d

That seems to work well enough, but how does it actually work? I'm not interested in reading the NUnit source code - after all, the concept of encapsulation is that one should be able to make use of the capabilities of an object without knowing all implementation details. Instead, I'll guess: Perhaps Assert.Multiple executes the code block in a try/catch block and collects the various exceptions thrown by the nested assertions.
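Sketched as code, my guess would look something like this. To be clear, this is conjecture on my part, not NUnit's actual source; note also that a single try/catch like this could only ever collect the first exception thrown:

// My conjecture only - not NUnit's actual implementation.
public static void Multiple(TestDelegate testDelegate)
{
    var failures = new List<Exception>();
    try
    {
        testDelegate();
    }
    catch (Exception e)
    {
        failures.Add(e);
    }
 
    if (failures.Count > 0)
        throw new Exception(
            "Multiple failures or warnings in test:" + Environment.NewLine +
            string.Join(Environment.NewLine, failures.Select(e => e.Message)));
}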

Does it catch all exception types, or only a subset?

Let's try with the kind of composed assertion that I previously investigated:

[Test]
public void HttpExample()
{
    var deleteResp = new HttpResponseMessage(HttpStatusCode.BadRequest);
    var getResp = new HttpResponseMessage(HttpStatusCode.OK);
 
    Assert.Multiple(() =>
    {
        deleteResp.EnsureSuccessStatusCode();
        Assert.That(getResp.StatusCode, Is.EqualTo(HttpStatusCode.NotFound));
    });
}

This test fails (again, as expected). What's the error message?

Message:
  System.Net.Http.HttpRequestException :↩
    Response status code does not indicate success: 400 (Bad Request).

(I've wrapped the result over multiple lines for readability. The ↩ symbol indicates where I've wrapped the text. I'll do that again later in this article.)

Notice that I'm using EnsureSuccessStatusCode as an assertion. This seems to spoil the behaviour of Assert.Multiple. It only reports the first status code error, but not the second one.

I admit that I don't fully understand what's going on here. In fact, I have taken a cursory glance at the relevant NUnit source code without being enlightened.

One hypothesis might be that NUnit assertions throw special Exception sub-types that Assert.Multiple catches. In order to test that, I wrote a few more tests in F# with Unquote, assuming that, since Unquote hardly throws NUnit exceptions, the behaviour might be similar to the above.

[<Test>]
let Test4 () =
    let x = 1
    let y = 2
    let z = 3
    Assert.Multiple (fun () ->
        x =! y
        y =! z)

The =! operator is an Unquote operator that I usually read as must equal. How does that error message look?

Message:
  Multiple failures or warnings in test:
    1) 

  1 = 2
  false

    2)

  2 = 3
  false

Somehow, Assert.Multiple understands Unquote error messages, but not HttpRequestException. As I wrote, I don't fully understand why it behaves this way. To a degree, I'm intellectually curious enough that I'd like to know. On the other hand, from a maintainability perspective, as a user of NUnit, I shouldn't have to understand such details.

xUnit.net Assert.Multiple #

How fares the xUnit.net port of Assert.Multiple?

[Fact]
public void HttpExample()
{
    var deleteResp = new HttpResponseMessage(HttpStatusCode.BadRequest);
    var getResp = new HttpResponseMessage(HttpStatusCode.OK);
 
    Assert.Multiple(
        () => deleteResp.EnsureSuccessStatusCode(),
        () => Assert.Equal(HttpStatusCode.NotFound, getResp.StatusCode));
}

The API is, you'll notice, not quite identical. Where the NUnit Assert.Multiple method takes a single delegate as input, the xUnit.net method takes an array of actions. The difference is not only at the level of API; the behaviour is different, too:

Message:
  Multiple failures were encountered:
  ---- System.Net.Http.HttpRequestException :↩
  Response status code does not indicate success: 400 (Bad Request).
  ---- Assert.Equal() Failure
  Expected: NotFound
  Actual:   OK

This error message reports both problems, as we'd like it to do.

I also tried writing equivalent tests in F#, with and without Unquote, and they behave consistently with this result.

If I had to use something like Assert.Multiple, I'd trust the xUnit.net variant more than NUnit's implementation.

Assertion scopes #

Apparently, Fluent Assertions offers yet another alternative.

"Hey @ploeh, been reading your applicative assertion series. I recently discovered Assertion Scopes, so I'm wondering what is your take on them since it seems to me they are solving this problem in C# already. https://fluentassertions.com/introduction#assertion-scopes"

The linked documentation contains this example:

[Fact]
public void DocExample()
{
    using (new AssertionScope())
    {
        5.Should().Be(10);
        "Actual".Should().Be("Expected");
    }
}

It fails in the expected manner:

Message:
  Expected value to be 10, but found 5 (difference of -5).
  Expected string to be "Expected" with a length of 8, but "Actual" has a length of 6,↩
    differs near "Act" (index 0).

How does it fare when subjected to the EnsureSuccessStatusCode test?

[Fact]
public void HttpExample()
{
    var deleteResp = new HttpResponseMessage(HttpStatusCode.BadRequest);
    var getResp = new HttpResponseMessage(HttpStatusCode.OK);
 
    using (new AssertionScope())
    {
        deleteResp.EnsureSuccessStatusCode();
        getResp.StatusCode.Should().Be(HttpStatusCode.NotFound);
    }
}

That test produces this error output:

Message:
  System.Net.Http.HttpRequestException :↩
    Response status code does not indicate success: 400 (Bad Request).

Again, EnsureSuccessStatusCode prevents further assertions from being evaluated. I can't say that I'm that surprised.
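For contrast, I'd expect the scope to collect both failures if the first check were expressed in Fluent Assertions' own vocabulary, as in this sketch (conjecture on my part; I haven't run this exact variation):

using (new AssertionScope())
{
    deleteResp.IsSuccessStatusCode.Should().BeTrue(
        $"actual status code was {deleteResp.StatusCode}");
    getResp.StatusCode.Should().Be(HttpStatusCode.NotFound);
}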

Implicit or explicit #

You might protest that using EnsureSuccessStatusCode and treating the resulting HttpRequestException as an assertion is unfair and unrealistic. Possibly. As usual, though, such choices are subject to a multitude of considerations, and there's no one-size-fits-all answer.

My intent with this article isn't to attack or belittle the APIs I've examined. Rather, I wanted to explore their boundaries by stress-testing them. That's one way to gain a better understanding. Being aware of an API's limitations and quirks can prevent subtle bugs.

Even if you'd never use EnsureSuccessStatusCode as an assertion, perhaps you or a colleague might inadvertently do something to the same effect.

I'm not surprised that both NUnit's Assert.Multiple and Fluent Assertions' AssertionScope behave in a less consistent manner than xUnit.net's Assert.Multiple. The clue is in the API.

The xUnit.net API looks like this:

public static void Multiple(params Action[] checks)

Notice that each assertion is explicitly a separate action. This enables the implementation to isolate it and treat it independently of other actions.
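To see why that matters, consider this sketch of such an API. It's my illustration, not xUnit.net's actual source, which throws its own dedicated exception type:

public static void Multiple(params Action[] checks)
{
    var failures = new List<Exception>();
    foreach (var check in checks)
    {
        // Each action runs in its own try/catch, so one throwing
        // assertion can't prevent the remaining checks from running.
        try
        {
            check();
        }
        catch (Exception e)
        {
            failures.Add(e);
        }
    }
 
    if (failures.Count > 0)
        throw new AggregateException(
            "Multiple failures were encountered:", failures);
}

With one action per assertion, it doesn't matter which exception type an assertion throws; each is isolated from the others.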

Neither the NUnit nor the Fluent Assertions API is that explicit. Instead, you can write arbitrary code inside the 'scope' of multiple assertions. For AssertionScope, the notion of a 'scope' is plain to see. For the NUnit API it's more implicit, but the scope is effectively the extent of the method:

public static void Multiple(TestDelegate testDelegate)

That testDelegate can have as many (nested, even) assertions as you'd like, so the Multiple implementation needs to somehow demarcate when it begins and when it ends.

The testDelegate can be implemented in a different file, or even in a different library, and it has no way to communicate or coordinate with its surrounding scope. This reminds me of an Ambient Context, an idiom that Steven van Deursen convinced me was an anti-pattern. The surrounding context changes the behaviour of the code block it surrounds, and it's quite implicit.

Explicit is better than implicit.

The xUnit.net API, at least, looks a bit saner. Still, this kind of API is quirky enough that it reminds me of Greenspun's tenth rule; that these APIs are ad-hoc, informally-specified, bug-ridden, slow implementations of half of applicative functors.

Conclusion #

Not surprisingly, popular unit-testing and assertion libraries come with facilities to compose assertions. Also, not surprisingly, these APIs are crude and require you to learn their implementation details.

Would I use them if I had to? I probably would. As Rich Hickey put it, they're already at hand. That makes them easy, but not necessarily simple. APIs that compel you to learn their internal implementation details aren't simple.

Universal abstractions, on the other hand, you only have to learn one time. Once you understand what an applicative functor is, you know what to expect from it, and which capabilities it has.

In languages with good support for applicative functors, I would favour an assertion API based on that abstraction, if given a choice. At the moment, though, that's not much of an option. Even HUnit assertions are based on side effects.


Comments

Joker_vD #

Just a reminder: in .NET, a method's execution cannot be resumed after an exception is thrown; there is simply no way to do this at all. Which means that NUnit's Assert.Multiple absolutely cannot work the way you guess it probably does, by running the delegate and resuming its execution after it throws an exception until the delegate returns.

How could it work then? Well, considering that the documentation for almost every Assert method has the line "Returns without throwing an exception when inside a multiple assert block" in it, I would assume that Assert.Multiple sets a global flag which makes actual assertions store the failures in some global hidden context instead of throwing them, then runs the delegate, and after it finishes or throws, collects and clears all those failures from the context and resets the global flag.

Cursory inspection of NUnit's source code supports this idea, except that apparently it's not just a Boolean flag but a "depth" counter; and assertions report the failures just the way I've speculated. I personally hate such side-channels, but you have to admit, they allow for some nifty, seemingly impossible magical tricks (a.k.a. "spooky action at a distance").

Also, why do you assume that Unquote would not throw NUnit's assertions? It literally has "Unquote integrates configuration-free with all exception-based unit testing frameworks including xUnit.net, NUnit, MbUnit, Fuchu, and MSTest" in its README, and indeed, if you look at its source code, you'll see that at runtime it tries to locate any testing framework it's aware of and use its assertions. More funny party tricks, this time with reflection!

I understand that after working in more pure/functional programming environments one does start to slowly forget about those terrible things, but: those horrorterrors still exist, and people keep making more of them. Now, if you can, have a good night :)

2023-01-31 03:00 UTC

Joker_vD, thank you for explaining those details. I admit that I hadn't thought too deeply about implementation details, for the reasons I briefly mentioned in the post.

"I understand that after working in more pure/functional programming environments one does start to slowly forget about those terrible things"

Yes, that summarises my current thinking well, I'm afraid.

2023-01-30 6:49 UTC

NUnit has Assert.DoesNotThrow and Fluent Assertions has .Should().NotThrow(). I did not check Fluent Assertions, but NUnit does gather failures of Assert.DoesNotThrow inside Assert.Multiple into a multi-error report. One might argue that asserting that a delegate should not throw is another application of the "explicit is better than implicit" philosophy. Here's what Fluent Assertions has to say on that matter:

"We know that a unit test will fail anyhow if an exception was thrown, but this syntax returns a clearer description of the exception that was thrown and fits better to the AAA syntax."

As a side note, you might also want to take a look at NUnit's Assert.That syntax. It allows you to construct complex conditions tested against a single actual value:

int actual = 3;
Assert.That (actual, Is.GreaterThan (0).And.LessThanOrEqualTo (2).And.Matches (Has.Property ("P").EqualTo ("a")));

A failure is then reported like this:

Expected: greater than 0 and less than or equal to 2 and property P equal to "a"
But was:  3

2023-01-31 18:35 UTC

Max, thank you for writing. I have to admit that I never understood the point of NUnit's constraint model, but your example clearly illustrates how it may be useful. It enables you to compose assertions.

It's interesting to try to understand the underlying reason for that. I took a cursory glance at that IResolveConstraint API, and as far as I can tell, it may form a monoid (I'm not entirely sure about the ConstraintStatus enum, but even so, it may be 'close enough' to be composable).

I can see how that may be useful when making assertions against complex objects (i.e. object composed from other objects).

In xUnit.net you'd typically address that problem with custom IEqualityComparers. This is more verbose, but also strikes me as more reusable. One disadvantage of that approach, however, is that when tests fail, the assertion message is typically useless.

This is the reason I favour Unquote: Instead of inventing a Boolean algebra(?) from scratch, it uses the existing language and still gives you good error messages. Alas, that only works in F#.

In general, though, I'm inclined to think that all of these APIs address symptoms rather than solve real problems. Granted, they're useful whenever you need to make assertions against values that you don't control, but for your own APIs, a simpler solution is to model values as immutable data with structural equality.

Another question is whether aiming for clear assertion messages is optimising for the right concern. At least with TDD, I don't think that it is.

2023-02-02 7:53 UTC

Agilean

Monday, 23 January 2023 07:55:00 UTC

There are other agile methodologies than scrum.

More than twenty years after the Agile Manifesto it looks as though there's only one kind of agile process left: Scrum.

I recently held a workshop and as a side remark I mentioned that I don't consider scrum the best development process. This surprised some attendees, who politely inquired about my reasoning.

My experience with scrum #

The first nine years I worked as a professional programmer, the companies I worked in used various waterfall processes. When I joined the Microsoft Dynamics Mobile team in 2008 they were already using scrum. That was my first exposure to it, and I liked it. Looking back on it today, we weren't particularly dogmatic about the process, being more interested in getting things done.

One telling fact is that we took turns being Scrum Master. Every sprint we'd rotate that role.

We did test-driven development, and had two-week sprints. This being a Microsoft development organisation, we had a dedicated build master, tech writers, specialised testers, and security reviews.

I liked it. It's easily one of the most professional software organisations I've worked in. I think it was a good place to work for many reasons. Scrum may have been a contributing factor, but hardly the only reason.

I have no issues with scrum as we practised it then. I recall later attending a presentation by Mike Cohn where he outlined four quadrants of team maturity. You'd start with scrum, but use retrospectives to evaluate what worked and what didn't. Then you'd adjust. A mature, self-organising team would arrive at its own process, perhaps initiated with scrum, but now having little resemblance with it.

I like scrum when viewed like that. When it becomes rigid and empty ceremony, I don't. If all you do is daily stand-ups, sprints, and backlogs, you may be doing scrum, but probably not agile.

Continuous deployment #

After Microsoft I joined a startup so small that formal process was unnecessary. Around that time I also became interested in lean software development. In the beginning, I learned a lot from Martin Jul, who seemed to use the now-defunct Ative blog as a public notepad while reading the works of Deming. I suppose, if you want a more canonical introduction to the topic, that you might start with one of the Poppendiecks' books, but since I've only read Implementing Lean Software Development, that's the only one I can recommend.

Around 2014 I returned to a regular customer. The team had, in my absence, been busy implementing continuous deployment. Instead of artificial periods like 'sprints' we had a kanban board to keep track of our work. We used a variation of feature flags and marked features as done when they were complete and in production.

Why wait until next Friday if the feature is done, done on a Wednesday? Why wait until the next Monday to identify what to work on next, if you're ready to take on new work on a Thursday? Why not move towards one-piece flow?

An effective self-organising team typically already knows what it's doing. Much process is introduced in order to give external stakeholders visibility into what a team is doing.

I found, in that organisation, that continuous deployment eliminated most of that need. At one time I asked a stakeholder what he thought of the feature I'd deployed a week before - a feature that he had requested. He replied that he hadn't had time to look at it yet.

The usual inquiries about status (Is it done yet? When is it done?) were gone. The team moved faster than the stakeholders could keep up. That also gave us enough slack to keep the code base in good order. We also used test-driven development (TDD) throughout.

TDD with continuous deployment and a kanban board strikes me as congenial with the ideas of lean software development, but that's not all.

Stop-the-line issues #

An andon cord is a central concept in lean manufacturing. If a worker (or anyone, really) discovers a problem during production, he or she pulls the andon cord and stops the production line. Then everyone investigates and determines what to do about the problem. Errors are not allowed to accumulate.

I think that I've internalised this notion to such a degree that I only recently connected it to lean software development.

In Code That Fits in Your Head, I recommend turning compiler warnings into errors at the beginning of a code base. Don't allow warnings to pile up. Do the same with static code analysis and linters.
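In a .NET code base, for example, a few standard MSBuild properties in the project file institutionalise that rule. A minimal sketch (AnalysisMode in this form requires a recent SDK):

<PropertyGroup>
  <!-- Stop the line: any compiler warning fails the build. -->
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  <!-- Do the same for static code analysis. -->
  <AnalysisMode>All</AnalysisMode>
  <EnforceCodeStyleInBuild>true</EnforceCodeStyleInBuild>
</PropertyGroup>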

When discussing software engineering with developers, I'm beginning to realise that this runs even deeper.

  • Turn warnings into errors. Don't allow warnings to accumulate.
  • The correct number of unhandled exceptions in production is zero. If you observe an unhandled exception in your production logs, fix it. Don't let them accumulate.
  • The correct number of known bugs is zero. Don't let bugs accumulate.

If you're used to working on a code base with hundreds of known bugs, and frequent exceptions in production, this may sound unrealistic. If you deal with issues as soon as they arise, however, this is not only possible - it's faster.

In lean software development, bugs are stop-the-line issues. When something unexpected happens, you stop what you're doing and make fixing the problem the top priority. You build quality in.

This has been my modus operandi for years, but I only recently connected the dots to realise that this is a typical lean practice. I may have picked it up from there. Or perhaps it's just common sense.

Conclusion #

When Agile was new and exciting, there were extreme programming and scrum, and possibly some lesser known techniques. Lean was around the corner, but didn't come to my attention, at least, until around 2010. Then it seems to have faded away again.

Today, agile looks synonymous with scrum, but I find lean software development more efficient. Why divide work into artificial time periods when you can release continuously? Why plan bug fixing when it's more efficient to stop the line and deal with the problem as it arises?

That may sound counter-intuitive, but it works because it prevents technical debt from accumulating.

Lean software development is, in my experience, a better agile methodology than scrum.


In the long run

Monday, 16 January 2023 08:28:00 UTC

Software design decisions should be time-aware.

A common criticism of modern capitalism is that maximising shareholder value leads to various detrimental outcomes, both societal and, possibly, for the maximising organisation itself. One major problem is when company leadership is incentivised to optimise stock market price for the next quarter, or other short terms. When considering only the short term, decision makers may (rationally) decide to sacrifice long-term benefits for short-term gains.

We often see similar behaviour in democracies. Politicians tend to optimise within a time frame that coincides with the election period. Getting re-elected is more important than good policy in the next period.

These observations are crude generalisations. Some democratic politicians and CEOs take longer views. Inherent in the context, however, is an incentive to short-term thinking.

This, it strikes me, is frequently the case in software development.

Particularly in the context of scrum there's a focus on delivering at the end of every sprint. I've observed developers and other stakeholders together engage in short-term thinking in order to meet those arbitrary and fictitious deadlines.

Even when deadlines are more remote than two weeks, project members rarely think beyond some perceived end date. As I describe in Code That Fits in Your Head, a project is rarely a good way to organise software development work. Projects end. Successful software doesn't.

Regardless of the specific circumstances, a too myopic focus on near-term goals gives you an incentive to cut corners. To not care about code quality.

...we're all dead #

As Keynes once quipped:

"In the long run we are all dead."

John Maynard Keynes

Clearly, while you can be too short-sighted, you can also take too long a view. Sometimes deadlines matter, and software not used makes no-one happy.

Working software remains the ultimate test of value, but as I've tried to express many times before, this does not imply that anything else is worthless.

You can't measure code quality. Code quality isn't software quality. Low code quality slows you down, and that, eventually, costs you money, blood, sweat, and tears.

This is, however, not difficult to predict. All it takes is a slightly wider time horizon. Consider the impact of your decisions past the next deadline.

Conclusion #

Don't be too short-sighted, but don't forget the immediate value of what you do. Your decisions matter. The impact is not always immediate. Consider what consequences short-term optimisations may have in a longer perspective.


The IO monad

Monday, 09 January 2023 07:39:00 UTC

The IO container forms a monad. An article for object-oriented programmers.

This article is an instalment in an article series about monads. A previous article described the IO functor. As is the case with many (but not all) functors, this one also forms a monad.

SelectMany #

A monad must define either a bind or join function. In C#, monadic bind is called SelectMany. In a recent article, I gave an example of what IO might look like in C#. Notice that it already comes with a SelectMany function:

public IO<TResult> SelectMany<TResult>(Func<T, IO<TResult>> selector)

Unlike other monads, the IO implementation is considered a black box, but if you're interested in a prototypical implementation, I already posted a sketch in 2020.
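To have something concrete in mind, here's a minimal sketch in the spirit of that 2020 post. Consider it a stand-in rather than a definitive implementation:

public sealed class IO<T>
{
    private readonly Func<T> item;
 
    public IO(Func<T> item)
    {
        this.item = item;
    }
 
    public IO<TResult> SelectMany<TResult>(Func<T, IO<TResult>> selector)
    {
        // Defer everything: neither this computation nor the one
        // produced by selector runs until the resulting IO value does.
        return new IO<TResult>(() => selector(item()).item());
    }
 
    public IO<TResult> Select<TResult>(Func<T, TResult> selector)
    {
        return SelectMany(x => new IO<TResult>(() => selector(x)));
    }
}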

Query syntax #

I have also, already, demonstrated syntactic sugar for IO. In that article, however, I used an implementation of the required SelectMany overload that is more explicit than it has to be. The monad introduction makes the prediction that you can always implement that overload in the same way, and yet here I didn't.

That's an oversight on my part. You can implement it like this instead:

public static IO<TResult> SelectMany<T, U, TResult>(
    this IO<T> source,
    Func<T, IO<U>> k,
    Func<T, U, TResult> s)
{
    return source.SelectMany(x => k(x).Select(y => s(x, y)));
}

Indeed, the conjecture from the introduction still holds.
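With that overload in place, query syntax works as expected. A contrived example, using the Return function defined below (and assuming that it lives on a static IO class):

IO<string> greeting =
    from name in IO.Return("world")
    from punctuation in IO.Return("!")
    select $"Hello, {name}{punctuation}";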

Join #

In the introduction you learned that if you have a Flatten or Join function, you can implement SelectMany, and the other way around. Since we've already defined SelectMany for IO<T>, we can use that to implement Join. In this article I use the name Join rather than Flatten. This is an arbitrary choice that doesn't impact behaviour. Perhaps you find it confusing that I'm inconsistent, but I do it in order to demonstrate that the behaviour is the same even if the name is different.

The concept of a monad is universal, but the names used to describe its components differ from language to language. What C# calls SelectMany, Scala calls flatMap, and what Haskell calls join, other languages may call Flatten.

You can always implement Join by using SelectMany with the identity function:

public static IO<T> Join<T>(this IO<IO<T>> source)
{
    return source.SelectMany(x => x);
}

In C# the identity function is idiomatically given as the lambda expression x => x since C# doesn't come with a built-in identity function.

Return #

Apart from monadic bind, a monad must also define a way to put a normal value into the monad. Conceptually, I call this function return (because that's the name that Haskell uses). In the IO functor article, I wrote that the IO<T> constructor corresponds to return. That's not strictly true, though, since the constructor takes a Func<T> and not a T.

This issue is, however, trivially addressed:

public static IO<T> Return<T>(T x)
{
    return new IO<T>(() => x);
}

Take the value x and wrap it in a lazily-evaluated function.

Laws #

While IO values are referentially transparent, you can't compare them. You also can't 'run' them by any other means than running a program. This makes it hard to talk meaningfully about the monad laws.

For example, the left identity law is:

return >=> h ≡ h

Note the implied equality. The composition of return and h should be equal to h, for some reasonable definition of equality. How do we define that?

Somehow we must imagine that two alternative compositions would produce the same observable effects ceteris paribus. If you somehow imagine that you have two parallel universes, one with one composition (say return >=> h) and one with another (h), if all else in those two universes were equal, then you would observe no difference in behaviour.

That may be useful as a thought experiment, but isn't particularly practical. Unfortunately, things do change when non-deterministic behaviour and side effects are involved. As a simple example, consider an IO action that gets the current time and prints it to the console. That involves both non-determinism and a side effect.

In Haskell, that's a straightforward composition of two IO actions:

> h () = getCurrentTime >>= print

How do we compare two compositions? By running them?

> return () >>= h
2022-06-25 16:47:30.6540847 UTC
> h ()
2022-06-25 16:47:37.5281265 UTC

The outputs are not the same, because time goes by. Can we thereby conclude that the monad laws don't hold for IO? Not quite.

The IO container is referentially transparent, but evaluation isn't. Thus, we have to pretend that two alternatives will lead to the same evaluation behaviour, all things being equal.

This property seems to hold for both the identity and associativity laws. Whether or not you compose with return, or in which evaluation order you compose actions, it doesn't affect the outcome.

For completeness' sake, the C# implementation sketch is just a wrapper over a Func<T>. We can think of such a function as a function from unit to T - in pseudo-C#, () => T. In other words: the Reader monad, with unit as the 'environment'. We already know that the Reader monad obeys the monad laws, so the C# implementation, at least, should be okay.

Conclusion #

IO forms a monad, among other abstractions. This is what enables Haskell programmers to compose an arbitrary number of impure actions with monadic bind without ever having to force evaluation. In C# it might have looked the same, except that it doesn't.

Next: Test Data Generator monad.


Adding NuGet packages when offline

Monday, 02 January 2023 05:41:00 UTC

A fairly trivial technical detective story.

I was recently in an aeroplane, writing code, when I realised that I needed to add a couple of NuGet packages to my code base. I was on one of those less-travelled flights in Europe, on board an Embraer E190, and as is usually the case on those 1½-hour flights, there was no WiFi.

Adding a NuGet package typically requires that you're online so that the tools can query the relevant NuGet repository. You'll need to download the package, so if you're offline, you're just out of luck, right?

Fortunately, I'd previously used the packages I needed in other projects, on the same laptop. While I'm no fan of package restore, I know that the local NuGet tools cache packages somewhere on the local machine.

So, perhaps I could entice the tools to reuse a cached package...

First, I simply tried adding a package that I needed:

$ dotnet add package unquote
  Determining projects to restore...
  Writing C:\Users\mark\AppData\Local\Temp\tmpF3C.tmp
info : X.509 certificate chain validation will use the default trust store selected by .NET.
info : Adding PackageReference for package 'unquote' into project '[redacted]'.
error: Unable to load the service index for source https://api.nuget.org/v3/index.json.
error:   No such host is known. (api.nuget.org:443)
error:   No such host is known.

Fine plan, but no success.

Clearly the dotnet tool was trying to access api.nuget.org, which, obviously, couldn't be reached because my laptop was in flight mode. It occurred to me, though, that the reason that the tool was querying api.nuget.org was that it wanted to see which version of the package was the most recent. After all, I hadn't specified a version.

What if I were to specify a version? Would the tool use the cached version of the package?

That seemed worth a try, but which versions did I already have on my laptop?

I don't go around remembering which version numbers I've used of various NuGet packages, but I expected the NuGet tooling to have that information available, somewhere.

But where? Keep in mind that I was offline, so couldn't easily look this up.

On the other hand, I knew that these days, most Windows applications keep data of that kind somewhere in AppData, so I started spelunking around there, looking for something that might be promising.

After looking around a bit, I found a subdirectory named AppData\Local\NuGet\v3-cache. This directory contained a handful of subdirectories obviously named with GUIDs. Each of these contained a multitude of .dat files. The names of those files, however, looked promising:

list_antlr_index.dat
list_autofac.dat
list_autofac.extensions.dependencyinjection.dat
list_autofixture.automoq.dat
list_autofixture.automoq_index.dat
list_autofixture.automoq_range_2.0.0-3.6.7.dat
list_autofixture.automoq_range_3.30.3-3.50.5.dat
list_autofixture.automoq_range_3.50.6-4.17.0.dat
list_autofixture.automoq_range_3.6.8-3.30.2.dat
list_autofixture.dat
...

and so on.

These files were clearly(?) named list_[package-name].dat or list_[package-name]_index.dat, so I started looking around for one named after the package I was looking for (Unquote).

Often, both files are present, which was also the case for Unquote.

$ ls list_unquote* -l
-rw-r--r-- 1 mark 197609   348 Oct  1 18:38 list_unquote.dat
-rw-r--r-- 1 mark 197609 42167 Sep 23 21:29 list_unquote_index.dat

As you can tell, list_unquote_index.dat is much larger than list_unquote.dat. Since I didn't know what the format of these files was, I decided to look at the smaller one first. It had this content:

{
  "versions": [
    "1.3.0",
    "2.0.0",
    "2.0.1",
    "2.0.2",
    "2.0.3",
    "2.1.0",
    "2.1.1",
    "2.2.0",
    "2.2.1",
    "2.2.2",
    "3.0.0",
    "3.1.0",
    "3.1.1",
    "3.1.2",
    "3.2.0",
    "4.0.0",
    "5.0.0",
    "6.0.0-rc.1",
    "6.0.0-rc.2",
    "6.0.0-rc.3",
    "6.0.0",
    "6.1.0"
  ]
}

A list of versions. Sterling. It looked as though version 6.1.0 was the most recent one on my machine, so I tried to add that one to my code base:

$ dotnet add package unquote --version 6.1.0
  Determining projects to restore...
  Writing C:\Users\mark\AppData\Local\Temp\tmp815D.tmp
info : X.509 certificate chain validation will use the default trust store selected by .NET.
info : Adding PackageReference for package 'unquote' into project '[redacted]'.
info : Restoring packages for [redacted]...
info : Package 'unquote' is compatible with all the specified frameworks in project '[redacted]'.
info : PackageReference for package 'unquote' version '6.1.0' added to file '[redacted]'.
info : Generating MSBuild file [redacted].
info : Writing assets file to disk. Path: [redacted]
log  : Restored [redacted] (in 397 ms).

Jolly good! That worked.

This way I managed to install all the NuGet packages I needed. This was fortunate, because I had so little time to transfer to my connecting flight that I never got to open the laptop before I was airborne again - in another E190 without WiFi, and another session of offline programming.


Comments

A postscript to your detective story might note that the primary NuGet cache lives at %userprofile%\.nuget\packages on Windows and ~/.nuget/packages on Mac and Linux. The folder names there are much easier to decipher than the folders and files in the http cache.

2023-01-02 13:46 UTC

Functors as invariant functors

Monday, 26 December 2022 13:05:00 UTC

A most likely useless set of invariant functors that nonetheless exist.

This article is part of a series of articles about invariant functors. An invariant functor is a functor that is neither covariant nor contravariant. See the series introduction for more details.

It turns out that all functors are also invariant functors.

Is this useful? Let me be honest and say that if it is, I'm not aware of it. Thus, if you're interested in practical applications, you can stop reading here. This article contains nothing of practical use - as far as I can tell.

Because it's there #

Why describe something of no practical use?

Why do some people climb Mount Everest? Because it's there, or for other irrational reasons. Which is fine. I've no personal goals that involve climbing mountains, but I happily engage in other irrational and subjective activities.

One of them, apparently, is to write articles of software constructs of no practical use, because it's there.

All functors are also invariant functors, even if that's of no practical use. That's just the way it is. This article explains how, and shows a few (useless) examples.

I'll start with a few Haskell examples and then move on to showing the equivalent examples in C#. If you're unfamiliar with Haskell, you can skip that section.

Haskell package #

For Haskell you can find an existing definition and implementations in the invariant package. It already makes most common functors Invariant instances, including [] (list), Maybe, and Either. Here's an example of using invmap with a small list:

ghci> invmap secondsToNominalDiffTime nominalDiffTimeToSeconds [0.1, 60]
[0.1s,60s]

Here I'm using the time package to convert fixed-point decimals into NominalDiffTime values.

How is this different from normal functor mapping with fmap? In observable behaviour, it's not:

ghci> fmap secondsToNominalDiffTime [0.1, 60]
[0.1s,60s]

When invariantly mapping a functor, only the covariant mapping function a -> b is used. Here, that's secondsToNominalDiffTime. The contravariant mapping function b -> a (nominalDiffTimeToSeconds) is simply ignored.

While the invariant package already defines certain common functors as Invariant instances, every Functor instance can be converted to an Invariant instance. There are two ways to do that: invmapFunctor and WrappedFunctor.

In order to demonstrate, we need a custom Functor instance. This one should do:

data Pair a = Pair (a, a) deriving (Eq, Show, Functor)

If you just want to perform an ad-hoc invariant mapping, you can use invmapFunctor:

ghci> invmapFunctor secondsToNominalDiffTime nominalDiffTimeToSeconds $ Pair (0.1, 60)
Pair (0.1s,60s)

I can't think of any reason to do this, but it's possible.

WrappedFunctor is perhaps marginally more relevant. If you run into a function that takes an Invariant argument, you can convert any Functor to an Invariant instance by wrapping it in WrappedFunctor:

ghci> invmap secondsToNominalDiffTime nominalDiffTimeToSeconds $ WrapFunctor $ Pair (0.1, 60)
WrapFunctor {unwrapFunctor = Pair (0.1s,60s)}

A realistic, useful example still escapes me, but there it is.

Pair as an invariant functor in C# #

What would the above Haskell example look like in C#? First, we're going to need a Pair data structure:

public sealed class Pair<T>
{
    public Pair(T x, T y)
    {
        X = x;
        Y = y;
    }
 
    public T X { get; }
    public T Y { get; }
 
    // More members follow...

Making Pair<T> a functor is so easy that Haskell can do it automatically with the DeriveFunctor extension. In C# you must explicitly write the function:

public Pair<T1> Select<T1>(Func<T, T1> selector)
{
    return new Pair<T1>(selector(X), selector(Y));
}

An example equivalent to the above fmap example might be this, here expressed as a unit test:

[Fact]
public void FunctorExample()
{
    Pair<long> sut = new Pair<long>(
        TimeSpan.TicksPerSecond / 10,
        TimeSpan.TicksPerSecond * 60);
    Pair<TimeSpan> actual = sut.Select(ticks => new TimeSpan(ticks));
    Assert.Equal(
        new Pair<TimeSpan>(
            TimeSpan.FromSeconds(.1),
            TimeSpan.FromSeconds(60)),
        actual);
}

You can trivially make Pair<T> an invariant functor by giving it a function equivalent to invmap. As I outlined in the introduction it's possible to add an InvMap method to the class, but it might be more idiomatic to instead add a Select overload:

public Pair<T1> Select<T1>(Func<T, T1> tToT1, Func<T1, T> t1ToT)
{
    return Select(tToT1);
}

Notice that this overload simply ignores the t1ToT argument and delegates to the normal Select overload. That's consistent with the Haskell package. This unit test shows an example:

[Fact]
public void InvariantFunctorExample()
{
    Pair<long> sut = new Pair<long>(
        TimeSpan.TicksPerSecond / 10,
        TimeSpan.TicksPerSecond * 60);
    Pair<TimeSpan> actual =
        sut.Select(ticks => new TimeSpan(ticks), ts => ts.Ticks);
    Assert.Equal(
        new Pair<TimeSpan>(
            TimeSpan.FromSeconds(.1),
            TimeSpan.FromSeconds(60)),
        actual);
}

I can't think of a reason to do this in C#. In Haskell, at least, you have enough power of abstraction to describe something as simply an Invariant functor, and then let client code decide whether to use Maybe, [], Endo, or a custom type like Pair. You can't do that in C#, so the abstraction is even less useful here.

Conclusion #

All functors are invariant functors. You simply use the normal functor mapping function (fmap in Haskell, map in many other languages, Select in C#). This enables you to add an invariant mapping (invmap) that only uses the covariant argument (a -> b) and ignores the contravariant argument (b -> a).

Invariant functors are, however, not particularly useful, so neither is this result. Still, it's there, so deserves a mention. The situation is similar for the next article.

Next: Contravariant functors as invariant functors.


Error-accumulating composable assertions in C#

Monday, 19 December 2022 08:39:00 UTC

Perhaps the list monoid is all you need for non-short-circuiting assertions.

This article is the second instalment in a small articles series about applicative assertions. It explores a way to compose assertions in such a way that failure messages accumulate rather than short-circuit. It assumes that you've read the article series introduction and the previous article.

Unsurprisingly, the previous article showed that you can use an applicative functor to create composable assertions that don't short-circuit. It also concluded that, in C# at least, the API is awkward.

This article explores a simpler API.

A clue left by the proof of concept #

The previous article's proof of concept left a clue suggesting a simpler API. Consider, again, how the rather horrible RunAssertions method decides whether or not to throw an exception:

string errors = composition.Match(
    onFailure: f => string.Join(Environment.NewLine, f),
    onSuccess: _ => string.Empty);
 
if (!string.IsNullOrEmpty(errors))
    throw new Exception(errors);

Even though Validated<F, S> is a sum type, the RunAssertions method declines to take advantage of that. Instead, it reduces composition to a simple type: A string. It then decides to throw an exception if the errors value is not null or empty.

This suggests that using a sum type may not be necessary to distinguish between the success and the failure case. Rather, an empty error string is all it takes to indicate success.

Non-empty errors #

The proof-of-concept assertion type is currently defined as Validated with a particular combination of type arguments: Validated<IReadOnlyCollection<string>, Unit>. Consider, again, this Match expression:

string errors = composition.Match(
    onFailure: f => string.Join(Environment.NewLine, f),
    onSuccess: _ => string.Empty);

Does an empty string unambiguously indicate success? Or is it possible to arrive at an empty string even if composition actually represents a failure case?

You can arrive at an empty string from a failure case if the collection of error messages is empty. Consider the type argument that takes the place of the F generic type: IReadOnlyCollection<string>. A collection of this type can be empty, which would also cause the above Match to produce an empty string.

Even so, the proof-of-concept works in practice. The reason it works is that failure cases will never have empty assertion messages. We know this because (in the proof-of-concept code) only two functions produce assertions, and they each populate the error message collection with a string. You may want to revisit the AssertTrue and AssertEqual functions in the previous article to convince yourself that this is true.

This is a good example of knowledge that 'we' as developers know, but the code currently doesn't capture. Having to deal with such knowledge taxes your working memory, so why not encapsulate such information in the type itself?

How do you encapsulate the knowledge that a collection is never empty? Introduce a NotEmptyCollection<T> class. I'll reuse the class from the article Semigroups accumulate and add a Concat instance method:

public NotEmptyCollection<T> Concat(NotEmptyCollection<T> other)
{
    return new NotEmptyCollection<T>(Head, Tail.Concat(other).ToArray());
}
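For reference, here's a sketch of the rest of the class; it's close to, though possibly not identical with, the version in the linked article:

public class NotEmptyCollection<T> : IEnumerable<T>
{
    public NotEmptyCollection(T head, params T[] tail)
    {
        if (head == null)
            throw new ArgumentNullException(nameof(head));
 
        Head = head;
        Tail = tail;
    }
 
    public T Head { get; }
    public IReadOnlyCollection<T> Tail { get; }
 
    // Enumerating yields the head first, then the tail, so the
    // collection can never be empty.
    public IEnumerator<T> GetEnumerator()
    {
        yield return Head;
        foreach (var item in Tail)
            yield return item;
    }
 
    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}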

Since the two assertion-producing functions both supply an error message in the failure case, it's trivial to change them to return Validated<NotEmptyCollection<string>, Unit> - just change the types used:

public static Validated<NotEmptyCollection<string>, Unit> AssertTrue(
    this bool condition,
    string message)
{
    return condition
        ? Succeed<NotEmptyCollection<string>, Unit>(Unit.Value)
        : Fail<NotEmptyCollection<string>, Unit>(new NotEmptyCollection<string>(message));
}
 
public static Validated<NotEmptyCollection<string>, Unit> AssertEqual<T>(
    T expected,
    T actual)
{
    return Equals(expected, actual)
        ? Succeed<NotEmptyCollection<string>, Unit>(Unit.Value)
        : Fail<NotEmptyCollection<string>, Unit>(
            new NotEmptyCollection<string>($"Expected {expected}, but got {actual}."));
}

This change guarantees that the RunAssertions method only produces an empty errors string in success cases.

Error collection isomorphism #

Assertions are still defined by the Validated sum type, but the success case carries no information: Validated<NotEmptyCollection<T>, Unit>, and the failure case is always guaranteed to contain at least one error message.

This suggests that a simpler representation is possible: One that uses a normal collection of errors, and where an empty collection indicates an absence of errors:

public class Asserted<T>
{
    public Asserted() : this(Array.Empty<T>())
    {
    }
 
    public Asserted(T error) : this(new[] { error })
    {
    }
 
    public Asserted(IReadOnlyCollection<T> errors)
    {
        Errors = errors;
    }
 
    public Asserted<T> And(Asserted<T> other)
    {
        if (other is null)
            throw new ArgumentNullException(nameof(other));
 
        return new Asserted<T>(Errors.Concat(other.Errors).ToList());
    }
 
    public IReadOnlyCollection<T> Errors { get; }
}

The Asserted<T> class is scarcely more than a glorified wrapper around a normal collection, but it's isomorphic to Validated<NotEmptyCollection<T>, Unit>, which the following two functions prove:

public static Asserted<T> FromValidated<T>(this Validated<NotEmptyCollection<T>, Unit> v)
{
    return v.Match(
        failures => new Asserted<T>(failures),
        _ => new Asserted<T>());
}
 
public static Validated<NotEmptyCollection<T>, Unit> ToValidated<T>(this Asserted<T> a)
{
    if (a.Errors.Any())
    {
        var errors = new NotEmptyCollection<T>(
            a.Errors.First(),
            a.Errors.Skip(1).ToArray());
        return Validated.Fail<NotEmptyCollection<T>, Unit>(errors);
    }
    else
        return Validated.Succeed<NotEmptyCollection<T>, Unit>(Unit.Value);
}

You can translate back and forth between Validated<NotEmptyCollection<T>, Unit> and Asserted<T> without loss of information.

A collection, however, gives rise to a monoid, which suggests a much simpler way to compose assertions than using an applicative functor.

Asserted truth #

You can now rewrite the assertion-producing functions to return Asserted<string> instead of Validated<NotEmptyCollection<string>, Unit>.

public static Asserted<string> True(bool condition, string message)
{
    return condition ? new Asserted<string>() : new Asserted<string>(message);
}

This Asserted.True function returns no error messages when condition is true, but a collection with the single element message when it's false.

You can use it in a unit test like this:

var assertResponse = Asserted.True(
    deleteResp.IsSuccessStatusCode,
    $"Actual status code: {deleteResp.StatusCode}.");

You'll see how assertResponse composes with another assertion later in this article. The example continues from the previous article. It's the same test from the same code base.

Asserted equality #

You can also rewrite the other assertion-producing function in the same way:

public static Asserted<string> Equal(object expected, object actual)
{
    if (Equals(expected, actual))
        return new Asserted<string>();
 
    return new Asserted<string>($"Expected {expected}, but got {actual}.");
}

Again, when the assertion passes, it returns no errors; otherwise, it returns a collection with a single error message.

Using it may look like this:

var getResp = await api.CreateClient().GetAsync(address);
var assertState = Asserted.Equal(HttpStatusCode.NotFound, getResp.StatusCode);

At this point, each of the assertions is an object that represents a verification step. By themselves, they neither pass nor fail the test. You have to execute them to reach a verdict.

Evaluating assertions #

The above code listing of the Asserted<T> class already shows how to combine two Asserted<T> objects into one. The And instance method is a binary operation that, together with the parameterless constructor, makes Asserted<T> a monoid.
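A unit test can illustrate the identity laws; a quick sketch, assuming xUnit.net:

[Fact]
public void EmptyAssertedIsIdentityForAnd()
{
    var x = new Asserted<string>("failure message");
    var empty = new Asserted<string>();
 
    // Combining with the empty Asserted<string> on either side
    // leaves the errors unchanged.
    Assert.Equal(x.Errors, empty.And(x).Errors);
    Assert.Equal(x.Errors, x.And(empty).Errors);
}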

Once you've combined all assertions into a single Asserted<T> object, you need to Run it to produce a test outcome:

public static void Run(this Asserted<string> assertions)
{
    if (assertions?.Errors.Any() ?? false)
    {
        var messages = string.Join(Environment.NewLine, assertions.Errors);
        throw new Exception(messages);
    }
}

If there are no errors, Run does nothing; otherwise it combines all the error messages together and throws an exception. As was also the case in the previous article, I've allowed myself a few proof-of-concept shortcuts. The framework design guidelines admonish against throwing System.Exception. It might be more appropriate to introduce a new Exception type that also allows enumerating the error messages.

The entire assertion phase of the test looks like this:

var assertResponse = Asserted.True(
    deleteResp.IsSuccessStatusCode,
    $"Actual status code: {deleteResp.StatusCode}.");
var getResp = await api.CreateClient().GetAsync(address);
var assertState = Asserted.Equal(HttpStatusCode.NotFound, getResp.StatusCode);
assertResponse.And(assertState).Run();

You can see the entire test in the previous article. Notice how the two assertion objects are first combined into one with the And binary operation. The result is a single Asserted<string> object on which you can call Run.

Like the previous proof of concept, this assertion passes and fails in the same way. It's possible to compose assertions and collect error messages, instead of short-circuiting on the first failure, even without an applicative functor.

Method chaining #

If you don't like to come up with variable names just to make assertions, it's also possible to use the Asserted API's fluent interface:

var getResp = await api.CreateClient().GetAsync(address);
Asserted
    .True(
        deleteResp.IsSuccessStatusCode,
        $"Actual status code: {deleteResp.StatusCode}.")
    .And(Asserted.Equal(HttpStatusCode.NotFound, getResp.StatusCode))
    .Run();

This isn't necessarily better, but it's an option.

Conclusion #

While it's possible to design non-short-circuiting composable assertions using an applicative functor, it looks as though a simpler solution might solve the same problem. Collect error messages. If none were collected, interpret that as a success.

As I wrote in the introduction article, however, this may not be the last word. Some assertions return values that can be used for other assertions. That's a scenario that I have not yet investigated in this light, and it may change the conclusion. If so, I'll add more articles to this small article series. As I'm writing this, though, I have no such plans.

Did I just, in a roundabout way, write that more research is needed?

Next: Built-in alternatives to applicative assertions.


Comments

I think NUnit's Assert.Multiple is worth mentioning in this series. It does not require any complicated APIs, just wrap your existing test with multiple asserts into a delegate.

2022-12-20 08:02 UTC

Pavel, thank you for writing. I'm aware of both that API and similar ones for other testing frameworks. As is usually the case, there are trade-offs to consider. I'm currently working on some material that may turn into another article about that.

2022-12-21 20:17 UTC

A new article is now available: Built-in alternatives to applicative assertions.

2023-01-30 12:31 UTC

When do tests fail?

Monday, 12 December 2022 08:33:00 UTC

Optimise for the common scenario.

Unit tests occasionally fail. When does that happen? How often? What triggers it? What information is important when tests fail?

Regularly I encounter the viewpoint that it should be easy to understand the purpose of a test when it fails. Some people consider test names important, a topic that I've previously discussed. Recently I discussed the Assertion Roulette test smell on Twitter, and again I learned some surprising things about what people value in unit tests.

The importance of clear assertion messages #

The Assertion Roulette test smell is often simplified to degeneracy, but it really describes situations where it may be a problem if you can't tell which of several assertions actually caused a test to fail.

Josh McKinney gave a more detailed example than Gerard Meszaros does in the book:

"Background. In a legacy product, we saw some tests start failing intermittently. They weren’t just flakey, but also failed without providing enough info to fix. One of things which caused time to fix to increase was multiple ways of a single test to fail."

He goes on:

"I.e. if you fix the first assertion and you know there still could be flakiness, or long cycle times to see the failure. Multiple assertions makes any test problem worse. In an ideal state, they are fine, but every assertion doubles the amount of failures a test catches."

and concludes:

"the other main way (unrelated) was things like:

assertTrue(someListResult.isRmpty())

Which tells you what failed, but nothing about how.

But the following is worse. You must run the test twice to fix:

assertFalse(someList.isEmpty());
assertEqual(expected, list.get(0));"

The final point is due to the short-circuiting nature of most assertion libraries. That, however, is a solvable problem.

I find the above a compelling example of why Assertion Roulette may be problematic.

It did give me pause, though. How common is this scenario?

Out of the blue #

The situation described by Josh McKinney comes with more than a single warning flag. I hope that it's okay to point some of them out. I didn't get the impression from my interaction with Josh McKinney that he considered the situation ideal in any way.

First, of course, there's the lack of information about the problem. Here, that's a real problem. As I understand it, it makes it harder to reproduce the problem in a development environment.

Next, there are long cycle times, which I interpret to mean that significant time may pass from when you attempt a fix until you can actually observe whether or not it worked. Josh McKinney doesn't say how long, but I wouldn't be surprised if it was measured in days. At least, if the cycle time is measured in days, I can see how this is a problem.

Finally, there's the observation that "some tests start failing intermittently". This was the remark that caught my attention. How often does that happen?

Tests shouldn't do that. Tests should be deterministic. If they're not, you should work to eradicate non-determinism in tests.

I'll be the first to admit that I also write non-deterministic tests. Not by design, but because I make mistakes. I've written many Erratic Tests in my career, and I've documented a few of them on this blog.

While it can happen, it shouldn't be the norm. When it nonetheless happens, eradicating that source of non-determinism should be top priority. Pull the andon cord.

When tests fail #

Ideally, tests should rarely fail. As examined above, you may have Erratic Tests in your test suite, and if you do, these tests will occasionally (or often) fail. As Martin Fowler writes, this is a problem and you should do something about it. He also outlines strategies for it.

Once you've eradicated non-determinism in unit tests, then when do tests fail?

I can think of a couple of situations.

Tests routinely fail as part of the red-green-refactor cycle. This is by design. If no test is failing in the red phase, you probably made a mistake (which also regularly happens to me), or you may not really be doing test-driven development (TDD).

Another situation that may cause a test to fail is if you changed some code and triggered a regression test.

In both cases, tests don't just fail out of the blue. They fail as an immediate consequence of something you did.

Optimise for the common scenario #

In both cases you're (hopefully) in a tight feedback loop. If you're in a tight feedback loop, then how important is the assertion message really? How important is the test name?

You work on the code base, make some changes, run the tests. If one or more tests fail, it's correlated to the change you just made. You should have a good idea of what went wrong. Are code forensics and elaborate documentation really necessary to understand a test that failed because you just did something a few minutes before?

The reason I don't care much about test names or whether there's one or more assertion in a unit test is exactly that: When tests fail, it's usually because of something I just did. I don't need diagnostics tools to find the root cause. The root cause is the change that I just made.

That's my common scenario, and I try to optimise my processes for the common scenarios.

Fast feedback #

There's an implied way of working that affects such attitudes. Since I learned about TDD in 2003 I've always relished the fast feedback I get from a test suite. Since I tried continuous deployment around 2014, I consider it central to modern software engineering (and Accelerate strongly suggests so, too).

The modus operandi I outline above is one of fast feedback. If you're sitting on a feature branch for weeks before integrating into master, or if you can only deploy two times a year, this influences what works and what doesn't.

Both Modern Software Engineering and Accelerate make a strong case that short feedback cycles are pivotal for successful software development organisations.

I also understand that that's not the reality for everyone. When faced with long cycle times, a multitude of Erratic Tests, a legacy code base, and so on, other things become important. In those circumstances, tests may fail for different reasons.

When you work with TDD, continuous integration (CI), and continuous deployment (CD), then when do tests fail? They fail because you made them fail, only minutes earlier. Fix your code and move forward.

Conclusion #

When discussing test names and assertion messages, I've been surprised by the emphasis some people put on what I consider to be of secondary importance. I think the explanation is that circumstances differ.

With TDD and CI/CD you mostly look at a unit test when you write it, or if some regression test fails because you changed some code (perhaps in response to a test you just wrote). Your test suite may have hundreds or thousands of tests. Most of these pass every time you run the test suite. That's the normal state of affairs.

In other circumstances, you may have Erratic Tests that fail unpredictably. You should make it a priority to stop that, but as part of that process, you may need good assertion messages and good test names.

Different circumstances call for different reactions, so what works well in one situation may be a liability in other situations. I hope that this article has shed a little light on the forces you may want to consider.


GitHub Copilot preliminary experience report

Monday, 05 December 2022 08:37:00 UTC

Based on a few months of use.

I've been evaluating GitHub Copilot since August 2022. Perhaps it's time to collect my thoughts so far.

In short, it's surprisingly good, but also gets a lot of things wrong. It does seem helpful to the experienced programmer, but I don't see it replacing all programmers yet.

Not only for boilerplate code #

I was initially doubtful. I'd seen some demos where Copilot created fifteen to twenty lines of code to, say, make a REST API request. These examples mostly struck me as auto-generation of something that ought to be a proper abstraction: A method in a reusable library.

In general I don't think that typing is a productivity bottleneck, and I'm sceptical of productivity tools, and particularly code generators. The more code a code base contains, the more code there is to read. Accelerating code production doesn't strike me as a goal in itself.

On the other hand, I'm past fifty and aware that at my age, one often tends to resist new ideas. Perhaps the above justifications are nothing but motivated reasoning. I decided to install the plug-ins and go about my business to see what would happen.

It has surprised and delighted me more than once. It's not only for boilerplate code. Here's a recent example.

I was doing an exercise in F# and wrote this type definition:

type Instruction = Enqueue of int | Dequeue

I wanted to write a property-based test with this type, and since I was using Hedgehog I needed a generator for the type. While FsCheck can automatically generate values of custom discriminated unions, Hedgehog has a more explicit-is-better-than-implicit design philosophy. In other words, if there's a way to make Hedgehog automatically generate values of a type like the above Instruction, it has so far escaped me.

So I started typing let private genInstruction = after which Copilot suggested the following:

let private genInstruction =
    Gen.frequency
        [ 1, Gen.constant Dequeue
          1, Gen.int32 (Range.constant 0 10_000) |> Gen.map Enqueue ]

I was, frankly, stunned. While I do understand that Copilot doesn't actually understand the code it suggests, Hedgehog is sufficiently esoteric that I didn't expect Copilot to have enough training data to enable it to make a useful contribution in this niche. I was wrong. I've repeatedly seen Copilot make suggestions to my F# and Haskell code. It's not just for C#, JavaScript, or Python code.

The above suggestion was, to be clear, absolutely appropriate and compiled right away. The only detail I changed was the Range, opting for Range.linear instead. That's not, however, a significant change.

Perhaps you're not impressed by three lines of auto-generated code. How much of a productivity improvement is that? Quite a bit, in my case.

It wouldn't have taken me long to type those three lines of code, but as I already mentioned, typing isn't a bottleneck. On the other hand, looking up an unfamiliar API can take some time. The Programmer's Brain discusses this kind of problem and suggests exercises to address it. Does Copilot offer a shortcut?

While I couldn't remember the details of Hedgehog's API, once I saw the suggestion, I recognised Gen.frequency, so I understood it as an appropriate code suggestion. The productivity gain, if there is one, may come from saving you the effort of looking up unfamiliar APIs, rather than saving you some keystrokes.

In this example, I already knew of the Gen.frequency function - I just couldn't recall the exact name and type. This enabled me to evaluate Copilot's suggestion and deem it correct. If I hadn't known that API already, how could I have known whether to trust Copilot?

Detectably wrong suggestions #

As amazing as Copilot can be, it's hardly faultless. It makes many erroneous suggestions. Sometimes the suggestion is obviously wrong. If you accept it, it doesn't compile. Sometimes the code is only a small edit away from being correct, but at least in such situations you'll be explicitly aware that the suggestion couldn't be used verbatim.

Other suggestions are wrong, but less conspicuously so. Here's an example.

I was recently subjecting the code base that accompanies Code That Fits in Your Head to the mutation testing tool Stryker. Since it did point out a few possible mutations, I decided to add a few tests. One was of a wrapper class called TimeOfDay. Because of static code analysis rules, it came with conversions to and from TimeSpan, but these methods weren't covered by any tests.

In order to remedy that situation, I started writing an FsCheck property and came as far as:

[Property]
public void ConversionsRoundTrip(TimeSpan timeSpan)

At that point Copilot suggested the following, which I accepted:

[Property]
public void ConversionsRoundTrip(TimeSpan timeSpan)
{
    var timeOfDay = new TimeOfDay(timeSpan);
    var actual = (TimeSpan)timeOfDay;
    Assert.Equal(timeSpan, actual);
}

Looks good, doesn't it? Again, I was impressed. It compiled, and it even looks as though Copilot had picked up one of my naming conventions: naming variables by role, in this case actual.

While I tend to be on guard, I immediately ran the test suite instead of thinking it through. It failed. Keep in mind that this is a characterisation test, so it was supposed to pass.

The TimeOfDay constructor reveals why:

public TimeOfDay(TimeSpan durationSinceMidnight)
{
    if (durationSinceMidnight < TimeSpan.Zero ||
        TimeSpan.FromHours(24) < durationSinceMidnight)
        throw new ArgumentOutOfRangeException(
            nameof(durationSinceMidnight),
            "Please supply a TimeSpan between 0 and 24 hours.");
 
    this.durationSinceMidnight = durationSinceMidnight;
}

While FsCheck knows how to generate TimeSpan values, it'll generate arbitrary durations, including negative values and spans much longer than 24 hours. That explains why the test fails.

Granted, this is hardly a searing indictment against Copilot. After all, I could have made this mistake myself.

Still, that prompted me to look for more issues with the code that Copilot had suggested. Another problem with the code is that it tests the wrong API. The suggested test tries to round-trip via the TimeOfDay class' explicit cast operators, which were already covered by tests. Well, I might eventually have discovered that, too. Keep in mind that I was adding this test to improve the code base's Stryker score. After running the tool again, I would probably eventually have discovered that the score didn't improve. It takes Stryker around 25 minutes to test this code base, though, so it wouldn't have been rapid feedback.

Since, however, I examined the code with a critical eye, I noticed this by myself. This would clearly require changing the test code as well.
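To illustrate the difference between the two APIs, here's a sketch of how the conversions might be declared. This is my reconstruction based on the tests in this article, not the actual class from the book's code base; the named methods are the kind of operator alternatives that static code analysis rules ask for:

public static explicit operator TimeSpan(TimeOfDay timeOfDay)
{
    // The cast operators were already covered by tests; this is the API
    // that Copilot's suggested test exercised.
    return timeOfDay.durationSinceMidnight;
}

public static explicit operator TimeOfDay(TimeSpan timeSpan)
{
    return new TimeOfDay(timeSpan);
}

public static TimeSpan ToTimeSpan(TimeOfDay timeOfDay)
{
    // The named conversion methods were the uncovered ones that prompted
    // the new test.
    return (TimeSpan)timeOfDay;
}

public static TimeOfDay ToTimeOfDay(TimeSpan timeSpan)
{
    return (TimeOfDay)timeSpan;
}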

In the end, I wrote this test:

[Property]
public void ConversionsRoundTrip(TimeSpan timeSpan)
{
    var expected = ScaleToTimeOfDay(timeSpan);
    var sut = TimeOfDay.ToTimeOfDay(expected);
 
    var actual = TimeOfDay.ToTimeSpan(sut);
 
    Assert.Equal(expected, actual);
}
 
private static TimeSpan ScaleToTimeOfDay(TimeSpan timeSpan)
{
    // Convert an arbitrary TimeSpan to a 24-hour TimeSpan.
    // The data structure that underlies TimeSpan is a 64-bit integer,
    // so first we need to identify the range of possible TimeSpan
    // values. It might be easier to understand to calculate
    // TimeSpan.MaxValue - TimeSpan.MinValue, but that underflows.
    // Instead, the number of possible 64-bit integer values is the same
    // as the number of possible unsigned 64-bit integer values.
    var range = ulong.MaxValue;
    var domain = TimeSpan.FromHours(24).Ticks;
    var scale = (double)domain / range;
    var expected = timeSpan * scale;
    return expected;
}

In this case, Copilot didn't improve my productivity. It may actually have slowed me down a bit.

This time, it wasn't too difficult to spot issues with the suggested code. What if the problems are more subtle?

Errors that are difficult to detect #

How do bugs appear? We write them, thinking that our code is going to do one thing, while the compiler decides otherwise. Even when we actively engage with the code, we may be looking at a bug and still fail to see it.

Why should we trust Copilot to produce bug-free code?

Here's another example. I was going through the Range kata in F# and made my Endpoint discriminated union a functor:

type Endpoint<'a> = Open of 'a | Closed of 'a
 
module Endpoint =
    let map f = function
        | Open   x -> Open   (f x)
        | Closed x -> Closed (f x)

Making a type a functor is such a well-understood process that Haskell can automate it. I knew how to write the above map function as well, so I wrote it without testing it.

After I'd written the map function, I decided to add tests of the two functor laws. Not because I doubted the implementation, but because I might need that as example code later.

Copilot suggested this test after I named it:

open Xunit
open Hedgehog
 
[<Fact>]
let ``First functor law`` () = Property.check <| property {
    let! x = Gen.int32 (Range.linearBounded ())
    let f = (fun x -> x + 1)
    let g = (fun x -> x * 2)
 
    let actual = Endpoint.map f (Endpoint.map g (Open x))
    let expected = Endpoint.map (f >> g) (Open x)
 
    Assert.Equal(expected, actual) }

It compiles. It also fails.

What's wrong with it?

If you know the functor laws by heart, you'll be able to spot that this is clearly not the first functor law, which concerns mapping the identity function. On the other hand, it looks convincingly like the second functor law, which relates mapping a composition of two functions to composing two mappings. Should I just change the name and move on?

I can't, though, since the test fails. Could there be a bug in my map function, after all?

No, there's an error in the test. I invite you to spot it.

In terms of keystrokes, it's easy to fix the problem:

open Xunit
open Hedgehog
 
[<Fact>]
let ``First functor law`` () = Property.check <| property {
    let! x = Gen.int32 (Range.linearBounded ())
    let f = (fun x -> x + 1)
    let g = (fun x -> x * 2)
 
    let actual = Endpoint.map f (Endpoint.map g (Open x))
    let expected = Endpoint.map (f << g) (Open x)
 
    Assert.Equal(expected, actual) }

Spot the edit. I bet it'll take you longer to find it than it took me to type it.

The test now passes, but for one who has spent less time worrying over functor laws than I have, troubleshooting this could have taken a long time.

These almost-right suggestions from Copilot both worry me and give me hope.

Copilot for experienced programmers #

When a new technology like Copilot appears, it's natural to speculate on the consequences. Does this mean that programmers will lose their jobs?

This is just a preliminary evaluation after a few months, so I could be wrong, but I think we programmers are safe. If you're experienced, you'll be able to tell most of Copilot's hits from its misses. Perhaps you'll get a productivity improvement out of it, but it could also slow you down.

The tool is likely to improve over time, so I'm hopeful that this could become a net productivity gain. Still, with this high an error rate, I'm not too worried yet.

The Pragmatic Programmer describes a programming style named Programming by Coincidence. People who develop software this way have only a partial understanding of the code they write.

"Fred doesn't know why the code is failing because he didn't know why it worked in the first place."

I've encountered my fair share of these people. When editing code, they make small adjustments and do cursory manual testing until 'it looks like it works'. If they have to start a new feature or are otherwise faced with a metaphorical blank page, they'll copy some code from somewhere else and use that as a starting point.

You'd think that Copilot could enhance the productivity of such people, but I'm not sure. It might actually slow them down. These people don't fully understand the code they themselves 'write', so why should we expect them to understand the code that Copilot suggests?

If faced with a Copilot suggestion that 'almost works', will they be able to spot if it's a genuinely good suggestion, or whether it's off, like I've described above? If the Copilot code doesn't work, how much time will they waste thrashing?

Conclusion #

GitHub Copilot has the potential to be a revolutionary technology, but it's not, yet. So far, I'm not too worried. It's an assistant, like a pairing partner, but it's up to you to evaluate whether the code that Copilot suggests is useful, correct, and safe. How can you do that unless you already know what you're doing?

If you don't have the qualifications to evaluate the suggested code, I fail to see how it's going to help you. Granted, it does have the potential to help you move on in less time than you would otherwise have spent. In this article, I showed one example where I would have had to spend significant time looking up API documentation. Instead, Copilot suggested the correct code to use.

Pulling in the other direction are the many false positives. Copilot makes many suggestions, and many of them are poor. The ones that are recognisably bad are unlikely to slow you down. I'm more concerned with those that are subtly wrong. They have the potential to waste much time.

Which of these forces is stronger? The potential for wasting time is infinite, while the maximum productivity gain you can achieve is 100 percent. That's an asymmetric distribution. There's a long tail of time wasters, but there's no equivalent long tail of improvement.

I'm not, however, trying to be pessimistic. I expect to keep Copilot around for the time being. It could very well be here to stay. Used correctly, it seems useful.

Is it going to replace programmers? Hardly. Rather, it may enable poor developers to make such a mess of things that you need even more good programmers to subsequently fix things.


An initial proof of concept of applicative assertions in C#

Monday, 28 November 2022 06:47:00 UTC

Worthwhile? Not obviously.

This article is the first instalment in a small articles series about applicative assertions. It explores a way to compose assertions in such a way that failure messages accumulate rather than short-circuit. It assumes that you've read the article series introduction.

Assertions are typically based on throwing exceptions. As soon as one assertion fails, an exception is thrown and no further assertions are evaluated. This is normal short-circuiting behaviour of exceptions. In some cases, however, it'd be useful to keep evaluating other assertions and collect error messages.

This article series explores an intriguing idea to address such issues: Use an applicative functor to collect multiple assertion messages. I started experimenting with the idea to see where it would lead. The article series serves as a report of what I found. It is neither a recommendation nor a caution. I still find the idea interesting, but I'm not sure whether the complexity is warranted.

Example scenario #

A realistic example is often illustrative, although there's a risk that the realism carries with it some noise that detracts from the core of the matter. I'll reuse an example that I've already discussed and explained in greater detail. The code is from the code base that accompanies my book Code That Fits in Your Head.

This test has two independent assertions:

[Theory]
[InlineData(884, 18, 47, "c@example.net", "Nick Klimenko", 2)]
[InlineData(902, 18, 50, "emot@example.gov", "Emma Otting", 5)]
public async Task DeleteReservation(
    int days, int hours, int minutes, string email, string name, int quantity)
{
    using var api = new LegacyApi();
    var at = DateTime.Today.AddDays(days).At(hours, minutes)
        .ToIso8601DateTimeString();
    var dto = Create.ReservationDto(at, email, name, quantity);
    var postResp = await api.PostReservation(dto);
    Uri address = FindReservationAddress(postResp);
 
    var deleteResp = await api.CreateClient().DeleteAsync(address);
 
    Assert.True(
        deleteResp.IsSuccessStatusCode,
        $"Actual status code: {deleteResp.StatusCode}.");
    var getResp = await api.CreateClient().GetAsync(address);
    Assert.Equal(HttpStatusCode.NotFound, getResp.StatusCode);
}

The test exercises the REST API to first create a reservation, then delete it, and finally check that the reservation no longer exists. Two independent postconditions must be true for the test to pass:

  • The DELETE request must result in a status code that indicates success.
  • The resource must no longer exist.

It's conceivable that a bug might fail one of these without invalidating the other.

As the test is currently written, it uses xUnit.net's standard assertion library. If the Assert.True verification fails, the Assert.Equal statement isn't evaluated.

Assertions as validations #

Is it possible to evaluate the Assert.Equal postcondition even if the first assertion fails? You could use a try/catch block, but is there a more composable and elegant option? How about an applicative functor?

Since I was interested in exploring this question as a proof of concept, I decided to reuse the machinery that I'd already put in place for the article An applicative reservation validation example in C#: The Validated class and its associated functions. In a sense, you can think of an assertion as a validation of a postcondition.

This is not a resemblance I intend to carry too far. What I learn by experimenting with Validated I can apply to a more appropriately-named class like Asserted.
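For the benefit of readers who haven't read that article, here's a minimal sketch of the Validated API that the following code builds on. It's a paraphrase that only includes the members used below; the class in the validation article is more elaborate:

using System;

// A minimal sketch of Validated, paraphrased for this article. F is the
// failure type, S the success type. A value is either a single F or a
// single S, never both.
public sealed class Validated<F, S>
{
    private readonly bool isSuccess;
    private readonly F failure;
    private readonly S success;

    private Validated(bool isSuccess, F failure, S success)
    {
        this.isSuccess = isSuccess;
        this.failure = failure;
        this.success = success;
    }

    internal static Validated<F, S> Succeed(S success)
    {
        return new Validated<F, S>(true, default!, success);
    }

    internal static Validated<F, S> Fail(F failure)
    {
        return new Validated<F, S>(false, failure, default!);
    }

    public T Match<T>(Func<F, T> onFailure, Func<S, T> onSuccess)
    {
        return isSuccess ? onSuccess(success) : onFailure(failure);
    }
}

public static class Validated
{
    public static Validated<F, S> Succeed<F, S>(S success)
    {
        return Validated<F, S>.Succeed(success);
    }

    public static Validated<F, S> Fail<F, S>(F failure)
    {
        return Validated<F, S>.Fail(failure);
    }
}

The unqualified Succeed and Fail calls in the snippets below presumably reach these methods via using static directives.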

Neither of the two above assertions returns a value; they are one-stop assertions. If they succeed, they return nothing; if they fail, they produce an error.

It's possible to model this kind of behaviour with Validated. You can model a collection of errors with, well, a collection. To keep the proof of concept simple, I decided to use a collection of strings: IReadOnlyCollection<string>. To model 'nothing' I had to add a unit type:

public sealed class Unit
{
    private Unit() { }
 
    public readonly static Unit Value = new Unit();
}

This enabled me to define assertions as Validated<IReadOnlyCollection<string>, Unit> values: Either a collection of error messages, or nothing.

Asserting truth #

Instead of xUnit.net's Assert.True, you can now define an equivalent function:

public static Validated<IReadOnlyCollection<string>, Unit> AssertTrue(
    this bool condition,
    string message)
{
    return condition
        ? Succeed<IReadOnlyCollection<string>, Unit>(Unit.Value)
        : Fail<IReadOnlyCollection<string>, Unit>(new[] { message });
}

It simply returns a Success value containing nothing when condition is true, and otherwise a Failure value containing the error message.

You can use it like this:

var assertResponse = Validated.AssertTrue(
    deleteResp.IsSuccessStatusCode,
    $"Actual status code: {deleteResp.StatusCode}.");

Later in the article you'll see how this assertion combines with another assertion.

Asserting equality #

Instead of xUnit.net's Assert.Equal, you can also define a function that works the same way but returns a Validated value:

public static Validated<IReadOnlyCollection<string>, Unit> AssertEqual<T>(
    T expected,
    T actual)
{
    return Equals(expected, actual)
        ? Succeed<IReadOnlyCollection<string>, Unit>(Unit.Value)
        : Fail<IReadOnlyCollection<string>, Unit>(new[]
            { $"Expected {expected}, but got {actual}." });
}

The AssertEqual function first uses Equals to compare expected with actual. If the result is true, the function returns a Success value containing nothing; otherwise, it returns a Failure value containing a failure message. Since this is only a proof of concept, the failure message is useful, but minimal.

Notice that this function returns a value of the same type (Validated<IReadOnlyCollection<string>, Unit>) as AssertTrue.

You can use the function like this:

var assertState = Validated.AssertEqual(HttpStatusCode.NotFound, getResp.StatusCode);

Again, you'll see how to combine this assertion with the above assertResponse value later in this article.

Evaluating assertions #

The DeleteReservation test only has two independent assertions, so in my proof of concept, all I needed to do was to figure out a way to combine two applicative assertions into one, and then evaluate it. This rather horrible method does that:

public static void RunAssertions(
    Validated<IReadOnlyCollection<string>, Unit> assertion1,
    Validated<IReadOnlyCollection<string>, Unit> assertion2)
{
    var f = Succeed<IReadOnlyCollection<string>, Func<Unit, Unit, Unit>>((_, __) => Unit.Value);
    Func<IReadOnlyCollection<string>, IReadOnlyCollection<string>, IReadOnlyCollection<string>>
        combine = (x, y) => x.Concat(y).ToArray();
 
    Validated<IReadOnlyCollection<string>, Unit> composition = f
        .Apply(assertion1, combine)
        .Apply(assertion2, combine);
    string errors = composition.Match(
        onFailure: f => string.Join(Environment.NewLine, f),
        onSuccess: _ => string.Empty);
 
    if (!string.IsNullOrEmpty(errors))
        throw new Exception(errors);
}

C# doesn't have good language features for applicative functors the same way that F# and Haskell do, and although you can use various tricks to make the programming experience better than what is on display here, I was still doing a proof of concept. If it turns out that this approach is useful and warranted, we can introduce some of the facilities to make the API more palatable. For now, though, we're dealing with all the rough edges.

The way that applicative functors work, you typically use a 'lifted' function to combine two (or more) 'lifted' values. Here, 'lifted' means 'being inside the Validated container'.

Each of the assertions that I want to combine has the same type: Validated<IReadOnlyCollection<string>, Unit>. Notice that the S (success) generic type argument is Unit in both cases. While it seems redundant, formally I needed a 'lifted' function to combine two Unit values into a single value. This single value can (in principle) have any type I'd like it to have, but since you can't extract any information out of a Unit value, it makes sense to use the monoidal nature of unit to combine two into one.

Basically, you just ignore the Unit input values because they carry no information. Also, they're all the same value anyway, since the type is a Singleton. In its 'naked' form, the function might be implemented like this: (_, __) => Unit.Value. Due to the ceremony required by the combination of C# and applicative functors, however, this monoidal binary operation has to be 'lifted' to a Validated value. That's the f value in the RunAssertions function body.

The Validated.Apply function requires as an argument a function that combines the generic F (failure) values into one, in order to deal with the case where there are multiple failures. In this case F is IReadOnlyCollection<string>. Since declarations of Func values in C# require explicit type declarations, that's a bit of a mouthful, but the combine function just concatenates two collections into one.
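For completeness, here's a sketch of Apply overloads with the shape that RunAssertions relies on. Again, this is my paraphrase of the general applicative-validation pattern, not a verbatim copy of the earlier article's code:

using System;

public static class ValidatedExtensions
{
    // Applies a lifted single-argument function to a lifted value. Two
    // failures are merged with combine; a single failure propagates
    // unchanged; only two successes produce a new success.
    public static Validated<F, S> Apply<F, T, S>(
        this Validated<F, Func<T, S>> selector,
        Validated<F, T> source,
        Func<F, F, F> combine)
    {
        return selector.Match(
            onFailure: f1 => source.Match(
                onFailure: f2 => Validated.Fail<F, S>(combine(f1, f2)),
                onSuccess: _ => Validated.Fail<F, S>(f1)),
            onSuccess: map => source.Match(
                onFailure: f2 => Validated.Fail<F, S>(f2),
                onSuccess: x => Validated.Succeed<F, S>(map(x))));
    }

    // Overload for a lifted two-argument function. It curries the function
    // so that the single-argument overload can do the work, leaving a
    // lifted one-argument function for a second Apply call.
    public static Validated<F, Func<T2, S>> Apply<F, T1, T2, S>(
        this Validated<F, Func<T1, T2, S>> selector,
        Validated<F, T1> source,
        Func<F, F, F> combine)
    {
        Validated<F, Func<T1, Func<T2, S>>> curried = selector.Match(
            onFailure: Validated.Fail<F, Func<T1, Func<T2, S>>>,
            onSuccess: f => Validated.Succeed<F, Func<T1, Func<T2, S>>>(
                x => y => f(x, y)));
        return curried.Apply(source, combine);
    }
}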

The RunAssertions method can now Apply both assertion1 and assertion2 to f, which produces a combined Validated value, composition. It then matches on the combined value to produce a string value. If there are no assertion messages, the result is the empty string; otherwise, the function combines the assertion messages with a NewLine between each. Again, this is proof-of-concept code. A more robust and flexible API (if warranted) might keep the errors around as a collection of strongly typed Value Objects.

Finally, if the resulting errors string is neither null nor empty, the RunAssertions method throws an exception with the combined error message(s). Here I once more invoked my proof-of-concept privilege to throw an Exception, even though the framework design guidelines admonish against doing so.

Ultimately, then, the assert phase of the test looks like this:

var assertResponse = Validated.AssertTrue(
    deleteResp.IsSuccessStatusCode,
    $"Actual status code: {deleteResp.StatusCode}.");
var getResp = await api.CreateClient().GetAsync(address);
var assertState =
    Validated.AssertEqual(HttpStatusCode.NotFound, getResp.StatusCode);
Validated.RunAssertions(assertResponse, assertState);

The rest of the test hasn't changed.

Outcomes #

Running the test with the applicative assertions passes, as expected. In order to verify that it works as it's supposed to, I tried to sabotage the System Under Test (SUT) in various ways. First, I made the Delete method that handles DELETE requests a no-op, while still returning 200 OK. As you'd expect, the result is a test failure with this message:

Message:
System.Exception : Expected NotFound, but got OK.

This is the assertion that verifies that getResp.StatusCode is 404 Not Found. It fails because the sabotaged Delete method doesn't delete the reservation.

Then I further sabotaged the SUT to also return an incorrect status code (400 Bad Request), which produced this failure message:

Message:
System.Exception : Actual status code: BadRequest.
Expected NotFound, but got OK.

Notice that the message contains information about both failure conditions.

Finally, I re-enabled the correct behaviour (deleting the reservation from the data store) while still returning 400 Bad Request:

Message:
System.Exception : Actual status code: BadRequest.

As desired, the assertions collect all relevant failure messages.

Conclusion #

Not surprisingly, it's possible to design a composable assertion API that collects multiple failure messages using an applicative functor. Anyone who knows how applicative validation works would have been able to predict that outcome. That's not what the above proof of concept was about. What I wanted to see was rather how it would play out in a realistic scenario, and whether using an applicative functor is warranted.

Applicative functors don't gel well with C#, so unsurprisingly the API is awkward. It's likely possible to smooth much of the friction, but without good language support and syntactic sugar, it's unlikely to become idiomatic C#.

Rather than taking the edge off the unwieldy API, the implementation of RunAssertions suggests another alternative.

Next: Error-accumulating composable assertions in C#.

