AI for doc comments

Monday, 10 July 2023 06:02:00 UTC

A solution in search of a problem?

I was recently listening to a podcast episode where the guest (among other things) enthused about how advances in large language models mean that you can now get these systems to write XML doc comments.

You know, these things:

/// <summary>
/// Scorbles a dybliad.
/// </summary>
/// <param name="dybliad">The dybliad to scorble.</param>
/// <param name="flag">
/// A flag that controls whether scorbling is done pre- or postvotraid.
/// </param>
/// <returns>The scorbled dybliad.</returns>
public string Scorble(string dybliad, bool flag)

And it struck me how that's not the first time I've encountered that notion. Finally, you no longer need to write those tedious documentation comments in your code. Instead, you can get Github Copilot or ChatGPT to write them for you.

When was the last time you wrote such comments?

I'm sure that there are readers who wrote some just yesterday, but generally, I rarely encounter them in the wild.

As a rule, I only write them when my modelling skills fail me so badly that I need to apologise in code. Whenever I run into such a situation, I may as well take advantage of the format already in place for such things, but it's not taking up a big chunk of my time.

It's been a decade since I ran into a code base where doc comments were mandatory. When I had to write comments, I'd use GhostDoc, which used heuristics to produce 'documentation' on par with modern AI tools.

Whether you use GhostDoc, Github Copilot, or write the comments yourself, most of them tend to be equally inane and vacuous. Good design only amplifies this quality. The better names you use, and the more you leverage the type system to make illegal states unrepresentable, the less you need the kind of documentation furnished by doc comments.

I find it striking that more than one person waxes poetic about AI's ability to produce doc comments.

Is that, ultimately, the only thing we'll entrust to large language models?

I know that they can do more than that, but are we going to let them? Or are automatic doc comments a solution in search of a problem?


Validating or verifying emails

Monday, 03 July 2023 05:41:00 UTC

On separating preconditions from business rules.

My recent article Validation and business rules elicited this question:

"Regarding validation should be pure function, lets have user registration as an example, is checking the email address uniqueness a validation or a business rule? It may not be pure since the check involves persistence mechanism."

This is a great opportunity to examine some heuristics in greater detail. As always, this mostly presents how I think about problems like this, and so doesn't represent any rigid universal truth.

The specific question is easily answered, but when the topic is email addresses and validation, I foresee several follow-up questions that I also find interesting.

Uniqueness constraint #

A new user signs up for a system, and as part of the registration process, you want to verify that the email address is unique. Is that validation or a business rule?

Again, I'm going to put the cart before the horse and first use the definition to answer the question.

Validation is a pure function that decides whether data is acceptable.

Can you implement the uniqueness constraint with a pure function? Not easily. What most systems would do, I'm assuming, is to keep track of users in some sort of data store. This means that in order to check whether or not an email address is unique, you'd have to query that database.

Querying a database is non-deterministic because you could be making multiple subsequent queries with the same input, yet receive differing responses. In this particular example, imagine that you ask the database whether ann.siebel@example.com is already registered, and the answer is no, that address is new to us.

Database queries are snapshots in time. All that answer tells you is that at the time of the query, the address would be unique in your database. While that answer travels over the network back to your code, a concurrent process might add that very address to the database. Thus, the next time you ask the same question: Is ann.siebel@example.com already registered? the answer would be: Yes, we already know of that address.

Verifying that the address is unique (most likely) involves an impure action, and so according to the above definition isn't a validation step. By the law of the excluded middle, then, it must be a business rule.

Using a different rule of thumb, Robert C. Martin arrives at the same conclusion:

"Uniqueness is semantic not syntactic, so I vote that uniqueness is a business rule not a validation rule."

This highlights a point about this kind of analysis. Using functional purity is a heuristic shortcut to sorting verification problems. Those that are deterministic and have no side effects are validation problems, and those that are either non-deterministic or have side effects are not.

Being able to sort problems in this way is useful because it enables you to choose the right tool for the job, and to avoid the wrong tool. In this case, trying to address the uniqueness constraint with validation is likely to cause trouble.

Why is that? Because of what I already described. A database query is a snapshot in time. If you make a decision based on that snapshot, it may be the wrong decision once you reach a conclusion. Granted, when discussing user registration, the risk of several processes concurrently trying to register the same email address probably isn't that big, but in other domains, contention may be a substantial problem.

Being able to identify a uniqueness constraint as something that isn't validation enables you to avoid that kind of attempted solution. Instead, you may contemplate other designs. If you keep users in a relational database, the easiest solution is to put a uniqueness constraint on the Email column and let the database deal with the problem. Just be prepared to handle the exception that the INSERT statement may generate.
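To illustrate the relational-database option, here's a minimal sketch, assuming SQL Server via Microsoft.Data.SqlClient and a hypothetical Users table with a UNIQUE constraint on its Email column. Error numbers 2627 and 2601 are SQL Server's codes for unique-constraint and unique-index violations.

using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

public static class UserGateway
{
    // Returns true if the address was inserted, false if it already existed.
    public static async Task<bool> TryAddUser(SqlConnection conn, string email)
    {
        const string sql = "INSERT INTO [Users] ([Email]) VALUES (@email)";
        using var cmd = new SqlCommand(sql, conn);
        cmd.Parameters.AddWithValue("@email", email);
        try
        {
            await cmd.ExecuteNonQueryAsync().ConfigureAwait(false);
            return true;
        }
        catch (SqlException e) when (e.Number == 2627 || e.Number == 2601)
        {
            // Another process beat us to it; the uniqueness constraint held.
            return false;
        }
    }
}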

If you have another kind of data store, there are other ways to model the constraint. You can even do so using lock-free architectures, but that's out of scope for this article.

Validation checks preconditions #

Encapsulation is an important part of object-oriented programming (and functional programming as well). As I've often outlined, I base my understanding of encapsulation on Object-Oriented Software Construction. I consider contracts (preconditions, invariants, and postconditions) essential to encapsulation.

I'll borrow a figure from my article Can types replace validation?:

An arrow labelled 'validation' pointing from a document to the left labelled 'Data' to a box to the right labelled 'Type'.

The role of validation is to answer the question: Does the data make sense?

This question, and its answer, is typically context-dependent. What 'makes sense' means may differ. This is even true for email addresses.

When I wrote the example code for my book Code That Fits in Your Head, I had to contemplate how to model email addresses. Here's an excerpt from the book:

Email addresses are notoriously difficult to validate, and even if you had a full implementation of the SMTP specification, what good would it do you?

Users can easily give you a bogus email address that fits the spec. The only way to really validate an email address is to send a message to it and see if that provokes a response (such as the user clicking on a validation link). That would be a long-running asynchronous process, so even if you'd want to do that, you can't do it as a blocking method call.

The bottom line is that it makes little sense to validate the email address, apart from checking that it isn't null. For that reason, I'm not going to validate it more than I've already done.

In this example, I decided that the only precondition I would need to demand was that the email address isn't null. This was motivated by the operations I needed to perform with the email address - or rather, in this case, the operations I didn't need to perform. The only thing I needed to do with the address was to save it in a database and send emails:

public async Task EmailReservationCreated(int restaurantId, Reservation reservation)
{
    if (reservation is null)
        throw new ArgumentNullException(nameof(reservation));
 
    var r = await RestaurantDatabase.GetRestaurant(restaurantId).ConfigureAwait(false);
 
    var subject = $"Your reservation for {r?.Name}.";
    var body = CreateBodyForCreated(reservation);
    var email = reservation.Email.ToString();
 
    await Send(subject, body, email).ConfigureAwait(false);
}

This code example suggests why I made it a precondition that Email mustn't be null. Had null been allowed, I would have had to resort to defensive coding, which is exactly what encapsulation makes redundant.

Validation is a process that determines whether data is useful in a particular context. In this particular case, all it takes is to check the Email property on the DTO. The sample code that comes with Code That Fits in Your Head shows the basics, while An applicative reservation validation example in C# contains a more advanced solution.

Preconditions are context-dependent #

I would assume that a normal user registration process has little need to validate an ostensible email address. A system may want to verify the address, but that's a completely different problem. It usually involves sending an email to the address in question and having some asynchronous process register whether the user verifies that email. For an article related to this problem, see Refactoring registration flow to functional architecture.

Perhaps you've been reading this with mounting frustration: How about validating the address according to the SMTP spec?

Indeed, that sounds like something one should do, but turns out to be rarely necessary. As already outlined, users can easily supply a bogus address like foo@bar.com. It's valid according to the spec, and so what? How does that information help you?

In most contexts I've found myself in, validating according to the SMTP specification is a distraction. One might, however, imagine scenarios where it's required. If, for example, you need to sort addresses according to user name or host name, or perform some filtering on those parts, it might be warranted to actually require that the address is valid.

This would imply a validation step that attempts to parse the address. Once again, parsing here implies translating less-structured data (a string) to more-structured data. On .NET, I'd consider using the MailAddress class which already comes with built-in parser functions.
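If such parsing is warranted, a minimal sketch could lean on the framework's own parser. Note that MailAddress.TryCreate only exists from .NET 5 onwards; on earlier versions you'd have to catch the FormatException thrown by the MailAddress constructor instead.

using System.Net.Mail;

public static class EmailParser
{
    // Returns the parsed address, or null if the candidate isn't a valid
    // email address according to MailAddress' built-in parser.
    public static MailAddress? TryParse(string candidate)
    {
        return MailAddress.TryCreate(candidate, out var address)
            ? address
            : null;
    }
}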

The point being that your needs determine your preconditions, which again determine what validation should do. The preconditions are context-dependent, and so is validation.

Conclusion #

Email addresses offer a welcome opportunity to discuss the difference between validation and verification in a way that is specific, but still, I hope, easy to extrapolate from.

Validation is a translation from one (less-structured) data format to another. Typically, the more-structured data format is an object, a record, or a hash map (depending on language). Thus, validation is determined by two forces: What the input data looks like, and what the desired object requires; that is, its preconditions.

Validation is always a translation with the potential for error. Some input, being less-structured, can't be represented by the more-structured format. In addition to parsing, a validation function must also be able to fail in a composable manner. That is, fortunately, a solved problem.


Validation and business rules

Monday, 26 June 2023 06:05:00 UTC

A definition of validation as distinguished from business rules.

This article suggests a definition of validation in software development. A definition, not the definition. It presents how I currently distinguish between validation and business rules. I find the distinction useful, although perhaps it's a case of reversed causality. The following definition of validation is useful because, if defined like that, it's a solved problem.

My definition is this:

Validation is a pure function that decides whether data is acceptable.

I've used the word acceptable because it suggests a link to Postel's law. When validating, you may want to allow for some flexibility in input, even if, strictly speaking, it's not entirely on spec.

That's not, however, the key ingredient in my definition. The key is that validation should be a pure function.

While this may sound like an arbitrary requirement, there's a method to my madness.

Business rules #

Before I explain the benefits of the above definition, I think it'll be useful to outline typical problems that developers face. My thesis in Code That Fits in Your Head is that understanding limits of human cognition is a major factor in making a code base sustainable. This again explains why encapsulation is such an important idea. You want to confine knowledge in small containers that fit in your head. Information shouldn't leak out of these containers, because that would require you to keep track of too much stuff when you try to understand other code.

When discussing encapsulation, I emphasise contract over information hiding. A contract, in the spirit of Object-Oriented Software Construction, is a set of preconditions, invariants, and postconditions. Preconditions are particularly relevant to the topic of validation, but I've often experienced that some developers struggle to identify where validation ends and business rules begin.

Consider an online restaurant reservation system as an example. We'd like to implement a feature that enables users to make reservations. In order to meet that end, we decide to introduce a Reservation class. What are the preconditions for creating a valid instance of such a class?

When I go through such an exercise, people quickly identify requirements such as these:

  • The reservation should have a date and time.
  • The reservation should contain the number of guests.
  • The reservation should contain the name or email (or other data) about the person making the reservation.

A common suggestion is that the restaurant should also be able to accommodate the reservation; that is, it shouldn't be fully booked, it should have an available table at the desired time of an appropriate size, etc.

That, however, isn't a precondition for creating a valid Reservation object. That's a business rule.

Preconditions are self-contained #

How do you distinguish between a precondition and a business rule? And what does that have to do with input validation?

Notice that in the above examples, the three preconditions I've listed are self-contained. They are statements about the object or value's constituent parts. On the other hand, the requirement that the restaurant should be able to accommodate the reservation deals with a wider context: The table layout of the restaurant, prior reservations, opening and closing times, and other business rules as well.

Validation is, as Alexis King points out, a parsing problem. You receive less-structured data (CSV, JSON, XML, etc.) and attempt to project it to a more-structured format (C# objects, F# records, Clojure maps, etc.). This succeeds when the input satisfies the preconditions, and fails otherwise.
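As a deliberately simplified sketch, such a projection might look like this. The DTO and Reservation types shown here are hypothetical stand-ins, and a realistic implementation would use applicative validation to accumulate error messages rather than give up on the first failure.

using System;

public sealed record ReservationDto(string? At, string? Email, string? Name, int Quantity);
public sealed record Reservation(DateTime At, string Email, string Name, int Quantity);

public static class ReservationParser
{
    // Projects the less-structured DTO to the more-structured Reservation,
    // returning null when the preconditions aren't satisfied.
    public static Reservation? TryParse(ReservationDto dto)
    {
        if (!DateTime.TryParse(dto.At, out var at))
            return null;
        if (dto.Email is null)
            return null;
        if (dto.Quantity < 1)
            return null;

        return new Reservation(at, dto.Email, dto.Name ?? "", dto.Quantity);
    }
}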

Why can't we add more preconditions than required? Consider Postel's law. An operation (and that includes object constructors) should be liberal in what it accepts. While you have to draw the line somewhere (you can't really work with a reservation if the date is missing), an object shouldn't require more than it needs.

In general, we observe that the fewer preconditions there are, the easier it is to create an object (or equivalent functional data structure). As a counter-example, this explains why Active Record is antithetical to unit testing. One precondition is that there's a database available, and while not impossible to automate in tests, it's quite the hassle. It's easier to work with POJOs in tests. And unit tests, being the first clients of an API, tell you how easy it is to use that API.

Contracts with third parties #

If validation is fundamentally parsing, it seems reasonable that operations should be pure functions. After all, a parser operates on unchanging (less-structured) data. A programming-language parser takes contents of text files as input. There's little need for more input than that, and the output is expected to be deterministic. Not surprisingly, Haskell is well-suited for writing parsers.

You don't, however, have to buy the argument that validation is essentially parsing, so consider another perspective.

Validation is a data transformation step you perform to deal with input. Data comes from a source external to your system. It can be a user filling in a form, another program making an HTTP request, or a batch job that receives files over FTP.

Even if you don't have a formal agreement with any third party, Hyrum's law implies that a contract does exist. It behoves you to pay attention to that, and make it as explicit as possible.

Such a contract should be stable. Third parties should be able to rely on deterministic behaviour. If they supply data one day, and you accept it, you can't reject the same data the next day on the grounds that it's malformed. At best, you may be contravariant in input as time passes; in other words, you may accept things tomorrow that you didn't accept today, but you may not reject tomorrow what you accepted today.

Likewise, you can't have validation rules that erratically accept data one minute, reject the same data the next minute, only to accept it later. This implies that validation must, at least, be deterministic: The same input should always produce the same output.

That's half of the way to referential transparency. Do you need side effects in your validation logic? Hardly, so you might as well implement it as pure functions.

Putting the cart before the horse #

You may still think that my definition smells of a solution in search of a problem. Yes, pure functions are convenient, but does it naturally follow that validation should be implemented as pure functions? Isn't this a case of poor retconning?

Two buckets, labelled 'Validation' and 'Business rules', with a 'lid' labelled 'applicative validation' conveniently fitting over the validation bucket.

When faced with the question: What is validation, and what are business rules? it's almost as though I've conveniently sized the Validation sorting bucket so that it perfectly aligns with applicative validation. Then, the Business rules bucket fits whatever is left. (In the figure, the two buckets are of equal size, which hardly reflects reality. I estimate that the Business rules bucket is much larger, but had I tried to illustrate that, too, in the figure, it would have looked off-kilter.)

This is suspiciously convenient, but consider this: My experience is that this perspective on validation works well. To a great degree, this is because I consider validation a solved problem. It's productive to be able to take a chunk of a larger problem and put it aside: We know how to deal with this. There are no risks there.

Definitions do, I believe, rarely spring fully formed from some Platonic ideal. Rather, people observe what works and eventually extract a condensed description and call it a definition. That's what I've attempted to do here.

Business rules change #

Let's return to the perspective of validation as a technical contract between your system and a third party. While that contract should be as stable as possible, business rules change.

Consider the online restaurant reservation example. Imagine that you're the third-party programmer, and that you've developed a client that can make reservations on behalf of users. When a user wants to make a reservation, there's always a risk that it's not possible. Your client should be able to handle that scenario.

Now the restaurant becomes so popular that it decides to change a rule. Earlier, you could make reservations for one, three, or five people, even though the restaurant only has tables for two, four, or six people. Based on its new-found popularity, the restaurant decides that it only accepts reservations for entire tables. Unless it's on the same day and they still have a free table.

This changes the behaviour of the system, but not the contract. A reservation for three is still valid, but will be declined because of the new rule.

"Things that change at the same rate belong together. Things that change at different rates belong apart."

Business rules change at different rates than preconditions, so it makes sense to decouple those concerns.

Conclusion #

Since validation is a solved problem, it's useful to be able to identify what is validation, and what is something else. As long as an 'input rule' is self-contained (or parametrisable), deterministic, and has no side-effects, you can model it with applicative validation.

Equally useful is it to be able to spot when applicative validation isn't a good fit. While I'm sure that someone has published a ValidationT monad transformer for Haskell, I'm not sure I would recommend going that route. In other words, if some business operation involves impure actions, it's not going to fit the mold of applicative validation.

This doesn't mean that you can't implement business rules with pure functions. You can, but in my experience, abstractions other than applicative validation are more useful in those cases.


When is an implementation detail an implementation detail?

Monday, 19 June 2023 06:10:00 UTC

On the tension between encapsulation and testability.

This article is part of a series called Epistemology of interaction testing. A previous article in the series elicited this question:

"following your suggestion, aren’t we testing implementation details?"

This frequently-asked question reminds me of an old joke. I think that I first heard it in the eighties, a time when phones had rotary dials, everyone smoked, you'd receive mail through your apartment door's letter slot, and unemployment was high. It goes like this:

A painter gets a helper from the unemployment office. A few days later the lady from the office calls the painter and apologizes deeply for the mistake.

"What mistake?"

"I'm so sorry, instead of a painter we sent you a gynaecologist. Please just let him go, we'll send you a..."

"Let him go? Are you nuts, he's my best worker! At the last job, they forgot to leave us the keys, and the guy painted the whole room through the letter slot!"

I always think of this joke when the topic is testability. Should you test everything through a system's public API, or do you choose to expose some internal APIs in order to make the code more testable?

Letter slots #

Consider the simplest kind of program you could write: Hello world. If you didn't consider automated testing, then an idiomatic C# implementation might look like this:

internal class Program
{
    private static void Main(string[] args)
    {
        Console.WriteLine("Hello, World!");
    }
}

(Yes, I know that with modern C# you can write such a program using a single top-level statement, but I'm writing for a broader audience, and only use C# as an example language.)

How do we test a program like that? Of course, no-one seriously suggests that we really need to test something that simple, but what if we make it a little more complex? What if we make it possible to supply a name as a command-line argument? What if we want to internationalise the program? What if we want to add a help feature? What if we want to add a feature so that we can send a hello to another recipient, on another machine? When does the behaviour become sufficiently complex to warrant automated testing, and how do we achieve that goal?

For now, I wish to focus on how to achieve the goal of testing software. For the sake of argument, then, assume that we want to test the above hello world program.

As given, we can run the program and verify that it prints Hello, World! to the console. This is easy to do as a manual test, but harder if you want to automate it.

You could write a test framework that automatically starts a new operating-system process (the program) and waits until it exits. This framework should be able to handle processes that exit with success and failure status codes, as well as processes that hang, or never start, or keep restarting... Such a framework also requires a way to capture the standard output stream in order to verify that the expected text is written to it.
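For illustration only, the core of such an out-of-process test might look like this rough sketch, assuming xUnit.net and a hypothetical path to the compiled program. A real framework would also need timeouts and handling of processes that hang or never start.

using System.Diagnostics;
using Xunit;

public class HelloWorldProcessTests
{
    [Fact]
    public void ProgramWritesHelloWorldToStandardOutput()
    {
        var startInfo = new ProcessStartInfo("./HelloWorld") // Hypothetical path.
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using var process = Process.Start(startInfo)!;
        string output = process.StandardOutput.ReadToEnd();
        process.WaitForExit();

        Assert.Equal(0, process.ExitCode);
        Assert.Contains("Hello, World!", output);
    }
}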

I'm sure such frameworks exist for various operating systems and programming languages. There is, however, a simpler solution if you can live with the trade-off: You could open the API of your source code a bit:

public class Program
{
    public static void Main(string[] args)
    {
        Console.WriteLine("Hello, World!");
    }
}

While I haven't changed the structure or the layout of the source code, I've made both class and method public. This means that I can now write a normal C# unit test that calls Program.Main.

I still need a way to observe the behaviour of the program, but there are known ways of redirecting the Console output in .NET (and I'd be surprised if that wasn't the case on other platforms and programming languages).
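As a sketch of what that enables, a test could redirect Console.Out and call the now-public Main method directly. I'm assuming xUnit.net here; also note that the console is process-global state, so tests like this shouldn't run in parallel with other console-writing tests.

using System;
using System.IO;
using Xunit;

public class ProgramTests
{
    [Fact]
    public void MainWritesHelloWorld()
    {
        using var writer = new StringWriter();
        Console.SetOut(writer);

        Program.Main(Array.Empty<string>());

        // Console.WriteLine appends a newline to the message.
        Assert.Equal("Hello, World!" + Environment.NewLine, writer.ToString());
    }
}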

As we add more and more features to the command-line program, we may be able to keep testing by calling Program.Main and asserting against the redirected Console. As the complexity of the program grows, however, this starts to look like painting a room through the letter slot.

Adding new APIs #

Real programs are usually more than just a command-line utility. They may be smartphone apps that react to user input or network events, or web services that respond to HTTP requests, or complex asynchronous systems that react to, and send messages over durable queues. Even good old batch jobs are likely to pull data from files in order to write to a database, or the other way around. Thus, the interface to the rest of the world is likely larger than just a single Main method.

Smartphone apps or message-based systems have event handlers. Web sites or services have classes, methods, or functions that handle incoming HTTP requests. These are essentially event handlers, too. This increases the size of the 'test surface': There's more than a single method that you can invoke in order to exercise the system.

Even so, a real program will soon grow to a size where testing entirely through the real-world-facing API becomes reminiscent of painting through a letter slot. J.B. Rainsberger explains that one major problem is the combinatorial explosion of required test cases.

Another problem is that the system may produce side effects that you care about. As a basic example, consider a system that, as part of its operation, sends emails. When testing this system, you want to verify that under certain circumstances, the system sends certain emails. How do you do that?

If the system has absolutely no concessions to testability, I can think of two options:

  • You contact the person to whom the system sends the email, and ask him or her to verify receipt of the email. You do that every time you test.
  • You deploy the System Under Test in an environment with an SMTP gateway that redirects all email to another address.

Clearly the first option is unrealistic. The second option is a little better, but you still have to open an email inbox and look for the expected message. Doing so programmatically is, again, technically possible, and I'm sure that there are POP3 or IMAP assertion libraries out there. Still, this seems complicated, error-prone, and slow.

What could we do instead? I would usually introduce a polymorphic interface such as IPostOffice as a way to substitute the real SmtpPostOffice with a Test Double.
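The exact shape of such an interface depends on the application; the members in this sketch are only assumptions for illustration. The point is that a Test Double can record the messages so that tests can assert against them, instead of anything being sent over SMTP.

using System.Collections.Generic;
using System.Threading.Tasks;

public interface IPostOffice
{
    Task Send(string subject, string body, string recipientAddress);
}

public sealed class SpyPostOffice : IPostOffice
{
    public List<(string Subject, string Body, string Recipient)> Sent { get; } = new();

    public Task Send(string subject, string body, string recipientAddress)
    {
        Sent.Add((subject, body, recipientAddress));
        return Task.CompletedTask;
    }
}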

Notice what happens in these cases: We introduce (or make public) new APIs in order to facilitate automated testing.

Application-boundary API and internal APIs #

It's helpful to distinguish between the real-world-facing API and everything else. In this diagram, I've indicated the public-facing API as a thin green slice facing upwards (assuming that external stimulus - button clicks, HTTP requests, etc. - arrives from above).

A box depicting a program, with a small green slice indicating public-facing APIs, and internal blue slices indicating internal APIs.

The real-world-facing API is the code that must be present for the software to work. It could be a button-click handler or an ASP.NET action method:

[HttpPost("restaurants/{restaurantId}/reservations")]
public async Task<ActionResult> Post(int restaurantId, ReservationDto dto)

Of course, if you're using another web framework or another programming language, the details differ, but the application has to have code that handles an HTTP POST request on matching addresses. Or a button click, or a message that arrives on a message bus. You get the point.

These APIs are fairly fixed. If you change them, you change the externally observable behaviour of the system. Such changes are likely breaking changes.

Based on which framework and programming language you're using, the shape of these APIs will be given. Like I did with the above Main method, you can make it public and use it for testing.

A software system of even middling complexity will usually also be decomposed into smaller components. In the figure, I've indicated such subdivisions as boxes with gray outlines. Each of these may present an API to other parts of the system. I've indicated these APIs with light blue.

The total size of internal APIs is likely to be larger than the public-facing API. On the other hand, you can (theoretically) change these internal interfaces without breaking the observable behaviour of the system. This is called refactoring.

These internal APIs will often have public access modifiers. That doesn't make them real-world-facing. Be careful not to confuse programming-language access modifiers with architectural concerns. Objects or their members can have public access modifiers even if the object plays an exclusively internal role. At the boundaries, applications aren't object-oriented. And neither are they functional.

Likewise, as the original Main method example shows, public APIs may be implemented with a private access modifier.

Why do such internal APIs exist? Is it only to support automated testing?

Decomposition #

If we introduce new code, such as the above IPostOffice interface, in order to facilitate testing, we have to be careful that it doesn't lead to test-induced design damage. The idea that one might introduce an API exclusively to support automated testing rubs some people the wrong way.

On the other hand, we do introduce (or make public) APIs for other reasons, too. One common reason is that we want to decompose an application's source code so that parallel development is possible. One person (or team) works on one part, and other people work on other parts. If those parts need to communicate, we need to agree on a contract.

Such a contract exists for purely internal reasons. End users don't care, and never know of it. You can change it without impacting users, but you may need to coordinate with other teams.

What remains, though, is that we do decompose systems into internal parts, and we've done this since before Parnas wrote On the Criteria to Be Used in Decomposing Systems into Modules.

Successful test-driven development introduces seams where they ought to be in any case.

Testing implementation details #

An internal seam is an implementation detail. Even so, when designed with care, it can serve multiple purposes. It enables teams to develop in parallel, and it enables automated testing.

Consider the example from a previous article in this series. I'll repeat one of the tests here:

[Theory]
[AutoData]
public void HappyPath(string state, string code, (string, bool, Uri) knownState, string response)
{
    _repository.Add(state, knownState);
    _stateValidator
        .Setup(validator => validator.Validate(code, knownState))
        .Returns(true);
    _renderer
        .Setup(renderer => renderer.Success(knownState))
        .Returns(response);
 
    _target
        .Complete(state, code)
        .Should().Be(response);
}

This test exercises a happy-path case by manipulating IStateValidator and IRenderer Test Doubles. It's a common approach to testability, and what dhh would label test-induced design damage. While I'm sympathetic to that position, that's not my point. My point is that I consider IStateValidator and IRenderer internal APIs. End users (who probably don't even know what C# is) don't care about these interfaces.

Tests like these test against implementation details.

This need not be a problem. If you've designed good, stable seams then these tests can serve you for a long time. Testing against implementation details becomes a problem if those details change. Since it's hard to predict how things change in the future, it behoves us to decouple tests from implementation details as much as possible.

The alternative, however, is mail-slot testing, which comes with its own set of problems. Thus, judicious introduction of seams is helpful, even if it couples tests to implementation details.

Actually, in the question I quoted above, Christer van der Meeren asked whether my proposed alternative isn't testing implementation details. And, yes, that style of testing also relies on implementation details for testing. It's just a different way to design seams. Instead of designing seams around polymorphic objects, we design them around pure functions and immutable data.

There are, I think, advantages to functional programming, but when it comes to relying on implementation details, it's only on par with object-oriented design. Not worse, not better, but the same.

Conclusion #

Every API in use carries a cost. You need to keep the API stable so that users can use it tomorrow like they did yesterday. This can make it difficult to evolve or improve an API, because you risk introducing a breaking change.

There are APIs that a system must have. Software exists to be used, and whether that entails a user clicking on a button or another computer system sending a message to your system, your code must handle such stimulus. This is your real-world-facing contract, and you need to be careful to keep it consistent. The smaller that surface area is, the simpler that task is.

The same line of reasoning applies to internal APIs. While end users aren't impacted by changes in internal seams, other code is. If you change an implementation detail, this could cost maintenance work somewhere else. (Modern IDEs can handle some changes like that automatically, such as method renames. In those cases, the cost of change is low.) Therefore, it pays to minimise the internal seams as much as possible. One way to do this is by decoupling to delete code.

Still, some internal APIs are warranted. They help you decompose a large system into smaller subparts. While there's a potential maintenance cost with every internal API, there's also the advantage of working with smaller, independent units of code. Often, the benefits are larger than the cost.

When done well, such internal seams are useful testing APIs as well. They're still implementation details, though.


Collatz sequences by function composition

Monday, 12 June 2023 05:27:00 UTC

Mostly in C#, with a few lines of Haskell code.

A recent article elicited more comments than usual, and I've been so unusually buried in work that only now do I have a little time to respond to some of them. In one comment Struan Judd offers a refactored version of my Collatz sequence in order to shed light on the relationship between cyclomatic complexity and test case coverage.

Struan Judd's agenda is different from what I have in mind in this article, but the comment inspired me to refactor my own code. I wanted to see what it would look like with this constraint: It should be possible to test odd input numbers without exercising the code branches related to even numbers.

The problem with more naive implementations of Collatz sequence generators is that (apart from when the input is 1) the sequence ends with a tail of even numbers halving down to 1. I'll start with a simple example to show what I mean.

Standard recursion #

At first I thought that my confusion originated from the imperative structure of the original example. For more than a decade, I've preferred functional programming (FP), and even when I write object-oriented code, I tend to use concepts and patterns from FP. Thus I, naively, rewrote my Collatz generator as a recursive function:

public static IReadOnlyCollection<int> Sequence(int n)
{
    if (n < 1)
        throw new ArgumentOutOfRangeException(
            nameof(n),
            $"Only natural numbers allowed, but given {n}.");
 
    if (n == 1)
        return new[] { n };
    else
        if (n % 2 == 0)
            return new[] { n }.Concat(Sequence(n / 2)).ToArray();
        else
            return new[] { n }.Concat(Sequence(n * 3 + 1)).ToArray();
}

Recursion is usually not recommended in C#, because a sufficiently long sequence could blow the call stack. I wouldn't write production C# code like this, but you could do something like this in F# or Haskell where the languages offer solutions to that problem. In other words, the above example is only for educational purposes.

It doesn't, however, solve the problem that confused me: If you want to test the branch that deals with odd numbers, you can't avoid also exercising the branch that deals with even numbers.

Calculating the next value #

In functional programming, you solve most problems by decomposing them into smaller problems and then compose the smaller Lego bricks with standard combinators. It seemed like a natural refactoring step to first pull the calculation of the next value into an independent function:

public static int Next(int n)
{
    if ((n % 2) == 0)
        return n / 2;
    else
        return n * 3 + 1;
}

This function has a cyclomatic complexity of 2 and no loops or recursion. Test cases that exercise the even branch never touch the odd branch, and vice versa.

A parametrised test might look like this:

[Theory]
[InlineData( 2,  1)]
[InlineData( 3, 10)]
[InlineData( 4,  2)]
[InlineData( 5, 16)]
[InlineData( 6,  3)]
[InlineData( 7, 22)]
[InlineData( 8,  4)]
[InlineData( 9, 28)]
[InlineData(10,  5)]
public void NextExamples(int n, int expected)
{
    int actual = Collatz.Next(n);
    Assert.Equal(expected, actual);
}

The NextExamples test obviously defines more than the two test cases that are required to cover the Next function, but since code coverage shouldn't be used as a target measure, I felt that more than two test cases were warranted. This often happens, and should be considered normal.

A Haskell proof of concept #

While I had a general idea about the direction in which I wanted to go, I felt that I lacked some standard functional building blocks in C#: Most notably an infinite, lazy sequence generator. Before moving on with the C# code, I threw together a proof of concept in Haskell.

The next function is just a one-liner (if you ignore the optional type declaration):

next :: Integral a => a -> a
next n = if even n then n `div` 2 else n * 3 + 1

A few examples in GHCi suggest that it works as intended:

ghci> next 2
1
ghci> next 3
10
ghci> next 4
2
ghci> next 5
16

Haskell comes with enough built-in functions that that was all I needed to implement a Collatz-sequence generator:

collatz :: Integral a => a -> [a]
collatz n = (takeWhile (1 <) $ iterate next n) ++ [1]

Again, a few examples suggest that it works as intended:

ghci> collatz 1
[1]
ghci> collatz 2
[2,1]
ghci> collatz 3
[3,10,5,16,8,4,2,1]
ghci> collatz 4
[4,2,1]
ghci> collatz 5
[5,16,8,4,2,1]

I should point out, for good measure, that since this is a proof of concept I didn't add a Guard Clause against zero or negative numbers. I'll keep that in the C# code.

Generator #

While C# does come with a TakeWhile function, there's no direct equivalent to Haskell's iterate function. It's not difficult to implement, though:

public static IEnumerable<T> Iterate<T>(Func<T, T> f, T x)
{
    var current = x;
    while (true)
    {
        yield return current;
        current = f(current);
    }
}

While this Iterate implementation has a cyclomatic complexity of only 2, it exhibits the same kind of problem as the previous attempts at a Collatz-sequence generator: You can't test one branch without testing the other. Here, it even seems as though it's impossible to test the branch that skips the loop.

In Haskell the iterate function is simply a lazily-evaluated recursive function, but that's not going to solve the problem in the C# case. On the other hand, it helps to know that the yield keyword in C# is just syntactic sugar over a compiler-generated Iterator.

Just for the exercise, then, I decided to write an explicit Iterator instead.

Iterator #

For the sole purpose of demonstrating that it's possible to refactor the code so that branches are independent of each other, I rewrote the Iterate function to return an explicit IEnumerable<T>:

public static IEnumerable<T> Iterate<T>(Func<T, T> f, T x)
{
    return new Iterable<T>(f, x);
}

The Iterable<T> class is a private helper class, and only exists to return an IEnumerator<T>:

private sealed class Iterable<T> : IEnumerable<T>
{
    private readonly Func<T, T> f;
    private readonly T x;
 
    public Iterable(Func<T, T> f, T x)
    {
        this.f = f;
        this.x = x;
    }
 
    public IEnumerator<T> GetEnumerator()
    {
        return new Iterator<T>(f, x);
    }
 
    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}

The Iterator<T> class does the heavy lifting:

private sealed class Iterator<T> : IEnumerator<T>
{
    private readonly Func<T, T> f;
    private readonly T original;
    private bool iterating;
 
    internal Iterator(Func<T, T> f, T x)
    {
        this.f = f;
        original = x;
        Current = x;
    }
 
    public T Current { get; private set; }
 
    [MaybeNull]
    object IEnumerator.Current => Current;
 
    public void Dispose()
    {
    }
 
    public bool MoveNext()
    {
        if (iterating)
            Current = f(Current);
        else
            iterating = true;
        return true;
    }
 
    public void Reset()
    {
        Current = original;
        iterating = false;
    }
}

I can't think of a situation where I would write code like this in a real production code base. Again, I want to stress that this is only an exploration of what's possible. What this does show is that all members have low cyclomatic complexity, and none of them involve looping or recursion. Only one method, MoveNext, has a cyclomatic complexity greater than one, and its branches are independent.

Composition #

All Lego bricks are now in place, enabling me to compose the Sequence like this:

public static IReadOnlyCollection<int> Sequence(int n)
{
    if (n < 1)
        throw new ArgumentOutOfRangeException(
            nameof(n),
            $"Only natural numbers allowed, but given {n}.");
 
    return Generator.Iterate(Next, n).TakeWhile(i => 1 < i).Append(1).ToList();
}

This function has a cyclomatic complexity of 2, and each branch can be exercised independently of the other.

Which is what I wanted to accomplish.

Conclusion #

I'm still re-orienting myself when it comes to understanding the relationship between cyclomatic complexity and test coverage. As part of that work, I wanted to refactor the Collatz code I originally showed. This article shows one way to decompose and reassemble the function in such a way that all branches are independent of each other, so that each can be covered by test cases without exercising the other branch.

I don't know if this is useful to anyone else, but I found the hours well-spent.


Comments

I really like this article. So much so that I tried to implement this approach for a recursive function at my work. However, I realized that there are some required conditions.

First, the recursive function must be tail recursive. Second, the recursive function must be closed (i.e. the output is a subset/subtype of the input). Neither of those were true for my function at work. An example of a function that doesn't satisfy either of these conditions is the function that computes the depth of a tree.

A less serious issue is that your code, as currently implemented, requires that there only be one base case value. The issue is that you have duplicated code: the unique base case value appears both in the call to TakeWhile and in the subsequent call to Append. Instead of repeating yourself, I recommend defining an extension method on Enumerable called TakeUntil that works like TakeWhile but also returns the first value on which the predicate returned false. Here is an implementation of that extension method.
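A sketch of such a TakeUntil extension method, under the semantics described above (the implementation linked from the original comment may differ in details). With it, the Sequence composition shown earlier could read Generator.Iterate(Next, n).TakeUntil(i => 1 < i).ToList().

using System;
using System.Collections.Generic;

public static class EnumerableExtensions
{
    // Like TakeWhile, but also yields the first element for which the
    // predicate no longer holds.
    public static IEnumerable<T> TakeUntil<T>(
        this IEnumerable<T> source,
        Func<T, bool> predicate)
    {
        foreach (var item in source)
        {
            yield return item;
            if (!predicate(item))
                yield break;
        }
    }
}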

2023-06-22 13:45 UTC

Tyson, thank you for writing. I suppose that you can't share the function that you mention, so I'll have to discuss it in general terms.

As far as I can tell you can always(?) refactor non-tail-recursive functions to tail-recursive implementations. In practice, however, there's rarely need for that, since you can usually separate the problem into a general-purpose library function on the one hand, and your special function on the other. Examples of general-purpose functions are the various maps and folds. If none of the standard functions do the trick, the type's associated catamorphism ought to.

One example of that is computing the depth of a tree, which we've already discussed.

I don't insist that any of this is universally true, so if you have another counter-example, I'd be keen to see it.

You are, of course, right about using a TakeUntil extension instead. I was, however, trying to use as many built-in components as possible, so as to not unduly confuse casual readers.

2023-06-27 12:35 UTC

The Git repository that vanished

Monday, 05 June 2023 06:38:00 UTC

A pair of simple operations resurrected it.

The other day I had an 'interesting' experience. I was about to create a small pull request, so I checked out a new branch in Git and switched to my editor in order to start coding when the battery on my laptop died.

Clearly, when this happens, the computer immediately stops, without any graceful shutdown.

I plugged in the laptop and booted it. When I navigated to the source code folder I was working on, the files were there, but it was no longer a Git repository!

Git is fixable #

Git is more complex, and more powerful, than most developers care to deal with. Over the years, I've observed hundreds of people interact with Git in various ways, and most tend to give up at the first sign of trouble.

The point of this article isn't to point fingers at anyone, but rather to serve as a gentle reminder that Git tends to be eminently fixable.

Often, when people run into problems with Git, their only recourse is to delete the repository and clone it again. I've seen people do that enough times to realise that it might be helpful to point out: You may not have to do that.

Corruption #

Since I use Git tactically I have many repositories on my machine that have no remotes. In those cases, deleting the entire directory and cloning it from the remote isn't an option. I do take backups, though.

Still, in this story, the repository I was working with did have a remote. Even so, I was reluctant to delete everything and start over, since I had multiple branches and stashes I'd used for various experiments. Many of those I'd never pushed to the remote, so starting over would mean that I'd lose all of that. It was, perhaps, not a catastrophe, but I would certainly prefer to restore my local repository, if possible.

The symptoms were these: When you work with Git in Git Bash, the prompt will indicate which branch you're on. That information was absent, so I was already worried. A quick query confirmed my fears:

$ git status
fatal: not a git repository (or any of the parent directories): .git

All the source code was there, but it looked as though the Git repository was gone. The code still compiled, but there was no source history.

Since all code files were there, I had hope. It helps knowing that Git, too, is file-based, and all files are in a hidden directory called .git. If all the source code was still there, perhaps the .git files were there, too. Why wouldn't they be?

$ ls .git
COMMIT_EDITMSG  description  gitk.cache  hooks/  info/  modules/        objects/   packed-refs
config          FETCH_HEAD   HEAD        index   logs/  ms-persist.xml  ORIG_HEAD  refs/

Jolly good! The .git files were still there.

I now had a hypothesis: The unexpected shutdown of my machine had left some 'dangling pointers' in .git. A modern operating system may delay writes to disk, so perhaps my git checkout command had never made it all the way to disk - or, at least, not all of it.

If the repository was 'merely' corrupted in the sense that a few of the reference pointers had gone missing, perhaps it was fixable.

Empty-headed #

A few web searches indicated that the problem might be with the HEAD file, so I investigated its contents:

$ cat .git/HEAD

That was all. No output. The HEAD file was empty.

That file is not supposed to be empty. It's supposed to contain a commit ID or a reference that tells the Git CLI what the current head is - that is, which commit is currently checked out.

While I had checked out a new branch when my computer shut down, I hadn't written any code yet. Thus, the easiest remedy would be to restore the head to master. So I opened the HEAD file in Vim and added this to it:

ref: refs/heads/master

And just like that, the entire Git repository returned!

Bad object #

The branches, the history, everything looked as though it was restored. A little more investigation, however, revealed one more problem:

$ git log --oneline --all
fatal: bad object refs/heads/some-branch

While a normal git log command worked fine, as soon as I added the --all switch, I got that bad object error message, with the name of the branch I had just created before the computer shut down. (The name of that branch wasn't some-branch - that's just a surrogate I'm using for this article.)

Perhaps this was the same kind of problem, so I explored the .git directory further and soon discovered a some-branch file in .git/refs/heads/. What did the contents look like?

$ cat .git/refs/heads/some-branch

Another empty file!

Since I had never committed any work to that branch, the easiest fix was to simply delete the file:

$ rm .git/refs/heads/some-branch

That solved that problem as well. No more fatal: bad object error when using the --all switch with git log.

No more problems have shown up since then.

Conclusion #

My experience with Git is that it's so powerful that you can often run into trouble. On the other hand, it's also so powerful that you can also use it to extricate yourself from trouble. Learning how to do that will teach you how to use Git to your advantage.

The problem that I ran into here wasn't fixable with the Git CLI itself, but turned out to still be easily remedied. A Git guru like Enrico Campidoglio could most likely have solved my problems without even searching the web. The details of how to solve the problems were new to me, but it took me a few web searches and perhaps five to ten minutes to fix them.

The point of this article, then, isn't in the details. It's that it pays to do a little investigation when you run into problems with Git. I already knew that, but I thought that this little story was a good occasion to share that knowledge.


Favour flat code file folders

Monday, 29 May 2023 19:20:00 UTC

How code files are organised is hardly related to sustainability of code bases.

My recent article Folders versus namespaces prompted some reactions. A few kind people shared how they organise code bases, both on Twitter and in the comments. Most reactions, however, carry the (subliminal?) subtext that organising code in file folders is how things are done.

I'd like to challenge that notion.

As is usually my habit, I mostly do this to make you think. I don't insist that I'm universally right in all contexts, and that everyone else is wrong. I only write to suggest that alternatives exist.

The previous article wasn't a recommendation; it was only an exploration of an idea. As I describe in Code That Fits in Your Head, I recommend flat folder structures. Put most code files in the same directory.

Finding files #

People usually dislike that advice. How can I find anything?!

Let's start with a counter-question: How can you find anything if you have a deep file hierarchy? If you've organised code files in subfolders of subfolders of folders, you typically start with a collapsed view of the tree.

Mostly-collapsed Solution Explorer tree.

Those of my readers who know a little about search algorithms will point out that a search tree is an efficient data structure for locating content. The assumption, however, is that you already know (or can easily construct) the path you should follow.

In a view like the above, most files are hidden in one of the collapsed folders. If you want to find, say, the Iso8601.cs file, where do you look for it? Which path through the tree do you take?

Unfair!, you protest. You don't know what the Iso8601.cs file does. Let me enlighten you: That file contains functions that render dates and times in ISO 8601 formats. These are used to transmit dates and times between systems in a platform-neutral way.

So where do you look for it?

It's probably not in the Controllers or DataAccess directories. Could it be in the Dtos folder? Rest? Models?

Unless your first guess is correct, you'll have to open more than one folder before you find what you're looking for. If each of these folders has subfolders of its own, that only exacerbates the problem.

If you're curious, some programmer (me) decided to put the Iso8601.cs file in the Dtos directory, and perhaps you already guessed that. That's not the point, though. The point is this: 'Organising' code files in folders is only efficient if you can unerringly predict the correct path through the tree. You'll have to get it right the first time, every time. If you don't, it's not the most efficient way.

Most modern code editors come with features that help you locate files. In Visual Studio, for example, you just hit Ctrl+, and type a bit of the file name: iso:

Visual Studio Go To All dialog.

Then hit Enter to open the file. In Visual Studio Code, the corresponding keyboard shortcut is Ctrl+p, and I'd be highly surprised if other editors didn't have a similar feature.

To conclude, so far: Organising files in a folder hierarchy is at best on par with your editor's built-in search feature, but is likely to be less productive.

Navigating a code base #

What if you don't quite know the name of the file you're looking for? In such cases, the file system is even less helpful.

I've seen people work like this:

  1. Look at some code. Identify another code item they'd like to view. (Examples may include: Looking at a unit test and wanting to see the SUT, or looking at a class and wanting to see the base class.)
  2. Move focus to the editor's folder view (in Visual Studio called the Solution Explorer).
  3. Scroll to find the file in question.
  4. Double-click said file.

Regardless of how the files are organised, you could, instead, go to definition (F12 with my Visual Studio keyboard layout) in a single action. Granted, how well this works varies with editor and language. Still, even when editor support is less optimal (e.g. a code base with a mix of F# and C#, or a Haskell code base), I can often find things faster with a search (Ctrl+Shift+f) than via the file system.

A modern editor has efficient tools that can help you find what you're looking for. Looking through the file system is often the least efficient way to find the code you're looking for.

Large code bases #

Do I recommend that you dump thousands of code files in a single directory, then?

Hardly, but a question like that presupposes that code bases have thousands of code files. Or more, even. And I've seen such code bases.

Likewise, it's a common complaint that Visual Studio is slow when opening solutions with hundreds of projects. And the day Microsoft fixes that problem, people are going to complain that it's slow when opening a solution with thousands of projects.

Again, there's an underlying assumption: That a 'real' code base must be so big.

Consider alternatives: Could you decompose the code base into multiple smaller code bases? Could you extract subsystems of the code base and package them as reusable packages? Yes, you can do all those things.

Usually, I'd pull code bases apart long before they hit a thousand files. Extract modules, libraries, utilities, etc. and put them in separate code bases. Use existing package managers to distribute these smaller pieces of code. Keep the code bases small, and you don't need to organise the files.

Maintenance #

But, if all files are mixed together in a single folder, how do we keep the code maintainable?

Once more, implicit (but false) assumptions underlie such questions. The assumption is that 'neatly' organising files in hierarchies somehow makes the code easier to maintain. Really, though, it's more akin to a teenager who 'cleans' his room by sweeping everything off the floor only to throw it into his cupboard. It does enable hoovering the floor, but it doesn't make it easier to find anything. The benefit is mostly superficial.

Still, consider a tree.

A tree of folders with files.

This may not be the way you're used to seeing files and folders rendered, but this diagram emphasises the tree structure and makes what happens next stand out more starkly.

The way that most languages work, putting code files in folders makes little difference to the compiler. If the classes in my Controllers folder need some classes from the Dtos folder, you just use them. You may need to import the corresponding namespace, but modern editors make that a breeze.
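
For instance, consider this hypothetical sketch. It's not code from any real project, and the type names are only for illustration, but it shows a class in a Controllers folder using a type from a Dtos folder; the only requirement is that the namespace is in scope:

// Dtos/CalendarDto.cs
namespace Ploeh.Samples.Restaurants.RestApi.Dtos
{
    public class CalendarDto
    {
        public int Year { get; set; }
        public int Month { get; set; }
    }
}
 
// Controllers/CalendarController.cs
namespace Ploeh.Samples.Restaurants.RestApi.Controllers
{
    using Ploeh.Samples.Restaurants.RestApi.Dtos;
 
    public class CalendarController
    {
        // CalendarDto is defined in another folder (and namespace), but the
        // compiler doesn't care about directories; only the namespace matters.
        public CalendarDto GetCurrentMonth()
        {
            return new CalendarDto { Year = 2023, Month = 5 };
        }
    }
}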

A tree of folders with files. Two files connect across the tree's branches.

In the above tree, the two files that now communicate are coloured orange. Notice that they span two main branches of the tree.

Thus, even though the files are organised in a tree, it has no impact on the maintainability of the code base. Code can reference other code in other parts of the tree. You can easily create cycles in a language like C#, and organising files in trees makes no difference.
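
To see how easy that is, here's another sketch with made-up names. Both types live in the same assembly, in different namespaces (and, typically, different folders), and nothing stops them from referencing each other:

namespace Ploeh.Samples.Folders.Controllers
{
    public class OrdersController
    {
        // References a type from the Dtos namespace...
        public void Post(Ploeh.Samples.Folders.Dtos.OrderDto dto)
        {
        }
    }
}
 
namespace Ploeh.Samples.Folders.Dtos
{
    public class OrderDto
    {
        // ...which references right back: a cycle, and the compiler accepts it.
        public void ApplyTo(Ploeh.Samples.Folders.Controllers.OrdersController controller)
        {
        }
    }
}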

Most languages, however, enforce that library dependencies form a directed acyclic graph (i.e. if the data access library references the domain model, the domain model can't reference the data access library). The C# compiler, like the compilers of most other languages, enforces what Robert C. Martin calls the Acyclic Dependencies Principle. Preventing cycles prevents spaghetti code, which is key to a maintainable code base.

(Ironically, one of the more controversial features of F# is actually one of its greatest strengths: It doesn't allow cycles.)

Tidiness #

Even so, I do understand the lure of organising code files in an elaborate hierarchy. It looks so neat.

Previously, I've touched on the related topic of consistency, and while I'm a bit of a neat freak myself, I have to realise that tidiness seems to be largely unrelated to the sustainability of a code base.

As another example in this category, I've seen more than one code base with consistently beautiful documentation. Every method was adorned with formal XML documentation with every input parameter as well as output described.

Every new phase in a method was delineated with another neat comment, nicely adorned with a 'comment frame' and aligned with other comments.

It was glorious.

Alas, that documentation sat on top of 750-line methods with a cyclomatic complexity above 50. The methods were so long that developers had to introduce artificial variable scopes to avoid naming collisions.

The reason I was invited to look at that code in the first place was that the organisation had trouble with maintainability, and they asked me to help.

It was neat, yet unmaintainable.

This discussion about tidiness may seem like a digression, but I think it's important to make the implicit explicit. If I'm not much mistaken, preference for order is a major reason that so many developers want to organise code files into hierarchies.

Organising principles #

What other motivations for file hierarchies could there be? How about the directory structure as an organising principle?

The two most common organising principles are those that I experimented with in the previous article:

  1. By technical role (Controller, View Model, DTO, etc.)
  2. By feature

A technical leader might hope that presenting a directory structure to team members imparts an organising principle on the code to be written.

It may even do so, but is that actually a benefit?

It might subtly discourage developers from introducing code that doesn't fit into the predefined structure. If you organise code by technical role, developers might put most code in Controllers, producing mostly procedural Transaction Scripts. If you organise by feature, this might encourage duplication because developers don't have a natural place to put general-purpose code.

You can put truly shared code in the root folder, the counter-argument might be. This is true, but:

  1. This seems to be implicitly discouraged by the folder structure. After all, the hierarchy is there for a reason, right? Thus, any file you place in the root seems to suggest a failure of organisation.
  2. On the other hand, if you flout that not-so-subtle hint and put many code files in the root, what advantage does the hierarchy furnish?

In Information Distribution Aspects of Design Methodology David Parnas writes about documentation standards:

"standards tend to force system structure into a standard mold. A standard [..] makes some assumptions about the system. [...] If those assumptions are violated, the [...] organization fits poorly and the vocabulary must be stretched or misused."

David Parnas, Information Distribution Aspects of Design Methodology

(The above quote is on the surface about documentation standards, and I've deliberately butchered it a bit (clearly marked) to make it easier to spot the more general mechanism.)

In the same paper, Parnas describes the danger of making hard-to-change decisions too early. Applied to directory structure, the lesson is that you should postpone designing a file hierarchy until you know more about the problem. Start with a flat directory structure and add folders later, if at all.

Beyond files? #

My claim is that you don't need much in the way of directory hierarchy. From this it doesn't follow, however, that we may never leverage such options. Even though I left most of the example code for Code That Fits in Your Head in a single folder, I did add a specialised folder as an anti-corruption layer. Folders do have their uses.

"Why not take it to the extreme and place most code in a single file? If we navigate by "namespace view" and search, do we need all those files?"

Following a thought to its extreme end can shed light on a topic. Why not, indeed, put all code in a single file?

Curious thought, but possibly not new. I've never programmed in Smalltalk, but as I understand it, the language came with tooling that was both IDE and execution environment. Programmers would write source code in the editor, but although the code was persisted to disk, it may not have been as text files.

Even if I completely misunderstand how Smalltalk worked, it's not inconceivable that you could have a development environment based directly on a database. Not that I think that this sounds like a good idea, but it sounds technically possible.

Whether we do it one way or another seems mostly to be a question of tooling. What problems would you have if you wrote an entire C# (Java, Python, F#, or similar) code base as a single file? It becomes more difficult to look at two or more parts of the code base at the same time. Still, Visual Studio can actually give you split windows of the same file, but I don't know how it scales if you need multiple views over the same huge file.

Conclusion #

I recommend flat directory structures for code files. Put most code files in the root of a library or app. Of course, if your system is composed from multiple libraries (dependencies), each library has its own directory.

Subfolders aren't prohibited, only generally discouraged. Legitimate reasons to create subfolders may emerge as the code base evolves.

My misgivings about code file directory hierarchies mostly stem from the impact they have on developers' minds. This may manifest as magical thinking or cargo-cult programming: Erect elaborate directory structures to keep out the evil spirits of spaghetti code.

It doesn't work that way.


Visual Studio Code snippet to make URLs relative

Tuesday, 23 May 2023 19:23:00 UTC

Yes, it involves JSON and regular expressions.

Ever since I migrated the blog off dasBlog I've been writing the articles in raw HTML. The reason is mostly a historical artefact: Originally, I used Windows Live Writer, but Jekyll had no support for that, and since I'd been doing web development for more than a decade already, raw HTML seemed like a reliable and durable alternative. I increasingly find that relying on skill and knowledge is a far more durable strategy than relying on technology.

For a decade I used Sublime Text to write articles, but over the years, I found it degrading in quality. I only used Sublime Text to author blog posts, so when I recently repaved my machine, I decided to see if I could do without it.

Since I was already using Visual Studio Code for much of my programming, I decided to give it a go for articles as well. It always takes time when you decide to move off a tool you've been using for a decade, but after some initial frustrations, I quickly found a new modus operandi.

One benefit of rocking the boat is that it prompts you to reassess the way you do things. Naturally, this happened here as well.

My quest for relative URLs #

I'd been using a few Sublime Text snippets to automate a few things, like the markup for the section heading you see above this paragraph. Figuring out how to replicate that snippet in Visual Studio Code wasn't too hard, but as I was already perusing the snippet documentation, I started investigating other options.

One little annoyance I'd lived with for years was adding links to other articles on the blog.

While I write an article, I run the site on my local machine. When linking to other articles, I sometimes use the existing page address off the public site, and sometimes I just copy the page address from localhost. In both cases, I want the URL to be relative so that I can navigate the site even if I'm offline. I've written enough articles on planes or while travelling without internet that this is an important use case for me.

For example, if I want to link to the article Adding NuGet packages when offline, I want the URL to be /2023/01/02/adding-nuget-packages-when-offline, but that's not the address I get when I copy from the browser's address bar. Here, I get the full URL, with either http://localhost:4000/ or https://blog.ploeh.dk/ as the origin.

For years, I've been manually stripping the origin away, as well as the trailing /. Looking through the Visual Studio Code snippet documentation, however, I eyed an opportunity to automate that workflow.

Snippet #

I wanted a piece of editor automation that could modify a URL after I'd pasted it into the article. After a few iterations, I've settled on a surround-with snippet that works pretty well. It looks like this:

"Make URL relative": {
  "prefix": "urlrel",
  "body": [ "${TM_SELECTED_TEXT/^(?:http(?:s?):\\/\\/(?:[^\\/]+))(.+)\\//$1/}" ],
  "description": "Make URL relative."
}

Don't you just love regular expressions? Write once, scrutinise forever.

I don't want to go over all the details, because I've already forgotten most of them, but essentially this expression strips away the URL origin starting with either http or https until it finds the first slash /.
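
As a concrete example, take the absolute address of the article mentioned above. Selecting it and running the snippet turns the first of these lines into the second; the trailing slash disappears as well:

https://blog.ploeh.dk/2023/01/02/adding-nuget-packages-when-offline/
/2023/01/02/adding-nuget-packages-when-offline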

The thing that makes it useful, though, is the TM_SELECTED_TEXT variable that tells Visual Studio Code that this snippet works on selected text.

When I paste a URL into an a tag, at first nothing happens because no text is selected. I can then use Shift + Alt + → to expand the selection, at which point the Visual Studio Code lightbulb (Code Action) appears:

Screen shot of the make-URL-relative code snippet in action.

Running the snippet removes the URL's origin, as well as the trailing slash, and I can move on to write the link text.

Conclusion #

After I started using Visual Studio Code to write blog posts, I've created a few custom snippets to support my authoring workflow. Most of them are fairly mundane, but the make-URLs-relative snippet took me a few iterations to get right.

I'm not expecting many of my readers to have this particular need, but I hope that this outline showcases the capabilities of Visual Studio Code snippets, and perhaps inspires you to look into creating custom snippets for your own purposes.


Comments

Seems like a useful function to have, so I naturally wondered if I could implement a similar function in Emacs.

Emacs lisp has support for regular expressions, only typically with a bunch of extra slashes included, so I needed to figure out how to work with the currently selected text to get this to work. The currently selected text is referred to as the "region" and by specifying "r" as a parameter for the interactive call we can pass the start and end positions for the region directly to the function.

I came up with this rather basic function:

(defun make-url-relative (start end)
  "Converts the selected uri from an absolute url and converts it to a relative one.
This is very simple and relies on the url starting with http/https, and removes each character to the
first slash in the path"
  (interactive "r")
  (replace-regexp-in-region "http[s?]:\/\/.+\/" "" start end))
                

With this function included in your config somewhere, it can be called by selecting a URL and using M-x make-url-relative (or assigned to a key binding as required)

I'm not sure if there's an already existing package for this functionality, but I hadn't really thought to look for it before so thanks for the idea 😊

2023-05-24 11:20 UTC

Folders versus namespaces

Monday, 15 May 2023 06:01:00 UTC

What if you allow folder and namespace structure to diverge?

I'm currently writing C# code with some first-year computer-science students. Since most things are new to them, they sometimes do things in ways that are 'not the way we usually do things'. As an example, teachers have instructed them to use namespaces, but apparently no-one has told them that the file folder structure has to mirror the namespace structure.

The compiler doesn't care, but as long as I've been programming in C#, it's been idiomatic to do it that way. There's even a static code analysis rule about it.

The first couple of times they'd introduce a namespace without a corresponding directory, I'd point out that they are supposed to keep those things in sync. One day, however, it struck me: What happens if you flout that convention?

A common way to organise code files #

Code scaffolding tools and wizards will often nudge you to organise your code according to technical concerns: Controllers, models, views, etc. I'm sure you've encountered more than one code base organised like this:

Code organised into folders like Controllers, Models, DataAccess, etc.

You'll put all your Controller classes in the Controllers directory, and make sure that the namespace matches. Thus, in such a code base, the full name of the ReservationsController might be Ploeh.Samples.Restaurants.RestApi.Controllers.ReservationsController.

A common criticism is that this is the wrong way to organise the code.

The problem with trees #

The complaint that this is the wrong way to organise code implies that a correct way exists. I write about this in Code That Fits in Your Head:

Should you create a subdirectory for Controllers, another for Models, one for Filters, and so on? Or should you create a subdirectory for each feature?

Few people like my answer: Just put all files in one directory. Be wary of creating subdirectories just for the sake of 'organising' the code.

File systems are hierarchies; they are trees: a specialised kind of acyclic graph in which any two vertices are connected by exactly one path. Put another way, each vertex can have at most one parent. Even more bluntly: If you put a file in a hypothetical Controllers directory, you can't also put it in a Calendar directory.

But what if you could?

Namespaces disconnected from directory hierarchy #

The code that accompanies Code That Fits in Your Head is organised as advertised: 65 files in a single directory. (Tests go in separate directories, though, as they belong to separate libraries.)

If you decide to ignore the convention that namespace structure should mirror folder structure, however, you now have a second axis of variability.

As an experiment, I decided to try that idea with the book's code base. The above screen shot shows the stereotypical organisation according to technical responsibility, after I moved things around. To be clear: This isn't how the book's example code is organised, but an experiment I only now carried out.

If you open the ReservationsController.cs file, however, I've now declared that it belongs to a namespace called Ploeh.Samples.Restaurants.RestApi.Reservations. Using Visual Studio's Class View, things look different from the Solution Explorer:

Code organised into namespaces according to feature: Calendar, Reservations, etc.

Here I've organised the namespaces according to feature, rather than technical role. The screen shot shows the Reservations feature opened, while other features remain closed.
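
To make the divergence concrete, here's a simplified sketch of what the top of such a file might look like after the experiment. The members are elided, and the real file naturally contains more; the point is only that a file sitting in the Controllers folder declares a feature namespace:

// File: Controllers/ReservationsController.cs
namespace Ploeh.Samples.Restaurants.RestApi.Reservations
{
    public class ReservationsController
    {
        // Members elided.
    }
}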

Initial reactions #

This article isn't a recommendation. It's nothing but an initial exploration of an idea.

Do I like it? So far, I think I still prefer flat directory structures. Even though this idea gives two axes of variability, you still have to make judgment calls. It's easy enough with Controllers, but where do you put cross-cutting concerns? Where do you put domain logic that seems to encompass everything else?

As an example, the code base that accompanies Code That Fits in Your Head is a multi-tenant system. Each restaurant is a separate tenant, but I've modelled restaurants as part of the domain model, and I've put that 'feature' in its own namespace. Perhaps that's a mistake; at least, I now have the code wart that I have to import the Ploeh.Samples.Restaurants.RestApi.Restaurants namespace to implement the ReservationsController, because its constructor looks like this:

public ReservationsController(
    IClock clock,
    IRestaurantDatabase restaurantDatabase,
    IReservationsRepository repository)
{
    Clock = clock;
    RestaurantDatabase = restaurantDatabase;
    Repository = repository;
}

The IRestaurantDatabase interface is defined in the Restaurants namespace, but the Controller needs it in order to look up the restaurant (i.e. tenant) in question.

You could argue that this isn't a problem with namespaces, but rather a code smell indicating that I should have organised the code in a different way.

That may be so, but that then implies a deeper problem: Assigning files to hierarchies may not, after all, help much. It looks as though things are organised, but if the assignment of things to buckets is done without a predictable system, then what benefit does it provide? Does it make things easier to find, or is the sense of order mostly illusory?

I tend to still believe that this is the case. This isn't a nihilistic or defeatist position, but rather a realisation that order must arise from other origins.

Conclusion #

I was recently repeatedly encountering student code with a disregard for the convention that namespace structure should follow directory structure (or the other way around). Taking a cue from Kent Beck I decided to investigate what happens if you forget about the rules and instead pursue what that new freedom might bring.

In this article, I briefly show an example where I reorganised a code base so that the file structure is according to implementation detail, but the namespace hierarchy is according to feature. Clearly, I could also have done it the other way around.

What if, instead of two, you have three organising principles? I don't know. I can't think of a third kind of hierarchy in a language like C#.

After a few hours reorganising the code, I'm not scared away from this idea. It might be worth revisiting in a larger code base. On the other hand, I'm still not convinced that forcing a hierarchy over a sophisticated software design is particularly beneficial.

P.S. 2023-05-30. This article is only a report on an experiment. For my general recommendation regarding code file organisation, see Favour flat code file folders.


Comments

Hi Mark,
While reading your book "Code That Fits in Your Head", your latest blog entry caught my attention, as I am struggling in software development with similar issues.
I find it hard to put all classes into one project directory, as it feels overwhelming when the number of classes increases.
In the following, I would like to specify possible organising principles in my own words.

Postulations
- Folders should help the programmer (and reader) to keep the code organised
- Namespaces should reflect the hierarchical organisation of the code base
- Cross-cutting concerns should be addressed by modularity.

Definitions
1. Folders
- the allocation of classes in a project with similar technical concerns into folders should help the programmer in the first place, by visualising this similarity
- the benefit lies just in the organisation, i.e. storage of code, not in the expression of hierarchy

2. Namespaces
- expression of hierarchy can be achieved by namespaces, which indicate the relationship between allocated classes
- classes can be organised in folders with same designation
- the namespace designation could vary by concern, although the classes are placed in the same folders, as the technical concern of the class shouldn't affect the hierarchical organisation

3. Cross-cutting concerns
- classes, which aren't related to a single task, could be indicated by a special namespace
- they could be placed in a different folder, to signal different affiliations
- or even placed in a different assembly

Summing up
A hierarchy should come by design. The organisation of code in folders should help the programmer or reader to grasp the file structure, not necessarily the program hierarchy.
Folders should be a means, not an expression of design. Folders and their designations could change (or disappear) over time in development. Thus, explicit connection of namespace to folder designation seems not desirable, but it's not forbidden.

All views above are my own. Please let me know, what you think.

Best regards,
Markus

2023-05-18 19:13 UTC

Markus, thank you for writing. You can, of course, organise code according to various principles, and what works in one case may not be the best fit in another case. The main point of this article was to suggest, as an idea, that folder hierarchy and namespace hierarchy doesn't have to match.

Based on reader reactions, however, I realised that I may have failed to clearly communicate my fundamental position, so I wrote another article about that. I do, indeed, favour flat folder hierarchies.

That is not to say that you can't have any directories in your code base, but rather that I'm sceptical that any such hierarchy addresses real problems.

For instance, you write that

"Folders should help the programmer (and reader) to keep the code organised"

If I focus on the word should, then I agree: Folders should help the programmer keep the code organised. In my view, then, it follows that if a tree structure does not assist in doing that, then that structure is of no use and should not be implemented (or abandoned if already in place).

I do get the impression from many people that they consider a directory tree vital to be able to navigate and understand a code base. What I've tried to outline in my more recent article is that I don't accept that as an indisputable axiom.

What I do find helpful as an organising principle is focusing on dependencies as a directed acyclic graph. Cyclic dependencies between objects are a main source of complexity. Keep dependency graphs directed and make code easy to delete.

Organising code files in a tree structure doesn't help achieve that goal. This is the reason I consider code folder hierarchies a red herring: Perhaps not explicitly detrimental to sustainability, but usually nothing but a distraction.

How, then, do you organise a large code base? I hope that I answer that question, too, in my more recent article Favour flat code file folders.

2023-06-13 6:11 UTC

Is cyclomatic complexity really related to branch coverage?

Monday, 08 May 2023 05:38:00 UTC

A genuine case of doubt and bewilderment.

Regular readers of this blog may be used to its confident and opinionated tone. I write that way, not because I'm always convinced that I'm right, but because prose with too many caveats and qualifications tends to bury the message in verbose and circumlocutory ambiguity.

This time, however, I write to solicit feedback, and because I'm surprised to the edge of bemusement by a recent experience.

Collatz sequence #

Consider the following code:

public static class Collatz
{
    public static IReadOnlyCollection<int> Sequence(int n)
    {
        if (n < 1)
            throw new ArgumentOutOfRangeException(
                nameof(n),
                $"Only natural numbers allowed, but given {n}.");
 
        var sequence = new List<int>();
        var current = n;
        while (current != 1)
        {
            sequence.Add(current);
            if (current % 2 == 0)
                current = current / 2;
            else
                current = current * 3 + 1;
        }
        sequence.Add(current);
        return sequence;
    }
}

As the names imply, the Sequence function calculates the Collatz sequence for a given natural number.

Please don't tune out if that sounds mathematical and difficult, because it really isn't. While the Collatz conjecture still evades mathematical proof, the sequence is easy to calculate and understand. Given a number, produce a sequence starting with that number and stop when you arrive at 1. Every new number in the sequence is based on the previous number. If the input is even, divide it by two. If it's odd, multiply it by three and add one. Repeat until you arrive at one.

The conjecture is that any natural number will produce a finite sequence. That's the unproven part, but that doesn't concern us. In this article, I'm only interested in the above code, which computes such sequences.

Here are a few examples:

> Collatz.Sequence(1)
List<int>(1) { 1 }
> Collatz.Sequence(2)
List<int>(2) { 2, 1 }
> Collatz.Sequence(3)
List<int>(8) { 3, 10, 5, 16, 8, 4, 2, 1 }
> Collatz.Sequence(4)
List<int>(3) { 4, 2, 1 }

While there seems to be a general tendency for the sequence to grow as the input gets larger, that's clearly not a rule. The examples show that the sequence for 3 is longer than the sequence for 4.

All this, however, just sets the stage. The problem doesn't really have anything to do with Collatz sequences. I only ran into it while working with a Collatz sequence implementation that looked a lot like the above.

Cyclomatic complexity #

What is the cyclomatic complexity of the above Sequence function? If you need a reminder of how to count cyclomatic complexity, this is a good opportunity to take a moment to refresh your memory, count the number, and compare it with my answer.

Apart from the opportunity for exercise, it was a rhetorical question. The answer is 4.
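
If you'd like to check the count, one common heuristic is to start at one for the method itself and add one for each decision point:

  1 (the method's single entry point)
+ 1 (the Guard Clause if)
+ 1 (the while loop)
+ 1 (the if/else inside the loop)
= 4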

This means that we'd need at least four unit tests to cover all branches. Right? Right?

Okay, let's try.

Branch coverage #

Before we start, let's make the ritual denouncement of code coverage as a target metric. The point isn't to reach 100% code coverage as such, but to gain confidence that you've added tests that cover whatever is important to you. Also, the best way to do that is usually with TDD, which isn't the situation I'm discussing here.

The first branch that we might want to cover is the Guard Clause. This is easily addressed with an xUnit.net test:

[Fact]
public void ThrowOnInvalidInput()
{
    Assert.Throws<ArgumentOutOfRangeException>(() => Collatz.Sequence(0));
}

This test calls the Sequence function with 0, which (in this context, at least) isn't a natural number.

If you measure test coverage (or, in this case, just think it through), there are no surprises yet. One branch is covered, the rest aren't. That's 25%.

(If you use the free code coverage option for .NET, it will surprisingly tell you that you're only at 16% branch coverage. It deems the cyclomatic complexity of the Sequence function to be 6, not 4, and 1/6 is 16.67%. Why it thinks it's 6 is not entirely clear to me, but Visual Studio agrees with me that the cyclomatic complexity is 4. In this particular case, it doesn't matter anyway. The conclusion that follows remains the same.)

Let's add another test case, and perhaps one that gives the algorithm a good exercise.

[Fact]
public void Example()
{
    var actual = Collatz.Sequence(5);
    Assert.Equal(new[] { 5, 16, 8, 4, 2, 1 }, actual);
}

As expected, the test passes. What's the branch coverage now?

Try to think it through instead of relying exclusively on a tool. The algorithm is simple enough that you can emulate execution in your head, or perhaps with the assistance of a notepad. How many branches does it execute when the input is 5?

Branch coverage is now 100%. (Even the dotnet coverage tool agrees, despite its weird cyclomatic complexity value.) All branches are exercised.
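
If you traced it by hand, the input 5 explains why; on its own, it covers every branch except the Guard Clause's throwing one:

5  is odd,  so the else branch runs: 3 * 5 + 1 = 16
16 is even, so the if branch runs:   16 / 2    = 8
8, 4, and 2 follow the same even branch down to 1,
at which point the while condition becomes false and the loop exits.

Combined with the Guard Clause test from before, every branch has now been taken at least once.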

Two tests produce 100% branch coverage of a function with a cyclomatic complexity of 4.

Surprise #

That's what befuddles me. I thought that cyclomatic complexity and branch coverage were related. I thought that the number of branches was a good indicator of the number of tests you'd need to cover all branches. I even wrote an article to that effect, and no-one contradicted me.

That, in itself, is no proof of anything, but the notion that the article presents seems to be widely accepted. I never considered it controversial, and the only reason I didn't cite anyone is that this seems to be 'common knowledge'. I wasn't aware of a particular source I could cite.

Now, however, it seems that it's wrong. Is it wrong, or am I missing something?

To be clear, I completely understand why the above two tests are sufficient to fully cover the function. I also believe that I fully understand why the cyclomatic complexity is 4.

I am also painfully aware that the above two tests in no way fully specify the Collatz sequence. That's not the point.

The point is that it's possible to cover this function with only two tests, despite the cyclomatic complexity being 4. That surprises me.

Is this a known thing?

I'm sure it is. I've long since given up discovering anything new in programming.

Conclusion #

I recently encountered a function that performed a Collatz calculation similar to the one I've shown here. It exhibited the same trait, and since it had no Guard Clause, I could fully cover it with a single test case. That function even had a cyclomatic complexity of 6, so you can perhaps imagine my befuddlement.

Is it wrong, then, that cyclomatic complexity suggests a minimum number of test cases in order to cover all branches?

It seems so, but that's new to me. I don't mind being wrong on occasion. It's usually an opportunity to learn something new. If you have any insights, please leave a comment.


Comments

My first thought is that the code looks like an unrolled recursive function, so perhaps if it's refactored into a driver function and a "continuation passing style" it might make the cyclomatic complexity match the covering tests.

So given the following:

public delegate void ResultFunc(IEnumerable<int> result);
public delegate void ContFunc(int n, ResultFunc result, ContFunc cont);

public static void Cont(int n, ResultFunc result, ContFunc cont) {
  if (n == 1) {
    result(new[] { n });
    return;
  }

  void Result(IEnumerable<int> list) => result(list.Prepend(n));

  if (n % 2 == 0)
    cont(n / 2, Result, cont);
  else
    cont(n * 3 + 1, Result, cont);
}

public static IReadOnlyCollection<int> Continuation(int n) {
  if (n < 1)
    throw new ArgumentOutOfRangeException(
      nameof(n),
      $"Only natural numbers allowed, but given {n}.");

  var output = new List<int>();

  void Output(IEnumerable<int> list) => output = list.ToList();

  Cont(n, Output, Cont);

  return output;
}

I calculate the Cyclomatic complexity of Continuation to be 2 and Step to be 3.

And it would seem you need 5 tests to properly cover the code, 3 for Step and 2 for Continuation.

But however you write the "n >=1" case for Continuation you will have to cover some of Step.

2023-05-08 10:11 UTC
Jeroen Heijmans #

There is a relation between cyclomatic complexity and branches to cover, but it's not one of equality: cyclomatic complexity is an upper bound for the number of branches to cover. There's a nice example in the Wikipedia article on cyclomatic complexity that explains this, as well as the relation with path coverage (for which cyclomatic complexity is a lower bound).

2023-05-08 15:03 UTC

I find cyclomatic complexity to be overly pedantic at times, and you will need four tests if you get really pedantic. First, test the guard clause as you already did. Then, test with 1 in order to test the while loop body not being run. Then, test with 2 in order to test that the while is executed, but we only hit the if part of the if/else. Finally, test with 3 in order to hit the else inside of the while. That's four tests where each test is only testing one of the branches (some tests hit more than one branch, but the "extra branch" is already covered by another test). Again, this is being really pedantic and I wouldn't test this function as laid out above (I'd probably put in the test with 1, since it's an edge case, but otherwise test as you did).

I don't think there's a rigorous relationship between cyclomatic complexity and number of tests. In simple cases, treating things as though the relationship exists can be helpful. But once you start having interrelated branches in a function, things get murky, and you may have to go to pedantic lengths in order to maintain the relationship. The same thing goes for code coverage, which can be 100% even though you haven't actually tested all paths through your code if there are multiple branches in the function that depend on each other.

2023-05-08 15:30 UTC

Thank you, all, for writing. I'm extraordinarily busy at the moment, so it'll take me longer than usual to respond. Rest assured, however, that I haven't forgotten.

2023-05-11 12:42 UTC

If we agree to the definition of cyclomatic complexity as the number of independent paths through a section of code, then the number of tests needed to cover that section must be the same per definition, if those tests are also independent. Independence is crucial here, and is also the main source of confusion. Both the while and if forks depend on the same variable (current), and so they are not independent.

The second test you wrote is similarly not independent, as it ends up tracing multiple paths through the if: odd for 5, and even for 16, 8, etc., and so ends up covering all paths. Had you picked 2 instead of 5 for the test, that would have been more independent, as it would not have traced the else path, requiring one additional test.

The standard way of computing cyclomatic complexity assumes independence, which simply is not possible in this case.

2023-06-02 00:38 UTC

Struan, thank you for writing, and please accept my apologies for the time it took me to respond. I agree with your calculations of cyclomatic complexity of your refactored code.

I agree with what you write, but you can't write a sentence like "however you write the "n >=1" case for [...] you will have to cover some of [..]" and expect me to just ignore it. To be clear, I agree with you in the particular case of the methods you provided, but you inspired me to refactor my code with that rule as a specific constraint. You can see the results in my new article Collatz sequences by function composition.

Thank you for the inspiration.

2023-06-12 5:46 UTC

Jeroen, thank you for writing, and please accept my apologies for the time it took me to respond. I should have read that Wikipedia article more closely, instead of just linking to it.

What still puzzles me is that I've been aware of, and actively used, cyclomatic complexity for more than a decade, and this distinction has never come up, and no-one has called me out on it.

As Cunningham's law says, the best way to get the right answer on the Internet is not to ask a question; it's to post the wrong answer. Even so, I posted Put cyclomatic complexity to good use in 2019, and no-one contradicted it.

I don't mention this as an argument that I'm right. Obviously, I was wrong, but no-one told me. Have I had something in my teeth all these years, too?

2023-06-12 6:35 UTC

Brett, thank you for writing, and please accept my apologies for the time it took me to respond. I suppose that I failed to make my overall motivation clear. When doing proper test-driven development (TDD), one doesn't need cyclomatic complexity in order to think about coverage. When following the red-green-refactor checklist, you only add enough code to pass all tests. With that process, cyclomatic complexity is rarely useful, and I tend to ignore it.

I do, however, often coach programmers in unit testing and TDD, and people new to the technique often struggle with basics. They add too much code, instead of the simplest thing that could possibly work, or they can't think of a good next test case to write.

When teaching TDD I sometimes suggest cyclomatic complexity as a metric to help decision-making. Did we add more code to the System Under Test than warranted by tests? Is it okay to forgo writing a test of a one-liner with cyclomatic complexity of one?

The metric is also useful in hybrid scenarios where you already have production code, and now you want to add characterisation tests: Which test cases should you at least write?

Another way to answer such questions is to run a code-coverage tool, but that often takes time. I find it useful to teach people about cyclomatic complexity, because it's a lightweight heuristic always at hand.

2023-06-12 7:24 UTC

Nikola, thank you for writing. The emphasis on independence is useful; I used compatible thinking in my new article Collatz sequences by function composition. By now, including the other comments to this article, it seems that we've been able to cover the problem better, and I, at least, feel that I've learned something.

I don't think, however, that the standard way of computing cyclomatic complexity assumes independence. You can easily compute the cyclomatic complexity of the above Sequence function, even though its branches aren't independent. Tooling such as Visual Studio seems to agree with me.

2023-06-13 5:32 UTC
