How do you know whether the code you wrote is readable?

In a recent Twitter thread about pair and mob programming, Dan North observes:

"That’s the tricky problem I was referring to. If you think you can write code that other humans can understand, without collaborating or calibrating with other humans, assuming that an after-the-fact check will always be affirmative, then you are a better programmer than me."

I neither think that I'm a better programmer than Dan nor that, without collaboration, I can write code that other humans can understand. That's why I'd like someone else to review my code. Not write it together with me, but read it after I've written it.

Advantages of pair and ensemble programming

Pair programming and ensemble (AKA mob) programming are efficient ways to develop software. They work for lots of people. I'm not insisting otherwise.

By working together, you can pool skills. Imagine working on a feature for a typical web application. This involves user interface, business logic, data access, and possibly other things as well. Few people are experts in all those areas. Personally, I'm comfortable around business logic and data access, but know little about web front-end development. It's great to have someone else's expertise to draw on.

By working together in real time, you avoid hand-offs. If I had to help implement a feature in an asynchronous manner, I'd typically implement domain logic and data access in a REST API, then tell a front-end expert that the API is ready. This way of working introduces wait times into the process, and it may also cause rework if it turns out that the way I designed the API doesn't meet the requirements of the front end.

Real-time collaboration addresses some of these concerns. It also improves code ownership. In Code That Fits in Your Head, I quote Birgitta Böckeler and Nina Siessegger:

"Consistent pairing makes sure that every line of code was touched or seen by at least 2 people. This increases the chances that anyone on the team feels comfortable changing the code almost anywhere. It also makes the codebase more consistent than it would be with single coders only.

"Pair programming alone does not guarantee you achieve collective code ownership. You need to make sure that you also rotate people through different pairs and areas of the code, to prevent knowledge silos."

With mob programming, you take many of these advantages to the next level. If you include a domain expert in the group, you can learn about what the organisation actually needs as you're developing a feature. If you include specialised testers, they may see edge cases or error modes you didn't think of. If you include UX experts, you'll have a chance to develop software that users can actually figure out how to use.

There are lots of benefits to be had from pair and ensemble programming. In Code That Fits in Your Head I recommend that you try it. I've recommended it to my customers. I've had good experiences with it myself:

"I’ve used [mob programming] with great success as a programming coach. In one engagement, I spent two to three days a week with a few other programmers, helping them apply test-driven development practices to their production code bases. After a few months of that, I went on vacation. Meanwhile those programmers kept going with test-driven development. Mob programming is great for knowledge transfer."

I don't, however, think that it's a one-size-fits-all solution.

The curse of knowledge

While outlining the advantages of pair and ensemble programming, I didn't mention readability. I don't see how those ways of working address the problem of writing readable code.

I've reviewed code written by pairs, and it was neither more nor less readable than code written by a single programmer. I think that there's an easy-to-understand reason for this. It relates to the curse of knowledge:

"In 1990, Elizabeth Newton earned a Ph.D. in psychology at Stanford by studying a simple game in which she assigned people to one of two roles: “tappers” or “listeners.” Tappers received a list of twenty-five well-known songs, such as “Happy Birthday to You” and “The Star-Spangled Banner.” Each tapper was asked to pick a song and tap out the rhythm to a listener (by knocking on a table). The listener’s job was to guess the song, based on the rhythm being tapped. (By the way, this experiment is fun to try at home if there’s a good “listener” candidate nearby.)

"The listener’s job in this game is quite difficult. Over the course of Newton’s experiment, 120 songs were tapped out. Listeners guessed only 2.5 percent of the songs: 3 out of 120.

"But here’s what made the result worthy of a dissertation in psychology. Before the listeners guessed the name of the song, Newton asked the tappers to predict the odds that the listeners would guess correctly. They predicted that the odds were 50 percent.

"The tappers got their message across 1 time in 40, but they thought they were getting their message across 1 time in 2. Why?

"When a tapper taps, she is hearing the song in her head. Go ahead and try it for yourself—tap out “The Star-Spangled Banner.” It’s impossible to avoid hearing the tune in your head. Meanwhile, the listeners can’t hear that tune—all they can hear is a bunch of disconnected taps, like a kind of bizarre Morse Code.

"In the experiment, tappers are flabbergasted at how hard the listeners seem to be working to pick up the tune. Isn’t the song obvious? The tappers’ expressions, when a listener guesses “Happy Birthday to You” for “The Star-Spangled Banner,” are priceless: How could you be so stupid?

"It’s hard to be a tapper. The problem is that tappers have been given knowledge (the song title) that makes it impossible for them to imagine what it’s like to lack that knowledge. When they’re tapping, they can’t imagine what it’s like for the listeners to hear isolated taps rather than a song. This is the Curse of Knowledge. Once we know something, we find it hard to imagine what it was like not to know it. Our knowledge has “cursed” us. And it becomes difficult for us to share our knowledge with others, because we can’t readily re-create our listeners’ state of mind.

"The tapper/listener experiment is reenacted every day across the world. The tappers and listeners are CEOs and frontline employees, teachers and students, politicians and voters, marketers and customers, writers and readers. All of these groups rely on ongoing communication, but, like the tappers and listeners, they suffer from enormous information imbalances. When a CEO discusses “unlocking shareholder value,” there is a tune playing in her head that the employees can’t hear."

When you're writing code, you're a tapper. You know why you're writing it the way you do, you know what you've already tried that didn't work, you know about the informal requirements someone mentioned over the water cooler, and so on.
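To make the curse of knowledge concrete, consider a little sketch. Nothing like it appears in any real code base I know of; the names and the magic number are invented for the occasion, and that's exactly the point: they made sense to the person who wrote them.

module Tapping where

-- A hypothetical example of 'tapper' code. It compiles and it works,
-- but the reasons behind the names and the 0.25 threshold live only
-- in the author's head.
chk :: Double -> Double -> Bool
chk r l = l /= 0 && r / l > 0.25

To me, who knows the tune, this is perfectly clear. A fresh reader only hears taps.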

Why should pair or ensemble programming change that?

"One of the roles of a PR is to verify that someone who didn't write the new code can understand it.

"The constant communication of pair programming can result in code only that pair understands. Does a book with two authors not need an editor?"

So, how do you verify that code is readable?

Readability

I often forget to remind the reader that discussions like this one, about software productivity, mostly rely on anecdotal evidence. There's little scientific evidence about these topics. The ensuing discussions tend to rely on subjectivity, and so, ultimately, does this one.

In Code That Fits in Your Head, I suggest heuristics for writing readable code, but ultimately, the only reliable test of readability that I can think of is simple:

Ask someone else to read the code.

That's what a code review ought to do. Anyone who took part in writing the code is a tapper. After I've written code, I'm a tapper. I'm in no position to evaluate whether the code I just wrote is readable.

You need a listener (or, here: a reader) to evaluate whether or not sufficient information came across.
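To see what such feedback might accomplish, return to the hypothetical chk function from above. A reader who can't hear the tune has to ask what chk, r, l, and 0.25 mean, and answering those questions could produce a revision like this (the domain story is still invented for the example):

module Tapping where

-- The same hypothetical rule, rewritten after a reader's questions
-- surfaced the hidden context. The behaviour is unchanged.
reviewThreshold :: Double
reviewThreshold = 0.25

-- A customer requires manual review when the exposure exceeds a
-- quarter of the credit limit. A zero limit means no credit at all.
requiresManualReview :: Double -> Double -> Bool
requiresManualReview exposure creditLimit =
  creditLimit /= 0 && exposure / creditLimit > reviewThreshold

The logic is the same; the difference is that the knowledge that used to live only in the writer's head now lives in the code.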

I agree with Dan North that I need other humans to collaborate and calibrate. I just disagree that people who write code are in a position to evaluate whether the code is readable (and thereby can sustain the business in the long run).

Rejection

What happens, then, if I submit a pull request that the reviewer finds unreadable?

The reviewer should either suggest improvements or decline the pull request.

I can tell from Dan's tweet that he's harbouring a common misconception about the pull request review process:

"assuming that an after-the-fact check will always be affirmative"

No, I don't assume that my pull requests always pass muster. That's also the reason that pull requests should be small. They should be small enough that you can afford to have them rejected.

I'm currently helping one of my clients with some code. I make my changes and submit them as an agile pull request.

Several times in the last month, my pull requests have remained unmerged. In none of those cases, actually, has the reviewer outright rejected the pull request. He just started asking questions; then we had a short debate on GitHub, and in the end I concluded that I should close the pull request myself.

No drama, just feedback.

Conclusion

How do you verify that code is readable?

I can't think of anything better than asking someone else to read the code.

Obviously, we shouldn't ask random strangers about readability. We should ask team members to review code. One implication of collective code ownership is that when a team member accepts a pull request, he or she is also taking on the shared responsibility of maintaining that code. As I write in Code That Fits in Your Head, a fundamental criterion for evaluating a pull request is: Will I be okay maintaining this?



Published: Monday, 18 October 2021 07:37:00 UTC