Code that fits in a context window by Mark Seemann
AI-friendly code?
On what's left of software-development social media, I see people complaining that as the size of a software system grows, large language models (LLMs) have an increasingly hard time advancing the system without breaking something else. Some people speculate that the context window's size limit may have something to do with this.
As a code base grows, an LLM may be unable to fit all of it, as well as the surrounding discussion, into the context window. Or so I gather from what I read.
This doesn't seem too different from the limitations of the human brain. To be clear, a brain is not a computer; while they share similarities, there are also significant differences.
Even so, a major hypothesis of mine is that what makes programming difficult for humans is that our short-term memory is shockingly limited. Based on that notion, a few years ago I wrote a book called Code That Fits in Your Head.
In the book, I describe a broad set of heuristics and practices for working with code, based on the hypothesis that working memory is limited. One of the most important ideas is the notion of Fractal Architecture. At every level of abstraction, the code is composed of only a few parts. As you zoom into one part, however, you find that it, too, is made from a few smaller parts, and so on.
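To illustrate the idea, here is a minimal sketch in Python (the names and the reservation scenario are invented for this example, not taken from the book): the top-level function contains only a handful of parts, and zooming into any one of them again reveals only a handful of parts.

```python
# A hypothetical sketch of fractal architecture: at every level of
# abstraction, a reader only has to hold a few parts in their head.
from dataclasses import dataclass


@dataclass
class Reservation:
    name: str
    quantity: int


def handle_reservation(reservation: Reservation, seats_left: int) -> str:
    # Top level: three parts, each a single named concept.
    if not is_valid(reservation):
        return reject(reservation)
    if not has_capacity(reservation, seats_left):
        return reject(reservation)
    return accept(reservation)


# Zooming into any one part reveals, again, only a few smaller parts.
def is_valid(reservation: Reservation) -> bool:
    return bool(reservation.name) and reservation.quantity > 0


def has_capacity(reservation: Reservation, seats_left: int) -> bool:
    return reservation.quantity <= seats_left


def accept(reservation: Reservation) -> str:
    return f"Accepted {reservation.quantity} seats for {reservation.name}"


def reject(reservation: Reservation) -> str:
    return f"Rejected reservation for {reservation.name}"
```

At no point does a reader, human or otherwise, need the whole system in view at once; each level fits in a small working memory, or, perhaps, a small context window.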
I wonder whether those notions would be useful for LLMs, too.