Most of my concerns about AI are probably irrelevant, but what if one of them is not?

At the intersection of psychology, neuroscience, epistemology, and political science, there's a concept called motivated reasoning. In short, it describes the tendency to arrive at desired conclusions via reasoning processes heavily influenced by individual motivations. An example is a person who finds reasons to keep smoking that he or she finds convincing: Perhaps the smoker gloms onto evidence that smoking reduces appetite, and therefore reasons that it's better to keep smoking, because quitting would entail weight gain, which is unhealthy.

As the example demonstrates, the reasoning process may not be particularly rigorous. While it may convince the person doing the reasoning, it convinces few other people.

The process is often subconscious. The person doing the reasoning may not be aware of the predilection leading to the desired outcome. We all engage in motivated reasoning, so it's a kind of cognitive bias. Furthermore, it seems as though the more intelligent you are, the more susceptible to motivated reasoning you are. Apparently, the mechanism is that smarter people's superior mental resources enable them to find more convincing reasons for reaching desirable conclusions than less gifted persons can.

Reasoning about the future of AI #

I have recently posted a series of articles critical of using AI for software development. These articles accept the capabilities of current LLM-based systems, but outline various concerns related to safety, epistemology, correctness, and similar areas.

While writing these articles, I've been aware that I'm likely engaging in motivated reasoning. Things are probably going to proceed at breakneck speed. Most of my concerns are probably moot; on the other hand, I predict that we'll encounter problems that I didn't foresee.

Even so, if ninety percent of my concerns are irrelevant, that still leaves one that may turn out to be a real problem. Which one might it be?

Love of the craft #

Like many other software developers, I mourn our craft. I was originally drawn to programming because I was attracted to this particular kind of problem solving. It was like getting paid to solve puzzles, crosswords, sudokus, or whatever else you may be into.

During the decades of my career, I found that everything was interesting when framed as a programming problem. On the other hand, I've never been intrinsically interested in 'optimizing ad revenue', 'creating a marketplace for offal and other animal-processing waste products', 'implementing a complaint ticketing system', or 'enabling speculators to turn a profit from high-volume trading'. Some of these, I've actually done, and it was engaging work, but only because solving technical problems was stimulating.

When told that I can still 'solve business problems' by becoming a manager of agentic LLMs, that doesn't get my blood pumping. If I had found that prospect interesting, I would have become a manager decades ago.

I like writing code; not telling other entities to write code.

Incentives #

Of course, we were never paid based on whether we enjoyed the work. Rather, we were paid despite it.

Usually, if you enjoy an activity, it's a hobby, and you pay to do it. Conversely, a job is an activity unpleasant enough that someone is willing to pay you to do it.

The software development job market of the last few decades has most likely been abnormal. With some more help from motivated reasoning, I can think of a few reasons why this situation may last a bit longer, but again, I could be wrong.

In case, however, you think I'm incentivised by economics: To a small degree, I am. I'd definitely like to secure my finances better, but on the other hand, I'm nearing what in some countries counts as retirement age, and I've had a good run so far. I'll get by, so that's not my main motivation.

I can, of course, keep programming as a hobby, but I do think that my services might still be valuable to some organizations. If so, please consider engaging with me.

Conclusion #

As my recent writings bear witness, I'm concerned about the current use of LLMs in software development. My concerns are related to safety and correctness.

Even so, it's possible that I'm engaging in motivated reasoning, a kind of cognitive bias where you arrive at conclusions beneficial to yourself. I still think that there's value in posting opinions that may act as counterpoints to techno-optimism.

I could be wrong about quite a few of my concerns, but still be right about one or two. If so, which ones?



Wish to comment?

You can add a comment to this post by sending me a pull request. Alternatively, you can discuss this post on Twitter or somewhere else with a permalink. Ping me with the link, and I may respond.

Published

Monday, 04 May 2026 05:28:00 UTC

"Our team wholeheartedly endorses Mark. His expert service provides tremendous value."
Hire me!
Published: Monday, 04 May 2026 05:28:00 UTC