Will you go to prison for an AI? by Mark Seemann
Who is liable for code written by LLMs?
It seems as though everyone is talking about agentic AI, and although it's perhaps already subject to semantic diffusion, I understand it as the process of letting one or more LLMs go off and write code on their own.
I've toyed enough with it to acknowledge that the potential is undeniable. I can see myself using LLM-based coding agents as team members on a software team, with me as the tech lead.
I don't think, however, that I will be using unsupervised coding agents for anything important. The idea of being unaware of the code strikes me as fundamentally risky. If you never wrote the code, and you don't even look at it (or only cursorily), then how do you know that the end product works as intended?
The most common response is to test the software. That is not a bad idea. In fact, it's the best idea I can think of. Even so, there are fundamental epistemological problems to be addressed. In particular, the widespread notion of making LLMs responsible for testing does not solve the problem.
Granted, there are kinds of software for which a few errors or a bit of unpredictable behaviour are no big deal. Conversely, there is another category where correctness is crucial.
If you vibe-code or use agentic AI to develop such software, what happens if it comes with bugs? If millions, or billions, of euros are lost? If people are maimed or killed?
Someone will be held accountable.
You can't hold an LLM accountable. Nor is it likely, in my opinion, that the organisations behind the LLMs (OpenAI, Anthropic, etc.) can effectively be held accountable. They will say that the system was never intended to be used without human supervision.
Your manager will not take the fall for you. He or she will claim that you were the one being paid to be technical, and that management's role is only to lead. If your manager is 'non-technical', this could even be a valid defence.
You will be held accountable.
Even so, you may think: How bad can it be?
Bad enough. If money or lives are lost, the hunt for a suitable scapegoat is on. Such a search usually ends at the leaves of the organisational tree. This could easily be you.
As a concrete example, James Robert Liang was sentenced to prison time for his involvement in the Volkswagen emissions scandal. While this case had nothing to do with LLMs, it illustrates that individual engineers may be held accountable for the actions of an organisation.
You, too, could go to prison for sufficiently egregious problems with the software you let LLMs develop. Are you comfortable with that risk?