Appeal to aithority by Mark Seemann
No, it's not a typo.
A few months ago, I was listening to a semi-serious programme from Danish public service radio. This is a weekly programme about language that I always listen to as a podcast. The host is the backbone of the show, but in addition to new guests each week, he's flanked by a regular expert who is highly qualified to answer questions about etymology, grammar, semantics, etc.
In the episode I'm describing, the expert got a question that a listener had previously emailed. To answer, (s)he started like this (and I'm paraphrasing): I don't actually know the answer to this question, so I did what everyone does these days, when they don't know the answer: I asked ChatGPT.
(S)he then proceeded to read aloud what ChatGPT had answered, and concluded with some remarks along the lines that that answer sounded quite plausible.
If I used ten to twenty hours of my time re-listening to every episode from the past few months, I could find the particular episode, link to it, transcribe the exact words, and translate them to English to the best of my abilities. I am, however, not going to do that. First, I'm not inclined to use that much time writing an essay on which I make no income. Second, my aim is not to point fingers at anyone in particular, so I'm being deliberately vague. As you may have noticed, I've even masked the person's sex. Not because I don't remember, but to avoid singling out anyone.
The expert in question is a regular of the programme, and I've heard him or her give good and knowledgeable answers to many tricky questions. As far as I could tell, this particular question was unanswerable, along the lines of why is 'table' called 'table' rather than 'griungth'?
The correct answer would have been I don't know, and I don't think anyone else does.
Being a veteran of the programme, (s)he must have realized beforehand that this wouldn't be good radio, and instead decided to keep it light-hearted.
I get that, and I wouldn't be writing about it now if it didn't look like an example of an increasing trend.
People are using large language models (LLMs) to advocate for their positions.
Appeal to authority #
Appeal to authority is no new technique in discourse. In The Art of Being Right, Schopenhauer cynically advises:
"You may also, should it be necessary, not only twist your authorities, but actually falsify them, or quote something which you have invented entirely yourself. As a rule, your opponent has no books at hand, and could not use them if he had."
This seems similar to how people have begun using so-called artificial intelligence (AI) to do their arguing for them. We may, instead, call this appeal to aithority.
Epistemological cul-de-sac #
We've all seen plenty of examples of LLMs being wrong. I'm not going to tire you with any of those here, but I did outline my experience with GitHub Copilot in 2022. While these technologies may have made some advances since then, they still make basic mistakes.
Not only that. They're also non-deterministic. Ask a system a question once, and you get one answer. Ask the same question later, and you may get a variation of the same answer, or perhaps even a contradictory answer. If someone exhibits an answer they got from an LLM as an argument in their favour, consider that they may have been asking it five or six times before they received an answer they liked.
Finally, you can easily ask leading questions. Even if someone shows you a screen shot of a chat with an LLM, they may have clipped prior instructions that nudge the system towards a particular bias.
I've seen people post screen shots in which an LLM claims that F# is a better programming language than C#. While I'm sympathetic to that claim, that's not an argument. Just like how you feel about something isn't an argument.
This phenomenon seems to be a new trend. People use answers from LLMs as evidence that they are right. I consider this an epistemological dead end.
Real authority #
Regular readers of this blog may have noticed that I often go to great lengths to track down appropriate sources to cite. I do this for several reasons. One is simply out of respect for the people who figured out things before us. Another reason is to strengthen my own arguments.
It may seem that I, too, appeal to authority. Indeed, I do. When not used in the way Schopenhauer describes, citing authority is a necessary epistemological shortcut. If someone who knows much about a particular subject has reached a conclusion based on his or her work, we may (tentatively) accept the conclusion without going through all the same work. As Carl Sagan said, "If you wish to make an apple pie from scratch, you must first invent the universe." You can't do all basic research by yourself. At some point, you'll have to take expert assertions at face value, because you don't have the time, the education, or the money to build your own Large Hadron Collider.
Don't blindly accept an argument on the sole grounds that someone famous said something, but on the other hand, there's no reason to dismiss out of hand what Albert Einstein had to say about gravity just because you've heard that you shouldn't accept an argument based on appeal to authority.
Conclusion #
I'm concerned that people increasingly seem to resort to LLMs to argue a case. The LLM says this, so it must be right.
Sometimes, people will preface their arguments with of course, it's just an AI, but... and then proceed to unfold their preferred argument. Even if this looks as though the person is making a 'real' argument, starting from an LLM answer establishes a baseline for the discussion. It still lends an aura of truth to something that may be false.