ChatGPT Delusion Syndrome: Why Disruption Doesn't Care About Sentience or Free Will
You or someone you love may suffer from this condition. Read for a cure.
A huge number of generally smart, thoughtful people seem to have lost the plot in conversations about artificial intelligence. Most of them downplay the disruptive potential of AI (specifically, large language models like ChatGPT) for one of two related reasons:
AI is not sentient or conscious (it doesn't have an internal representation of the external world); therefore, it can never do what humans can do and we shouldn’t worry about it.
AI lacks true agency or free will (it cannot choose to do anything it is not already tasked with, either directly or indirectly); therefore, it can never do what humans can do and we shouldn’t worry about it.
Both of these arguments are weak, because neither sentience nor free will is a prerequisite for super-competence. But given how popular these opinions are, let's explore each in a bit more detail.
Sentience and intelligence
Many people reflexively believe that some sort of inner, conscious experience is necessary for the kind of intelligence we ascribe to humans.
Here’s an excerpt from one adherent of the Must-Think-to-Act school of thought:
"ChatGPT is a large language model that effectively mimics a middle ground of typical speech online, but it has no sense of meaning; it merely predicts the statistically most probable next word in a sentence based on its training data, which may be incorrect."
— Edward Ongweso Jr., “Everybody Please Calm Down About ChatGPT” (Vice)
And here’s one from another:
"... there’s no such thing as a system that can truly master language without a theory of the world. That is to say, as the science of meaning, semantics cannot be shorn from the world that produces meaning; to understand and speak effectively words must be understood and for words to be understood they must be compared to a universe that is apprehended with something like a conscious mind... Indeed, there is no place within ChatGPT where that “looking” could occur or where a conclusion would arise. It can’t know."
— Freddie deBoer, “Theory of the World, Theory of Mind, and Media Incentives” (Substack)
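To make the mechanism these quotes describe concrete, here is a deliberately crude sketch of "predict the statistically most probable next word": a toy bigram counter, nothing remotely like ChatGPT's actual architecture. The corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus -- invented for illustration, not real training data.
corpus = "the cat sat on the mat and the cat ate".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most probable successor of `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" twice, "mat" only once
```

Nothing in this sketch "knows" what a cat is, yet scaled up by many orders of magnitude (with neural networks standing in for the counting table), the same basic objective yields systems that consistently accomplish goals. Whether that counts as "understanding" is exactly the question at issue.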
These lines of argument generally go on to reason that, because AI is not conscious, it fundamentally can't do the kinds of things that sentient creatures (like humans) can do. In other words, the belief is that having an inner experience is necessary for consistently accomplishing goals or solving problems. (Let’s set aside, for the moment, that ChatGPT already does consistently accomplish goals and solve problems.)
This is, as I see it, absurd. Sentience is orthogonal to intelligence; put slightly differently, consciousness is orthogonal to competence. (After all, intelligence is generally accepted to be nothing more than the ability to accomplish what you set out to accomplish.)
Perhaps we can observe this more clearly in cases where either sentience or intelligence – but not both – exists in a system. For example:
Mice have brains that are neuroanatomically fairly similar to ours, which is commonly taken as strong evidence of their sentience, and yet they are far less capable (at least across a broad range of tasks) than humans or ChatGPT.
The calculator on your phone has superhuman calculation abilities, and yet it has nothing like the neural networks present in human brains (or those present in certain types of machine learning systems). It almost certainly lacks consciousness, yet it can accomplish the task of complex arithmetic extraordinarily well.
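The calculator point fits in a single line of code. There is no world model anywhere in the following computation, only deterministic symbol manipulation (the operands are chosen arbitrarily):

```python
# Superhuman arithmetic with no world model, no awareness, and no goals.
result = 123456789 ** 2
print(result)  # 15241578750190521, evaluated in microseconds
```

No human can match this for speed or reliability, and no one would claim the interpreter "understood" the numbers. Competence, not consciousness, is what did the work.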
It simply doesn't matter whether something has an internal model of the world or is just really good at getting at the right answer via some other mechanism. The end result – and the impact on the real world – is the same.
So what else might explain the differences between mice, calculators, humans, and ChatGPT?
Free will (i.e. causal agency) and intelligence
"If we want to give an account of the nature of intelligence, we must give an account of the nature of free will."
— Daniel Dennett, Freedom Evolves (2003)
Perhaps you think that the above examples about mice and calculators can be explained away by bringing free will into the equation. After all, if neither a mouse nor a calculator has free will, then perhaps that helps explain why humans have far broader and more significant capabilities than either of them.
Daniel Dennett is a prominent (and generally level-headed) academic philosopher who believes this, so I'll let him explain it in his own words:
"What distinguishes rationality from mere calculation is the capacity to weigh reasons, to consider alternatives, to deliberate about ends, and to choose among them on the basis of something more than just the laws of physics. All of these activities presuppose free will. If we cannot choose among alternatives, then we cannot deliberate, and if we cannot deliberate, then we cannot reason or plan. In short, intelligence requires free will."
— Daniel Dennett, Freedom Evolves (2003)
"Intelligence is not a matter of blindly following rules or algorithms; it requires creativity, imagination, and the ability to make choices. These are all aspects of free will. If we want to understand what makes humans uniquely intelligent, we must account for our capacity for free will. Without free will, we would be unable to innovate, explore, or learn from our mistakes."
— Daniel Dennett, Breaking the Spell (2006)
What I think Dennett proves here, above all else, is that smart and thoughtful people can be wrong about important things. So let's consider some other perspectives.
Sam Harris's argument, for example, is that free will is an illusion because our decisions and actions are determined exclusively by factors outside of our control (genetics, upbringing, and our environment). Moreover, this view seems to be fairly strongly supported by the research of neuroscientist Benjamin Libet and others, particularly in studies of the timing of conscious decisions.
Basically, Libet showed that the brain begins to physically initiate a decision (via neuronal activity) before we are consciously aware of it. In his now-famous experiment, Libet asked participants to press a button at a moment of their choosing while he measured their brain activity. He found that the readiness potential – a buildup of neural activity associated with the decision to act – appeared up to 500 milliseconds before participants reported being aware of their intention to act.
"My research has shown that the brain makes decisions before we are consciously aware of them. This suggests that free will is an illusion and that our decisions are determined by factors outside of our control. It also implies that intelligence is not dependent on free will, since the brain is capable of processing information and making decisions without conscious awareness."
— Benjamin Libet, The Volitional Brain (2002)
Harris argues (and I agree) that this neurological evidence suggests that our decisions are not made consciously, but are instead the result of unconscious processes in the brain. This means that our sense of free will is an illusion, and that we are not in conscious control of our decisions and actions in the way that we might think.
More importantly, I agree with Sam that none of this means that we are not intelligent or capable of making decisions; intelligence is a matter of our ability to process information, learn from experience, and adapt to new situations, regardless of whether or not we have free will.
Phrased with his usual eloquence:
"The idea that free will is necessary for intelligence is a vestige of outdated philosophical and religious beliefs. We now know that the brain operates according to physical laws and that our decisions are the product of these laws. Intelligence can exist without free will because it is a matter of information processing, not magic."
— Sam Harris, Free Will (2012)
"Even if we accept that free will is an illusion, it doesn't follow that we are not intelligent or capable of making decisions. We simply make decisions in a different way than we thought we did. Intelligence is a matter of information processing and problem-solving, not free will."
— Sam Harris, Waking Up (2014)
So why should you care about any of this?
AI systems like ChatGPT — while perhaps not sentient or possessed of free will — can still disrupt industries, solve complex problems, and demonstrate extreme competence across a wide range of tasks. To think otherwise is to cling to ego-defensive, anthropocentric definitions of these terms, and to ignore the growing body of evidence that experiencing, deciding, and acting competently are orthogonal to one another.
I hope I’ve made my position on this topic fairly clear, but you may still be wondering why any reasonable person should give an intellectual shit about it.
As AI continues to develop and improve, it will have an increasingly significant impact on our lives — one that could be radically positive, or irrevocably catastrophic. By dismissing concerns about AI via arguments about sentience or agency, we are likely to develop a false sense of security. (Mass unemployment and the propagation of misinformation could be the least of our worries, according to about 10% of people working in this field.)
Instead, influential and thoughtful people need to acknowledge AI's capabilities and disruptive potential, and find ways to leverage its super-competence for the benefit of humans (and other conscious creatures).
Our obsession with libertarian free will and humanlike sentience is intellectual quicksand; step out of it, and do what you can to ensure that AI advances responsibly.
After all, if an AI-launched nuke hurtles toward a major city in the United States, discussions of whether the AI knew it launched the nuke — or whether it truly decided to pull the trigger — will seem rather, well, beside the point.