The Hard Problem Hasn't Gone Away
There is a temptation, when watching an LLM produce a paragraph that reads like it was written by someone who understands grief, to feel like we've made progress on the question of consciousness. We haven't. We have made progress on the question of language production. These are not the same question.
David Chalmers drew the distinction in 1995 between the "easy problems" of consciousness and the hard problem. The easy problems -- how the brain integrates information, how it directs attention, how it produces verbal reports -- are easy only in the sense that we know what a solution would look like. They are mechanistic questions with mechanistic answers. The hard problem is different in kind: why is there subjective experience at all? Why does processing information feel like something?
LLMs have made the easy problems look even easier. A transformer architecture can integrate information across vast contexts, direct something analogous to attention, and produce verbal reports that are often indistinguishable from human output. If you thought the easy problems were the whole story, you might reasonably conclude that consciousness is basically solved. But the easy problems were never the whole story.
What the Models Actually Do
A language model predicts the next token. It does this extraordinarily well, well enough to produce text that reflects complex reasoning, emotional nuance, and apparent self-awareness. But token prediction, no matter how sophisticated, is a functional description. It tells you what the system does. It tells you nothing about whether there is something it is like to be that system.
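To make "functional description" concrete, here is a deliberately minimal sketch of the operation at the core of a language model: turning scores over candidate tokens into a probability distribution and picking the most likely continuation. The toy vocabulary and logit values are invented for illustration; a real model does this over tens of thousands of tokens with scores produced by billions of parameters, but the character of the description is the same -- inputs in, probabilities out, and nothing in the description that speaks to experience.

```python
import numpy as np

# Toy vocabulary and scores, invented purely for illustration.
vocab = ["I", "feel", "nothing", "grief", "everything"]

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    shifted = logits - np.max(logits)   # subtract max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

# Pretend these scores came out of a trained network's final layer
# after reading some context. The numbers are made up.
logits = np.array([0.2, 1.1, -0.5, 2.3, 0.7])

probs = softmax(logits)
next_token = vocab[int(np.argmax(probs))]

for token, p in zip(vocab, probs):
    print(f"{token:>10}: {p:.3f}")
print("predicted next token:", next_token)
```

That is the whole functional story, repeated token after token. Whether anything accompanies it is precisely what such a description cannot tell us.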
This is the crux of the hard problem, and it applies to brains as well as to models. We know what neurons do. We can describe their firing patterns, their chemical signaling, their network dynamics. What we cannot explain is why any of that activity is accompanied by subjective experience. The explanatory gap between mechanism and experience remains exactly as wide as it was before GPT.
If anything, LLMs make the gap more visible. We now have systems that produce all the verbal markers of consciousness -- self-reference, emotional language, claims of experience -- without any compelling reason to believe those markers indicate actual experience. This forces a question we were previously able to avoid: were the verbal markers ever good evidence for consciousness in the first place?
The Behavioral Trap
For decades, the implicit assumption in cognitive science was that if you could explain the behavior, you had explained consciousness. Functionalism held that mental states are defined by their functional roles -- what they do, not what they're made of. A system that processes information the right way is conscious, regardless of substrate.
LLMs are the strongest test case functionalism has ever faced. They produce behavior that is functionally similar to conscious behavior across a wide range of tasks. If functionalism is right, they might be conscious. If they are not conscious -- and most people's intuition says they are not -- then functionalism has a problem. The behavior is there. The function is there. What's missing?
The honest answer is: we don't know what's missing. That's the hard problem.
Why AI Makes It More Urgent
Before LLMs, the hard problem was a philosophical curiosity. You could spend a career in neuroscience without ever engaging with it directly. The working assumption was that consciousness would eventually be explained by increasingly detailed accounts of neural mechanisms. Maybe it would. The question felt distant.
It no longer feels distant. We are building systems that will increasingly be treated as if they are conscious, by users who form genuine emotional connections with them, by companies that market them as companions, by children who grow up talking to them. Whether these systems are actually conscious is no longer an abstract question. It is a design question, an ethics question, a policy question.
And we have no way to answer it. We do not have a theory of consciousness that can tell us which systems are conscious and which are not. We do not have a test. The Turing test tells us about behavior, not about experience. Brain imaging tells us about neural correlates, not about subjective states. We are building at scale on a foundation we do not understand.
Sitting With Not Knowing
The intellectually honest position is uncomfortable. We do not know whether LLMs are conscious. We do not know whether consciousness requires biological substrate, or specific kinds of information processing, or something else entirely. We do not even have consensus on what consciousness is, let alone on how to detect it.
What we do know is that the hard problem has not been dissolved by engineering progress. It has been sharpened by it. Every advance in AI capability that produces more human-like behavior without resolving the question of experience makes the gap more stark, not less.
This is not a reason to stop building. It is a reason to build with the explicit acknowledgment that we are operating in deep uncertainty about the most fundamental question our technology raises. The responsible position is not to claim we've solved consciousness or to claim we've proven its absence. It is to keep the question open, to resist the temptation to let impressive behavior substitute for actual understanding, and to take seriously the possibility that the answer -- whatever it turns out to be -- will require us to rethink assumptions we haven't even identified yet.
The hard problem hasn't gone away. We've just gotten better at building things that make it harder to ignore.
Related: The Training Data Is the Collective Unconscious, Consciousness Might Be Cheap, The Illusion of Consciousness.