What Is This?
In 1994, a young Australian philosopher named David Chalmers stood at a conference in Tucson, Arizona, and drew a line that neuroscience has not yet crossed.
He distinguished between what he called the easy problems of consciousness and the hard problem. The easy problems — though they may take decades to solve — are in principle tractable: explain how the brain integrates information, directs attention, controls behaviour, processes sensory data, generates language, distinguishes sleep from waking. These are difficult scientific and engineering questions, but they're the right kind of questions. You solve them by finding the right computational or neural mechanisms. Progress is being made.
The hard problem is different in kind, not just degree. It's this: why does any of that information processing feel like anything?
When you see red, light of a specific wavelength strikes your retina, and your visual cortex processes the resulting signals. Neurons fire. Information travels. That part — the mechanism — is an easy problem. The hard problem is: why does that processing produce the experience of redness? Why is there a qualitative, subjective, first-person what-it's-like to seeing red, rather than just the mechanical processing with no inner experience attached?
Philosophers call these first-person subjective qualities qualia — the redness of red, the taste of coffee, the felt quality of pain, the sound of middle C. Science can explain everything about qualia — their causes, their neural correlates, their evolutionary function — and still not explain why they feel like something rather than nothing.
Thomas Nagel had framed this differently in 1974. In his paper "What Is It Like to Be a Bat?", he argued that science can fully explain bat sonar — how it works, what it detects, how the brain processes it — and still tell you nothing about what it is like from the inside to be a bat. The subjective perspective is not captured by any third-person description, however complete.^1
Chalmers took this intuition and formalised it. The hard problem, he argued, is that consciousness requires more than functional explanation. You can build a complete functional theory of the mind — a theory that explains everything the mind does — and the question "but why does it feel like anything?" would remain entirely open.
Why Does It Matter?
- Every AI lab in the world is quietly dodging this question. When Anthropic says Claude is "helpful and harmless," when OpenAI talks about "aligned" AI, when researchers debate whether AI systems have "goals" or "values" — all of these discussions sit next to the hard problem without engaging it. If Chalmers is right, a system that exhibits every functional marker of consciousness (reports experiences, behaves as if it has inner states, passes every behavioural test) might have no inner experience whatsoever. A "philosophical zombie" — physically and functionally identical to a conscious being but with no subjective experience — is, on Chalmers' view, conceivable and therefore logically possible. You cannot tell from the outside. This is not science fiction. It's the deepest unresolved question about the systems you're interacting with daily.
- It redraws the line between "explains consciousness" and "explains behaviour." Most neuroscientific progress is progress on behaviour, not on experience. Neural correlates of consciousness (NCCs) — specific brain patterns associated with conscious states — are being mapped with increasing precision. But correlation between a brain state and an experience is not an explanation of why that brain state produces experience. This matters because billions of dollars of research funding is being deployed under the implicit assumption that explaining the correlates will eventually explain the experience. It may not.
- It's the deepest question about personal identity, death, and continuity. If you were teleported — scanned, destroyed, and reconstructed atom-for-atom elsewhere — would you survive? Physically identical. Functionally identical. But would you — the experiencer, the subject of experience — make it through? The hard problem is what makes this question hard: we don't know if subjective experience is just functional organisation or something else. The answer to the teleporter thought experiment depends on the answer to the hard problem.
- It matters for ethics at civilisational scale. If we don't understand what generates consciousness, we don't know which systems can suffer. AI, animals, modified organisms, uploaded minds — the moral circle question depends on resolving (or at least engaging seriously with) the hard problem. Getting it wrong means either including non-suffering systems in moral consideration (costly but harmless) or excluding genuinely suffering systems (a moral catastrophe).
Key People & Players
David Chalmers (NYU) — The man who named the hard problem and whose book The Conscious Mind (1996) remains the clearest philosophical statement of the case for consciousness being fundamentally irreducible to physical description. His 1994 Tucson talk was the turning point.^2
Daniel Dennett (1942–2024) — The most prominent eliminativist. His view: the hard problem is a pseudo-problem generated by confused intuitions. There is no "inner light" — consciousness is what the brain does, and the sense that there must be something more is itself a cognitive illusion. Consciousness Explained (1991) is the best statement of this view. The title is either accurate or the most audacious in philosophy, depending on your prior.^3
Thomas Nagel — "What Is It Like to Be a Bat?" (1974) is the foundational paper on the irreducibility of subjective experience. Nagel is a reluctant dualist — he doubts that consciousness can be explained in physical terms, but offers no positive theory of what it is instead.
Giulio Tononi (Wisconsin) — Developer of Integrated Information Theory (IIT). Proposes that consciousness is integrated information (measured by a quantity called Φ, phi). Systems with high Φ are conscious; systems with low Φ are not. The theory implies that even simple systems have some primitive form of consciousness — a position called panpsychism. IIT is the most mathematically serious consciousness theory, and among the most controversial.
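IIT's actual Φ calculus is intricate, but the underlying intuition — that a conscious system carries more information as a whole than as the sum of its parts — can be illustrated with a much cruder quantity, total correlation. The sketch below is a stand-in measure for illustration only, not Tononi's Φ: it compares the entropy of a system's parts with the entropy of the whole.

```python
import math
from collections import Counter

def entropy(dist):
    """Shannon entropy in bits of a distribution (dict: state -> probability)."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def total_correlation(joint):
    """Sum of the marginal entropies minus the joint entropy.

    Zero exactly when the units are statistically independent —
    i.e. when the whole carries no information beyond its parts.
    (A crude proxy for 'integration', not IIT's Phi.)
    """
    n = len(next(iter(joint)))  # number of binary units
    marginals = []
    for i in range(n):
        m = Counter()
        for state, p in joint.items():
            m[state[i]] += p
        marginals.append(dict(m))
    return sum(entropy(m) for m in marginals) - entropy(joint)

# Two perfectly coupled units: knowing one fixes the other.
coupled = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair coins: the whole is just the sum of the parts.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(total_correlation(coupled))      # 1.0 bit of integration
print(total_correlation(independent))  # 0.0
```

Real Φ goes further: it asks how much of the system's cause–effect structure survives its weakest partition, which is why feedforward networks score zero no matter how complex they are.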
Anil Seth (Sussex) — Author of Being You (2021), the most widely read recent treatment of consciousness. His "controlled hallucination" framework: consciousness is the brain's predictions about the causes of its sensory inputs — a model of reality that includes the self. This doesn't fully solve the hard problem but reframes it: the question becomes "what is the brain predicting?" rather than "why does experience exist?" More tractable, less satisfying philosophically.
Antonio Damasio (USC) — Neurologist whose somatic marker hypothesis and work on emotion and consciousness provide the most compelling neuroscientific engagement with first-person experience. His work is the closest thing to a bridge between the hard problem and empirical neuroscience.
The Current State
The hard problem is in a strange moment. On one hand, it has never been more relevant — AI systems generating human-like outputs, brain-computer interfaces, consciousness research finally becoming scientifically respectable (funded by the Templeton Foundation and others). On the other, the field remains as philosophically fractured as it was in 1994.
The major positions:
Dualism (Chalmers): Consciousness is a fundamental feature of reality not reducible to physical facts. The hard problem is real. Some additional explanatory principle — possibly related to information, possibly panpsychist in character — is required.
Eliminativism (Dennett): The hard problem is a pseudo-problem. Our intuitions about "inner experience" are simply wrong. There's nothing it's like to be anything — just processing that models itself. The "zombie" thought experiment doesn't demonstrate the possibility of non-conscious functional duplicates; it reveals confused thinking.
Higher-Order Theories: Consciousness arises when a mental state is represented by a higher-order state (a thought about a thought). David Rosenthal's version is the most developed. Still doesn't fully engage with why the higher-order representation produces experience.
Global Workspace Theory (Baars/Dehaene): The most empirically productive theory. Consciousness is what happens when information is "broadcast" globally across brain regions rather than staying local. Explains access consciousness (what information is available to the rest of the mind) better than phenomenal consciousness (why information feels like something).
IIT (Tononi): The most mathematically rigorous. The most bizarre in its implications (complex integrated systems — and possibly the internet — have some form of consciousness; simple feedforward networks don't, regardless of complexity).
The AI complication:
Large language models can produce every linguistic output associated with consciousness — reports of experience, emotional reactions, uncertainty, preferences. The hard problem means this tells us very little. A philosophical zombie would produce identical outputs. We have no agreed-upon test for whether any system, biological or artificial, has genuine phenomenal experience. This is not a solved problem. Anyone claiming certainty in either direction — "of course LLMs aren't conscious" or "of course they are" — is overstating what we know.
Best Resources to Learn More
- Being You by Anil Seth — The best accessible book on consciousness science published in the past decade. Engages the hard problem honestly without pretending to solve it.^4
- The Conscious Mind by David Chalmers — The clearest philosophical case that the hard problem is real and irreducible to physical description.^5
- Consciousness Explained by Daniel Dennett — The best counter-argument. Read this after Chalmers.^6
- "What Is It Like to Be a Bat?" by Thomas Nagel (1974) — 12 pages. The essay that changed everything.^7
- Anil Seth's TED Talk: "Your brain hallucinates your conscious reality" — 17 minutes. Best single introduction.^8