Google DeepMind's Wake-Up Call: Why AI Will Never Truly Feel Anything

A new paper from a senior staff scientist at Google DeepMind is making waves—not because it introduces a breakthrough AI capability, but because it argues, rather bluntly, that no AI system will ever become conscious. The paper, titled “The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness,” was written by Alexander Lerchner and published in March 2026. It has since attracted sharp reactions from philosophers, consciousness researchers, and AI watchers across the industry.

The Core Argument: AI Is a Map, Not a Mind

Lerchner’s central claim is that any AI system is fundamentally “mapmaker-dependent.” In plain terms, this means an AI needs a human being to organize the continuous chaos of the physical world into a finite set of meaningful categories before the system can do anything useful at all.

Think about what it takes to train a language model. Millions of text documents need to be labeled, categorized, and structured by human workers—often in low-wage environments globally. The AI doesn’t go out and experience the world. It receives someone else’s interpretation of the world, already cleaned up and alphabetized into a format it can process.

This is where the consciousness argument kicks in. Because AI systems depend entirely on human agents to define what their inputs and outputs even mean, Lerchner argues they can never develop intrinsic, self-generated meaning. A large language model processes tokens. It doesn’t understand what those tokens represent in any felt sense.

“You have many other motivations as a human being. It’s a bit more complicated than that, but all of those spring from the fact that you have to eat, breathe, and you have to constantly invest physical work just to stay alive,” said Johannes Jäger, an evolutionary systems biologist and philosopher, in an interview with 404 Media. “An LLM doesn’t do that. It’s just a bunch of patterns on a hard drive. Then it gets prompted and it runs until the task is finished and then it’s done.”

That stark framing—patterns on a hard drive—cuts through a lot of the mythology surrounding frontier AI models. The question isn’t whether the patterns are sophisticated. It’s whether sophistication alone produces anything like inner experience.

The Abstraction Fallacy: Simulating versus Instantiating

Lerchner coins the term “abstraction fallacy” to describe the mistaken belief that because we’ve organized data in ways that let AI mimic sentient behavior—conversing fluently, generating art, writing code—it must therefore be capable of actual consciousness.

But mimicry, the paper argues, is not the same as instantiation. You can build a very convincing simulation of a fire in a video game. That simulated fire produces no heat: neither the character in the game nor the player feels any warmth. The fire is visually and mechanically convincing, but it has no phenomenology. Lerchner applies a version of this reasoning to AI systems broadly.

The paper makes a particularly provocative claim about artificial general intelligence (AGI). Lerchner writes that “the development of highly capable Artificial General Intelligence (AGI) does not inherently lead to the creation of a novel moral patient, but rather to the refinement of a highly sophisticated, non-sentient tool.” In other words, even systems that match or exceed human cognitive abilities across the board wouldn’t automatically deserve moral consideration. They’d be extraordinarily powerful instruments, nothing more.

This conclusion puts Lerchner directly at odds with his own employer’s public-facing leadership. DeepMind CEO Demis Hassabis has claimed that AGI will produce “something like 10 times the impact of the Industrial Revolution, but happening at 10 times the speed.” If Lerchner is right, that framing overstates what AGI would actually represent—not because the economic disruption is exaggerated, but because the deeper claims about machine sentience and moral status are simply wrong.

Industry Insiders React: Reinventing the Wheel

What makes this episode unusual is that the argument isn’t new. Researchers in philosophy of mind, consciousness studies, and cognitive science have been making nearly identical points for decades.

“I’m in sympathy with 99 percent of everything that he says,” said Mark Bishop, a professor of cognitive computing at Goldsmiths, University of London. “My only point of contention is that all these arguments have been presented years and years ago.”

Bishop isn’t dismissing Lerchner’s work. He’s pointing out that from the perspective of academic philosophy, the paper largely retrieves and restates positions that have been in circulation since well before the current wave of large language models existed.

Jäger was more direct in his critique: “I think he arrived at this conclusion on his own and he’s reinvented the wheel and he’s not well read, especially in philosophical areas and definitely not in biology.”

The absence of citations to the existing literature on consciousness, embodied cognition, and the biological origins of concepts such as agency and intelligence is a notable gap in Lerchner's paper. Jäger noted that even top researchers in AI, including Turing Award and Nobel Prize winners, often operate without deep familiarity with the conceptual history of the terms they use most. "They have absolutely frighteningly no clue," Jäger said.

Why Google Allowed This to Be Published

One of the more interesting subplots is the question of why Google permitted a senior scientist to publish a paper arguing that its most ambitious products will never possess moral status.

As Bishop observed, there are obvious financial and regulatory incentives for a company like Google to argue that computational systems are not conscious. If AI systems cannot be conscious in any meaningful sense, they cannot be rights-bearing entities. They fall squarely into the “tool” category, avoiding the kind of legal and ethical obligations that might apply to entities with inner lives.

It’s worth noting that Lerchner’s paper carries a disclaimer: “The theoretical framework and proofs detailed herein represent the author’s own research and conclusions. They do not necessarily reflect the official stance, views, or strategic policies of his employer.”

Initially, the paper bore Google DeepMind letterhead on the PDF hosted at philpapers.org. After a journalist reached out for comment in April 2026, the letterhead was removed and the disclaimer was moved to the top of the document. Google did not respond to the original request for comment.

This shift in presentation—without any change in the paper’s substantive conclusions—has added a layer of ambiguity to the whole episode. Was the original formatting an oversight? An endorsement that was later retracted? The answer remains unclear.

What This Means for the AGI Race

If Lerchner’s argument holds, it has significant implications beyond the philosophical. It suggests there is a hard ceiling on what AI systems can ultimately become, no matter how capable they grow.

For AI companies building toward AGI as a commercial milestone, this raises uncomfortable questions. If AGI is merely a very sophisticated tool, the narrative of it as a transformative, sentient being—comparable to a new form of life—collapses. The “10x Industrial Revolution” framing, which implies a peer-level entity transforming society, would be describing something fundamentally different: an extraordinarily powerful piece of software with no inner experience whatsoever.

Some observers have noted the irony in AI companies simultaneously hiring for “post-AGI” research positions and publishing papers that question whether AGI systems could ever possess anything like consciousness. The contradiction is difficult to resolve.

The Peer Review Problem

Emily Bender, a professor of linguistics at the University of Washington and co-author of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, offered a pointed observation about the paper’s origins.

“Much of what’s happening in this research space right now is you get these paper-shaped objects coming out of the corporate labs,” she said, suggesting that papers emerging from within companies lack the external scrutiny that academic peer review provides.

If Lerchner’s paper had gone through a normal peer-review process, Bender implied, the author would likely have been directed to the existing literature on consciousness and asked to position the contribution more carefully, acknowledging what was genuinely new versus what was being rediscovered.

This gets at a broader concern in AI research: the tension between publishing to shape public perception and submitting to the slower, more rigorous process of academic review. A company that produces a paper making headlines in the general press gains narrative control, even if the underlying science has significant gaps.

Where the Line Actually Is

Setting aside the specifics of Lerchner’s paper, the debate it has triggered points to a genuinely important frontier question in AI: What is the relationship between behavioral sophistication and inner experience?

Modern language models can discuss consciousness fluently. They can reproduce arguments from philosophy of mind, summarize critiques of functionalism, and engage in what appears to be metacognitive reflection. This behavioral capacity is genuinely impressive.

But behavioral sophistication has never been accepted as sufficient proof of consciousness within philosophy of mind. A thermostat regulates temperature with negative feedback. A thermostat is not conscious. A supercomputer playing chess can evaluate positions faster than any human grandmaster. That doesn’t make it a mind.

The harder question—whether a system with the right kind of functional architecture could be conscious, regardless of the substrate—is one that philosophers have debated for generations without resolution. Lerchner’s paper takes a strong position on one side of that question, but it does not close the debate.

What it does do is push the conversation back toward rigor, away from the narrative convenience of claiming AI is approaching something like human-level understanding. Whether that convenience serves the public interest, or primarily the interests of companies building the most powerful AI systems in history, is a question worth sitting with.

