
Google DeepMind Scientist: AI Will Never Be Conscious

We've known this. We just haven't been saying it loudly enough.

A senior staff scientist at Google DeepMind, Alexander Lerchner, has published a paper arguing that no AI system—no large language model, no computational architecture—will ever achieve consciousness. The paper is titled "The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness." Philosophers who reviewed it called the argument sound. They also noted, with varying degrees of patience, that this argument has existed in academic literature for decades.

So: a Google scientist publishes a philosophy paper that philosophers already wrote. Google quietly removes its own branding from the PDF after a journalist asks questions. And somehow this is a news story in 2026.

It is—but not for the reasons most headlines suggest.

Why LLMs Cannot Be Conscious, Explained Without the Jargon

Lerchner's core argument is that AI systems are "mapmaker-dependent." They require a human to first organize the world into meaningful categories—to label images, structure training data, define what counts as a useful output. The abstraction fallacy, as he frames it, is the belief that because AI can manipulate language and symbols in ways that look like thought, it might actually be thinking.

It can't. Not without a body. Not without the fundamental biological pressure of staying alive.

Evolutionary systems biologist Johannes Jäger put it cleanly: "An LLM doesn't have any intrinsic meaning. Its meaning comes from the way that some human agent externally has defined a meaning." The model runs when prompted. Then it stops. It has no stake in its own existence because it has no existence to protect.

This is not a fringe position. It is, in fact, the majority position among philosophers of mind who study this seriously. Lerchner arrived at it independently—and, according to his critics, without citing most of the literature that got there first.

The AGI Hype Gap Is the Actual Story

Here's the part worth your attention.

DeepMind's own CEO, Demis Hassabis, has publicly claimed AGI will have "10 times the impact of the Industrial Revolution, happening at 10 times the speed." His own senior scientist just published a paper arguing that the theoretical ceiling on what AI can achieve is fundamentally lower than that framing implies—because without consciousness, AGI remains, in Lerchner's own words, "a highly sophisticated, non-sentient tool."

Both things are true at Google simultaneously. That's not a contradiction. That's a company with one message for investors and another for philosophers.

Mark Bishop, professor of cognitive computing at Goldsmiths, noted that Google may have its own reasons to be comfortable with the conclusion that AI can't be conscious—primarily that conscious AI would invite legislation, rights frameworks, and regulatory scrutiny that no one in Silicon Valley wants.

The paper's disclaimer was moved from the bottom of the document to the top after a journalist inquired. Google did not respond to that journalist's request for comment.

What the AI Research Community Gets Wrong About Itself

The sharpest quote in this story comes from Jäger, who said AI researchers have "absolutely frighteningly no clue" about the conceptual history of terms like "intelligence" and "agency" that they use constantly. That includes, in his telling, Turing Award and Nobel Prize winners.

Emily Bender, co-author of The AI Con, called papers like Lerchner's "paper-shaped objects": outputs that look like peer-reviewed science but bypass the review process that would have flagged, among other things, the failure to cite the existing literature.

This insularity has direct consequences for everyone building on top of AI systems. Marketing and growth teams investing in AI infrastructure are making decisions based on a public narrative shaped by people who, by their own colleagues' admission, aren't reading broadly enough to know what they don't know.

Conscious or Not, AI Is Already Consequential

None of this means AI tools aren't useful, powerful, or worth serious strategic investment. They are. But the question of consciousness isn't just philosophical dinner-party material. It shapes regulatory frameworks, liability standards, and the ethical commitments companies make—or quietly walk back—when the pressure increases.

The gap between what AI companies say in press releases and what their own researchers publish in papers is growing. Paying attention to that gap is no longer optional for businesses that want to use these tools responsibly.

We think about this distinction constantly. If you want guidance grounded in what AI can actually do—not what its CEOs claim it will—the Winsome Marketing team is here.