Yann LeCun Is Reportedly Leaving Meta to Chase "World Models"

Written by Writing Team | Nov 19, 2025

Yann LeCun, Meta's chief AI scientist for fundamental research and one of the most influential figures in modern artificial intelligence, is reportedly leaving the company to found a startup focused on "world models." According to reports from the Financial Times and Wall Street Journal, the 65-year-old AI pioneer has concluded that large language models—the technology driving ChatGPT, Claude, and Meta's own Llama—are fundamentally limited and represent a "dead end" for achieving human-level intelligence.

This isn't a sudden pivot. LeCun has been telegraphing his skepticism about LLMs for over a year, calling them "basically an off-ramp, a distraction, a dead end" in April 2024. What's changed, apparently, is that he's no longer willing to stay at Meta while the company doubles down on exactly the technology he's lost faith in.

The Org Chart That Tells the Story

The immediate context for LeCun's departure involves some awkward corporate reshuffling. This past summer, 28-year-old Alexandr Wang, the founder of Scale AI, became head of AI at Meta, effectively making an LLM evangelist LeCun's boss. Meta also brought in Shengjia Zhao, a co-creator of ChatGPT, as another chief scientist this year, with the company announcement touting a scaling "breakthrough" Zhao delivered.

LeCun says he's lost faith in scaling.

Meta's AI operation apparently has an eccentric organizational structure split into multiple separate groups, a setup that contributed to hundreds of layoffs last month as the company tried to rationalize the org chart. When you have multiple "chief scientists" and competing visions of which technology represents the future, someone eventually has to leave. In this case, it's the elder statesman who thinks everyone else is pursuing a dead end.

The irony is thick: Mark Zuckerberg wrote in July that Meta's in-house AI development had advanced so remarkably that "superintelligence is now in sight." LeCun evidently disagrees about the path to get there.

The Rotating Cube Problem

LeCun's critique of LLMs centers on a fundamental limitation he's explained repeatedly in public speeches, most notably at the AI Action Summit in Paris in February. His argument: LLMs don't understand the world the way humans or even animals do.

His go-to thought experiment is elegantly simple: "If I tell you 'imagine a cube floating in the air in front of you. Okay now rotate this cube by 90 degrees around a vertical axis. What does it look like?' It's very easy for you to kind of have this mental model of a cube rotating."

An LLM can write a detailed description of a rotating cube. It can generate a dirty limerick about a hovering cube. It can produce code to render a 3D cube. What it can't do is actually model the spatial reasoning required to understand what rotating a cube means in the way a human immediately grasps it.

LeCun argues that this limitation stems from how LLMs are trained. While these models process text data equivalent to 450,000 years of human reading, a four-year-old child who has been awake for 16,000 hours has taken in roughly 1.4 x 10^14 bytes of sensory data about the physical world through vision and touch. That's more raw data than any LLM is trained on, and it's fundamentally different information: embodied, spatial, causal understanding rather than statistical patterns in text.

"We can't even reproduce cat intelligence or rat intelligence, let alone dog intelligence," LeCun says. "They can do amazing feats. They understand the physical world. Any housecat can plan very highly complex actions. And they have causal models of the world."

What Are World Models?

LeCun's vision for world models, which he has already begun developing at Meta (complete with an introductory video that asks you to imagine a rotating cube), involves systems that maintain a running "estimate of the state of the world" as an abstract representation of everything relevant in context. Rather than predicting text one token at a time, these models would predict "the resulting state of the world that will occur after you take that sequence of actions."

This would enable, LeCun argues, systems that can genuinely plan actions hierarchically to fulfill objectives and systems that can actually reason about causality rather than pattern-match from training data. He also claims world models would have more robust safety features because control mechanisms would be built into the architecture rather than applied as post-hoc fine-tuning to mysterious black boxes.
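
To make that less abstract, here is a rough sketch in Python (PyTorch) of the kind of interface such a system implies. This is our illustration, not Meta's or LeCun's code; the class, the dimensions, and the simple goal-distance score are all assumptions.

```python
# Toy world-model sketch: predict the next abstract state from (state, action),
# then roll forward through a sequence of actions. Illustration only.
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Learned transition function over abstract state representations."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.transition = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # One step: predicted next state given current state and an action.
        return self.transition(torch.cat([state, action], dim=-1))

    def rollout(self, state: torch.Tensor, actions: list) -> torch.Tensor:
        """Predict the state of the world after taking a sequence of actions."""
        for action in actions:
            state = self(state, action)
        return state

if __name__ == "__main__":
    wm = WorldModel(state_dim=16, action_dim=4)      # untrained, illustration only
    current = torch.randn(1, 16)                      # abstract estimate of the state of the world
    plan = [torch.randn(1, 4) for _ in range(3)]      # a candidate three-step action sequence
    goal = torch.randn(1, 16)                         # abstract representation of the objective
    predicted = wm.rollout(current, plan)
    print("distance from goal:", torch.norm(predicted - goal).item())
```

A planner would then search over candidate action sequences for the one whose predicted outcome lands closest to the goal, which is the kind of objective-driven planning LeCun is describing.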

The technical approach involves what LeCun describes as energy-based models: "You want an energy function that measures incompatibility, and given an x, find a y that has low energy for that x." The system looks at the current state of the world and some desired state, and searches for the answer or action sequence that is most compatible with both, i.e., the one with the lowest energy.
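
Purely as an illustration of that energy-based framing (again, our sketch under assumed names and shapes, not LeCun's architecture): a learned energy function E(x, y) scores how incompatible a candidate y is with an observation x, and inference is a search, here plain gradient descent on y, for a low-energy answer.

```python
# Illustrative energy-based inference, not LeCun's actual system.
# E(x, y) returns a scalar "incompatibility" score; inference searches for
# a y with low energy given x by gradient descent on y itself.
import torch
import torch.nn as nn

class EnergyFunction(nn.Module):
    def __init__(self, x_dim: int, y_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def infer_low_energy_y(energy: EnergyFunction, x: torch.Tensor, y_dim: int,
                       steps: int = 100, lr: float = 0.1) -> torch.Tensor:
    """Given x, descend on y to find a candidate the energy function deems compatible."""
    y = torch.zeros(x.shape[0], y_dim, requires_grad=True)
    optimizer = torch.optim.SGD([y], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        energy(x, y).sum().backward()
        optimizer.step()
    return y.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    E = EnergyFunction(x_dim=8, y_dim=4)   # untrained, illustration only
    x = torch.randn(2, 8)                   # two observed "states of the world"
    print(infer_low_energy_y(E, x, y_dim=4))
```

Training such an energy function so that "low energy" actually tracks real-world compatibility is, of course, the hard research problem.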

If this sounds abstract and tentative, it should. LeCun is describing a moonshot research program, not a product roadmap. He hasn't even confirmed he's founding a new company, though the Financial Times report suggests that's the plan.

The Wearables Bet

Part of LeCun's motivation stems from his work on Meta's AI smart glasses and his conviction that future AI assistants will need to perceive and act in the physical world through wearables, much the way a person would. LLMs, he argues, can't do this effectively because they lack a grounded understanding of physical reality.

This connects to his broader vision: if we're going to build AI that assists humans in the physical world rather than just chatting in text boxes, we need models that actually understand spatial relationships, object permanence, physical causality, and the structure of embodied experience.

It's a compelling argument, even if the implementation path remains unclear. Every frontier lab is working on multimodal models that process images, video, and audio alongside text. But LeCun seems to be arguing for something more fundamental—a different architecture that doesn't treat visual or spatial reasoning as add-ons to text models but as primary modes of understanding.

The Credibility Question

LeCun is not some random critic shouting from the sidelines. He's a Turing Award winner, one of the godfathers of deep learning, and a researcher with decades of contributions to the field. When he says LLMs are a dead end for AGI, it carries weight—even if other equally credible researchers disagree vehemently.

AI critic Gary Marcus has pointed out the apparent contradiction in LeCun's position: he spent years defending LLMs from Marcus' critiques, then flip-flopped to calling them dead ends. But research positions evolving based on new evidence isn't inconsistency—it's how science is supposed to work.

The real question is whether LeCun is correct that world models represent the path forward, or whether he's chasing a different dead end while LLMs continue improving through better training techniques, synthetic data, test-time compute, and architectural refinements.

What This Means for AI Development

If LeCun succeeds in building something genuinely transformative with world models, it would represent a major paradigm shift away from the LLM-centric approach dominating current AI development. If he fails—or if progress takes a decade of expensive research with minimal intermediate results—it validates the scaling maximalists who think we just need bigger LLMs trained on more data.

Either way, having one of AI's most influential researchers leave a frontier lab to pursue a fundamentally different technical approach is significant. It suggests the field isn't as unified on the path to AGI as the relentless focus on bigger models might imply.

It also means Meta is fully committed to the LLM scaling path, with leadership that believes in that approach even as its former chief scientist for fundamental research publicly calls it a distraction. That's a clear strategic bet: Meta thinks Llama and its successors represent the future, not whatever LeCun builds next.

For now, we wait to see if LeCun confirms his departure and announces whatever comes next. The rotating cube thought experiment will either become a famous illustration of the limitation that doomed LLMs, or a curious historical footnote about the time a brilliant scientist chased the wrong problem.

Place your bets accordingly.

Source: "Imagine a Cube Floating in the Air: The New AI Dream Allegedly Driving Yann LeCun Away from Meta" by Mike Pearl, Gizmodo, November 15, 2025