Yann LeCun Wants Us to Pursue Superhuman Intelligence (Rather Than AGI)

One of the most credentialed voices in AI research just published a paper arguing that AGI — Artificial General Intelligence, the concept every major lab claims to be chasing — is built on a flawed premise. And given that Yann LeCun has been willing to be wrong publicly before and pay the reputational cost, when he puts his name on something this pointed, it's worth taking seriously.

The paper, co-authored by researchers from Columbia University, NYU, and the startup Distyl, makes a case that is both philosophically careful and practically inconvenient for a lot of people with a lot of money invested in the AGI narrative.

The Core Argument: Human Intelligence Isn't General

The central thesis is clean. Human intelligence is not general — it is highly specialized through millions of years of evolution. We simply cannot perceive our own limitations because we have no external reference point for them. The paper uses Magnus Carlsen as its illustration: the chess world champion is not objectively good at chess in any absolute sense. He is good at chess relative to other humans. Measured against a computer, his abilities reflect the ceiling of human performance, not some universal standard of intelligence. Our perception of his genius is shaped entirely by the limitations of our species.

If human intelligence is specialized rather than general, then building something that matches or exceeds human intelligence is not the same thing as building something general. The target was never what the field said it was.

The No Free Lunch Problem Nobody Wants to Talk About

The paper systematically examines the most prominent AGI definitions in circulation and finds that none of them survive contact with their own criteria.

Definitions claiming true generality collide with the No Free Lunch theorem — a well-established result in mathematics stating that no single algorithm can perform optimally across all possible problems. Generality, in a strict sense, is mathematically impossible. Definitions that limit AGI to human-level capability aren't general by definition — they're just human. Definitions from organizations like OpenAI or from DeepMind CEO Demis Hassabis are characterized by the researchers as either impossible to measure or internally inconsistent.
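The theorem can be seen in miniature with a brute-force check. This is an illustrative sketch, not something from the paper: enumerate every possible objective function on a tiny search space, and two arbitrary fixed search orders turn out to need exactly the same average number of evaluations to find the maximum. Neither "algorithm" is better across all problems.

```python
from itertools import product

def evals_to_find_max(f, order):
    """Count evaluations until a fixed search order first sees f's maximum."""
    best = max(f)
    for i, x in enumerate(order, start=1):
        if f[x] == best:
            return i

# Every objective function f: {0, 1, 2} -> {0, 1} (2^3 = 8 functions).
functions = list(product([0, 1], repeat=3))

# Two deterministic search "algorithms": fixed visiting orders.
order_a = (0, 1, 2)
order_b = (2, 1, 0)

avg_a = sum(evals_to_find_max(f, order_a) for f in functions) / len(functions)
avg_b = sum(evals_to_find_max(f, order_b) for f in functions) / len(functions)

print(avg_a, avg_b)  # 1.5 1.5 — identical averages over all problems
```

Any search order that beats another on some subset of functions pays it back exactly on the complement. That is the No Free Lunch result in toy form, and it is why "optimal at everything" is not a coherent design target.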

This is not an abstract philosophical complaint. If the definitions don't hold up, then the benchmarks built on those definitions don't mean what the press releases say they mean. And the race to achieve something nobody has coherently defined is, at minimum, a strange way to spend hundreds of billions of dollars.

What They're Proposing Instead

The researchers propose replacing AGI with a concept they call Superhuman Adaptable Intelligence — SAI. The shift from generality to adaptability is substantive. Adaptability acknowledges that intelligence operates in context, that performance is always measured relative to something, and that the meaningful question isn't whether a system can do everything but whether it can handle novel situations effectively across a meaningful range of domains.

This framing is more honest about what current AI systems actually do and what future systems are likely to do. It also creates measurable targets. Adaptability can be tested against specific conditions. Generality, as the paper demonstrates, cannot be coherently defined, let alone tested.

Why LeCun's Involvement Changes the Weight of This

LeCun has a long record of public heterodoxy in AI. He has been openly skeptical of large language models as a path to anything resembling human cognition, arguing for years that current architectures are missing fundamental components of intelligence. He is not an AI doomer, but he is also not a booster. He is a researcher with serious credentials and a history of taking positions that make powerful people uncomfortable.

When someone with that profile co-authors a paper calling the foundational concept of the entire frontier AI industry incoherent, the appropriate response is not to dismiss it because it's inconvenient. It is to read the argument carefully and ask whether the criticism holds.

Based on what's been published, it does.

What This Means for Anyone Paying Attention to AI in Business

For marketing and growth leaders trying to make sense of AI vendor claims, this paper offers a useful interpretive frame. When a company announces progress toward AGI, or claims its model is approaching human-level general intelligence, the appropriate question is now sharper: general according to which definition, measured against what baseline, and who decided that benchmark was meaningful?

The organizations that will use AI most effectively over the next several years are not the ones most credulous about the biggest claims. They are the ones asking the most precise questions about what specific systems can actually do in specific contexts — which is, not coincidentally, exactly what the SAI framework is designed to produce.

AGI, as a concept, has been doing a lot of narrative work in the industry. It has justified valuations, fundraising rounds, and a particular kind of urgency that has compressed the space for careful thinking. If the concept itself doesn't hold up — and a serious group of researchers is now publicly arguing that it doesn't — that urgency warrants reexamination.

The race to AGI was always partly a communications strategy. Superhuman Adaptable Intelligence is a less dramatic phrase. It's also probably a more accurate one.

For help cutting through the signal and noise in AI and building a strategy grounded in what these systems actually do, Winsome Marketing's growth strategists can help you ask the right questions before you invest in the wrong answers.
