AlphaFold at Five: We Solved Protein Folding and Still Can't Cure the Common Cold

Five years ago, DeepMind's AlphaFold cracked protein folding—a problem so fundamental that solving it earned a Nobel Prize in Chemistry. The system predicts the three-dimensional structure of proteins with atomic accuracy, underpins a database of 200 million protein structures, and is used by 3.5 million researchers in 190 countries. The 2021 Nature paper describing it has been cited 40,000 times.

So where's my malaria vaccine? Where's the Alzheimer's cure? Where's the designer enzyme that eats plastic and solves climate change?

AlphaFold represents genuine scientific achievement—the kind that advances human knowledge and opens research pathways that didn't exist before. But five years in, the gap between "we can predict protein structure" and "we can design drugs that work in humans" remains stubbornly wide. We solved the structure problem. We're still figuring out what to do with the solution.

From Game Playing to Life-Saving (Theoretically)

DeepMind made its name teaching AI to beat humans at Go. Then it pivoted to something more serious: predicting how amino acid chains fold into three-dimensional proteins. This matters because protein structure determines function, and understanding function is essential for drug design, disease research, and basically all of molecular biology.

AlphaFold 2 nailed this. Then AlphaFold 3 extended predictions to DNA, RNA, and small molecules—the interactions that actually matter for understanding how biological systems work. DeepMind's Pushmeet Kohli calls protein folding a "root node problem"—solve it, and entire branches of research unlock.

That part worked. The unlocking part is slower than advertised.

The Hallucination Problem We're Politely Ignoring

AlphaFold 3 uses diffusion models, which are more "creative" than earlier architectures. In protein prediction, that creativity can mean hallucinations: the system generates plausible-looking structures that don't exist in reality, particularly in intrinsically disordered protein regions.

DeepMind addresses this with "confidence scores" and verification systems, pairing generative models with rigorous checkers. Kohli emphasizes that scientists have validated AlphaFold predictions in labs repeatedly over five years, building trust through empirical confirmation.
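Those confidence scores are concrete, not hand-waving: AlphaFold writes its per-residue confidence metric (pLDDT, 0–100) into the B-factor column of the PDB files it produces, and DeepMind's own guidance treats residues below roughly 70 as low confidence. A minimal sketch of how a researcher might flag low-confidence regions before trusting a prediction (the sample ATOM records below are fabricated for illustration):

```python
def low_confidence_residues(pdb_text: str, threshold: float = 70.0) -> set[int]:
    """Return residue numbers whose pLDDT score falls below `threshold`.

    AlphaFold stores per-residue pLDDT in the B-factor column
    (columns 61-66) of the PDB files it outputs.
    """
    flagged = set()
    for line in pdb_text.splitlines():
        if line.startswith("ATOM"):
            res_num = int(line[22:26])    # residue sequence number
            plddt = float(line[60:66])    # B-factor column holds pLDDT
            if plddt < threshold:
                flagged.add(res_num)
    return flagged


# Two fabricated ATOM records: residue 1 high confidence, residue 2 low.
sample = (
    "ATOM      1  CA  MET A   1      11.104  13.207   2.100  1.00 92.50           C\n"
    "ATOM      2  CA  ALA A   2      12.560  14.100   3.900  1.00 45.30           C\n"
)
print(low_confidence_residues(sample))  # {2}
```

The point is that filtering by confidence is cheap; experimentally validating what survives the filter is not—which is exactly the bottleneck described below.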

But here's the tension: we're using increasingly generative AI models for scientific predictions, then relying on human scientists to verify which outputs are real and which are computational fantasies. That's not AI replacing scientific discovery—that's AI creating more work for scientists, who now have to experimentally validate AI hallucinations alongside their regular research.

The verification bottleneck doesn't disappear just because we automated hypothesis generation.

The "AI Co-Scientist" That Still Needs Human Supervision

DeepMind is launching "AI co-scientist"—an agentic system built on Gemini 2.0 that generates hypotheses, debates itself, and suggests experimental approaches. Multiple AI agents argue with each other about interpretations, then produce research proposals for humans to actually test.

Kohli frames this as partnership: AI handles the "how" of solving problems while humans focus on "what" questions are worth asking. Researchers at Imperial College used the system to study viruses that hijack bacteria, potentially opening pathways for tackling antimicrobial resistance.

But notice what happened: the AI "rapidly analyzed decades of published research" and arrived at a hypothesis that matched what the Imperial team had already spent years developing experimentally. The system compressed literature synthesis, but humans still designed validation experiments and understood clinical significance.

This is useful. It's also not the revolutionary transformation we were promised. It's expensive computational literature review.

The Next Five Years: Simulating Cells We Still Can't Cure

Kohli's vision for the next five years involves simulating complete human cells—understanding exactly how DNA gets read, how signaling molecules produce proteins, how the whole system functions. If we could reliably simulate cells, we could test drug candidates computationally, understand disease mechanisms fundamentally, and design personalized treatments.

If. Could. Potentially.

These are the same conditional promises we heard five years ago when AlphaFold 2 launched. Solve protein folding, unlock new branches of research, transform medicine and biology. We solved protein folding. The transformation is pending.

The problem isn't that AlphaFold failed—it succeeded spectacularly at what it was designed to do. The problem is that computational prediction doesn't automatically translate to clinical reality. Biology is messier than structure prediction. Diseases involve cascading failures across multiple systems. Drugs that work in silico fail in living organisms for reasons we still don't fully understand.

The Structure Isn't the Disease

Knowing protein structure helps researchers understand what might go wrong. It doesn't tell them how to fix it, or whether fixing it won't break something else, or whether a fix that works in cells will work in tissues, or whether a tissue fix will survive the immune system, or whether surviving the immune system won't cause worse problems than the original disease.

That's the gap between computational biology and actual medicine. AlphaFold bridges the first part beautifully. The rest remains stubbornly analog.

We're not dismissing the achievement—predicting 200 million protein structures is extraordinary, and the researchers using AlphaFold are doing legitimate science that advances human knowledge. But five years in, it's reasonable to ask when "advancing knowledge" translates to "curing diseases," and the honest answer is: we still don't know.

Meanwhile, we're already announcing the next big thing: AI co-scientists, cellular simulations, designer enzymes for climate change. The hype cycle moves faster than the research does.

If you need help separating AI capabilities from AI marketing or building strategies around technologies that solve current problems rather than theoretical future ones, Winsome Marketing specializes in cutting through computational optimism.
