Sometimes history doesn't repeat—it just gets an algorithm upgrade. Writing in The Guardian, historian Edna Bonhomme drew a chilling parallel between today's AI-powered medical misinformation and the charlatans of centuries past. Her timing couldn't have been more prescient.
Just days after Bonhomme's piece, we witnessed the most spectacular example of what she warned against: RFK Jr.'s "Make America Healthy Again" report, a 73-page government document that appears to have been ghostwritten by ChatGPT, complete with fabricated citations and nonexistent studies. The report, in which at least 21 links were dead and which cited titles of papers that don't exist, represents exactly the kind of AI-amplified quackery Bonhomme predicted.
Her insight was simple yet profound: "Medical charlatans have existed through history. But AI has turbocharged them."
When Government Becomes the Charlatan
Bonhomme's analysis began with a personal moment—consulting ChatGPT about her baby's health and feeling wary about the interaction. That wariness proved prophetic. The epidemiologist Katherine Keyes, who was listed in the MAHA report as the first author of a study on anxiety and adolescents, told NOTUS: "The paper cited is not a real paper that I or my colleagues were involved with."
This isn't just academic sloppiness—it's exactly what Bonhomme warned about. She noted that AI "has been found not only to include false citations but also to 'hallucinate', that is, to invent nonexistent material." The MAHA report became a masterclass in this phenomenon: its initial version contained 522 citations, multiple of which referenced studies that simply don't exist.
The Washington Post's analysis revealed the smoking gun: some of the references in the MAHA report included "oaicite" markers attached to their URLs—a strong indicator that AI was used to create them. It's like finding a barcode on a supposedly ancient artifact.
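That check is trivial to reproduce. Here is a minimal sketch: given a list of citation URLs extracted from a document, flag any carrying the telltale "oaicite" fragment that ChatGPT sometimes leaves in references it generates. (The function name and sample URLs are illustrative, not taken from the report itself.)

```python
def flag_suspect_urls(urls):
    """Flag citation URLs carrying an 'oaicite' fragment, a marker
    ChatGPT sometimes leaves in references it generates."""
    return [u for u in urls if "oaicite" in u]

# Hypothetical sample: one ordinary PubMed link, one AI-residue link.
sample = [
    "https://pubmed.ncbi.nlm.nih.gov/12345678/",
    "https://example.com/study#:~:text=oaicite:3",
]
print(flag_suspect_urls(sample))  # prints only the 'oaicite' URL
```

A substring scan like this won't catch every fabricated reference, of course—it only surfaces citations where the generation residue was left in place.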
Bonhomme's historical perspective illuminates why this feels so familiar yet so dangerous. She reminded us that during the 17th and 18th centuries, quacks like Buonafede Vitali and Giovanni Greci sold "balsamo simpatico (sympathetic balm) to treat venereal diseases" from public squares. They had charisma, a platform, and dubious remedies.
Today's digital charlatans have something far more powerful: the veneer of scientific legitimacy. As Bonhomme noted, "This disinformation may appear on platforms that we believe to be reliable, such as search engines, or masquerade as scientific papers, which we're used to seeing as the most reliable sources of all."
The MAHA report exemplifies this perfectly. It was presented as "gold standard" science and "radical transparency" by the Department of Health and Human Services. Yet the report simply takes small, targeted studies and generalizes them into nationwide evidence for the very theories Kennedy has been pushing for years.
What makes this particularly insidious is how AI amplifies existing biases and conspiracy theories. The technology journalist Karen Hao, whom Bonhomme quoted, asked the crucial question: "How do we govern artificial intelligence?" The MAHA report debacle shows what happens when we don't.
Steven Piantadosi, a professor of psychology and neuroscience at the University of California, Berkeley, explained the fundamental problem: "The problem with current AI is that it's not trustworthy, so it's just based on statistical associations and dependencies. It has no notion of ground truth, no notion of … a rigorous logical or statistical argument."
Unlike science, which strives to uncover truth, AI—as Bonhomme noted—"has no interest in whether something is true or false." It simply pattern-matches and regurgitates, creating what appears to be authoritative content without any commitment to accuracy.
The stakes couldn't be higher. As Bonhomme observed, "RFK Jr believes that he is an arbiter of science, even if the Maha report appears to have cited false information." The report's influence extends far beyond academic circles—it's shaping government policy that affects millions of Americans.
Art Caplan, a bioethicist at the New York University Grossman School of Medicine, put it bluntly: "It's the kind of thing that gets a senior researcher into deep trouble, potentially losing their funding. It's the kind of thing that leads to a student getting an F. It's inexcusable."
Yet when pressed, Department of Health and Human Services spokesman Andrew Nixon said that "minor citation and formatting errors have been corrected, but the substance of the MAHA report remains the same". This dismissive response misses the point entirely—if the citations are fabricated, how can we trust the substance?
Bonhomme's analysis becomes even more relevant when we consider how AI systems learn. They're trained on existing data, including both legitimate research and conspiracy theories. When someone like RFK Jr.—who has spent years promoting debunked theories about vaccines and autism—uses AI to support his predetermined conclusions, the system obligingly generates "evidence" that fits his narrative.
The result is what Bonhomme called "an environment where fact and fiction meld into each other, leaving minimal foundation for scientific objectivity." The MAHA report exemplifies this perfectly, cherry-picking real studies while fabricating others to support predetermined conclusions.
Bonhomme's most important insight was that "individual solutions can be helpful in assuaging our fears, but we require robust and adaptable policies to hold big tech and governments accountable regarding AI misuse." The MAHA report scandal proves her point—when government agencies themselves become vectors for AI-generated misinformation, individual skepticism isn't enough.
We need systemic solutions: mandatory disclosure when AI is used in government reports, verification processes for citations, and consequences for officials who present AI-generated content as authoritative research. Without these safeguards, we risk, as Bonhomme warned, "creating an environment where charlatanism becomes the norm."
Perhaps the most telling aspect of this entire saga is the meta-irony Bonhomme identified. Kennedy threatened this week to bar government-funded scientists from publishing in major medical journals, even as the MAHA report leans on research published in those very journals to bolster its points.
He simultaneously attacks the peer-review process while relying on AI to fabricate the very type of peer-reviewed citations he claims to distrust. It's charlatanry wrapped in technological sophistication, exactly what Bonhomme predicted.
Bonhomme's historical perspective offers a roadmap for addressing this crisis. Just as earlier societies eventually developed regulations for medical practitioners, we need modern frameworks for AI-generated content in public policy. Her call for "robust and adaptable policies to hold big tech and governments accountable regarding AI misuse" feels more urgent than ever.
The MAHA report isn't just a case study in government incompetence—it's a preview of what happens when we let AI systems amplify our worst impulses without adequate oversight. Bonhomme's warning deserves to be heeded before digital charlatanism becomes the new normal.
In her words: "We risk creating an environment where charlatanism becomes the norm." The MAHA report suggests we may already be there.
Don't let AI snake oil contaminate your marketing strategy. Winsome Marketing's growth experts help you leverage authentic AI tools while maintaining credibility and trust with your audience. Ready to build something real?