80% of FDA Employees are Using AI

The agency overseeing the safety of American medicine is moving fast on AI. That's both the promising part and the concerning part.

The FDA has announced a pilot program with AstraZeneca and Amgen to monitor clinical trials in real time using AI and cloud infrastructure. Instead of waiting for pharmaceutical companies to submit documentation that can run to millions of pages, the agency will receive direct data feeds from active studies—so when a patient develops a fever or a tumor shrinks, regulators see it in the cloud as it happens. FDA Chief AI Officer Jeremy Walsh estimates the approach could cut 20 to 40 percent off total trial duration. FDA Commissioner Marty Makary called it a milestone, noting that the review process has barely changed since the 1960s.

The same announcement revealed that over 80 percent of FDA staff now use an internal generative AI tool called Elsa. That tool has been reported to fabricate nonexistent studies.

What the Real-Time Monitoring Program Actually Changes

The current drug approval timeline runs ten to twelve years on average. Roughly 45 percent of the time between Phase 1 clinical trials and regulatory submission is spent on paperwork and administrative processing—not on science. The pilot program attacks that specific inefficiency by eliminating the submission lag entirely. Direct data feeds mean the FDA is reviewing evidence as it's generated rather than after it's compiled, packaged, and filed.

Walsh has been explicit that safety standards are not being lowered. The argument is that faster review of the same data produces the same safety outcomes in less calendar time. The FDA has also published a public request for information to gather additional proposals for AI-driven improvements across the clinical trial process—a signal that, if the pilot holds up, the agency intends to extend the approach.

The $120 million in projected annual savings is earmarked in part to fund the rehiring of up to 3,000 scientists—a figure that reflects the other half of this story.

The DOGE Context

The FDA's AI push is happening in the shadow of significant institutional disruption. The Trump administration's DOGE-driven layoffs in early 2025 cut deeply into the agency's staffing. All current consolidation efforts are being carried out without additional resources. The efficiency argument for AI adoption at the FDA is therefore not purely aspirational—it's partly compensatory. The agency is using AI to do more with fewer people, by necessity.

That context matters when evaluating the internal adoption numbers. In early 2025, roughly one percent of FDA staff regularly used generative AI. Today that number is above 80 percent. That is an extraordinary adoption curve for a regulatory agency operating under resource constraints. Whether the pace of adoption has outrun the pace of validation is the question worth asking.

The Elsa Problem

Elsa is the FDA's internal AI tool for reading, writing, and summarizing reports. In early pilot projects it cut administrative tasks that previously took ten days down to twenty minutes. Walsh described those results as significant.

Multiple FDA employees told CNN last summer that Elsa regularly fabricates nonexistent studies and misrepresents research data. Walsh acknowledged the issue directly, noting that Elsa behaves like other large language models—it can hallucinate. He offered no specific mitigation beyond that acknowledgment.

This is not a minor footnote. The FDA's core function is evaluating evidence. An internal tool that invents studies is a direct threat to that function if its outputs are not rigorously verified before they inform any decision. The speed gains Elsa produces are real. The risk it introduces is also real. It is not clear from the available information whether the agency has built sufficient verification workflows around Elsa's outputs.

What This Means Beyond Pharmaceuticals

The FDA's situation is a compressed version of the challenge facing every institution deploying AI under resource pressure: the efficiency gains are immediate and measurable, the failure modes are subtle and potentially serious, and the institutional capacity to distinguish between good AI output and plausible-sounding bad AI output takes time to build.

For marketing and business leaders watching this from the outside, the pattern is familiar. AI adoption at scale, driven by cost pressure, without fully resolved governance—that is the story of most enterprise AI deployments right now, not just federal agencies. The FDA's hallucinating internal tool is a high-stakes version of a problem that exists in less visible forms across nearly every sector using generative AI at volume.

Speed without verification is not efficiency. It's risk with a faster clock.

Building AI into your marketing operations without the hallucination risk requires the right governance from the start. Winsome Marketing's growth team helps you move fast and build verification into the workflow.