Google Kills its Crowdsourced AI Health Advice Feature

"A revolutionary use of AI to transform health outcomes globally." That's how Google described its "What People Suggest" feature when it launched — a tool that surfaced crowdsourced medical advice from strangers who had experienced similar health conditions, delivered through AI-assisted search.

The feature is no longer available. Google confirmed the shutdown this week, describing it as part of a "broader simplification" of its search page.

When asked directly whether safety concerns drove the decision, a Google spokesperson said it "had nothing to do with the quality or safety of the feature."

That denial is doing a significant amount of work.

What the Feature Actually Was

"What People Suggest" was, in practice, exactly what it sounds like: AI-surfaced health recommendations from anonymous users who believed they had relevant personal experience with a medical condition. The feature was positioned as a complement to expert medical information — giving users peer perspectives alongside clinical sources.

Former Google chief health officer Karen DeSalvo framed the rationale clearly at launch: people value hearing from others with similar experiences, not just from doctors. That's true. It's also true of Reddit, patient forums, and Facebook groups — none of which carry Google's implicit authority or reach billions of search queries per day.

The distinction matters. When an anonymous poster on a forum recommends a treatment approach, the reader understands the context. When Google's AI surfaces that same recommendation inside a search result, the framing changes entirely. The platform's credibility transfers, whether Google intends it to or not.

The Context Google's Statement Omits

The spokesperson's assertion that safety played no role in the decision sits awkwardly against recent history.

In January, a Guardian investigation found that Google's AI Overviews — the AI-generated summary boxes appearing at the top of search results — were surfacing medically misleading information in ways that posed risks to users. Days after that report, Google restricted AI Overviews for certain medical queries specifically over safety concerns.

"What People Suggest" was pulled shortly after. The timeline is not incriminating on its own. The categorical denial that safety was a factor, given that context, is harder to take at face value.

Google's communication here follows a familiar pattern in the AI industry: ship aggressively, describe the product in maximalist terms, and when problems emerge, frame the retreat as a product decision rather than a safety one. "Broader simplification" is cleaner than "we reconsidered the risk profile of surfacing anonymous medical advice to hundreds of millions of people."

The Deeper Problem With AI Health Information

The "What People Suggest" episode is a specific instance of a broader challenge that no AI company has fully solved: the tension between scale and epistemic responsibility.

At scale, an AI system that surfaces medical information — even well-intentioned, peer-sourced information — is making implicit recommendations to people in vulnerable moments. A person searching about symptoms, medication interactions, or treatment options is not in a neutral state. They're often scared, uncertain, and actively seeking guidance they can act on.

Crowdsourced advice from people with similar experiences has genuine value in appropriate contexts. Patient communities, disease-specific forums, and peer support networks serve real needs that clinical information alone doesn't meet. The problem is not the information itself. It's the interface — an AI search layer that strips context, flattens expertise, and presents everything with equivalent authority.

Google's AI Overviews restriction for medical queries acknowledged this problem implicitly. "What People Suggest" extended that same problem by design.

What This Means for Brands and Marketers in Health-Adjacent Categories

For content and growth teams operating in health, wellness, or any category where user safety is material, this story is a useful benchmark for what responsible AI deployment looks like — and what it doesn't.

The "revolutionary" language Google deployed at launch is the same register that gets companies into trouble. Describing an unvalidated feature as transformative for global health outcomes sets an expectation the product cannot meet and raises the stakes when it fails. Restraint in launch language is not just PR caution — it's an honest acknowledgment of what the technology can and cannot do at that moment.

For anyone building AI-assisted content strategy in sensitive categories, the lesson is structural: the interface through which AI delivers information shapes how that information is received, regardless of accuracy. Design carries responsibility. Scale amplifies it.

Google's spokesperson said the feature had "nothing to do with quality or safety." Maybe. But the companies most worth trusting on AI safety are generally the ones that don't need to say that.


Source: The News Digital, March 16, 2026 — "Google ends crowdsourced AI health advice feature: Find out why"


Winsome Marketing helps growth teams build AI content strategies that are both effective and defensible. Talk to our experts at winsomemarketing.com.
