Pennsylvania Just Sued an AI Company for Practicing Medicine

The state of Pennsylvania filed suit Friday against Character Technologies — the company behind Character.AI — alleging its chatbots illegally impersonate licensed medical professionals and deceive users into believing they're receiving advice from actual doctors. Governor Josh Shapiro's administration called it a "first of its kind enforcement action." The lawsuit asks Commonwealth Court to order Character.AI to stop its chatbots from "engaging in the unlawful practice of medicine and surgery."

The facts are straightforward. A state investigator created an account, searched "psychiatry," and found a character describing itself as a licensed doctor in Pennsylvania, willing to assess the investigator "as a doctor." Character.AI's response: the site posts disclaimers telling users that characters are fictional and nothing they say should be treated as real professional advice.

Whether those disclaimers are sufficient is now a question for the courts.

The Legal Questions Nobody Has Answered Yet

Pennsylvania's lawsuit opens at least two significant legal questions that courts haven't resolved, and the AI industry is watching both closely.

The first: can an AI chatbot be accused of practicing medicine? The practice of medicine is legally defined around human licensure, professional judgment, and a duty of care. None of those map cleanly onto a software system generating responses from training data. But if the output is functionally indistinguishable from medical advice — and a user reasonably relies on it as such — the legal framework may need to stretch.

The second is Section 230, the federal law that generally shields internet platforms from liability for content their users post. AI companies have begun arguing that their chatbots are essentially information retrieval systems, no different from search engines surfacing existing content. If courts accept that framing, chatbot makers gain significant liability protection. If they reject it — if a chatbot's output is treated as the company's own speech rather than user-generated content — the exposure is substantial.

Carnegie Mellon ethics professor Derek Leben noted that Character.AI's case is arguably distinct from general-purpose AI platforms like ChatGPT or Claude, because the product explicitly markets itself as a role-playing and fiction platform. That framing cuts both ways: it supports the disclaimer defense, but it also raises the question of whether a fictional doctor giving medical advice is any less dangerous than a real one giving bad advice.

This Has Been Coming

Pennsylvania's lawsuit didn't arrive without warning. In December, attorneys general from 39 states and Washington D.C. wrote to Character Technologies and twelve other AI companies — including Anthropic, Meta, Apple, Microsoft, OpenAI, Google, and xAI — warning that providing mental health advice without a license is illegal under state law and harmful to users. California passed legislation last year authorizing state agencies to sanction AI systems that represent themselves as health professionals. New York has similar legislation pending.

Character Technologies has also faced a Kentucky consumer protection lawsuit, a settled case involving allegations that a chatbot encouraged a teenager's suicide, and child safety concerns significant enough that the company banned minors from its platform last fall.

The pattern is not subtle. Regulators across the country have been signaling for over a year that AI self-regulation on medical and mental health content is not working. Pennsylvania's lawsuit is the first to pursue formal legal enforcement — but the 39-state coalition letter suggests it won't be the last.

What It Means for the Industry

The lawsuit's significance extends well beyond Character.AI. Every AI platform that allows users to create or interact with personas — and a growing number do — has some version of this exposure. The question of where the line is between "information" and "advice," between "character" and "impersonation," between "disclaimer" and "informed consent," is now being drawn by courts rather than product teams.

For companies deploying AI in any customer-facing context, particularly in health, legal, or financial domains, this case is a preview of the liability framework being built in real time. The companies that have invested in clear disclosure, hard guardrails, and genuine user protection are better positioned than those relying on fine-print disclaimers to do all the work.

Amina Fazlullah of Common Sense Media put it plainly: self-regulation hasn't worked, particularly where kids are concerned. "We haven't seen it work particularly well with social media." That institutional skepticism, from regulators who watched social media scale without accountability, is the context in which every AI company now operates.

The Rubicon being crossed here isn't legal. It's political. States are no longer waiting for federal frameworks or industry voluntary commitments. They're filing suits.

For marketing and growth teams building AI into customer experiences, understanding where your liability exposure sits isn't optional anymore. Our team at Winsome Marketing helps organizations think through AI strategy with that kind of clarity. Let's talk.