
New York Wants to Make It Illegal for AI to Pretend It's Your Lawyer


There is currently no law in the United States preventing an AI chatbot from claiming to be a licensed attorney and then giving you legal advice. That gap in the legal framework is not a technicality. It is an active vulnerability being exploited in ways that are materially harming people, and New York is doing something about it.

A bill working through New York's legislature — described by its sponsor as the first of its kind in the country — would bar AI chatbots from impersonating lawyers, doctors, therapists, and other licensed professionals. It would allow users who relied on erroneous professional advice from an AI posing as a credentialed human to sue the platform directly. Crucially, AI companies could not escape liability simply by burying a disclaimer noting the user is interacting with a non-human. The protection attaches to behavior, not boilerplate.

State Senator Kristen Gonzalez, who is sponsoring the bill, put the problem plainly: "Today, there is no law that says that a large language model cannot tell you that it is a lawyer, that it is a licensed therapist, and then give you legal advice or therapy accordingly."

She's right. And the fact that we need a law to address this reflects how far behind governance has fallen relative to deployment.

Why This Bill Is Structurally Sound

The design of this legislation is worth examining because it closes the loopholes that have allowed similar consumer protection efforts to fail in practice.

Most AI liability frameworks have been undermined by the same mechanism: a disclosure buried in terms of service, a pop-up acknowledging non-human interaction, a footnote that technically notifies the user while doing nothing to change their experience or protect them from harm. This bill explicitly removes that escape hatch. Notification that you're talking to a chatbot does not immunize a platform from liability if that chatbot then represents itself as a licensed professional and dispenses substantive advice in that capacity.

That's the right call. The harm doesn't come from a user's ignorance of AI's existence. It comes from a system actively performing professional authority it does not hold, and a person making consequential decisions based on that performance. A disclaimer doesn't undo the advice. A right to sue creates a structural incentive for platforms to govern their own systems rather than relying on users to protect themselves.

The bill is also part of a broader New York legislative package on AI governance, including a bill that protects minors from unsafe chatbot features and another that requires platforms to display notices about potential output inaccuracies. New York is not approaching AI regulation as a series of isolated patches. It is building a framework, which is the only approach that has any chance of keeping pace with the technology.

The Problem Is Already in Court

This legislation isn't arriving in a vacuum. The legal system is already absorbing the consequences of unregulated AI professional impersonation, and the picture is not flattering.

Nippon Life Insurance Company of America filed suit against OpenAI this week, alleging that ChatGPT practiced law without a license after helping a former disability claimant breach a settlement agreement and flood a federal court docket with meritless filings. OpenAI has denied the claim. Separately, courts across the country have sanctioned lawyers for submitting AI-generated briefs containing fictitious case citations — hallucinated material presented as legal precedent. Some judges have imposed fines. The American legal system is spending real resources cleaning up AI outputs that were never adequately governed at the source.

Meanwhile, OpenAI, Google's Gemini, and Character.AI are each facing lawsuits alleging their tools contributed to user suicides. The companies have denied wrongdoing but settled some cases. The pattern is consistent: platforms deploy, harms emerge, litigation follows years later, settlements happen quietly. A regulatory framework that creates liability before widespread harm has occurred is categorically preferable to that cycle.


What New York's Getting This Right Means for the Rest of the Country

New York has a history of regulatory leadership that eventually shapes national standards. Its financial regulations, data privacy frameworks, and environmental standards have repeatedly preceded federal action. AI governance is likely to follow the same pattern — not because Washington lacks interest, but because state-level specificity often moves faster and more concretely than federal rulemaking.

For marketing and growth teams deploying AI tools that touch customer experience, legal information, health guidance, or any domain adjacent to licensed professional advice, the New York bill is a leading indicator worth building toward now rather than scrambling to comply with later. If your AI-powered tools make recommendations that could be construed as professional advice — and many do — the question of how they identify themselves to users, and what liability your organization carries for that identification, is no longer theoretical.

The AI governance questions that matter most for businesses right now aren't about capability. They're about accountability. Who is responsible when an AI gives someone the wrong answer about their legal rights, their medication dosage, their mental health crisis? New York is answering that question with clarity: the platform is, and users have recourse.

That's not a burden on responsible AI deployment. It's a condition of it.

If you want to build AI into your customer experience and marketing operations in ways that hold up legally and reputationally as governance tightens, Winsome Marketing's strategists can help you get ahead of the curve.
