A bipartisan bill banning minors from AI companion platforms passed the Senate Judiciary Committee 22-0 on Wednesday. The GUARD Act — Guidelines for User Age-verification and Responsible Dialogue — now heads to the Senate floor. A companion bill was introduced in the House the same day.
A unanimous committee vote on technology legislation in 2026 is not a small thing. This is a Congress that rarely agrees on anything. On this, there were no dissenting votes.
What the GUARD Act Actually Does
The bill draws a meaningful distinction between AI companions — platforms like Character.AI, designed to simulate human relationships and emotional bonds — and general-purpose chatbots like ChatGPT. The companion platforms get a blanket ban for minors. Chatbots are not fully banned but must verify user ages and disclose their non-human status. Companies that violate these requirements face criminal and civil penalties.
That distinction matters. The harm driving this legislation isn't AI helping kids with homework or answering questions about history. It's platforms engineered specifically to create emotional dependency — systems that simulate friendship, romance, and intimate relationship dynamics with users who may be twelve years old and have no framework for understanding what they're actually interacting with.
Since 2023, multiple minors have died by suicide in circumstances where AI companion platforms played a documented role. Lawsuits from victims' families are accumulating in courts across the country. The Pennsylvania lawsuit against Character.AI filed last week — alleging the platform impersonates licensed medical professionals — adds to a pattern that is becoming increasingly difficult to dismiss as a series of isolated incidents.
Why the Unanimous Vote Means Something
Sen. Josh Hawley introduced the GUARD Act in October, and the bipartisan co-authorship with Sen. Richard Blumenthal is a signal worth paying attention to. Hawley and Blumenthal do not agree on much. Child safety online, it turns out, is one area where the political calculation is straightforward: there is no constituency for defending the right of AI companion platforms to emotionally manipulate children.
Blumenthal's floor warning was unusually candid: "Warning, we're not done yet. Others who have championed this kind of legislation know that they will be relentless and tireless. Whatever they say publicly, they will be behind the scenes with armies of lawyers and lobbyists trying to fight us, mislead, and confuse."
He's right that the lobbying pressure will come. It always does. But the 22-0 vote signals that legislators have read the political environment, and their read is not favorable to the platforms.
The Legitimate Tensions Are Worth Taking Seriously
The bill is not without complications, and the honest version of this story acknowledges them.
Sen. Padilla raised real concerns about privacy and security in age verification — a process that typically requires biometrics or government ID and effectively becomes identity verification. Jennifer Huddleston of the Cato Institute articulated the downstream risk clearly: mandatory age verification for AI chatbots creates an infrastructure that could chill anonymous speech, affect people seeking sensitive medical information, and generate identity data that can be breached or misused.
These are not industry talking points. They are genuine civil liberties concerns that deserve to be worked through carefully in the legislative process.
Sen. Cruz's concern about cutting children off from beneficial AI tools is also worth engaging honestly. There is a real question about whether blanket platform bans are more effective than thoughtful design requirements, age-appropriate access tiers, and genuine transparency. The Robert Half data published this week shows AI fluency is already a baseline job market expectation — a generation that doesn't develop familiarity with these tools faces real disadvantages.
The bill's authors differentiated companion platforms from educational and general-purpose chatbots precisely because they understand this tension. A child using AI to study, write, or learn is different from a child in a simulated romantic relationship with a system engineered to maximize emotional engagement. The GUARD Act draws that line. Whether it draws it in exactly the right place is the work of the Senate floor debate.
What Happens Next
Blumenthal's prediction about industry lobbying will be tested quickly. AI companies have already invoked First Amendment protections in related litigation, and legal analysts expect similar arguments against the GUARD Act. Joel Thayer at the America First Policy Institute believes the government has strong grounds to defeat those challenges — the interactive nature of chatbot relationships dilutes the First Amendment speech interest that protected social media platforms.
That legal question will eventually get answered by courts. In the meantime, the legislative signal is clear: the era of self-regulation for AI platforms that interact with children is ending. The question is no longer whether regulation comes, but how carefully it gets designed.
For marketing teams, brand safety officers, and anyone building products that touch younger demographics, the GUARD Act's progress is a preview of the compliance environment taking shape. Our team at Winsome Marketing helps organizations build AI strategy that accounts for the regulatory direction of travel. Let's talk.
Writing Team