Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) just introduced the GUARD Act, legislation that would ban everyone under 18 from accessing AI chatbots and force companies to verify ages through government IDs or "reasonable" methods like face scans. According to The Verge's reporting, the bill also requires chatbots to disclose they aren't human every 30 minutes and makes it illegal to operate chatbots that produce sexual content for minors or promote suicide.
The legislation comes weeks after parents and safety advocates testified at a Senate hearing about AI chatbots' impact on kids. Blumenthal's statement is unambiguous: "Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety."
Here's the thing: the concerns are valid. OpenAI already flagged that over 500,000 ChatGPT users weekly show signs of mania or psychosis, with millions more using it for emotional support. We know teenagers are vulnerable. We know chatbots can be manipulative. We know there have been tragedies. The question isn't whether we should protect kids. The question is whether this bill will actually work—or just create a surveillance infrastructure that fails at its stated goal while making the internet worse for everyone.
The GUARD Act requires AI companies to verify users' ages, either by having them upload a government ID or by validating age through another "reasonable" method, which the bill suggests might include face scans. Let's walk through what that actually means in practice.
You want to use ChatGPT? Submit your driver's license. Now every AI company has a database of government-issued IDs tied to user accounts. What could possibly go wrong? Data breaches, identity theft, stalking, doxxing—pick your catastrophe. And that's assuming companies store this data securely, which history suggests they won't.
Alternatively, AI companies could use facial recognition to estimate your age. Great. Now we've built a biometric surveillance layer into every conversational AI platform. Teenagers will just use their parents' accounts, VPNs, or find workarounds—but everyone else loses privacy permanently. Once you've normalized facial recognition for age verification, you've normalized facial recognition. Full stop.
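To make the compliance burden concrete, here's a minimal sketch of the kind of age gate the bill implies. The names and record layout are hypothetical, not any company's actual system; the point is that verification necessarily creates a stored link between an identity document (or a face scan) and a chatbot account, and that stored link is the honeypot.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class VerificationRecord:
    """One row per verified user: exactly the data a breach would expose."""
    user_id: str
    method: str          # "government_id" or "face_scan"
    age_estimate: int
    verified_at: datetime
    document_hash: str   # even a fingerprint of the ID ties identity to account


def verify_age(user_id: str, id_image: bytes, extracted_age: int) -> VerificationRecord:
    """Hypothetical GUARD-style gate: block under-18s, keep a record for everyone else."""
    if extracted_age < 18:
        raise PermissionError("Access denied: user is under 18")
    return VerificationRecord(
        user_id=user_id,
        method="government_id",
        age_estimate=extracted_age,
        verified_at=datetime.now(timezone.utc),
        # Hashing the document is the privacy-friendly version; storing the
        # raw image or a face template, as many vendors would, is worse.
        document_hash=hashlib.sha256(id_image).hexdigest(),
    )
```

Every user who passes this gate leaves a durable record behind. Multiply that by every AI company covered by the bill and you get the breach surface described above.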
The bill also creates enforcement nightmares. Does this apply to open-source models? Can teenagers use AI coding assistants for homework? What about educational chatbots? What about chatbots embedded in other services? The law doesn't specify, which means either massive overreach (ban everything) or massive loopholes (ban nothing that matters).
The GUARD Act requires AI chatbots to disclose they aren't human every 30 minutes. Presumably this is to prevent teenagers from forming parasocial relationships with AI. But let's be honest: if a teenager is emotionally dependent on ChatGPT, a reminder every 30 minutes isn't going to help. It's security theater. It makes legislators feel like they did something without addressing the actual problem.
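Mechanically, complying with the 30-minute rule is a timer and a string prepend. A rough sketch, with hypothetical names:

```python
import time

DISCLOSURE_INTERVAL_SECONDS = 30 * 60  # "every 30 minutes," per the bill
DISCLOSURE = "Reminder: you are talking to an AI, not a human."


class DisclosureTimer:
    """Prepends an 'I am not human' notice to the first reply and then
    to the next reply after each 30-minute interval elapses."""

    def __init__(self) -> None:
        self._last_disclosed: float | None = None

    def maybe_disclose(self, reply: str) -> str:
        now = time.monotonic()
        if self._last_disclosed is None or now - self._last_disclosed >= DISCLOSURE_INTERVAL_SECONDS:
            self._last_disclosed = now
            return f"{DISCLOSURE}\n\n{reply}"
        return reply
```

A dozen lines of boilerplate satisfies the letter of the law while changing nothing about the attachment dynamics the bill is worried about.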
The bill also makes it illegal for chatbots to claim they are human, similar to California's AI safety bill. Fine. Good. But chatbots don't claim to be human—they're just good enough at conversation that users forget they're not. The issue isn't that AI is lying. It's that conversational AI is designed to feel human, and that design creates attachment. A legal disclaimer doesn't undo that.
If the goal is to reduce harm, the solution isn't periodic reminders. It's better mental health infrastructure, digital literacy education, and parental involvement. None of which this bill addresses.
Here's what the GUARD Act gets right: AI chatbots can be harmful to minors. Sexual content, suicide encouragement, emotional manipulation—all real problems. But the solution isn't banning access. It's better design, better safeguards, and better accountability.
OpenAI already retrained GPT-5 with input from 170 psychiatrists and clinicians. They added clinical taxonomies, hotline redirects, and session-break nudges. That's the kind of intervention that actually works: building safety into the product, not building surveillance into the access layer.
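What that looks like mechanically is a safety layer inside the product rather than a gate in front of it. The sketch below is hypothetical and assumes an upstream classifier supplies a risk label from some clinical taxonomy; it's an illustration of the pattern, not OpenAI's actual pipeline. (The 988 number is the real US Suicide & Crisis Lifeline.)

```python
from datetime import timedelta

HOTLINE_MESSAGE = (
    "If you're struggling, you can reach the 988 Suicide & Crisis Lifeline "
    "by calling or texting 988 (US)."
)
SESSION_BREAK_AFTER = timedelta(hours=2)


def apply_safety_layer(reply: str, risk_label: str, session_length: timedelta) -> str:
    """Hypothetical product-side intervention: route high-risk conversations
    to crisis resources and nudge very long sessions toward a break."""
    if risk_label in {"self_harm", "acute_distress"}:
        # In a crisis, the redirect matters more than the model's reply.
        return f"{HOTLINE_MESSAGE}\n\n{reply}"
    if session_length >= SESSION_BREAK_AFTER:
        return (
            f"{reply}\n\nYou've been chatting for a while. "
            "It might be a good moment to take a break."
        )
    return reply
```

None of this requires knowing who the user is, which is exactly the point: the safety work happens in the product, not in an ID database.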
The bill also makes it illegal to operate chatbots that produce sexual content for minors or promote suicide. Great. That should already be illegal under existing child safety laws. If it's not, close that loophole. But tying it to age verification and ID uploads turns a straightforward safety measure into a privacy disaster.
Let's be clear about what the GUARD Act would do if passed:
1. It creates a national ID database for AI access. Every major AI company now collects and stores government IDs or biometric data. That's a honeypot for hackers and a surveillance infrastructure for governments.
2. It drives teenagers to unregulated platforms. Teenagers won't stop using AI. They'll just use platforms that don't comply—foreign services, open-source models, decentralized systems. The result? Less safety, not more, because now they're using tools with zero oversight.
3. It normalizes age verification across the internet. Once AI chatbots require ID uploads, every other platform will follow. Social media, gaming, forums—everything becomes gated by government ID. That's not child safety. That's infrastructure for mass surveillance.
4. It does nothing to address the root causes. Teenagers use AI chatbots for emotional support because they don't have better options. Therapy waitlists are six weeks. School counselors are overworked. Parents are unavailable. Banning ChatGPT doesn't solve any of that—it just removes one coping mechanism and leaves the underlying crisis untouched.
If Congress actually cares about protecting kids from AI harms, here's what would work:
1. Fund the mental health infrastructure teenagers actually need: therapists without six-week waitlists, school counselors with manageable caseloads.
2. Require product-level safeguards like the ones described above: crisis-hotline redirects, clinically informed content policies, session-break nudges.
3. Mandate digital literacy education so teenagers understand what conversational AI is and isn't.
4. Hold companies accountable, with real liability, when their chatbots produce sexual content for minors or promote suicide.
The GUARD Act does none of this. It's performative legislation designed to look like action while avoiding the hard work of actually solving the problem. And in the process, it normalizes surveillance infrastructure that will outlast the moral panic that created it.
Want to build technology strategies that prioritize both safety and privacy—without the false trade-offs? Let's talk. Because the companies that win won't just comply with bad laws. They'll build better systems that make those laws unnecessary.