Congress Proposes the GUARD Act to Ban Teens From Chatbots

Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) just introduced the GUARD Act, legislation that would ban everyone under 18 from accessing AI chatbots and force companies to verify ages through government IDs or "reasonable" methods like face scans. According to The Verge's reporting, the bill also requires chatbots to disclose they aren't human every 30 minutes and makes it illegal to operate chatbots that produce sexual content for minors or promote suicide.

The legislation comes weeks after parents and safety advocates testified at a Senate hearing about AI chatbots' impact on kids. Blumenthal's statement is unambiguous: "Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety."

Here's the thing: the concerns are valid. OpenAI has already flagged that more than 500,000 ChatGPT users each week show signs of mania or psychosis, with millions more using it for emotional support. We know teenagers are vulnerable. We know chatbots can be manipulative. We know there have been tragedies. The question isn't whether we should protect kids. The question is whether this bill will actually work—or just create a surveillance infrastructure that fails at its stated goal while making the internet worse for everyone.

Age Verification: The Idea That Sounds Simple Until You Think About It

The GUARD Act requires AI companies to verify ages by having users upload government IDs or validate through another "reasonable" method—which the bill suggests might include face scans. Let's walk through what that actually means in practice.

Government ID uploads

You want to use ChatGPT? Submit your driver's license. Now every AI company has a database of government-issued IDs tied to user accounts. What could possibly go wrong? Data breaches, identity theft, stalking, doxxing—pick your catastrophe. And that's assuming companies store this data securely, which history suggests they won't.

Face scans

Alternatively, AI companies could use facial recognition to estimate your age. Great. Now we've built a biometric surveillance layer into every conversational AI platform. Teenagers will just use their parents' accounts, VPNs, or find workarounds—but everyone else loses privacy permanently. Once you've normalized facial recognition for age verification, you've normalized facial recognition. Full stop.
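To make concrete what either path implies, here is a minimal, purely hypothetical sketch of the kind of gate the bill seems to call for. The helper functions and field names are invented stand-ins, not any real vendor's API. Notice that both branches leave the provider holding sensitive identity data:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

MINIMUM_AGE = 18  # the GUARD Act's cutoff for chatbot access


def check_id_document(id_image: bytes) -> int:
    """Stand-in for a real ID-parsing vendor; would return an age."""
    raise NotImplementedError("integrate an ID verification provider here")


def estimate_age_from_face(selfie: bytes) -> int:
    """Stand-in for a real face-scan age-estimation vendor."""
    raise NotImplementedError("integrate a face-analysis provider here")


@dataclass
class VerificationRecord:
    user_id: str
    method: str           # "government_id" or "face_scan"
    estimated_age: int
    verified_at: datetime
    # Re-verification and audits push the platform toward retaining the
    # underlying ID image or face data: the honeypot problem.
    raw_evidence: bytes


def verify_user_age(user_id: str,
                    id_image: Optional[bytes] = None,
                    selfie: Optional[bytes] = None) -> VerificationRecord:
    """Gate chatbot access behind one of the bill's two verification paths."""
    if id_image is not None:
        age, method, evidence = check_id_document(id_image), "government_id", id_image
    elif selfie is not None:
        age, method, evidence = estimate_age_from_face(selfie), "face_scan", selfie
    else:
        raise PermissionError("no verification evidence supplied")

    if age < MINIMUM_AGE:
        raise PermissionError("user is under 18; chatbot access denied")

    return VerificationRecord(user_id, method, age,
                              datetime.now(timezone.utc), evidence)
```

However a platform wires this up, that verification record has to live somewhere, and wherever it lives is the honeypot.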

The bill also creates enforcement nightmares. Does this apply to open-source models? Can teenagers use AI coding assistants for homework? What about educational chatbots? What about chatbots embedded in other services? The law doesn't specify, which means either massive overreach (ban everything) or massive loopholes (ban nothing that matters).

The 30-Minute Humanity Disclosure: Security Theater in Text Form

The GUARD Act requires AI chatbots to disclose they aren't human every 30 minutes. Presumably this is to prevent teenagers from forming parasocial relationships with AI. But let's be honest: if a teenager is emotionally dependent on ChatGPT, a reminder every 30 minutes isn't going to help. It's security theater. It makes legislators feel like they did something without addressing the actual problem.
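For what it's worth, the mechanism itself is trivial. Here is a hypothetical sketch of a chat loop satisfying the 30-minute rule; generate_reply is an invented placeholder for whatever model call a platform actually makes:

```python
import time

DISCLOSURE = "Reminder: you are talking to an AI system, not a human."
DISCLOSURE_INTERVAL = 30 * 60  # seconds, per the bill's 30-minute rule


def generate_reply(message: str) -> str:
    """Invented placeholder for whatever model call the platform makes."""
    return f"(model response to: {message!r})"


def chat_loop() -> None:
    last_disclosure = time.monotonic()
    print(DISCLOSURE)  # disclose once at the start of the session
    while True:
        message = input("> ")
        reply = generate_reply(message)
        if time.monotonic() - last_disclosure >= DISCLOSURE_INTERVAL:
            reply = f"{DISCLOSURE}\n\n{reply}"
            last_disclosure = time.monotonic()
        print(reply)
```

Which is the point: compliance is a timer and a string. The attachment it is supposed to interrupt goes untouched.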

The bill also makes it illegal for chatbots to claim they are human, similar to California's AI safety bill. Fine. Good. But chatbots don't claim to be human—they're just good enough at conversation that users forget they're not. The issue isn't that AI is lying. It's that conversational AI is designed to feel human, and that design creates attachment. A legal disclaimer doesn't undo that.

If the goal is to reduce harm, the solution isn't periodic reminders. It's better mental health infrastructure, digital literacy education, and parental involvement. None of which this bill addresses.

The Real Problem: We're Legislating Symptoms, Not Causes

Here's what the GUARD Act gets right: AI chatbots can be harmful to minors. Sexual content, suicide encouragement, emotional manipulation—all real problems. But the solution isn't banning access. It's better design, better safeguards, and better accountability.

OpenAI already retrained GPT-5 with 170 psychiatrists and clinicians. They added clinical taxonomies, hotline redirects, session-break nudges. That's the kind of intervention that actually works—building safety into the product, not building surveillance into the access layer.

The bill also makes it illegal to operate chatbots that produce sexual content for minors or promote suicide. Great. That should already be illegal under existing child safety laws. If it's not, close that loophole. But tying it to age verification and ID uploads turns a straightforward safety measure into a privacy disaster.

What This Bill Actually Accomplishes (Hint: Not What It Claims)

Let's be clear about what the GUARD Act would do if passed:

1. It creates a national ID database for AI access. Every major AI company now collects and stores government IDs or biometric data. That's a honeypot for hackers and a surveillance infrastructure for governments.

2. It drives teenagers to unregulated platforms. Teenagers won't stop using AI. They'll just use platforms that don't comply—foreign services, open-source models, decentralized systems. The result? Less safety, not more, because now they're using tools with zero oversight.

3. It normalizes age verification across the internet. Once AI chatbots require ID uploads, every other platform will follow. Social media, gaming, forums—everything becomes gated by government ID. That's not child safety. That's infrastructure for mass surveillance.

4. It does nothing to address the root causes. Teenagers use AI chatbots for emotional support because they don't have better options. Therapy waitlists are six weeks. School counselors are overworked. Parents are unavailable. Banning ChatGPT doesn't solve any of that—it just removes one coping mechanism and leaves the underlying crisis untouched.

What We Should Be Doing Instead

If Congress actually cares about protecting kids from AI harms, here's what would work:

  • Mandate safety features in AI design. Require hotline redirects, crisis detection, session limits, and mental health safeguards (a rough sketch of what those hooks might look like follows this list). OpenAI already did this voluntarily. Make it law for everyone.
  • Fund mental health infrastructure. The reason teenagers turn to AI for emotional support is because human support is unavailable or inaccessible. Fix that problem.
  • Teach digital literacy. Kids need to understand what AI is, how it works, and why it's not a substitute for human relationships. That's education, not legislation.
  • Hold companies accountable for harms, not access. If a chatbot encourages suicide or produces harmful content, sue the company. Create liability. Don't create surveillance.
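To make the first item concrete, here is a rough, hypothetical sketch of those hooks. The names are invented, and the keyword list stands in for the far richer crisis classifiers a real platform would need:

```python
from datetime import datetime, timedelta

CRISIS_HOTLINE = ("If you are thinking about harming yourself, you can call or "
                  "text 988 (the Suicide & Crisis Lifeline in the US) right now.")
SESSION_LIMIT = timedelta(hours=2)

# A real platform would use a trained classifier; a keyword list keeps the sketch simple.
CRISIS_SIGNALS = ("kill myself", "end my life", "want to die")


def looks_like_crisis(message: str) -> bool:
    text = message.lower()
    return any(signal in text for signal in CRISIS_SIGNALS)


def apply_safeguards(message: str, reply: str,
                     session_start: datetime, now: datetime) -> str:
    """Wrap a model reply with hotline redirects and a session-break nudge."""
    if looks_like_crisis(message):
        # Crisis detection plus hotline redirect, ahead of any model-generated content.
        reply = f"{CRISIS_HOTLINE}\n\n{reply}"
    if now - session_start >= SESSION_LIMIT:
        # Session-limit nudge, similar to the session-break prompts OpenAI describes.
        reply += ("\n\nYou have been chatting for a while. "
                  "This might be a good time to take a break.")
    return reply
```

None of this requires collecting an ID. The safeguards live in the product, which is where the bill should be pointing.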

The GUARD Act does none of this. It's performative legislation designed to look like action while avoiding the hard work of actually solving the problem. And in the process, it normalizes surveillance infrastructure that will outlast the moral panic that created it.

Want to build technology strategies that prioritize both safety and privacy—without the false trade-offs? Let's talk. Because the companies that win won't just comply with bad laws. They'll build better systems that make those laws unnecessary.
