
Google Is Rethinking Teen and AI Safety Online


The debate about young people and the internet has been dominated by one argument: keep them off it, or at least limit how much of it they can access. A summit hosted by Google in Dublin this week pushed back on that framing — not to dismiss safety concerns, but to argue that blanket restriction is producing outcomes nobody intended.

The "Growing Up in the Digital Age" Summit, hosted at Google's Safety Engineering Center, brought together child safety experts, educators, and policymakers to examine what actually works. The event produced six themes and one significant announcement: a $20 million global initiative, jointly funded by Google.org and YouTube, focused on teen digital wellbeing in the AI era.

The Case Against Blanket Bans

The summit's clearest throughline was skepticism toward restriction as a primary safety mechanism. Research cited at the event, along with input from global rights and safety organizations, supports a consistent finding: young people pushed off regulated platforms don't stop using the internet. They migrate to less regulated environments where the protections they had — parental controls, content moderation, supervised experiences — no longer apply.

That dynamic reframes the policy debate in practical terms. A blanket ban doesn't reduce risk exposure. It relocates it while removing the infrastructure that managed it. The organizations at the summit argued for making the digital world better for young people rather than off-limits to them — a distinction that shifts the burden from access control to experience design.

What Google Is Actually Building

The product-level announcements at the summit reflect that design orientation. SafeSearch is on by default in Google Search for signed-in users under 18. On YouTube, uploads are private by default for users under 18, and well-being features, including "Take a Break" and "Bedtime" reminders, are automatically activated. Parents using Family Link now have a consolidated view for managing device settings, usage summaries, and screen time limits. A forthcoming feature will allow parents to set YouTube Shorts viewing time to zero for supervised teen accounts, which Google describes as an industry first.

For Gemini, Google has implemented content safeguards for users under 18 that cannot be disabled. These include design choices to prevent the AI from simulating intimacy, presenting itself as a companion, or claiming to be human. That last category is worth noting: it reflects an acknowledgment that the risks AI poses to young users aren't identical to the risks posed by social media or search. The concern isn't just content exposure — it's the nature of the relationship AI can simulate, and what that means for developing minds.

The $20 Million Initiative

The most substantive announcement was the $20 million global initiative for teen digital wellbeing, structured as a partnership between Google.org and YouTube. The funding will support a multilingual, open-source resource center and curriculum, informed by a global Ipsos study of more than 9,500 teenagers that was designed to ensure the content reflects what young people actually need rather than what adults assume they need.

The curriculum scope is broad: seeking help online, managing digital stress, and — notably — understanding how to interact with AI in healthy ways. That last element positions the initiative as explicitly AI-era programming rather than a repackaged version of existing digital literacy curricula. The content will be distributed through nonprofits and YouTube creators already working in youth support contexts.

The open-source and multilingual design choices are significant. They suggest an intent to make the resources adaptable and globally applicable rather than optimized for English-speaking Western markets — a meaningful distinction for a problem that doesn't respect geographic boundaries.

Age Assurance as Infrastructure

One of the more technically substantive discussions at the summit concerned age verification — the mechanism that underlies most of the above. The current debate is often framed as a binary between weak age gates that are trivially circumvented and invasive identity checks that raise privacy concerns.

Google's position, supported by its research, advocates a risk-based approach in which the level of verification required scales with the risk of the content or feature. An analogy offered at the summit: a credit card company doesn't verify whether you're old enough to buy a drink — the pub does. The verification obligation belongs closest to the point of risk.
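
To make the risk-based idea concrete, here is a minimal, purely illustrative sketch of how verification strength might scale with feature risk. The tier names, signal types, and gating function are hypothetical and are not drawn from Google's actual systems.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g. general browsing with safe defaults
    MEDIUM = "medium"  # e.g. public uploads or comments
    HIGH = "high"      # e.g. age-restricted content or purchases

# Hypothetical mapping from feature risk to the strength of age signal required.
# Signal strength: self-declared birthdate < inferred estimate < verified credential.
REQUIRED_SIGNAL = {
    RiskTier.LOW: "self_declared",
    RiskTier.MEDIUM: "inferred",
    RiskTier.HIGH: "verified",
}

SIGNAL_STRENGTH = {"self_declared": 0, "inferred": 1, "verified": 2}

def is_access_allowed(feature_risk: RiskTier, available_signal: str) -> bool:
    """Allow access only if the age signal on hand is at least as strong
    as the signal required for the feature's risk tier."""
    required = REQUIRED_SIGNAL[feature_risk]
    return SIGNAL_STRENGTH[available_signal] >= SIGNAL_STRENGTH[required]

# A self-declared birthdate is enough for low-risk features,
# but high-risk features demand a verified signal.
print(is_access_allowed(RiskTier.LOW, "self_declared"))   # True
print(is_access_allowed(RiskTier.HIGH, "self_declared"))  # False
```

The point of the design is that a weak signal is sufficient for low-risk features, while stronger, more invasive checks are reserved for the features that actually warrant them.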

Google is supporting the development of global, interoperable age-verification standards and open-sourcing privacy-preserving age-check technology to make adoption easier for services that need it. If that infrastructure develops, it could shift age assurance from a compliance checkbox to a functional layer that adapts to context, which is closer to how child safety experts say it needs to work.

The AI Variable

What makes this moment different from prior conversations about youth online safety is the AI layer. The risks that prompted earlier regulatory debates — algorithmic recommendation, social comparison, screen time — are now joined by something qualitatively different: AI systems capable of sustained, personalized, emotionally responsive interaction.

The Gemini design choices reflect an early attempt to address that difference at the product level. The curriculum investment reflects an acknowledgment that design alone isn't sufficient — young people also need frameworks for understanding what they're interacting with and how to engage with it in ways that serve their wellbeing rather than undermine it.

Whether design-led approaches at this scale are sufficient to the challenge is a question the summit didn't resolve. What it did establish is a clearer articulation of the alternative to restriction: age-appropriate experiences, adaptable parental controls, and infrastructure built to protect young people within the digital world rather than from it.

For marketing professionals and content teams, the practical implication is straightforward — the standards being developed for youth-facing AI interactions, content recommendation, and age assurance will shape what's permissible and expected across consumer-facing AI products more broadly. Organizations building toward responsible AI deployment in consumer contexts should be watching this space closely. Winsome Marketing's team can help you think through what it means for your own products and communications.
