Grimes, AI Doom, and a Berkeley Residency

A data scientist, sex researcher, and orgy organizer has co-founded an AI safety creator residency in Berkeley. Up to 100 content creators will receive free housing and food in exchange for posting daily short-form content about AI existential risk. Grimes is a mentor. Eliezer Yudkowsky — the godfather of AI doom thinking — is also a mentor. The program is called Plz Don't Kill Us. It is real.

This is either a genuinely important communication experiment or the most Berkeley thing that has ever happened. Possibly both.

The Problem It's Trying to Solve Is Legitimate

Strip away the aesthetics, and the argument underneath Plz Don't Kill Us is worth taking seriously. The AI safety movement has a messaging crisis. The people who understand the risks best — researchers, alignment theorists, technical philosophers — are extraordinarily bad at communicating with anyone outside their own community. The gap between "p(doom) is non-trivial" and something a non-technical person can emotionally process is vast, and nobody has convincingly bridged it.

Aella, the co-founder in question, has the diagnosis right: short-form content is where public understanding gets shaped in 2026, and almost nobody in AI safety is operating there effectively. The Foresight Institute ran an existential meme competition last year. The Frame Fellowship launched an eight-week creator program in January. These are genuine attempts to solve a real problem, even if the solutions look strange from the outside.

The question is whether "100 creators posting daily about AI doom" is the mechanism that closes that gap — or whether it produces a content flood that the algorithm buries, the public scrolls past, and the AI labs ignore entirely.

The Funding Math Is Worth Noting

The program has an $800,000 goal and is roughly 60% funded (about $480,000) through the Survival & Flourishing Fund and the Machine Intelligence Research Institute. That's real institutional backing from organizations that take AI risk seriously and have been operating in this space for years. This isn't a Substack side project. It's a funded content operation with mentors, conduct policies, and daily posting requirements.

The daily posting requirement is either the program's most important feature or its most likely failure point. Mandatory volume is how you build an audience on TikTok. It's also how you burn out creators and produce content that prioritizes quantity over the careful communication of genuinely complex ideas. Aella herself acknowledged the tension: "We care about accuracy. I would not just accept anybody." Accuracy and daily posting volume are not natural companions.

The Grimes Question

Grimes as an AI safety mentor is a choice that invites scrutiny. She is a genuinely compelling cultural figure with a proven ability to make niche ideas feel mythologically significant — which is arguably exactly the skill set this program needs. She is also the former partner of Elon Musk, whose own AI company xAI received a warning letter from 39 state attorneys general about chatbot safety practices. Her relationship to the AI industry is complicated in ways that a mentor role in an AI doom program doesn't fully resolve.

That's not a disqualification. It's a tension worth naming.

What Aella Actually Believes

The interview is worth reading in full because Aella is unusually direct. She rates her worry at nine out of ten. She says she's doing more drugs and not saving for retirement. She's "done a lot of grieving in advance." She describes AI development as equivalent to nuclear weapons, controlled by a small number of companies that are "very personally incentivized to not believe that it is a threat."

That is not the language of someone doing content strategy. It's the language of someone who genuinely believes the timeline is short and the response is inadequate.

Whether you share that assessment or not, the sincerity is notable. The AI safety space has plenty of people who perform concern for career or funding reasons. Aella reads as someone who has actually sat with the implications and found them genuinely distressing — and who is doing the only thing she could figure out how to do about it.

The program's goal, she says, is action rather than anxiety. Calling representatives. Making politicians understand that the public sees the threat and wants it taken seriously. That's a modest and reasonable objective, even if the vehicle is unusual.

The Honest Verdict

Plz Don't Kill Us is an experiment in whether cultural fluency and content volume can do what technical expertise and policy papers haven't. That's a legitimate hypothesis. The execution — a Berkeley residency, celebrity mentors, daily doom content — looks chaotic from the outside. But the AI safety movement's attempts to reach a mass audience through conventional channels have produced approximately zero mass awareness. Unconventional might be the only option left.

What it won't do is change anything at the labs. The people building the systems Aella is worried about are not waiting to see how the TikTok metrics land. The audience this program needs to reach is policymakers and the general public — and the path from Berkeley residency to congressional testimony is longer than the timeline she's worried about.

Still. Someone is trying. That's not nothing.

For marketing and growth leaders watching how AI safety gets communicated — and how public perception of AI risk shapes the regulatory environment your tools operate in — this is worth following. Our team at Winsome Marketing tracks the full spectrum of the AI story. Let's talk.