OpenAI Served Subpoenas and Accused of Intimidation

A sheriff's deputy arrived at Nathan Calvin's door in August while he and his wife were sitting down to dinner. The 29-year-old general counsel of Encode—a three-person AI policy nonprofit—was being served with a subpoena from OpenAI demanding all his private communications about California's SB 53, the state's AI transparency law. The organization he works for was also served. OpenAI's legal justification? The company suspected critics like Encode might be secretly funded by Elon Musk, based on essentially zero evidence, and decided the discovery process in its lawsuit against Musk was a convenient pretext to intimidate anyone who'd worked against its policy interests.

This isn't litigation strategy. It's corporate bullying with a legal veneer. And if we let companies deploy subpoenas as weapons to silence nonprofits, researchers, and advocacy groups who dare to challenge them, we're surrendering any pretense that policy debates happen on merit rather than through raw economic power.

Let's be clear about what happened here: OpenAI, a company with a market valuation approaching half a trillion dollars, used its ongoing legal battle with Elon Musk to target tiny nonprofits that advocated for AI safety regulations OpenAI opposed. The message wasn't subtle. Criticize our restructuring, support stronger safety requirements, file amicus briefs against us—and we'll demand every email, text message, and private communication you've ever had about our company or our competitors. We'll make you hire lawyers you can't afford. We'll imply you're a puppet of our enemies. And we'll do it all while claiming we're just following standard discovery procedures.

The Timeline: Policy Opposition Becomes Legal Targeting

According to Fortune's detailed reporting, here's what actually happened:

Encode's work on SB 53: The nonprofit advocated for California's Transparency in Frontier Artificial Intelligence Act, which requires certain AI developers to publish safety frameworks, report critical incidents to the state, and share catastrophic risk assessments. OpenAI actively lobbied against strong versions of the bill, sending letters to Governor Newsom's office urging California to treat companies as compliant if they'd already signed federal safety agreements or joined international frameworks—provisions that would have stripped the law of its enforcement teeth.

Encode's amicus brief: The organization filed a brief in OpenAI's lawsuit against Elon Musk, supporting some of Musk's arguments about OpenAI's departure from its original nonprofit mission. The brief was public, transparent, and well within Encode's advocacy mandate.

OpenAI's subpoena: In August, while SB 53 was still being negotiated, OpenAI served Encode and Calvin personally with subpoenas demanding all communications related to the organization's work on AI policy, its funding sources, and—critically—its private deliberations about SB 53 while the bill was under active legislative debate.

The timing is damning. OpenAI didn't wait until after SB 53 passed to investigate alleged Musk connections. It served the subpoenas while the bill was being negotiated, creating maximum chilling effect on a tiny nonprofit's ability to advocate effectively. Sunny Gandhi, Encode's vice president of political affairs, described it bluntly: "It's terrifying to have a half a trillion dollar company come after you."

That's the point. The terror is the mechanism.

The Musk Pretext: Conspiracy Theories as Discovery Justification

OpenAI's defense, articulated in a Friday post by chief strategy officer Jason Kwon, is that Encode's decision to support Musk in litigation and the organization's allegedly incomplete funding disclosures "raises legitimate questions about what is going on." The company claims it simply wanted to know whether Encode was "working in collaboration with third parties who have a commercial competitive interest adverse to OpenAI."

This argument collapses under basic scrutiny.

First, Encode is not funded by Elon Musk. Calvin stated this explicitly. The organization formally responded to OpenAI's subpoena confirming it receives no Musk funding, and OpenAI has not contested that response or pursued enforcement.

Second, even if Encode were Musk-funded, so what? Organizations don't lose their First Amendment rights to participate in policy debates because their donors have business interests. By OpenAI's logic, any think tank funded by tech companies, any advocacy group backed by philanthropists with portfolio holdings, any nonprofit supported by foundations with endowment exposure to AI stocks—all of them become fair game for invasive discovery the moment they take positions that challenge a litigant's commercial interests.

Third, OpenAI offered no actual evidence of coordination. The subpoena wasn't based on intercepted communications, whistleblower testimony, or documentary proof of collusion. It was based on the fact that Encode filed an amicus brief agreeing with some of Musk's legal arguments—a publicly available court filing that is literally designed to allow third parties to weigh in on litigation.

Tyler Johnston, founder of AI watchdog group the Midas Project, revealed he received similar treatment: a knock at his door in Oklahoma and a subpoena demanding "every text/email/document that, in the 'broadest sense permitted,' relates to OpenAI's governance and investors." He noted that instead of simply asking whether he was Musk-funded—to which he would have answered no—OpenAI demanded what amounted to "a list of every journalist, congressional office, partner organization, former employee, and member of the public we'd spoken to about their restructuring."

That's not a funding inquiry. It's a mapping operation. OpenAI wanted to know who was talking to whom about the company's governance structure, its nonprofit-to-for-profit conversion, and its policy positions. The Musk lawsuit was simply the jurisdictional hook to compel that disclosure.

Internal Dissent: When Your Own Employees Call You Out

What makes this episode particularly revealing is the public pushback from OpenAI's own ranks. Joshua Achiam, the company's head of mission alignment, wrote on X: "At what is possibly a risk to my whole career I will say: this doesn't seem great." He continued: "We can't be doing things that make us into a frightening power instead of a virtuous one. We have a duty and a mission to all of humanity, and the bar to pursue that duty is remarkably high."

Helen Toner, the former OpenAI board member who resigned after the failed 2023 effort to oust CEO Sam Altman, was more direct, acknowledging that plenty of OpenAI's work is great but that "the dishonesty & intimidation tactics in their policy work are really not."

These aren't external critics. These are people who've worked inside OpenAI, who understand its culture, and who felt compelled to publicly warn that the company's behavior contradicts its stated mission. When your head of mission alignment is risking his career to call out internal conduct, that's not a PR problem—it's a values crisis.


SB 53: What OpenAI Was Really Fighting

Calvin emphasized to Fortune that the under-covered aspect of this story is OpenAI's conduct around SB 53 itself. The company's letter to Governor Newsom urged California to treat companies as compliant if they'd already signed federal safety agreements or joined international frameworks like the EU's AI Code of Practice.

This provision would have functionally exempted major AI developers from state oversight. Why? Because federal AI safety agreements are currently voluntary, non-binding, and largely symbolic. The White House's July 2023 voluntary commitments, signed by OpenAI and others, include pledges to conduct safety testing and share information—commitments with no enforcement mechanism, no penalties for non-compliance, and no independent verification.

If California had adopted OpenAI's suggested language, companies could exempt themselves from state transparency requirements simply by signing federal commitments they were already ignoring. It's regulatory arbitrage dressed as harmonization, and it would have gutted SB 53's enforcement provisions.

The final law, signed by Newsom in late September, did not include OpenAI's proposed carve-outs. It requires frontier AI developers to publish safety frameworks, report critical incidents to the state, and share catastrophic risk assessments under state oversight—exactly the kind of binding, verifiable requirements that voluntary federal commitments lack.

Encode advocated for those stronger provisions. OpenAI lobbied against them. When lobbying didn't work, OpenAI served subpoenas. The sequencing matters.

The Chilling Effect: What Happens Next

The immediate damage is already done. Calvin described this as "the most stressful period of my professional life." Encode—founded by Sneha Revanur when she was 15 years old and currently operating with three full-time staff—was forced to retain legal counsel to respond to OpenAI's demands. The organization doesn't have OpenAI's resources, its legal infrastructure, or its ability to absorb the costs and distraction of protracted discovery disputes.

That resource asymmetry is the weapon. OpenAI doesn't need to win the underlying legal argument about whether Encode is Musk-funded. The company just needs to make participation in policy debates expensive, stressful, and risky enough that small organizations think twice before opposing its positions.

The broader chilling effect extends to every researcher, nonprofit, and advocacy group working on AI policy:

  • If you file an amicus brief against a major AI company, expect subpoenas.
  • If you advocate for regulations a company opposes, expect demands for your private communications.
  • If you criticize a company's restructuring or governance, expect implications that you're a stalking horse for its competitors.

This isn't hypothetical. It's the playbook OpenAI just deployed. And unless there are consequences—judicial sanctions for abusive discovery, bar complaints for weaponizing litigation, reputational costs that actually matter—every other AI company will adopt the same tactics.

What OpenAI Should Have Done

There was a legitimate path here. If OpenAI genuinely believed Encode was covertly funded by Musk and filing amicus briefs as part of a coordinated litigation strategy, the company could have:

  1. Asked directly. A simple inquiry: "Can you confirm your funding sources?" If Encode refused, that refusal could support more invasive discovery.
  2. Narrowed the subpoena. Instead of demanding all communications about SB 53 and OpenAI's governance, request only documents showing financial relationships with Musk entities.
  3. Timed it appropriately. Wait until after SB 53 passed to pursue discovery, avoiding any appearance of using litigation to interfere with active policy debates.
  4. Provided actual evidence. Base subpoenas on concrete indicators of coordination, not just the fact that an organization filed a public amicus brief agreeing with some of Musk's arguments.

OpenAI did none of these things. Instead, it went for maximum intimidation: personal service at Calvin's home during dinner, broad demands for private communications, and public implications of hidden funding—all while SB 53 was under negotiation and Encode was actively advocating for stronger provisions OpenAI opposed.

Chris Lehane, OpenAI's head of global affairs, recently posted on LinkedIn describing the company as having "worked to improve" SB 53. Calvin called that characterization "deeply at odds" with his experience over the past few months. That's diplomatic phrasing for what most people would call gaslighting. OpenAI fought the bill, lost, and is now claiming credit for engagement.

The Mission Contradiction

Calvin closed his thread by asking: "Does anyone believe these actions are consistent with OpenAI's nonprofit mission to ensure that AGI benefits humanity?"

The answer is obviously no. You don't ensure AGI benefits humanity by intimidating the nonprofits advocating for safety regulations. You don't fulfill a mission to "all of humanity" by using litigation as a cudgel against three-person organizations that can't afford extended legal battles. You don't demonstrate commitment to transparency by demanding critics turn over their private communications about policy work.

OpenAI's stated mission and its actual conduct have diverged so completely that even internal employees are publicly calling it out. The company that claims to be building AGI for the benefit of all is using half-trillion-dollar leverage to silence critics, weaken safety regulations, and map the advocacy networks of organizations that challenge its governance structure.

This is what regulatory capture looks like in real time. This is how companies that talk about existential risk and benefiting humanity actually behave when faced with modest state-level transparency requirements. This is why we can't trust industry self-regulation or voluntary commitments—because when push comes to shove, these companies deploy subpoenas, not safety frameworks.

Calvin noted that he uses OpenAI's products and values the company's AI safety research. Many OpenAI employees genuinely want the company to be a force for good. But intention doesn't matter when the institutional behavior is this corrosive. A company can publish excellent research papers and simultaneously destroy the policy ecosystem that might constrain its commercial interests. The two aren't contradictory—they're complementary parts of a strategy to dominate both the technical and regulatory dimensions of AI development.

We should expect better. We should demand better. And when a half-trillion-dollar company shows up at a nonprofit staffer's door during dinner to demand his private communications about a state safety law, we should call it exactly what it is: an abuse of legal process designed to intimidate critics into silence.

OpenAI built this reputation one subpoena at a time. The rest of us just have to decide whether we're going to let that become normal.


If you're navigating AI policy and need strategic guidance from people who won't be intimidated by subpoenas or corporate pressure, we're here. Let's talk about building accountability into the system.
