The OpenAI Pentagon Deal That Broke the Internet — and Maybe the Industry's Soul
Let's not bury the lede: OpenAI made a deal with the Department of Defense hours after Anthropic refused to — and the internet responded by canceling its ChatGPT subscriptions in droves. Katy Perry switched to Claude. Reddit revolted. And Sam Altman spent the weekend in what can only be described as a very expensive apology tour.

This is the story of a week that cracked the AI industry's ethical façade wide open. And if you work in marketing, growth, or business strategy, you should be paying very close attention — because the AI tools you're betting your workflows on just became a moral choice.

The Refusal That Started a Revolution: Anthropic Draws Its Red Lines

The backstory matters. Anthropic — the "safety-first" AI company founded in 2021 by former OpenAI employees — had already deployed Claude across the Pentagon's classified networks. When the Department of Defense pushed for expanded access, specifically for mass domestic surveillance and fully autonomous weapons systems, Anthropic CEO Dario Amodei said no.

Flat no. On the record. No.

Amodei argued that current AI models are not reliable enough to deploy in autonomous weapons and that mass surveillance violates constitutional rights. He further disputed the federal government's legal authority to designate Anthropic a supply chain risk at all, and — pointedly — advised Pentagon contractors that their existing use of Claude was unaffected.

The Trump administration did not take this well. Defense Secretary Pete Hegseth declared Anthropic a supply-chain risk, and President Trump took to Truth Social to call the company's leadership "leftwing nut jobs" who had made a "DISASTROUS MISTAKE."

So far, so predictable. A safety-focused AI company refuses a government demand it considers overreach. The government retaliates. It's a story as old as, well, the last five years of Silicon Valley's relationship with Washington.

But here's where it gets genuinely disturbing.

The Deal That Shocked Even OpenAI's Own CEO

Within hours of Anthropic's talks collapsing on Friday, February 28, OpenAI announced it had struck its own agreement to supply AI to the Pentagon's classified networks. The timing was notable: the deal was announced just hours before the U.S. and Israel launched strikes on Iran that killed Supreme Leader Ali Khamenei and hundreds of civilians.

The optics were, as Altman himself would later admit, catastrophic.

Just days earlier, Altman had told employees in an internal memo that OpenAI shared the same "red lines" as Anthropic. So users — reasonably — wanted to know: if you have the same red lines, why could you make the deal when Anthropic couldn't?

OpenAI published a blog post claiming its models could not be used for mass domestic surveillance, autonomous weapons, or "high-stakes automated decisions." It argued its approach was superior because, rather than relying on usage policies, it retains "full discretion over our safety stack," deploys via cloud, keeps "cleared OpenAI personnel in the loop," and has "strong contractual protections."

That argument was immediately challenged. Techdirt's Mike Masnick argued the deal "absolutely does allow for domestic surveillance," because the contract references Executive Order 12333 — the authority under which the NSA captures communications on lines outside the U.S., even when those communications involve American citizens.

This is not a minor procedural objection. This is someone pointing at the contract and saying the guardrails have a door in them.

"You're Now Training a War Machine": The User Revolt

The public didn't wait for legal analysis. They acted.

Claude surged to the No. 1 spot on Apple's U.S. App Store on Saturday, dethroning ChatGPT just one day after OpenAI's Pentagon announcement. Claude also climbed the Android charts in both the U.S. and U.K., and Anthropic reported that every single day of the preceding week had set an all-time record for sign-ups.

According to Anthropic, free users jumped by more than 60% since January, and paid subscribers more than doubled. The server strain was so intense that more than 1,400 users reported outages on Monday morning as Anthropic described "unprecedented demand."

On Reddit, a thread in r/ChatGPT calling on users to cancel their subscriptions became one of the forum's most upvoted posts of all time, under the header: "You're now training a war machine. Let's see proof of cancellation."

Katy Perry — yes, that Katy Perry — publicly announced her switch to Claude and urged others to follow. We are officially past the era of AI being a niche techie concern. This is a mainstream consumer values conversation now.

Altman's Damage Control and the Questions He Couldn't Answer

To his credit, Altman didn't hide. He hosted a rare public AMA on X and took the punches. To his discredit, some of his answers were deeply unsatisfying.

When asked what OpenAI would do if the DoD issued orders that violated the Constitution, Altman said the company would refuse — even if it meant imprisonment, quipping, "Please come visit me in jail if necessary."

That's a memorable line. It's not a policy.

More troubling was his apparent faith that the military's own self-regulation was sufficient protection. Altman asserted that "people in our military are far more committed to the Constitution than an average person off the streets" — a statement that left commenters invoking Edward Snowden's name with considerable frequency.

By Monday, he had published what amounted to a formal retraction-in-spirit: "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy." He also stated that Anthropic should not be designated a supply chain risk and expressed hope that the DoD would offer Anthropic the same terms OpenAI agreed to.

Which raises the most uncomfortable question of the entire episode: if the terms are the same, why did only one company get penalized for saying no?

Nobody in Washington has answered that yet.

The Irony That Makes Your Head Spin

Here is where the story becomes almost Shakespearean in its tragic irony. Reports emerged that the DoD used Claude — Anthropic's own AI — to help select targets in the Iran strikes. Meaning Anthropic's principled public stand may have been, at least in part, theatrical — their AI was already in the machine they refused to officially arm.

This does not absolve OpenAI. But it does suggest that the entire industry's ethical framework is more performative than structural. The problem isn't one bad actor. The problem is a system that lacks enforceable guardrails at every level — from the boardroom to the battlefield.

What This Means for Marketers, Growth Leaders, and Anyone Buying AI Tools

Here's why we're covering this in a marketing publication and not just leaving it to the defense reporters:

Your AI tool choices are now brand choices. The mass migration from ChatGPT to Claude wasn't driven by a product update or a pricing change. It was driven by values alignment. Users looked at OpenAI's Pentagon deal and decided they didn't want their subscription dollars funding something they found ethically troubling. That is consumer behavior that every CMO should be tracking — because the same calculus will apply to your brand if you're publicly aligned with AI tools that your customers find objectionable.

Vendor ethics are vendor risk. If you've built workflows around a single AI provider — and most marketing teams have — this week showed how fast the ground can shift. Anthropic woke up on a Friday as a Pentagon partner and went to sleep designated a supply chain threat. OpenAI woke up with the contract and went to bed with a PR crisis. Neither outcome was predictable 48 hours earlier.

Transparency is the new competitive advantage. Anthropic's willingness to publicly state what it would and would not allow its technology to be used for — and to pay a real price for that position — is what drove 60%+ user growth in a single week. In a market saturated with AI features and capability claims, ethical clarity is becoming a genuine differentiator. For your own brand's AI strategy, the question isn't just "what can this tool do?" It's "what won't this tool do, and do we stand behind that?"

If you're building your AI marketing strategy on tools you haven't vetted at a values level, this week is a fire drill. For a deeper look at how to evaluate AI vendors as strategic partners — not just software — our growth strategy resources are a good place to start.

The AI industry has spent four years promising it would be responsible with power it was accumulating at unprecedented speed. This week, the bill came due. One company paid it. Another is still negotiating the terms.

We'd all do well to remember which one had to crash its servers from demand — and which one's CEO spent the weekend explaining himself on X.


Want to Make Smart AI Calls for Your Business?

Winsome Marketing's growth experts help brands cut through the noise and build AI-integrated strategies that are both effective and defensible. Let's talk.
