Chinese and U.S. Experts Agree AI Should Be Restricted in Defense

Written by Writing Team | Nov 21, 2025 1:00:00 PM

Everyone agrees AI shouldn't be weaponized. Nobody's willing to stop first.

At the U.S.-China Hong Kong Forum on Monday, experts from both nations converged on a comfortable consensus: AI use in military applications should be restricted, global governance frameworks are necessary, and dialogue between Beijing and Washington could ease tensions. According to the South China Morning Post, Christopher Nixon Cox of the Richard Nixon Foundation called bioweapons an "obvious area" for cooperation, while Tsinghua University's Zhang Tuosheng urged resuming intergovernmental AI dialogue as soon as possible.

The problem isn't that these ideas are wrong. The problem is they're irrelevant when one side is building AI-powered surveillance infrastructure at scale and the other is restricting chip exports to slow them down. Consensus without enforcement is just theater.

Here's what actually happened at the forum: experts acknowledged that the growing use of AI in weapon-related functions raises "serious ethical and accountability concerns," then pivoted to discussing why cooperation is "logically difficult." Sun Chenghao from Tsinghua's Centre for International Security and Strategy put it plainly: "It is hard to imagine the Chinese government deciding to cooperate closely with the US on governance while facing US restrictions on hardware. Logically, that just doesn't make sense."

Translation: you can't sanction us on AI chips and then expect us to collaborate on AI ethics. And he's right.

The Bioweapons Red Herring

Cox's focus on AI-generated bioweapons is strategic misdirection. Yes, AI that can rapidly design novel pathogens is terrifying. Yes, both nations should prevent that. But bioweapons are the obvious threat—the one everyone can agree is bad because it threatens everyone equally.

The real military AI applications are more mundane and far more immediate: autonomous drones, facial recognition for targeting, predictive logistics, signals intelligence, cyber warfare. These aren't science fiction scenarios. They're being deployed now. And neither the U.S. nor China is slowing down development because the competitive disadvantage of falling behind is too severe.

According to SCMP, Xi Jinping and Joe Biden agreed in November 2024 that "humans, not artificial intelligence, should decide the use of nuclear weapons"—a first-of-its-kind consensus. But earlier that year, China refused to sign a global non-binding pact on responsible military AI use at a Seoul summit. The contradiction is instructive: China will accept restrictions on catastrophic scenarios that threaten everyone, but not on tactical capabilities that advantage it specifically.

The Chip War Makes Everything Worse

Sun Chenghao's point about hardware restrictions reveals the central tension. Washington has blocked exports of advanced AI chips and chipmaking equipment to China, citing military applications. China has responded by accelerating domestic chip development, launching an "AI+" initiative in August to upgrade entire sectors and spur domestic breakthroughs.

This dynamic guarantees arms-race behavior. When one side restricts access to foundational technology, the other treats self-sufficiency as an existential priority. China isn't just building alternative chip supply chains; it's treating self-sufficiency as a matter of national sovereignty. And it's making progress. As SCMP notes, China has "steadily advanced its technology and narrowed the chip gap," prompting some in the U.S. to rethink export restrictions.

But rethinking doesn't mean reversing. The geopolitical logic is locked in. The U.S. can't ease restrictions without strengthening Chinese military AI capabilities. China can't stop developing domestic alternatives without accepting permanent technological subordination. Nobody has an exit strategy.

Kai-Fu Lee's Uncomfortable Truth

The most interesting comments at the forum came from Kai-Fu Lee, founder of 01.AI and former head of Google China. Lee argued that U.S.-China competition would be "better and more constructive" if Americans understood the Chinese approach to AI development.

His characterization: the U.S., shaped by Silicon Valley's winner-take-all mindset, pursues artificial general intelligence (AGI) as a "giant-step function"—one breakthrough model that dominates everything. China pursues incremental progress, open-source sharing, and user-driven competition focused on practical applications rather than platform dominance.

Lee's framing is partially self-serving—he runs a Chinese AI startup competing with American giants—but it contains real insight. "As the US spends trillions of dollars dreaming or hoping to build that one giant AGI model to squash everyone else, China is sort of like collaborating and building open source, and trying to figure out ways to make money," he said.

This isn't "China good, America bad." It's strategic positioning. China's approach makes sense when you're behind technologically but ahead in implementation scale. Open-source collaboration accelerates catch-up. Incremental progress compounds faster when you have more engineers and fewer regulatory constraints. User-driven competition works when your domestic market is 1.4 billion people.

Lee's argument that Silicon Valley thinking causes the U.S. government to assume "as one company squashes other companies, one country will squash other countries" is a sharp critique. But it ignores that China's government does operate on zero-sum assumptions about geopolitical competition—just with different tactics.

What Restricting Military AI Actually Requires

Effective restrictions on military AI would require verification mechanisms that neither side will accept. The U.S. won't allow Chinese inspectors to audit Pentagon AI systems. China won't allow American inspectors to verify military applications aren't being developed by dual-use research institutions like Tsinghua.

Without verification, agreements are meaningless. Both sides can sign declarations that decisions to use nuclear weapons require human approval while simultaneously developing autonomous systems for every other military function. The bioweapons consensus Cox advocates is valuable precisely because biological threats are detectable; pathogen releases can't be hidden. AI weapons development happens in labs and data centers, invisible until deployment.

The experts at the Hong Kong Forum understand this. That's why their recommendations focus on "dialogue" and "cooperation" without specifying enforcement. They're advocating for what's politically possible, not what's strategically sufficient.

The Race Nobody's Winning

Here's the uncomfortable conclusion: military AI development is a classic security dilemma. Both sides would be better off with restrictions, but neither can trust the other to comply, so both keep building. The more each side builds, the more the other feels threatened, accelerating the cycle.
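If you want that logic in miniature, it's a prisoner's dilemma. Here's a minimal sketch as a two-player game; the payoff numbers are illustrative assumptions we picked for the example, not figures from any source:

```python
# A minimal prisoner's-dilemma sketch of the military AI security dilemma.
# Payoff numbers are illustrative assumptions, not measured values.

STRATEGIES = ("restrict", "build")

# payoffs[(us_choice, china_choice)] = (us_payoff, china_payoff)
payoffs = {
    ("restrict", "restrict"): (3, 3),  # mutual restraint: best joint outcome
    ("restrict", "build"):    (0, 5),  # unilateral restraint: fall behind
    ("build",    "restrict"): (5, 0),  # unilateral buildup: decisive edge
    ("build",    "build"):    (1, 1),  # arms race: worse for both than restraint
}

def best_response(opponent_choice, player_index):
    """Return the strategy that maximizes a player's payoff,
    given what the opponent does."""
    def payoff(my_choice):
        key = ((my_choice, opponent_choice) if player_index == 0
               else (opponent_choice, my_choice))
        return payoffs[key][player_index]
    return max(STRATEGIES, key=payoff)

# Whatever the other side does, "build" pays more: a dominant strategy.
for opponent in STRATEGIES:
    print(f"US best response if China plays {opponent!r}:",
          best_response(opponent, player_index=0))
    print(f"China best response if US plays {opponent!r}:",
          best_response(opponent, player_index=1))
```

Run it and "build" wins every comparison. The equilibrium is mutual buildup at (1, 1), even though mutual restraint at (3, 3) pays both sides more. That's the security dilemma, reduced to arithmetic.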

China and the U.S. agree AI shouldn't be weaponized. They'll keep weaponizing it anyway. Because in a world where one breakthrough could shift the military balance permanently, the risk of falling behind is greater than the risk of escalation.

Until it isn't.

Building business strategies that account for geopolitical AI competition? Winsome Marketing's growth experts help you navigate fragmented tech ecosystems and regulatory divergence. Let's strategize.