Regulation is Democracy: Why Moscrop's AI Awakening Misses the Mark

David Moscrop's clarion call for AI democratization in his recent piece lands like a perfectly aimed haymaker to the tech oligarchy's glass jaw. His diagnosis is surgical: we're sleepwalking toward an AI-controlled dystopia where a handful of Silicon Valley demigods decide humanity's algorithmic fate. But while Moscrop correctly identifies the disease, his prescribed cure—worker-controlled co-ops and state-owned enterprises magically seizing the means of AI production—reads like Marx fan fiction in a world where President Trump just signed an Executive Order titled "Removing Barriers to American Leadership in Artificial Intelligence" that explicitly prioritizes corporate innovation over regulatory constraints.

Here's the uncomfortable truth Moscrop dances around: regulation is democratization. Not the sexy, revolutionary kind that fires up political science professors, but the grinding, bureaucratic, absolutely essential kind that actually works.

Why the Worker-Cooperative Dream Won't Save Us

Moscrop's vision of worker-controlled AI deployment sounds inspiring until you remember that 42% of enterprise-level organizations actively use AI systems while less than 4% of smaller companies use AI to produce goods and services. The democratization gap isn't ideological—it's infrastructural. While we're debating who should own the robots, the robots are already being deployed at scale by entities with billion-dollar budgets and armies of PhD data scientists.

The fantasy of community-controlled AI deployment assumes communities have the technical sophistication, capital resources, and regulatory framework to make informed decisions about systems they fundamentally don't understand. One survey found that 75% of respondents used AI-driven tools, but using ChatGPT to write emails doesn't qualify anyone to govern transformer architectures or decide training data policies.

Meanwhile, the real democratization is happening through regulation that forces transparency, accountability, and public oversight onto private AI development. The EU's AI Act isn't revolutionary, but it's functional—creating clear, risk-based rules for AI developers and deployers based on specific uses of AI, and establishing penalties that can reach 35 million euros or 7% of worldwide annual turnover.

Regulation as Democratic Participation

The Harvard Law Review gets it right with its concept of AI co-governance: "if AI is poised to change the world and everyone will feel its impact, then everyone should have a part to play in its governance." But participation doesn't require ownership—it requires structured input into the regulatory frameworks that shape how AI is developed and deployed.

Consider the current US legislative landscape, where Republican and Democratic Senate leaders differ over whether a 10-year federal moratorium on state regulation of artificial intelligence should be tied to billions of dollars in funding. This isn't abstract policy debate—it's democracy in action, with different constituencies fighting over who gets to set the rules.

The Teamsters aren't demanding worker ownership of OpenAI; they're demanding regulatory protection from AI-enabled surveillance and from autonomous vehicles that could eliminate jobs. That's practical democratization—using existing democratic institutions to shape technology's impact rather than fantasizing about seizing the means of AI production.

The Enhancement vs. Augmentation False Choice

Moscrop's embrace of Evgeny Morozov's distinction between AI "augmentation" (deskilling workers) and "enhancement" (empowering them) creates a false binary that ignores how regulation can mandate the latter. We don't need worker cooperatives to demand skill-building AI—we need regulatory frameworks that require it.

Reid Hoffman's concept of "superagency" describes a state where individuals, empowered by AI, amplify their creativity, productivity, and positive impact. This isn't achieved through revolutionary ownership structures but through regulatory mandates that AI systems be designed for human empowerment rather than human replacement.

The EU AI Act already moves in this direction by prohibiting AI systems that pose a clear threat to people's safety, livelihoods, and rights, and by requiring high-risk AI systems to undergo strict safety evaluations. These aren't worker-ownership provisions—they're democratically enacted regulations that constrain how AI can be used.

The Labor Market Reality Check

Moscrop's piece assumes AI displacement is inevitable and revolutionary change is the only response. But analysis suggests that AI is likely to increase the dynamism of the labour market by prompting more workers to leave existing jobs and start new ones, creating a transition management problem rather than a systemic collapse requiring revolutionary solutions.

The Canadian experience with Employee Ownership Trusts offers a more pragmatic path. When workers aren't in the driver's seat, AI systems are more likely to put people out of jobs and deepen economic inequality—but the solution isn't revolutionary upheaval; it's regulatory frameworks that support democratic employee ownership within existing market structures.

Democratic employee-owned firms have a track record of weathering economic adversity and are more likely to maintain employment and wages for their workers. But these firms exist within regulated markets, not in opposition to them.

Why Smart Regulation Beats Revolutionary Dreams

The tech policy reality is that private investor-driven AI development is anathema to broad-based economic prosperity, social cohesion, and the material conditions that enable democracy. But the solution isn't seizing AI infrastructure—it's regulating it to serve democratic ends.

Smart regulation can achieve Moscrop's goals without requiring a socialist revolution:

  • Transparency mandates force AI companies to disclose training data, algorithmic decision-making processes, and bias testing results
  • Algorithmic auditing requirements create public oversight of AI systems used in hiring, lending, criminal justice, and healthcare
  • Worker protection standards prevent AI-enabled surveillance and require human oversight of automated decisions
  • Antitrust enforcement prevents AI monopolization and ensures competitive markets
  • Public procurement policies use government purchasing power to demand democratic AI development

The Path Forward: Democratic Oversight, Not Democratic Ownership

Moscrop is absolutely right that we're approaching a critical juncture where early surrender means corporate-controlled AI oligarchy. But the answer isn't revolutionary transformation—it's rigorous regulation that embeds democratic values into AI development and deployment.

AI models enable malicious actors to manipulate information and disrupt electoral processes, threatening democracies, but the response is regulatory frameworks that prevent these harms, not worker cooperatives that couldn't prevent them anyway.

The democratization of AI is already happening through regulatory battles in Congress, state legislatures, and international bodies. The EU AI Act, state-level AI regulations, and federal policy debates represent democracy in action—messy, contested, but ultimately more realistic than dreams of worker-controlled AI collectives.

We need more regulation, not less ownership. More democratic oversight, not revolutionary transformation. The oligarchy will fall not to worker cooperatives but to smart, sustained regulatory pressure that forces AI development to serve public rather than private interests.

That's not as romantically revolutionary as Moscrop's vision, but it's actually achievable in the world we inhabit rather than the one we might prefer.

Ready to navigate AI's regulatory complexity while building authentic growth? Winsome Marketing's growth experts help you stay compliant and competitive in an increasingly regulated AI marketplace.
