7 min read
Writing Team · Oct 1, 2025
California just did what the federal government has spent three years refusing to do: establish actual accountability standards for AI companies. Gov. Gavin Newsom signed SB 53 into law Monday, creating the nation's first framework requiring major AI developers to publicly disclose their safety protocols, report significant incidents, and protect whistleblowers who raise concerns. It's not the sweeping regulation some advocates wanted—that died last year when Newsom vetoed the more expansive SB 1047 amid tech industry backlash. But it's something we haven't had until now: enforceable transparency requirements for an industry that's been operating with essentially zero oversight.
And before the "but innovation" crowd starts prophesying California's tech exodus, let's be clear about what this law actually does. It doesn't ban models. It doesn't cap compute. It doesn't mandate pre-approval for releases. It says: if you're building frontier AI systems, you need to tell the public what safety measures you're using, and you need to report serious incidents. That's not regulatory overreach—that's the bare minimum standard we should expect from companies building technology that could reshape labor markets, information systems, and maybe civilization itself.
The fact that this is controversial tells you everything about how successfully the AI industry has avoided accountability.
Let's start with what California's new law mandates. SB 53 requires certain AI developers—specifically those whose frontier models cross significant capability thresholds—to publicly disclose their safety and security protocols. Not just file them with a government agency. Publicly disclose them. That means researchers, civil society organizations, journalists, and the public can actually examine what safety measures these companies claim to be implementing.
The law also creates a formal mechanism for reporting major safety incidents to the state. If an AI system is involved in significant harm—cyberattacks conducted without human oversight, for example, or deceptive model behavior that causes real-world damage—there's now a process for documenting and investigating that. Previously? Nothing. Companies could experience serious safety failures and face zero obligation to tell anyone.
SB 53 includes whistleblower protections for AI workers who raise safety concerns. This matters enormously. We've already seen cases of AI researchers being pressured to stay quiet about safety issues, or facing retaliation for speaking up. According to reporting from Vox in early 2025, multiple AI safety researchers have left companies due to concerns about being silenced. Whistleblower protections mean those workers have legal recourse if they're punished for raising legitimate concerns.
Finally, the law establishes groundwork for CalCompute—a state-run cloud computing cluster that could provide researchers and smaller organizations access to compute resources currently concentrated among a handful of tech giants. That's infrastructure democratization, making it possible for entities beyond OpenAI, Anthropic, Google, and Meta to conduct meaningful AI research.
State Sen. Scott Wiener, the bill's author, framed it this way: "With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and reduce risk."
Commonsense guardrails. Not innovation-killing regulation. Transparency requirements and incident reporting. The fact that this needed legislation—and faced opposition—is absurd.
Here's something that should quiet the "California is going too far" narrative: SB 53 is actually more transparent than the EU AI Act in key ways. The EU requires companies to submit safety protocols privately to regulators. California requires public disclosure. The EU's incident reporting focuses primarily on direct harms to individuals. California includes crimes committed without human oversight and deceptive model behavior—categories the EU doesn't mandate reporting for.
In other words, California just passed stronger transparency requirements than Europe—the jurisdiction that's supposedly been too aggressive with tech regulation. And somehow, European AI development hasn't collapsed. Mistral AI is building competitive models in France. Germany has thriving AI research labs. The UK's AI sector continues growing despite similar regulatory discussions.
The argument that transparency kills innovation has always been backwards. Opacity kills trust. And without public trust, AI adoption stalls—which actually does harm innovation. Companies that can demonstrate robust safety practices and transparent operations will have competitive advantages in markets where procurement officers, enterprise clients, and regulators are increasingly asking hard questions about AI risk.
Anthropic, maker of Claude, publicly supported SB 53. Co-founder Jack Clark said: "We're proud to have worked with Senator Wiener to help bring industry to the table and develop practical safeguards that create real accountability for how powerful AI systems are developed and deployed."
Anthropic is a frontier AI lab competing directly with OpenAI and Google. If transparency requirements were truly innovation-killing, they wouldn't support them. The fact that they do suggests that serious AI companies recognize that accountability frameworks are necessary—and potentially advantageous.
Let's address the elephant in the room: why is California doing this instead of Congress? Because the federal government has spent three years debating AI safety while accomplishing essentially nothing enforceable.
President Trump's administration has prioritized accelerating AI development to beat China in the tech race. That's a legitimate geopolitical concern, but it's been used to justify avoiding any meaningful safety regulation. The logic goes: if we regulate, China wins. Therefore, no regulation. It's a false binary that ignores the possibility of smart regulation that enhances competitiveness by building public trust and preventing catastrophic failures that would set the entire industry back.
Meanwhile, Congressional efforts have been mostly performative. Republican Sen. Ted Cruz has proposed freezing state AI legislation, arguing that 50 different state standards would be chaotic. He asked: "Do you really want Gavin Newsom and Karen Bass and Comrade Mamdani in New York City setting the rules for AI and governing AI across this country? I think that would be cataclysmic."
Setting aside the "Comrade Mamdani" red-baiting (which is unserious), Cruz has a point about regulatory fragmentation—but only if Congress actually passes federal standards. They haven't. Asking states to wait indefinitely while Congress debates is asking them to accept zero oversight while AI capabilities rapidly advance. That's not responsible governance.
California is filling a vacuum. If federal lawmakers don't like states setting standards, they should pass federal legislation that preempts state rules. Until then, complaining about state action while doing nothing at the federal level is just obstruction.
The Chamber of Progress, a tech industry lobbying group, slammed SB 53 as harmful to California's innovation economy. Robert Singleton, their senior director for California, claimed: "This could send a chilling signal to the next generation of entrepreneurs who want to build here in California."
This is lobbying boilerplate, and it's not convincing. California remains the undisputed center of the global tech industry despite having stronger labor protections, more aggressive privacy laws, and higher taxes than most states. The AI industry specifically is overwhelmingly concentrated in the Bay Area. OpenAI, Anthropic, Google DeepMind, and dozens of well-funded AI startups are headquartered there.
The idea that transparency requirements and incident reporting will suddenly drive them away doesn't pass the smell test. Where would they go? Texas, where the compute infrastructure, talent pool, and venture capital ecosystem are all smaller? Florida, where they'd gain... what, exactly, in exchange for losing access to the industry's hub?
Collin McCune from Andreessen Horowitz wrote that SB 53 "misses an important mark by regulating how the technology is developed — a move that risks squeezing out startups, slowing innovation, and entrenching the biggest players."
But SB 53 doesn't regulate how models are developed. It requires transparency about safety measures and incident reporting. Startups building frontier AI systems should already be implementing safety protocols—if they're not, that's the problem the law addresses. And if transparency requirements somehow favor larger players, that suggests those larger players already have more robust safety practices. Is that an argument against the law, or an indictment of startup safety standards?
SB 53 is not perfect legislation. It's narrower than last year's SB 1047, which would have imposed liability on developers for catastrophic harms from their models. That broader approach was probably too aggressive for a first-mover state law—Newsom's veto made clear California wasn't going to enact comprehensive AI liability rules before any federal framework existed. But SB 1047's failure forced a recalibration toward something more politically viable.
What SB 53 gets right is focusing on transparency and accountability without prescribing specific technical approaches. It doesn't tell companies how to build safe AI—it tells them they need to explain what safety measures they're using and report when things go wrong. That's a governance framework, not technical micromanagement.
The whistleblower protections are particularly important. AI safety is a rapidly evolving field where our understanding of risks changes as capabilities advance. Workers inside AI companies often see problems before regulators or the public does. Protecting those workers' ability to raise concerns without facing retaliation creates an early warning system for the entire industry.
CalCompute, if properly funded and implemented, could genuinely democratize access to AI research infrastructure. Right now, meaningful AI research requires compute resources that cost millions or tens of millions of dollars. That concentrates research capacity among well-funded corporations and elite universities. A state-funded computing cluster open to researchers, academics, and smaller organizations would enable more diverse research perspectives—which improves our collective understanding of AI capabilities and risks.
What SB 53 misses is enforcement specificity. The law creates disclosure and reporting requirements but doesn't spell out detailed penalties for violations or fund aggressive enforcement mechanisms. That means effectiveness depends heavily on how the California Attorney General's office implements it. Strong implementation could make this landmark legislation. Weak implementation could make it symbolic.
Newsom hinted at signing the bill while speaking at a U.N. General Assembly panel event with former President Bill Clinton. That's not accidental positioning. California is explicitly positioning itself as a model for national and potentially international AI safety standards.
Similar legislation has passed New York's legislature. Once both frameworks are in force, two of the largest state economies in the U.S. will have aligned AI safety rules. At that point, companies building frontier AI systems will need to comply regardless—making federal preemption less relevant, because the most important markets will have already set the standards.
This is how policy change often happens in the U.S. California passes environmental standards; eventually automakers build to California specs for the whole country because it's cheaper than maintaining separate fleets. California passes privacy regulations; tech companies implement them nationwide because managing different privacy regimes is operationally complex. California passes AI safety requirements; companies adopt them broadly because transparency to California regulators means transparency period.
Congressional members have discussed crafting national AI safety frameworks with potential carve-outs for state rules. That's the sensible approach—establish federal baseline standards while allowing states to experiment with stronger protections. But until federal legislation actually passes, California's framework is the de facto standard.
We're at an inflection point where AI capabilities are advancing faster than our governance frameworks can adapt. That creates genuine risks—not hypothetical science fiction scenarios, but near-term harms from systems deployed without adequate safety testing, transparency about limitations, or accountability when failures occur.
The AI industry's preferred approach has been self-regulation: trust us, we're implementing safety measures, we're being responsible, we don't need external oversight. That's the same pitch every emerging technology industry makes, and it's never been sufficient. Financial services needed regulation. Pharmaceuticals needed regulation. Automobiles needed safety standards. AI is not uniquely exempt from the pattern of powerful technologies requiring external accountability.
SB 53 doesn't solve every AI safety challenge. It's not designed to. It's a foundation—transparency requirements, incident reporting, whistleblower protections—that enables more sophisticated regulation down the line as we better understand AI risks and impacts.
The tech industry will adapt, just as it always has to new regulatory requirements. The notion that California's law will cripple innovation is contradicted by California's continued dominance in tech despite decades of regulations that were supposed to drive companies away.
And if transparency about safety protocols is truly threatening to an AI company's business model, that tells you something important about their safety protocols.
Regulation isn't the enemy of innovation—it's the foundation of sustainable growth. Winsome Marketing helps companies navigate emerging AI governance frameworks with strategies that turn compliance into competitive advantage. Because the companies that get ahead of regulation don't just survive the rules—they help write them. Let's talk about your AI governance strategy before someone else defines the standards you'll need to follow.