AI in Marketing

San Francisco's AI Experiment

Written by Writing Team | Jun 6, 2025 12:00:02 PM

In a world where AI promises often feel like marketing fluff, San Francisco City Attorney David Chiu is doing something refreshingly practical: using artificial intelligence to tackle the bureaucratic sprawl that has been devouring government efficiency for decades. Working with Stanford researchers, Chiu has identified 140 redundant reporting requirements in the city's bloated municipal code and is moving to eliminate them. It might be the first genuinely sensible AI deployment we've seen in government.

The numbers alone are staggering. San Francisco's municipal code runs about the same length as the entire U.S. federal rulebook—that's 75 "Moby Dicks" worth of legal text. No human legal team could realistically audit all that bureaucratic sediment. But Stanford's Regulation, Evaluation and Governance Lab trained an AI program to "think like a lawyer" and systematically identify every reporting requirement buried in those thousands of pages.

What they found was both hilarious and horrifying: requirements for reports on fixed newspaper racks that no longer exist, duplicate reporting mandates across different departments, and layers upon layers of what Stanford's Daniel Ho aptly termed "policy sludge."

The Right Tool for the Right Job

This isn't AI trying to replace human judgment—it's AI doing the grunt work that humans literally cannot do at scale. "This tool saved us countless hours of work," Chiu said. "Because of the length of our code… it's likely a project we would never have undertaken."

That's the key insight here. AI isn't being asked to make policy decisions or interpret complex regulations. It's being used as a sophisticated search engine to find patterns in massive datasets, then humans review the findings and make the actual decisions. The AI identified the reporting requirements; Chiu's team evaluated which ones could be eliminated, combined, or streamlined.

The results are promising. More than a third of the nearly 500 reporting requirements that can be altered by city ordinance are being changed, with 140 eliminated entirely. Departments across the city government—from the controller to the planning department to the Mayor's Office of Housing and Community Development—will be freed from producing reports that serve no current purpose.

Evidence-Based Success

Stanford's approach was methodical and transparent. Ho's team calibrated and tested the tool before running it on San Francisco's code, first validating it against the U.S. legal code, where it correctly identified 1,400 known reports plus hundreds more that had previously gone uncounted. This isn't a black-box AI system making mysterious recommendations—it's a carefully trained tool that has been tested and validated.

The federal government has been experimenting with similar approaches. In 2023, federal agencies disclosed 710 AI use cases; by 2024, that number had more than doubled to 1,757. The U.S. Patent and Trademark Office has deployed AI tools to improve patent classification and search processes, reducing application processing times. The Transportation Security Administration has begun integrating AI-enabled technologies to speed up security screening.

But San Francisco's initiative stands out because it's attacking a specific, measurable problem with demonstrable results. Rather than vague promises about "efficiency gains," we have concrete numbers: 140 reporting requirements eliminated, countless staff hours freed up for actual public service.

The Broader Implications

This project represents something we rarely see in government technology deployments: appropriate scope and realistic expectations. San Francisco isn't claiming AI will revolutionize governance or replace human workers. They're using it as a powerful tool to solve a specific administrative problem that has been growing for decades.

The timing is particularly interesting. As discussions about AI in government tend toward either utopian dreams or dystopian fears, San Francisco is demonstrating a middle path: using AI to make government more efficient without fundamentally altering how democratic institutions function.

A European criminal injury compensation agency achieved similar success with a GenAI copilot that helped caseworkers navigate complex policy and legal frameworks, and is expected to cut average case processing time by 80 days. These aren't flashy applications, but they represent the kind of practical improvements that could restore some public trust in government competence.

Reasonable Concerns and Safeguards

Cautious optimism is warranted, in part because San Francisco isn't rushing into this blindly. The city has already established comprehensive Generative AI Guidelines for city workers, emphasizing responsible use and appropriate oversight. Chief Technology Officer Michael Makstman has explicitly said he doesn't believe chatbots should be plugged into city services just yet, preferring a careful, validated approach.

The partnership with Stanford also provides crucial academic oversight. This isn't a vendor selling a black-box solution to government clients—it's a research collaboration with transparency and validation built into the process.

That said, concerns remain valid. Critics worry that AI-driven government efficiency could prioritize speed over democratic deliberation, or that automated systems might miss important nuances in policy implementation. The key is ensuring human oversight remains central to the process, with AI handling the analytical heavy lifting while humans make the policy decisions.

A Replicable Model

What makes San Francisco's approach particularly encouraging is its replicability. "This isn't just a San Francisco problem," Chiu said, referencing the millions of pages produced by Congress every year as a similar bureaucratic black hole. The Stanford team's AI tool could theoretically be deployed in other cities, counties, and even at the federal level to identify similar inefficiencies.

The project also suggests a sustainable approach to AI in government: start with clearly defined problems, use AI for tasks that benefit from its strengths (pattern recognition in large datasets), maintain human oversight for all decisions, and measure concrete outcomes.

The Path Forward

San Francisco's legislation will be introduced to the Board of Supervisors next week, with Supervisor Bilal Mahmood noting that City Hall has been "weighted down by unnecessary code for too long." If successful, this could become a template for responsible AI deployment in government.

The real test will be implementation and long-term impact. Will eliminating these reporting requirements actually free up staff time for more meaningful work? Will it improve service delivery to citizens? Will other jurisdictions adopt similar approaches?

For now, we have something rare in the AI governance space: a concrete example of technology being used appropriately to solve a real problem with measurable results. In an era of AI hype and government skepticism, that's worth celebrating—cautiously.

Looking to implement AI solutions that actually solve business problems? Winsome Marketing's growth experts help organizations identify practical AI applications that deliver measurable results without the hype.