4 min read
Writing Team
Jul 21, 2025 9:28:47 AM
Government inefficiency isn't just a political talking point—it's a real problem that costs taxpayers billions and makes essential services harder to access. But what if artificial intelligence could actually solve some of these problems? Stanford Professor Daniel Ho's research with cities like San Francisco suggests that AI's biggest potential in government isn't chatbots or public-facing tools, but in addressing the bureaucratic bottlenecks that consume government workers' time and prevent them from serving citizens effectively.
Ho's work identifies three strategic approaches to AI implementation that could genuinely transform how government operates. But like any powerful technology, each comes with significant benefits and serious risks that leaders need to understand before implementation.
Strategy 1: Strategic AI Interventions in Complex Workflows
The Approach: Instead of building comprehensive AI systems, Ho advocates for inserting AI at specific bottleneck points in government processes. His team's work with Santa Clara County exemplifies this approach—they adapted a large language model to parse millions of property deeds, identifying discriminatory language that would have taken human staff nearly 10 years to review manually.
The Santa Clara County project demonstrates both the promise and peril of this approach. While the efficiency gains are undeniable, using AI to identify historical discrimination raises questions about whether AI systems trained on biased data can reliably detect bias—a philosophical and technical challenge that remains unresolved.
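To make the triage idea concrete, here is a minimal, hypothetical sketch of a first-pass deed screen: a cheap pattern filter flags candidate deeds so that only a small fraction needs expensive language-model or human review. The patterns and deed texts below are invented for illustration; the article does not describe Ho's team's actual pipeline.

```python
import re

# Hypothetical first-pass filter: flag deed text containing known
# discriminatory-covenant phrasing so that only candidate deeds go on
# to a more expensive language-model or human review step.
COVENANT_PATTERNS = [
    r"shall not be sold.*to any person of",
    r"occupied by any person not of the\s+\w+\s+race",
    r"restricted to members of the\s+\w+\s+race",
]

def flag_deed(text: str) -> bool:
    """Return True if the deed text matches any candidate pattern."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in COVENANT_PATTERNS)

deeds = [
    "Lot 12 shall not be sold, conveyed, or leased to any person of ...",
    "Grantor conveys Lot 7 to Grantee together with all appurtenances.",
]
flags = [flag_deed(d) for d in deeds]
print(flags)  # [True, False] -- only the first deed goes to review
```

The point of the design is the funnel: a fast, auditable filter handles millions of documents, while the slower, costlier review capacity is reserved for the candidates it surfaces.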
Strategy 2: Supporting Human Decision-Making
The Approach: Rather than automating benefits decisions, Ho proposes using AI to streamline information processing, allowing caseworkers to focus on human interaction and complex decision-making. The system would identify relevant information in case files to help determine eligibility while preserving human judgment for final decisions.
This approach faces a fundamental tension: the very efficiency gains that make AI attractive in benefits administration could lead to rushed or impersonal service. Research from the Electronic Privacy Information Center shows that automated public benefits decisions have already "incorrectly rejected eligible applicants, spurred on improper fraud allegations and overpayment recollection proceedings, and cost state governments millions."
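The "AI summarizes, human decides" split can be sketched in a few lines. Everything here is a hypothetical illustration, not Ho's proposed system: an extraction step pulls eligibility-relevant fields out of a case file, and the final determination is recorded only as the caseworker's call.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class CaseSummary:
    """Eligibility-relevant fields extracted from a case file."""
    household_size: Optional[int]
    monthly_income: Optional[int]

def extract_summary(case_text: str) -> CaseSummary:
    # Extraction step: surface the evidence, never decide on it.
    size = re.search(r"household of (\d+)", case_text)
    income = re.search(r"monthly income .*?\$([\d,]+)", case_text)
    return CaseSummary(
        household_size=int(size.group(1)) if size else None,
        monthly_income=int(income.group(1).replace(",", "")) if income else None,
    )

def caseworker_decision(summary: CaseSummary, approve: bool) -> str:
    # The system only records the human's decision alongside the
    # extracted evidence; it has no auto-approve path.
    verdict = "APPROVED" if approve else "DENIED"
    return f"{verdict} (size={summary.household_size}, income={summary.monthly_income})"

case = "Applicant reports a household of 4 with monthly income of $2,150."
summary = extract_summary(case)
print(caseworker_decision(summary, approve=True))
# APPROVED (size=4, income=2150)
```

Note what the design deliberately leaves out: there is no branch where the software approves or denies on its own, which is exactly the safeguard the EPIC findings argue for.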
Strategy 3: Enabling Policy Reform
The Approach: Using AI to analyze municipal codes and regulations, identifying contradictory, outdated, or unnecessarily burdensome rules. Ho's team developed search systems that can find every instance where legislation requires time-consuming reports, including absurd requirements like monitoring newspaper racks that no longer exist.
The policy reform approach represents perhaps the highest-stakes application of AI in government. While identifying outdated regulations seems straightforward, the broader implications of AI-driven policy analysis are profound. If AI systems recommend eliminating environmental protections because they're "burdensome," or suggest reducing worker safety requirements because they're "costly," the technology could become a tool for ideological deregulation rather than objective governance improvement.
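A regulation-search pass of the kind described above can be approximated with a simple pattern scan. This is an illustrative sketch, not the search system Ho's team built: it flags code sections whose language mandates recurring reports so staff can review whether each requirement is still needed.

```python
import re

# Hypothetical scan for report mandates in municipal-code text.
REPORT_TRIGGER = re.compile(
    r"shall (submit|file|prepare) an? (annual|quarterly|monthly) report",
    re.IGNORECASE,
)

sections = {
    "Sec. 41.2": "The Director shall submit an annual report on newspaper rack permits.",
    "Sec. 12.7": "Permits expire twelve months after issuance.",
}

# Keep only sections that impose a recurring reporting duty.
hits = {sec: text for sec, text in sections.items() if REPORT_TRIGGER.search(text)}
print(sorted(hits))  # ['Sec. 41.2']
```

Crucially, a tool like this only produces a worklist; deciding whether a flagged requirement is genuinely obsolete (or quietly load-bearing) remains a human policy judgment, which is where the deregulation risk above comes in.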
Government is on track to spend more on AI than any other industry by 2025, with an estimated 19% compound annual growth rate in AI investment between 2022 and 2027. But enthusiasm for AI adoption shouldn't obscure the real challenges of government implementation.
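As a quick sanity check on what a 19% compound annual growth rate implies over 2022 to 2027, assuming a purely illustrative $100M baseline:

```python
# A 19% CAGR means spending multiplies by 1.19 each year; over the
# five years from 2022 to 2027 that compounds to roughly 2.4x the
# starting level. The $100M baseline is hypothetical.
baseline_2022 = 100.0  # illustrative starting spend, in $M
years = 5
spend_2027 = baseline_2022 * 1.19 ** years
print(round(spend_2027, 1))  # 238.6
```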
Unlike private sector AI deployment, government AI systems must operate under much stricter accountability standards. Citizens have a right to understand how decisions affecting them are made, and government agencies must be able to explain their reasoning. This transparency requirement makes many AI applications more complex and expensive in government settings.
The federal Office of Management and Budget has already released guidelines for AI procurement, and states like California have developed their own frameworks for AI acquisition. These emerging regulatory structures suggest that government AI adoption will be heavily regulated—a necessary safeguard, but one that could slow deployment and increase costs.
Ho's research consistently emphasizes that AI's value in government lies not in replacing human workers but in amplifying their capabilities. This human-centered approach addresses legitimate concerns about job displacement while maximizing the technology's benefits.
However, successful implementation requires significant investment in training and change management. Government workers need to understand not just how to use AI tools, but when to trust them and when to override them. This requires a level of technical literacy that many government agencies currently lack.
The age demographics of government workers present additional challenges. As of 2024, the average age of U.S. federal government employees is approximately 47 years, making cultural adaptation to AI tools potentially more difficult than in private sector environments.
Ho's three strategies represent a thoughtful approach to AI implementation in government—one that prioritizes human agency while leveraging technology's strengths. But the success of these approaches depends entirely on execution, oversight, and ongoing evaluation.
The most promising aspect of Ho's work is its focus on solving real problems rather than implementing technology for its own sake. By targeting specific inefficiencies, supporting human decision-making, and enabling policy reform, AI can genuinely improve government operations.
But the risks are equally real. Biased algorithms, system failures, and over-reliance on technology could make government services worse, not better. The key is implementing these strategies with robust safeguards, continuous monitoring, and a commitment to human oversight.
Government AI adoption is inevitable, but it's not predetermined. The choices leaders make about how to implement these technologies will determine whether AI becomes a tool for better governance or another source of bureaucratic dysfunction. Ho's research provides a roadmap for getting it right, but only if policymakers are willing to embrace both the opportunities and the responsibilities that come with it.
Ready to help your organization navigate AI implementation while maintaining human-centered values? Winsome Marketing's growth experts work with public and private sector leaders to develop AI strategies that enhance rather than replace human capabilities. Because the future of work is about augmentation, not automation.