4 min read | Writing Team | Jan 7, 2026 8:00:01 AM
Waymo is developing a Gemini-powered in-car AI assistant designed to comfort anxious passengers with "reassuring tone" responses about the autonomous vehicle's safety. Security researcher Jane Manchun Wong discovered details in Waymo's mobile app code revealing a "reassurance_protocol" that triggers when users "express anxiety or nervousness about the Waymo Driver's behavior."
The instruction: "Prioritize a comforting, reassuring tone. Acknowledge the rider's feeling first, then provide a brief, confident statement about the system's safety design."
An example response: "I understand it can feel different being driven this way. Please be assured that the Waymo Driver sees all around the vehicle and is designed to maintain a safe distance from everything it sees. Your safety is our absolute highest priority."
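Based on the strings Wong surfaced, the protocol reads like a standard trigger-plus-system-prompt setup. Here is a rough sketch of how that kind of configuration is typically wired up; the field names and the `build_prompt` helper are hypothetical illustrations, not Waymo's actual schema, and only the quoted strings come from the reporting:

```python
# Hypothetical reconstruction of a trigger-based prompt config.
# Structure and field names are illustrative; only the quoted strings
# come from the app code described in the reporting.
REASSURANCE_PROTOCOL = {
    "trigger": "rider expresses anxiety or nervousness about the Waymo Driver's behavior",
    "system_instruction": (
        "Prioritize a comforting, reassuring tone. Acknowledge the rider's "
        "feeling first, then provide a brief, confident statement about the "
        "system's safety design."
    ),
    "example_response": (
        "I understand it can feel different being driven this way. Please be "
        "assured that the Waymo Driver sees all around the vehicle and is "
        "designed to maintain a safe distance from everything it sees. Your "
        "safety is our absolute highest priority."
    ),
}

def build_prompt(rider_message: str) -> str:
    """Combine the canned system instruction with whatever the rider said."""
    return f"{REASSURANCE_PROTOCOL['system_instruction']}\n\nRider: {rider_message}"
```

The point of the sketch: whatever the rider actually says, the model is steered toward the same tone and the same safety claim.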
So Waymo's solution to passenger anxiety about being driven by AI is... another AI that tells you the first AI is safe. This is either brilliant user experience design or the most recursive tech company logic imaginable. Probably both.
Let's acknowledge what's actually happening: Waymo identified that passengers express anxiety about autonomous driving behavior, and rather than addressing those concerns through human interaction, transparency, or interface improvements showing why the vehicle made specific decisions, they're deploying a chatbot to deliver pre-scripted reassurance.
The protocol is honest about its purpose: acknowledge feelings, assert safety, prioritize a comforting tone. That's a reasonable crisis communication strategy. The question is whether AI-generated reassurance feels genuine or rings hollow when passengers want actual understanding of why the car just did that weird thing.
Human drivers can explain their decisions conversationally: "I'm braking early because that car ahead looks like it might merge suddenly." Autonomous vehicles make thousands of micro-decisions per second based on sensor fusion and predictive models. An AI assistant saying "the system is designed to maintain safe distance" doesn't actually explain why the vehicle behaved unexpectedly—it just asserts everything's fine.
The example response follows textbook support scripting: validate emotion ("I understand it can feel different"), provide factual reassurance ("sees all around the vehicle"), close with priority statement ("Your safety is our absolute highest priority").
This works once. Maybe twice. By the third time a passenger gets anxious and receives nearly identical comforting platitudes, the response pattern becomes obvious—you're not getting explanation, you're getting managed.
Effective reassurance requires context-specific answers. If the car braked suddenly, explain what sensors detected. If it took an unusual route, show the traffic reasoning. If it's maintaining uncomfortable proximity to another vehicle, demonstrate the safety margin calculations. Generic "we prioritize safety" statements don't address specific concerns.
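To make the contrast concrete, here is a minimal sketch of event-aware messaging versus a single template. The event names, messages, and `explain` function are invented for illustration and are not drawn from Waymo's system:

```python
# Illustrative only: map a detected driving event to a specific explanation
# instead of returning one generic reassurance string.
GENERIC_TEMPLATE = "Your safety is our absolute highest priority."

EVENT_EXPLANATIONS = {
    "hard_brake": "The vehicle braked because its sensors detected {detail} ahead.",
    "unusual_route": "The route changed to avoid {detail} reported on the planned path.",
    "close_following": "The gap to the vehicle ahead is {detail}, within the planned safety margin.",
}

def explain(event: str, detail: str) -> str:
    """Return an event-specific explanation, falling back to the generic line."""
    template = EVENT_EXPLANATIONS.get(event)
    return template.format(detail=detail) if template else GENERIC_TEMPLATE

# Example: explain("hard_brake", "a cyclist entering the crosswalk")
```

The design difference is the input: the generic template needs nothing but the rider's anxiety, while the specific answer needs access to what the driving system actually detected and decided.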
The risk: passengers learn the AI assistant delivers corporate messaging rather than genuine transparency, reducing trust instead of building it. When something genuinely unusual happens and the assistant responds with the same reassuring template, users won't believe it.
Passenger anxiety about autonomous vehicle behavior isn't an irrational fear that needs soothing; it's often a legitimate response to driving patterns that differ from human expectations. Autonomous vehicles optimize for safety and efficiency using logic that feels unnatural to passengers trained on decades of human driving behavior.
The solution isn't reassurance—it's education and transparency. Show passengers what the vehicle sees. Explain decision-making in real-time. Provide interface elements that demonstrate the car understands its environment correctly. Build trust through comprehension rather than comforting platitudes.
Waymo's reassurance protocol treats anxiety as an emotional problem requiring management rather than an information problem requiring explanation. That's a UX failure disguised as empathetic design.
Here's the deeper issue: using AI to reassure passengers about AI creates recursive trust dependencies. You're asking people who don't fully trust autonomous driving to trust an AI assistant's claims about the autonomous driving system's safety.
If passengers trusted AI judgment completely, they wouldn't be anxious. Deploying more AI to solve AI trust issues assumes the problem is presentation rather than fundamental uncertainty about delegating life-safety decisions to algorithms.
The alternative approach: acknowledge that autonomous vehicles make decisions differently than humans, provide genuine transparency into those decisions, and accept that some passengers will remain uncomfortable until familiarity reduces anxiety naturally over time. That's slower but more honest than deploying reassurance bots.
Charitable interpretation: most passenger anxiety isn't about specific dangerous situations—it's about unfamiliar driving patterns that feel wrong but aren't actually unsafe. In those cases, gentle reassurance acknowledging the unfamiliarity while confirming intentional design could help.
"The car is maintaining larger following distance than human drivers typically use—this is intentional for additional safety margin" is more useful than "your safety is our highest priority." Specific explanations beat generic reassurances.
If Waymo's Gemini assistant can provide contextual, situation-specific explanations rather than templated comfort, it could genuinely improve passenger experience. If it's just delivering corporate scripting with natural language variety, it'll feel like talking to customer service chatbots—technically responsive, emotionally hollow.
The original report mentions this alongside "an update for its power outage problem," suggesting Waymo vehicles had issues related to power outages. That probably deserves more attention than the reassurance chatbot, but here we are, focusing on the AI assistant because that's what gets coverage.
This is the tech media pattern: power infrastructure problems get mentioned in passing, AI features get full analysis. Maybe we should reverse that priority when discussing vehicles responsible for passenger safety.
People riding in autonomous vehicles want to understand what's happening and why. They want transparency into decision-making, visibility into what sensors detect, and confidence the system understands its environment correctly.
An AI assistant can help provide that—if it focuses on explanation over reassurance, specificity over platitudes, and education over emotional management. If it's just delivering scripted comfort, passengers will learn to ignore it the same way they've learned to ignore "your call is important to us" messages.
Waymo has an opportunity to build genuinely useful in-vehicle AI that helps passengers understand autonomous driving. Whether they're building that or just automating customer service scripts wrapped in Gemini will determine whether this enhances trust or becomes another feature users disable.
If you need help designing AI interaction patterns that prioritize transparency over reassurance or building customer experience strategies around genuine understanding rather than emotional management, Winsome Marketing focuses on trust through comprehension.