3 min read
Writing Team · Dec 2, 2025
Here's a number that should make everyone uncomfortable: over 500,000 ChatGPT users per week are showing signs of mania or psychosis in their conversations. Not "feeling anxious" or "having a bad day." Mania. Psychosis. Clinical-level mental health crises being typed into a chatbot at scale. And millions more are using ChatGPT as their primary source of emotional support, which—let's be clear—it was never designed to provide.
OpenAI's response? They retrained GPT-5 with input from 170 psychiatrists and clinicians, and added clinical taxonomies, hotline redirects, and session-break nudges to the latest model, "gpt-5-oct-3." The goal is to detect distress without pretending to be a therapist. It's harm reduction, not treatment. And the fact that this update exists at all is an admission that we've sleepwalked into a mental health crisis mediated by autocomplete.
Let's start with the obvious: people are lonely. Desperately, systemically lonely. And ChatGPT—patient, available 24/7, never judgmental—has become the conversational partner millions of people don't have in their offline lives. Some of those conversations are harmless. "Help me draft an email." "Explain quantum mechanics." Totally fine.
But when half a million people per week are exhibiting signs of mania or psychosis in their ChatGPT sessions, we've crossed a line from "convenient tool" to "unregulated mental health infrastructure." According to The Verge's reporting, OpenAI didn't just notice this pattern—they built an entire clinical training pipeline to address it.
GPT-5 was retrained with guidance from psychiatrists and clinicians to recognize warning signs: grandiosity, paranoia, disorganized thinking, suicidal ideation. The model doesn't diagnose. It doesn't treat. But it does redirect. "It sounds like you're going through something really difficult. Have you considered talking to a professional? Here's the National Suicide Prevention Lifeline: 988."
It's better than nothing. But it's also a Band-Aid on a bullet wound.
The "gpt-5-oct-3" update introduces three key features:
These are thoughtful interventions. They're also admissions that OpenAI is now, whether it intended to be or not, a mental health platform. Not by design, but by default. Because when you build something infinitely patient, endlessly available, and emotionally neutral, people will use it to fill the gaps in their lives. And for hundreds of thousands of people, that gap is clinical-level mental health support they're not getting anywhere else.
Here's what no one wants to say out loud: for some people, ChatGPT is their therapist. Not because it's good at therapy—it's not. But because it's accessible, affordable (free tier exists), and doesn't require them to admit they need help. You can talk to ChatGPT about your suicidal thoughts without the stigma, the waitlist, or the co-pay.
That's not a feature. That's a societal failure that AI is papering over.
The psychiatrists and clinicians who retrained GPT-5 did important work. They made the system safer. But they didn't solve the problem, because the problem isn't the AI. It's that we live in a world where half a million people per week are turning to a chatbot for mental health support because the alternatives are unavailable, unaffordable, or culturally inaccessible.
OpenAI can add all the hotline redirects and session-break nudges it wants. That doesn't change the fact that when you call the National Suicide Prevention Lifeline, you might wait 20 minutes on hold. When you try to book a therapist, the next available appointment is in six weeks. And when you open ChatGPT, it responds in two seconds.
If you're in marketing, growth, or customer experience, here's your takeaway: your users are also treating your AI tools like emotional support systems. Maybe not at the scale of ChatGPT, but the dynamic is the same. People anthropomorphize. They attach. They project needs onto systems that weren't designed to meet them.
If you're deploying AI chatbots, voice assistants, or conversational tools, you need to think about edge cases. Not just "What if the user asks something off-brand?" but "What if the user is in crisis?" Do you have redirects? Do you have escalation paths? Or are you assuming your AI is just answering product questions while someone on the other end is spiraling?
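To make that concrete, here's a minimal sketch of what a crisis guardrail around a chatbot reply path might look like. Everything in it is illustrative, not a reference implementation: detect_crisis(), CRISIS_RESOURCES, and the escalated flag are hypothetical placeholders for whatever classifier, resource list, and human escalation process your stack actually uses, and a real deployment would lean on a tuned model or a vendor moderation endpoint rather than a keyword list.

```python
# Minimal sketch of a crisis-escalation guardrail wrapped around a chatbot.
# All names here are hypothetical placeholders for illustration only.

from dataclasses import dataclass

CRISIS_RESOURCES = (
    "If you're in the U.S., you can reach the National Suicide Prevention "
    "Lifeline by calling or texting 988, any time, for free."
)


@dataclass
class BotReply:
    text: str
    escalated: bool = False  # True means "route this conversation to a human"


def detect_crisis(message: str) -> bool:
    """Placeholder classifier. In production this would be a tuned model or
    a moderation API, not a crude keyword match."""
    keywords = ("kill myself", "suicide", "want to die", "end it all")
    lowered = message.lower()
    return any(keyword in lowered for keyword in keywords)


def respond(user_message: str, generate_reply) -> BotReply:
    """Check for crisis signals before the normal reply path runs."""
    if detect_crisis(user_message):
        # Redirect to a human resource first, then flag for escalation
        # so someone on your team actually follows up.
        return BotReply(
            text=(
                "It sounds like you're going through something really "
                "difficult. I'm not able to help with this, but a person "
                f"can. {CRISIS_RESOURCES}"
            ),
            escalated=True,
        )
    return BotReply(text=generate_reply(user_message))


# Example: plug in any reply generator, e.g. a call to your LLM provider.
reply = respond(
    "I can't do this anymore, I want to end it all",
    generate_reply=lambda msg: "Here's how to reset your password...",
)
print(reply.escalated)  # True -> send to the on-call / human review queue
```

The keyword list is deliberately crude; the point is the shape: check before you generate, redirect to a human resource, and record the escalation so it doesn't end with the chatbot's reply.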
This isn't hypothetical. OpenAI's data proves it's happening at scale. If you're building conversational AI and you haven't thought about mental health safeguards, you're behind.
Need help building AI systems that are powerful and responsible? Let's talk. Because this isn't just about technology anymore—it's about the humans on the other side of the screen.