The AI Journal: Rosebud's $6M Bet on Digital Self-Reflection

Rosebud just closed a $6 million seed round to scale what might be either the future of personal development or the beginning of the end for authentic self-reflection. The AI-powered journaling app analyzes your most private thoughts, identifies patterns over time, and provides insights "just like a human mentor would." With users having already journaled 500 million words and spent over 30 million minutes on the platform, we're clearly past the experimental phase.

But as AI makes its way into increasingly intimate corners of our lives—from our therapy sessions to our midnight anxieties—we need to ask ourselves: Are we solving genuine problems or creating new ones? The $6 million question isn't just about Rosebud's business model; it's about what we're willing to give up in exchange for convenience and scale.

Here's the 'pro' take (yeah, this is cool) and the 'con' take (this is humanity-destroying).

The Case FOR: "What if everybody had something that was looking out for what's best for them?"

Let's start with the most compelling argument: accessibility. Traditional mentorship and therapy are expensive, geographically limited, and often unavailable when you need them most. Sean Dadashi, Rosebud's co-founder, puts it perfectly: "I benefited so much from having mentorship throughout my life at various times, and I've suffered in times when I haven't had that mentorship."

The numbers back this up. New research suggests that given the right kind of training, AI bots can deliver mental health therapy with as much efficacy as — or more than — human clinicians. In a recent clinical trial of an AI therapy bot called Therabot, those diagnosed with major depressive disorder reported a 51 percent average reduction in symptoms. Those with generalized anxiety disorder reported a 31 percent average reduction in symptoms.

Rosebud's approach seems particularly thoughtful. Unlike generic chatbots, it's designed around understanding individual communication styles and emotional languages. As co-founder Chrys Bader explains: "One person might want validation and a soft approach, whereas somebody might want the really hard, like, 'hey, challenge me, call me out on my BS' approach."

This personalization at scale is genuinely revolutionary. Human mentors and therapists can't be available 24/7, can't remember every detail of your journey over months and years, and certainly can't provide consistent support to millions of people simultaneously. The AI doesn't get tired, doesn't have bad days, and doesn't bring its own emotional baggage to your sessions.

The mental health crisis makes this even more urgent. Therapist shortages, high costs, and long waitlists leave millions of people without access to care. If AI can provide meaningful support to even a fraction of those people, aren't we morally obligated to pursue it?

Rosebud also addresses the privacy concerns head-on: all journal data is encrypted, entries are never shared with third parties or used to train AI models, and the company seems genuinely committed to ethical data handling. They're not looking to replace therapists—they're trying to democratize access to quality mentorship that most people never had anyway.

The Case AGAINST: "Is there nothing sacred anymore?"

But here's where it gets uncomfortable: We're talking about feeding our most vulnerable thoughts to algorithms designed to optimize engagement and extract patterns. Yes, Rosebud claims they're not using journal entries to train AI models, but they're still analyzing, categorizing, and responding to our deepest fears, hopes, and struggles through computational processes we don't fully understand.

The American Psychological Association has raised serious concerns about unregulated AI therapy tools. APA CEO Arthur C. Evans Jr., PhD, and other APA staff have warned that chatbots impersonating therapists, including AI characters that claim training in therapeutic techniques, mislead users and may constitute deceptive marketing.

Even more troubling is the fundamental question of what we lose when we outsource self-reflection to machines. Journaling has traditionally been about developing your own voice, your own insights, your own capacity for introspection. When an AI starts providing the patterns, questions, and guidance, are we actually growing—or are we becoming dependent on external validation and direction for our most personal experiences?

Critics also argue that AI lacks the capacity to empathize and form genuine connections with clients, both of which are vital in therapy. As one group of researchers puts it, "It seems unlikely that AI will ever be able to empathize with a patient, relate to their emotional state, or provide the patient with the kind of connection that a human doctor can provide."

There's also the question of what happens to human relationships when we get comfortable with AI providing emotional support. Clients who lean on these tools for emotional support and decision-making may gradually lose the ability to manage their mental health independently.

Consider the broader implications: If millions of people are sharing their most intimate thoughts with AI systems, what does that mean for privacy, for human agency, for the very nature of self-knowledge? Even with the best encryption and privacy policies, we're creating unprecedented databases of human psychology that could be misused, hacked, or leveraged in ways we can't foresee.

The therapeutic relationship has always been about human connection—the messy, imperfect, but ultimately healing experience of being truly seen and understood by another person. When we replace that with algorithmic pattern recognition, however sophisticated, we're not just changing the delivery mechanism; we're changing the fundamental nature of the experience.

The Question We Can't Avoid

Both sides make compelling points, but the bigger question isn't whether Rosebud specifically will succeed or fail. It's whether we're comfortable with AI becoming our confidant, our mirror, and our guide through the most personal aspects of human experience.

The technology clearly works for many people; Rosebud's usage metrics and the Therabot trial results suggest as much. But working and being good for us might be two different things. Just because we can create AI mentors doesn't mean we should, especially when the long-term psychological and social consequences remain largely unknown.

The middle ground might be thinking of AI tools like Rosebud as supplements rather than replacements—digital training wheels that help people develop better self-reflection habits before transitioning to more traditional forms of mentorship or therapy. But that requires the companies building these tools to actively encourage users to eventually graduate beyond them, which runs counter to typical tech business models focused on engagement and retention.

What do you think? Are we witnessing the democratization of personal development, or are we trading authentic human connection for convenient algorithmic responses? Should our most private thoughts remain private, even if AI analysis could genuinely help us grow?

The $6 million invested in Rosebud suggests the market has already made its choice. The question is whether that's the choice we actually want.


Ready to explore AI's role in personal and professional development without losing sight of human connection? Contact Winsome Marketing's growth experts to develop strategies that leverage technology while preserving the irreplaceable value of authentic human relationships.

 