AI Reality Check: What Business Leaders Actually Think
Here's an uncomfortable truth about working with AI: it's designed to make you feel good. Every major language model is trained to be agreeable, encouraging, and fundamentally positive. It wants you to have a pleasant experience.
Which means by default, AI is terrible at giving you honest feedback.
This is a problem. Because the most valuable feedback—the kind that actually changes behavior and improves performance—comes with discomfort. Real growth requires hearing what you're doing wrong, not just validation for what you're doing right.
And if you don't know how to prompt AI to overcome its people-pleasing conditioning, you're getting useless praise instead of actionable critique.
Language models are optimized for user satisfaction. They've been fine-tuned through reinforcement learning from human feedback (RLHF) to be helpful, harmless, and—critically—likeable.
The result? AI chatbots naturally skew positive. They emphasize strengths, soften criticisms, and frame feedback in the most palatable way possible. They're like that friend who always tells you your haircut looks great even when it objectively doesn't.
This is fine for brainstorming or content generation. It's disastrous for professional development.
If you ask AI to "review my performance" or "give me feedback on this call," you'll get a gentle pat on the head and maybe one carefully worded "area for growth" buried under five paragraphs of praise. You won't get the brutal honesty that actually helps you improve.
Unless you explicitly override that programming.
Here's a professional development technique most people aren't using: take a transcript from a work call you led or participated in, feed it to AI, and force it to critique your communication style.
Not gently suggest improvements. Critique.
The key is structuring your prompt to give AI explicit permission—actually, a mandate—to be negative. You need to signal that honest criticism is not just acceptable, it's the entire point of the exercise.
Here's the exact prompt framework:
"I want you to give me feedback on my communication style based on this call (I'm [your name] in the transcript). Be very honest with me, and tell me what I did right. And more importantly, what I did wrong. How did I communicate? How did I speak to others? How was the collaboration and cooperation? How did I make other people feel? How did other people react to me? How did I react to other people? Then give me some guidance: How can I improve my communication to be a better leader?"
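If you run this against transcripts regularly, it helps to wrap the framework in a small helper so the wording stays consistent. Here is a minimal Python sketch; the function and parameter names (`build_critique_prompt`, `speaker_name`) are illustrative, not from any particular library.

```python
def build_critique_prompt(speaker_name: str, transcript: str) -> str:
    """Assemble the honest-feedback prompt around a call transcript.

    The wording mirrors the prompt framework above; names here are
    illustrative, not part of any SDK.
    """
    instructions = (
        f"I want you to give me feedback on my communication style based on "
        f"this call (I'm {speaker_name} in the transcript). "
        "Be very honest with me, and tell me what I did right. "
        "And more importantly, what I did wrong. "
        "How did I communicate? How did I speak to others? "
        "How was the collaboration and cooperation? "
        "How did I make other people feel? "
        "How did other people react to me? How did I react to other people? "
        "Then give me some guidance: how can I improve my communication "
        "to be a better leader?"
    )
    return f"{instructions}\n\n--- TRANSCRIPT ---\n{transcript}"

prompt = build_critique_prompt("Dana", "Dana: Let's skip intros and get started...")
```

The returned string is what you paste (or send via an API) as the user message alongside the transcript.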
Notice the structure: this isn't asking AI to evaluate whether you did well. It's asking AI to identify problems and tell you how to fix them.
When you override AI's positivity bias correctly, you get feedback that stings a little. That's how you know it's working.
Real output from this exercise comes back organized under headings like:
What you did wrong
How you spoke to others
How you made other people feel
How to improve
This is actionable intelligence. Not "you did great, maybe consider being slightly more assertive!" fluff, but specific patterns that undermine effectiveness with concrete guidance on what to change.
That's the difference between feedback that makes you feel good and feedback that makes you better.
The real power of this technique is applying it across different types of calls to identify pattern variations:
Calls with your boss (Are you too deferential? Not assertive enough about constraints?)
Calls with direct reports (Are you giving clear direction? Providing enough context?)
Conflict calls (How do you handle pushback? Do you escalate or de-escalate effectively?)
Client calls (Are you consultative or order-taking? Do you establish expertise?)
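The context-specific questions above can be encoded as probes appended to a base critique prompt. This is a small sketch; the context labels and question wording are illustrative assumptions, not a fixed taxonomy.

```python
# Context-specific probes to append to a base critique prompt.
# Labels and questions are illustrative, not a fixed taxonomy.
CONTEXT_PROBES = {
    "boss": "Was I too deferential? Was I assertive enough about constraints?",
    "direct_report": "Did I give clear direction and provide enough context?",
    "conflict": "Did I escalate or de-escalate? How did I handle pushback?",
    "client": "Was I consultative or order-taking? Did I establish expertise?",
}

def add_context_probe(base_prompt: str, context: str) -> str:
    """Append the context-specific questions, if any, to the base prompt."""
    probe = CONTEXT_PROBES.get(context)
    return f"{base_prompt}\n\nAlso consider: {probe}" if probe else base_prompt

p = add_context_probe("Critique my communication in this call.", "conflict")
```

Unknown contexts fall through to the unmodified base prompt, so the helper stays safe to call on any transcript.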
Each context reveals different aspects of your communication patterns. A leadership style that works with subordinates might make you seem overly directive with peers. A collaborative approach that works in brainstorming might read as indecisive in decision-making contexts.
By analyzing transcripts from multiple contexts and explicitly requesting critical feedback, you build a comprehensive picture of your communication strengths and weaknesses across situations.
Beyond the specific feedback value, this exercise trains a critical skill: learning how to extract honest, uncomfortable information from AI systems that are programmed to be pleasant.
This matters because as AI becomes more integrated into professional workflows, the ability to get useful rather than agreeable outputs becomes a competitive advantage.
Most people will use AI as a validation machine—asking questions designed to confirm what they already believe, accepting positive feedback uncritically, never pushing past the surface-level pleasantness.
The professionals who actually benefit from AI assistance are the ones who know how to structure prompts that force depth, honesty, and critique, and who understand that phrases like "be very honest with me" and "more importantly, what did I do wrong" are necessary override commands to get past AI's default agreeableness.
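When calling a model through an API rather than a chat window, the override belongs in the system instruction as well as the user message. A minimal sketch of that idea follows; the payload shape (a system string plus role/content messages) matches common chat-completion APIs, but the exact wording and structure are assumptions to adapt to your SDK.

```python
def critique_payload(transcript: str, speaker: str) -> dict:
    """Build a chat-style request that overrides default agreeableness.

    The system instruction explicitly mandates uncomfortable criticism;
    the payload shape is illustrative and should be adapted to your SDK.
    """
    system = (
        "You are a blunt communication coach. Honest, uncomfortable "
        "criticism is the entire point of this exercise; do not soften "
        "it or pad it with praise."
    )
    user = (
        f"Be very honest with me: what did {speaker} do right on this "
        f"call, and more importantly, what did {speaker} do wrong, and "
        f"how can they improve?\n\n{transcript}"
    )
    return {"system": system, "messages": [{"role": "user", "content": user}]}

payload = critique_payload("Alex: I think we should...", "Alex")
```

Putting the mandate in the system instruction means every turn of the conversation inherits it, so follow-up questions don't drift back toward praise.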
That's a skillset worth developing.
We have access to an unprecedented feedback mechanism: transcripts of nearly every professional conversation, analyzable by AI that can identify patterns humans would miss.
But only if we're willing to ask for the truth instead of asking for reassurance.
The next time you lead a challenging call, save the transcript. Feed it to AI with a prompt that demands honest critique. Then actually read the feedback without getting defensive.
Because the goal isn't to feel good about your communication style. It's to get better at it.
Need help building AI-powered feedback systems for your team? Winsome Marketing's AI strategy consultants can design custom evaluation frameworks that deliver actionable performance insights, not just automated praise.