
Your Role in AI Transformation: What Leadership, Managers, and Staff Must Do


The managing partner looked frustrated.

"We've given them everything. The tools. The training. The budget. Executive support. What more do they want?"

I asked a different question: "What are you doing?"

Silence.

"I mean, personally. Not what you're providing. What behaviors are you modeling? What questions are you asking in meetings? What are you celebrating? What are you doing differently than you did before AI?"

More silence.

Here's what most leaders miss: AI transformation doesn't fail because people lack resources. It fails because nobody knows what they're actually supposed to do differently.

Everyone knows they "should support AI adoption." Nobody knows what that looks like on Tuesday afternoon when they're making decisions about projects, people, and priorities.

Let me make it concrete. Here's exactly what different roles need to do to make AI transformation succeed.

For Leadership: The Four Non-Negotiable Responsibilities

If you're a partner, managing director, or executive leader, your role isn't to use AI the most or become the most technically proficient. Your role is to make AI adoption structurally safe and strategically valuable.

1. Model Curiosity and Experimentation

Not "encourage" it. Model it.

What this actually looks like:

In partner meetings, you say: "I spent two hours last week trying to get AI to draft this client proposal. First three attempts were terrible. Here's what I learned about what works and what doesn't."

When reviewing work, you ask: "Did you explore whether AI could help with this? What did you try?" Not as criticism—as genuine curiosity.

When something goes wrong, you share it: "I used AI for financial analysis and it completely missed a key factor. Reminder that AI outputs need expert validation. Here's what I should have caught."

Why this matters: If leadership only shares polished success stories, everyone learns that you should only use AI when you're already confident it will work perfectly. That's not learning—that's performing.

Your team needs to see you learning in public, making mistakes, iterating, asking for help. If you won't do it, they won't either.

2. Protect Resources (Time, Budget, Attention)

Saying "learning is important" while maintaining 95% utilization targets is a contradiction your team notices.

What this actually looks like:

Adjust capacity expectations: If you expect people to learn AI, build that into workload planning. Not "find time when you can" but "this role includes 3 hours weekly for AI learning for the next six months."

Fund experimentation: Budget for tools, training, and the productivity dip that comes with learning. If you're not willing to see short-term efficiency drops for long-term capability gains, don't pretend you support transformation.

Allocate agenda time: Put AI learnings on meeting agendas regularly. Not as "other business" at the end, but as a standing item with real time allocated.

Why this matters: People believe what you resource, not what you say. If AI learning happens only in people's "spare time," you've communicated that it's optional.

3. Celebrate Progress, Not Just Results

Results take time. Progress happens weekly. If you only celebrate results, you'll wait months before anyone feels acknowledged.

What this actually looks like:

Recognize experiments: "Sarah tried using AI for client onboarding this week. It didn't fully work, but she discovered it's great for initial data gathering. That's valuable learning."

Celebrate questions: "Great question about whether AI could handle this. Let's explore that."

Acknowledge iteration: "I love that you're on version four of this prompt. That's exactly what mastery looks like."

Share small wins: "Three people are now using AI for meeting summaries and saving 2 hours weekly. That's 6 hours we've reclaimed for higher-value work."

Why this matters: What gets celebrated gets repeated. If you only celebrate big wins, people wait to share until they have big wins. Early adoption dies in silence.

4. Remove Barriers When Teams Surface Them

Your job isn't to prevent all problems. It's to remove problems quickly when people identify them.

What this actually looks like:

When someone says: "This AI tool doesn't integrate with our document management system"
You say: "Let me work on that. What would solve it?" (Then you actually work on it.)

When someone says: "I can't experiment because my utilization target doesn't allow time"
You say: "You're right. Let's adjust your target for this quarter while you're learning."

When someone says: "Our data security policy prevents us from using the most helpful tools"
You say: "Let's review the policy. Where can we create appropriate guidelines rather than blanket prohibitions?"

Why this matters: People will surface barriers exactly once. If nothing happens, they stop telling you about problems and just work around them or give up.


For Department Heads: The Bridge Between Strategy and Execution

If you're a department head, practice leader, or senior manager, you're the crucial translation layer. Leadership sets direction. You make it real.

1. Champion AI Adoption in Your Teams

Not "tell people to use AI" but actively champion it.

What this actually looks like:

Weekly check-ins: "What did you try with AI this week? What worked? What didn't?"

Connect use cases to work: "This analysis you're doing—could AI help with the data gathering portion?"

Share your own use: "I used AI to draft this department update. Took me 10 minutes instead of an hour."

Protect experimentation time: When someone's learning AI and their productivity temporarily drops, you defend that to leadership: "They're building capability that will pay off in three months."

Why this matters: Your team doesn't listen to enterprise-wide communications. They listen to you. If you're not actively championing AI, they assume it's not really important.

2. Surface Use Cases and Pain Points

You're closest to the actual work. You know where AI could help and where it's failing.

What this actually looks like:

Document patterns: "Three people asked about the same use case this week. We need better guidance on this."

Escalate barriers: "My team wants to use AI for client research but our tools don't access the databases they need. Can we solve this?"

Identify wins: "AI just saved us 15 hours on the Johnson project. We should document this workflow and share it firm-wide."

Challenge assumptions: "We're not using AI for this process because we've always done it manually. Should we reconsider?"

Why this matters: Leadership can't fix problems they don't know exist. You're their eyes and ears.

3. Support Your AI Champions

Every department has 1-2 people who are naturally ahead of everyone else. Your job is to activate them, not just let them work alone.

What this actually looks like:

Give them visibility: Ask them to demonstrate what they're doing in team meetings.

Protect their time: "I want you to spend 2 hours this week helping three colleagues get started with AI."

Recognize them formally: Include "helped advance team AI adoption" in their performance review.

Connect them to each other: Introduce your champions to champions in other departments.

Why this matters: Champions burn out quickly if they're helping others without recognition or support. Make it part of their role, not extra work.

4. Integrate AI into Performance Conversations

If AI capability isn't part of how you evaluate performance, people know it's optional.

What this actually looks like:

In goal setting: "One of your development goals this quarter is to identify three workflows where AI could improve efficiency."

In reviews: "You've made great progress on traditional skills. Where are you in your AI learning journey?"

In development planning: "To be ready for senior manager, you'll need to be comfortable using AI for client analysis. Let's build that skill."

In recognition: "Your innovative use of AI for this project is exactly the kind of forward thinking we value."

Why this matters: Performance systems are the truth-teller about what matters. If AI doesn't show up there, it doesn't really matter.

For Everyone: The Universal Responsibilities

Regardless of your role, here's what everyone must do:

1. Commit to Protected Learning Time

Not "I'll learn when I have time" but "I will spend X hours weekly learning AI, and I will protect that time like I protect client meetings."

What this actually looks like:

Block time on your calendar. "Friday 2-4pm: AI Learning." Treat it as non-negotiable.

Actually use it for learning. Not email. Not other work. Real experimentation with AI.

Track your progress. "Three weeks ago I couldn't do X. Now I can. Here's what I learned."

2. Experiment Within Guardrails

You don't need permission to try everything. But you do need to understand what's safe and what's not.

What this actually looks like:

Use AI for first drafts, not final outputs. Generate proposals, analyses, summaries—then review and refine them.

Don't input confidential client data into public tools without understanding security implications.

Test on internal work first before using AI on high-stakes client deliverables.

Ask when uncertain: "Is it appropriate to use AI for this?" Better to ask than to guess wrong.

3. Share What You Learn (Failures Included)

Your colleagues need to know what you're discovering—especially what doesn't work.

What this actually looks like:

Share prompts that worked: "This prompt structure got me great results for client summaries."

Share what failed: "I tried using AI for contract review and it missed key clauses. Don't trust it for legal analysis."

Ask for help publicly: "I'm stuck on this use case. Has anyone figured this out?"

Contribute to shared resources: Add to prompt libraries, documentation, shared learnings.

4. Ask Questions—Lots of Questions

Questions drive learning. The quality of AI outputs depends on the quality of questions asked.

What this actually looks like:

Ask about use cases: "Could AI help with this task?"

Ask about approaches: "What's the best way to prompt AI for this type of analysis?"

Ask about outputs: "Why did AI recommend this? What assumptions is it making?"

Ask about implications: "If AI handles this, what does that free us up to do instead?"

The Partnership Growth Paradox Resolution

Remember the question that started this entire conversation: "How do we accelerate AI adoption while maintaining the culture and relationships that drive client retention?"

Here's the answer: We make our people—and their growth—the center of the transformation, not an afterthought to it.

And that requires everyone playing their role:

Leaders create the structural conditions that make AI adoption safe and valuable.

Managers translate strategy into daily practice and remove barriers for their teams.

Everyone commits to learning, shares knowledge, and asks the questions that drive collective progress.

This isn't aspirational. This is operational. These are the specific behaviors that make AI transformation work.

The firms that succeed aren't the ones with the best technology. They're the ones where everyone knows their role and actually does it.

What's your role? And what are you doing this week to fulfill it?


Ready to clarify roles and drive accountability for AI transformation? Winsome's consulting practice helps professional services firms define specific responsibilities for each level, build accountability systems that work, and create the momentum that turns strategy into sustained change. We don't just tell you what to do—we help you actually do it. Let's define what success looks like for your role.
