The IT Gatekeeper Problem: Why Traditional IT Leadership Is Blocking AI Adoption
6 min read
Joy Youell
Dec 29, 2025 7:59:59 AM
One of the biggest barriers I've encountered in AI implementation projects is this: the people leading IT at mid-sized companies—at least across the roughly twenty companies I've worked with this year—tend to be older, later in their careers. And many of them are not good at AI.
This isn't a criticism of their skills or their experience. They may be excellent at what IT has traditionally been—managing infrastructure, securing networks, maintaining systems, implementing software according to vendor specifications. But AI isn't technology. AI is deployed through technology, but it is not fundamentally technology at all.
And that creates a massive problem. These IT leaders have a chokehold on data. They write the rules around everything, they've always been the through point, the ones granting permission. And they're not willing to give up control.
IT is concrete. It's about servers and networks and applications and databases. You implement things according to specifications. You secure them. You maintain them. You troubleshoot when they break. There are right answers and wrong answers. There are vendor certifications and best practices and proven methodologies.
AI doesn't work like that. It's abstract. It's probabilistic. It learns and adapts. You can't just "install" it according to a manual. You have to train it, refine it, iterate on it. The outputs aren't deterministic—you don't get the same result every time. Success looks different for every implementation because it depends on your specific data, your specific use cases, your specific workflows.
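To make that contrast concrete, here's a minimal sketch in plain Python, with made-up candidate words and scores purely for illustration. Instead of executing a fixed rule, an AI model samples the next output from a probability distribution—so identical inputs can produce different results on different runs.

```python
import math
import random

def softmax(scores):
    """Convert raw scores into a probability distribution that sums to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_word(candidates, scores, rng):
    """Pick the next word by weighted sampling, not by a fixed lookup."""
    probs = softmax(scores)
    return rng.choices(candidates, weights=probs, k=1)[0]

# Hypothetical candidate words and scores, for illustration only.
candidates = ["approve", "deny", "escalate"]
scores = [2.0, 1.5, 0.5]

# Two runs with different random states can pick different words,
# even though the inputs are identical.
print(sample_next_word(candidates, scores, random.Random(1)))
print(sample_next_word(candidates, scores, random.Random(7)))
```

A traditional IT system would map the same inputs to the same output every time; here, the output is only ever a draw from a distribution. That's the shift deterministic-minded IT leaders struggle with.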
For IT leaders who built their entire careers on concrete, deterministic systems, this is too theoretical, too abstract. It's not the way IT works. They can't wrap their heads around it.
So when you come to them with an AI initiative, they respond the only way they know how—by asking for specifications, requirements, security audits, vendor certifications, implementation plans. All the things that make sense for traditional IT projects but completely miss the point with AI.
They ask: "What do you need that for?" "Why do you need access to that data?" "What's the business justification?" "What vendor are you using?" "What's the security model?" "How do we maintain this?" And every question is rooted in the traditional IT framework that just doesn't apply.
Here's what makes this particularly difficult: IT leaders have spent years building their authority around being the gatekeepers of technology and data. They got to that position by being careful, by saying no to risky projects, by maintaining security and stability. Their entire professional identity is built on control.
AI implementation requires loosening that control. It requires experimentation. It requires giving teams access to data they haven't traditionally had. It requires trying things that might not work. It requires accepting some level of risk in exchange for potential upside.
And IT leaders who have spent their entire careers being rewarded for minimizing risk are not going to suddenly embrace that approach just because you tell them AI is important.
They're going to do what they've always done—slow things down, ask for more documentation, require more approvals, impose more restrictions. And they'll justify it as being responsible stewards of the company's data and systems. Which, from their perspective, they are.
The problem is that this approach kills AI initiatives before they start. Because AI implementation requires iteration and learning. If every experiment requires three months of approvals and documentation, you can't iterate. You can't learn. You can't adapt. You just waste time and money building something that's obsolete before it launches.
The generational gap compounds the problem. AI tools are evolving rapidly. New capabilities emerge monthly. Best practices are still being figured out in real-time. The people who are excelling at AI implementation are often younger, more comfortable with ambiguity, more willing to experiment.
But they're not the ones in positions of authority to approve and implement these initiatives. The people with authority—the IT leaders, the CIOs, the CTOs at many mid-sized companies—are from a generation that learned technology when it was more stable, more predictable, more controllable.
And they're looking at AI through that lens. They're trying to treat it like any other IT project, with the same processes and controls and approval mechanisms. Which means they're fundamentally misunderstanding what they're dealing with.
Here's an example from a real client: their IT leader insisted on complete specifications before granting any data access for an AI project. He wanted to know exactly what data would be used, exactly how it would be processed, exactly what the outputs would look like, exactly what security measures would be in place. All reasonable questions for a traditional IT project.
But with AI, you don't know all of that upfront. You need to experiment to figure out what data is actually useful. You need to iterate to refine the outputs. You need to start small and learn as you go. The IT leader's perfectly reasonable requirements made the project impossible.
Any initiative that requires data normalization, data hygiene, or data storage is going to have to happen at a micro level. It has to start in one department, or one section of the company, with a specific team, on a specific project—simply because access at a company-wide level is too hard to get.
This is the practical reality: you can't implement AI at scale when IT controls all access to data and systems. Because IT will slow everything down with approval processes that don't match the iterative nature of AI development.
So you have to find ways to work around it. You implement pilots in specific departments where you can get local approval. You use data that doesn't require IT's permission. You build proof-of-concept projects that demonstrate value before asking for broader access. You do whatever you can to create momentum outside of IT's direct control.
But this creates its own problems. Shadow IT is a real concern. Compliance and security risks are real concerns. You can't just bypass IT entirely and hope for the best. At some point, you need their cooperation.
The only way to do it is to come in from above. Someone has to be able to say, "I'm the AI advisor, and the IT leader, the CTO, and the CIO report to me on this initiative." That's how you get control.
This is harsh, but it's the reality. You cannot convince a rigid, inflexible IT leader to change their entire approach through persuasion. You cannot educate them into seeing AI differently. The frameworks they operate from are too deeply ingrained.
What you can do is get executive buy-in above them. You need the CEO or COO or CFO to understand why AI matters, what it requires, and why the traditional IT approval process is incompatible with that. And you need them to grant authority to someone—whether that's you, or a fractional leader, or a new internal role—who can drive AI initiatives with IT in a support role rather than a gatekeeping role.
This is uncomfortable. It's politically sensitive. It can create friction. But it's often the only way to make progress.
The alternative is what we see constantly: clients who want to implement AI, who see the value, who are willing to invest—but who get completely stalled by IT gatekeepers who can't or won't adapt their approach. Projects that take six months to get approval. Initiatives that die in committee. Pilots that never launch because they can't get data access.
If you can't get executive buy-in to override IT, your only option is to start very small. Find a use case that doesn't require broad data access. Build something that demonstrates value with minimal IT involvement. Get results that create momentum.
Then use those results to make the case for broader access and more ambitious projects. This is the long game. It's frustrating. It's slower than it should be. But it might be the only path available.
The key is choosing that first project carefully. It needs to be valuable enough to matter but constrained enough that you can execute without IT's full cooperation. It needs to deliver results quickly enough that you don't lose momentum. And it needs to build credibility with the executive team so you can eventually get the authority you need to do this properly.
Companies are going to realize just how badly they need to implement AI over the next two to three years. The competitive advantage is real. The efficiency gains are real. The strategic value is real.
But a lot of companies are going to struggle because they can't navigate around their IT gatekeepers. They're going to watch competitors pull ahead while they're still stuck in six-month approval cycles for pilot projects.
The IT gatekeeper problem is one of the biggest barriers to AI adoption that nobody wants to talk about. Because it's politically uncomfortable. Because it means acknowledging that the people who have been stewards of technology for decades might not be the right people to lead the next technology transition. Because solving it requires uncomfortable organizational change.
But ignoring it doesn't make it go away. It just ensures your AI initiatives keep stalling.
AI initiatives fail when traditional IT frameworks block implementation. At Winsome Marketing, we help companies get executive buy-in and implement AI strategies that work around—or override—gatekeepers who can't adapt to how AI actually works.
Ready to break through the IT barrier? Let's build an AI implementation strategy with the authority it needs to actually succeed.