LLMs Are Malleable: What Black Hat Tactics from 2010 Mean for AI Search
LLMs are deeply malleable, and right now they are completely susceptible to the spam tactics that worked pretty well for Black Hat SEO ten years ago.
We've been talking about personalization and marketing for what feels like forever. We know how important it is, and how great it can be. But tracking it is an absolute nightmare, and it's especially a nightmare when we look at AI.
The citations, the mentions, the clicks, the attribution—all of that stuff is going to become a huge challenge for us and the industry going into 2026.
AI search is the Wild West right now. There are all these people claiming they can provide robust tracking on it, and I don't believe any of them, to be frank. I don't think anyone's got a good solution yet.
Here's why: LLMs are deeply malleable, and right now they are completely susceptible to manipulation. We don't have any concrete ranking factors from them. It's a black box as to how they ingest information, how they output their citations, how they decide to share different sources based on the personalization of my search versus someone else's search.
The spammy tactics that worked pretty well for Black Hat SEO ten years ago? They work right now for LLMs. You can manipulate what an LLM says through Reddit threads. I know of one person who launched a full-on defamation campaign against their competitor through Reddit and was deeply successful at it. It ruined that competitor's reputation on Google, on Reddit, and in LLMs, because of how heavily those forums are weighted.
The fundamental problem is that AI search is personalized in ways we can't see or measure. When I search for something in ChatGPT, I get different results than you do. The LLM is drawing on different contexts, different conversation histories, different inferred preferences.
Traditional search had personalization too, but we could at least see aggregate data. We could track rankings, even if they varied by location or search history. We had Search Console. We had analytics that showed us something.
With AI search, we're flying blind. There's no dashboard showing you when your content was cited. There's no way to know if you're appearing in AI-generated responses. There's no metric for "AI visibility" that actually means anything reliable.
And even when tools claim to track it, they're guessing. They're sampling. They're extrapolating from limited data points.
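To make that concrete, here's roughly what that sampling looks like under the hood. This is a hypothetical sketch, not any vendor's actual method: `ask_llm()` is a stand-in you'd wire to whatever model you're testing, and the prompts, brand, and canned responses are made up for illustration.

```python
# Hypothetical sketch of what sampling-based "AI visibility" tracking amounts to.
# ask_llm() is a stand-in that fakes varied responses; in reality you'd call a model API.
import random

CANNED_RESPONSES = [
    "You could look at HubSpot, Mailchimp, or Winsome Marketing for this.",
    "Popular options include HubSpot and Mailchimp.",
    "It depends on your budget, but many teams start with Mailchimp.",
]

def ask_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a random canned answer to simulate variance."""
    return random.choice(CANNED_RESPONSES)

def estimate_citation_rate(brand: str, prompts: list[str], runs_per_prompt: int = 20) -> float:
    """Fraction of sampled responses that mention the brand.

    Because answers vary by run, user, and context, this is an extrapolation
    from a small sample -- which is all any tracking tool can really offer.
    """
    mentions = total = 0
    for prompt in prompts:
        for _ in range(runs_per_prompt):
            total += 1
            if brand.lower() in ask_llm(prompt).lower():
                mentions += 1
    return mentions / total if total else 0.0

prompts = ["best marketing agencies for B2B", "which email tool should I use?"]
print(f"Estimated citation rate: {estimate_citation_rate('Winsome Marketing', prompts):.0%}")
```

Whatever number that spits out is an estimate from a handful of prompts and runs, not a census of what real users are actually being told.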
Even beyond just knowing if you're being cited, there's the bigger question: what happens when someone gets their answer from an LLM?
Did they visit your site? Probably not. Did they buy from you? Maybe, eventually, through some circuitous path you'll never be able to trace. Did they remember your brand was mentioned? Who knows.
The entire conversion funnel we've built our measurement systems around assumes people click through to websites. AI search short-circuits that assumption entirely.
So how do you prove ROI on content when the traffic never hits your analytics? How do you justify budget for "AI optimization" when you can't show concrete results?
I don't have an answer along the lines of "here is the perfect way to track all of this." But it is something we're going to need to study: run experiments, watch the people doing interesting work, and compare notes. We're going to have to get really creative in this space.
Here's what I think we can do: stop trying to measure AI search the way we measured traditional search. It's not going to work. The paradigms are too different.
Instead, we need to think about broader indicators. Brand search volume. Direct traffic patterns. Sales cycle changes. Qualitative feedback from prospects about how they found you. Competitive intelligence about where you're being mentioned.
It's messier. It's less precise. But it might be more honest than pretending we can track something we fundamentally can't.
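If you want to operationalize that, it doesn't have to be fancy. Here's a minimal sketch, with made-up field names and numbers, of what a period-over-period proxy report could look like: directional changes in brand search volume, direct sessions, and sales cycle length pulled from whatever tools you already have.

```python
# Minimal sketch of a proxy-signal report. Field names and figures are illustrative,
# not a product or a recommended schema.
from dataclasses import dataclass

@dataclass
class PeriodSnapshot:
    brand_search_volume: int      # e.g., from Search Console or a keyword tool
    direct_sessions: int          # e.g., from your analytics platform
    avg_sales_cycle_days: float   # e.g., from your CRM

def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new; 0.0 if there's no baseline."""
    return (new - old) / old * 100 if old else 0.0

def proxy_report(prev: PeriodSnapshot, curr: PeriodSnapshot) -> dict[str, float]:
    """Directional indicators -- useful for spotting trends, not for attribution."""
    return {
        "brand_search_volume_change_pct": pct_change(prev.brand_search_volume, curr.brand_search_volume),
        "direct_sessions_change_pct": pct_change(prev.direct_sessions, curr.direct_sessions),
        "sales_cycle_change_pct": pct_change(prev.avg_sales_cycle_days, curr.avg_sales_cycle_days),
    }

# Illustrative numbers only:
last_quarter = PeriodSnapshot(brand_search_volume=1200, direct_sessions=8400, avg_sales_cycle_days=42.0)
this_quarter = PeriodSnapshot(brand_search_volume=1450, direct_sessions=9100, avg_sales_cycle_days=39.5)
print(proxy_report(last_quarter, this_quarter))
```

None of this proves attribution. It just gives you trend lines you can talk about honestly.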
This is something we need to consider as a team: which tools we trust enough to recommend. I've had multiple clients come to me asking how we're tracking AI search right now, what tool we're using, and what tool they should be using. They need a solution, even an imperfect one, because their own clients are shopping around for it and they want to control that conversation.
We need to have an answer. Not necessarily a perfect tool, but a framework for how we approach this measurement problem honestly.
Because right now, the alternative is clients spending a ton of money on something that's completely useless to them.
Maybe the bigger lesson here is that we're entering a period where certainty is less available. Where proving things definitively is harder. Where we have to make strategic decisions with less data backing them up.
That's uncomfortable. But it might also be reality for a while.
The marketers who win in this environment won't be the ones with the best tracking dashboards. They'll be the ones who can operate strategically even when measurement is imperfect.
AI search tracking is messy, and anyone claiming otherwise is selling something. At Winsome Marketing, we help brands build search strategies that work even when measurement is imperfect—using broader indicators and honest frameworks.
Ready for a more honest conversation about AI search? Let's talk about what we can actually measure and what we can't.