Harvard Research Says AI Business Strategy Is "Trendslop"

A study published in the Harvard Business Review tested the world's leading large language models, including GPT-5, Claude, Gemini, and Grok, across thousands of strategic decision simulations. The finding: AI models do not make context-specific strategic trade-offs. They consistently default to whatever sounds most culturally palatable, regardless of the business scenario presented to them.

The researchers gave this pattern a name: "strategy trendslop."

What the Study Actually Tested

Researchers ran thousands of simulations forcing AI models into binary strategic choices — the kind of hard trade-offs that real executive decision-making requires. The models were presented with opposing strategic paths and asked to choose.

The results were consistent across models. When asked to choose between differentiation and commoditization, AI chose differentiation. Between augmentation and automation, it chose augmentation. Between long-term and short-term orientation, it chose long-term — nearly every time, across nearly every business scenario.

The models weren't analyzing the specific context of each scenario. They were pattern-matching to whatever option carried more positive cultural weight in the training data. Strategy, in this framing, becomes a popularity contest rather than a reasoned analysis of trade-offs.

Better Prompts Don't Fix It

The study addressed the most common counterargument directly: that AI bias is a prompt engineering problem, solvable with more precise instructions.

Researchers ran over 15,000 trials specifically manipulating prompts for ChatGPT-5. They reversed the order of options, required structured pros-and-cons analysis, created detailed corporate personas, provided rich organizational context — including specifying that the AI was advising a traditional construction company rather than a tech startup — and offered simulated incentives for accuracy.

None of it moved the needle meaningfully. The biased response rate dropped by less than 2% across all prompt variations. The researchers' conclusion: the bias is embedded in the training data, not in the prompt structure. You cannot engineer your way out of it.

The Hybrid Trap: When AI Avoids the Hard Choice Entirely

When researchers removed the binary constraint and allowed the models to answer freely, a second pattern emerged. Rather than committing to a strategic direction, AI frequently recommended pursuing both options simultaneously — what the researchers call the "hybrid trap."

The hybrid recommendation sounds balanced and sophisticated. In strategic terms, it is the opposite. Michael Porter's foundational work on competitive strategy established that attempting to pursue differentiation and cost leadership simultaneously leaves firms operationally conflicted and competitively weak — what Porter called being "stuck in the middle." The two strategies require fundamentally different organizational structures, resource allocations, and operational priorities. A company trying to execute both typically executes neither well.

By consistently defaulting to hybrid recommendations when given the option, AI models are steering users toward exactly the kind of unfocused strategy that competitive theory has cautioned against for decades.

Why This Happens: How AI Models Are Trained

The mechanism behind strategy trendslop is not a bug — it is a direct consequence of how large language models are built. These models are trained on vast amounts of internet text, which means they reflect the distribution of ideas that appear most frequently and most positively in public discourse.

Words like "collaboration," "sustainability," "differentiation," and "long-term thinking" appear constantly in business writing, leadership content, and corporate communication — and almost always in a positive context. Words like "commoditization," "cost leadership," and "short-term focus" appear less frequently and are often framed negatively.

The model learns to favor the former. Not because the former is strategically superior in any given context, but because the former is statistically more common and more positively associated in the training corpus. The result is a system that produces advice that sounds like the current consensus of internet business culture, regardless of whether that consensus applies to your specific situation.

What the Researchers Say AI Should Be Used For

The study does not argue that AI has no role in strategic work. The researchers' position is more specific: AI is a useful tool for expanding the set of options and identifying blind spots — a brainstorming mechanism — but should not be the decision-maker.

They recommend actively counteracting known biases by explicitly asking AI to argue for the opposite of its initial recommendation, forcing it to steelman positions it would not naturally default to. Final strategic judgment, they conclude, must remain with humans who have context, accountability, and the capacity to make hard trade-offs.
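
In practice, that check can be as simple as a scripted follow-up prompt. The sketch below is purely illustrative and is not the researchers' protocol: it assumes the OpenAI Python SDK, an API key in the environment, and a placeholder model name, and it simply asks for a recommendation and then forces the model to steelman the opposite choice so a human can weigh both sides.

```python
# Illustrative "argue the opposite" check; not the study's methodology.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name, swap in whatever you use

scenario = (
    "A regional construction firm must choose between differentiating on "
    "premium, design-led projects or competing on cost for standard builds."
)

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text response."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: get the model's default recommendation.
first_take = ask(f"{scenario}\nWhich path should the firm choose, and why?")

# Step 2: force it to make the strongest case for the opposite position.
counter = ask(
    f"{scenario}\nYou previously recommended:\n{first_take}\n"
    "Now make the strongest possible case for the opposite choice."
)

print("Default recommendation:\n", first_take)
print("\nSteelman of the opposite:\n", counter)
# A human decision-maker reads both and makes the actual trade-off.
```

The specific tooling is beside the point; what matters is that the opposing case gets generated deliberately rather than hoped for, and that the final call stays with a person.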

What This Means for Marketing and Growth Leaders

For marketing specifically, the implications are direct. If your brand positioning, messaging strategy, or growth plan was generated or heavily shaped by an AI model, there is a reasonable probability it resembles your competitors' strategies: any competitor leaning on the same tools is drawing on the same training data and defaulting to the same popular frameworks.

Differentiated positioning requires making choices about what you will not do, who you will not serve, and which advantages you will not pursue. That kind of strategic constraint is precisely what the research shows AI models resist producing.

The tools are genuinely useful for research, execution, content production, and analysis. For the decisions that determine how a business is positioned and where it competes, the judgment has to be human.

At Winsome Marketing, our growth strategy work is built on that distinction — AI where it accelerates execution, human expertise where it determines direction. If your strategy could use a reality check, our team is ready to help.