The problem with AI adoption isn't enthusiasm. It's method.
A study by Stanford University researchers, who were given access to observe how Google employees learned and used AI tools over 18 months, has identified a consistent pattern separating people who extract genuine value from AI from those who don't. The finding isn't about prompt engineering or technical fluency. It's about how people frame the problem before they open the tool.
The research, published this week alongside a post from Google DeepMind's organizational design lead Martin Gonzalez, describes a dominant failure mode called "simple substitution" — and a five-strategy framework that characterizes how successful adopters actually work.
Most people who try AI tools approach them by identifying a task they already do and looking for an AI alternative. Write an email — ask AI to write the email. Summarize a document — ask AI to summarize the document. Research a topic — ask AI to research the topic.
The Stanford researchers found that this approach consistently underdelivers, for a predictable reason: the effort required to learn the tool, construct an effective input, and evaluate and refine the output frequently exceeds the time saved by not doing the task manually. The payoff doesn't clear the effort threshold, and adoption stalls.
The people who became consistent, high-value AI users didn't start with tasks. They started with blockers — the specific friction points in their work that, if removed, would allow them to move faster, think more clearly, or operate at a higher level. That reframe changes the entire implementation logic.
The Stanford researchers found that successful AI adopters, regardless of their actual job function, were unknowingly applying a product management approach: identifying high-value opportunities, understanding what different AI tools can do, and finding a fit between the two. Rather than looking for quick task substitutions, they redesigned workflows.
The analogy the study uses is useful: generative AI is a Swiss Army knife — a general-purpose tool with many functions. The product manager mindset helps you decide which blade to pull for a given job, rather than defaulting to the one you've seen used most often.
From that orientation, the study identifies five strategies.
The first is to start with what's blocking your work, not with the technology. Identify the specific hurdles — the tasks that slow you down, limit your analytical depth, or constrain creative output — and use those as the brief for finding an AI solution.
The second is to look beyond the chatbot when choosing a tool. The default assumption that AI assistance means a conversational interface misses a significant portion of what's available. Different tools are better suited to different problems, and evaluating that fit is part of the adoption process.
The third is to start small and experiment rapidly. The goal in early adoption isn't to redesign the entire workflow — it's to prototype, test, and refine. The research suggests this approach surfaces what actually works and avoids the frustration and cost of premature scaling.
The fourth is to think holistically across systems. The researchers found that the largest productivity gains came not from isolated task automation but from embedding AI across broader processes — bridging datasets, stitching together workflows that reduce multiple manual steps, or synthesizing inputs from multiple areas of expertise into strategic outputs. The value compounds when AI operates across a system rather than within a single task.
The fifth is to document and share what works. Successful adopters package their findings into repeatable templates that others can adapt. This step converts individual productivity gains into organizational ones, allowing teams to skip the trial-and-error phase and build on proven approaches.
The study was conducted inside one of the most AI-native organizations in the world, with access to tools and internal resources most companies don't have. That context could make its findings seem distant from the average enterprise AI rollout.
The opposite is more likely true. If simple substitution is the dominant failure mode at Google — where employees have strong technical fluency, broad access to tools, and organizational encouragement to experiment — it is almost certainly the dominant failure mode everywhere else as well.
The implication for organizations currently in the middle of AI adoption programs is direct. Training employees on how to use AI tools is a different and shallower intervention than training them to identify where AI can remove meaningful blockers in their actual work. The former produces occasional use. The latter produces the workflow redesign that generates compounding returns.
The distinction between those two outcomes is not primarily a technology question. It's an organizational design question — about how AI adoption is framed, what success is measured against, and whether the knowledge developed by early adopters is captured and distributed or left to dissipate.
For teams working through that question in marketing, growth, and content operations, the Stanford framework is a useful starting structure. If you want help applying it to your specific workflows and team structure, Winsome Marketing's growth team works through exactly this kind of implementation challenge.