Research Suggests AI Chatbots May Be Reducing Brain Activity
Writing Team · Apr 23, 2026
A growing body of research is raising a question that cuts against the prevailing enthusiasm for AI productivity tools: what does regular reliance on large language models do to the brain over time? The early findings are worth taking seriously — not as a reason to abandon AI tools, but as a reason to understand how we are using them.
The central concern is cognitive offloading — the transfer of mental tasks to AI systems in ways that reduce the brain's own engagement with those tasks. The research suggests this reduction in engagement is not neutral: it has measurable effects on brain activity, memory formation, and potentially long-term cognitive health.
The MIT Study: Up to 55% Reduction in Brain Activity
Research scientist Nataliya Kosmyna and colleagues at MIT Media Lab recruited 54 students to write short essays, dividing them into three groups: one using ChatGPT, one using Google search with AI summaries disabled, and one using no technology. Brainwave activity was measured throughout.
The results were stark. Students who wrote without assistance showed widespread brain activity across regions associated with creativity and information processing. The search-engine group showed strong activity in visual processing areas. The ChatGPT group showed brain activity reduced by up to 55% compared to the unassisted group.
"The brain didn't fall asleep, but there was much less activation in the areas corresponding to creativity and to processing information," Kosmyna said.
The memory effects were equally notable. After submitting their essays, students in the AI group were largely unable to quote from their own work and reported feeling no ownership over it. The essays themselves were described by evaluating teachers as "soulless" — similar to each other and lacking originality and depth.
The findings have not yet been published in a peer-reviewed journal, a significant caveat. The sample size of 54 students is small, and the essay topics were deliberately open-ended and required little research — conditions that may not generalize to all AI use cases. Nevertheless, the direction of the findings is consistent with other research in the field.
The Four-Month Follow-Up: Potential Lasting Effects
The more concerning finding from Kosmyna's study came four months later. Students who had originally used ChatGPT were asked to write a new essay without AI assistance. Their neural connectivity was measurably lower than that of students who had switched in the opposite direction — from unassisted writing to AI-assisted writing.
The interpretation is cautious but significant: students who used AI for the initial essays may not have engaged with the subject matter in ways that built lasting neural pathways. The cognitive work that produces durable knowledge and mental capability was, at least partially, outsourced — and the brain reflects that.
Cognitive Surrender: The University of Pennsylvania Research
A separate study from the University of Pennsylvania describes a related phenomenon the researchers term "cognitive surrender." Participants using generative AI chatbots tended to accept AI outputs with minimal scrutiny, overriding their own intuition and judgment in favor of the AI's response.
This is a distinct mechanism from simple reliance. Cognitive surrender describes an active deferral of judgment — not just using AI to do a task, but using AI to decide what to think about the task's output. The implications for professional work — where judgment, skepticism, and independent analysis are the core value-add — are direct.
A parallel finding from a separate context underscores the concern: medical professionals who used an AI tool to screen for colon cancer for three months subsequently performed worse at spotting tumors without the tool than they had before using it. The skill degraded during the period of AI assistance.
The Gamma Wave Research: Long-Term Cognitive Health
Computational neuroscientist Vivienne Ming, author of Robot Proof, conducted research with University of California, Berkeley students asked to predict real-world outcomes — including commodity prices. The majority of participants simply asked AI and copied the answer. Their gamma wave activity — a marker of cognitive effort — showed minimal activation.
Ming's concern extends beyond the immediate productivity loss. Weak gamma wave activity has been linked in other research to cognitive decline later in life. If the pattern of minimal cognitive engagement with AI-generated answers becomes habitual — particularly in younger populations — the long-term neurological implications could be significant.
"If we don't use it, the long-term implications for cognitive health are pretty strong," Ming said, referring to deep thinking as a cognitive capability that requires regular exercise to maintain.
Ming also noted a critical exception: a small subset of participants — fewer than 10% — used AI differently. Rather than accepting AI outputs directly, they used AI to gather data and then analyzed that data themselves. This group made more accurate predictions than other participants and showed stronger brain activation. The tool was the same; the mode of engagement was different.
What the Research Does and Does Not Show
Several important caveats apply to this body of research. The MIT study has not been peer-reviewed. Sample sizes across multiple studies are small. The conditions of controlled studies — essay writing on abstract topics, commodity price prediction — may not fully represent the range of ways professionals use AI tools in practice.
What the research does show, consistently, is that passive AI use — accepting outputs without independent engagement — reduces measurable cognitive effort. It does not show that AI use is inherently harmful. Ming's finding that active, analytical engagement with AI produces better outcomes and stronger brain activation suggests the tool's effect depends substantially on how it is used.
The GPS analogy Ming draws is instructive. Increased GPS use has been linked in research to worse spatial memory over time, and poor spatial navigation is a potential early predictor of Alzheimer's disease — a prediction Ming made about GPS almost two decades ago. The parallel to LLMs is not exact, but the mechanism — reduced cognitive exercise of a specific capability leading to measurable decline in that capability — is consistent.
What This Means for Professionals Using AI Tools
For marketing and growth professionals whose daily work involves writing, analysis, and strategic judgment — the precise capabilities this research flags as most vulnerable — the findings suggest a practical framework for AI use.
The distinction that Ming's research highlights is between AI as a cognitive replacement and AI as a cognitive tool. The former accepts AI output and moves on. The latter uses AI to gather, organize, or draft, then applies independent judgment, analysis, and synthesis to that output. The former reduces cognitive engagement. The latter may actually enhance it by raising the ceiling of what a human analyst can work with.
The research does not argue against using AI. It argues against using it passively. For professionals whose value lies in judgment, originality, and analytical depth, maintaining those capabilities requires exercising them — even when, especially when, AI can do the surface-level work faster.
Building AI into your workflow in ways that enhance rather than replace your team's cognitive output is exactly the kind of strategic question our team at Winsome Marketing works through with growth-focused clients. If you want to think through what responsible, high-performance AI use looks like for your organization, let's connect.