What 700 Million Weekly ChatGPT Users Tell Us About Our Cognitive Future
Writing Team | Nov 10, 2025
New research from Aalto University just confirmed what every AI skeptic has been warning about: using AI makes us massively overestimate our own cognitive performance. And the more "AI-literate" you think you are, the worse the problem gets. In a study of 500 participants using ChatGPT to solve logical reasoning tasks from the LSAT, researchers found that everyone overestimated their performance—but the people who considered themselves AI experts were the most delusional.
According to the study published in Computers in Human Behavior, this represents a reversal of the Dunning-Kruger Effect. Normally, people who are bad at something overestimate their abilities, while experts underestimate theirs. But with AI, the pattern flips: the more AI-literate you think you are, the more you overestimate your performance. Professor Robin Welsch puts it bluntly: "We would expect people who are AI literate to not only be better at interacting with AI systems, but also at judging their performance with those systems—but this was not the case."
This isn't just academic curiosity. This is evidence that AI is eroding metacognition—our ability to accurately assess our own thinking. And when you can't tell the difference between your work and the AI's work, between your judgment and the AI's pattern-matching, you lose the ability to know when you're wrong. That's not productivity. That's cognitive outsourcing with no quality control.
Here's the behavioral pattern that should terrify anyone deploying AI at scale: most users prompted ChatGPT only once per question. They copied the question, pasted it into the AI, and accepted the output without checking, second-guessing, or iterating. Welsch calls this "cognitive offloading"—letting the AI do all the processing while you abdicate judgment entirely.
The study used LSAT logical reasoning questions, which require significant cognitive effort. These aren't simple factual queries—they're complex, multi-layered problems where getting the right answer requires understanding context, evaluating competing arguments, and constructing logical inferences. Exactly the kind of work AI is terrible at (as we learned from the legal reasoning research).
But users didn't engage critically. They treated ChatGPT like an oracle. One prompt, one answer, done. And when asked to assess their own performance, they consistently overestimated how well they did. The AI gave them confidence without competence. They felt smarter because they had an answer quickly, but they weren't actually better at reasoning.
This is the AI productivity trap: speed and confidence don't equal accuracy. You're not thinking faster—you're thinking less and assuming the AI compensated. And because the feedback loop is broken (you don't know you're wrong until much later, if ever), you never learn to calibrate your confidence.
Doctoral researcher Daniela da Silva Fernandes identifies the core problem: "Current AI tools are not fostering metacognition, and we are not learning about our mistakes." Metacognition is awareness of your own thought processes—knowing when you understand something, when you're guessing, when you need more information. It's the internal calibration system that tells you "I'm confident about this" versus "I should double-check."
AI breaks that system. Because the AI always sounds confident, you absorb that confidence. Because the AI produces fluent, well-structured text, you assume it's correct. And because getting an answer is so easy, you don't engage the cognitive effort required to actually understand the problem.
The result? You develop an "illusion of knowledge." You think you understand because you have an answer, but you didn't do the reasoning that produces understanding. You're like a student who copied someone's homework—you have the solution, but you can't explain how you got there. And when challenged, you can't defend it, because it was never yours to begin with.
This is particularly dangerous for knowledge workers whose value comes from judgment, not execution. If you're a lawyer, marketer, strategist, or analyst, your job isn't to produce documents—it's to produce sound reasoning that informs decisions. AI can produce documents. It can't produce sound reasoning. And if you mistake the former for the latter, you're not being productive—you're being replaced without realizing it.
The most striking finding is the reversal of the Dunning-Kruger Effect among AI-literate users. Normally, novices overestimate their abilities while experts are more humble. But with AI, the people who consider themselves AI experts showed the greatest overconfidence. They assumed their AI literacy translated to better performance, when in reality it just made them trust the AI more blindly.
Welsch's interpretation: "AI literacy might be very technical, and it's not really helping people actually interact fruitfully with AI systems." Translation: knowing how AI works doesn't mean you know how to use it well. In fact, it might make you worse, because you're overconfident in your ability to evaluate AI outputs.
This should concern every organization investing in "AI training" and "AI literacy programs." If those programs focus on technical knowledge—how models are trained, what tokens are, how context windows work—without teaching critical evaluation of AI outputs, you're not making employees more effective. You're making them more confidently wrong.
Fernandes proposes a simple intervention: "AI could ask the users if they can explain their reasoning further. This would force the user to engage more with AI, to face their illusion of knowledge, and to promote critical thinking."
Imagine if ChatGPT didn't just give you an answer. Imagine if it said: "Here's my reasoning. Can you explain why you think this is correct? What assumptions am I making? What would change if [X variable] were different?" That would force users to engage critically, not just accept outputs passively.
This is what the study calls "better feedback loops." If AI systems required multiple prompts, iterative refinement, and explicit justification of reasoning, users would develop metacognitive awareness. They'd learn to distinguish between "the AI said this" and "I understand why this is true." They'd calibrate their confidence based on their actual understanding, not the AI's fluency.
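To make the idea concrete, here's a minimal sketch of what that kind of forced-reflection loop could look like. Everything in it is illustrative rather than taken from the study: `ask_model` is a hypothetical stand-in for whatever LLM client you already use, and `answer_with_reflection` and the prompt wording are ours.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for your real LLM call; wire this to whatever client you use."""
    return "[model answer and stated assumptions would appear here]"

def answer_with_reflection(question: str) -> dict:
    # 1. Make the user commit to their own reasoning *before* the AI's answer exists.
    user_reasoning = input(f"{question}\n\nBefore asking the AI: what's your answer, and why? ")

    # 2. Ask the model to expose its reasoning and assumptions, not just a verdict.
    model_answer = ask_model(
        f"{question}\n\nGive your answer, then list the assumptions your reasoning depends on."
    )
    print(f"\nModel's answer:\n{model_answer}")

    # 3. Close the loop: force a comparison instead of passive acceptance.
    reconciliation = input(
        "Where does your reasoning differ from the model's? "
        "Which of its assumptions would you check first? "
    )
    return {"yours": user_reasoning, "model": model_answer, "reconciliation": reconciliation}
```

The plumbing doesn't matter; the ordering does. The user commits to a position before the model's fluent answer can overwrite it, and the last step makes the disagreement explicit instead of letting it dissolve into borrowed confidence.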
But current AI tools don't do this. They optimize for ease of use, which means minimizing friction, which means maximizing cognitive offloading. You get answers fast. You feel productive. And you never realize you've stopped thinking.
If you're using AI for content creation, analysis, strategy, or decision-making, here's your warning: you probably think you're better at it than you are. Not because you're incompetent, but because AI's confidence is contagious, and its outputs are convincing. You're developing an illusion of knowledge without building actual expertise.
The solution isn't to stop using AI. It's to build friction back into the process: prompt more than once instead of accepting the first output, ask the AI to justify its reasoning and push on the weak points, explain the answer in your own words before you act on it, and verify the claims against sources you trust.
Organizations deploying AI need to build these practices into workflows. Not "use AI to be faster," but "use AI to augment reasoning while maintaining critical judgment." Otherwise, you're not increasing productivity—you're de-skilling your workforce and calling it innovation.
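One way a team could operationalize that, assuming you control the tooling, is to log the human's answer and confidence before the AI's output is revealed, then score the attempt once the outcome is known. The names below (`Attempt`, `record_attempt`, `calibration.csv`) are illustrative, not from the study or any particular product.

```python
import csv
import os
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Attempt:
    timestamp: str
    question_id: str
    user_answer: str
    user_confidence: float           # 0.0-1.0, recorded before the AI output is shown
    ai_answer: str
    correct: Optional[bool] = None   # filled in later, once ground truth is known

def record_attempt(path: str, attempt: Attempt) -> None:
    """Append one attempt so the gap between confidence and accuracy becomes visible."""
    row = asdict(attempt)
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row.keys()))
        if new_file:
            writer.writeheader()
        writer.writerow(row)

# Usage: capture the human's commitment first, then the AI's answer; score it later.
record_attempt("calibration.csv", Attempt(
    timestamp=datetime.now(timezone.utc).isoformat(),
    question_id="lsat-042",
    user_answer="B",
    user_confidence=0.9,
    ai_answer="C",
))
```

Over a few weeks, the gap between `user_confidence` and the `correct` column is exactly the overconfidence the Aalto study measured, except now it's visible to the person who has it.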
Want to build AI workflows that enhance thinking instead of replacing it? Let's talk. Because the companies that win won't just use AI faster—they'll use it smarter. And that requires knowing the difference between confidence and competence.