4 min read
Writing Team : Jan 26, 2026 8:00:04 AM
The American Association of Colleges and Universities (AAC&U) and Elon University just surveyed 1,057 faculty members about generative AI's impact on higher education. The results paint a picture of a profession watching its core mission—developing critical thinking, attention, and ethical reasoning in students—being actively undermined by technology it feels unprepared to address.
Nine in 10 faculty believe AI will diminish students' critical thinking skills. Ninety-five percent say it will increase student overreliance on AI tools. Eighty-three percent predict decreased attention spans. These aren't technophobic outliers. These are the people actually in classrooms watching what's happening to student cognition in real time.
And most institutions have done essentially nothing to prepare them for it.
Sixty-eight percent of faculty report their institutions have not prepared them to use AI in teaching, mentorship, or scholarship. This isn't a minor oversight—it's systemic abandonment of educators facing rapid technological disruption of fundamental pedagogical assumptions.
The consequences are already visible in graduates. Sixty-three percent of professors say last spring's graduates were not prepared to use generative AI at work. Seventy-one percent say those graduates lack understanding of ethical issues related to AI use. We're sending students into a workforce where AI literacy matters while failing to provide the education that would develop it.
About a quarter of faculty don't use any AI tools at all. A third don't use them in teaching. Eighty-two percent cite resistance to or unfamiliarity with AI as hurdles to departmental adoption. This isn't simple technophobia—it's a rational response to being handed disruptive technology without training, resources, or institutional guidance on how to preserve educational values while integrating it.
Lynn Pasquerella, president of AAC&U, framed it correctly: "This is not a story of simple resistance to change. It is, instead, a portrait of a profession grappling seriously with how to uphold educational values in a rapidly shifting technological landscape."
Except they're grappling alone, without institutional support, while student cognitive capabilities demonstrably decline.
Seventy-eight percent of faculty say AI-driven cheating is rising. But defining what constitutes cheating has become impossibly complex.
Just over half of faculty consider following a detailed AI-generated outline to be cheating. Just under half say it's legitimate use or they're unsure. Forty-five percent say using AI to edit papers is legitimate. Fifty-five percent say it's illegitimate or they're uncertain.
This isn't disagreement on edge cases. This is fundamental confusion about what constitutes students' own work when AI can generate outlines, draft prose, and edit for clarity. The survey reveals a profession unable to reach consensus on basic academic integrity standards because the technology has fundamentally altered what "writing a paper" means.
The problem compounds when you consider assessment. If faculty can't agree on whether AI-assisted editing is cheating, how do they construct assignment rubrics? How do they evaluate submissions? How do they explain standards to students when the faculty themselves are split nearly 50-50 on legitimate versus illegitimate use?
The survey's most alarming finding: 90 percent of faculty believe AI will diminish students' critical thinking skills. This isn't abstract concern about hypothetical future impacts. Faculty are reporting observed decline in capabilities that constitute the core purpose of higher education.
Critical thinking—the ability to analyze information, evaluate sources, construct arguments, identify logical fallacies, and synthesize evidence into coherent positions—doesn't develop through passive consumption of AI-generated content. It develops through the struggle of constructing arguments, the friction of wrestling with complex ideas, and the cognitive effort of translating thought into structured communication.
When students offload that struggle to AI tools, they're not just taking shortcuts on assignments. They're opting out of the cognitive development those assignments were designed to produce.
Eighty-three percent of faculty predict AI will decrease student attention spans. This tracks with existing research on how digital tools fragment attention and reduce capacity for sustained focus. But AI accelerates the problem—students can now receive instant answers to questions that previously required extended engagement with source material, deep reading, and patient synthesis.
The skill of sitting with uncertainty, tolerating confusion while working through complex material, and persisting through intellectual difficulty is eroding. These aren't incidental capabilities. They're foundational to advanced thinking in every domain.
Despite overwhelming agreement that AI will harm critical thinking, attention, and self-reliance, faculty are split on whether AI literacy matters for student success. Half say it's extremely or very important. Eleven percent say it's slightly important. Thirteen percent say it's irrelevant.
This split reveals a deeper confusion: Is the goal preparing students to work effectively with AI tools, or is the goal developing human cognitive capabilities that AI cannot replicate? The answer should be "both," but achieving both requires institutional clarity on educational priorities and pedagogical training faculty aren't receiving.
Some faculty maintain hopeful predictions. Sixty-one percent believe AI will improve and customize learning. Forty percent think it will increase students' ability to write clearly. Forty-one percent believe it will improve research skills.
These predictions contradict the same faculty's beliefs about declining critical thinking, attention, and overreliance. The cognitive dissonance suggests a profession hoping technology will somehow solve problems it's actively creating—a hope unsupported by evidence from classrooms where AI is already in use.
Nearly half of surveyed faculty view AI's future impact as more negative than positive. Only one in five see it as more positive than negative. The remaining third presumably fall somewhere in the middle or are unsure.
These aren't luddites resisting inevitable progress. These are educators with direct observation of what's happening to student cognition, learning behaviors, and academic integrity. They're watching students develop dependencies on tools that atrophy the exact capabilities higher education exists to develop.
And they're doing it without institutional preparation, clear guidelines, consensus on standards, or resources to address the problems they're observing.
For those of us outside academia watching AI deployment, this survey should be alarming. Higher education serves as an early warning system for broader societal impacts. When 90 percent of faculty report AI diminishing critical thinking in students, that's not an education-sector problem. That's a preview of workforce capabilities, civic engagement quality, and democratic discourse capacity in populations that grew up offloading cognitive effort to AI.
The question isn't whether AI will be integrated into education—it already has been, with or without faculty preparation. The question is whether we'll address the cognitive consequences faculty are reporting, or whether we'll continue deploying tools that demonstrably harm the thinking skills we claim to value while calling it innovation.
Right now, we're choosing the latter. Faculty are telling us clearly what's happening. We should listen.
Need strategic guidance on AI deployment that accounts for actual human impacts? Winsome Marketing's growth experts help organizations implement AI without sacrificing the capabilities that create genuine value. Let's talk.