
Has AI Replaced College Writing?


Hua Hsu's masterful excavation in The New Yorker reads like an academic autopsy report, but we're not examining a corpse—we're watching a metamorphosis that makes Kafka look optimistic. His piece, "What Happens After A.I. Destroys College Writing?", doesn't just chronicle the death of the five-paragraph essay; it documents the precise moment when education became theater, and every student learned to become a director rather than an actor.

The statistics alone make for a dystopian Netflix special: 47.3% of Cambridge students use AI chatbots on their degree coursework, while 89% of students acknowledge using ChatGPT for their homework. But Hsu's genius lies not in the numbers—it's in the portraits of students like Alex, who casually admits to using AI "for any type of writing in life" while simultaneously acknowledging it's "of course" cheating. This isn't moral bankruptcy; it's radical pragmatism in a system that never quite figured out its own purpose.

The Great Unraveling: When Process Becomes Product

What emerges from Hsu's ethnographic deep dive is something far more unsettling than academic dishonesty: we're witnessing the collision between an educational philosophy built on suffering and a generation that refuses to pretend difficulty equals depth. When Alex shows Hsu his art history paper—generated in minutes, earning an A-minus—he's not breaking the system. He's exposing its fundamental emptiness.

The most chilling moment comes when Alex admits he'd be "so fucked" if pressed for specifics about his own paper. This isn't laziness; it's the logical endpoint of an educational model that prioritized performance over understanding. As Hsu notes, if this were 2007, the paper's "generic tone" and "precise, box-ticking quality" wouldn't have raised eyebrows. We've been training students to write like machines for decades—now we're shocked they've outsourced the job to actual machines.

Current AI detection tools reflect this same fundamental confusion about what we're trying to preserve. Even the most accurate AI detector available, Scribbr's premium tool, achieves only 84% accuracy, and false positives—human-written texts flagged as AI—remain a persistent problem. We're building increasingly sophisticated surveillance systems to catch students using tools that often produce better work than they could manage alone.

The Productivity Paradox: When Efficiency Becomes the Enemy

The students Hsu interviews reveal a generation that has internalized the brutal logic of optimization. May, the Georgetown sophomore, describes AI as allowing her to "sleep more now"—a confession that reads like revolutionary literature in a culture that has weaponized exhaustion as virtue. When she explains how AI helps her "breeze through busywork and deepen her engagement with the courses she felt passionate about," she's not describing cheating. She's describing triage.

The AI market in education is projected to reach $6 billion by 2025, and half of all learning management systems are expected to incorporate AI by the end of 2024. Universities are rushing to partner with the same companies their honor codes theoretically prohibit students from using. The cognitive dissonance would be comedic if it weren't so revelatory of institutional confusion.

Professor Barry Lam's experiment epitomizes this confusion: when he gave his undergraduates a PhD-level question, expecting AI to accelerate their learning, "they fucking failed it miserably." The problem isn't that AI makes students lazy—it's that we've confused information access with intellectual development, speed with depth, efficiency with understanding.


The Authenticity Crisis: What Happens When Everyone Sounds Like Everyone

Perhaps the most profound insight in Hsu's piece comes from Kevin, the Syracuse student who worries that his friend's essay "does kind of sound like ChatGPT." We're not just dealing with plagiarism; we're witnessing the emergence of a new kind of linguistic homogenization. When AI becomes the median point of acceptable prose, human writing starts to converge toward the algorithmic mean.

This convergence creates what we might call the "authenticity trap"—the harder we police for AI-generated content, the more human writing begins to mimic AI patterns. Students learn to write not for clarity or beauty or truth, but for undetectability. They become performers in an elaborate game of Turing Test theater, where the goal isn't communication but camouflage.

The irony deepens when we consider that ChatGPT's training data largely consists of human writing—including, presumably, decades of student essays. We're now in the bizarre situation of students using AI trained on other students' work to complete assignments designed to teach them skills the AI has already abstracted from similar assignments. It's an academic ouroboros, but with profit margins.

The Humanization of Skill: What Remains When Everything Else Falls Away

What Hsu captures brilliantly is the moment when educators like Corey Robin and Harry Stecopoulos retreat to handwritten blue books—not out of nostalgia, but out of necessity. There's something both tragic and hopeful about this return to the most basic technologies of thought. When Robin asks students to identify and contextualize excerpts from their reading, he's not just testing knowledge; he's insisting on the irreducible value of human memory and connection.

The return to handwriting isn't just pedagogical—it's philosophical. Neuroscientists have found that the "embodied experience" of writing by hand taps into parts of the brain that typing does not. In a world where AI can generate prose at superhuman speed, the slow, physical act of forming letters becomes a form of resistance.

Yet this resistance carries its own limitations. As Hsu notes, students who learned to type before they could tie their shoes experience handwriting as "stifling." We're not just asking them to abandon their tools; we're asking them to abandon their native mode of expression. The question becomes: are we preserving something essential, or are we simply privileging the technologies of our own intellectual formation?

The Skills Question: What Writing Will Become

The deeper question Hsu's piece raises isn't whether students should use AI—it's what happens to human thought when the tools of thinking fundamentally change. Writing has never been just about communication; it's been about the peculiar way that struggling with language forces us to struggle with ideas. When that struggle disappears, what replaces it?

Some educators, like Dan Melzer at UC Davis, see opportunity in the crisis. His workshops treat writing as "a deliberative, iterative process involving drafting, feedback, and revision"—a vision that positions AI as a collaborator rather than a replacement. But this requires a fundamental shift in how we understand education itself, from knowledge transmission to skill development to something closer to intellectual apprenticeship.

The students Hsu interviews suggest a generation that has already made this shift. They've become curators of AI output, editors of algorithmic prose, managers of digital labor. These are not traditional academic skills, but they may be more relevant to their future work than the ability to construct a five-paragraph essay by hand.

Performance vs. Understanding

What emerges from Hsu's exploration is a fundamental question that goes far beyond education: in a world where machines can perform cognitive tasks with superhuman efficiency, what makes human cognition valuable? Is it our ability to reproduce information, to follow forms, to demonstrate compliance with academic conventions? Or is it something more elusive—our capacity for genuine surprise, for connecting disparate ideas, for finding meaning in uncertainty?

The students in Hsu's piece aren't rebels; they're realists. They've intuited something that many educators are still struggling to accept: the old model of academic performance is already dead. We can either grieve its loss while building surveillance systems to prevent its replacement, or we can begin the harder work of figuring out what comes next.

The choice isn't between authentic human thought and artificial intelligence. It's between clinging to forms of assessment that were always partly performative and creating new ways to cultivate the kinds of thinking that remain irreducibly human. The essay is dead; long live whatever comes after.

Ready to navigate this transformation intelligently? Winsome Marketing's growth experts understand how AI is reshaping not just education, but every aspect of how we communicate, persuade, and connect. We help companies build authentic engagement strategies that leverage AI's power while preserving human creativity and insight. Because in a world where everyone has the same tools, the competitive advantage belongs to those who know how to use them with wisdom.
