3 min read
Writing Team
Mar 16, 2026 8:00:01 AM
There is a genuinely useful product buried inside what Grammarly — now operating under the parent brand Superhuman — just launched. The idea of receiving craft-specific feedback modeled on the stylistic principles of great writers is not inherently absurd. Writers have always learned by studying masters. The problem is how Grammarly got there, and what they chose not to ask permission for along the way.
The feature, called Expert Review, allows users to solicit feedback styled after real writers and thinkers, living and dead. Stephen King. Neil deGrasse Tyson. Carl Sagan. William Strunk Jr. The historian David Abulafia, who died in January. The software processes your text and returns guidance attributed to these figures, with a disclaimer noting that none of them are actually involved and none have endorsed the product.
Historian Vanessa Heggie called it "obscene" on LinkedIn. Yale postdoctoral fellow C.E. Aubin told WIRED the system "seems to validate the profound mistrust so many scholars in the humanities have for AI." These are not overreactions.
Before getting to what's wrong here, it's worth acknowledging what Grammarly is attempting, because the underlying instinct is sound.
Feedback specificity matters. Generic AI writing suggestions — tighten this sentence, vary your structure, avoid passive voice — are useful up to a point and then become noise. Feedback anchored to a specific stylistic tradition, a particular intellectual framework, or a recognizable voice gives writers something to push against. "Write cleaner sentences" lands differently than "William Zinsser would cut this paragraph in half." The latter creates a mental model. It teaches rather than just corrects.
If the Expert Review feature were built on publicly available stylistic principles, properly licensed material, and — crucially — the actual participation or at minimum the legal consent of living subjects, there would be a strong product argument for it. Personalized, tradition-specific writing feedback is a real gap in what AI writing tools currently offer.
The execution, however, has serious problems that good intentions don't resolve.
The legal question is unresolved and significant. Training AI models on the published works of living authors without permission sits in deeply contested copyright territory — territory currently being litigated in multiple cases across the industry. Grammarly's disclaimer that these "references to experts are for informational purposes only" and "do not indicate any affiliation" does not resolve the underlying question of how those expert agents were built. What data trained them? Was it licensed? Those questions have no public answers.
The ethical question is, if anything, sharper. Replicating the intellectual voice of living scholars and researchers — people who have spent careers building the expertise being simulated — without their knowledge or consent is a meaningful harm regardless of legal outcome. Aubin's observation cuts to the precise point: these are not expert reviews because no experts are involved. The scholarship is being used while the scholar is being removed. That's not a technicality. It's a foundational problem with the product's premise.
The treatment of the recently deceased adds another dimension entirely. Building an AI agent modeled on a historian who died in January, before his estate or colleagues have had any opportunity to address questions of intellectual legacy, is the kind of move that forces the industry to reckon with questions it has been deliberately avoiding. At what point after death does a person's life work become available for commercial simulation without consent?
For marketing and content professionals evaluating AI writing tools, the Grammarly Expert Review story is useful precisely because it separates a legitimate product innovation from an ethically compromised implementation.
The demand for voice-specific, tradition-anchored feedback is real and growing. AI tools that can provide craft-level guidance — not just grammatical correction — have genuine value in content strategy and brand voice development. But the tools that will prove durable are the ones built on clear intellectual property frameworks, transparent data sourcing, and in the case of living subjects, actual participation or consent.
The reputational risk of deploying tools built on contested IP is not theoretical. As copyright litigation around AI training data moves through the courts, the companies and agencies whose workflows depend on those tools inherit exposure they may not have priced in. Choosing AI tools with clean IP lineage is becoming a due diligence question, not just an ethical preference.
Grammarly identified a real need. They should have built toward it more carefully.
If you want to build AI into your content operations in ways that hold up legally, reputationally, and creatively, Winsome Marketing's strategists can help you find the tools worth trusting.