
Anthropic Cleared in Federal Copyright Case


Judge William Alsup just handed down a ruling that should terrify every creative professional in America. In blessing Anthropic's wholesale piracy of copyrighted books to train Claude AI, he's essentially declared that "exceedingly transformative" use justifies any theft—as long as you're building a multi-billion-dollar AI empire.

This isn't just bad law. It's intellectual property apocalypse dressed up as legal precedent.

The Precedent Problem: When Google Books Meets AI Greed

Alsup's ruling leans heavily on the "transformative use" doctrine established in Campbell v. Acuff-Rose (1994) and weaponized in Authors Guild v. Google (2015). But the comparison is intellectually dishonest and legally dangerous.

@aeyespybywinsome: "To be clear - Anthropic will still face a trial over the pirated books, but their use of the material... well, they got away with it."

Google Books created a searchable index that displayed limited snippets—essentially a digital library catalog that helped people discover books they might want to buy. The Second Circuit found this transformative because "Google's search and snippet functions provided a new and different purpose compared to the original works" and crucially, the snippets "did not provide a significant market substitute for the protected aspects of the originals."

Anthropic's Claude, by contrast, ingests entire books to generate competing content. When Claude writes a mystery novel, it's not pointing you toward Andrea Bartz's "We Were Never Here"—it's replacing it. The "transformation" here is pure linguistic sleight of hand: theft becomes "training," plagiarism becomes "learning," and copyright infringement becomes "innovation."

The legal framework Alsup relies on requires four-factor analysis, but he conveniently glosses over the market harm factor. The Supreme Court in Campbell emphasized that "the effect on the potential market for the original (and the market for derivative works) is 'undoubtedly the single most important element of fair use.'" How can the judge claim no market substitution when AI models trained on copyrighted works are explicitly designed to produce similar content?


The Derivative Works Disaster

Here's where Alsup's reasoning completely falls apart: derivative works protection. Under 17 U.S.C. § 106(2), copyright holders have exclusive rights to authorize derivative works based on their original creations. This isn't some arcane legal technicality—it's the economic foundation of creative industries.

When Anthropic feeds "The Good Nurse" into Claude and the AI subsequently generates medical thrillers that incorporate similar narrative structures, character archetypes, and plot devices learned from Charles Graeber's work, that's derivative use. The fact that Claude doesn't photocopy exact passages is irrelevant—it's creating new works derived from copyrighted source material without permission or compensation.

The judge's "like any reader aspiring to be a writer" analogy is particularly insulting. Human readers don't have perfect recall of millions of copyrighted works. They don't instantly analyze and synthesize every book they've ever read to generate competing content on demand. The comparison between human inspiration and machine data processing reveals either profound ignorance of how AI actually works or willful misunderstanding.

Why This Precedent Is Catastrophic

For Authors: If "transformative training" justifies unlimited copying, what prevents AI companies from scraping every book, every article, every piece of creative writing? Authors spend years crafting unique voices, developing signature styles, creating distinctive narrative approaches—all of which become fair game for AI consumption under this ruling.

For Visual Artists: The precedent extends beyond literature. Disney and Universal already filed a lawsuit against AI image generator Midjourney this month, accusing it of piracy. If text-based training gets a free pass, visual art training won't be far behind. Every digital artwork becomes potential AI fodder.

For Musicians: Sound familiar? It should. We've already seen AI models trained on copyrighted music without compensation. If courts consistently rule that AI training constitutes transformative fair use, the entire licensing ecosystem that supports creative professionals collapses.

The Real Legal Standard: Thomson Reuters Shows the Way

Contrast Alsup's permissive ruling with a recent Delaware decision that actually got it right. In Thomson Reuters Enterprise Centre GMBH v. ROSS Intelligence Inc., a federal court "granted Thomson Reuters's partial motion for summary judgment on its direct infringement claim and rejected Ross's fair use defense" when an AI company used Westlaw headnotes to train a competing legal search engine.

The Delaware court recognized what Alsup missed: when AI training creates direct commercial competition with the source material, fair use doesn't apply. The fact that Ross's output "resembles how Westlaw uses headnotes" rather than generating entirely new content actually supported finding infringement, not fair use.

This is the precedent courts should follow—not Alsup's intellectual property giveaway.

The Economic Reality Check

Let's be brutally honest about what's happening here. Anthropic holds "more than seven million pirated books in a 'central library'" according to Alsup's own ruling. Seven. Million. Pirated. Books. The company could face "up to $150,000 in damages per copyrighted work"—potentially over $1 trillion in statutory damages—yet the judge treats this as a minor footnote while blessing the underlying theft.

Meanwhile, authors like Andrea Bartz, whose creative labor built Claude's capabilities, get nothing. Zero compensation for their involuntary contribution to a multi-billion-dollar AI business. The judge's ruling essentially declares that if you steal enough copyrighted material and process it sufficiently, theft becomes legal innovation.

This isn't advancing "the progress of the arts and sciences." It's institutionalizing the largest theft of intellectual property in human history.

The most maddening aspect of Alsup's ruling isn't just its legal shakiness—it's the smug certainty that stealing from millions of creators serves some greater good. It doesn't. It serves Anthropic's shareholders while gutting the economic incentives that drive creative work.

If this precedent stands, we're not building a more innovative future. We're building a world where human creativity becomes free raw material for AI companies, and the people who actually create get nothing.

Ready to protect your creative IP in an AI-first world where courts can't tell theft from transformation? At Winsome Marketing, our growth experts help creative businesses and content companies develop strategies that safeguard intellectual property while leveraging AI responsibly. Let's build defenses against the coming AI copyright apocalypse.

 