
UK Court Rules That Stable Diffusion Isn't "Infringing Copies"

The High Court in London just handed Stability AI a major victory, dismissing Getty Images' primary copyright claims and ruling that Stable Diffusion—trained on millions of copyrighted photos—is not an "infringing copy" under UK law. Judge Joanna Smith's decision hinged on a technical but critical distinction: an AI model that doesn't store or reproduce copyrighted works isn't itself an infringing copy, even if it was trained on those works.

Getty dropped its main claims about model training and generated images after realizing it couldn't prove training happened in the UK. The case narrowed to secondary copyright and trademark issues, with Getty winning a limited trademark claim about watermarks in older Stable Diffusion versions.

But the broader precedent is clear: under current UK law, training AI models on copyrighted material without permission isn't automatically infringement if the model doesn't reproduce the originals. This is a seismic moment for generative AI—and a disaster for content creators hoping copyright law would protect them.

The Legal Logic: Models Aren't Copies If They Don't Store Originals

Judge Smith's ruling turns on how the UK Copyright, Designs and Patents Act (CDPA) defines an "infringing copy." Getty argued that Stable Diffusion's model weights, the mathematical parameters learned during training, constitute an infringing copy because creating them would have been infringement had it occurred in the UK. The judge disagreed, stating that a model "which does not store or reproduce any Copyright Works (and has never done so)" doesn't meet the legal definition of an infringing copy, even though UK law can treat intangible objects as "articles."

That interpretation aligns with how these models actually function: they learn statistical patterns across training data but don't retain verbatim copies of individual images. The distinction between transformative learning and mechanical reproduction is fast becoming the dominant framework in jurisdictions evaluating AI training.
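
To make the "learns patterns, doesn't store works" point concrete, here's a toy PyTorch sketch. This is not Stability's code and nothing like Stable Diffusion's scale or architecture; it just shows that after a training step, the only artifact that persists is a fixed-size set of weight tensors, whose size doesn't grow with the number of images seen.

```python
# Toy illustration, not Stability's code: a trained model persists only
# learned parameter tensors, never the training images themselves.
import torch
import torch.nn as nn

# A tiny stand-in "denoiser" (Stable Diffusion's U-Net is vastly larger,
# but the principle is the same).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in training batch: random "images" plus synthetic noise.
images = torch.rand(8, 3, 64, 64)
noisy = images + 0.1 * torch.randn_like(images)

# One training step: the images only influence gradients, which nudge the
# weights. Nothing copies the pixels into the model.
opt.zero_grad()
loss = nn.functional.mse_loss(model(noisy), images)
loss.backward()
opt.step()

# What ships is the state dict: a fixed-size bag of floats whose size is
# independent of how many images were ever seen during training.
n_params = sum(p.numel() for p in model.parameters())
print(f"checkpoint stores {n_params:,} parameters, zero images")
torch.save(model.state_dict(), "toy_denoiser.pt")
```

Whether that technical reality should settle the legal question is exactly what's in dispute; the court's answer, for now, is that it does.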

The practical impact is enormous. If AI models aren't infringing copies when trained on copyrighted material—as long as they don't reproduce that material directly—then the entire business model of generative AI companies is legally viable under UK law.

Getty's case was supposed to establish that "scraping" millions of copyrighted images to train models without permission constitutes infringement. Instead, it established the opposite: training is permissible if the model transforms the data rather than storing it. For content creators, this is existentially threatening. Their work can be ingested, analyzed, and used to create competing outputs without compensation or consent, and copyright law offers no remedy as long as the AI doesn't reproduce originals verbatim.

Getty's Narrow Win on Trademarks—And Why It Doesn't Matter

Getty secured a limited trademark victory: some older versions of Stable Diffusion could generate watermarks similar to Getty Images' or iStock's trademarks in specific cases. But Judge Smith emphasized this was confined to particular image examples and noted it's "impossible to know how many (or even on what scale) watermarks have been generated in real life" that match this pattern. The court dismissed Getty's claims for reputational harm and rejected additional damages.

This is a Pyrrhic victory. Stability AI can resolve the watermark issue by updating model versions and adding filters to prevent trademark-like outputs. The core business, training models on copyrighted images, remains untouched.

The watermark finding is instructive for marketing teams using AI-generated content. If your AI tool occasionally produces outputs containing recognizable trademarks or logos, you're exposed to trademark infringement claims even if the underlying copyright claims fail. The solution is straightforward: implement output filters that screen for brand marks, logos, and watermarks before publishing. We've advised clients to run AI-generated visuals through trademark detection tools as part of their review workflows. It's a small operational cost that sharply reduces legal risk.
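
As a rough illustration of what such a filter can look like, here's a minimal Python sketch using the Pillow and imagehash libraries to compare generated images against a folder of reference trademark and watermark crops. The folder layout, file names, and threshold are illustrative assumptions, not a vetted product.

```python
# Minimal pre-publish screen: flag AI outputs that perceptually resemble
# known trademark/watermark crops. Paths and threshold are illustrative.
from pathlib import Path

import imagehash
from PIL import Image

HAMMING_THRESHOLD = 10  # tune on your own data; lower = stricter match

def load_reference_hashes(ref_dir: str) -> dict[str, imagehash.ImageHash]:
    """Hash each reference trademark/watermark crop once, up front."""
    return {p.name: imagehash.phash(Image.open(p))
            for p in Path(ref_dir).glob("*.png")}

def flag_trademark_risk(candidate_path: str,
                        refs: dict[str, imagehash.ImageHash]) -> list[str]:
    """Return names of reference marks the candidate image resembles."""
    h = imagehash.phash(Image.open(candidate_path))
    # Hash subtraction yields the Hamming distance between the two hashes.
    return [name for name, ref in refs.items()
            if (h - ref) <= HAMMING_THRESHOLD]

refs = load_reference_hashes("reference_marks/")   # hypothetical folder
hits = flag_trademark_risk("ai_output.png", refs)  # hypothetical output
if hits:
    print("Hold for human review; resembles:", hits)
```

A whole-image perceptual hash is only a first pass: a small watermark tucked into one corner can slip through, so production workflows typically tile the image into regions or add a dedicated logo-detection model before sign-off.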

What This Means for Content Creators and Marketing Teams

For content creators—photographers, illustrators, designers—this ruling confirms their worst fears: copyright law wasn't designed for AI and offers minimal protection against model training. Getty Images, one of the world's largest stock photo companies with vast legal resources, couldn't make the copyright claims stick.

Individual creators have no chance. The only viable strategy is contractual: negotiate licensing agreements with AI companies that compensate for training use, or lobby for legislative changes that explicitly address AI training as a separate category requiring consent. Coalitions of artists, photographers, and writers are pushing for EU and US legislation that would require opt-in consent for training data. The UK ruling makes that legislative push more urgent.

For marketing teams using AI-generated content, this ruling is a green light—with caveats. You can deploy tools like Stable Diffusion without worrying that model training itself constitutes infringement. But you're still liable for outputs that reproduce copyrighted works too closely or incorporate trademarks. The risk shifts from training to generation. Build review processes that screen AI outputs for similarity to known copyrighted works and trademark violations before publication. The legal exposure is no longer "did the model train on this illegally" but "does the output infringe directly." That's a narrower, more manageable risk.
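
One hedged way to operationalize that similarity screen, assuming you maintain a library of works you must not reproduce: embed both sides with an off-the-shelf CLIP model via Hugging Face transformers and escalate anything above a cosine-similarity threshold you've calibrated yourself. The file paths and threshold below are illustrative; only the model checkpoint name is a real, published one.

```python
# Sketch of a near-duplicate screen using CLIP image embeddings.
# Calibrate the threshold on known matching/non-matching pairs first.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(path: str) -> torch.Tensor:
    """Return a unit-normalized CLIP embedding for one image."""
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

SIMILARITY_THRESHOLD = 0.92  # illustrative; tune on your own corpus

known = embed("known_copyrighted_work.jpg")  # hypothetical reference
candidate = embed("ai_output.png")           # hypothetical AI output
score = (known @ candidate.T).item()         # cosine similarity
if score >= SIMILARITY_THRESHOLD:
    print(f"Escalate to legal review (similarity {score:.2f})")
```

Embedding similarity is a triage signal, not a legal judgment; anything it flags still needs a human (and, for close calls, counsel) before publication.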

Copyright Law Is Officially Behind the Curve

The UK ruling exposes a fundamental mismatch: copyright law protects reproduction and distribution of specific works, but AI models learn patterns across millions of works without reproducing any single one. Existing legal frameworks weren't built for this, and courts are struggling to apply 20th-century statutes to 21st-century technology.

Judge Smith's decision is legally defensible under current UK law—and completely inadequate as policy. Content creators deserve compensation when their work is used commercially, even if that use is transformative rather than reproductive. The current system gives them neither protection nor payment. That's not a sustainable equilibrium. Either legislation will evolve to address AI training explicitly, or the creative industries will collapse under the weight of uncompensated use. Getty's loss just accelerated that reckoning.


Ready to deploy AI-generated content with legal guardrails that protect your brand? Winsome Marketing's growth experts help teams build compliant workflows that balance innovation with risk management. Let's talk.
