Stability AI wins UK High Court case vs Getty Images

UK High Court judge Mrs Justice Joanna Smith DBE has largely rejected Getty Images’ copyright case against Stability AI, ruling in November 2025 that the maker of Stable Diffusion did not commit secondary copyright infringement, while Getty’s central claim over model training had already fallen away because the alleged training acts were not shown to have happened in this jurisdiction.
But she did find limited trademark infringement where AI-generated images reproduced Getty’s watermark.

The case, filed in 2023, was expected to become the first clear UK test of whether an AI developer must obtain permission to use large volumes of copyrighted images for model training. Instead, the central question was never decided. 

Getty abandoned the core, primary-infringement allegation during trial because it could not prove that the scraping and model-training operations it complained about actually took place in the UK, and UK copyright law is territorial – it applies only to acts done in the UK. Once that claim fell away, the court was left with narrower issues and found that Stability AI’s model, as it is delivered to users, does not store or reproduce Getty’s photographs in a way that would make the company secondarily liable.

The judge’s reasoning turned on two linked points. First, for UK copyright to bite, the claimant had to show an infringing act inside the UK. Getty’s theory was that Stability AI, a UK company, was responsible for the ingestion of some 12 million Getty images to train Stable Diffusion. But the evidence put the training infrastructure outside the UK, so the court treated those acts as foreign. Second, on the remaining theory – that every time Stable Diffusion outputs an image it is effectively reproducing protected material it was trained on – the judge said the model does not “contain” Getty’s images and does not output them as stored copies, so the conditions for secondary infringement were not met. That part of the ruling is explicit: “The Secondary Infringement Claim fails.”

The one area where Getty succeeded was branding. The court accepted that some Stable Diffusion outputs contained Getty’s watermark, or close variants of it, and that this amounted to trademark infringement. Getty argued that Stability AI should not be able to shift responsibility to end users, because the provider controls the training data and the model that generates the output. The court agreed with that framing and said liability in such watermark cases rests with the model provider, not with individual users, because it is the provider that put into circulation a model capable of generating trademark-bearing images. Getty immediately called that part of the judgment “a significant win for intellectual property owners.”

Even so, the judge stressed that the trademark win was historic but narrow and fact-specific. The watermark finding does not mean that every AI output resembling a protected photo is automatically an infringement, and the passage in the judgment describing Getty’s success makes clear that the claim was limited in scope and that the court did not accept the broader attempt to characterize Stable Diffusion as a general-purpose infringement engine. Taken as a whole, the judgment, which runs to well over 90 pages, reads as a win for AI developers on the core copyright theory, with only a narrow setback on visible brand elements.

For AI and media companies the practical consequence is awkward: the UK now has a high-profile AI-and-copyright decision that does not actually say whether training on copyrighted images without a licence is lawful, because the claimant could not tie the training to the UK.

Getty Images has already said it will take the UK court’s factual findings into the US litigation, which remains active. That reflects another theme in the London judgment: nothing in it prevents a claimant from suing in the place where the model was actually trained. The UK court simply asked whether the complained-of acts happened here and, when the answer was no, dealt only with what was left. Territoriality, not AI exceptionalism, is what stopped the big copyright question from being answered this week.  
