High Court rules Stable Diffusion training does not infringe copyright

The UK High Court dismisses Getty Images' copyright claims against Stability AI, ruling that AI models do not store copies of training works, while finding narrow trademark violations over synthetic watermarks.

The High Court of England and Wales delivered a partial victory for Stability AI on November 4, 2025, dismissing Getty Images' secondary copyright infringement claims while finding limited trademark violations related to synthetic watermarks in older versions of Stable Diffusion. Mrs Justice Joanna Smith ruled that the AI image generation model does not reproduce or store Getty's copyrighted photographs during training or in its model weights, addressing fundamental questions about how copyright law applies to diffusion models.

According to the judgment handed down in case IL-2023-000007, Stability AI prevailed on the core copyright claims after Getty Images dropped its primary infringement allegations during the trial. The court found no reproduction of Getty images under sections 17, 22, and 23 of the Copyright, Designs and Patents Act 1988. Justice Smith determined that Stable Diffusion learns statistical distributions from training data rather than storing copies of original works, a technical distinction that proved central to the copyright analysis.

Getty Images commenced proceedings in January 2023, alleging that Stability AI used approximately 11 million copyrighted photographs from its database to train Stable Diffusion without authorization. The stock photography company claimed infringement based on inputs used during model training, outputs generated by users, and the model itself constituting an "infringing article" under UK copyright law. Stability AI contested all claims, asserting that training occurred outside UK jurisdiction and that the model operates through statistical learning rather than content reproduction.

The jurisdictional challenge proved decisive for Getty's primary claims. Stability AI presented evidence demonstrating that development and training of Stable Diffusion took place on servers outside the United Kingdom, primarily using Amazon Web Services clusters and the CompVis GitHub repository. Getty dropped its input and output copyright claims toward the end of the trial, citing difficulties in proving that infringing acts occurred within UK territorial boundaries. This left only the secondary infringement claim regarding whether Stable Diffusion itself constitutes an "infringing copy" imported into the UK.

Justice Smith rejected Getty's secondary infringement theory, ruling that AI models trained on copyrighted material do not necessarily constitute infringing articles. The judgment establishes that because Stable Diffusion does not store or reproduce copyright works in its model weights or outputs, it cannot be classified as an infringing copy under section 27(3) of the CDPA. This determination addresses a novel legal question about whether intangible information systems fall within statutory provisions originally drafted for physical media.

The technical operation of diffusion models became central to the copyright analysis. According to court documents, Stable Diffusion uses a latent diffusion architecture that compresses images into a mathematical representation during training, extracting statistical patterns rather than storing pixel-level copies. The model consists of three components: a variational autoencoder that handles compression and decompression, a U-Net that performs denoising operations in latent space, and a CLIP text encoder that processes user prompts. The training process involves adjusting billions of numerical weights and biases across neural network layers to minimize prediction errors, without maintaining copies of training images in the final model.
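The pipeline's component structure can be seen directly in open-source tooling. The following is a minimal sketch assuming the Hugging Face diffusers library and an illustrative Stable Diffusion v1.x model identifier; it loads the three components named in the judgment and shows that each resolves into learned parameters rather than a store of images.

```python
# Minimal sketch, assuming the Hugging Face diffusers library; the model
# identifier is illustrative, not a reference to the releases at issue.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

components = {
    "vae": pipe.vae,                    # variational autoencoder: compresses images to/from latent space
    "unet": pipe.unet,                  # U-Net: predicts the noise to remove at each denoising step
    "text_encoder": pipe.text_encoder,  # CLIP text encoder: turns prompts into conditioning vectors
}

# Each component is a set of learned numerical parameters, not an image store.
for name, module in components.items():
    n_params = sum(p.numel() for p in module.parameters())
    print(f"{name}: {n_params:,} parameters")
```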

Getty Images secured narrow victories on trademark infringement claims related to synthetic watermarks. The court found that Stable Diffusion versions 1.x and 2.1 generated images containing visual elements resembling Getty's watermark, potentially causing consumer confusion about the source or affiliation of synthetic images. Justice Smith ruled that Stability AI bore responsibility for these trademark violations, rejecting the company's defense that users rather than the model provider should be liable for generated outputs.

However, the trademark findings applied to limited circumstances. According to the judgment, Stability AI was responsible only for model releases distributed through its own platforms, including DreamStudio and the Developer Platform API. The court determined that Stability AI was not liable for the CompVis GitHub repository release, which occurred before the company's commercial involvement. Additionally, Getty's claims under section 10(3) of the Trade Marks Act 1994 and passing off allegations were dismissed for lack of evidence regarding detriment or misrepresentation.

The ruling addresses several related legal theories Getty advanced. The court rejected claims of database right infringement, finding insufficient evidence that Stability AI extracted substantial portions of Getty's systematically arranged image database. Getty dropped these database claims near the conclusion of the trial, acknowledging similar jurisdictional challenges to those affecting its copyright allegations. The passing off claims failed because Getty could not demonstrate that synthetic images containing watermark-like elements would lead consumers to believe Getty endorsed or created the AI-generated content.

Procedural history reveals the contentious nature of the litigation. Stability AI filed a strike-out application in October 2023, seeking to dismiss Getty's training and development claims pre-trial. Justice Smith denied that application in December 2023, citing "unanswered questions and inconsistencies" in evidence submitted by Emad Mostaque, Stability AI's former chief executive officer. The judge noted apparent contradictions between Mostaque's sworn statements and his previous media appearances discussing Stable Diffusion's development. Both parties underwent extensive disclosure processes regarding the precise nature of training and development activities.

The case carries implications for how AI developers approach model training and deployment in the UK market. The US Copyright Office released a comprehensive report in May 2025 examining when AI training on copyrighted works constitutes fair use versus requiring licensing from rights holders. That report emphasized transformativeness and market effects as key factors in fair use analysis, suggesting that AI companies implement guardrails to prevent infringing outputs. The UK High Court ruling complements this framework by establishing that the training process itself, when conducted through statistical learning without content reproduction, may not trigger copyright liability under UK law.

Marketing professionals face evolving legal standards for AI-generated creative assets. Pinterest introduced controls for AI-generated content in October 2025, allowing users to adjust exposure to synthetic imagery across beauty, art, fashion, and home decor categories. These platform-level responses to AI content proliferation reflect broader industry concerns about distinguishing human creativity from machine-generated material. Advertisers deploying AI tools for product visualization and campaign creative must navigate trademark considerations highlighted by the Getty ruling, particularly regarding inadvertent reproduction of protected marks.

The trademark findings underscore risks AI model providers face when training data includes protected branding elements. According to the judgment, older versions of Stable Diffusion occasionally generated images containing watermark-like patterns that could be mistaken for authentic Getty branding. This occurred despite Stability AI's efforts to filter inappropriate content during training on the LAION-5B dataset, which contained over 5 billion image-text pairs scraped from the internet. The court held that model providers bear responsibility for such outputs, regardless of whether individual users intentionally prompted the generation of branded imagery.

Technical evidence presented during the trial detailed Stable Diffusion's architecture and training methodology. The model underwent training through iterative adjustments of billions of parameters across multiple GPU clusters, processing batches of images to minimize differences between predicted and actual denoised outputs. This process occurred over numerous epochs, with each epoch representing a complete pass through the training dataset. The resulting model weights encode statistical relationships between text descriptions and visual patterns, enabling the system to generate novel images from text prompts without accessing original training images.
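To make that description concrete, here is a hedged sketch of a single text-to-image diffusion training step in the style popularized by open-source implementations; it is illustrative, not Stability AI's actual code. The model is optimized to predict the noise added to a compressed latent, and only the weights change; the image batch itself is discarded after the step.

```python
# Illustrative diffusion training step (PyTorch), not Stability AI's code.
import torch
import torch.nn.functional as F

def training_step(unet, vae, scheduler, optimizer, images, text_embeddings):
    # Compress the image batch into latent space (0.18215 is the usual
    # latent scaling factor in Stable Diffusion v1.x implementations).
    latents = vae.encode(images).latent_dist.sample() * 0.18215
    noise = torch.randn_like(latents)
    timesteps = torch.randint(0, scheduler.config.num_train_timesteps,
                              (latents.shape[0],), device=latents.device)
    noisy_latents = scheduler.add_noise(latents, noise, timesteps)  # forward diffusion
    # Predict the added noise, conditioned on the text embeddings.
    pred = unet(noisy_latents, timesteps, encoder_hidden_states=text_embeddings).sample
    loss = F.mse_loss(pred, noise)  # the prediction error the judgment describes
    loss.backward()
    optimizer.step()       # adjust weights; no copy of `images` persists in the model
    optimizer.zero_grad()
    return loss.item()
```

Repeating this step over every batch in the dataset constitutes one epoch; training runs many epochs, and only the accumulated weight adjustments survive into the released model.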

Getty Images emphasized that the ruling confirmed its copyright-protected works were used to train Stable Diffusion, regardless of where training occurred. The company stated it would leverage factual findings from the UK judgment in its parallel US litigation against Stability AI in California federal court. Getty dismissed its earlier Delaware action in August 2025, consolidating US claims in California where a jury trial has been demanded. That case seeks damages calculated at $150,000 per infringement for 11,383 works, potentially totaling $1.7 billion.

Stability AI characterized the outcome as resolving core copyright concerns that formed the basis of Getty's lawsuit. Christian Dowell, general counsel for Stability AI, stated that the judgment provides clarity for AI developers regarding the lawfulness of model training on publicly available data. The company argued throughout the proceedings that imposing copyright liability for statistical learning processes would fundamentally impair AI research and development, particularly for open-source initiatives that rely on large-scale internet datasets.

Legal experts offered divergent assessments of the ruling's broader significance. Simon Barker, partner and head of intellectual property at Freeths, described the decision as drawing a clear line that training AI models on copyright works without storing or reproducing those works does not constitute secondary copyright infringement under UK law. He noted that the judgment is likely to influence future litigation and policy debates on AI and intellectual property internationally. Rebecca Newman, legal director at Addleshaw Goddard, expressed concern that the ruling establishes a precedent allowing models trained on infringing data outside the UK to be imported without legal repercussions, suggesting UK secondary copyright protections may be insufficient for creators.

The ruling arrives amid intensifying debate over AI training practices and copyright policy. The US Senate introduced the TRAIN Act in July 2025, establishing an administrative subpoena mechanism allowing copyright owners to identify which protected works were used to train AI models. That bipartisan legislation represents a procedural approach to transparency without predetermining substantive infringement outcomes. Meanwhile, research published in January 2025 cautioned against using copyright law as the primary tool to regulate AI systems, arguing that strengthening restrictions on training data could harm innovation and public interest.

International approaches to AI training and copyright continue diverging. A study published in October 2024 identified emerging convergence in copyright exceptions for AI training across the United States, European Union, Japan, and China. The research found that despite varying legal traditions, countries increasingly embrace mechanisms to balance copyright protection with AI development demands. The European Union implemented specific text and data mining exceptions in its Copyright Directive, while Japan pioneered express exceptions for computational data analysis. The UK abandoned proposed commercial text and data mining exceptions in early 2023 following creative industry opposition.

Getty Images maintains extensive partnerships with AI companies under licensing arrangements. The company announced a $3.7 billion merger with Shutterstock in January 2025, creating a combined entity serving over 1.4 million subscribers across 200 countries. That transaction emphasized enhanced capabilities in generative AI technologies through consolidated content offerings. Getty operates its own AI image generation tools trained on licensed iStock photography and video libraries, positioning itself as offering legally compliant alternatives to models trained on unlicensed internet data.

The judgment addresses access mechanisms through which users interact with Stable Diffusion models. According to court appendices, users can download models locally via GitHub and Hugging Face repositories, generate images through Stability AI's DreamStudio web interface, or access the Developer Platform API. Each method includes license terms specifying user responsibilities for generated content, with the CreativeML Open RAIL++-M License prohibiting uses that violate applicable laws, generate false information intended to harm others, or infringe intellectual property rights. The court found these terms placed obligations on users but did not absolve Stability AI of liability for trademark violations inherent in model training.

Model versioning proved relevant to trademark liability determinations. Stable Diffusion progressed through multiple releases, including versions 1.1, 1.2, 1.3, 1.4, and 2.0, with later iterations incorporating improved safety filters and training refinements. According to the judgment, trademark infringement findings applied specifically to versions 1.x and 2.1, reflecting watermark generation in those releases. Subsequent versions, including Stable Diffusion XL released in July 2023, incorporated architectural improvements and different training approaches that the court found did not produce the same trademark violations.

Commercial implications extend beyond the immediate parties. AI developers must evaluate training data sources and implement filtering mechanisms to prevent reproduction of protected marks and branding elements in generated outputs. Content licensing markets continue evolving, with various platforms establishing formal agreements for AI training data access. OpenAI faces ongoing litigation from The New York Times regarding alleged unauthorized use of news content, while Google established licensing partnerships with The Associated Press for real-time news integration into Gemini AI applications.

The Getty ruling avoids establishing comprehensive precedent on primary copyright infringement in AI training contexts. By dropping input and output claims, Getty limited the court's opportunity to rule definitively on whether using copyrighted images as training data constitutes reproduction under section 17 of the CDPA. This procedural outcome leaves unresolved questions about the application of UK copyright law to training activities conducted within UK jurisdiction. Future cases may address these substantive issues if plaintiffs can demonstrate that alleged infringement occurred on UK servers or through UK-based personnel.

Advertisers and marketers must consider trademark risks when deploying AI-generated creative assets. The ruling confirms that model providers, not only end users, bear responsibility for trademark violations embedded in training processes. Brands using AI tools for campaign development should evaluate whether generated imagery inadvertently incorporates protected marks from competitors or third parties. European Commission guidelines on AI transparency under Article 50 of the AI Act will require machine-readable marking systems for synthetic content, including watermarks, metadata, and cryptographic authentication methods.
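As a simple illustration of machine-readable marking, the sketch below embeds provenance flags in a PNG text chunk using Pillow. Production systems would use a standard such as C2PA content credentials rather than this ad-hoc scheme, and the file names and field values here are assumptions.

```python
# Ad-hoc sketch of machine-readable synthetic-content marking via a PNG
# text chunk (Pillow). Real deployments would use a standard like C2PA;
# file names and field values are illustrative assumptions.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("generated_image.png")          # hypothetical AI output
meta = PngInfo()
meta.add_text("ai_generated", "true")            # machine-readable disclosure flag
meta.add_text("generator", "example-model-v1")   # illustrative provenance field
img.save("generated_image_marked.png", pnginfo=meta)
```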

Database protection received limited attention in the final judgment due to Getty's withdrawal of those claims. The company originally argued that Stability AI extracted substantial portions of its systematically arranged image database, which required hundreds of millions of dollars in investment. UK database rights protect against unauthorized extraction and reuse of substantial database portions when substantial investment has been made in obtaining, verifying, or presenting contents. The jurisdictional challenges that undermined copyright claims similarly affected database allegations, as Getty could not prove extraction occurred within UK territory.

The case demonstrates challenges copyright holders face proving infringement in cross-border AI development scenarios. Modern machine learning infrastructure typically distributes training across multiple jurisdictions using cloud computing resources, making it difficult to establish that specific acts occurred in particular countries. This territorial aspect of copyright law creates strategic considerations for both AI developers selecting training locations and rights holders determining optimal litigation venues. Getty's decision to pursue parallel US litigation reflects attempts to navigate these jurisdictional complexities.

Training dataset composition and filtering mechanisms received scrutiny during proceedings. Stable Diffusion models were trained on subsets of the LAION-5B dataset, which contains billions of image URLs paired with text captions scraped from the internet. According to court documents, Stability AI applied LAION's NSFW detector to filter adult, violent, and sexual content, though the judgment noted this filtering did not prevent inclusion of Getty watermarked images. The presence of watermarks in training data enabled the model to occasionally generate similar visual patterns in outputs, forming the basis for successful trademark claims.
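LAION publishes per-image metadata scores alongside its URL lists, which makes the gap the judgment identified easy to picture. The sketch below filters a hypothetical metadata shard on an NSFW score while leaving watermark scores untouched; the column names follow LAION's published metadata (punsafe, pwatermark), but the thresholds and file path are assumptions, not Stability AI's actual pipeline.

```python
# Hypothetical filtering pass over a LAION-style metadata shard (pandas).
# Column names follow LAION's published metadata; thresholds and the shard
# path are illustrative assumptions, not Stability AI's actual pipeline.
import pandas as pd

shard = pd.read_parquet("laion_shard_00000.parquet")   # hypothetical shard

nsfw_filtered = shard[shard["punsafe"] < 0.1]          # NSFW filter of the kind applied

# An NSFW-only filter says nothing about watermarks, so rows like these
# survive and watermarked images enter the training set.
likely_watermarked = nsfw_filtered[nsfw_filtered["pwatermark"] > 0.8]
print(f"{len(likely_watermarked)} likely-watermarked rows pass the NSFW-only filter")
```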

Model cards accompanying Stable Diffusion releases on Hugging Face disclosed intended research purposes, potential misuses, and technical limitations. These documents stated the models were "intended for research purposes only" and identified appropriate applications including safe deployment research, understanding generative model limitations, artistic content generation, educational tools, and generative model research. The cards explicitly warned against intentionally creating images that would be disturbing, distressing, or offensive, or that propagate harmful stereotypes. The court considered whether these disclosures affected liability determinations.

Getty Images expressed concern about ongoing risks to intellectual property owners despite the narrow trademark victory. The company urged the UK government to build on current laws addressing AI-related infringement, arguing that even well-resourced companies remain vulnerable due to lack of transparent requirements for AI training practices. This advocacy position reflects broader creative industry concerns about maintaining viable business models in an environment where AI systems can generate content similar to human-created works without licensing or compensation.

The ruling's international influence remains uncertain. UK courts are not bound by each other's decisions outside the appellate hierarchy, though persuasive authority may influence similar cases. US courts hearing parallel Getty claims against Stability AI operate under different copyright statutes and fair use frameworks. The Copyright Office's May 2025 report established that US fair use analysis focuses on transformativeness, market effects, nature of copyrighted works, and amount used. These factors differ from UK secondary infringement provisions addressing infringing articles and territorial jurisdiction.

Technical expert testimony addressed whether model weights contain copyrighted expression. Stability AI's experts demonstrated that weights represent numerical parameters encoding statistical relationships rather than compressed or encrypted copies of training images. The billions of floating-point numbers in Stable Diffusion's architecture do not correspond to individual training images in decodable form. This technical reality supported the court's conclusion that models do not store or reproduce copyright works, distinguishing AI training from traditional copying activities like creating backup copies or distributing unauthorized reproductions.
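The point is straightforward to verify against any public checkpoint. A minimal sketch, again assuming the diffusers library and an illustrative model identifier: the U-Net resolves into named tensors of floating-point numbers, none of which maps back to a particular training image.

```python
# Minimal sketch: model weights are tensors of floats with no per-image
# structure. Assumes the diffusers library; the model id is illustrative.
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet")

total = sum(p.numel() for p in unet.parameters())
print(f"U-Net parameters: {total:,}")   # on the order of 860 million for v1.x

# Inspecting individual tensors shows undifferentiated numerical blocks;
# nothing associates a weight with any one training image.
for name, p in list(unet.named_parameters())[:3]:
    print(name, tuple(p.shape), p.dtype)
```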

Getty's licensing approach emphasizes providing legally compliant training data for AI development. The company offers datasets through formal agreements that specify permitted uses and provide attribution and compensation to photographers and artists. This business model depends on enforcement of intellectual property rights to maintain value propositions distinct from freely available internet data. The litigation against Stability AI serves strategic purposes beyond this specific case, signaling to the AI industry that unlicensed use of professional photography may face legal challenges despite technical arguments about statistical learning.

Stability AI's open-source distribution model complicates enforcement and liability questions. The company released Stable Diffusion model weights publicly through repositories accessible to millions of developers worldwide. Users can download, modify, and deploy these models independently without ongoing relationship with Stability AI. The court addressed which releases triggered UK liability, finding that only distributions through Stability AI's own platforms—DreamStudio and the Developer Platform—created responsibility for trademark violations, while the earlier CompVis GitHub release did not.

Commercial considerations influence how AI companies approach training data sourcing. Some major technology firms have pursued licensing agreements with publishers and content providers, while others rely on fair use or similar exceptions under applicable copyright laws. Andreessen Horowitz argued in October 2023 that AI model training extracts statistical patterns and facts rather than storing copyrighted content, with research showing extremely small rates of memorization. The venture capital firm characterized suggestions that models are warehouses of copyrighted material as misunderstanding the technology.

Enforcement mechanisms for addressing AI-generated content containing protected marks continue to evolve. Platforms hosting user-generated AI imagery face questions about their responsibilities for detecting and removing trademark violations. The Getty ruling holds model providers accountable for training processes that enable trademark infringement, but identifying synthetic images containing protected marks at scale presents technical challenges. Automated detection systems may struggle with variations and transformations that obscure brand elements while retaining recognizable characteristics.
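One way to picture the detection problem: perceptual hashing flags near-duplicates of a reference mark cheaply, but hash distance grows quickly under the crops, warps, and stylistic drift that generative outputs introduce. The sketch below uses the open-source imagehash library; the file paths and distance threshold are assumptions.

```python
# Sketch of naive mark detection via perceptual hashing (imagehash library).
# File paths and the distance threshold are illustrative assumptions.
from PIL import Image
import imagehash

# Hash of a reference crop of the protected mark (hypothetical file).
reference = imagehash.phash(Image.open("watermark_reference.png"))

def looks_like_mark(path: str, max_distance: int = 10) -> bool:
    # Hamming distance between 64-bit perceptual hashes; smaller = more similar.
    # Distorted or partially rendered marks can easily exceed the threshold.
    return (imagehash.phash(Image.open(path)) - reference) <= max_distance

print(looks_like_mark("generated_image.png"))   # hypothetical generated output
```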

The case proceeded through extensive procedural phases spanning nearly three years from initial filing to judgment. Stability AI's unsuccessful December 2023 application to strike out claims signaled that substantive issues required full trial examination. Getty's mid-trial decision to narrow its allegations reflected strategic assessments about evidentiary support and likelihood of success on various theories. This procedural evolution demonstrates the complexity of adapting existing intellectual property frameworks to AI technology contexts.

Summary

Who: Getty Images (US) Inc, Getty Images International U.C., Getty Images (UK) Limited, Getty Images Devco UK Limited, iStockPhoto LP, and Thomas M. Barwick Inc brought claims against Stability AI Limited, a London-based artificial intelligence company that developed Stable Diffusion. Justice Joanna Smith DBE presided over the case in the Business and Property Courts of England and Wales, Intellectual Property List. The litigation involved extensive expert testimony on AI training methodologies and copyright law interpretation.

What: The High Court ruled on November 4, 2025, that Stable Diffusion does not infringe Getty's copyrights through secondary infringement under sections 22 and 23 of the Copyright, Designs and Patents Act 1988. Justice Smith found that AI models learning statistical distributions from training data do not store or reproduce copyright works in model weights or outputs. However, the court determined that Stability AI committed trademark infringement under sections 10(1) and 10(2) of the Trade Marks Act 1994 by distributing Stable Diffusion versions 1.x and 2.1, which generated images containing synthetic watermarks resembling Getty's protected marks. Getty's section 10(3) trademark claims and passing off allegations were dismissed.

When: Getty Images commenced proceedings in January 2023, filed particulars of claim in May 2023, and the case proceeded to trial in June 2025. Getty dropped its primary copyright claims on June 25, 2025, during the trial. Justice Smith delivered the judgment on November 4, 2025, nearly three years after initial filing. The case addressed Stable Diffusion versions released between 2022 and 2023, with specific findings that versions 1.x and 2.1 contained trademark violations but later versions did not.

Where: The proceedings took place in the High Court of Justice, Business and Property Courts of England and Wales, Intellectual Property List (ChD), under case number IL-2023-000007. Training and development of Stable Diffusion occurred outside the United Kingdom on servers including Amazon Web Services clusters and through the CompVis GitHub repository. Getty dropped copyright claims because it could not prove infringing acts occurred within UK territorial jurisdiction. Stability AI distributed models through its DreamStudio platform and Developer Platform API, which the court found created UK liability for trademark violations.

Why: The case matters for the advertising and marketing community because it establishes the first major UK precedent on AI model training and copyright liability, clarifying that statistical learning from copyrighted works does not constitute reproduction or storage triggering secondary infringement. This affects how advertisers can deploy AI-generated creative assets and whether model providers face liability for training practices. The trademark findings demonstrate that brands must evaluate whether AI tools inadvertently incorporate protected marks in generated imagery. The ruling influences the broader legal landscape as multiple jurisdictions grapple with balancing AI innovation against intellectual property protections. Marketing professionals face evolving standards for content authenticity and disclosure as platforms implement AI content controls and regulatory frameworks develop transparency requirements.