The UK high court ruling did not resolve core copyright controversies in AI training
The long-anticipated ruling from the UK High Court in the Getty Images v Stability AI case was expected to clarify how copyright law applies to artificial intelligence. It didn’t. Instead of resolving one of the most critical legal challenges in modern AI (whether training AI models on third-party copyrighted materials without permission breaks the law), the court stepped around the issue entirely.
This avoidance wasn’t because the court lacked interest. Getty Images dropped its central claim regarding AI model training after conceding that Stability AI didn’t conduct its training work in the UK. That point matters. Jurisdiction defines the limits of the court’s power, and in this case, it cut the legs out from under a potentially landmark ruling before it could be made. Iain Connor, an Intellectual Property Partner at Michelmores, put it plainly: the UK still lacks a clear legal ruling on whether an AI model’s learning process, based on ingestion of copyrighted material, is lawful.
For senior decision-makers, that leaves a vacuum. Absent regulatory direction, AI leaders must navigate that space without a clear map, and executives now carry most of the burden for intellectual property risk when developing, training, or integrating generative models across global operations.
The situation calls for a shift in thinking. Until governments or courts catch up, legal uncertainty remains the default. That means risk modeling, internal policy clarity, and legal safeguards have to be built at the company level. Relying on conventional IP paradigms isn’t sufficient. If you’re integrating generative AI into your products, this is one of those red-flag issues that needs board-level attention, today.
The ruling confirmed trademark infringement for using Getty’s watermarks in AI-generated images
The court did make one definitive call: Stability AI infringed Getty’s and iStock’s trademarks by generating images containing their watermarks. Legally, that’s a win for Getty. Technically, that part of the decision wasn’t controversial. Intentionally or not, AI-generated content mimicking another company’s watermark is a basic breach of branding rights.
But here’s what matters: this win is cosmetic compared to the broader legal question about AI and copyright. Iain Connor, speaking on this point, acknowledged it won’t move the needle much for Getty, saying it offers “little solace.” He’s right. From a strategic perspective, clear rules around copyright training would’ve reshaped AI operations globally. Instead, we got a modest opinion on a narrow instance of trademark use.
For business leaders, the takeaway is that trademark protection still works as intended, even in an AI-driven context. That’s encouraging. Your brand identity is still defensible, digital or not. But don’t confuse that with broader intellectual property protection. This decision did nothing to establish whether using copyrighted images, music, or written content in AI training datasets is lawful.
In short, one container was marked but the ship kept sailing. If your leadership team is waiting for the courts to define fair rules for AI content use, don’t hold your breath. You’re better off building your legal and operational buffer around proactive licensing strategies and IP audits than waiting for case law to align with technological reality.
Jurisdictional limitations restricted the court’s ability to address the legality of training AI on copyrighted materials
The core issue in the Stability AI case, whether training on copyrighted data breaks the law, was never fully tested. The reason? Jurisdiction.
Stability AI said it didn’t conduct the training of its image generation model, Stable Diffusion, inside the UK. Getty conceded that point. Because of this, the UK High Court couldn’t rule on whether the training itself constituted copyright infringement. That limitation had a ripple effect: the most important question remained untouched. Instead of a direct answer on the legality of AI training practices, the outcome offered procedural clarity and legal ambiguity at the same time.
Nathan Smith, Intellectual Property Partner at Katten Muchin Rosenman, said the decision offers only “superficial clarity.” For companies relying on legal frameworks to approve their AI training pipelines, this is a weak foundation. You still don’t know where the legal line starts or ends.
For C-suite leaders overseeing AI policy, this kind of jurisdictional gap should prompt long-term strategic coordination. If your AI systems operate across borders, you’ll need a risk registry that accounts for legal exposure not just in one region but in many. Copyright law varies significantly between the UK, US, EU, and Asia-Pacific. Without structured contracts, localized legal input, and documented AI training practices, you’re leaving too much to chance.
In simpler terms: the court couldn’t answer the big question because it wasn’t allowed to. That puts the pressure back on you, not regulators, to build frameworks that can survive in gray areas.
The court found that AI systems learn statistical patterns rather than creating direct copies of copyrighted works
This is the part that matters to anyone involved in developing or integrating AI into their systems. According to expert evidence presented in court, Stable Diffusion, the AI model at the center of the case, does not store the images it trains on. It doesn’t recreate those images in its outputs either. Instead, it statistically analyzes patterns in the training data and uses this understanding to generate entirely new content.
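To make that distinction concrete, here is a deliberately simplified sketch of what a diffusion-style training step looks like. It assumes PyTorch, and the tiny model and toy noising scheme are illustrative only, not a description of Stable Diffusion’s actual architecture. The structural point it shows: training adjusts a fixed-size set of weights to predict noise, and the training images themselves are discarded after each batch rather than stored in the model.

```python
# Toy diffusion-style training step (illustrative only; not Stable Diffusion's architecture).
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for a diffusion model's denoising network."""
    def __init__(self, dim: int = 3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, noisy: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Predict the noise that was mixed into the image, conditioned on the noise level t.
        return self.net(torch.cat([noisy, t], dim=1))

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    images = torch.rand(16, 3 * 32 * 32)            # stand-in training batch
    t = torch.rand(16, 1)                           # random noise level per example
    noise = torch.randn_like(images)
    noisy = (1 - t) * images + t * noise            # corrupt the batch with noise
    loss = ((model(noisy, t) - noise) ** 2).mean()  # learn to predict that noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Only the weights persist. Their size is fixed regardless of how many images were seen;
# the image batches themselves are thrown away after each gradient step.
print(sum(p.numel() for p in model.parameters()), "parameters retained; no images stored")
```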
James Clark, a Partner at Spencer West focused on data protection and AI regulation, explained it this way: training a model on copyrighted content does not, in itself, result in the model storing or reproducing those works. From a legal standpoint, that weakens arguments rooted in traditional definitions of copying.
This point changes the playing field. If the data is not stored or retrievable and the model isn’t reproducing the original work directly, then most conventional infringement arguments begin to lose weight. That doesn’t mean there’s zero risk. It means enforcement will require a different legal theory, one that’s not grounded in proving direct copies were made.
For business leaders, this offers conditional reassurance. AI models built using this statistical approach may fall outside current legal frameworks that define “copying.” That could be good news for generative platforms and tools, but remember: laws evolve. There’s growing international pressure to redefine IP rules in response to AI technologies.
Until that happens, stay grounded. Make sure your tech teams can document how data is ingested and interpreted by your models. Being able to demonstrate how your systems operate, including which datasets were used and how outputs are created, will matter more in regulatory reviews going forward.
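A lightweight way to make that documentation concrete is to record provenance at ingestion time. The sketch below is a minimal, assumed approach using a JSON-lines manifest; the field names (source_url, licence, ingested_at) are illustrative rather than any established standard.

```python
# Minimal ingestion-provenance log: one JSON line per training asset (field names are illustrative).
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_ingestion(manifest: Path, asset: Path, source_url: str, licence: str) -> dict:
    """Append a provenance entry for a single training asset to a JSON-lines manifest."""
    entry = {
        "file": str(asset),
        "sha256": hashlib.sha256(asset.read_bytes()).hexdigest(),  # fingerprint of the asset
        "source_url": source_url,                                  # where it came from
        "licence": licence,                                        # the terms it was obtained under
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }
    with manifest.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```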
The case highlighted the challenge of assigning liability for intellectual property violations in the context of AI
One of the few takeaways from this case is where accountability lies, and it’s not with the end users of AI-generated content. The legal commentary around the case strongly points to the developers and providers of AI models as the ones ultimately responsible when it comes to intellectual property violations.
Wayne Cleghorn, Partner at Excello Law and a specialist in data protection and AI regulation, made the point clearly: when copyrighted works are used, or misused, in the development of AI systems, liability falls primarily on those who build and distribute the models. Getty reinforced this by stating it couldn’t feasibly pursue every developer who may have used its content. Instead, Getty is calling for governments to introduce transparency standards that would allow IP rightsholders to proactively protect content used in AI systems.
For executives overseeing AI strategy, this is an operational trigger. If you’re building models or integrating off-the-shelf ones, you’ll need legal and compliance teams involved from the start of the training process. Document how and where training data comes from. Build systems that allow for future compliance checks. Waiting for external litigation or regulatory pressure is not a sound strategy when you’re the liable party.
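Building on the same hypothetical manifest, a compliance check can be as simple as flagging any entry whose licence is not on an approved list. This is a sketch under that assumption, not a substitute for legal review; the approved-licence values are examples only.

```python
# Hypothetical audit pass over the ingestion manifest: flag entries needing legal review.
import json
from pathlib import Path

APPROVED_LICENCES = {"CC0-1.0", "CC-BY-4.0", "commercial-licence-on-file"}  # example values only

def audit_manifest(manifest: Path) -> list[dict]:
    """Return manifest entries whose licence is not on the approved list."""
    flagged = []
    for line in manifest.read_text(encoding="utf-8").splitlines():
        entry = json.loads(line)
        if entry.get("licence") not in APPROVED_LICENCES:
            flagged.append(entry)
    return flagged

# Usage sketch: run before each training cycle and route flagged entries to legal/compliance.
# flagged = audit_manifest(Path("training_manifest.jsonl"))
```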
This also raises broader corporate questions, like how much legal exposure your AI activities carry. Without clear government rules, any meaningful containment of IP liability must happen inside the organization. Transparency, documentation, and responsible procurement of datasets aren’t compliance extras anymore; they’re primary strategic considerations.
Legal ambiguity is driving a shift toward negotiated commercial licensing agreements over protracted litigation
The lack of legal certainty isn’t stopping companies from moving forward. What’s happening instead is a shift in how organizations manage copyright risks around AI. In the absence of clear laws or reliable court precedent, businesses like Getty Images are securing their positions through licensing. That’s a strategic realignment, from a reactive, court-driven response to a proactive commercial model.
This became clear following Getty’s announcement of a new AI-focused licensing deal. Notably, its share price rose after the disclosure, which suggests public markets saw the move as a sign of leadership rather than a retreat. Commercial licensing provides a clearer, faster route to de-risking AI development while building long-term partnerships with content holders.
For decision-makers, this signals where the ecosystem is heading. Companies that want to avoid reputational damage and legal friction will need to consider licensing talks sooner rather than later. Negotiated deals with content owners allow AI model developers to operate at scale without triggering lawsuits or brand conflicts.
It’s also efficient. Licensing builds predictability into product development timelines. It creates room for operational clarity: legal gets aligned with engineering, and product launches can move forward without last-minute constraints. In short, the winners in this space will be the companies that align their legal strategy with their long-term model capabilities. Betting on drawn-out court cases is not a productive use of time or capital when the market already favors proactive solutions.
Main highlights
- UK courts offer no legal clarity on AI training: Executives should not rely on courts to define whether using copyrighted material in AI training is lawful. Without jurisdictional coverage, the UK High Court left this core issue unresolved, leaving a critical gap in legal risk guidance.
- Trademark protections hold, but broader IP concerns remain: While the court upheld Getty’s trademark claim over AI-generated watermarked images, it avoided the larger question of content use in machine learning. Leaders should not confuse limited IP wins with strategic protection.
- Jurisdiction matters when training AI: Operations outside national boundaries can block meaningful legal decisions. Tech leaders must consider cross-border legal structures when designing AI pipelines to avoid jurisdictional blind spots.
- AI models that learn patterns may avoid traditional infringement claims: Courts acknowledged that generative models like Stable Diffusion do not store or reproduce original images. Leaders should ensure their engineering teams can defend model design and explain how outputs are generated.
- Liability defaults to those building and deploying AI models: Legal responsibility lies with developers and model providers, not end users. Companies developing or fine-tuning models must document data sources and build legal safeguards into their AI stack.
- Market momentum favors licensing over litigation: In the absence of clear legislation, commercial licensing is becoming the go-to strategy. Executives should explore proactive rights agreements to minimize IP conflicts and maintain development velocity.


