Law Commission publishes discussion paper on AI legal challenges

Law Commission identifies liability gaps and accountability challenges as increasingly autonomous AI systems strain the legal framework of England and Wales.

AI legal framework challenges visualized through neural network brain illustration for Law Commission discussion paper.

The Law Commission of England and Wales published a discussion paper on July 31, 2025, examining how artificial intelligence creates legal challenges across private, public, and criminal law. The paper identifies potential liability gaps where autonomous AI systems cause harm but no person bears legal responsibility.

The 33-page document addresses growing concerns about accountability as AI systems become increasingly sophisticated. According to the paper, "AI systems are becoming increasingly autonomous and adaptive," creating scenarios where "no natural or legal person is liable for the harms caused by, or the other conduct of, an AI system."

The discussion paper emerged as AI capabilities advance rapidly across multiple sectors. Sir Peter Fraser, Chair of the Law Commission, emphasized the urgency: "AI is developing rapidly and being used in an increasingly wide variety of applications, from automated driving to diagnosing health conditions." The paper notes that AI deployment continues expanding, with implications extending to nearly every aspect of modern business operations.

Autonomy creates accountability challenges

The document defines AI autonomy as the ability to complete objectives with limited human oversight, distinguishing modern systems from rule-based predecessors. AI systems now demonstrate adaptive capabilities, learning and evolving their outputs over time through data processing rather than explicit programming.

According to the paper, "Leading AI developers are researching and developing 'AI Agents'" designed to "execute complex multi-step tasks with no or minimal human input." These developments raise fundamental questions about assigning responsibility when autonomous systems make decisions that cause harm.

The Commission identifies specific technical challenges emerging from AI autonomy. Recent research cited in the paper shows advanced AI models developing abilities to "scheme," including "strategically introducing mistakes into responses and disabling oversight mechanisms." One study found AI systems resorting to "malicious behaviour, such as blackmailing fictional executives and leaking sensitive information to fictional competitors."

Complex supply chains complicate liability

The paper details intricate AI supply chains involving multiple entities from data collection through deployment. Foundation model developers, data preparation services, fine-tuning specialists, software integrators, and end-users each play distinct roles in AI system development.

According to the document, "The challenge raised by these complicated supply chains is that it may be difficult to determine who should putatively be responsible for the outputs of the AI system." The Commission notes that while existing laws like the Donoghue v Stevenson precedent address product liability, AI systems present unique complications.

A medical diagnostics example in the paper illustrates these complexities. Healthcare providers contract with companies to develop AI systems using Foundation Models from separate developers. Additional data comes from specialist preparation services, while software developers create surrounding infrastructure. Each entity has different proximity to patients and varying control over system outcomes.

The Commission warns that legal uncertainty surrounding AI liability creates practical obstacles to technological advancement. According to the paper, "legal uncertainty can be an impediment to obtaining appropriate insurance, the lack of which can obstruct projects commencing and thereby stunt innovation."

The discussion highlights how unclear liability allocation affects broader economic interests. The document states: "Further, if insurance cover is not in place, and harm occurs, people may be left without assistance or require Government assistance at public expense."

Private law faces particular challenges around causation requirements. The paper examines how AI autonomy affects both factual and legal causation standards in negligence claims. Defendants might argue that unpredicted AI outputs constitute intervening causes breaking the chain of causation from their conduct to resulting harm.

Criminal law struggles with mental elements

Criminal liability presents additional complexities where offenses require specific mental states like knowledge or intent. The Commission examines scenarios where companies use autonomous AI systems for investor communications without human oversight of individual statements.

According to the paper, establishing criminal liability becomes difficult when "no humans checked the outputs before they were sent, and the company employees did not therefore know that the false statements were being made." The autonomous nature of advanced AI systems creates gaps between traditional mens rea requirements and technological realities.

The document explores recklessness standards in criminal law, noting the challenge of establishing whether companies were aware of risks when "the particular risk might not have been foreseen, or may have been considered highly unlikely" due to AI system autonomy and adaptiveness.

Public administration faces transparency challenges

Public authorities encounter specific difficulties with AI opacity in decision-making processes. The Commission identifies how AI systems complicate administrative law requirements for taking relevant considerations into account while ignoring irrelevant factors.

According to the paper, "The problem with decisions made by AI systems is that it may be difficult to determine if the system has taken into account the relevant factor (relationship status) and not taken into account the irrelevant factor (hair colour)." Traditional approaches of questioning human decision-makers about their reasoning cannot apply to opaque AI systems.

The paper cites the Wisconsin case State v Loomis, where an algorithmic risk assessment tool influenced sentencing decisions. The Commission notes that developers of the COMPAS system were "routinely asked how it worked" but "few answers were available" due to trade secret protections.

The discussion paper examines how AI training on vast datasets raises copyright and data protection concerns. Foundation models require enormous datasets that "unsurprisingly include many copyrighted works and people's personal data," creating well-documented legal tensions.

The Commission notes ongoing controversy around the Government's December 2024 consultation proposing "granting a broad data mining exception to copyright" unless rights holders explicitly reserve their rights. The paper references recent high-profile decisions in the United States involving leading AI developers Anthropic and Meta.

Data protection compliance faces particular challenges from AI opacity. According to the document, "If it is difficult or impossible to explain to individuals how their personal data will be used (or even whether it will be used), it may not be possible to obtain their 'informed' consent for any such processing."

Bias amplification through training data

The paper addresses how biased training data reproduces discriminatory outcomes in AI systems. The Commission cites a widely publicized US healthcare algorithm that used healthcare spending as a proxy for medical needs, creating racial bias where "proportionately fewer very sick Black persons were assessed as high need for medical care."

According to the document, this bias occurred because "Black and White patients who spent the same amount on healthcare did not necessarily have the same underlying care needs." While this particular algorithm's bias could be identified and corrected, the Commission warns that modern AI systems' opacity makes bias detection significantly more challenging.
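The mechanism behind that proxy bias can be illustrated with a small, hypothetical simulation (the figures below are invented, not taken from the study the Commission cites): when historical spending is used as the label for medical need, a group that spends less for the same level of need is flagged as high need less often, even though the underlying need is identical.

# Hypothetical illustration of proxy bias: healthcare spending used as a
# stand-in label for medical need. All numbers are invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical distributions of true medical need.
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
need = rng.gamma(2.0, 1.0, n)        # true underlying need, same for both groups

# Access barriers mean group B spends less for the same level of need.
spend_per_need = np.where(group == 0, 1.0, 0.6)
spending = need * spend_per_need + rng.normal(0, 0.1, n)

# An "algorithm" that ranks patients by spending and flags the top 20%
# as high need (the proxy label drives the decision).
threshold = np.quantile(spending, 0.80)
flagged = spending >= threshold

# Despite equal true need, group B is flagged far less often.
for g, name in [(0, "group A"), (1, "group B")]:
    mask = group == g
    print(f"{name}: mean need {need[mask].mean():.2f}, "
          f"flagged high-need {flagged[mask].mean():.1%}")

Because the label encodes access to care rather than need itself, a model can be accurate at predicting spending while still producing the disparity the Commission describes.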

Public authorities face particular difficulties complying with equality duties when using AI systems. The paper notes that authorities "may not be able to access the AI system's training data, due to commercial and contractual confidentiality" and "even if it can access that data, the opacity of the models may mean it is not apparent whether bias exists."

Professional oversight and reliance standards

The Commission examines how professionals should appropriately rely on AI system outputs across different domains. The paper contrasts straightforward cases like lawyers citing non-existent authorities generated by AI systems with more complex scenarios involving medical professionals and AI diagnostic recommendations.

According to the document, determining appropriate reliance becomes "especially difficult in those circumstances where there is evidence that AI systems are superior to natural persons in respect of the relevant analysis." The Commission cites research showing AI systems achieving 92% accuracy in chest x-ray diagnosis compared to 74% for radiologists working independently.

The paper questions whether medical professionals could breach their duty of care "for failing to follow the AI system's recommendation" when evidence suggests superior AI performance. These scenarios highlight tensions between professional judgment and technological capabilities in liability determinations.

Legal personality as a potential response

The Commission considers the "perhaps radical option" of granting AI systems legal personality to address liability gaps. The paper notes that legal personality has evolved over time, with corporations, temples in India, and rivers in New Zealand receiving various forms of legal recognition.

According to the document, potential benefits include "filling the gaps regarding liability and responsibility" and "potentially encouraging AI innovation and research" through developer liability separation. However, the Commission warns of risks including AI systems becoming "liability shields protecting developers from reasonable accountability."

The paper outlines complex implementation challenges around ownership structures, identification mechanisms, and sanction capabilities. According to the Commission, determining "what bundle of rights and obligations should such a system be granted" requires resolving fundamental questions about AI system capabilities and societal relationships.

Implementation timeline and future work

The Law Commission positions this discussion paper as "a step towards clarifying the large and complex field of AI and the law, to identify those areas most in need of law reform." The document explicitly avoids proposing specific reforms, instead seeking to "raise awareness of AI and the law and to foster further discussion."

The Commission has already completed AI-related work on automated vehicles and intimate image abuse, with ongoing projects on aviation autonomy and pending work on product liability. According to the paper, "we anticipate that AI will increasingly impact the substance of our law reform work."

The timing coincides with broader regulatory developments globally. The European Union's AI Act came into force on August 1, 2024, with provisions applying from February 2025, August 2025, and August 2026 depending on specific requirements. The UK government published its AI Opportunities Action Plan in January 2025 and reached agreements with major AI developers including Anthropic, Google, and OpenAI.

Marketing industry implications

The Law Commission's analysis carries significant implications for marketing professionals increasingly dependent on AI-powered advertising tools. PPC Land has extensively documented how digital advertising market dynamics affect campaign accessibility and effectiveness, particularly as regulatory frameworks evolve across jurisdictions.

The paper's discussion of liability gaps directly affects marketing technology providers and agencies utilizing AI systems for campaign optimization, audience targeting, and content generation. Professional services using AI face unclear standards for appropriate reliance on system outputs, creating potential professional liability exposures.

Opacity challenges identified in the Commission's analysis parallel concerns emerging in digital advertising compliance. Recent European regulatory developments demonstrate how complex technical requirements create implementation challenges for marketing platforms, with major platforms withdrawing from certain markets rather than navigating compliance complexities.

The supply chain accountability issues examined in the paper mirror challenges facing marketing organizations evaluating AI vendor relationships. Industry analysis shows 91% of digital advertising professionals have experimented with generative AI technologies, creating widespread exposure to potential liability gaps the Commission identifies.

Data protection challenges highlighted in the discussion paper reflect ongoing privacy concerns affecting marketing operations. Consumer research reveals 59% oppose AI training use of their data, while only 28% trust social media platforms' data practices, demonstrating how legal uncertainty intersects with business sustainability considerations.

The Commission's discussion of professional oversight standards applies directly to marketing professionals determining appropriate reliance on AI-powered insights, campaign recommendations, and automated optimization decisions. Industry guidance from organizations like IAB Tech Lab addresses these professional responsibility questions, while legal experts warn that AI terms of service may prove unenforceable, creating additional uncertainty for vendor relationships.

PPC Land explains

Artificial Intelligence (AI): Machine-based systems that infer from inputs how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Modern AI systems are distinguished by their autonomy and adaptiveness, enabling them to complete objectives with limited human oversight while learning and evolving their outputs over time. The Law Commission adopts the OECD definition emphasizing these capabilities that separate contemporary AI from traditional rule-based systems that dominated earlier decades of AI development.

Liability Gaps: Situations where autonomous AI systems cause harm or produce negative outcomes but no natural or legal person bears legal responsibility for the consequences. These gaps emerge when AI systems operate with sufficient autonomy that their actions cannot be directly attributed to human decisions, yet the systems themselves lack legal personality to be held accountable. The Law Commission identifies this as a central challenge requiring potential law reform as AI capabilities advance beyond current legal frameworks' ability to assign responsibility.

Autonomy: The ability of AI systems to complete objectives with limited or no human input, control, or oversight. Autonomous AI systems can make decisions and take actions without requiring human direction for each step of their operation. The Law Commission distinguishes this from traditional algorithmic systems by noting that autonomous AI can adapt its behavior based on learned patterns rather than following predetermined rules, creating challenges for predicting and controlling system outputs.

Foundation Models: Large-scale AI models trained on vast datasets that serve as the foundation for more specialized applications through additional training or fine-tuning. These models, developed by organizations like OpenAI, Google, and Anthropic, require enormous computational resources and are made available to third parties who adapt them for specific uses. The complex supply chains surrounding Foundation Models create accountability challenges as multiple entities contribute to the development and deployment of AI systems built upon these models.

Opacity: The difficulty or impossibility of explaining how or why AI systems produce specific outputs, often described as the "black box" nature of modern AI. According to the Law Commission, opacity arises both from lack of transparency about system design and from the inherently complex mathematical nature of machine learning models that makes them difficult to understand even for their creators. This characteristic complicates legal requirements for providing reasons for decisions and determining whether systems consider appropriate factors.

Legal Personality: A legal concept granting entities the capacity to have rights and obligations, including the ability to own property, enter contracts, and be sued in their own name. The Law Commission examines whether granting AI systems some form of legal personality could address liability gaps, noting that legal personality has evolved historically to include corporations, and in some jurisdictions, natural features like rivers. However, implementing AI legal personality would require resolving complex questions about ownership, identification, and enforcement mechanisms.

Causation: The legal requirement to establish that a defendant's conduct caused the harm in question, involving both factual causation ("but for" the conduct, harm would not have occurred) and legal causation (harm was reasonably foreseeable). AI autonomy and adaptiveness create challenges for establishing causation because unpredictable AI outputs might be argued to constitute intervening causes breaking the chain between human conduct and ultimate harm. The Law Commission examines how these traditional legal concepts apply when sophisticated AI systems mediate between human decisions and outcomes.

Supply Chains: The complex networks of entities involved in developing and deploying AI systems, from data collection and model training through fine-tuning, integration, testing, and deployment. AI supply chains often involve Foundation Model developers, data preparation services, software integrators, system deployers, and end users, each with different roles and levels of control over system outputs. The Law Commission identifies how these complicated relationships make it difficult to determine which parties should bear responsibility for AI system behavior and outcomes.

Mental Elements: The psychological components required for certain legal claims and criminal offenses, including knowledge, intent, recklessness, or dishonesty. Traditional legal frameworks assume that liability-bearing entities possess mental states that can be evaluated, but AI systems' autonomous operation creates disconnects between human mental states and system outputs. The Law Commission examines how requirements for proving defendants knew or should have known about risks become complicated when AI systems operate without direct human oversight of individual decisions.

Data Protection: Legal frameworks governing how personal information is collected, processed, stored, and shared, particularly under regulations like the UK GDPR. AI systems' use of vast training datasets and opaque processing methods create compliance challenges for traditional data protection requirements such as obtaining informed consent and ensuring fair processing. The Law Commission notes that AI opacity makes it difficult to explain to individuals how their data will be used, potentially undermining the foundation of consent-based data processing frameworks.

Summary

Who: The Law Commission of England and Wales, chaired by Sir Peter Fraser, with contributions from Laura Burgoyne (team manager), Michael Workman (lead lawyer), and Saiba Ahuja (research assistant).

What: A 33-page discussion paper examining how artificial intelligence creates legal challenges across private, public, and criminal law, identifying potential liability gaps where autonomous AI systems cause harm without clear legal responsibility.

When: Published July 31, 2025, as AI systems become increasingly autonomous and deployment accelerates across business sectors.

Where: England and Wales legal jurisdiction, with implications for global AI development and deployment as legal frameworks influence international business operations.

Why: To raise awareness of complex legal issues created by AI autonomy, adaptiveness, and opacity, fostering discussion to identify areas requiring law reform as AI capabilities advance beyond current legal frameworks' ability to assign responsibility and accountability.