Oxford researcher analyzes limitations of new EU AI regulations

Professor Sandra Wachter's recent research reveals regulatory loopholes in the EU AI Act, Product Liability Directive, and AI Liability Directive that may leave key ethical issues in AI development unaddressed.


On August 19, 2024, shortly after the EU's landmark Artificial Intelligence Act (AI Act) entered into force, Professor Sandra Wachter of the Oxford Internet Institute published an analysis highlighting several limitations and loopholes in the legislation. According to Wachter, strong lobbying efforts from big tech companies and EU member states resulted in the watering down of many key provisions in the final version of the Act.

Wachter, Professor of Technology and Regulation at the Oxford Internet Institute, University of Oxford, researches the legal and ethical implications of AI. She argues that the AI Act relies too heavily on self-regulation, self-certification, and weak oversight mechanisms, and that it features far-reaching exceptions for both public and private sector AI uses.

Her analysis, published in the Yale Journal of Law & Technology, also examines the enforcement limitations of the related EU Product Liability Directive and AI Liability Directive. These frameworks predominantly focus on material harms while neglecting immaterial, monetary, and societal harms such as algorithmic bias, AI hallucinations, and financial losses caused by faulty AI products.

Key facts from Wachter's research

  • The AI Act introduced complex pre-market risk assessments that allow AI providers to avoid "high risk" classification and associated obligations by claiming their systems do not pose significant risk of harm.
  • Conformity assessments to certify AI systems' compliance with the Act will be conducted by providers themselves rather than independent third parties in most cases.
  • The Act focuses transparency obligations on AI model providers while placing very limited obligations on providers and deployers of AI systems that directly interact with and impact users.
  • Computational thresholds used to determine whether general purpose AI models pose "systemic risks" are likely to cover only a small number of the largest models like GPT-4 while excluding many other powerful models with similar capabilities (see the sketch after this list).
  • The Product Liability Directive and AI Liability Directive place a high evidentiary burden on victims of AI harms to prove defectiveness and causality, with limited disclosure mechanisms available from AI providers.
  • The two liability directives are unlikely to cover immaterial and societal harms caused by algorithmic bias, privacy violations, reputational damage, and the erosion of scientific knowledge.
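
To make the compute-threshold point concrete: the AI Act presumes that a general purpose AI model poses systemic risk when the cumulative compute used for its training exceeds 10^25 floating-point operations. The minimal Python sketch below shows how such a bright-line test works and why models just under the line escape the extra obligations; the model names and FLOP figures are illustrative assumptions, not real measurements.

    # Minimal sketch of the AI Act's compute-based presumption of "systemic risk"
    # for general purpose AI models (training compute above 1e25 FLOPs).
    # Model names and FLOP estimates are illustrative assumptions only.

    SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # threshold set in the AI Act

    def presumed_systemic_risk(training_flops: float) -> bool:
        """Return True if a model crosses the compute presumption threshold."""
        return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

    # Hypothetical general purpose models with rough training-compute estimates.
    models = {
        "frontier-model-A": 2.0e25,  # above the line: presumed systemic risk
        "large-model-B": 9.0e24,     # just below the line: no presumption
        "capable-model-C": 5.0e24,   # similar capabilities, also excluded
    }

    for name, flops in models.items():
        label = "systemic risk (presumed)" if presumed_systemic_risk(flops) else "below threshold"
        print(f"{name}: {flops:.1e} FLOPs -> {label}")

As the sketch illustrates, a single numeric cut-off is easy to apply, but models trained with marginally less compute, yet comparable capabilities, fall outside the systemic-risk regime, which is the gap Wachter's analysis highlights.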

To address these shortcomings, Wachter proposes requiring third-party conformity assessments, expanding the scope of banned and high-risk AI practices, clarifying responsibilities along the AI value chain, and reforming the liability directives to capture a broader range of harms. She argues these changes are necessary to create effective guardrails against the novel risks posed by AI in the EU and beyond, as the bloc's regulations are likely to influence AI governance approaches globally.

The European Commission, Council and Parliament reached a political agreement on the text of the AI Act in December 2023 after more than three years of negotiations. The legislation was formally adopted in May 2024 and entered into force on August 1, 2024, with its obligations applying in stages between 2025 and 2027. Talks are still ongoing regarding the two liability directives.

The AI Act is the first comprehensive legal framework globally to regulate the development and use of artificial intelligence. Its risk-based approach prohibits certain AI practices deemed "unacceptable risk", while subjecting "high-risk" AI systems to conformity assessments, human oversight, and transparency requirements before they can be placed on the EU market.

However, Wachter's analysis suggests the legislation may not go far enough to protect fundamental rights and mitigate AI-driven harms. She notes that many high-risk areas such as media, science, finance, and insurance, along with consumer-facing applications like chatbots and pricing algorithms, are not adequately covered by the Act's current scope.

The analysis also highlights how last-minute lobbying efforts by EU member states France, Italy and Germany led to the weakening of provisions governing general purpose AI models like those underpinning OpenAI's ChatGPT. Strict rules were opposed out of concern they could stifle the competitiveness of domestic AI companies hoping to rival US tech giants.

Looking at enforcement, Wachter finds the Act's reliance on voluntary codes of conduct and self-assessed conformity inadequate. She advocates for mandatory third-party conformity assessments and external audits to verify providers' claims about their AI systems' risk levels and mitigation measures.

With respect to the Product Liability Directive and AI Liability Directive, key limitations include their focus on material harms and high evidentiary burdens placed on claimants. Wachter argues immaterial and societal damages like bias, misinformation, privacy violations and the erosion of scientific knowledge are unlikely to be captured, leaving major regulatory gaps.

To rectify these issues, the analysis proposes expanding the directives' scope to cover a wider range of harms, reversing the burden of proof onto AI providers, and ensuring disclosure mechanisms apply to both high-risk and general purpose AI systems. Wachter also recommends setting clear normative standards that providers must uphold rather than simply requiring transparency.

While acknowledging the EU's trailblazing efforts to govern AI, Wachter ultimately concludes that bolder reforms are needed to the AI Act and liability directives to create truly effective safeguards. She emphasizes the global implications, as the bloc's approach is expected to serve as a blueprint for regulations in other jurisdictions.

As legislators worldwide grapple with the complex challenge of mitigating AI risks while enabling innovation, Wachter's research offers a timely contribution to the debate. Her analysis provides policymakers with concrete recommendations to close loopholes, strengthen enforcement, and center AI governance on protecting rights and societal values.

Key Takeaways

  • The EU AI Act, while pioneering, contains several limitations and loopholes that may undermine its effectiveness in governing AI risks
  • Overreliance on self-regulation, weak enforcement mechanisms, and limited scope of "high-risk" AI systems are major shortcomings
  • The Product Liability and AI Liability Directives are ill-equipped to address immaterial and societal harms caused by AI
  • Reforms like third-party conformity assessments, expanded scope of harms, and reversed burden of proof could strengthen the regulations
  • As a likely global standard, improving the EU's approach is crucial to enable responsible AI innovation worldwide

Professor Wachter's research also explores potential solutions to address the limitations identified in the EU's AI regulations. She argues that closing current loopholes will be essential to upholding the AI Act's stated aims: promoting trustworthy AI that respects fundamental rights while fostering innovation.

One key recommendation is to expand the list of prohibited AI practices and add more "high-risk" categories under the AI Act. Wachter suggests that general purpose AI models and powerful large language models (LLMs) should be classified as high-risk by default given their vast capabilities and potential for misuse.

To strengthen enforcement, the analysis calls for mandatory third-party conformity assessments rather than allowing self-assessments by AI providers. External audits, similar to those required for online platforms under the Digital Services Act, could also help verify compliance and effectiveness of risk mitigation measures.

Wachter emphasizes the need for clear, normative requirements for AI providers, such as standards for accuracy, bias mitigation, and the alignment of outputs with factual sources, rather than merely demanding transparency. Harmonized standards requested by the Commission should provide practical guidance in these areas.
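
One way to picture what a measurable, normative requirement could look like in practice is to express it as a concrete metric with a tolerance. The sketch below uses demographic parity difference as an example bias check; the metric choice, the data, and the 0.05 tolerance are purely illustrative assumptions and are not taken from the AI Act, harmonized standards, or Wachter's paper.

    # Illustrative sketch: operationalizing a "bias mitigation" requirement as a
    # measurable check. Metric, data, and tolerance are assumptions, not values
    # drawn from the AI Act or Wachter's analysis.
    from collections import defaultdict

    def demographic_parity_difference(decisions, groups):
        """Largest gap in positive-decision rates between any two groups."""
        counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
        for decision, group in zip(decisions, groups):
            counts[group][0] += int(decision)
            counts[group][1] += 1
        rates = [pos / total for pos, total in counts.values()]
        return max(rates) - min(rates)

    # Hypothetical outcomes from a consumer-facing scoring system.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

    gap = demographic_parity_difference(decisions, groups)
    TOLERANCE = 0.05  # hypothetical limit a standard might set
    print(f"demographic parity difference: {gap:.2f}")
    print("within tolerance" if gap <= TOLERANCE else "exceeds tolerance: mitigation required")

The point of the sketch is not the specific metric but the shift it represents: from asking providers to disclose how a system works to requiring that measurable quality and fairness targets actually be met.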

Reforming the Product Liability and AI Liability Directives is another priority outlined in the research. Wachter proposes expanding their scope beyond material damages to capture immaterial and societal harms, while easing claimants' burden of proof in cases involving complex AI systems.

Drawing inspiration from a recent German court ruling that found Google liable for reputational damages caused by its autocomplete search suggestions, Wachter explores how a similar standard could apply to LLM providers whose models generate false, biased or misleading content.

The analysis further highlights the importance of tackling AI's environmental footprint, recommending that conformity assessments consider energy efficiency and that providers face incentives to reduce the carbon impact of resource-intensive AI models.
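
To illustrate the kind of figure such an energy-efficiency requirement could ask providers to report, the back-of-the-envelope sketch below converts assumed training compute into an energy and emissions estimate. Every number in it (training compute, hardware efficiency, data-centre overhead, grid carbon intensity) is an illustrative assumption, not a measured value for any real model.

    # Back-of-the-envelope estimate of training energy and emissions, the sort of
    # figure a conformity assessment could require providers to disclose.
    # All inputs are illustrative assumptions.

    TRAINING_FLOPS = 1e25            # assumed total training compute
    HARDWARE_FLOPS_PER_JOULE = 1e12  # assumed effective accelerator efficiency
    PUE = 1.2                        # assumed data-centre power usage effectiveness
    GRID_KG_CO2_PER_KWH = 0.4        # assumed grid carbon intensity

    energy_joules = TRAINING_FLOPS / HARDWARE_FLOPS_PER_JOULE * PUE
    energy_kwh = energy_joules / 3.6e6              # 1 kWh = 3.6e6 joules
    emissions_tonnes = energy_kwh * GRID_KG_CO2_PER_KWH / 1000

    print(f"estimated training energy: {energy_kwh:,.0f} kWh")          # ~3,300,000 kWh
    print(f"estimated emissions: {emissions_tonnes:,.0f} tonnes CO2e")  # ~1,300 tonnes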

Finally, Wachter calls for an open, democratic process to determine the standards LLMs should be aligned with to mitigate the spread of misinformation and erosion of shared societal knowledge. She cautions against ceding this crucial governance question solely to AI providers.

In conclusion, Wachter's research offers a comprehensive critique of the gaps in the EU's emerging AI regulatory framework, along with a roadmap for policymakers to address them. While praising the bloc's proactive leadership, she argues much work remains to create a governance system capable of reining in AI's most pernicious risks.

As momentum builds worldwide to set rules and standards for AI, Wachter's analysis underscores the high stakes, not only for the EU but for all jurisdictions looking to it as a model. Her insights provide valuable input for ongoing negotiations over the liability directives and for the implementation of the AI Act.

With the global race to regulate AI intensifying, policymakers are urged to heed the lessons outlined in this research to close loopholes, strengthen safeguards for rights and societal values, and secure a framework that rises to the profound challenges posed by the technology. The stakes could not be higher.

Key Facts

  • The final version of the EU AI Act was weakened due to lobbying from big tech companies and member states, relying heavily on self-regulation, self-certification, and including broad exceptions.
  • The Act introduced complex pre-market risk assessments that allow AI providers to avoid "high-risk" classification and obligations by claiming no significant risk of harm.
  • Most conformity assessments to certify AI systems' compliance will be conducted by providers themselves, not independent third parties.
  • Transparency obligations focus on AI model providers, with limited obligations on providers and deployers of AI systems interacting directly with users.
  • Computational thresholds for "systemic risk" classification of general purpose AI models will likely only cover a small number of the largest models like GPT-4.
  • The Product Liability Directive and AI Liability Directive place high evidentiary burdens on victims to prove AI defectiveness and causality, with limited disclosure mechanisms.
  • The liability directives are unlikely to cover immaterial and societal harms like algorithmic bias, privacy violations, reputational damage, and erosion of scientific knowledge.
  • Many high-risk AI applications in media, science, finance, insurance, and consumer-facing systems like chatbots and pricing algorithms are not adequately covered under the Act.
  • Lobbying efforts by France, Italy and Germany led to weaker provisions on general purpose AI models to avoid stifling domestic AI companies' competitiveness.
  • Wachter proposes mandatory third-party conformity assessments, expanded scope of banned and high-risk AI, clarified responsibilities in the AI value chain, and reformed liability directives to capture broader harms.
  • Recommendations include classifying general purpose AI models as high-risk by default, requiring external audits, setting clear standards for accuracy and bias mitigation, and easing claimants' burden of proof.
  • Wachter calls for an open, democratic process to determine standards for aligning large language models to mitigate misinformation and knowledge erosion risks.
  • The research highlights the global implications of the EU's approach, which is expected to serve as a blueprint for AI regulations in other jurisdictions worldwide.