The European Data Protection Board (EDPB) yesterday adopted a landmark opinion clarifying how data protection rules apply to artificial intelligence models, addressing key questions around anonymity, legitimate interest as a legal basis, and the use of unlawfully processed data.
According to the opinion document adopted by the EDPB on December 18, AI models trained on personal data cannot automatically be considered anonymous; data protection authorities must assess case by case whether such models truly protect individual privacy.
"The development and deployment of AI models may raise serious risks to rights protected by the EU Charter of Fundamental Rights," states the EDPB in its 35-page opinion. The board emphasizes that both the right to private life and personal data protection could be impacted during model development and deployment phases.
The opinion responds to questions raised by Ireland's Data Protection Commission, which sought to harmonize the application of privacy rules across Europe as AI adoption accelerates. It addresses three main areas: when AI models can be considered truly anonymous, how companies can justify legitimate interests for processing data, and the implications of using unlawfully processed data.
On anonymity, the EDPB establishes that for an AI model to be considered anonymous, both the likelihood of extracting personal data used in training and the possibility of obtaining such data through model queries must be "insignificant." The board notes that even when models are not designed to reveal personal information, training data may remain "absorbed" in the model's parameters.
Regarding legitimate interests as a legal basis for processing, the opinion outlines a three-step assessment: identifying a genuine legitimate interest, assessing the necessity of the processing, and balancing that interest against individuals' fundamental rights. The EDPB cites examples of potentially legitimate uses such as developing conversational agents or improving cybersecurity systems.
The board also addresses scenarios where AI models are developed using unlawfully processed personal data. When such models retain personal data and are deployed by the same controller, the initial unlawful processing could impact subsequent use. However, if a model is properly anonymized, the GDPR would not apply to its further operation.
"AI technologies may bring many opportunities and benefits to different industries and areas of life. We need to ensure these innovations are done ethically, safely, and in a way that benefits everyone," said EDPB Chair Anu Talus in the announcement accompanying the opinion.
The opinion arrives as organizations increasingly deploy AI systems, with data protection authorities across Europe reporting growing concerns from citizens. It follows the EDPB's earlier guidance on generative AI systems like ChatGPT released in May 2024.
For organizations developing or deploying AI models, the opinion underscores the need for robust technical and organizational measures to protect personal data. These include pseudonymization, data minimization strategies, and measures preventing the extraction of personal data from models.
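To illustrate what pseudonymization and data minimization can look like in a training pipeline, here is a minimal, hypothetical Python sketch. It is not drawn from the EDPB opinion; the field names, the record layout, and the separately stored secret key are all assumptions made for the example. Note that under the GDPR, pseudonymized data remains personal data so long as the re-identification key exists.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice it would be stored and access-controlled
# separately from the training data, since it allows re-identification.
SECRET_KEY = b"example-key-kept-outside-the-training-pipeline"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def minimize(record: dict) -> dict:
    """Keep only the fields the training task needs; pseudonymize the user ID.

    Fields like name, phone, or address are simply dropped (data minimization),
    and no raw email address ever enters the training set.
    """
    return {
        "user": pseudonymize(record["email"]),
        "text": record["text"],
    }

record = {"email": "jane@example.com", "name": "Jane", "text": "hello"}
clean = minimize(record)
print(sorted(clean.keys()))
```

The same keyed hash maps a given identifier to the same pseudonym every time, so records belonging to one person can still be linked for training purposes without exposing who that person is, which is precisely what distinguishes pseudonymization from anonymization.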
The board emphasizes that data protection authorities should assess AI models' compliance with privacy rules on a case-by-case basis, considering factors like the nature of processed data, processing context, and potential consequences for individuals.
This opinion aligns with the recently adopted EU AI Act, which requires providers of high-risk AI systems to ensure compliance with data protection laws. However, the EDPB notes that a provider's declaration of an AI system's compliance may not constitute conclusive evidence of GDPR adherence.
Organizations must now carefully document their data protection assessments when developing AI models, including Data Protection Impact Assessments where required. The opinion also recommends involving Data Protection Officers in evaluating legitimate interest assessments.
The EDPB's guidance comes at a critical time as organizations navigate complex intersections between AI innovation and privacy protection. While supporting responsible AI development, the opinion establishes clear boundaries to safeguard individual rights in an increasingly AI-driven world.
For supervisory authorities across Europe, this opinion provides a framework to assess AI models' compliance with data protection rules, while allowing flexibility to consider specific circumstances of each case.