Dutch data authority sets GDPR preconditions for AI models

Authority publishes guidance on lawful use of generative artificial intelligence.


The Dutch Data Protection Authority has launched a comprehensive consultation outlining how data protection laws should apply to generative artificial intelligence development and deployment. Released last week, the guidance marks a significant step toward establishing regulatory clarity for AI systems processing personal data.

According to the Dutch Data Protection Authority, the consultation is open until 27 June 2025, just over a month after its announcement. The guidance document, titled GDPR preconditions for generative AI, establishes detailed requirements for companies developing or using AI systems that process personal data.

The publication comes as generative AI faces increasing scrutiny from European regulators. According to the AP, as the Dutch authority is known, it is plausible that irregularities occurred during the development of foundation models, and its overarching assessment is that, based on current practice, the vast majority of generative AI models currently fall short in terms of lawfulness.

The authority's findings center on the data collection practices used to train AI models. To train these models, almost all publicly accessible data on the internet has been scraped, including special categories of personal data that were placed online without being made public by the data subjects themselves.

This creates significant legal challenges under European data protection law. The General Data Protection Regulation includes strict provisions for processing special categories of personal data, which include information about racial or ethnic origin, political opinions, religious beliefs, health data, and biometric information.

The Dutch authority acknowledges the complexity of the issue, noting that the continued use of these foundation models by Dutch and European parties is therefore not inherently unlawful, as follows from an analysis by the European Data Protection Board (EDPB), the body that coordinates enforcement across the EU's 27 member states.

The consultation document outlines five key preconditions for lawful AI development and deployment. These include requirements that training data must be lawfully obtained, stricter conditions for collecting special categories of personal data, proper data curation to remove unwanted personal information, systems to facilitate data subject rights, and clear purpose descriptions for AI processing activities.
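
By way of illustration only, the sketch below shows what a crude curation pass over a scraped training corpus might look like, using a hypothetical keyword denylist for special categories of data. None of it is taken from the AP document, and real curation pipelines would rely on classifiers, denylists and human review rather than a handful of keywords.

```python
# Hypothetical curation pass over a scraped corpus: drop documents that appear
# to contain special categories of personal data (here flagged via a tiny,
# illustrative keyword denylist).
SPECIAL_CATEGORY_TERMS = {"diagnosis", "hiv", "diabetes", "religion", "trade union"}

def curate(documents: list[str]) -> list[str]:
    """Return only the documents that pass the special-category screen."""
    kept = []
    for doc in documents:
        lowered = doc.lower()
        if any(term in lowered for term in SPECIAL_CATEGORY_TERMS):
            continue  # exclude the document from the training corpus
        kept.append(doc)
    return kept

scraped = [
    "Forum post: my diagnosis was confirmed last week at the clinic.",
    "The weather in Utrecht was mild this spring.",
]
print(curate(scraped))  # only the second document survives curation
```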

Technical implementation represents a particular challenge. The patterns that generative AI models have learned are encoded in numerical parameters known as 'weights'. The information a model retains is therefore no longer explicitly represented but is implicitly distributed across this collection of weights.
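
As a purely illustrative aside, the toy example below (not taken from the guidance) fits a tiny linear model and shows that what remains after training is a handful of numeric weights rather than any retrievable copy of the training records, which is what makes removing a specific person's data from a trained model so difficult.

```python
import numpy as np

# Toy "training data": ages and salaries of fictional individuals.
X = np.array([[25.0], [35.0], [45.0], [55.0]])            # feature: age
y = np.array([30_000.0, 42_000.0, 51_000.0, 63_000.0])    # target: salary

# Fit a two-parameter linear model by least squares. After training, the
# individual records are no longer needed; only the fitted weights remain.
X_design = np.hstack([X, np.ones_like(X)])                 # add intercept column
weights, *_ = np.linalg.lstsq(X_design, y, rcond=None)

print(weights)  # two numbers (slope, intercept) -- no explicit record of anyone
```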

This technical architecture complicates compliance with individual rights under European law. Data subjects can exercise their rights at different steps in the generative AI chain, and providers and deployers of generative AI models must establish a system for handling requests from data subjects who want to exercise those rights.
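
A minimal sketch of what such a system could look like follows; the request types, the 30-day approximation of the GDPR's one-month response period, and the register structure are illustrative assumptions rather than requirements taken from the guidance.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum

class RequestType(Enum):
    ACCESS = "access"                 # Art. 15 GDPR
    RECTIFICATION = "rectification"   # Art. 16 GDPR
    ERASURE = "erasure"               # Art. 17 GDPR
    OBJECTION = "objection"           # Art. 21 GDPR

@dataclass
class DataSubjectRequest:
    subject_id: str
    request_type: RequestType
    received: date
    resolved: bool = False

    @property
    def deadline(self) -> date:
        # Approximation of the one-month response period in Art. 12(3) GDPR.
        return self.received + timedelta(days=30)

@dataclass
class RequestRegister:
    """Minimal intake register so no request falls through the cracks."""
    requests: list[DataSubjectRequest] = field(default_factory=list)

    def log(self, request: DataSubjectRequest) -> None:
        self.requests.append(request)

    def overdue(self, today: date) -> list[DataSubjectRequest]:
        return [r for r in self.requests if not r.resolved and today > r.deadline]

register = RequestRegister()
register.log(DataSubjectRequest("subject-001", RequestType.ERASURE, date(2025, 5, 1)))
print(register.overdue(date(2025, 6, 15)))  # flags the unresolved erasure request
```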

The authority emphasizes that current AI model deployment often falls short of legal requirements. For deployers of generative AI applications, this means that they must take all reasonable measures to prevent the reproduction of erroneous or unwanted personal data.
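
One plausible measure, sketched below with a hypothetical generate() function standing in for whatever model the deployer actually uses, is to post-process output and redact apparent personal identifiers before they reach users; the patterns shown are deliberately simple and purely illustrative.

```python
import re

# Illustrative post-processing step for a deployed generative AI application.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "iban": re.compile(r"\bNL\d{2}[A-Z]{4}\d{10}\b"),   # shape of a Dutch IBAN
}

def redact(text: str) -> str:
    """Replace apparent personal identifiers in model output before display."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

def generate(prompt: str) -> str:
    # Placeholder for the actual model call made by the deployer.
    return "You can reach the complainant at piet@example.nl."

print(redact(generate("Summarise the complaint file.")))
```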

The guidance also addresses emerging technical solutions. Newer techniques such as retrieval-augmented generation (RAG) and chain-of-thought (CoT) prompting can help reduce the reproduction of incorrect and unwanted personal data.
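
A minimal sketch of the RAG idea follows, using a toy word-overlap retriever and a placeholder model call, neither of which comes from the AP guidance; the point is that answers are grounded in a small set of curated documents rather than in whatever the model may have memorised during training.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
CURATED_DOCS = [
    "The consultation on GDPR preconditions for generative AI closes on 27 June 2025.",
    "Deployers must facilitate data subject rights such as access and erasure.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Toy retriever: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def call_model(prompt: str) -> str:
    # Placeholder for a real model call in whichever system the deployer uses.
    return f"(model answer grounded in: {prompt!r})"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, CURATED_DOCS))
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_model(prompt)

print(answer("When does the consultation close?"))
```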

Looking ahead, the authority plans expanded oversight activities. The AP will take a number of additional steps to make responsible progress with generative AI, starting by identifying the questions and challenges surrounding the responsible development and deployment of the technology.

For the marketing community, these developments signal a fundamental shift in how AI tools can be legally deployed. Companies using generative AI for content creation, customer service, or data analysis must now demonstrate compliance with detailed data protection requirements. The emphasis on data minimization and purpose limitation could particularly impact marketing applications that rely on broad data collection.

The consultation represents part of broader European efforts to regulate artificial intelligence while maintaining innovation. The AI Act will ensure that from 2025 there is appropriate and proactive oversight of foundation models and the organisations that develop these models.

Timeline

May 23, 2025: Dutch Data Protection Authority releases comprehensive consultation on GDPR preconditions for generative AI. The consultation remains open until June 27, 2025, giving stakeholders approximately one month to provide feedback on the regulatory guidelines.

January 2024: Dutch government published its vision for generative AI, emphasizing balanced approaches to opportunities and risks.

January 2023: Dutch DPA established dedicated algorithms coordination directorate to oversee AI and algorithmic supervision.

Related Coverage