The Autoriteit Persoonsgegevens (AP), the Dutch data protection authority, has this month launched a public consultation on draft guidance that sets out in detail how organisations must explain automated decisions to the individuals those decisions affect. The consultation deadline is 26 May 2026.
What the guidance covers
The 31-page draft, titled "The right to an explanation in automated decision-making," addresses one of the most contested areas of the General Data Protection Regulation: the obligation under Article 22 GDPR to provide meaningful information about decisions made without meaningful human involvement. According to the AP, automated decision-making is becoming increasingly common in contexts ranging from online car-hire applications to loan assessments, and the document is intended for both organisations that must provide explanations and individuals who are entitled to receive them.
The guidance draws a clear line between two distinct types of explanation. A general explanation must always be available and is typically provided before any decision is made - for example, in a privacy statement when a user submits data. A specific or personal explanation is triggered when an individual requests access to their data or invokes rights under Article 22(3) GDPR. According to the AP, the specific explanation must include the personal data used as factors in the decision, the essential elements of the algorithm such as the weighting of factors and intermediate steps, and the relationship between the data processed and the final outcome.
These are not optional enhancements. Under Articles 13 and 14 GDPR, organisations have an active duty to provide this information. Under Article 15, individuals can request it on demand. The organisation must, in principle, respond within one month; if complexity requires more time, it may take up to two additional months, but it must inform the data subject within the first month that an extension applies.
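For compliance teams that track these deadlines programmatically, the date arithmetic is straightforward. The sketch below is illustrative only: the receipt date is invented, and python-dateutil is simply one common way to handle month arithmetic; nothing in the AP's draft prescribes tooling.

```python
# Illustrative sketch of the response window described above: one month
# to respond, extendable by up to two further months if the data subject
# is informed within the first month. The receipt date is invented.
from datetime import date
from dateutil.relativedelta import relativedelta

request_received = date(2026, 5, 1)

initial_deadline = request_received + relativedelta(months=1)
extended_deadline = request_received + relativedelta(months=3)  # +2 months max

print(f"Respond by {initial_deadline}; with a notified extension, "
      f"no later than {extended_deadline}.")
```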
Why explainability matters
The AP frames the explanation requirement as a tool that serves multiple interests simultaneously. For individuals, it creates the practical means to actually use rights that exist on paper - the right to human intervention, the right to express a point of view, and the right to challenge a decision. Without knowing how a decision was reached, those rights are difficult to exercise.
The document cites a Leiden researcher warning that the erosion of trust in automated processes does not stop at the individual case: "People no longer have trust in the fairness of these processes, which ultimately undermines trust in administrative and judicial procedures as well."
The document also highlights a Finnish case in which a man was automatically rejected for a loan. After requesting an explanation, he discovered he would have received the loan had he been a woman, or had he had Swedish as his mother tongue. According to the AP, this case sparked wider debate about discrimination by algorithms and about the use of credit scores in general.
For organisations, the AP notes a more pragmatic benefit: transparent explanations can reduce unnecessary objection procedures, because individuals who understand which factors count can better estimate whether their application has a realistic chance.
The spectrum from insightful to opaque
One of the more technically detailed sections of the draft concerns the classification of algorithms and AI systems along what the AP describes as a spectrum of explainability, ranging from insightful models at one end to opaque models at the other.
Insightful models include rule-based systems with a limited number of rules, small regression models, and small decision trees. These are relatively easy to explain because the steps from input to output are designed by human beings and can be followed logically. The AP gives a worked example: a simple linear model predicting house prices in Groningen from floor area produces the formula price = 100,000 + 5,000 × surface area in square metres, so each additional square metre adds exactly 5,000 euros to the predicted price. Three characteristics make such models insightful: linearity, where input changes produce proportionate output changes; monotonicity, where the direction of the effect is consistent; and low complexity, where the number of interacting factors remains small.
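That worked example can be expressed in a few lines of code. The sketch below is only an illustration: the intercept and per-square-metre coefficient are the figures quoted in the draft, while the function names and the 80 m² input are invented for demonstration.

```python
# Sketch of the AP's worked example: an insightful linear model whose
# prediction can be traced step by step from input to output.
BASE_PRICE = 100_000    # intercept, in euros
PRICE_PER_M2 = 5_000    # contribution of each square metre, in euros

def predict_price(surface_m2: float) -> float:
    """price = 100,000 + 5,000 x surface area."""
    return BASE_PRICE + PRICE_PER_M2 * surface_m2

def explain(surface_m2: float) -> str:
    """A human-readable explanation tying the input to the outcome."""
    contribution = PRICE_PER_M2 * surface_m2
    return (
        f"Base price EUR {BASE_PRICE:,}; "
        f"{surface_m2:.0f} m2 x EUR {PRICE_PER_M2:,} = EUR {contribution:,.0f}; "
        f"predicted price EUR {predict_price(surface_m2):,.0f}."
    )

print(explain(80))
# Base price EUR 100,000; 80 m2 x EUR 5,000 = EUR 400,000;
# predicted price EUR 500,000.
```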
The middle of the spectrum consists of models that can be made transparent: rule-based systems with large numbers of rules, regression models with many variables, and large decision trees. More complex systems using machine learning - including neural networks - can sometimes also be made partially transparent through additional techniques. The AP identifies two broad categories of technique. The first is factor weighting, sometimes called feature importance in the literature, which shows how much each input variable contributed to a specific outcome. Tools such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) can calculate these weights for individual decisions. The second is comparative or counterfactual explanation, in which an organisation shows the individual fictitious data points that would have led to a different outcome - answering the question of what would have to change for the decision to go the other way.
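Both families of technique can be sketched briefly. The example below trains a hypothetical loan-approval model, applies per-decision factor weighting with the open-source shap package, and then runs a brute-force counterfactual search; the features, data, and search procedure are illustrative assumptions, not the AP's example.

```python
# Sketch: factor weighting and a counterfactual explanation for one
# decision of a hypothetical loan-approval model. Features, data, and
# the model itself are invented for illustration.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))              # columns: income, debt, tenure
y = (X[:, 0] - X[:, 1] > 0).astype(int)    # toy approval rule

model = GradientBoostingClassifier().fit(X, y)
applicant = X[:1]

# 1. Factor weighting: how much each input pushed this one prediction
#    away from the baseline (SHAP values for a single decision).
explainer = shap.Explainer(model.predict, X)
weights = explainer(applicant)
for name, value in zip(["income", "debt", "tenure"], weights.values[0]):
    print(f"{name}: {value:+.3f}")

# 2. Counterfactual: the smallest change to income that would have
#    produced a different outcome.
original = model.predict(applicant)[0]
for delta in sorted(np.linspace(-3, 3, 121), key=abs):
    trial = applicant.copy()
    trial[0, 0] += delta
    if model.predict(trial)[0] != original:
        print(f"Outcome flips if income changes by {delta:+.2f} "
              "(standardised units).")
        break
```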
The AP is direct about the limits of these techniques. According to the document, an interdisciplinary study examining the GDPR compliance of these methods found that none of the available techniques can yet be said to explain a decision completely enough for an individual to check it. Organisations may therefore need to supplement the output of these tools with additional contextual information.
Opaque models present a more fundamental problem. These are non-linear and non-monotone systems, often with large numbers of input factors, where the relationship between inputs and outputs is not recoverable by any currently available technique. According to the AP, for opaque models, there do not yet appear to be techniques that provide sufficient explanation in the context of automated decision-making. The guidance is unambiguous: an organisation that relies on an opaque algorithm for automated decisions must, before deploying it, determine whether it will be able to meet its explanation obligations at all.
The document includes a specific section on large language models, noting that asking a language model to explain itself is not a valid form of explanation under GDPR. The reasoning is that the model cannot evaluate its own parameters or training. Answering "3" to "what is the square root of 9?" and then explaining that "if you multiply 3 by itself you get 9" describes the answer, not the system. According to a 2024 academic paper cited by the AP, large language models cannot explain themselves - a finding with direct relevance for any organisation deploying generative AI in automated decision-making pipelines.
Governance and design
The guidance devotes an entire chapter to what the AP calls explainability-by-design - the principle that explanation capacity should be built into systems from the start rather than retrofitted after deployment. According to the document, a good explanation process consists of three phases: first, choosing which explanation techniques are needed for the model used and how to integrate them; second, formulating an explanation delivery strategy that ensures employees can communicate the explanation clearly and simply; and third, evaluating the process regularly through stakeholder feedback.
The AP connects this to the broader GDPR principle of data protection by design and by default under Article 25. Organisations that carry out a Data Protection Impact Assessment - which is likely required for most automated decision-making systems - should use that process to identify risks to transparency and establish how they will be mitigated.
On the question of trade secrets and gaming the system, the AP takes a measured position. A genuine intellectual property interest or a concrete and demonstrable risk of manipulation may justify limiting the scope of an explanation, but neither can justify providing no explanation at all. The organisation must still explain as much as possible, indicate that the explanation has been limited and why, and inform the individual of their right to file a complaint with the AP and to appeal to a court. A general concern about economic interests is not sufficient grounds for restriction.
Court cases shaping the framework
The draft draws heavily on recent European Court of Justice rulings. The judgment in Dun and Bradstreet, C-203/22, delivered on 27 February 2025, established that a complex algorithmic description, or a description of all the steps an algorithm follows, does not by itself constitute a concise and comprehensible explanation. This means that providing a mathematical formula alone does not satisfy the obligation. The CJEU also ruled in that case, and in the earlier SCHUFA Holding case, C-634/21, decided 7 December 2023, on the limits of what constitutes an adequate explanation for credit scoring - rulings that have direct implications for how financial and insurance profiling is structured and communicated in practice.
An Amsterdam district court ruling, ECLI:NL:RBAMS:2024:4019, also features in the draft. The court found that words such as "may," "could," and "possibly" are insufficiently clear and should be avoided in explanations. The guidance provides a concrete before-and-after example: the sentence "You scored high on FEAT_1_V.0.1. Therefore, you have been rejected for applying for this loan" is identified as technically opaque jargon. The preferred alternative states plainly that the loan application was rejected and that the amount of the loan requested was the factor with the most significant impact.
Why this matters for the marketing and advertising industry
For organisations that operate programmatic advertising, behavioural targeting, audience segmentation, or algorithmic creative selection in the European Union, the AP's guidance has clear practical implications. Many of the systems described in the draft - credit scoring, loan eligibility, insurance pricing - are structurally similar to the systems used in ad tech for determining which users see which offers, what prices they are shown, or whether they are included in or excluded from a campaign segment.
As PPC Land has reported, the Dutch DPA's consultation on the AI Act's prohibition on social scoring found that transparency safeguards alone are not enough to limit unfair outcomes from automated scoring systems. Respondents in that consultation emphasised the importance of Explainable AI techniques that can provide explanations for generated outputs. The new explanation guidance sits in that same regulatory trajectory.
The AP has been consistently active in this area. In May 2025, the authority launched a consultation setting out GDPR preconditions for generative AI, concluding that the vast majority of generative AI models currently fall short in terms of legitimacy under European law. In February 2026, the AP issued a formal warning against open-source AI agent systems, citing risks of data breaches, account takeovers, and remote code execution. And a survey of 1,480 Dutch residents published by the AP on 21 April 2026 found that nearly two in five respondents were unaware of their right to human intervention when an automated system makes a decision about them.
This last finding underlines the gap between existing legal rights and public awareness of those rights - a gap that the new explanation guidance is directly designed to close. The AP's 2026 annual plan, which focuses supervision on mass surveillance, AI, and digital resilience, makes clear that transparency and explainability of automated decision-making are enforcement priorities for the period through 2028.
Spain's data protection authority has taken a parallel approach. The AEPD published detailed guidance in early 2026 on the GDPR risks of agentic AI systems, noting that the decision-making processes of AI agents can generate more pronounced obstacles to achieving significant explainability. The AP's draft guidance arrives in this broader context of coordinated European attention to how automated systems interact with individual rights.
For advertising technology practitioners, the most concrete compliance implications concern profiling systems. The AP guidance is explicit that where profiling is used in an automated decision, the organisation must state this, and must explain the logic of the profiling as part of both the general and specific explanation obligations. Risk scores and categorisation outputs derived from profiling are covered by access requests. The AP states that derived data - including scores produced by an algorithm, even when not provided directly by the data subject - falls within the scope of the right of access. A risk score assigned to a user for insurance or financial purposes, or a propensity score used to determine advertising eligibility, would need to be disclosed if the individual requests it.
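What such a disclosure might look like can be sketched as a structured access-response payload. Every field name and value below is a hypothetical illustration; the AP prescribes no particular format.

```python
# Hypothetical sketch of an access-request response that includes
# derived data such as a model-produced propensity score. All field
# names and values are invented for illustration.
import json

access_response = {
    "data_subject_id": "example-123",
    "provided_data": {                   # data the individual supplied
        "postcode": "1012 AB",
        "declared_income": 42000,
    },
    "derived_data": {                    # data produced by the algorithm
        "propensity_score": 0.73,
        "segment": "high-intent-auto",
    },
    "decision": {
        "outcome": "excluded from campaign segment",
        "main_factor": "propensity_score",
    },
}

print(json.dumps(access_response, indent=2))
```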
The Netherlands has also formally flagged concerns about the EU Digital Omnibus proposals, which would, among other changes, alter the rules on automated decision-making. The Dutch government submitted an analysis in November 2025 warning that certain amendments go beyond streamlining and threaten to undermine core data protection principles. The new explanation guidance is published against that backdrop of contested political negotiation over how firmly the GDPR's automated decision-making obligations will be maintained in the years ahead.
How to respond
According to the AP, feedback on the draft guidance can be submitted by email to [email protected]. The consultation closes on 26 May 2026. The AP has indicated it will publish a summary of responses, without identifying individual respondents, and will use the feedback to refine the final document before publication.
Timeline
- 7 December 2023 - Court of Justice of the EU rules in SCHUFA Holding (C-634/21), establishing limits on automated credit scoring under GDPR.
- 27 February 2025 - Court of Justice rules in Dun and Bradstreet (C-203/22) that complex algorithmic descriptions alone do not constitute a comprehensible explanation.
- 6 March 2025 - Dutch DPA launches public consultation on meaningful human intervention in algorithmic decision-making, open until 6 April 2025.
- 23 May 2025 - Dutch DPA publishes consultation on GDPR preconditions for generative AI models, open until 27 June 2025.
- 3 June 2025 - AP publishes summary of consultation responses on meaningful human intervention in algorithmic decision-making.
- 15 July 2025 - AP and Directie Coordinatie Algoritmes publish the fifth Netherlands AI and Algorithms Report, confirming a regulatory sandbox launching by August 2026.
- 2 August 2025 - AI Act's most stringent obligations enter into application, including prohibitions on social scoring under Article 5.
- 23 October 2025 - AP publishes second guide on building AI literacy in organisations.
- 16 November 2025 - Dutch DPA publishes consultation findings on AI social scoring prohibition under AI Act Article 5.
- 13 December 2025 - Netherlands raises formal concerns about EU Digital Omnibus proposals affecting GDPR automated decision-making rules.
- 12 February 2026 - AP issues formal warning against open-source AI agent systems, citing GDPR compliance risks.
- 21 April 2026 - AP publishes survey showing nearly two in five Dutch residents are unaware of their right to human intervention in automated decisions.
- 25 April 2026 - AP opens public consultation on draft guidance "The right to an explanation in automated decision-making," with a deadline of 26 May 2026.
Summary
Who: The Autoriteit Persoonsgegevens (AP), the Dutch data protection authority, published the draft guidance and opened the consultation. The document is addressed to both organisations required to provide explanations and individuals entitled to receive them.
What: A 31-page draft guidance document titled "The right to an explanation in automated decision-making," covering the legal basis for explanation obligations under GDPR Article 22, the distinction between general and specific explanations, a classification of algorithm types from insightful to opaque, techniques including SHAP values and counterfactual explanations, governance requirements including explainability-by-design, and the limits that trade secrets and anti-gaming considerations may place on explanation scope.
When: The consultation opened on 25 April 2026. Responses must be submitted by 26 May 2026. The AP will use the feedback to refine the final guidance document before publication.
Where: The consultation is addressed to organisations operating in the Netherlands and across the European Union under GDPR Article 22. Responses can be submitted by email to [email protected].
Why: Automated decision-making is increasingly used across sectors - from credit assessments to insurance pricing, job application screening, and online service access - and existing GDPR rights for individuals affected by such decisions are poorly understood. According to AP-commissioned research published four days before this consultation, nearly two in five Dutch residents are unaware they have the right to request human intervention in automated decisions that affect them. The guidance is designed to close the gap between rights on paper and practical enforcement, while also giving organisations a concrete reference point for compliance as the EU AI Act introduces further explainability requirements from 2026 onward.