On 21 April 2026, the Dutch Data Protection Authority, the Autoriteit Persoonsgegevens (AP), published the results of a major study into how Dutch residents perceive and understand the role of algorithms and artificial intelligence in decisions that affect their daily lives. The research, conducted by Invior, found that nearly two in five respondents are unaware of their right to human intervention when an automated system makes a consequential decision about them - a right embedded in Article 22 of the General Data Protection Regulation (GDPR). The findings arrive at a moment when algorithmic regulation in the Netherlands is accelerating, and the gap between legal protections on paper and public awareness of those protections is becoming a policy problem in its own right.

The scale of the research

The survey was carried out in April 2025 using an online questionnaire distributed to 10,000 members of the Tip-burgerpanel, the independent citizens panel of Invior, which has close to 100,000 members distributed across every municipality in the Netherlands. A total of 1,480 residents completed the questionnaire in full, a response rate of 15%. Results were weighted by age and gender to match the composition of the Dutch adult population aged 18 and over. At this sample size, the margin of error is approximately 2.5 percentage points at 95% confidence.
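For readers who want to check those two methodological claims - the demographic weighting and the 2.5-point margin - the standard arithmetic is sketched below in Python. This is a minimal sketch assuming simple random sampling and the worst-case proportion p = 0.5; the demographic cell shares are invented for illustration and are not taken from the report.

```python
import math

n = 1480  # completed questionnaires

# 95% margin of error for a proportion, at the worst case p = 0.5:
p = 0.5
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/- {moe:.1%}")  # ~ +/- 2.5 percentage points

# Post-stratification weighting, as the report describes for age and gender:
# each respondent in a demographic cell receives
#   weight = population share of the cell / sample share of the cell,
# so that weighted results match the Dutch population aged 18 and over.
population_share = {"women_18_34": 0.13, "men_65_plus": 0.11}  # illustrative
sample_share = {"women_18_34": 0.09, "men_65_plus": 0.16}      # illustrative
weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}
print(weights)  # cells underrepresented in the sample get weight > 1
```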

The qualitative phase followed in May 2025. Three online focus groups were held on 7 May, each lasting a maximum of 90 minutes and moderated by a NOBTRA-certified facilitator. In total, 21 residents took part, selected on the basis of gender, age and self-reported digital skill level. Participants ranged from students and young professionals working in healthcare, education, government and technology, to retirees. Some were intensive users of AI systems; others described themselves as reluctant or distrustful of the technology.

The research covered four areas: whether residents can recognise risks around personal data in algorithmic contexts; how familiar they are with the AP itself; what they know of their legal rights; and what additional information or guidance they want.

What the numbers show about awareness

The headline finding is that 76% of respondents believe they encounter algorithms and AI on a daily basis. That figure rises to 88% among those who rate themselves as highly digitally skilled, and falls to 44% among those with the lowest self-assessed digital skill. The divergence matters: respondents in the least skilled group were also the most likely to say they never encounter algorithms at all - a belief that is almost certainly inaccurate given the pervasiveness of algorithmic systems in credit scoring, social housing allocation, recruitment and online advertising.

Awareness of the fact that personal data is processed by these systems is more nuanced. According to the report, 57% of respondents said they knew this was happening and understood how it worked. A further 36% said they knew personal data was involved but did not understand the mechanics. Only 1% said they were entirely unaware. Yet the focus groups complicated this picture considerably. Participants described an understanding that is, at best, approximate. According to the report, "even residents who believe they understand how algorithms and AI process personal data have only a general picture of this." The complexity of the systems, their abstract character and the opacity of the organisations deploying them all contribute to a sense of helplessness rather than informed control.

The emotional register is skewed towards concern. When asked what comes to mind when thinking about algorithms and AI processing personal data, 60% cited worries about information security and 59% cited privacy concerns. By contrast, 42% acknowledged that the technology offers considerable possibilities and 36% said it makes things easier. Very few respondents reported feeling safer (1%) or experiencing enjoyment (4%) as a result of algorithmic data processing. The focus groups added colour to this result: participants described specific anxieties about what happens to data already held in large databases, and concern that some applications are designed not to help users but to keep them engaged. One focus group participant stated that "some apps are literally made to get us addicted."

The marketing and advertising angle

For marketing professionals, the scenario most directly relevant to their work is the one on targeted advertising. Respondents were presented with a situation in which someone organising a party visits several websites for party supplies and then, days later, sees their social media feeds fill with advertising and articles on the same subject. An overwhelming 87% rated it probable or very probable that this was caused by algorithmic processing of personal data - the highest figure across all six scenarios tested. Only 2% found it very unlikely.

A closely related question asked how likely respondents thought it was that websites collect personal data and share it with other companies. Here 82% rated this as very probable or probable. The tracking cookie compliance landscape in the Netherlands has been a live enforcement area for the AP over the past several years, with investigations into cookie banners completed as recently as March 2025.

The survey finding suggests that the general public understands the commercial data economy at a high level - people accept that browsing behaviour drives advertising - but the mechanism that produces that outcome, including the role of consent, cookie law and real-time data sharing, remains opaque. According to the report, the focus group discussions revealed that participants are aware they often accept terms and conditions without reading them simply to gain access to digital services. The draw of social media and other platforms outweighs the critical attitude that privacy law assumes people should adopt. One participant described their daughter clicking "agree" without reading, adding: "And her whole class does the same."

Chatbots on webshops were the second most recognised scenario. Roughly two thirds of respondents (68%) considered it probable or very probable that chatbots process personal data through algorithms and AI. A large majority also thought it probable that chatbot interactions lead to more targeted advertising.

Less visible uses draw lower awareness

Awareness drops sharply when the algorithmic connection is not obvious. Only 36% of respondents considered it probable or very probable that the social housing allocation scenario - in which a longstanding applicant is never invited to view properties while their peers are - was caused by algorithmic personal data processing. The scenario was drawn from a real case documented by the AP, in which a man named Jason discovered, through a data access request, that a software system had classified him incorrectly during an initial automated selection stage, excluding him from viewings because of an error in how his income was calculated.

Recruitment was similarly underrecognised: 39% of respondents considered it probable or very probable that repeated rejections after applying to multiple large companies reflected algorithmic decision-making. According to a 2020 report by the College voor de Rechten van de Mens cited in the research, more than one in ten employers use algorithms when selecting and evaluating job applicants, with pre-screening and automated filtering on competency tests being common practices among larger organisations. These systems introduce risks of exclusion and discrimination that are harder to detect precisely because the link to an algorithm is not visible to the applicant.

The AP launched a public consultation on meaningful human intervention in algorithmic decision-making in March 2025, followed by a summary of responses in June 2025 from government bodies, independent foundations, companies and researchers. The consultation highlighted persistent implementation challenges, including automation bias - the tendency of human reviewers to defer to algorithmic outputs rather than genuinely overrule them.

The facial recognition scenario showed middling awareness: 50% of respondents considered it probable or very probable that being flagged as a suspect by shopping centre security cameras involved algorithmic processing. This is notable given that in 2020 the AP issued a formal warning to a supermarket for using facial recognition to compare faces of all entering visitors against a database of individuals who had previously received bans. The practice was found to involve biometric data processing, which is subject to stricter conditions under the GDPR.

The rights gap

The most technically significant finding in the survey concerns knowledge of GDPR rights. Across several well-established rights, awareness is relatively high. According to the report, 89% of respondents knew they have the right to know what an organisation does with their data and to ask for that information. 84% knew they must be informed in advance of what an organisation intends to do with their data and why, a requirement established in Article 13 of the GDPR. 81% knew they have the right to have their data corrected or deleted.

Yet the right to human oversight of automated decisions, derived from Article 22 of the GDPR, is far less well known. Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal effects or similarly significantly affect them, and, where such decisions are permitted, the right to obtain human intervention. According to the survey, 37% of respondents did not know this right exists. The focus groups confirmed that people regard this right as important in principle - they believe AI must always be subordinate to human judgement on consequential matters - but they had not been aware they could invoke it. As one participant put it: "Technology can handle trivial matters, but humans must be able to steer things where it really matters."

There is also a widespread misconception running in the opposite direction. The survey included a deliberately false option in the list of rights presented to respondents: "I have the right not to be treated by a computer." This is not an established right under the GDPR. Yet 37% of respondents believed it was. The group most likely to hold incorrect beliefs about their rights - both thinking they have rights they do not, and not knowing about rights they do have - skews towards people with lower digital skill levels. However, even among the most digitally skilled group, 40% incorrectly believed they had no right to human oversight.

Adverse decisions and the reporting gap

The survey asked whether respondents had ever experienced an organisation making an adverse decision about them without explaining why. According to the report, 19% said yes. The consequences they described were varied: impersonal rejection after a job application, blocked accounts, unwarranted fines, and denial of loans. Open responses added wrongful termination or incorrect award of government benefits, discrimination based on name or origin, and problems with debt collection. These are not trivial outcomes. They map closely to the domains where the AP has documented algorithmic errors - housing, finance, recruitment - and where the Dutch government has raised concerns about proposed GDPR amendments that could weaken automated decision-making protections.

Of the 19% who had experienced such a decision, only 44% made a complaint to the organisation involved. Just 5% contacted the AP directly. And 39% made no complaint at all. The reasons given are instructive. According to the report, 31% of non-reporters did not know they could make a complaint, and 27% did not know where or how to do so. A further 20% did not consider the situation serious enough, and 18% saw no point in complaining. Focus group participants said they did not know where to start, and doubted they would be believed.

Awareness of the AP itself

Going into the survey, approximately one third of respondents said they knew the AP well. More than half had heard of it without being able to describe its role in detail. Only 4% said they had never heard of it. After being shown a brief description of the AP's function, 62% of those who had previously said they did not recognise it acknowledged they had encountered it before.

The AP is primarily associated with its media presence and its role as a supervisory authority. About 34% of respondents recognised the AP as a reporting point for personal data misuse. Its advisory and educational functions are considerably less well known. The focus groups revealed that respondents want a central reporting point - something with the clear, responsive character they associate with existing hotlines for specific types of abuse. One participant cited the hotline for reporting child sexual abuse material as an example of an institution where reports are visibly acted upon.

Nearly three quarters of respondents placed primary responsibility for protecting personal data from algorithmic processing with the AP itself. This is legally inaccurate: under the GDPR, primary responsibility lies with the organisations that deploy these technologies, which the regulation terms data controllers. The AP's role is supervisory and enforcement-oriented rather than directly protective. Participants in the focus groups expressed doubts about whether organisations can be trusted to regulate themselves, particularly when commercial interests are involved. According to the report, one participant used the phrase "it feels like the butcher judging his own meat."

The AP's consultation on GDPR preconditions for generative AI, launched in May 2025, and the broader trajectory of Dutch AI regulation, including the planned launch of a regulatory sandbox by August 2026, suggest that the institutional framework is developing rapidly. The citizen survey data now provides the AP with a baseline measure of public understanding against which future awareness campaigns can be assessed.

What residents want

According to the report, 43% of respondents said they wanted additional information or guidance on how to protect their personal data from algorithmic processing. A slightly larger group - 44% - said they did not need more information, and 13% were unsure.

Among those who did express a desire for more information, 66% preferred a general website as the delivery channel, and 53% favoured online campaigns. Traditional broadcast media remained important: 39% mentioned television news and 36% newspapers. A quarter wanted information through evening sessions or public lectures. Physical campaigns - posters, leaflets - were chosen by 23%, and radio by 18%.

The focus groups introduced a historical reference point: participants cited the old Dutch Postbus 51 public information campaigns as a model worth reviving. Those advertisements, which ran across broadcast media for decades, used simple visual storytelling to explain social and civic issues to the general population. Focus group members stressed that campaigns must show the positive as well as the negative dimensions of AI. In the words of one participant: "If you only frighten people, it does not land. Show what can go right too."

School education was a consistent theme. Participants from across the age and skill spectrum argued that children need to learn about data, privacy and algorithmic risk as early as possible - treating digital literacy as comparable to traffic safety education. The AP published a comprehensive guide on building AI literacy in organisations in October 2025, addressing the institutional dimension of this gap. The citizen survey now raises the parallel question of public literacy.

Implications for the marketing industry

The survey has direct relevance for anyone operating in the digital advertising and data-driven marketing ecosystem. The study shows that public trust in the personalised advertising model is conditional and fragile. Nearly three in five respondents describe privacy concern as one of their first associations when thinking about AI and personal data. The near-universal recognition of the advertising targeting scenario - 87% - indicates that the mechanism connecting browsing behaviour to ad delivery is now broadly understood at a surface level. But understanding that something happens is different from accepting that it is legitimate.

The Dutch authority's consultation on meaningful human oversight in automated decision-making, whose results were released in June 2025, found that organisations across sectors are struggling to implement genuine human review of algorithmic outputs. For the advertising industry, where automated systems make decisions about audience selection, pricing, targeting eligibility and creative delivery at speeds that preclude meaningful human oversight of individual decisions, the regulatory trajectory is towards greater accountability. The citizen data now shows that the public wants this accountability, and that the absence of visible enforcement is a factor eroding trust.

Summary

Who: The Autoriteit Persoonsgegevens (Dutch Data Protection Authority), together with research and advisory firm Invior, which conducted the survey and qualitative research on the AP's behalf.

What: A mixed-methods study combining an online survey of 1,480 Dutch residents and three online focus groups with 21 participants, examining awareness of algorithmic and AI personal data processing, knowledge of GDPR rights, familiarity with the AP, and preferences for information provision.

When: The quantitative survey was distributed on 7 April 2025; the three focus groups took place on 7 May 2025; the full research report is dated May 2025; the AP published the findings publicly on 21 April 2026.

Where: The research was conducted nationally across the Netherlands, drawing on members of the Tip-burgerpanel, which has close to 100,000 members in every Dutch municipality.

Why: The AP commissioned the research to establish a baseline of citizen knowledge and perception ahead of developing targeted interventions to improve public awareness of algorithmic data processing and the rights individuals hold under the GDPR. The authority identified a need for this data to make its outreach and enforcement communications more effective and better matched to the actual state of public understanding.
