The Belgian Data Protection Authority this month released a 15-page brochure titled "The Impact of Artificial Intelligence (AI) on Privacy," the first publication in a new series called "AI & Data Protection." Dated April 2026, the document is aimed at citizens who interact with AI systems in daily life - through chatbots, mobile apps, connected devices, and online platforms - and requires no technical or legal background to read.
The release follows a December 2024 brochure from the same authority on "Artificial Intelligence Systems and the GDPR: A Data Protection Perspective." The new document builds on that foundation, but shifts focus from industry and regulators toward ordinary users. It is intended, according to the brochure, to give individuals practical tools for maintaining control over personal data "in a world where AI plays a dominant role."
For marketing and advertising technology professionals, the publication carries direct implications. AI systems are now central infrastructure for audience segmentation, behavioral targeting, automated bidding, and content personalization - and the rights described in this document apply to every individual whose data moves through those pipelines. The Belgian DPA's 2026-2028 strategic plan explicitly targets large-scale advertising technology platforms and data brokers as enforcement priorities, making citizen literacy around these rights a precursor to regulatory action.
What the document defines as an AI system
The brochure opens with a legal definition drawn from Article 3(1) of the AI Act, describing an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
That definition adopts a lifecycle perspective: the document notes that it distinguishes development from deployment, recognizing that characteristics present in one phase may not carry over to the other.
A key conceptual distinction concerns AI models versus AI systems. According to the brochure, an AI model is "an algorithm trained on a dataset to perform a set of tasks predefined or learned through training." The AI system, by contrast, integrates that model with other components - monitoring tools, APIs, interfaces, and infrastructure. The brochure uses a cooking analogy: the AI model is the recipe, while the AI system bakes the cake. The output quality depends on the data (ingredients), the model architecture (the recipe), and the algorithm (the steps).
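The model-versus-system distinction can be made concrete with a short sketch. Here the "model" is just a trained inference function, and the "system" wraps it with the operational components the brochure lists - an interface, input validation, and a monitoring hook. All names and the toy word-counting logic are illustrative assumptions, not drawn from the brochure.

```python
def sentiment_model(text: str) -> str:
    """Stand-in for a trained AI model: maps an input to a prediction."""
    positive_words = {"good", "great", "excellent"}
    hits = sum(word in positive_words for word in text.lower().split())
    return "positive" if hits > 0 else "negative"

class SentimentSystem:
    """The deployed AI system: the model plus monitoring and an interface."""

    def __init__(self, model):
        self.model = model
        self.request_count = 0  # monitoring hook

    def predict(self, text: str) -> str:
        if not text.strip():                 # input validation
            raise ValueError("empty input")
        self.request_count += 1              # usage telemetry
        return self.model(text)

system = SentimentSystem(sentiment_model)
print(system.predict("a great result"))  # positive
print(system.request_count)              # 1
```

In the brochure's analogy, `sentiment_model` is the recipe; `SentimentSystem` is the kitchen that actually bakes the cake.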
This distinction matters in regulatory and technical contexts alike. What separates AI systems from traditional automation tools - such as pre-defined customer segmentation or fixed-rule data masking - is their "ability to infer from data or knowledge," according to the document. Traditional systems operate on pre-defined rules; AI systems learn and adapt.
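The contrast between pre-defined rules and inference from data can be illustrated in a few lines. The rule-based segmenter below encodes a human-chosen threshold, while the second version derives its threshold from labelled examples; the midpoint "learning" is a deliberately simplistic stand-in, not any real advertising system.

```python
def rule_based_segment(monthly_spend: float) -> str:
    # Traditional automation: the threshold is hard-coded by a human.
    return "high-value" if monthly_spend >= 100.0 else "standard"

def learn_threshold(spends, labels):
    # AI-style inference (toy version): derive the boundary from data
    # as the midpoint between the two classes' mean spend.
    high = [s for s, l in zip(spends, labels) if l == "high-value"]
    std = [s for s, l in zip(spends, labels) if l == "standard"]
    return (sum(high) / len(high) + sum(std) / len(std)) / 2

threshold = learn_threshold(
    [20, 30, 200, 300],
    ["standard", "standard", "high-value", "high-value"],
)

def learned_segment(monthly_spend: float) -> str:
    return "high-value" if monthly_spend >= threshold else "standard"
```

The rule-based version behaves identically forever; the learned version changes whenever it is retrained on new data - which is exactly why its data dependencies attract regulatory attention.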
Six categories of AI applications
The document classifies AI applications into six categories, each with distinct data dependencies.
Expert systems simulate human decision-making in specific domains, such as automated clinical diagnostics. They typically require structured data: health records, legal case histories, or diagnostic information. Autonomous systems - self-driving vehicles, drones - process location data, sensor readings, and biometric identifiers. Cognitive computing mimics human thought to interpret unstructured data such as emails, voice recordings, and chat logs, and is often used in evaluating patient information.
Computer vision systems interpret images and video for recognition, tracking, and analysis. According to the brochure, these can process facial images, gait patterns, and video footage - categories with significant implications for surveillance applications. AI-powered robots interact with physical environments and may use audio-visual data and location data. Natural language processing systems - chatbots being the clearest example - process chat histories and voice commands to generate text or speech.
This taxonomy matters for data professionals because each category implies a different dataset. The advertising and programmatic sector routinely employs NLP, cognitive computing, and computer vision systems, all of which rely on personal data collected at scale.
Eight phases of the AI lifecycle
The brochure's most technically detailed section maps data processing activities across eight lifecycle phases. Understanding these phases is important for organizations that must demonstrate GDPR compliance - and for individuals seeking to understand at which points their data is used.
The first phase is problem definition: identifying the AI system's purpose, objectives, success criteria, and regulatory obligations. The second is data collection and acquisition, which the document describes as gathering raw data from sources including social media activity (posts, likes, comments), customer databases (names, emails, purchase history), transaction records (credit-card payments, loyalty-card logs), browsing history (visited websites, search queries, click patterns), public records, and smart devices (voice commands, GPS location, fitness tracker metrics).
Phase three covers data storage and management, requiring encryption, access controls, and privacy-preserving mechanisms. Phase four is data cleaning and preparation - correcting errors, standardizing formats, and where possible applying pseudonymisation or anonymisation. At this stage, data is typically split into three datasets: training, validation, and testing.
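Phase four as described above can be sketched in a few lines: pseudonymise direct identifiers, then split the records into training, validation, and test sets. The 80/10/10 ratio and the salted-hash tokenisation scheme are illustrative assumptions, not taken from the brochure.

```python
import hashlib
import random

def pseudonymise(record, salt="store-this-salt-separately"):
    # Replace the direct identifier with an irreversible token.
    out = dict(record)
    token = hashlib.sha256((salt + record["email"]).encode()).hexdigest()[:12]
    out["email"] = token
    return out

def split(records, seed=0):
    # Shuffle deterministically, then cut 80% / 10% / 10%.
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    train_end, val_end = int(n * 0.8), int(n * 0.9)
    return shuffled[:train_end], shuffled[train_end:val_end], shuffled[val_end:]

records = [pseudonymise({"email": f"user{i}@example.com", "clicks": i})
           for i in range(10)]
train, val, test = split(records)
print(len(train), len(val), len(test))  # 8 1 1
```

Note that pseudonymised data generally remains personal data under the GDPR, since re-identification may still be possible; only genuine anonymisation takes data out of scope.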
Phase five is training and validation, where datasets are used to build and assess the model, measuring accuracy, fairness, and generalisability. Phase six is deployment and inference - the system receives live input and generates outputs, with filtering mechanisms applied to avoid prohibited outputs. Phase seven involves monitoring, maintenance, and governance, including model fine-tuning, retraining, and compliance audits. The eighth and final phase covers data retention, deletion, or archiving. Personal data must be retained only as long as necessary and securely deleted or anonymised when no longer needed.
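The filtering mechanism the brochure mentions at phase six can be sketched as a post-generation check that intercepts prohibited outputs before they reach the user. The blocklist, the fallback message, and the echo "model" are all hypothetical.

```python
PROHIBITED = {"ssn", "credit card number"}

def generate(prompt: str) -> str:
    # Stand-in for the deployed model's raw output.
    return f"echo: {prompt}"

def filtered_generate(prompt: str) -> str:
    # Inference-time output filter: screen the raw output before release.
    raw = generate(prompt)
    if any(term in raw.lower() for term in PROHIBITED):
        return "[output withheld by safety filter]"
    return raw

print(filtered_generate("hello"))
print(filtered_generate("what is my SSN?"))
```

Real deployments use far more sophisticated classifiers, but the architectural point is the same: the filter sits between the model and the user, in the system rather than in the model.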
The document notes that phases seven and eight often run simultaneously, representing a continuous process rather than a discrete endpoint.
Privacy risks: scale and opacity
The brochure's section on privacy risks offers a candid account of how AI amplifies threats that previously existed at smaller scale. According to the document, "risks that were once isolated - such as manual profiling, limited surveillance, or targeted advertising - can now be transformed into systemic concerns." AI systems can be used, it states, to profile millions of users, apply facial recognition across public spaces, and deliver highly personalized and targeted advertisements.
The speed, scale, and opacity of automated decision-making make violations difficult to prevent. Article 5(1)(c) of the GDPR - the data minimisation principle - is specifically at risk, the document notes, because AI systems are capable of processing vast quantities of personal data in real time, potentially far exceeding what is necessary for the stated purpose.
There is a second, subtler risk. Using machine learning techniques, AI systems can derive sensitive attributes - sexual orientation, religious beliefs, health conditions, emotional state - from indirect data sources such as browsing history, purchase records, or voice tone. This happens, according to the brochure, "often without the individual's knowledge or consent." These inferences may also be probabilistic rather than verified, meaning they carry inherent inaccuracy even as they are used to make consequential decisions.
The marketing sector is directly implicated. Brussels has proposed GDPR amendments that would create a legitimate interest basis for AI training, including the processing of special categories of personal data. Privacy advocates have warned that the changes could eliminate protections for inferred sensitive data used in online advertising. The Belgian DPA's new brochure, by contrast, anchors its analysis firmly in existing law.
GDPR Article 5 and the core compliance principles
Section three of the document outlines how the GDPR applies to AI systems. Article 5 of the GDPR requires that personal data be processed lawfully, fairly, and transparently; collected for specified and legitimate purposes; adequate, relevant, and necessary; accurate and up to date; stored no longer than necessary; and appropriately secured. Controllers must also be able to demonstrate compliance - the accountability principle.
A notable technical point: even AI systems designed to avoid processing personal data will process it incidentally during development. Systems trained to recognize personal data for the purpose of filtering it out must, by definition, include that data in training, validation, and testing datasets. Personal data is therefore processed in the development phase regardless of intent.
Transparency obligations require that controllers explain the logic underpinning automated decision-making in clear and accessible language. This obligation is particularly demanding when the system's outputs are opaque or probabilistic.
Four practical areas for individuals
The brochure identifies four areas individuals should review. First, privacy policies and default settings: agreeing to a privacy policy grants the controller permission to process the personal data described therein. Settings related to chat history, search history, AI training, personalised advertising, analytics sharing, and automatic cloud backups are often enabled by default and can be adjusted. Second, caution when sharing personal information: before entering medical details, financial data, or personal images into chatbots or online platforms, individuals should consider whether they would be comfortable sharing the same information in a room with strangers - a test the document explicitly proposes.
Third, software and device updates: regular updates reduce security vulnerabilities and the risk of unauthorized data access. Fourth, application and device permissions: camera, microphone, location data, contacts, and file access should be reviewed and limited where not essential.
Eight data subject rights explained
The document catalogues eight rights under the GDPR that individuals can exercise against data controllers.
The right to information (Articles 13-14) requires controllers to notify individuals of processing activities. Where data is collected indirectly - via web scraping or licensing agreements - controllers must inform the data subject within one month of collection. The right of access (Article 15) enables individuals to confirm whether their data is being processed and to obtain a free copy, together with the information needed to understand the context of the processing.
The right to erasure (Article 17) applies when data is no longer necessary for its original purpose, when consent is withdrawn, when data has been unlawfully processed, or when a legal obligation requires deletion. The right to object (Article 21) is unconditional in the context of direct marketing - no justification is required. For other purposes, once an objection is made, the controller must halt processing unless it can demonstrate overriding legitimate grounds.
The right to restriction (Article 18) pauses all processing, not merely specific purposes - distinguishing it from the right to object. The right to rectification (Article 16) covers both correction of inaccurate data and completion of incomplete data. The right to data portability (Article 20) entitles individuals to receive their data in a machine-readable format and to have it transmitted directly to another controller where technically feasible.
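Article 20's "structured, commonly used, machine-readable format" is commonly satisfied in practice with formats such as JSON or CSV, though the GDPR does not mandate a specific one. A minimal sketch of what a portability export might look like; the field names and records are hypothetical.

```python
import json

# Hypothetical activity records a controller might export for a data subject.
records = [
    {"timestamp": "2026-04-01T10:00:00Z", "event": "search",
     "query": "running shoes"},
    {"timestamp": "2026-04-02T12:30:00Z", "event": "purchase",
     "order_id": "A-1001"},
]

# Serialise to a structured, machine-readable format another
# controller's systems can parse without human intervention.
export = json.dumps(records, indent=2)
print(export)
```

The "machine-readable" requirement is what makes direct controller-to-controller transmission technically feasible: the receiving system can parse the export without manual re-entry.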
The right not to be subject to automated decision-making (Article 22) prohibits decisions based solely on automated processing - including profiling - that produce legal effects or similarly significant impacts, unless the processing is necessary for a contract, authorized by law, or carried out with explicit consent. For this right to apply, the decision must lack meaningful human involvement. The document defines meaningful human involvement as the participation of an individual with "sufficient authority to alter the outcome of the processing activity." Additionally, under Article 15(1)(h) of the GDPR, data subjects are entitled to an explanation of the logic involved in automated decision-making and its envisaged consequences.
This right has substantial implications across automated bidding, credit scoring, content moderation, and recruitment screening. Spain's data watchdog in February 2026 published a 71-page analysis of agentic AI under the GDPR, noting that AI agent architectures make explainability increasingly difficult to achieve.
Exercising rights against a controller
The brochure provides procedural guidance for individuals who wish to exercise these rights. Controllers must respond within one month of receiving a request, with an extension of up to two additional months permitted in justified circumstances. The Belgian DPA provides template letters for formulating such requests. Individuals are advised to address requests to the designated Data Protection Officer where one exists, retain evidence of the request, and escalate to the Belgian DPA if the controller fails to respond adequately. Escalation paths include requesting mediation or filing a formal complaint - the latter can result in warnings, administrative fines, or orders to cease specific processing activities.
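The response clock described above - one month, extendable by two further months - amounts to simple calendar arithmetic. A sketch using only the standard library; the end-of-month clamping rule (e.g. January 31 plus one month landing on February 28) is a simplifying assumption, as the precise computation of GDPR deadlines follows EU rules on time periods.

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    # Advance by whole months, clamping to the last day when the
    # target month is shorter than the starting day-of-month.
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

received = date(2026, 4, 15)                 # request received
standard_deadline = add_months(received, 1)  # 2026-05-15
extended_deadline = add_months(received, 3)  # 2026-07-15
print(standard_deadline, extended_deadline)
```

Keeping a dated record of the request, as the brochure advises, is what makes these deadlines enforceable if escalation to the Belgian DPA becomes necessary.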
Both the Belgian and French data protection authorities have separately published guidance on preventing tech companies - specifically naming Meta, TikTok, Microsoft, and X - from using personal data to train AI systems.
For the advertising and marketing community, the brochure lands at a moment of significant regulatory uncertainty. The European Commission has proposed major GDPR amendments that would expand the lawful bases for AI training. The European Parliament committees voted 101 to 9 in March 2026 to postpone AI Act high-risk system deadlines to December 2027 and August 2028. The GDPR's AI training legal landscape across jurisdictions remains fragmented, with regulators converging on principles while diverging on enforcement approaches. The Belgian DPA's new citizen-facing publication sits in that context: a regulatory authority publishing tools for individual enforcement at precisely the moment when the broader framework is under legislative revision.
Timeline
- December 2024 - Belgian DPA publishes "Artificial Intelligence Systems and the GDPR: A Data Protection Perspective," the foundational document referenced in the April 2026 brochure
- November 2025 - European Commission proposes major GDPR changes through the Digital Omnibus, including a legitimate interest basis for AI training
- November 2025 - European Commission proposes additional GDPR amendments covering automated decision-making and personal data definitions
- December 23, 2025 - Belgian DPA publishes its 2026-2028 strategic framework, naming advertising technology platforms and data brokers as enforcement priorities
- February 2026 - Spain's AEPD publishes 71-page guide on agentic AI and GDPR compliance, covering Article 22 automated decisions and prompt injection risks
- March 18, 2026 - European Parliament committees vote 101-9 to delay AI Act high-risk system obligations to fixed 2027 and 2028 deadlines
- April 15, 2026 - Belgian DPA publishes "The Impact of Artificial Intelligence (AI) on Privacy," first brochure in the new "AI & Data Protection" citizen series
- Ongoing - Analysis of GDPR legal bases for AI training across 19 regulatory jurisdictions documents continuing divergence in enforcement approaches
Summary
Who: The Belgian Data Protection Authority (Belgian DPA), the national supervisory authority responsible for enforcing the GDPR in Belgium.
What: Publication of a 15-page citizens' brochure titled "The Impact of Artificial Intelligence (AI) on Privacy," the first in a new "AI & Data Protection" series. The document defines AI systems and models, maps the 8-phase AI lifecycle, describes privacy risks including inference of sensitive data, and catalogues 8 GDPR data subject rights with procedural guidance for enforcement.
When: Published April 15, 2026; the document itself is dated April 2026. It follows a December 2024 predecessor document covering AI systems and the GDPR from an industry and regulatory perspective.
Where: Published by the Belgian DPA as a public document, targeted at citizens across the European Union who interact with AI systems in daily life - including through online platforms, mobile applications, and connected devices.
Why: The Belgian DPA states the brochure was prepared because the "complexity and opaqueness of AI systems make it difficult to understand the personal data being gathered, the purpose of processing, or the way decisions are taken," resulting in a "loss of control over personal data and limiting individuals' capacity to challenge unfair outcomes." The publication arrives as European legislative proposals seek to expand the legal bases for AI training and as the Belgian DPA prepares to intensify enforcement against large-scale data processing operations.