Spain's data protection authority this month released a comprehensive technical guide examining how agentic artificial intelligence systems create new privacy risks that existing compliance frameworks were not designed to handle. The Agencia Española de Protección de Datos (AEPD) published the 71-page document - titled "Agentic Artificial Intelligence from the Perspective of Data Protection," version 1.1 - in February 2026, establishing one of the most detailed regulatory analyses of AI agent architecture yet produced by a European data protection authority.

The document is not an enforcement ruling. It is a structured technical and legal framework intended to help organisations that deploy AI agents understand where GDPR obligations arise and what measures can reduce risk. At 71 pages, it covers AI agent architecture, memory systems, prompt injection vulnerabilities, automated decisions under Article 22 of the GDPR, and a catalogue of recommended technical measures. The AEPD makes clear that "both the irrational rejection of agentic AI and its uncritical acceptance in the processing of personal data can be harmful."

What the AEPD says an AI agent actually is

The guide opens by defining what constitutes an AI agent - something regulators have rarely attempted with technical precision. According to the AEPD, an AI agent is "an artificial intelligence system that uses language models to meet a goal," one that "acts appropriately according to their circumstances and objectives, is flexible in the face of changing environments and goals, learns from experience and makes appropriate decisions given their perceptual and computational limitations."

The document distinguishes agents from ordinary chatbot interactions by pointing to their core characteristic: decomposing complex tasks into subtasks "executed in a planned way creating a Chain-of-thoughts, each of them implemented with different tools and that perceive the environment through access to internal and external services." This Chain-of-thoughts structure - the pipeline by which an agent breaks down problems into sequential logic steps - is central to the document's analysis because it determines how personal data flows through a system that may involve dozens of components, each with its own data retention and processing behaviour.
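The data-flow concern described above can be sketched in a few lines. This is an illustrative sketch of our own, not code from the AEPD guide: each subtask in the chain invokes a different tool, and every hop is a point where personal data may be retained, logged, or forwarded by the underlying service. The tool names are hypothetical.

```python
# Minimal chain-of-thought pipeline sketch: each subtask runs a different
# tool and passes its output to the next step. Every hop is a potential
# personal-data flow outside the controller's direct view.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Subtask:
    name: str
    tool: Callable[[str], str]  # each step may touch a different service

def run_chain(goal: str, steps: list[Subtask]) -> str:
    """Execute subtasks sequentially; intermediate results leave the agent
    at every step, with each tool applying its own retention behaviour."""
    context = goal
    for step in steps:
        context = step.tool(context)  # data crosses a service boundary here
    return context

# Hypothetical tools standing in for real services (CRM lookup, summariser):
steps = [
    Subtask("lookup", lambda q: q + " | crm-record"),
    Subtask("summarise", lambda q: q + " | summary"),
]
print(run_chain("handle complaint", steps))
```

Even this two-step toy makes the audit problem visible: to answer a subject access request, the controller would need to know what each tool in the chain retained.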

Six defining characteristics set agents apart from passive AI tools: autonomy, environmental perception, action capacity, proactivity, planning and reasoning, and memory with adaptability. Unlike a static language model that responds to a single prompt, an agent can initiate actions without explicit instructions, connect to databases, execute code, send communications, and build profiles of users over time. The AEPD notes that standardised protocols such as the Model Context Protocol (MCP) and the Agent-to-Agent Protocol (A2A) now enable systematic connectivity between these systems - the same MCP infrastructure that advertising platforms have raced to adopt to enable autonomous campaign management.

Multi-agent architectures receive specific attention. According to the AEPD document, a multi-agent system "combines multiple agents, where each agent's behaviour and responsibilities are strictly defined, they share information and decisions, and they are able to collaborate, compete, or negotiate with each other to achieve more elaborate goals." These architectures can take centralised, sequential, distributed, or hierarchical forms. Each configuration carries different implications for determining which entity is a controller, which is a processor, and which obligations fall to whom under GDPR.

The vulnerability landscape

The core of the document is a detailed taxonomy of vulnerabilities - weaknesses that can be exploited to compromise personal data - organised into four categories: interaction with the environment, service integration, memory, and autonomy.

On environmental interaction, the AEPD highlights that when agents connect to external services they create "de facto partial outputs" - data flows that may not be visible to the user or the controller, yet contain personal information. Uncontrolled access to internal repositories such as email accounts, meeting notes, customer databases, and HR records risks violating the GDPR's minimisation principle because the agent may draw on far more data than any given task requires.

The document introduces a concept with direct relevance to organisations deploying off-the-shelf agentic tools: "BYOAgentic (Build Your Own Agentic)." Just as BYOD (Bring Your Own Device) created unmanaged security gaps in corporate environments, and BYOAI (Bring Your Own Artificial Intelligence) imported unvetted AI tools, the ease of deploying agentic workflows - through platforms with visual, no-code interfaces - means employees may build and run data-processing agents without governance oversight. The AEPD warns that "the temptation for unqualified users to be dazzled by its possibilities and carry out deployments outside the governance and information policies of the entity" creates serious compliance exposure. Introducing agentic AI into processing, the document states, "implies redesigning a process of the organisation in which at least the functional, ICT and quality managers should intervene, in addition to the DPO when appropriate."

Memory is treated as both a capability and a risk vector. Agents rely on short-term memory, which retains context within a single session, and long-term memory, which spans conversations and builds up over time. Long-term memory subdivides further into semantic memory (facts and concepts, including user profiles), episodic memory (past actions and how to perform tasks), and procedural memory (rules for completing tasks). The AEPD identifies four dimensions of memory risk: relevance (what gets stored must be controlled), consistency (stored data must be accurate and up to date), retention (minimum necessary data must be enforced), and integrity (stored information can be manipulated, enabling model poisoning).

Shadow memory - the operational logs generated by each component - receives particular attention. These logs can serve as a data protection measure by enabling auditability and traceability. But they become a risk if they enable "hyper-surveillance" of users, if authorised personnel misuse them, or if they feed back into model training without appropriate legal basis. In 2025 alone, according to a footnote in the AEPD document, more than 200 million personal data breach notifications were submitted to the AEPD by Spanish data controllers, "which means that an average of four personal data breaches were communicated to each Spanish citizen."

Autonomy, Article 22, and decisions about people

The AEPD maps four levels of agent autonomy, from "agent proposes, human operates" through to "agent operates, human observes." These levels correspond directly to the intensity of GDPR scrutiny that should apply. At the lowest autonomy level, the agent functions essentially as a recommendation engine. At the highest, it executes consequential actions - modifying records, sending communications, entering into service agreements - with no human checkpoint.
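The relationship between autonomy level and human checkpoints can be expressed as a simple gate. This is our sketch: the article names only the lowest and highest of the AEPD's four levels, so the two intermediate labels below are placeholders, and the gating rule is an illustration rather than a requirement from the guide.

```python
# Illustrative gate mapping autonomy level to whether a consequential
# action (record changes, outbound messages, contracts) must pause for
# human review. Intermediate level names are our placeholders.
from enum import IntEnum

class Autonomy(IntEnum):
    PROPOSE = 1  # "agent proposes, human operates" (AEPD's lowest level)
    APPROVE = 2  # placeholder intermediate level
    OVERSEE = 3  # placeholder intermediate level
    OBSERVE = 4  # "agent operates, human observes" (AEPD's highest level)

def requires_checkpoint(level: Autonomy, consequential: bool) -> bool:
    """Pause consequential actions for a human below the highest tier."""
    return consequential and level < Autonomy.OBSERVE
```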

Automated decisions under Article 22 of the GDPR are a particular focus. The regulation restricts automated decisions that produce legal effects or similarly significant impacts on individuals, requiring either explicit consent, contractual necessity, or a specific national law. According to the AEPD, agentic systems make this more complex because "the decision-making processes of AI agents can generate more pronounced obstacles to achieving significant explainability." The document also distinguishes between Article 22 decisions - which carry specific legal obligations - and a broader category of "other automated actions" that may not meet the Article 22 threshold but still carry risk: data deletion, contract entry, communications to third parties.

The authority warns against automation bias - the documented tendency of humans to accept automated outputs without sufficient critical analysis - noting it is exacerbated when systems operate with high autonomy and insufficient transparency. This has direct relevance for the advertising sector, where agentic systems are now executing live media campaigns autonomously, often with limited human review of individual decisions.

Compounding errors are identified as a specific risk in long processing chains. A poorly constructed query to a database returns incomplete data; the agent treats it as complete; erroneous inferences follow; wrong tasks execute. Each step multiplies the error. "The accuracy of an AI agent decreases as a task requires more steps," the document notes, drawing on a structural weakness that applies whether the agent is managing a travel booking or processing a customer complaint.
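The arithmetic behind that structural weakness is easy to demonstrate. Assuming (our assumption, not a figure from the guide) that each step is independently correct with probability p, a chain of n steps succeeds with roughly p to the power n:

```python
# Back-of-envelope model of compounding errors in a processing chain:
# per-step reliability p compounds to roughly p**n over n steps.
# The 0.95 figure is illustrative, not taken from the AEPD document.
def chain_accuracy(p: float, n: int) -> float:
    return p ** n

print(round(chain_accuracy(0.95, 1), 3))   # single step
print(round(chain_accuracy(0.95, 10), 3))  # ten-step chain
print(round(chain_accuracy(0.95, 30), 3))  # thirty-step chain
```

A per-step reliability that sounds excellent (95%) leaves a ten-step chain right only about 60% of the time, and a thirty-step chain right about one time in five.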

Prompt injection: the attack the industry has underestimated

The AEPD dedicates significant space to prompt injection - a class of attack in which malicious instructions are embedded in data the agent processes, redirecting its behaviour. The document classifies prompt injection into direct attacks (where a user introduces inputs designed to override the agent's intended behaviour) and indirect attacks (where hidden instructions are placed in PDFs, emails, websites, or other sources the agent is expected to read).
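To make the indirect variant concrete, consider a deliberately naive defence. This is our sketch, not a recommendation from the guide: a pattern filter that flags instruction-like phrasing in retrieved content before it reaches the agent's context window. Real-world defences are layered and model-assisted; simple regex screens like this are easily bypassed, which is part of why the AEPD treats the attack class as serious.

```python
# Naive heuristic screen for indirect prompt injection: flag retrieved
# content that contains instruction-like phrasing before it enters the
# agent's context. Patterns are illustrative and trivially evadable.
import re

SUSPECT_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard .* (rules|policy)",
    r"you are now",
]

def looks_injected(text: str) -> bool:
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SUSPECT_PATTERNS)
```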

The scope of attack vectors catalogued is extensive. Zero-click attacks activate malicious instructions the moment an agent reads content - no user interaction required. Data exfiltration via URL parameters embeds stolen credentials or other data as image parameters, invisible to users. Session hijacking exploits the broad access agents hold across email, CRM systems, project management tools, and messaging services, allowing a single compromised command to move laterally through an organisation's entire digital infrastructure. Memory poisoning corrupts the RAG (Retrieval Augmented Generation) repositories an agent draws on, embedding biased or false content into persistent knowledge stores that then influence future decisions.

Long pipeline attacks are designed specifically for the Chain-of-thoughts architecture: "The adversary could introduce malicious information early in the Chain-of-thoughts, knowing that the content will go through several transformations, will be combined with legitimate data, and the agent will treat it as reliable information in later phases." The delayed trigger variant goes further, embedding instructions that lie dormant until a specific condition is met.

These are not theoretical. The document cites real attack patterns, and security research published in July 2025 identified fundamental weaknesses in MCP implementations that could expose marketing technology platforms to exactly these kinds of data risks.

The compliance obligations

Chapter V of the AEPD document works through the standard GDPR compliance framework as it applies to agentic systems, identifying where existing rules create novel challenges.

Determining controller and processor status becomes significantly more complex in multi-agent systems. An organisation deploying an agentic AI is typically the controller, while providers of external LLM services, workflow automation platforms, and cloud memory systems function as processors - or in some cases, independent controllers. The AEPD emphasises that this mapping must be documented and that data processing agreements must be in place.

Transparency is harder to achieve when decisions "emerge from chains of inference distributed among several agents and tools." Users interacting with an agent may have no visibility into which services were queried, what data was used, or how intermediate results shaped the final output. The document ties this to the concept of "technological fog" - the risk that stakeholders accept an AI-powered system as legitimate simply because it uses AI terminology, without understanding or evidence.

Data minimisation requires specific technical implementation. According to the AEPD, effective minimisation for agentic systems involves defining explicit access policies for every repository the agent can reach, cataloguing both structured and unstructured data sources, filtering data streams before they enter the agent's context window, implementing granular controls on what information persists in memory, and pseudonymising users wherever possible.
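Two of those steps, pre-context filtering and pseudonymisation, can be sketched together. The field names, the allow-list, and the salt handling below are illustrative assumptions, not prescriptions from the guide:

```python
# Minimisation sketch: keep only the fields a task needs, and replace the
# raw user identifier with a salted hash before anything enters the
# agent's context window. Field names and salt handling are illustrative.
import hashlib

ALLOWED_FIELDS = {"ticket_id", "issue_summary", "product"}  # per-task policy

def minimise(record: dict, salt: bytes) -> dict:
    """Filter a record against the task's allow-list and pseudonymise
    the user key so the agent never sees the raw identifier."""
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    if "user_id" in record:
        digest = hashlib.sha256(salt + record["user_id"].encode())
        out["user_ref"] = digest.hexdigest()[:12]
    return out
```

The salt must be held outside the agent's reach, otherwise the pseudonymisation is reversible by the very system it is meant to constrain.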

Data Protection Impact Assessments are required when processing is likely to result in high risk to individuals. The AEPD notes that almost any agentic deployment involving special categories of personal data - health information, political opinions, biometric data - or profiling at scale would likely trigger this obligation. The involvement of the Data Protection Officer is identified as "key" both in assessing proposed deployments and in ongoing governance.

What measures the AEPD recommends

The document's final section runs through a taxonomy of recommended technical and organisational measures. Several stand out for their specificity.

The "Rule of 2" appears in the risk management section. Without elaborating it as a formal standard, the AEPD discusses the precautionary principle in terms of layering human oversight requirements - particularly the "principle of four eyes" (requiring two authorised persons to approve sensitive actions) and the concept of escalation paths that define when agent actions must pause for human review.
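The four-eyes check reduces to a small predicate. This is our illustration of the principle, not code from the AEPD document; the point is that the two approvers must be distinct and authorised:

```python
# "Principle of four eyes" sketched as code: a sensitive agent action
# proceeds only after two *different* authorised people approve it.
def four_eyes_approved(approvals: list[str], authorised: set[str]) -> bool:
    valid = {a for a in approvals if a in authorised}  # dedupe + authorise
    return len(valid) >= 2  # two distinct authorised approvers required
```

Note that the set comprehension makes a duplicated approval from one person count only once, which is the substance of the rule.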

Memory compartmentalisation is recommended as a structural design requirement, not an optional enhancement. Different processing activities should maintain logically or physically separate memory stores, preventing data from one purpose contaminating decisions made for another. The document is explicit: "the use of the same agentic AI in the organisation for different processing without taking into account the need for data compartmentalisation between the processing could cause excessive processing of personal data."
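Structurally, compartmentalisation means the agent's memory API refuses cross-purpose reads rather than merging stores. A minimal sketch, with purpose names and the refusal mechanism as our assumptions:

```python
# Compartmentalisation sketch: each processing purpose gets its own
# memory store, and a read across purposes is refused, not merged.
class CompartmentalisedMemory:
    def __init__(self, purposes: set[str]):
        self._stores: dict[str, list[str]] = {p: [] for p in purposes}

    def write(self, purpose: str, item: str) -> None:
        self._stores[purpose].append(item)

    def read(self, purpose: str, requesting_purpose: str) -> list[str]:
        """Refuse reads where the requesting purpose differs from the
        store's purpose, preventing cross-contamination of decisions."""
        if purpose != requesting_purpose:
            raise PermissionError("cross-purpose memory access refused")
        return list(self._stores[purpose])
```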

Golden Testing Practices - described as "having a set of procedures and data designed, repeatable and prepared to compare the current result of a system with a reference result considered correct" - are recommended as a mechanism for maintaining explainability, catching behavioural drift, and satisfying auditability requirements under GDPR.
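The quoted definition translates directly into code. The test cases below are invented for illustration; the mechanism — a fixed input set with reference outputs, re-run to surface behavioural drift — is what the AEPD describes:

```python
# Golden-test sketch: re-run a fixed input set against reference outputs
# and report which inputs now diverge. Cases are invented examples.
def run_golden_tests(system, golden: list[tuple[str, str]]) -> list[str]:
    """Return the inputs whose current output diverges from the reference."""
    return [inp for inp, expected in golden if system(inp) != expected]

golden = [("2+2", "4"), ("capital of Spain", "Madrid")]
stable = lambda q: {"2+2": "4", "capital of Spain": "Madrid"}[q]
drifted = lambda q: {"2+2": "4", "capital of Spain": "Barcelona"}[q]
```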

On the question of credential management, the AEPD warns that "excessive permissions to agents are a critical factor as a broad-privileged model can be used as a pivot point between different systems," allowing a single compromised agent to access databases, internal services, and sensitive credentials across an organisation's entire infrastructure.
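The least-privilege alternative to a broad-privileged agent can be sketched as a scope check on every tool call. The scope names are hypothetical; the design point is that a narrowly scoped agent limits the blast radius if it is compromised:

```python
# Least-privilege sketch: every tool call the agent makes is checked
# against the narrow scopes it was granted. Scope names are illustrative.
def authorise(granted_scopes: set[str], required_scope: str) -> bool:
    """A broad-privileged agent holds many scopes and can be used as a
    pivot point; granting only what the task needs limits the damage."""
    return required_scope in granted_scopes

agent_scopes = {"crm:read"}  # narrowly scoped agent for a read-only task
```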

Why this matters for marketing and advertising

The AEPD document has direct implications for the advertising and marketing technology sector. AI agents are now running live media campaigns, managing customer journeys, personalising content in real time, and increasingly operating without human approval at the campaign execution level. Each of these functions involves the processing of personal data - targeting parameters, behavioural signals, purchase history, location data - at scale and at speed.

The IAB Tech Lab's agentic roadmap, published in January 2025, extended OpenRTB and AdCOM standards to accommodate AI agents in programmatic transactions. The Ad Context Protocol, launched in October 2025, introduced nine core tasks for autonomous campaign management - from inventory discovery through budget allocation to performance optimisation. Usercentrics acquired MCP Manager in January 2026 specifically to extend consent management frameworks into agentic workflows, recognising that traditional consent infrastructure was not built for systems that access data autonomously.

The AEPD document provides the most detailed regulatory mapping yet produced of how GDPR applies to these systems. It does not create new law, but it signals how the Spanish regulator - which has previously imposed fines of €1.8 million for unlawful personal data processing - expects organisations to approach compliance when deploying agents.

The European Commission's proposed GDPR amendments circulated in November 2025 would, if adopted, create a legitimate interest basis for AI training and narrow the definition of personal data. But those proposals remain contested and unresolved. In the meantime, the existing GDPR framework applies. The AEPD's conclusion is measured: "an implementation of agentic AI taking into account data protection by design makes possible to define agent-based personal data processing that incorporates privacy enhancing technologies that offer superior guarantees to other ways to implement the processing." Properly built, an agentic AI can itself become a privacy-enhancing technology - one that proactively monitors service contracts and terms of service for compliance gaps. Poorly built, it becomes the source of the breach.

Timeline

  • May 2018 - GDPR becomes applicable across the European Union (having entered into force in May 2016), establishing the unified data protection framework with which agentic AI deployments must now comply.
  • August 2024 - EU AI Act enters into force, creating additional compliance obligations for high-risk AI systems that intersect with GDPR requirements. EDPB recommends data protection authorities as AI market surveillance bodies.
  • October 2024 - European Data Protection Board publishes 2024-2025 work programme, including planned guidance on AI and generative systems. Details at PPC Land.
  • November 2024 - Anthropic launches Model Context Protocol, establishing the technical standard that would become foundational infrastructure for agentic AI deployment across advertising platforms.
  • December 2024 - EDPB publishes opinion clarifying that AI models trained on personal data cannot automatically be considered anonymous. Covered by PPC Land.
  • February 2025 - European Commission withdraws ePrivacy Regulation proposal after eight years of failed negotiations, alongside withdrawal of AI Liability Directive. PPC Land coverage.
  • May 2025 - Dutch Data Protection Authority launches consultation on GDPR preconditions for generative AI. Covered by PPC Land.
  • October 2025 - Ad Context Protocol launches with 23 participating advertising technology companies, establishing automated campaign management standards. PPC Land coverage.
  • November 2025 - European Commission circulates draft amendments to GDPR through Digital Omnibus initiative, proposing new legitimate interest basis for AI training. PPC Land coverage.
  • November 2025 - IAB Tech Lab publishes Agentic RTB Framework for public comment, defining how AI agents participate in real-time advertising transactions. PPC Land coverage.
  • January 2026 - Usercentrics acquires MCP Manager to extend consent management into AI-driven workflows. PPC Land coverage.
  • January 2026 - Yahoo DSP integrates agentic AI, enabling autonomous campaign execution. PPC Land coverage.
  • February 2026 - AEPD publishes "Agentic Artificial Intelligence from the Perspective of Data Protection," version 1.1, establishing 71-page compliance framework for AI agent deployments under GDPR.
  • February 2026 - AEPD issues formal warning to Tools for Humanity over planned biometric iris-scanning operations in Spain. PPC Land coverage.

Summary

Who: The Agencia Española de Protección de Datos (AEPD), Spain's national data protection authority, headquartered at Calle Jorge Juan 6, 28001 Madrid.

What: The AEPD published "Agentic Artificial Intelligence from the Perspective of Data Protection," version 1.1 - a 71-page technical and legal guide mapping GDPR compliance obligations for organisations that deploy AI agent systems. The document covers AI agent architecture, memory risks, prompt injection attacks, autonomy levels, automated decisions under Article 22 GDPR, and a catalogue of recommended technical and organisational measures including memory compartmentalisation, data minimisation policies, golden testing practices, and governance frameworks.

When: The document was published in February 2026.

Where: The guide applies to any organisation operating as a data controller or processor in the European Union that deploys agentic AI systems in the processing of personal data. The document references Spanish and EU regulatory frameworks and was produced by Spain's national supervisory authority.

Why: Agentic AI systems introduce structural privacy risks that differ in kind, not just degree, from those posed by conventional AI tools: they operate autonomously, access multiple internal and external services simultaneously, build persistent memory profiles of users, and execute consequential actions without mandatory human checkpoints. The AEPD published the guide to help controllers and processors make evidence-based decisions about deploying these systems in a way that complies with GDPR, manages risk to data subjects' rights and freedoms, and avoids both uncritical adoption and irrational rejection of a technology that is now embedded in business processes across public and private sectors.
