The Digital Regulation Cooperation Forum (DRCF) on 31 March 2026 published a foresight paper titled "The Future of Agentic AI," setting out the UK's most detailed cross-regulatory assessment yet of autonomous AI systems that act on behalf of users rather than simply responding to them. The paper was produced jointly by the Competition and Markets Authority (CMA), the Financial Conduct Authority (FCA), the Information Commissioner's Office (ICO), and Ofcom - four regulators whose combined remits cover data protection, financial services, online safety, and market competition. It does not set policy but explicitly aims to accelerate informed debate across industry, government, and civil society.
According to the paper, agentic AI is defined as "systems of AI agents that behave and interact autonomously to achieve their objectives." The distinction from conventional generative AI is technical and consequential. Where a standard AI model responds to a prompt, an agent can assess goals, decompose them into subtasks, retrieve real-time data, execute actions such as making payments, and retain memory of past interactions. The paper states that "information retrieval alone does not make a system an agent" - what matters is whether the underlying technology enables the system to "take in data, make decisions, and carry out actions."
The DRCF identifies five levels of AI autonomy, arranged from least to most independent. At Level 1 sits the basic tool - a reactive system with no initiative or memory, capable only of what it is explicitly asked. Level 2 is the assistant, which can plan a few steps using approved tools and propose actions while deferring execution to the user. Level 3, the operator, handles a complete bounded workflow once authorised - submitting an expense claim, running a data pipeline, opening and closing support tickets - while monitoring progress and reporting back. Level 4, the collaborator, remains mostly theoretical at the time of publication: a semi-independent system able to initiate multi-step work without constant prompts but still requiring human approval for high-impact decisions. Level 5, the autonomous actor, is described as "still largely theoretical" - a system that can make decisions, manage resources, and improve itself with minimal human involvement, constrained only by safety controls such as kill switches and spending limits.
Most practical deployments today, the paper notes, sit at Levels 2 and 3. Customer-support copilots that triage tickets before handing them to humans, workflow agents that automate expense claims and data-pipeline runs, and fraud detection systems in financial services are the current standard. The DRCF does not treat this as a reason for complacency. Significant venture capital investment amounting to billions of dollars for agent-focused startups, it observes, may signal expectations of broader deployment and systemic impact within the near future.
What agents actually do
The technical architecture of an individual AI agent, according to the paper, rests on three core capabilities: reasoning and planning, tool use, and memory. The reasoning layer interprets user intent and sequences the steps needed to complete complex tasks. Tool use enables the agent to interact with external software - web services, third-party apps, enterprise systems - using protocols such as the Model Context Protocol (MCP) and Agent2Agent (A2A), the adoption of which has, the DRCF states, "made implementing tool use much easier." Memory may be internal - stored in the agent's context - or external, drawing on SQL databases or vector stores as a form of extended recall.
Multiple agents can combine. When they do, the DRCF describes the resulting structure as an "agentic AI system" where a supervisor agent might wait for accommodation-booking agents to complete their tasks before delegating instructions to flight-booking agents - a form of automated orchestration that can pursue goals even when individual steps encounter failure. These systems can also have governance features to trace data and action flows between agents, enabling auditing, and permissions features to restrict what specific agents can access or do.
For the advertising and marketing sector, this architecture is already attracting intense attention. The Ad Context Protocol launched by Scope3, Yahoo, PubMatic, Swivel, Triton Digital, and Optable in October 2025 positioned a unified open-source interface through which AI agents could discover inventory, compare pricing, and activate campaigns without custom integration for each platform. The launch divided the industry: supporters saw it as necessary infrastructure, while critics questioned whether it addressed the foundational transparency problems in programmatic advertising. As IAB Tech Lab CEO Anthony Katsur warned in December 2025, "Protocols don't solve for misaligned incentives and bad actors." The DRCF's paper provides the regulatory framework through which those concerns are now likely to be examined.
The opportunity case
The DRCF is not presenting agentic AI as a threat to be suppressed. The paper makes a sustained argument for its economic value, particularly for consumers and businesses managing routine transactional tasks. It describes the consumer promise as a "delegation layer" between people and the digital services they use: instead of navigating multiple websites, forms, logins, comparison tables, and customer-service queues, an agent could translate a person's intent - "sort my bills," "renew this policy," "find me a holiday within X budget and book it" - into a sequence of steps across tools and services, executed autonomously.
A large-scale study cited in the paper found productivity improvements of around 14 to 15 per cent in issues successfully resolved per hour from a generative AI assistant deployed in customer support, with particularly large gains for less experienced workers. The UK Government Digital Service's trial of Microsoft 365 Copilot across 20,000 staff reported average self-reported savings of 26 minutes per day. Allianz has launched an agentic AI solution to automate food spoilage claims using seven specialised AI agents that autonomously retrieve evidence from multiple sources and collaborate to process each claim.
The CMA, according to the paper, has built and deployed agentic AI to detect consumer harms such as drip pricing at scale. The authority's bid-rigging detection work addresses procurement representing more than £300 billion of UK public spending each year.
Risks: old problems, new scale
The paper does not separate the opportunity case from a systematic account of risk. Several categories receive detailed treatment.
Data protection is flagged as an area where agentic systems can exacerbate existing problems rather than create entirely new ones. Agents often require access to large volumes of personal and operational data, shared across multiple agents and integrated with external tools. The ICO has published its own Tech Futures report on agentic AI and stresses that organisations must maintain traceable logs and meaningful human involvement in decisions with legal or similarly significant effects. The data minimisation principle - requiring organisations to use only data necessary for the specific processing purpose - applies to agentic systems and, according to the paper, may be tested by the temptation to give agents "broad or unfettered access to data and resources to improve performance or accuracy."
Spain's data protection authority, the AEPD, published a 71-page technical guide in February 2026 establishing one of the most detailed regulatory analyses of AI agent architecture yet produced by a European data protection authority, covering prompt injection vulnerabilities, automated decisions under Article 22 GDPR, and a catalogue of recommended technical measures. The DRCF paper arrives in this same regulatory moment, and sits alongside an EU Parliament committee vote in March 2026 backing amendments to the AI Act that would push high-risk compliance deadlines to December 2027 and August 2028.
Algorithmic collusion receives extended treatment in the DRCF paper - and it is among the most technically specific sections. According to the paper, experiments have shown LLM-based agents colluding in multiple settings without explicit instruction. In price-setting, bidding markets, and financial markets, agents "repeatedly converged to supra-competitive prices and maintained them, even when the environment was noisy." LLMs were observed to "divide markets" and collude by dynamically adjusting resource allocation strategies. The paper acknowledges these results came from controlled experimental settings rather than real-world deployments, but states this "may warrant caution in deploying agents in roles like pricing." The CMA published separate analysis on AI and collusion on 4 March 2026.
The potential for hidden communication strategies between agents is raised as a speculative but technically grounded concern. According to the paper, "AI systems can be covertly trained to hide messages within ordinary text, without the user knowing." Emergent communication protocols faster than natural language - including human-developed examples like the gibberlink and Agent2Agent protocols - could enable covert coordination at network scale that is difficult for regulators or businesses to detect.
Action bundling is a risk specific to agentic systems. An agent executing a purchase might simultaneously pull personal data from several sources, compare products, make a recommendation, accept terms, initiate payment, share data with third parties for fulfilment, and send confirmation messages - without the user experiencing each step as a separate decision. The question this raises is whether consumers have genuinely understood what they have tasked the agent with, and at what point they are re-engaged in the process. The paper identifies this as an area where consumer protection law applies directly.
Cybersecurity risks are treated as both a benefit and a liability. Agentic AI may enhance detection and response to threats. But agents granted broad permissions over sensitive data - emails, browser history, customer records - create expanded attack surfaces. Prompt injection attacks, where malicious instructions are hidden in content that the AI processes, are identified as a well-known risk that is amplified in agentic systems because of their ability to autonomously ingest content from multiple sources including the open internet. The paper notes that a group recently used agentic AI to perform 80 to 90 per cent of an attack lifecycle.
Regulatory overlap - and coordination
A single agentic deployment can simultaneously trigger concerns across all four DRCF regulators. The paper illustrates this through a hypothetical large UK retailer deploying an autonomous customer assistant that recommends products, handles returns, applies loyalty discounts, and integrates with payment processors, credit reference agencies, and couriers.
From an ICO perspective, the automated decisions involved - offering credit, applying discounts - may trigger data protection provisions on solely automated decisions with legal or similarly significant effects, requiring meaningful human involvement. The FCA's Consumer Duty, which already applies to financial services firms using agentic AI, requires demonstration of good outcomes for consumers. Ofcom's remit could be engaged if the assistant retrieves information from multiple websites to compare pricing, potentially constituting a regulated search service under the Online Safety Act. The CMA's competition and consumer protection powers are relevant if the agent steers users away from rivals' offers or if its terms for subscriptions or cancellation are unfair.
According to the paper, all four regulators agree that "AI agents do not fall outside existing UK regimes: obligations around transparency, fairness, safety, consumer protection and competition continue to apply as Agentic AI develops." Agentic autonomy does not remove organisational responsibility for legal compliance.
The IAB Tech Lab's approach to fragmentation in agentic advertising - using containerised agents and standardised protocols - illustrates the practical challenge of governance across systems that operate across multiple platforms simultaneously. The DRCF's paper addresses the regulatory layer above those technical standards.
What comes next
The DRCF commits to horizon-scanning work in 2026 and 2027 across three areas: the future of interfaces between users, firms, and digital services; consumer robotics and physical AI; and the near-term consumer experience of new technological developments. Further research into consumer attitudes towards AI is also planned, alongside monitoring of how regulatory tools can support safe adoption.
Individual regulators have their own pipelines. The ICO is updating guidance on AI and automated decision-making, anticipated for publication in March 2026, and is developing a statutory Code of Practice on AI and automated decision-making. The FCA is evaluating results from cohort 1 of its Supercharged Sandbox, continuing AI Live Testing, and conducting the fourth edition of its AI/ML survey jointly with the Bank of England. Ofcom will publish the 2026/27 edition of its strategic approach to AI later this year. The CMA will continue its work on Agentic AI and consumers and its practical guidance for businesses using AI agents in customer-facing roles.
For marketing professionals, the practical implication is that agentic systems - already arriving in campaign management, customer support, audience targeting, and workflow automation - are operating within a regulatory framework that four major UK authorities have now formally described as applicable, cross-cutting, and under active development. The UC Berkeley framework for AI agents published in February 2026 and Spain's AEPD guide from the same month pointed in the same direction: governance requirements are accumulating faster than many deployments have anticipated.
Timeline
- May 2024 - German Conference of Data Protection Supervisors publishes first guidelines on AI and data protection, establishing early precedent for agentic systems. PPC Land coverage
- July 2025 - French CNIL finalises recommendations for AI system development under GDPR. PPC Land coverage
- October 2025 - Ad Context Protocol launches with Scope3, Yahoo, PubMatic, Swivel, Triton Digital, and Optable, sparking industry debate. PPC Land coverage
- November 2025 - IAB Tech Lab publishes Agentic RTB Framework version 1.0 for public comment. PPC Land coverage
- November 2025 - European Commission circulates draft GDPR amendments through Digital Omnibus, proposing legitimate interest basis for AI training. PPC Land coverage
- December 2025 - IAB Tech Lab CEO Anthony Katsur warns industry to fix transparency issues before embracing agentic AI. PPC Land coverage
- January 2026 - CMA proposes first conduct requirements under Digital Markets, Competition and Consumers Act, targeting Google's handling of publisher content in AI features. PPC Land coverage
- January 2026 - IAB Spain 2026 digital roadmap puts agentic AI at advertising's centre. PPC Land coverage
- February 2026 - Spain's AEPD publishes 71-page technical guide on GDPR risks in agentic AI deployments. PPC Land coverage
- February 2026 - UC Berkeley publishes framework as AI agents raise oversight concerns. PPC Land coverage
- 4 March 2026 - CMA publishes analysis on AI and collusion: frontiers, opportunities, and challenges.
- 12 March 2026 - IAB Europe publishes explainer on agentic advertising scaling. PPC Land coverage
- 18 March 2026 - EU Parliament committees adopt joint report on Digital Omnibus on AI, 101 votes in favour. PPC Land coverage
- 30 March 2026 - Academic paper published in International Data Privacy Law finds global regulators converge on AI training legal bases but diverge on application. PPC Land coverage
- 31 March 2026 - DRCF publishes "The Future of Agentic AI" foresight paper, co-authored by CMA, FCA, ICO, and Ofcom.
Summary
Who: The Digital Regulation Cooperation Forum, comprising the Competition and Markets Authority, Financial Conduct Authority, Information Commissioner's Office, and Ofcom - four UK regulatory bodies with overlapping jurisdiction over digital markets.
What: A foresight paper titled "The Future of Agentic AI," published on 31 March 2026, defining agentic AI systems, mapping a five-level autonomy spectrum, identifying emerging opportunities in consumer tasks and business operations, and cataloguing risks including algorithmic collusion, action bundling, prompt injection, data minimisation failures, and consumer rights challenges. The paper affirms that existing UK regulatory frameworks apply to agentic systems and calls for cross-regulator coordination as the technology scales.
When: The paper was published on 31 March 2026, drawing on a public call for views conducted through the DRCF Thematic Innovation Hub in Autumn 2025 and a series of internal cross-regulatory workshops.
Where: The paper addresses the UK regulatory landscape but has implications across any jurisdiction where agentic AI systems operate, including the EU, where parallel regulatory developments under the AI Act and GDPR reform are simultaneously underway.
Why: Agentic AI is moving from conceptual to operational faster than regulatory frameworks have been updated to address it. A single deployment can simultaneously trigger data protection, financial regulation, online safety, and competition concerns - across four separate regulatory remits. The DRCF paper is designed to establish a shared analytical baseline, avoid regulatory gaps or contradictions, and signal to businesses what compliance expectations will look like as autonomous AI systems become more prevalent in consumer-facing and enterprise contexts.