The OECD published a working paper that attempts something the technology industry has struggled to do on its own: define precisely what agentic AI is, how it differs from individual AI agents, and where the conceptual boundaries lie. Released on 13 February 2026 as part of the OECD Artificial Intelligence Papers series (No. 56), the document is titled "The Agentic AI Landscape and Its Conceptual Foundations" and runs to 34 pages of analysis, definitions, and adoption data. It arrives at a moment when advertising platforms, data companies, and regulators are simultaneously racing to deploy autonomous AI systems and struggling to agree on what those systems actually are.

The paper was prepared by Luis Aranda and Kasumi Sugimoto from the OECD's AI and Emerging Digital Technologies division, under the strategic direction of Audrey Plonk, Deputy Director of the OECD Directorate for Science, Technology and Innovation, and Karine Perset, Deputy Head of that division. It was presented and discussed in November 2025 at the fourth Plenary meeting of the Global Partnership on Artificial Intelligence (GPAI) and at a dedicated agentic AI expert workshop attended by over 190 experts. The final document incorporates input from delegations representing Brazil, Canada, Chile, Colombia, Denmark, Germany, Greece, Israel, Japan, Mexico, Saudi Arabia, Singapore, Slovenia, Türkiye, the United Kingdom, and the United States, as well as Business at OECD and the Civil Society Information Society Advisory Council.

What the OECD paper actually says

The analysis opens with a finding that will resonate with anyone who has attended an advertising technology conference in the past twelve months: the terms "AI agents" and "agentic AI" are used interchangeably across the industry, creating significant conceptual confusion. According to the paper, while these concepts share foundational characteristics, they are not the same thing. The distinction carries real consequences for governance, procurement, and risk management.

The OECD defines AI agents as systems that can perceive and act upon their environment with a degree of autonomy, using tools as needed to achieve specific goals and adapt to changing inputs and contexts. Agentic AI, by contrast, generally refers to systems composed of multiple co-ordinated AI agents that can break down tasks, collaborate, and pursue complex objectives autonomously over extended periods. Agentic AI systems are designed to operate in more open-ended, less predictable physical or virtual environments and to function with minimal human supervision.

That distinction - between a single agent and a system of co-ordinated agents - is central to the paper. It has direct implications for how liability is assigned, how oversight is structured, and how standards bodies write technical specifications.

The OECD uses its own established definition of an AI system as the analytical framework. According to the OECD Council Recommendation on Artificial Intelligence, an AI system is "a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." The paper maps definitions of AI agents and agentic AI against six key elements from that definition: objectives, outputs, autonomy, influence on environment, adaptiveness, and inference. A seventh element - data and input - appears in the framework but is classified as "occasional" in how frequently it appears in definitions of agents, suggesting the field treats data handling as assumed rather than defining.

Autonomy is not the same as agency

One of the paper's more technically precise contributions is its distinction between autonomy and agency. These terms are often used interchangeably in product marketing, but the OECD draws a clear line between them. Autonomy generally refers to a system's capacity to act without direct human involvement. Agency, by contrast, involves a system's capacity for independent goal formulation, long-term reasoning, and strategic adaptation. A thermostat or a basic autonomous vehicle, the paper notes, can operate independently yet lack the capacity to reason about goals or interact meaningfully with its environment.

The paper describes four levels of action autonomy: no-action autonomy, where the system can make recommendations but only a human decides whether to act; low-action autonomy, where the system suggests an action but proceeds only if a human approves; medium-action autonomy, where the system acts on its own unless a human steps in to stop it; and high-action autonomy, where the system acts entirely on its own without human involvement. This four-level taxonomy maps directly onto governance questions that the advertising industry is currently grappling with - specifically, how to assign accountability when an AI agent autonomously spends a budget, places a bid, or selects a target audience without a human reviewing each decision.
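The four levels above translate naturally into a gating rule. The sketch below is purely illustrative - the enum names and the `requires_human_approval` helper are assumptions, not anything specified by the OECD paper:

```python
from enum import IntEnum

class ActionAutonomy(IntEnum):
    """The OECD paper's four levels of action autonomy, lowest to highest."""
    NO_ACTION = 0  # system recommends; only the human decides whether to act
    LOW = 1        # system suggests an action; proceeds only if a human approves
    MEDIUM = 2     # system acts on its own unless a human steps in to stop it
    HIGH = 3       # system acts entirely on its own, no human involvement

def requires_human_approval(level: ActionAutonomy) -> bool:
    """True when a human must sign off before the system can act."""
    return level <= ActionAutonomy.LOW
```

In an advertising context, a platform might, for example, run routine bid adjustments at medium autonomy while keeping budget reallocation gated at low - which is exactly the kind of distinction the taxonomy makes it possible to write down.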

The OECD identifies three features as prevalent across most AI agent definitions: objectives, outputs (typically in the form of actions), and autonomy. Two features appear frequently but not universally: influence on environment and adaptiveness. Inference is also frequently referenced, though often implicitly rather than explicitly. Data and input appear as the least emphasized element - cited by only 4 of 18 sources reviewed, compared to 18 citing both objectives and outputs.
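The prevalence pattern described above can be restated as a small lookup table - an illustrative summary of the paper's findings, with the category labels ("prevalent", "frequent", "occasional") taken from the text and the helper function a hypothetical convenience:

```python
# How often each element of the OECD AI system definition appears
# across the reviewed definitions of AI agents, per the paper.
ELEMENT_PREVALENCE = {
    "objectives": "prevalent",
    "outputs": "prevalent",
    "autonomy": "prevalent",
    "influence on environment": "frequent",
    "adaptiveness": "frequent",
    "inference": "frequent",        # often referenced implicitly
    "data and input": "occasional",  # cited by only 4 of 18 sources
}

def elements_by_category(category: str) -> list[str]:
    """List the definitional elements falling under one prevalence category."""
    return [e for e, c in ELEMENT_PREVALENCE.items() if c == category]
```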

The architecture of agentic AI

The paper draws careful attention to what makes agentic AI architecturally different from a single agent. Agentic systems typically operate through distributed problem-solving: multiple specialized agents work together, with tasks decomposed and delegated across the system. According to the paper, inference in agentic AI is evolving from immediate generation to a sophisticated process of "deliberative reasoning" or "test-time compute," where systems engage in internal chains of thought as well as recursive multi-agent critique and self-reflection during inference.
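The decomposition-and-delegation pattern the paper describes can be sketched in miniature. The toy orchestrator below is a hypothetical illustration of the architectural idea only - the `plan` function, `Agent` class, and round-robin delegation are all assumptions, not how any production agentic system works:

```python
def plan(goal: str) -> list[str]:
    """Naive task decomposition: split a compound goal into subtasks."""
    return [task.strip() for task in goal.split(";")]

class Agent:
    """A specialized worker agent identified by name and skill."""
    def __init__(self, name: str, skill: str):
        self.name, self.skill = name, skill

    def handle(self, task: str) -> str:
        return f"{self.name} completed: {task}"

def orchestrate(goal: str, agents: list[Agent]) -> list[str]:
    """Decompose a goal and delegate subtasks across agents (round-robin)."""
    results = []
    for i, task in enumerate(plan(goal)):
        agent = agents[i % len(agents)]
        results.append(agent.handle(task))
    return results
```

The point of the sketch is structural: no single agent sees the whole problem, which is precisely why the paper treats a system of co-ordinated agents as a different governance object from one agent.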

This architectural complexity has implications for where the industry is headed. Recent initiatives including the Model Context Protocol (MCP) and the Agent-to-Agent protocol (A2A) represent attempts to create shared standards for connecting AI agents and applications to external tools, data sources, and systems. The paper also references an experimental platform called Moltbook - described as a decentralised social network designed exclusively for AI agents, where humans are "welcome to observe" - as an early demonstration of large-scale agentic interactions and emerging collective dynamics.

The paper describes agentic AI as operating within what it calls a socio-technical paradigm. Rather than treating these systems purely as technical tools, the OECD argues they are embedded in social contexts, operating alongside human, artificial, and institutional agents. This relational perspective leads to a more demanding set of design requirements: not just intelligence or autonomy, but the capacity for negotiation, co-ordination, and adherence to social norms. According to researchers Virginia Dignum and Frank Dignum, cited in the paper, the true benefit of agentic AI would lie in its ability to operate within and contribute to a social context, rather than simply following goals or maximising outcomes.

Developer adoption: the numbers

Perhaps the most practically relevant section for marketing professionals is the paper's analysis of adoption data. According to data cited in the paper, GitHub activity saw a 920% increase in repositories using agentic AI frameworks such as AutoGPT, BabyAGI, OpenDevin, and CrewAI from early 2023 to mid-2025. That figure, sourced from SuperAGI, reflects the pace of developer experimentation even before major advertising technology platforms began deploying agents at scale in late 2025.

The Stack Overflow Developer Survey provides a more granular view. The survey received more than 49,000 responses from 177 countries and covered 62 questions in total. It defines AI agents as "autonomous software entities that can operate with minimal to no direct human intervention using artificial intelligence techniques." Results show that about half of respondents are already using, or plan to use, AI agents in their work, while 38% have no plans to adopt them. The remaining respondents use AI exclusively in copilot or autocomplete mode.

Security and accuracy concerns are widespread. The vast majority of developers highlight opportunities to further strengthen the security, privacy, and accuracy of AI agents. More specifically, 56.1% of respondents strongly agreed they had concerns about security and privacy of data when using AI agents, with a further 25.3% somewhat agreeing. On accuracy, 57.1% strongly agreed they were concerned about information accuracy, with 29.8% somewhat agreeing. Those figures - representing over 80% of respondents across both dimensions expressing concern - sit in notable contrast to the confident deployment announcements coming from advertising platforms throughout late 2025.
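The "over 80%" figure follows directly from the survey percentages quoted above:

```python
# Combined agreement (strongly + somewhat) on each concern dimension,
# using the Stack Overflow survey percentages cited in the paper.
security_privacy = 56.1 + 25.3  # security and privacy of data
accuracy = 57.1 + 29.8          # accuracy of information

assert security_privacy > 80 and accuracy > 80
```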

Software engineering is the most common use case for AI agents among the developer population surveyed, with data and analytics following closely. Notably, 64% of respondents identifying as data scientists, engineers, or analysts are using agents primarily for data and analytics work - a figure directly relevant to marketing technologists building measurement and attribution workflows.

The most popular tools reveal a landscape where general-purpose LLMs still dominate: ChatGPT, GitHub Copilot, Google Gemini, Claude Code, and Microsoft Copilot lead in out-of-the-box agent usage. For orchestration and agent frameworks, Ollama, LangChain, LangGraph, Vertex AI, and Amazon Bedrock Agents are most cited. Memory and data management relies heavily on Redis, GitHub MCP Server, Supabase, and ChromaDB. Observability tools include Grafana combined with Prometheus, Sentry, Snyk, New Relic, and LangSmith.

Why this matters for the marketing industry

The OECD paper lands at a particular moment for the advertising technology sector. Throughout late 2025 and early 2026, platforms moved rapidly from announcing agentic capabilities to deploying them in live campaigns. Amazon launched its Ads Agent at the unBoxed conference in November 2025, automating campaign management across Amazon Marketing Cloud and the Amazon DSP. IAB Tech Lab unveiled its Agentic RTB Framework in November 2025, establishing standards for containerized agent deployment in real-time bidding. PubMatic launched AgenticOS in January 2026 with live campaigns running. Magnite embedded a seller agent into SpringServe to support the Ad Context Protocol in January 2026.

This deployment velocity arrived without any shared definitional framework. The Ad Context Protocol launched in October 2025 to immediate skepticism, with observers questioning whether the industry needed another protocol before fixing structural problems. IAB Tech Lab CEO Anthony Katsur characterized the current discourse as reaching "a fevered pitch" of hype in December 2025, warning that protocols alone cannot resolve dysfunction caused by misaligned business incentives. UC Berkeley's Center for Long-Term Cybersecurity published a risk management framework in February 2026 specifically addressing the governance gap as autonomous systems moved from testing to production.

Spain's data protection authority, the AEPD, published a 71-page guide in February 2026 examining how agentic AI creates new GDPR risks. The document introduced the concept of "BYOAgentic" - employees deploying agentic workflows through no-code interfaces outside formal governance structures - as an emerging compliance exposure. The AEPD's analysis and the OECD's working paper were published within days of each other, reflecting simultaneous regulatory and analytical attention to the same definitional gaps.

The OECD paper itself acknowledges this governance urgency. According to the document, as agentic AI systems become more capable and widely deployed, several areas merit further exploration. First, greater clarity on the different architectures underpinning agentic AI and its technical stack. Second, the development of relevant typologies distinguishing systems by domain of application, level of autonomy, adaptiveness, tool access levels, or capacity to influence their environment. Third, accountability, explainability, and transparency in complex socio-technical environments. The paper also flags challenges related to global disparities, sovereignty, and resource efficiency - including water and energy use - as areas requiring policy attention.

The OECD Expert Group on AI Futures had previously mapped AI trajectories in a November 2024 report drawing on 70 leading AI specialists. That earlier report outlined ten priority benefits and ten critical risks, establishing a policy framework that the February 2026 working paper now extends into the specific territory of autonomous agent systems.

One element the OECD paper does not address directly is what the advertising industry refers to as the "black box" problem - the difficulty of attributing specific outcomes to autonomous agent decisions when those decisions happen at speed, across multiple platforms, without legible audit trails. The Stack Overflow data showing 80%+ developer concern about accuracy and security suggests this concern is pervasive rather than marginal. Analysis published in January 2026 found that the promise of complete autonomy in advertising remains largely theoretical, with agencies maintaining approval workflows for brand-sensitive decisions rather than implementing wholesale automation.

What comes next

The OECD paper explicitly frames itself as the first project within a broader agentic AI workstream. Future analytical work could build on this foundation by developing policy-relevant typologies and improving empirical evidence on adoption and use across different contexts. The paper calls for cross-country indicators tracking development and adoption of AI agents, including job posting trends, reported incidents, investment activity, and research outputs.

The reference code for the working paper is DSTI/DPC/GPAI(2025)/18/FINAL, reflecting that it was finalized in 2025 before its February 2026 publication. It is published under the Creative Commons Attribution 4.0 International licence, making it freely available for use with attribution. Comments can be sent to the Directorate for Science, Technology and Innovation at 2 rue André Pascal, 75775 Paris Cedex 16, France.

For the marketing and advertising technology community, the most immediate value of the OECD paper is definitional clarity. When a platform claims to offer "agentic" capabilities, the framework now exists to ask specific questions: Is this a single AI agent or a system of co-ordinated agents? What level of action autonomy does it operate at? How does it handle task decomposition and delegation across extended timeframes? What infrastructure and protocols govern agent-to-agent interaction? The answers to those questions have direct operational, legal, and financial implications for anyone deploying autonomous systems to manage advertising spend.

Summary

Who: The OECD's AI and Emerging Digital Technologies division, led by authors Luis Aranda and Kasumi Sugimoto, with strategic direction from Audrey Plonk and Karine Perset, supported by input from 16 OECD member country delegations, over 190 expert workshop attendees, and contributors including Stuart Russell (UC Berkeley) and Francesca Rossi (IBM).

What: A 34-page working paper (OECD Artificial Intelligence Papers No. 56) that maps definitions of AI agents and agentic AI across academic and technical sources, distinguishes the two concepts using the OECD AI system definition as a framework, and presents adoption data from the Stack Overflow Developer Survey (49,000+ respondents, 177 countries) and GitHub repository activity showing a 920% increase in agentic AI framework repositories from early 2023 to mid-2025.

When: Published 13 February 2026. Presented and discussed in November 2025 at the GPAI fourth Plenary meeting and an expert workshop. Published under reference code DSTI/DPC/GPAI(2025)/18/FINAL.

Where: Published by OECD Publishing, Paris. The analysis draws on global sources and the adoption data covers developers from 177 countries. The work is part of the OECD Horizontal Project on Thriving with AI: Empowering Economies and Societies.

Why: Significant variation exists in how AI agents and agentic AI are understood across industry, academia, and policy. The lack of shared definitions creates governance gaps as autonomous systems are deployed at scale across advertising, finance, healthcare, and other sectors. The paper is intended as the first project within a broader OECD agentic AI workstream, providing a conceptual foundation for future policy-relevant typologies, accountability frameworks, and cross-country adoption indicators.
