The advertising industry's long romance with precision targeting is fraying. Speaking on the MadTech Mailbag episode of ExchangeWire's MadTech Podcast, published April 3, 2026, the company's CEO Rachel Smith and COO Lindsay Rowntree answered listener questions on three interlocking topics: the end of hyper-personalization as a dominant strategy, the credibility of tools designed to measure brand visibility inside large language models, and the realistic timeline for agent-to-agent media buying. Their answers, addressed to practitioners at Generation Media, eight&four, and Responsible Marketing Advisory, painted a picture of an industry somewhere between adjustment and reinvention.
Precision targeting and the quality-over-quantity shift
Lauren Whyman, director of client services at Generation Media, asked what the industry can do to create impact without laser-focused targeting - given the retreat of third-party cookies, the restrictions on mobile IDs, and the increasing fragility of even first-party data. The question has become, according to Smith, one of the most common topics in conversations with industry practitioners. "We're right in the midst" of building conference agendas and content that revolve around this very issue, she noted, placing the discussion in early April 2026.
The answer Smith offered was not a technical workaround. It was a reframing. The industry, she argued, is moving away from hyper-personalization and toward a model that prioritizes quality over quantity - one that looks more like pre-digital advertising than anything that has been normalized in the programmatic era. "We are absolutely moving away from this kind of hyper targeted" approach, she said, and the direction of travel is toward contextual signals, attention metrics, and high-intent environments. That is not necessarily a bad thing.
Rowntree built on this, pointing to a conceptual confusion that had been embedded in how agencies organized themselves. Precision and effectiveness had been treated as synonyms. They are not. "Effectiveness was conflated with the word precision," she said. "We felt like precision and effectiveness were the same thing but actually they're not the same thing. We just used precision and effectiveness together because precision is measurable and effectiveness is not." She pointed to the structural artifact of this confusion: agencies had precision teams, not outcomes teams. The naming shaped the ambition.
What replaces hyper-targeting is a more composite picture. Identity graphs, which Rowntree described as "super signal aggregators," allow marketers to bring signals together underpinned by deterministic data, without relying on third-party cookies. Probabilistic data, once treated as a dirty word in targeting circles, is being rehabilitated. It no longer means merely "probably" - it has become more scientifically grounded than it was a decade ago, according to Rowntree. Contextual targeting, brand lift studies, and attention tools are gaining credibility as the industry tests which proxies hold up in a less identified world.
Creative is another variable the conversation surfaced. Rowntree argued that the obsession with data and targeting had caused creative to be deprioritized - that the industry had focused on how to target a person, rather than how to communicate with that person. The shift away from hyper-personalization may, paradoxically, force more thoughtful creative decisions.
Smith added a broader structural observation: the open web, so long the default frame for programmatic discussions, represents a diminishing share of addressable media. CTV, retail media, audio, digital out-of-home, and the in-app ecosystem have all grown in sophistication. Measurement across these channels requires different tools, not a single unified solution inherited from open-web programmatic. "I firmly believe it's not just this drive towards legislative controls around data use and therefore the deprecation of third-party cookies that has created this environment. It's a general trend."
LLM visibility tools: two technical methods, limited standardization
Chloe Singleton of eight&four, who holds a quarterly column at ExchangeWire, raised what is becoming an urgent question for brand and SEO teams: brands are rushing to buy tools that promise to show how visible they are inside LLMs. How credible are those tools, and how far have they come?
Rowntree opened by clarifying what LLM visibility actually means in operational terms, acknowledging it may not be on everyone's radar. The core problem is straightforward. If a user queries an LLM - "what is the best DSP for a mid-size brand?" - and that LLM lists five names, the brands listed without a click-through have no way of knowing they appeared. Web analytics capture only traffic that arrives. Everything else is invisible. "If a tree falls in the wood and doesn't make a sound and no one's there to hear it, did it make a sound?" she said. "Nobody would know that that brand was ever listed apart from the person that saw the response."
The tools addressing this problem work through two distinct technical architectures. The first is an API wrapper: the tool pings an LLM using its API, submits a prompt, and records the response. This approach is scalable and cheaper to operate, but the response is synthesized rather than organic - it does not replicate how a real user would interact with the system. The second method is client mimicry: the tool drives a browser session, simulating human behavior, and scrapes the responses it sees. This is more representative of real usage, but it sits in a legal gray area relative to most LLMs' terms of service, and it does not scale in the same way.
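As a rough illustration of the API-wrapper approach, the sketch below submits a single prompt through the OpenAI Python SDK and records which tracked brands appear in the answer. This is a minimal sketch rather than how any of the named tools actually work: the brand names, prompt, and model are placeholders, and a production tool would also score sentiment, extract citations, and run many prompts at scale.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK (v1+)

# Hypothetical brand names and prompt, purely for illustration.
TRACKED_BRANDS = ["BrandA DSP", "BrandB DSP", "BrandC DSP"]
PROMPT = "What is the best DSP for a mid-size brand?"

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def check_visibility(prompt: str, brands: list[str], model: str = "gpt-4o-mini") -> dict[str, bool]:
    """Submit one prompt via the API and record which tracked brands the answer mentions."""
    response = client.chat.completions.create(
        model=model,  # placeholder; swap for whichever model the team monitors
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content or ""
    return {brand: brand.lower() in answer.lower() for brand in brands}

if __name__ == "__main__":
    mentions = check_visibility(PROMPT, TRACKED_BRANDS)
    for brand, seen in mentions.items():
        print(f"{brand}: {'cited' if seen else 'not cited'}")
```

Simple substring matching is deliberately crude - real tools have to handle brand-name variants - and, as Rowntree notes, a response captured this way is synthesized via the API rather than what an organic user would have seen.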
Tools currently operating in this space include LLM Refs, Profound, Semrush Sensor, and Adobe's LLM Optimizer. These platforms assess whether a brand was cited in a response, what sentiment surrounded the mention, the brand's share of voice relative to competitors, and signals relevant to generative engine optimization - GEO - which Rowntree described as "the next big hill to climb." Semrush documented its own experience with LLM visibility in October 2025, growing its AI share of voice from 13% to 32% in one month after discovering that ChatGPT mentioned every competitor but not Semrush itself, despite the platform's content being cited hundreds of times. Adobe launched its LLM Optimizer on October 14, 2025, reporting a 200% increase in LLM visibility for Adobe Acrobat after content adjustments, and a 41% increase in LLM-referred traffic to its own pages.
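Once mention data like this exists, share of voice is simple arithmetic: one brand's citations as a fraction of all brand citations observed across a set of prompts. A minimal sketch, using invented data in place of real responses (the input format matches the hypothetical check_visibility output above):

```python
from collections import Counter

def share_of_voice(mention_runs: list[dict[str, bool]]) -> dict[str, float]:
    """Each dict maps brand -> whether one LLM response cited it.
    Returns each brand's share of all brand citations observed."""
    counts = Counter()
    for run in mention_runs:
        for brand, cited in run.items():
            if cited:
                counts[brand] += 1
    total = sum(counts.values())
    return {brand: n / total for brand, n in counts.items()} if total else {}

# Invented data: three responses, three tracked brands.
runs = [
    {"BrandA DSP": True, "BrandB DSP": True, "BrandC DSP": False},
    {"BrandA DSP": True, "BrandB DSP": False, "BrandC DSP": False},
    {"BrandA DSP": True, "BrandB DSP": True, "BrandC DSP": True},
]
print(share_of_voice(runs))  # BrandA ≈ 0.50, BrandB ≈ 0.33, BrandC ≈ 0.17
```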
A critical caveat from Rowntree: these tools do not disclose upfront whether they use API wrappers or client mimicry. That transparency gap matters. For teams that want to stay within the terms of service of the LLMs they are monitoring, it is a question to ask before committing budget. "It's something for you to ask them how they actually get this data."
There are also deeper structural problems. Research published on PPC Land in November 2025 documented how personalization features in ChatGPT and similar systems cause the same prompt to generate different results for different users - an issue that fundamentally undermines the accuracy of any tool attempting to report a single, standardized response. The non-deterministic nature of LLMs means that 40% to 60% of cited sources change monthly, according to data referenced in Semrush's own analysis.
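One hedged way a team could quantify that churn is to re-run the same prompt repeatedly and measure how much the cited sources overlap between runs. In the sketch below the fetch step is a randomized placeholder standing in for a real API call (as in the wrapper sketch above); only the stability arithmetic is the point.

```python
import itertools
import random

def fetch_citations(prompt: str) -> set[str]:
    """Placeholder: in practice this would call an LLM and extract the domains or
    brands cited in the answer. Randomized here purely to exercise the maths."""
    pool = ["brand-a.com", "brand-b.com", "brand-c.com", "brand-d.com", "brand-e.com"]
    return set(random.sample(pool, k=3))

def citation_stability(prompt: str, runs: int = 10) -> float:
    """Average pairwise Jaccard overlap of cited sources across repeated runs of one prompt.
    1.0 means every run cites the same sources; lower values mean higher churn."""
    samples = [fetch_citations(prompt) for _ in range(runs)]
    overlaps = [
        len(a & b) / len(a | b) if a | b else 1.0
        for a, b in itertools.combinations(samples, 2)
    ]
    return sum(overlaps) / len(overlaps)

print(f"stability: {citation_stability('What is the best DSP for a mid-size brand?'):.2f}")
```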
Smith raised the question of whether LLMs themselves will eventually build the measurement tools. If these platforms move toward advertising, she argued, they will need to be able to tell brands and agencies where their brand appeared and what metrics surrounded that appearance. Google already has the infrastructure logic for this. Microsoft, with Copilot, is positioned similarly. Microsoft's February 2026 positioning of its grounding technology described GEO as the practice of understanding content participation in AI-driven experiences through citations and contributions to answers rather than traditional rankings.
Where does GEO sit in an agency? Rowntree posed the structural question directly. Paid visibility within LLMs logically falls to PPC teams. Organic visibility, the synthesized response the LLM generates without a paid trigger, looks more like an SEO function. But SEO ranking in Google does not translate to visibility in an LLM. The two systems look for different signals. "The LLM is looking for sentiment and that is what it is using to pick up and pull its results." That is categorically different from keyword authority and backlink profiles.
LinkedIn's own experience, documented in February 2026, showed the platform suffered significant LLM-driven traffic losses before forming a cross-functional AI Search Taskforce. The taskforce brought together SEO, PR, editorial, paid media, and brand teams. The experience illustrated that GEO is not cleanly owned by any single existing function.
Both Smith and Rowntree agreed that standards-based organizations - the IABs in particular - have an obvious role to play. "There's absolutely a role for IABs here to try to be working with LLMs or owners of LLMs to say well you know how can we try to implement some measurement standards in this environment." Without standardization, different tools will produce different numbers from the same prompt, and no brand or agency will be able to assess whether any of it is working. The situation is analogous to the pre-viewability era of display advertising: theoretically trackable, practically unreliable.
Agent-to-agent buying: infrastructure gaps and the autonomous driving analogy
Emily Roberts, head of digital at Responsible Marketing Advisory, asked how likely agent-to-agent buying is in the future and what the industry will look like as it gets there.
Smith was unequivocal that agent-to-agent buying is coming. The direction of travel is not in question. What is in question is the timeline. She referenced a conversation she had in September 2025 with Ali Nur Muhammad from Nodals, whose platform provides granular data on publisher inventory quality to support more precise matching against advertiser outcomes. In that conversation, Muhammad described what his system was building toward: a buy-side agent speaking directly to a sell-side agent, with the advertising transaction happening between them. "We're going to be operating in a world where there's going to be a buyside agent speaking to a sellside agent and that's where the advertising transaction is going to happen."
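To make that vision concrete, the toy sketch below stands in for a sell-side agent and a buy-side agent settling a single deal between themselves. It is purely illustrative: the message shapes, field names, and matching logic are invented for this example and do not reflect the Ad Context Protocol, Nodals' system, or any real agent framework.

```python
from dataclasses import dataclass

@dataclass
class BuyRequest:
    advertiser: str
    objective: str            # e.g. "in-store visits"
    max_cpm_gbp: float
    budget_gbp: float

@dataclass
class SellOffer:
    publisher: str
    context: str              # e.g. "parenting editorial"
    floor_cpm_gbp: float
    available_impressions: int

def sell_side_agent(request: BuyRequest, inventory: list[SellOffer]) -> list[SellOffer]:
    """Offer only the inventory whose floor price clears the buyer's ceiling."""
    return [o for o in inventory if o.floor_cpm_gbp <= request.max_cpm_gbp]

def buy_side_agent(request: BuyRequest, offers: list[SellOffer]) -> SellOffer | None:
    """Pick the cheapest qualifying offer; a real agent would weigh outcomes data, not just price."""
    return min(offers, key=lambda o: o.floor_cpm_gbp, default=None)

inventory = [
    SellOffer("PublisherA", "parenting editorial", floor_cpm_gbp=4.50, available_impressions=200_000),
    SellOffer("PublisherB", "sports news", floor_cpm_gbp=7.00, available_impressions=500_000),
]
request = BuyRequest("BrandA", "in-store visits", max_cpm_gbp=5.00, budget_gbp=10_000)

deal = buy_side_agent(request, sell_side_agent(request, inventory))
print(deal)  # the transaction settles agent-to-agent, with no human in the loop
```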
The analogy Smith used was autonomous vehicles. Driverless cars exist and are operating on roads in the United States right now. But full deployment is not imminent, because the safety standards, legal frameworks, and infrastructure requirements have not been fully worked out. The technology exists before the conditions for safe scaled deployment do. Agentic advertising buying is in a similar position. "We are absolutely not there yet. A lot still needs to happen on both sides of the advertising value chain."
What needs to happen is considerable. The Ad Context Protocol launched in October 2025 with six founding members, including Scope3, Yahoo, and PubMatic, aiming to establish technical standards for AI agents operating across advertising platforms. But as ad tech veteran Ari Paparo noted in a November 2025 analysis covered by PPC Land, the protocol faces obstacles that automation alone cannot overcome - data incentives remain misaligned between parties, and long-tail publishers lack the scale and data quality that agentic systems require. Swivel introduced agentic transaction capabilities in November 2025, connecting buyer and seller AI agents in what it positioned as the first revenue-generating agentic buy. Optable and PubMatic announced a partnership in March 2026 embedding Optable's Audience Agent directly into PubMatic's AgenticOS, offering a concrete early example of agents handling audience discovery and activation.
But early infrastructure demonstrations are a long way from a functioning market. Rowntree identified a structural problem that would compound in an agentic environment: the existing programmatic ecosystem carries unresolved problems of transparency, supply chain opacity, inconsistent definitions, and fragmented data quality. "If we put the current version of the open web into and hand it over to the agents, we're handing over all the problems that it has as well. And we can't expect the agents to fix the problems."
She went further. AI hallucination is a concrete risk in any environment where agents handle numerical outputs. She described asking Claude, Gemini, ChatGPT, and Perplexity the same detailed inheritance tax question using an identical prompt. "Every single LLM came back with a completely different calculation." In an advertising context, a misinterpretation of how a number is calculated "could mean your campaign go[es] from spending £100 to a million pounds." Guardrails, she argued, are not optional - they are the baseline requirement before any agentic system is trusted with budget.
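What such a guardrail might look like is easy to sketch: a hard daily spend cap and a maximum per-step change, with anything outside policy routed to human review rather than executed. The thresholds below are hypothetical and not drawn from any platform; the point is that an agent's numbers are validated before they touch budget.

```python
from dataclasses import dataclass

@dataclass
class SpendDecision:
    campaign_id: str
    current_daily_spend: float   # GBP
    proposed_daily_spend: float  # GBP, as produced by the agent

# Hypothetical policy values, purely illustrative.
HARD_CAP_GBP = 10_000.0       # never exceed this daily spend
MAX_RELATIVE_CHANGE = 0.25    # never move spend by more than 25% in one step

def apply_guardrails(decision: SpendDecision) -> tuple[bool, str]:
    """Return (approved, reason); anything rejected goes to a human instead of executing."""
    if decision.proposed_daily_spend < 0:
        return False, "negative spend proposed"
    if decision.proposed_daily_spend > HARD_CAP_GBP:
        return False, f"exceeds hard cap of £{HARD_CAP_GBP:,.0f}"
    if decision.current_daily_spend > 0:
        change = abs(decision.proposed_daily_spend - decision.current_daily_spend) / decision.current_daily_spend
        if change > MAX_RELATIVE_CHANGE:
            return False, f"change of {change:.0%} exceeds the {MAX_RELATIVE_CHANGE:.0%} per-step limit"
    return True, "within policy"

# The scenario Rowntree describes: a miscalculated figure jumps from £100 to £1m.
print(apply_guardrails(SpendDecision("cmp-001", 100.0, 1_000_000.0)))
# -> (False, 'exceeds hard cap of £10,000')
```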
Definitions are another unresolved problem. Rowntree raised viewability as an example. A marketer's definition of viewability may differ from their DSP's definition, which may differ again from a third-party measurer's definition. If all three are feeding inputs to an agent without standardized definitions, the agent will either arbitrate between them using its own training data or pick one arbitrarily. The IAB Tech Lab's Agentic RTB Framework, released for public comment in November 2025, is an attempt to address this class of problem at the standards level. Agentic AI infrastructure dominated the industry conversation through the end of 2025, with SSPs and DSPs adding new capabilities to their platforms, and the IAB Tech Lab scheduling a public webinar for January 28, 2026, to address the framework's implications.
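A small sketch makes the definitions problem concrete: the same impression, scored under two different viewability rules, produces two different answers. The thresholds are illustrative (the looser one resembles the commonly cited 50%-of-pixels-for-one-second display standard); the point is that an agent fed conflicting, unlabelled definitions has nothing principled to arbitrate with.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Impression:
    pct_pixels_in_view: float   # 0.0 - 1.0
    seconds_in_view: float

# Two definitions of "viewable" for the same display impression; thresholds are illustrative.
def mrc_style(imp: Impression) -> bool:
    return imp.pct_pixels_in_view >= 0.5 and imp.seconds_in_view >= 1.0

def strict_advertiser(imp: Impression) -> bool:
    return imp.pct_pixels_in_view >= 1.0 and imp.seconds_in_view >= 2.0

DEFINITIONS: dict[str, Callable[[Impression], bool]] = {
    "dsp_reported": mrc_style,
    "advertiser_internal": strict_advertiser,
}

imp = Impression(pct_pixels_in_view=0.6, seconds_in_view=1.5)
print({name: rule(imp) for name, rule in DEFINITIONS.items()})
# -> {'dsp_reported': True, 'advertiser_internal': False}
# Same impression, two verdicts: without agreed definitions the agent must
# arbitrate from its training data or pick one arbitrarily.
```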
Both Smith and Rowntree agreed that humans will remain essential for some time - not as transaction processors, but as interpreters of outputs and overseers of the data quality that feeds agentic systems. "One of those will be overseeing the data that goes into those agentic platforms," Smith said. "AI tools are only as good as what we're feeding them." Entry-level roles in an agentic future, Rowntree speculated, will look less like search executive positions - the role she herself once held - and more like agent oversight and quality control functions.
Smith raised a further implication: when everything becomes agent-to-agent and every campaign is run by AI, creative and data quality become the differentiators. The infrastructure ceases to be a competitive advantage because everyone uses it. "If I'm an agency pitching to a brand AI doesn't even need to come up in conversation because it is what it is - is how we do everything. So what else can we deliver? And that's exactly what you say - it has to be the insights, it has to be the strategy, it has to be the research, it has to be the human."
Timeline
- September 2025 - ExchangeWire CEO Rachel Smith speaks with Ali Nur Muhammad of Nodals about agent-to-agent buying infrastructure, describing the conversation as a preview of what a buy-side agent speaking to a sell-side agent might look like
- October 14, 2025 - Adobe launches LLM Optimizer, reporting a fivefold increase in citations for Adobe Firefly and a 200% increase in LLM visibility for Adobe Acrobat after implementation
- October 15, 2025 - Ad Context Protocol launches with six founding members including Scope3, Yahoo, and PubMatic, establishing technical standards for AI agent advertising interactions
- October 17, 2025 - Semrush discloses it grew AI share of voice from 13% to 32% in one month using a five-step optimization process after finding ChatGPT recommended every competitor but not Semrush
- November 3, 2025 - LLM tracking tool accuracy concerns emerge as SEO professional Lily Ray documents that ChatGPT's personalization causes tracking tools to report responses that almost never match real user results; industry expert Ari Paparo questions AdCP's viability for media buying
- November 4, 2025 - Swivel introduces agentic transaction capabilities for programmatic advertising, connecting buyer and seller AI agents with partnerships across SpringServe, FreeWheel, Publica, and Kevel
- November 9, 2025 - PPC Land reports on the accuracy crisis facing LLM tracking tools, documenting how personalization features and non-deterministic AI behavior undermine the reliability of monitoring data
- November 12-13, 2025 - IAB Tech Lab releases Agentic RTB Framework version 1.0 for public comment, establishing standardized specifications for deploying AI agents within real-time bidding infrastructure
- February 10-12, 2026 - Microsoft introduces Bing Webmaster Tools updates for GEO visibility and positions its grounding technology as powering nearly every major AI assistant in the market
- February 15, 2026 - Microsoft releases updated AI marketer's guide explaining how LLMs learn, what Retrieval Augmented Generation does, and how brands can optimize for both paid and organic LLM visibility
- March 12, 2026 - Optable and PubMatic announce integration of Optable's Audience Agent into PubMatic's AgenticOS, offering a live demonstration of the Ad Context Protocol across real programmatic infrastructure
- April 3, 2026 - ExchangeWire publishes MadTech Mailbag episode featuring CEO Rachel Smith and COO Lindsay Rowntree answering industry questions on targeting, LLM visibility tools, and agent-to-agent buying
Summary
Who: Rachel Smith, CEO of ExchangeWire, and Lindsay Rowntree, COO of ExchangeWire, speaking on the MadTech Podcast hosted by editor Aimee Newell Tarin. Questions came from Lauren Whyman of Generation Media, Chloe Singleton of eight&four, and Emily Roberts of Responsible Marketing Advisory.
What: A mailbag podcast episode covering three interconnected topics in ad tech: the retreat from precision targeting as a dominant advertising strategy and what replaces it; the technical credibility of tools measuring brand visibility inside large language models, including the distinction between API wrapper and client mimicry methodologies; and the timeline and infrastructure requirements for agent-to-agent programmatic media buying.
When: The episode was published on April 3, 2026. Events discussed range from September 2025 conversations about agentic infrastructure to tools launched in October and November 2025, and industry developments through the first quarter of 2026.
Where: The MadTech Podcast is produced by ExchangeWire, a UK-based ad tech media company. The topics discussed are global in scope, with specific references to programmatic infrastructure in the US and UK, and to international consumer behavior shifts across AI search platforms.
Why: The episode matters because it surfaces practitioner-level thinking on questions that are moving from theoretical to operational. LLM visibility tools are already being purchased by brands and agencies without standardized methodology or validated accuracy. Agent-to-agent buying is being piloted at the infrastructure level. And the move away from precision targeting as the default strategy is forcing a rethink of how the value of advertising is demonstrated - not in a distant future, but in campaigns running today.