Google today pulled back the curtain on the product and design process behind its search text ads, with two of the company's key ad experience architects detailing how creative decisions are made, tested, and eventually shipped to the search results page. The disclosure came through the "Ads Decoded" podcast, published on February 25, 2026, hosted by Ginny Marvin with guests Abby Butler, Product Manager on the Ads UI team, and Adam Bullock, UX Lead for Search Ads.

The episode is notable for its specificity. Rarely do Google product and design figures discuss internal processes at this level of detail, including what kinds of experiments fail, how asset flexibility is being reimagined under AI pressure, and what the "predicted to improve performance" phrase on ad notifications actually means.

What the Ads UI team actually does

A common misconception among marketers is that "Ads UI" refers to the Google Ads campaign management tool. Butler was direct in addressing this. Her team, according to her own description in the episode, is "responsible for essentially creating and optimizing ad experiences to deliver optimal user value, advertiser value" - specifically the text ads appearing at the top and bottom of search results pages, not the advertiser interface.

Bullock's role as UX Lead is the design counterpart to Butler's. His team partners with UX researchers, running ongoing foundational research into how real users engage with the search results page. This pairing of product management and UX design, operating in close collaboration, is the structure that governs every format change reaching the live search page.

The "five in the box" method

One of the more technically significant disclosures concerns how Google vets experiments before they move toward launch. Bullock described what he called a "five in the box" approach: every design session involves one representative each from data science, engineering, UX research, product, and design. According to Bullock: "an engineer will have a really great point that I would have never come up with or vice versa the other way around the room."

This framework reflects Google's stated principle of being hypothesis-driven. According to Butler, every test begins with some grounded rationale - prior experiments, user research results, or, in some cases, pure intuition combined with pattern recognition across earlier findings. The hypothesis is then stress-tested across disciplines before any live experiment begins. Even small-scale tests, Butler noted, receive meaningful traffic volumes at Google's scale.

The process from idea to launch is not linear. "Rarely you get it on the first shot," according to Bullock. Iteration cycles are standard, and the time between a first experiment and a public rollout can involve multiple rounds of testing and ramping. Bullock characterized it as "a lot of iteration" with diligence applied "at different altitudes, depending on where we are in the process."

How assets are being redeployed

One of the more technically granular parts of the conversation concerned how headline and sitelink assets interact on the search results page. Butler explained that headlines - traditionally appearing as the primary clickable text in a search ad - can now serve alongside sitelinks when predicted to drive better performance. According to her account, "up to two of your headline assets may serve along with your site links" as of early 2025.

This is a significant shift in how advertisers should think about their asset libraries. The conventional assumption that a headline will always serve in the headline position, and a sitelink will always serve as a sitelink, no longer holds. Assets are being treated as modular, deployable content that the system can place wherever relevance and predicted performance justify doing so.

Google's comprehensive guide to responsive search ads optimization explains the mechanics in detail, including the asset learning thresholds - individual assets require more than 500 impressions, and complete ads need over 2,000 impressions in the "Google Search: Top" segment over a 30-day period - before optimization becomes meaningful.
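Those published thresholds amount to a simple eligibility rule. The sketch below expresses them as a toy check; it is illustrative only and not Google code, and the function names and data shapes are invented:

```python
# Illustrative only: Google's published RSA learning thresholds
# expressed as a simple eligibility check. Function names and data
# shapes are hypothetical, not part of any Google API.

ASSET_MIN_IMPRESSIONS = 500    # per individual asset, 30-day window
AD_MIN_IMPRESSIONS = 2000      # per complete ad, "Google Search: Top", 30 days

def asset_has_enough_data(asset_impressions_30d: int) -> bool:
    """An individual asset needs more than 500 impressions."""
    return asset_impressions_30d > ASSET_MIN_IMPRESSIONS

def ad_has_enough_data(ad_top_impressions_30d: int) -> bool:
    """A complete ad needs more than 2,000 'Google Search: Top' impressions."""
    return ad_top_impressions_30d > AD_MIN_IMPRESSIONS

# Example: an asset with 650 impressions qualifies; an ad with 1,800 does not.
print(asset_has_enough_data(650))   # True
print(ad_has_enough_data(1800))     # False
```

The practical takeaway for advertisers is that asset-level performance labels are only meaningful once these volumes are reached.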

The image asset dimension adds another layer. Butler described the trajectory from thumbnail image formats to collage formats and landscape formats, noting that these more expansive formats became possible as advertisers provided more image assets over time. The design team treats assets as raw material. As Bullock put it, "all it does is give us more buckets of paint to make some great things with."

Google's AI-powered asset changes from 2024 set the precedent: that was when Google first announced that responsive search ads could dynamically switch to a single headline - a departure from the previous minimum-of-two requirement - whenever the system predicted a performance improvement from doing so.

What "predicted to improve performance" actually means

The phrase appears frequently in Google's ad notifications and product documentation but has long drawn frustration from advertisers who find it opaque. Butler addressed it directly. According to her account, "predicted performance is really based on what our system knows about this format in this query, in this specific context." The system evaluates the probable outcome of a given format variation against all other eligible formats competing for the same placement, in the same context, at that moment.

The evaluation is not simply optimizing for click volume. Butler clarified that the system is "optimized to drive a lot of high quality clicks" and, where relevant, conversions matching the advertiser's bidding strategy. The granularity runs deep - format, query type, contextual signals, and historical data on similar combinations all feed into what the system predicts will happen. It is, as Butler acknowledged, "a pretty complicated system."
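Conceptually, the selection Butler describes resembles scoring every eligible format for the current query and context and serving the highest-scoring one. The sketch below is a toy illustration of that selection logic, not Google's actual model; the format names and scores are invented:

```python
# Toy illustration of "predicted to improve performance": among all
# formats eligible for a placement, serve the one with the highest
# predicted value in this query and context. All values are invented.

def pick_format(eligible_formats, predicted_value):
    """Return the eligible format with the highest predicted value.

    predicted_value: dict mapping format name -> predicted high-quality
    clicks/conversions for the current context (hypothetical numbers).
    """
    return max(eligible_formats, key=lambda f: predicted_value[f])

context_scores = {
    "headlines_only": 0.42,
    "headlines_plus_sitelinks": 0.57,  # "predicted to improve performance"
    "single_headline": 0.31,
}

print(pick_format(list(context_scores), context_scores))
# -> headlines_plus_sitelinks
```

The real system folds format, query type, contextual signals, and historical data into that prediction, which is why the same ad can render differently from one auction to the next.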

The hide sponsored results button

One of the more talked-about recent changes is the option for users to collapse all text ads on the search results page via a "hide sponsored results" control. PPC Land covered this when it launched on October 13, 2025, alongside the sticky "Sponsored results" grouping header that now persists at the top of the viewport as users scroll.

Bullock explained the design rationale. The team looked at how Google's organic search team was grouping results with a call-to-action at the end of a clustered section - a pattern already established in the broader search design system. The hide button emerged from asking how to apply that same pattern to ads while also giving users more direct control over their experience. According to Bullock: "instead of reinventing the wheel is let's find experience they already know, that they're familiar with."

Butler addressed the apparent paradox of a button that hides ads being described as valuable for advertisers. The argument is that users who choose not to hide ads - and most, according to Butler, do not hide them when they see relevant content - represent genuine engagement signals. The data generated from hide interactions also becomes one more lever for measuring user experience quality and identifying where the ad content is or is not meeting user expectations.

An AI-generated ad format that didn't make it

Not everything gets through. Butler disclosed one specific case: an "answer-seeking" ad format tested over the last couple of years, which used large language models to generate a direct answer to a user's question as ad copy. The premise was that standard text ads do not directly answer questions, so the team built a new UI around LLM-generated responses.

According to Butler: "the content was like superb quality and we just found it didn't compete with our current formats. So we tabled it." The format attracted positive reactions from advertisers interviewed during testing, but the performance data did not support a launch. Butler described it as one of the team's first AI-generated format explorations - and one that could return in a different form at a later point.

The disclosure matters because it illustrates that the volume of internal ideas is much higher than the volume of public launches. The testing filter is active and outcomes-based, not simply a queue for deployment.

AI Mode, conversational search, and what it means for text ads

The conversation shifted toward what AI Mode and AI Overviews mean for the future of search text ads. Google began testing ads in AI Mode during Q3 2025, and AI Mode has grown to more than 75 million daily active users since its March 2025 launch. Shopping ads in AI Mode were announced on February 11, 2026, two weeks before this podcast episode.

Butler noted that as users shift from single queries to multi-turn conversations, the number of touchpoints between user and advertiser increases. According to her: "instead of users coming and just submitting a single query, they're now having a conversation and there's multiple queries happening and that much more opportunity to connect these users with advertisers throughout this journey and experience."

Bullock identified simplification as the core design principle being imported from AI experiences into traditional search ads. The synthesis function of AI - taking complex multi-part queries and distilling them into relevant, structured responses - is being studied as a model for how ad experiences might evolve. The learning, according to Bullock, will "bridge back to SERP."

Direct Offers: how the pilot works

The episode addressed the Direct Offers pilot, announced January 11, 2026, as part of Google's Universal Commerce Protocol launch. The format allows retailers to present exclusive discounts - currently focused on percentage-based discounts, with bundles and free shipping planned - directly within AI Mode when a shopper signals high purchase intent.

The mechanics are straightforward: merchants configure relevant offers and unique coupon codes in their Merchant Center accounts. Google's AI then determines when displaying that offer is contextually appropriate. As an example from the episode: a consumer searches for "best headphones," finds a specific product through organic listings, and at that point, retailers stocking that product can surface a direct offer to help close the sale against competitors.

Two questions dominated community feedback on this feature. First, whether the discount would be shown to users who would have purchased anyway - an incrementality concern. According to the episode's framing, the goal is to help merchants close high-intent consumers who have not yet committed to a specific seller, not to offer discounts to confirmed buyers. Second, whether advertisers control what is shown: merchants set the offers and codes themselves, and Google's AI decides when to surface them, not what the offer contains.

The pilot remains limited to a small group of US advertisers. Google VP Dan Taylor confirmed in January that the company is working with a specific set of advertisers on the pilot and has not indicated a broader availability timeline.

Asset breadth as structural preparation

The through-line of the episode is asset diversity as infrastructure for an increasingly AI-mediated search environment. Whether text ads are serving on traditional SERP, appearing alongside sitelinks, showing within AI Overviews - which expanded to 11 countries on December 19, 2025 - or eventually being deployed within AI Mode conversations, the asset pool determines what the system has to work with.

Butler and Bullock both emphasized that impressions per individual asset are not the right optimization metric. The system may deploy a specific asset infrequently yet rely on its presence to unlock a format that would otherwise be unavailable. Google's technical documentation on how AI powers responsive search ads makes clear that the three-phase process - analyzing query context, assembling combinations, evaluating options - depends entirely on the depth of the available asset pool.
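That three-phase flow, and its dependence on asset depth, can be sketched as a toy pipeline. Everything here is a simplified illustration under invented scoring, not Google's implementation:

```python
from itertools import combinations

# Toy sketch of the documented three-phase RSA flow: analyze query
# context, assemble candidate headline combinations from the asset
# pool, then evaluate the options. Scoring is invented for illustration.

def analyze_context(query: str) -> set:
    """Phase 1: reduce the query to a bag of signal terms (simplified)."""
    return set(query.lower().split())

def assemble_combinations(headlines: list, k: int = 2) -> list:
    """Phase 2: candidate headline pairings. A deeper asset pool
    yields more combinations for the system to choose from."""
    return list(combinations(headlines, k))

def evaluate(combos: list, signals: set) -> tuple:
    """Phase 3: pick the combination overlapping most with the context."""
    def score(combo):
        return sum(len(signals & set(h.lower().split())) for h in combo)
    return max(combos, key=score)

pool = ["Fast Shipping", "Wireless Headphones Sale", "Free Returns"]
signals = analyze_context("best wireless headphones")
best = evaluate(assemble_combinations(pool), signals)
print(best)  # the winning pair includes "Wireless Headphones Sale"
```

Note that three headlines yield only three pairings, while fifteen yield 105 - a crude way to see why a broader asset pool gives the system meaningfully more to work with.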

The direction of travel is toward more fluid asset deployment across a wider range of surfaces. The team is explicitly thinking about current assets in terms of future use cases that do not yet exist. As Bullock noted: "just because it's a headline and a headline only, but we can put it in some other experience that we're going to build in the future."

Summary

Who: Abby Butler, Product Manager on Google's Ads UI team, and Adam Bullock, UX Lead for Search Ads, speaking on the "Ads Decoded" podcast hosted by Ginny Marvin, Google's Ads Product Liaison.

What: A detailed disclosure of how Google designs, tests, and rolls out changes to search text ad formats - including asset flexibility in responsive search ads, the origins of the "hide sponsored results" button, the mechanics of "predicted to improve performance," a failed AI-generated ad format experiment, and how AI Mode and Direct Offers are shaping the next phase of search advertising.

When: The episode was published on February 25, 2026.

Where: The "Ads Decoded" podcast, produced by Google Ads. The developments discussed affect all advertisers running text ads on Google Search globally.

Why: As AI experiences including AI Overviews and AI Mode restructure how users interact with Google Search, the rules governing how text ads are assembled, placed, and measured are changing in ways that are not always visible from campaign dashboards. Understanding the design principles and testing processes behind these changes helps marketing professionals make better decisions about asset creation, campaign structure, and the degree of control they retain over how their brands appear across an increasingly fluid search environment.
