Google quietly expands asset A/B testing to all Performance Max campaigns
Google extends Performance Max asset experimentation beyond retail campaigns, enabling advertisers to test creative combinations within single asset groups.
Google has expanded asset-level A/B testing capabilities to all Performance Max campaigns, moving beyond the retail-only experiments introduced in 2024. The beta feature allows advertisers to compare two distinct asset sets within single asset groups to determine optimal creative combinations.
Web marketer Dario Zannoni identified the expanded functionality on January 9, 2026, sharing documentation on LinkedIn that detailed the new testing framework. According to Google's help documentation, the feature enables advertisers to divide assets into three categories: control group assets, treatment group assets, and common assets that serve across both test variations.
The methodology differs from traditional campaign-level experiments by maintaining all testing within existing asset group structures. Common assets continue serving to 100% of campaign traffic alongside control and treatment assets distributed according to defined traffic splits, ensuring baseline creative elements remain consistent while variations undergo evaluation.
Testing framework requires minimum four-week commitment
Google's Experiment Guidance System calculates required test duration based on campaign characteristics, though the platform recommends a minimum four- to six-week testing period for statistical validity. According to the help documentation, "The end date is determined by the Experiment Guidance System, which calculates the necessary duration for statistically significant results based on the current campaign selection."
The extended timeline addresses Performance Max learning phase requirements and ad delivery stabilization periods. Tests running less than four weeks risk incomplete data collection during algorithm optimization cycles, potentially producing unreliable performance comparisons.
Traffic allocation between control and treatment groups remains customizable, allowing advertisers to weight traffic distribution based on risk tolerance and available impression volume. Unequal splits enable conservative testing approaches where treatment assets receive smaller traffic portions during initial evaluation phases.
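The downside protection of a conservative split is simple weighted-average arithmetic. The sketch below illustrates the idea with hypothetical conversion rates (the 4% control rate and 20% treatment underperformance are assumed example values, not Google benchmarks or the platform's own math):

```python
# Hypothetical illustration of how traffic splits bound downside risk.
# All conversion rates are assumed example values, not Google figures.

def blended_cvr(control_cvr: float, treatment_cvr: float, treatment_share: float) -> float:
    """Weighted-average conversion rate across the two test arms."""
    return (1 - treatment_share) * control_cvr + treatment_share * treatment_cvr

control = 0.040    # proven control assets convert at 4%
treatment = 0.032  # assume treatment underperforms control by 20%

conservative = blended_cvr(control, treatment, 0.20)  # 80/20 split
aggressive = blended_cvr(control, treatment, 0.50)    # 50/50 split

print(f"80/20 blended CVR: {conservative:.4f}")  # 0.0384, 4% below control
print(f"50/50 blended CVR: {aggressive:.4f}")    # 0.0360, 10% below control
```

If the treatment flops, the 80/20 split caps the campaign-wide hit at roughly a fifth of the treatment's shortfall, which is why conservative splits suit initial evaluation phases.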
Asset locking prevents mid-test modifications
The platform implements strict asset editing restrictions throughout active experiments. All assets within tested asset groups enter view-only mode when experiments begin, preventing additions, removals, or modifications until test completion.
According to Google's documentation, "When an experiment starts, you can't edit, add, or remove any assets in the asset group that's being tested until the experiment ends." The restriction ensures test validity by eliminating confounding variables that could corrupt performance comparisons.
Asset approval requirements apply to newly uploaded treatment assets. Materials violating Google Ads policies face disapproval and exclusion from experiments, potentially invalidating tests if critical creative elements fail review processes. Both control and treatment assets count toward asset group limits, requiring advertisers to manage total asset quantities within platform constraints.
Experiments are limited to a single asset group per test. Advertisers managing multiple asset groups within a campaign must run sequential experiments rather than parallel tests across different creative collections.
Application options determine campaign asset composition
Experiment completion presents two distinct application pathways. The "Add treatment assets to campaign" option incorporates tested B assets into active asset groups, expanding available creative inventory with validated performers. The "Keep control assets in campaign" option remains selected by default, preserving original A assets alongside newly added treatment materials.
Unchecking the default option removes control assets entirely, replacing existing creative inventory with treatment alternatives. According to the documentation, "If you uncheck this box, the 'A' set of assets will be removed from the asset group."
The "End experiment" option terminates tests without implementing changes. Treatment assets uploaded specifically for experiments get discarded, returning asset groups to pre-test configurations. This pathway enables risk-free experimentation where advertisers evaluate creative directions without committing to deployment.

Feature extends October 2024 retail experiment launch
Google initially introduced Performance Max asset testing for retail campaigns in October 2024, limiting functionality to product feed-based advertising. The retail-specific experiments enabled merchants to measure incremental impact from supplementary images, text, and video assets beyond standard product catalog materials.
The current expansion removes vertical restrictions, making asset comparison tests available across all Performance Max campaign types including lead generation, local store promotions, and online sales objectives. Non-retail advertisers gain equivalent creative testing infrastructure previously exclusive to e-commerce operations.
Performance Max automation has historically limited creative control granularity, with asset-level performance data aggregated rather than individually attributed. The asset testing capability addresses longstanding advertiser requests for controlled creative evaluation within automated campaign structures.
The testing framework complements existing Performance Max experimentation options. Campaign-level experiments compare Performance Max against alternative campaign types, while custom experiments enable multi-variant testing across audiences, bidding strategies, and campaign configurations.
Setup requires campaigns page experiment navigation
Advertisers access asset experiments through the Experiments section within the Campaigns menu. The setup process begins with selecting the plus button under the "All experiments" tab, followed by choosing "Assets" under testing variables and "Assets provided by you" under variable types.
Campaign type selection requires Performance Max specification before asset group identification. The control arm card displays existing assets automatically, while treatment arm cards enable selection or upload of comparison assets. According to the documentation, advertisers "can choose to add additional assets or completely switch to new assets in the treatment arm."
Experiment naming occurs during setup through the "Experiment name" field. Start dates default to the following day, with automated scheduling preventing same-day experiment launches. The platform prevents experiment creation if campaigns contain incompatible features, already have active experiments during selected dates, or use shared budgets outside experiment structures.
Policy violations and technical errors block experiment creation
Several technical conditions prevent experiment initialization. Campaigns removed from accounts cannot support experiments, requiring selection of alternative active campaigns. Shared budget assignments and portfolio bidding strategies incompatible with experiment frameworks must be removed before test creation.
According to Google's troubleshooting documentation, "If your original campaign already has a shared budget, is part of a portfolio bidding strategy or is using Smart Bidding Exploration, remove your campaign's shared budget."
Asset groups exceeding 15 assets reach platform maximums, requiring asset removal before experiment eligibility. Keyword policy violations prevent affected terms from copying to experiment structures, necessitating keyword editing before experiment creation proceeds.
The beta designation indicates potential functionality changes as Google refines the testing infrastructure based on advertiser feedback and performance data. Early adopters provide valuable usage patterns informing future development priorities and feature enhancements.
Testing infrastructure reflects broader experimentation strategy
Google's expansion of Performance Max testing capabilities aligns with platform-wide experimentation infrastructure development. The company reduced incrementality testing budget requirements to $5,000 in May 2025, lowering barriers for small and medium-sized advertisers seeking controlled measurement frameworks.
Guidance published in January 2025 on building marketing experimentation capabilities emphasized systematic testing as a core competency for performance marketing organizations. Asset-level experiments provide a tactical implementation pathway for the strategic testing philosophies outlined in that broader documentation.
The asset testing framework addresses creative optimization challenges inherent to fully automated campaign structures. Performance Max consolidates inventory access, bidding, and creative assembly into unified systems, limiting advertiser control over individual optimization levers. Structured experiments restore measurement rigor to creative performance evaluation within automated environments.
Asset comparison tests enable validation of creative hypotheses without full campaign restructuring. Advertisers testing new messaging themes, visual approaches, or video content formats can evaluate performance impact through controlled experiments before committing creative resources to untested directions.
The four to six week minimum testing duration reflects Performance Max learning phase requirements and statistical significance thresholds. Shorter tests risk premature conclusions drawn from incomplete algorithm optimization cycles, while extended tests provide confidence in performance differences attributable to creative variations rather than random fluctuations.
Common assets maintain consistency across test variations
The three-category asset structure separates tested variables from consistent baseline elements. Control and treatment groups contain assets undergoing direct comparison, while common assets serve all users regardless of test arm assignment. This structure enables focused testing of specific creative elements while maintaining brand consistency and message continuity.
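The serving logic described above can be modeled as a small sketch. This is an illustrative reconstruction based on the article's description only; the arm-assignment mechanism and asset names are assumptions, not Google's implementation:

```python
import random

# Illustrative model of the three-category structure: common assets serve
# every user, while control/treatment assets serve only their assigned arm.
COMMON = ["logo", "brand_headline"]          # serve to 100% of traffic
CONTROL = ["image_a1", "headline_a1"]        # "A" set under test
TREATMENT = ["image_b1", "headline_b1"]      # "B" set under test

def eligible_assets(user_id: int, treatment_share: float = 0.5) -> list[str]:
    """Deterministically assign a user to an arm, then return servable assets."""
    rng = random.Random(user_id)             # stable per-user assignment
    arm = TREATMENT if rng.random() < treatment_share else CONTROL
    return COMMON + arm

# Common assets appear for every user regardless of arm assignment.
assert all(a in eligible_assets(uid) for uid in range(100) for a in COMMON)
```

The key property is that the common set is unconditional: brand-baseline creative reaches everyone, so measured differences between arms are attributable to the control-versus-treatment assets alone.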
Common assets might include logo images, primary brand messaging, or established high-performing materials. Control and treatment assets typically represent competing creative directions, alternative messaging frameworks, or new visual approaches under evaluation.
Traffic split customization enables risk management through conservative test designs. Advertisers uncertain about treatment asset performance can allocate 80% traffic to proven control assets while exposing 20% to new variations, limiting potential negative impact from underperforming creative tests.
Equal 50/50 splits accelerate data collection by maximizing treatment asset exposure, reducing time required to reach statistical significance thresholds. The approach suits advertisers confident in treatment asset quality or willing to accept higher short-term risk for faster learning.
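The split-versus-speed tradeoff can be made concrete with a standard two-proportion sample-size formula. This is a generic textbook power calculation with assumed inputs (4% baseline conversion rate, 15% lift, 1,000 daily clicks), not the method Google's Experiment Guidance System actually uses:

```python
import math

def sample_size_per_arm(p1: float, p2: float) -> int:
    """Approximate per-arm sample size for a two-proportion z-test.

    z-values are hard-coded for alpha = 0.05 (two-sided) and 80% power.
    """
    z_alpha, z_beta = 1.96, 0.84  # standard normal quantiles
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

def days_to_significance(daily_clicks: int, treatment_share: float,
                         p1: float = 0.040, p2: float = 0.046) -> int:
    """Days until the slower-filling arm reaches the required sample size."""
    n = sample_size_per_arm(p1, p2)
    slower_share = min(treatment_share, 1 - treatment_share)
    return math.ceil(n / (daily_clicks * slower_share))

print(days_to_significance(1000, 0.50))  # ~36 days at a 50/50 split
print(days_to_significance(1000, 0.20))  # ~90 days at an 80/20 split
```

Under these assumed numbers, the 50/50 split clears significance in roughly five weeks while the 80/20 split needs closer to thirteen, which illustrates why equal splits suit advertisers prioritizing learning speed and why the platform's four-to-six-week minimum is often a floor rather than a ceiling.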
The asset locking mechanism prevents inadvertent test contamination through mid-experiment changes. Without editing restrictions, advertisers might modify assets during tests, invalidating comparisons by introducing uncontrolled variables into treatment or control groups. View-only mode ensures test integrity by freezing asset configurations for experiment duration.
Beta status indicates ongoing development
Beta designation signals active development with potential functionality adjustments as Google refines the feature. Early adoption provides access to cutting-edge testing infrastructure while accepting possibility of bugs, workflow changes, or temporary unavailability during development cycles.
Google typically maintains beta features for extended periods while gathering usage data and performance feedback. Video experiments, simplified in October 2024, remained in beta for months as the company iterated on testing frameworks based on advertiser implementation patterns.
Feedback mechanisms enable beta participants to influence feature development priorities. Advertisers encountering limitations or workflow inefficiencies can submit feedback through Google Ads interface, potentially driving future enhancements aligned with practical usage requirements.
The asset testing expansion follows Google's pattern of launching features in limited contexts before broader rollout. Retail-only availability during initial launch enabled platform refinement in controlled environment before exposure to diverse campaign types with varying optimization objectives and creative requirements.
Implications for creative strategy and campaign management
Asset-level experimentation capabilities fundamentally alter Performance Max creative strategy by enabling data-driven creative decisions within automated structures. Advertisers previously relying on platform-level optimization signals gain granular performance validation for specific creative hypotheses.
The testing infrastructure supports iterative creative development processes. Initial experiments identify high-performing asset characteristics, informing subsequent creative production prioritizing validated elements. Continuous experimentation cycles enable progressive creative optimization rather than static asset deployment.
Creative teams benefit from concrete performance feedback on specific visual approaches, messaging frameworks, and video content styles. Asset experiments provide objective performance data informing subjective creative decisions, aligning artistic direction with quantifiable business outcomes.
The expanded testing availability democratizes creative optimization capabilities previously limited to retail advertisers. Lead generation campaigns, local service providers, and B2B Performance Max implementations gain equivalent creative testing infrastructure, enabling cross-vertical application of structured experimentation methodologies.
Subscribe PPC Land newsletter ✉️ for similar stories like this one
Timeline
- October 2023: Google launches Performance Max Uplift experiments
- May 2024: Google announces in-campaign experimentation for Performance Max
- October 2024: Google introduces asset experiments for retail Performance Max campaigns
- October 2024: Google simplifies video experiments for creative performance testing
- January 2025: Google publishes comprehensive guide on building marketing experimentation capabilities
- May 2025: Google reduces incrementality testing budget requirements to $5,000
- January 9, 2026: Web marketer Dario Zannoni shares Google's expanded Performance Max asset testing documentation on LinkedIn
Summary
Who: Google Ads platform, serving Performance Max advertisers across retail and non-retail verticals, with feature discovery attributed to web marketer Dario Zannoni.
What: Beta expansion of asset-level A/B testing to all Performance Max campaigns, enabling controlled comparison of two asset sets within single asset groups through control, treatment, and common asset categories with customizable traffic splits.
When: Identified January 9, 2026, expanding October 2024 retail-only asset testing feature to all campaign types with recommended minimum four to six week testing durations determined by Experiment Guidance System.
Where: Available through Experiments section in Google Ads Campaigns menu, accessible during asset group configuration with results viewable through standard experiment reporting interfaces.
Why: Addresses advertiser demand for creative performance validation within automated Performance Max structures, enabling data-driven asset optimization decisions previously unavailable due to aggregated performance reporting and limited control granularity in fully automated campaigns.