A new industry report published this week by Retail Economics, Amazon Web Services, Botify, and DataDome has put a precise number on how dramatically artificial intelligence has disrupted the underlying mechanics of retail discovery - and it is a number that should concentrate minds across search, e-commerce, and advertising alike. For every single visit OpenAI's systems deliver to a retail website, those same systems perform 198 crawls. Google, by comparison, generates one visit for every six crawls. The disparity, drawn from analysis of approximately 200 retail and e-commerce websites, illustrates how AI platforms interact with the web in a fundamentally different way from the search engines that have shaped digital marketing for the past two decades.

The report, titled "The Future of Search and Discovery: A strategic playbook to understand agentic commerce," is based on a consumer survey of 6,000 nationally representative respondents across the UK, US, and France, conducted in November 2025. Its conclusions range from quantitative measurements of bot traffic growth to qualitative assessments of consumer trust, and it arrives at a moment when agentic commerce infrastructure is being built at pace across every major platform.

Bot traffic has multiplied - and skewed analytics

The headline infrastructure finding is stark. According to Botify's analysis, AI-driven bot traffic across the approximately 200 retail and e-commerce websites examined increased 5.4 times during 2025, with the index moving from a baseline of 100 in the first quarter to roughly 640 by the fourth quarter. The growth was not linear. A particularly sharp acceleration occurred in the weeks preceding September 2025, when crawl intensity rose sharply as AI systems refreshed and ingested product data. Shortly after, OpenAI expanded its commerce-related capabilities, including agent-led shopping and in-chat purchasing features. Visits from OpenAI to retail websites then increased 200% month on month in September 2025 following that rollout - a direct illustration of the relationship between platform-level capability updates and referral traffic patterns. PPC Land reported on OpenAI's instant checkout launch on September 29, 2025, covering how the Stripe-backed Agentic Commerce Protocol enabled direct purchases within ChatGPT conversations.

The category-level picture is even more granular. According to the report, food and grocery experienced a 29-times increase in AI-driven bot traffic over the course of 2025, driven by the high volatility of prices and stock levels that make the category valuable for AI systems to monitor continuously. Home and DIY saw an 11-times increase. Electronics and appliances also saw significantly increased crawling. The divergence reflects a structural insight: AI systems treat retail categories differently based on how frequently data changes, not purely on commercial significance or retailer performance.

The scale and velocity of this automated traffic introduce a measurement problem that retailers have only begun to grapple with. According to the report, AI bot systems generate high-volume, concurrent requests that are not always distinguishable from human browsing in traditional analytics. The consequences are concrete. When Google removed the &num=100 parameter in early September 2025 - a technical shortcut that many tracking tools relied on - Botify's enterprise retail clients reported that search impressions fell by approximately 67%, while clicks stayed largely flat and average position appeared to improve. Click-through rates then appeared to rise by approximately 150%, not because performance had genuinely changed, but because the data was no longer contaminated by synthetic bot impressions. Much of the apparent growth in impressions had been driven by AI bots capable of making 100 or more simultaneous requests, not by real consumers.

Nearly 80% of websites exposed to agent spoofing

Perhaps the most operationally urgent finding in the report concerns security. DataDome, the bot and agent trust management company that co-produced the research, analysed 698,214 live websites using a spoofed "ChatGPT AI assistant" user-agent. The result: 79.7% did not block or challenge the impersonation attempt. Overall, 79.2% of sites returned a "200 OK" response code, meaning the spoofed agent was admitted without challenge, while only 17.2% returned a "403 Forbidden" response.

This is not an abstract vulnerability. According to the report, spoofable user agents and incomplete IP lists make it difficult for retailers to distinguish legitimate AI agents from stealth or human-driven automation using shared infrastructure. The practical effect is that malicious actors can clone weakly declared AI agents to exploit pricing, inventory, or checkout flows. The report notes that DataDome's threat research team, Galileo, recently identified that 80% of AI agents do not declare themselves properly when visiting websites. That figure underpins a broader argument that retailers face "skewed performance metrics that undermine commercial decisions and expose them to fraud."
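The verification gap that the DataDome test exposes can be narrowed by checking more than the user-agent string. The sketch below is a minimal illustration, not DataDome's method: it assumes a retailer loads the CIDR ranges each AI vendor publishes for its crawlers, and the ranges shown are documentation-reserved placeholders, not OpenAI's real addresses.

```python
import ipaddress

# Illustrative only: a real deployment would load the CIDR ranges that each
# AI vendor publishes for its crawlers. These are reserved example ranges.
OPENAI_RANGES = [ipaddress.ip_network(c) for c in ("192.0.2.0/24", "198.51.100.0/24")]

# User-agent tokens OpenAI documents for its crawlers.
AI_AGENT_TOKENS = ("GPTBot", "ChatGPT-User", "OAI-SearchBot")

def classify_request(user_agent: str, source_ip: str) -> str:
    """Label a request as 'verified-ai', 'spoofed-ai', or 'other'.

    A user-agent string alone is trivially spoofable, so a request only
    counts as a verified AI agent when its source IP also falls inside
    the vendor's published ranges.
    """
    claims_ai = any(token in user_agent for token in AI_AGENT_TOKENS)
    if not claims_ai:
        return "other"
    ip = ipaddress.ip_address(source_ip)
    if any(ip in net for net in OPENAI_RANGES):
        return "verified-ai"
    # Claims to be an AI agent but originates from unknown infrastructure.
    return "spoofed-ai"
```

Under this scheme, a request presenting a GPTBot user-agent from an unlisted IP is flagged rather than trusted, which is precisely the case the spoofing test probed.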

The risk is asymmetric. Blocking all AI traffic to protect against spoofing carries a different cost: if brands do not allow AI bots to find and use content on their websites, according to the report, those systems may find data elsewhere - from third-party review sites, forums, or competitors. PPC Land has tracked how Amazon chose the restrictive path, blocking AI bots from OpenAI, Anthropic, Meta, Google, and Huawei in August 2025, a strategy that runs in parallel with Amazon's development of its own proprietary AI shopping tools.

The 1-in-198 ratio and what it means for discovery

The visit-to-crawl ratio is worth dwelling on. It signals that for OpenAI's systems, the primary purpose of engaging with retail websites is not delivering visitors but rather ingesting, validating, and comparing information within their own interfaces. Discovery and evaluation increasingly happen inside AI interfaces before a consumer ever reaches a retailer's site. This challenges the foundational assumption of SEO: that being crawled translates, over time, into being visited.

The report frames this as a shift in where influence operates. According to Botify's data, Google drives one visit per six crawls, compared with one per 198 for OpenAI. In practical terms, a product that ranks highly in Google search still generates traffic directly. A product evaluated by an OpenAI agent may shape a recommendation without ever producing a referral visit. Conversion attribution, session metrics, and bounce rate become less meaningful as a result. Brainlabs reported earlier in 2025 that AI search visitors can be worth 4.4 times more than traditional organic traffic, but that premium depends entirely on the visitor arriving at a website in the first place - an outcome the 1-in-198 ratio suggests is far from guaranteed.
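A quick arithmetic sketch shows why the ratio matters for forecasting. Using hypothetical log counts alongside the report's stated ratios:

```python
# Crawl volumes observed in server logs (hypothetical counts, for illustration)
crawls = {"google": 10_000, "openai": 10_000}

# Crawl-to-visit ratios from Botify's analysis: 6:1 for Google, 198:1 for OpenAI
crawls_per_visit = {"google": 6, "openai": 198}

expected_visits = {
    platform: count / crawls_per_visit[platform]
    for platform, count in crawls.items()
}
# Identical crawl loads translate into very different referral volumes:
# roughly 1,667 expected visits from Google versus roughly 51 from OpenAI.
```

The asymmetry is the point: equal server load from the two platforms implies radically unequal measurable traffic, which is why crawl logs and referral analytics now tell different stories.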

The report introduces a taxonomy of AI-led traffic that distinguishes between training crawlers (such as GPTBot from OpenAI and ClaudeBot from Anthropic), live retrieval crawlers (such as ChatGPT-user and Perplexity-user, which fetch fresh content in real time), index-building crawlers (such as OAI-SearchBot and PerplexityBot), AI assistants and shopping agents (such as ChatGPT, Microsoft Copilot, Gemini, and Amazon Rufus), agentic browsers (such as Perplexity Comet, ChatGPT Atlas, and Gemini integrated into Chrome), and malicious or exploitative bots(unauthorised scrapers, competitive intelligence bots, and automated fraud traffic). Each category carries different implications for governance and access policy. PPC Land reported on OpenAI's revised ChatGPT crawler documentation in December 2025, which created different compliance standards for different crawler types.
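For compliant crawlers, that taxonomy maps naturally onto differentiated robots.txt policy. The fragment below is an illustrative sketch, not a recommendation from the report: it uses the user-agent tokens OpenAI documents for its crawlers and assumes a retailer that wants search and live retrieval access but not training access. Note that robots.txt only constrains well-behaved bots; the spoofed and malicious categories ignore it, which is why access policy needs to be paired with active bot management.

```
# Allow the index-building search crawler so products stay discoverable
User-agent: OAI-SearchBot
Allow: /

# Permit live retrieval so assistants can fetch fresh price and stock data
User-agent: ChatGPT-User
Allow: /

# Block the training crawler if content should not feed model training
User-agent: GPTBot
Disallow: /
```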

JavaScript invisibility and the structured data imperative

A separate technical finding deserves attention among search and e-commerce professionals. According to the report, most AI bots cannot read content rendered in JavaScript. If a brand's product data - pricing, availability, specifications, reviews - sits behind JavaScript, AI systems will see only a stripped-down version of the page. The report illustrates this with a comparison: a shoe product page viewed by a consumer shows size, colour options, materials, price, and promotional details; the same page seen by most AI bots shows only a handful of visible text labels and a stripped visual shell.
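A common mitigation is to serve key product attributes as server-rendered structured data, so that the information survives even when a crawler does not execute JavaScript. The fragment below is a minimal schema.org Product example with illustrative values; it is not drawn from the report.

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Trail Running Shoe",
  "sku": "SHOE-123",
  "material": "Recycled mesh",
  "offers": {
    "@type": "Offer",
    "price": "89.99",
    "priceCurrency": "GBP",
    "availability": "https://schema.org/InStock"
  }
}
```

Embedded in the page as a JSON-LD script block, this markup exposes price, availability, and materials to a crawler that sees only the raw HTML - exactly the attributes the report's shoe-page comparison shows disappearing behind JavaScript.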

The consequence is direct. If AI systems cannot access or interpret a retailer's data, that retailer may never appear in AI-mediated discovery. The report places structured, authenticated, and accessible data at the centre of its five identified forces of disruption, alongside discovery shifts, infrastructure requirements, LLM evolution, and measurement change. Poor metadata or inconsistent taxonomies can make products invisible to AI crawlers entirely. PPC Land reported in December 2025 on Google's documentation clarifications around JavaScript rendering for error pages, reinforcing the same underlying technical vulnerability.

The report identifies Answer Engine Optimisation (AEO) as the growth layer built on top of traditional SEO. Traditional keyword rankings, organic impressions, click-through rate, domain authority, and bounce rate - the standard dashboard of digital marketing performance - were built for a world of links and human clicks. They do not show how AI agents see, interpret, and act on content. The report proposes a new generation of performance metrics: agent inclusion rate (what proportion of products or pages are recognised and surfaced by AI agents), discovery visibility (presence rate across multimodal environments), engagement confidence index (how often consumers act on AI-surfaced results), structured-data coverage, trust signal strength, visibility-to-sale ratio, and discovery ROI index. These are emerging standards, not yet widely deployed, but the report argues they are necessary to understand commercial impact in an AI-mediated environment. An SEO expert released a related AI search content optimisation checklist in June 2025 that addressed similar requirements around server-side rendering and structured data coverage.
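The report names these metrics without publishing formulas. As one hedged interpretation, agent inclusion rate could be computed as a simple proportion of the catalogue that AI agents actually surface:

```python
def agent_inclusion_rate(catalogue_urls, ai_surfaced_urls):
    """Proportion of catalogue pages that appear in AI-agent answers.

    The report names this metric but does not publish a formula; this
    sketch assumes the simplest possible definition: pages surfaced by
    agents divided by all pages in the catalogue.
    """
    catalogue = set(catalogue_urls)
    if not catalogue:
        return 0.0
    return len(catalogue & set(ai_surfaced_urls)) / len(catalogue)
```

In practice the numerator would come from monitoring which products AI assistants cite or recommend, which is itself an emerging measurement discipline.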

Consumer adoption: 73%, but trust lags

The consumer survey component of the report draws from 6,000 respondents across the UK, US, and France surveyed in November 2025, with 2,000 per country. According to Retail Economics, 73% of consumers across the three markets have consciously used AI in some form over the past twelve months. Of those, 38% have used AI assistants specifically for shopping tasks including product ideas, suggestions, or comparisons. A further 34% have used AI features on retailer websites or apps. Twenty-one percent have used AI tools to make decisions or support purchases.

The US records the highest adoption rate at 73%, with France at 69% and the UK at 68% - closer to each other than might be expected given differences in digital culture. Among 18-to-24-year-olds, approximately one in four use AI assistants regularly and one in five use them day-to-day. Among those aged 55 and older, fewer than one in ten report day-to-day use. The gap widens further when examining AI use relative to other discovery channels: among 18-to-24-year-olds, AI assistants and social discovery channels exert influence that matches or exceeds traditional search engines in the discovery phase.

Trust, however, tells a different story. Thirty-two percent of consumers across the surveyed regions say they do not trust AI-enabled search and discovery. Whereas 38% feel comfortable with recommendations from tools like ChatGPT, Microsoft Copilot, and Gemini, far fewer are willing to let those systems act on their behalf. Nearly half - 49% - say discovery is something they want to do themselves, not something to outsource. The report describes this as a "key tension in the shift towards agentic commerce: people value the benefits afforded by AI, but don't yet feel fully confident to delegate decisions."

The trust gap is structured by age and income. Higher-income consumers exhibit greater confidence in AI systems, likely reflecting greater familiarity from work settings. Middle-aged, high-affluence consumers emerge as the most AI-trusting segment. The least affluent consumers show the lowest trust, with concerns about risk, accuracy, and control most acute in that group.

Which categories and missions face the earliest exposure

The report maps retail categories by consumer trust in AI-led discovery and willingness to use AI, weighted by typical spend. Electronics and appliances consistently lead across all three markets. Purchases in this category involve technical specifications, rapid product cycles, and meaningful price differences - exactly the conditions where AI assistance in comparison and shortlisting is most valued. Travel and leisure sits close behind. Clothing and footwear shows rising exposure, with large online ranges and frequent browsing creating fertile ground for AI-led personalisation.

Categories sitting lower on both axes include jewellery, beauty, and homewares - purchases that involve emotional, tactile, and personal judgements where consumers still seek human reassurance. Food and grocery shows strong regional variation: the US shows higher openness to AI assistance in grocery discovery, while France reflects a stronger food culture centred on freshness and physical inspection.

Shopping missions follow a parallel gradient. According to the survey, consumers show the highest willingness to delegate to AI for "considered or technical purchases" and for "buying gifts for others" - both missions involving uncertainty, high information load, and benefit from structured comparison. Routine replenishment sits at the bottom of the willingness scale across all three markets. The pattern is consistent: AI assistance is welcomed where decisions feel cognitively demanding, and resisted where habitual or emotional judgement dominates.

Amazon Rufus provides a commercial datapoint that anchors these projections. According to the report, more than 250 million customers used Rufus during 2025, with interactions up 210% year on year. Customers who use Rufus while shopping are over 60% more likely to make a purchase during that session. Amazon's full-year financial results subsequently confirmed that Rufus generated nearly $12 billion in incremental annualized sales during 2025, with more than 300 million customers using the tool throughout the year.

Four consumer personas and three readiness workstreams

The report identifies four distinct shopper personas in relation to AI-assisted discovery. AI-first optimisers (10% of the total, skewing younger at an average age of 38) use AI assistants as their primary discovery tool and show 47% complete trust in AI for research and comparison. Assisted explorers (55%, average age 42) welcome AI as a practical co-pilot for shortlisting and comparison but want to remain in the approval loop. Guarded adopters (16%, average age 53) use AI in controlled, low-risk ways but scrutinise results and hesitate before delegating meaningful decisions. Human loyalists (19%, average age 62) rarely use AI for shopping and require concrete evidence of benefit before adopting more meaningfully.

The strategic section of the report organises its recommendations into three readiness workstreams. The first concerns traffic policy for AI bots and agents - establishing which systems should be allowed, blocked, limited, or monetised, with continuous trust assessment and dynamic behaviour-based security. The second concerns data readiness and product information management - standardising product attributes, metadata, and taxonomy to create a single machine-readable source of product truth, and testing how AI crawlers actually extract and interpret that data. The third concerns on-site AI experiences - building conversational, voice, and embedded-agent user experiences that complete the discovery-to-purchase loop without losing the customer to a competitor's AI interface.

Cloudflare's launch of pay-per-crawl in July 2025 and its subsequent Markdown for Agents service in early 2026 represent infrastructure-level responses to exactly these workstreams, creating mechanisms for retailers to control and monetise AI access to their content while reducing the token cost of that access by approximately 80%.

The report concludes that early-mover advantages are emerging, but brands that delay action risk becoming harder to find, harder to trust, and easier to replace. The age of agentic search and discovery, it argues, will arrive gradually - but the transition is already underway, and accelerating.


Timeline

  • 2010-2014 - Keyword search dominates retail discovery; consumers type exact phrases into search engines with results ranked on keywords and page relevance.
  • 2014-2017 - Behavioural and personalised search takes hold; retailers use cookies and browsing history to introduce recommendation engines.
  • 2017-2019 - Mobile, social, and voice discovery expands search beyond text through smartphones, Alexa, Siri, Facebook, YouTube, Instagram, and TikTok.
  • 2019-2021 - Visual and contextual discovery arrives with Amazon Lens, Pinterest Lens, and Google Lens enabling image-based shopping.
  • 2022-2024 - Generative discovery begins; ChatGPT and Google AI Overviews transform search into dialogue, summarising and comparing products.
  • August 7, 2023 - OpenAI announces GPTBot; major websites begin implementing blocks within two weeks. Coverage on PPC Land
  • 2024 - Bot traffic exceeds human website visitors for the first time, according to Imperva data cited in Brainlabs research. Coverage on PPC Land
  • May 2024 - Google launches GoogleOther-Image and GoogleOther-Video crawlers for research and development data gathering. Coverage on PPC Land
  • Q1 2025 - AI-driven bot traffic baseline established at index 100 across approximately 200 retail and e-commerce websites analysed by Botify.
  • April 18, 2025 - Microsoft launches Copilot Merchant Program for retail integration. Coverage on PPC Land
  • April 28, 2025 - OpenAI introduces shopping features to ChatGPT, reporting over 1 billion weekly searches. Coverage on PPC Land
  • July 1, 2025 - Cloudflare launches pay-per-crawl service in private beta. Coverage on PPC Land
  • July 16, 2025 - SEO expert warns Google's AI could eliminate website clicks amid deteriorating crawl-to-visit ratios. Coverage on PPC Land
  • August 21, 2025 - Amazon blocks AI crawlers from OpenAI, Anthropic, Meta, Google, and Huawei. Coverage on PPC Land
  • Early September 2025 - Surge in AI crawl intensity at retail websites precedes OpenAI's commerce capability expansion; Google removes &num=100 tracking parameter, causing apparent 67% drop in search impressions.
  • September 29, 2025 - OpenAI launches Instant Checkout for ChatGPT with Stripe partnership and Agentic Commerce Protocol. Coverage on PPC Land
  • September 2025 - OpenAI commerce capabilities expand; visits from OpenAI to retail websites increase 200% month on month, per Botify analysis.
  • October 6, 2025 - Independent analyst questions commercial viability of agentic commerce despite ChatGPT checkout launch. Coverage on PPC Land
  • November 2025 - Retail Economics consumer survey of 6,000 respondents conducted across UK, US, and France.
  • November 13, 2025 - Google launches agentic checkout and AI shopping tools for the holiday season. Coverage on PPC Land
  • November 17, 2025 - Google Search Console adds annotations; Google's AI Mode gains agentic features including table reservations. Coverage on PPC Land
  • November 25, 2025 - UK research shows 85% of consumers planning AI-assisted holiday shopping would trust agents to place orders and pay. Coverage on PPC Land
  • Q4 2025 - AI-driven bot traffic reaches 5.4x the Q1 2025 baseline across retail websites analysed by Botify.
  • December 9, 2025 - OpenAI revises ChatGPT crawler documentation with significant policy changes. Coverage on PPC Land
  • December 18, 2025 - Google clarifies JavaScript rendering behaviour for error pages. Coverage on PPC Land
  • January 8, 2026 - Microsoft launches Copilot Checkout with PayPal, Shopify, and Stripe integration. Coverage on PPC Land
  • Early 2026 - Cloudflare launches Markdown for Agents, reducing AI token costs by 80%. Coverage on PPC Land
  • February 2026 - Amazon confirms Rufus generated nearly $12 billion in incremental annualised sales during 2025, with over 300 million users. Coverage on PPC Land
  • March 5, 2026 - Greenough Agency pitches the Retail Economics/AWS/Botify/DataDome report to PPC Land.
  • March 7, 2026 - "The Future of Search and Discovery: A strategic playbook to understand agentic commerce" published by Retail Economics, AWS, Botify, and DataDome.

Summary

Who: Retail Economics, Amazon Web Services, Botify, and DataDome published the report. The consumer research covers 6,000 nationally representative consumers in the UK, US, and France. Key data contributors include Botify's analysis of approximately 200 retail and e-commerce websites, and DataDome's security test of 698,214 live websites. AJ Ghergich, Global VP of AI at Botify, is available for comment on the findings.

What: A 35-page strategic report measuring the scale and commercial implications of AI-driven crawling and agentic discovery in retail. Core quantitative findings include: AI bot traffic grew 5.4 times during 2025; OpenAI generates 1 visit per 198 crawls compared to Google's 1 visit per 6 crawls; 79.7% of websites are unprotected against AI agent spoofing; 73% of consumers have used AI in some form; and 38% have used AI specifically for shopping tasks. The report introduces a new taxonomy of AI traffic types and a set of next-generation performance metrics for the agentic era.

When: The consumer survey was conducted in November 2025. The bot traffic analysis covers the full calendar year 2025. The report was published today, March 7, 2026.

Where: The report covers retail and e-commerce markets across the UK, US, and France for consumer data. The bot traffic analysis draws from approximately 200 retail and e-commerce websites globally. The security analysis of agent spoofing covers 698,214 live websites internationally.

Why: AI systems now function as gatekeepers between brands and consumers, shaping consideration sets before shoppers ever visit a retailer's site. The combination of rapidly escalating bot traffic, widespread vulnerability to agent spoofing, and the invisibility of JavaScript-rendered content to most AI crawlers creates material commercial risk for retailers who have not yet adapted their data infrastructure, traffic governance, and measurement frameworks to the agentic era. The report argues that early-mover advantages are already emerging and that delay increases the risk of being excluded from AI-mediated discovery entirely.
