Google added Google-Agent to its official list of user-triggered fetchers on March 20, 2026, formalizing an identity for AI-powered systems hosted on Google infrastructure that browse the web on behalf of users. The update, published to Google's crawling infrastructure documentation, introduces both a new user agent string and a dedicated IP range file, completing a formal record that site owners can use to identify this category of automated traffic.

The addition marks a notable step in how Google publicly accounts for its AI agents. These are not conventional search crawlers that autonomously index the web. They operate only when a user actively triggers them. According to the documentation changelog, the Google-Agent user agent is "rolling out over the next few weeks," meaning webmasters may already be observing this traffic before it reaches full deployment.

What the documentation says

According to the official crawling infrastructure documentation, Google-Agent is used by agents hosted on Google infrastructure to navigate the web and perform actions upon user request. Project Mariner, Google's experimental AI agent capable of completing tasks inside a web browser, is cited directly as an example of the product associated with this fetcher.

The user agent strings differ depending on device type.

Mobile: Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Google-Agent; +https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent)

Desktop: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Google-Agent; +https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent) Chrome/W.X.Y.Z Safari/537.36

Both strings include a reference URL pointing to the Google developer documentation page. The W.X.Y.Z placeholder in the Chrome version number will reflect the actual browser version at the time of the request, consistent with how other Google crawler strings are formatted.
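For log analysis, a simple token check is enough to flag this traffic. The sketch below is illustrative: the Chrome version number stands in for the W.X.Y.Z placeholder, and, as the documentation warns, user agent strings can be spoofed, so a check like this identifies traffic rather than verifies it.

```python
import re

# Both documented strings carry the product token "Google-Agent" plus the
# reference URL, so a log filter can key on the token alone. This is
# identification only; spoofed strings will also match.
GOOGLE_AGENT_RE = re.compile(r"\bGoogle-Agent\b")

def is_google_agent_ua(user_agent: str) -> bool:
    return bool(GOOGLE_AGENT_RE.search(user_agent))

# The documented mobile string, with an illustrative Chrome version
# substituted for the W.X.Y.Z placeholder.
mobile_ua = (
    "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Mobile "
    "Safari/537.36 (compatible; Google-Agent; "
    "+https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent)"
)

print(is_google_agent_ua(mobile_ua))                              # True
print(is_google_agent_ua("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # False
```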

IP ranges for Google-Agent will not appear in the existing user-triggered-fetchers.json or user-triggered-fetchers-google.json files. According to the documentation, Google-Agent uses a separate file: user-triggered-agents.json. This distinction matters for security teams and server administrators who build automated verification rules based on published IP range files.

Robots.txt and the fetcher classification

Where Google-Agent sits in the three-tier taxonomy of Google's crawling infrastructure is important for publishers. According to the verify-requests documentation, Google classifies its crawlers and fetchers into three categories. Common crawlers, including Googlebot, always respect robots.txt for automatic crawls. Special-case crawlers, such as AdsBot, may or may not follow robots.txt depending on product agreements. User-triggered fetchers - the category into which Google-Agent falls - ignore robots.txt rules because the fetch was explicitly requested by a human user, not by an autonomous system.

This places Google-Agent alongside tools like Google Site Verifier, Google NotebookLM, Google Pinpoint, and Google Messages. All of them bypass robots.txt not as an act of disregard for publisher preferences, but because their operation is user-initiated rather than automated discovery.

The implications are substantive. A site owner who has blocked Googlebot from certain pages via robots.txt cannot assume those pages are invisible to Google-Agent. If a user directs an AI agent to retrieve content from a restricted URL, the fetcher will attempt to access it regardless of the robots.txt directive. According to the documentation, the general technical properties of Google's crawlers also apply to user-triggered fetchers - including the caution that user agent strings can be spoofed.
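To make the distinction concrete, here is a minimal sketch using Python's standard urllib.robotparser with a hypothetical robots.txt. A Disallow rule binds an autonomous crawler like Googlebot, but per the documentation it has no effect on a user-triggered fetcher such as Google-Agent, which never consults the file.

```python
import urllib.robotparser

# Hypothetical robots.txt blocking a /private/ path for all crawlers.
rules = """\
User-agent: *
Disallow: /private/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# An autonomous crawler honors the rule:
print(rp.can_fetch("Googlebot", "https://example.com/private/report.html"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/public/page.html"))     # True

# Google-Agent, as a user-triggered fetcher, never consults robots.txt,
# so the blocked URL remains reachable when a user directs the agent to it.
# Restricting it requires a server-level control (e.g. by user agent or
# verified IP range), not a robots.txt directive.
```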

How verification works

Site administrators can verify whether a request genuinely originates from Google-Agent using the same two-step DNS process that applies to other Google crawlers. According to the verification documentation, the process involves running a reverse DNS lookup on the accessing IP address, checking that the returned hostname belongs to googlebot.com, google.com, or googleusercontent.com, then running a forward DNS lookup to confirm the hostname resolves back to the original IP. For user-triggered fetchers controlled directly by Google, the reverse DNS mask matches google-proxy-***-***-***-***.google.com. For fetchers originating from user-owned infrastructure on Google Cloud Platform, it resolves to ***-***-***-***.gae.googleusercontent.com.
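The two-step process can be sketched with Python's standard socket module. The domain suffixes and reverse DNS masks below follow the verification documentation; the function itself is a simplified sketch (it assumes the hostname forward-resolves to a plain IP list and does not handle every IPv6 edge case).

```python
import re
import socket

# Domains and reverse-DNS masks from the verification documentation.
GOOGLE_SUFFIXES = (".googlebot.com", ".google.com", ".googleusercontent.com")
PROXY_MASK = re.compile(r"^google-proxy-\d+-\d+-\d+-\d+\.google\.com$")
GAE_MASK = re.compile(r"^\d+-\d+-\d+-\d+\.gae\.googleusercontent\.com$")

def verify_google_fetcher(ip: str) -> bool:
    """Two-step DNS verification: reverse lookup, domain check, forward lookup."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)            # step 1: reverse DNS
    except OSError:
        return False
    if not hostname.endswith(GOOGLE_SUFFIXES):               # step 2: domain check
        return False
    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]   # step 3: forward DNS
    except OSError:
        return False
    return ip in forward_ips                                 # must resolve back

# The masks distinguish Google-controlled fetchers from GCP-hosted ones:
print(bool(PROXY_MASK.match("google-proxy-66-249-88-1.google.com")))      # True
print(bool(GAE_MASK.match("35-233-1-2.gae.googleusercontent.com")))       # True
```

Note that a hostname matching the google-proxy mask also falls under the google.com suffix, so the suffix check covers both Google-controlled and GCP-hosted cases; the masks are useful when distinguishing between the two matters.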

For large-scale automated verification, administrators can match the crawler's IP against the published JSON files. The daily refresh schedule for Google's IP range files - introduced in March 2025 - means these lists update more frequently than the previous weekly schedule, reducing the window of exposure to potential IP spoofing.
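A membership check against such a file can be sketched with Python's standard ipaddress module. The JSON shape below mirrors Google's other published IP range files (a "prefixes" list of ipv4Prefix/ipv6Prefix entries); the CIDR blocks themselves are illustrative placeholders, not published Google ranges.

```python
import ipaddress
import json

# Illustrative stand-in for user-triggered-agents.json; the structure
# mirrors Google's other IP range files, but these CIDRs are placeholders.
sample = json.loads("""
{"prefixes": [
  {"ipv4Prefix": "192.0.2.0/24"},
  {"ipv6Prefix": "2001:db8::/32"}
]}
""")

networks = [
    ipaddress.ip_network(p.get("ipv4Prefix") or p.get("ipv6Prefix"))
    for p in sample["prefixes"]
]

def ip_in_ranges(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    # "addr in net" is False (not an error) when versions differ,
    # so mixed v4/v6 prefix lists are safe to scan in one pass.
    return any(addr in net for net in networks)

print(ip_in_ranges("192.0.2.55"))    # True
print(ip_in_ranges("198.51.100.1"))  # False
```

In production the file would be fetched from Google's published location on the daily refresh schedule rather than embedded inline.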

The broader context: a year of crawling documentation updates

The March 20 change is the latest in a sustained sequence of documentation updates Google has made to its crawling infrastructure over the past year. The changelog published alongside the documentation tells the story clearly.

On November 20, 2025, Google migrated its entire crawling documentation from Google Search Central to a new dedicated crawling infrastructure site. The move reflected the fact that Google's crawlers serve far more than search - they underpin Shopping, News, Gemini, AdSense, and now AI agents. The November 2025 update also introduced HTTP caching specifications, transfer protocol details, and expanded technical guidance for webmasters.

In December 2025, Google migrated additional documentation covering faceted navigation, crawl budget optimization, HTTP status code handling, and DNS error debugging. The December 2 update also expanded the Google Read Aloud user agent page, clarifying that Read Aloud uses stateless rendering and requires direct page access to read meta tags - a detail that had generated community questions.

The year began with a notable addition on January 21, 2026, when Google added Google Messages to the user-triggered fetchers list. That addition explained to site owners why they might see requests carrying the GoogleMessages user agent: the Messages platform generates link previews whenever users share URLs in chat threads. On February 3, 2026, Google moved file size limit documentation - establishing that all crawlers fetch only the first 2MB of most file types - from Googlebot-specific pages to the broader crawling infrastructure documentation, clarifying that the 2MB limit applies infrastructure-wide, not exclusively to Googlebot.

On March 3, 2026, the overview page about Google's web crawling was added - a resource page consolidating basic educational information about how Google's crawling works, prompted by recurring questions from site owners.

Then, on March 20, came the Google-Agent addition.

Also noted in the documentation: Google is experimenting with the web-bot-auth protocol, using the identity https://agent.bot.goog. No further technical detail is provided in the documentation itself, but the mention signals ongoing standardization work for how AI agents authenticate and identify themselves to web servers.

The web-bot-auth protocol detail

The web-bot-auth reference is brief but technically significant. It suggests Google is exploring a cryptographic or protocol-level mechanism for AI agents to prove their identity to web servers without relying solely on IP range verification or user agent strings - both of which, the documentation explicitly warns, can be spoofed. The https://agent.bot.goog identity would, if standardized, give publishers a more reliable way to confirm that a browser-navigating request is genuinely from a Google-hosted agent.

This connects to broader industry work on bot authentication. As AI agents increasingly navigate the commercial and informational web - completing purchases, retrieving content, and executing tasks on behalf of users - the infrastructure for distinguishing legitimate agent traffic from malicious bots becomes commercially important.

Why this matters for the marketing and publishing community

For publishers monetizing through digital advertising, the implications are real and growing: user-triggered AI agents can access page content without registering in analytics as human sessions. If a user delegates a research or browsing task to an AI agent, the agent visits the pages; the user does not. Page views generated by agents do not carry the same engagement signals as visits from human readers. This affects metrics used to value inventory, report campaign performance, and justify editorial investment.

Identifying Google-Agent traffic in server logs - now possible with the documented user agent string and IP range file - gives publishers the data layer needed to separate agent-generated requests from human visits. Without this separation, engagement rate calculations, session duration figures, and page-level audience metrics become harder to interpret accurately.
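A first-pass separation can be as simple as tallying requests by user agent token. The sketch below parses combined-format access log lines; the log entries and the token list are illustrative, and UA-based tallying should be paired with IP verification for anything security-sensitive.

```python
# Illustrative token list: Google-Agent plus other documented Google UAs.
AGENT_TOKENS = ("Google-Agent", "GoogleMessages", "Googlebot")

# Two illustrative combined-format log lines: one human browser,
# one Google-Agent desktop request.
log_lines = [
    '203.0.113.9 - - [20/Mar/2026:10:00:01 +0000] "GET /article HTTP/1.1" '
    '200 5120 "-" "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"',
    '192.0.2.7 - - [20/Mar/2026:10:00:02 +0000] "GET /article HTTP/1.1" '
    '200 5120 "-" "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; '
    'compatible; Google-Agent; +https://developers.google.com/crawling/'
    'docs/crawlers-fetchers/google-agent) Chrome/120.0 Safari/537.36"',
]

counts = {"agent": 0, "human": 0}
for line in log_lines:
    ua = line.rsplit('"', 2)[-2]  # last quoted field is the user agent
    kind = "agent" if any(t in ua for t in AGENT_TOKENS) else "human"
    counts[kind] += 1

print(counts)  # {'agent': 1, 'human': 1}
```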

The Search Off the Record podcast episode from March 12, 2026 clarified that Googlebot is not a standalone program but rather a name used by one team within a shared crawling infrastructure - the same infrastructure that now serves NotebookLM, Gemini, AdSense, Shopping, and AI agents. Google-Agent is the newest named client of that shared system.

For SEO practitioners, the robots.txt exclusion for user-triggered fetchers raises a practical question: if content is restricted from autonomous crawlers but accessible to user-triggered fetchers, what controls exist for AI agents acting on behalf of users? Currently, the documented answer is that user-triggered fetchers bypass robots.txt entirely. The web-bot-auth experiment may be an early step toward more granular controls, but no such mechanism is available at the time of this writing.

The pace of documentation additions over the past year - Google revamped its documentation structure as far back as September 2024, then migrated the entire site in November 2025, then added four new fetchers between October 2025 and March 2026 - reflects the speed at which new Google products are generating new categories of web traffic. Each product that allows users to supply URLs and retrieve content creates a new type of fetcher that webmasters need to identify and, if desired, accommodate or block at the infrastructure level rather than through robots.txt.

Summary

Who: Google, through its crawling infrastructure documentation team, with implications for website owners, SEO professionals, server administrators, and digital publishers.

What: Google added the Google-Agent user agent to its official list of user-triggered fetchers on March 20, 2026, published two device-specific user agent strings, introduced a new IP range file (user-triggered-agents.json), and noted experimentation with the web-bot-auth protocol using the identity https://agent.bot.goog. The fetcher is associated with AI agents running on Google infrastructure, including Project Mariner, that browse the web on behalf of users.

When: The documentation update was published March 20, 2026, with the Google-Agent user agent described as rolling out over the following weeks.

Where: The update appears in Google's crawling infrastructure documentation at developers.google.com, within the user-triggered fetchers reference page and the crawling documentation changelog.

Why: The Google-Agent fetcher formalizes a traffic category that will increasingly appear in server logs as AI agents perform browsing tasks on behalf of users. Documentation of the user agent string and IP range file gives site owners the tools to identify, track, and respond to this traffic - distinct from both conventional search crawlers and autonomous AI training bots.
