Google today formally clarified that its spam policies apply to all generative AI responses in Search - including AI Overviews and AI Mode - closing a documentation gap that had left open the question of whether the same enforcement mechanisms governing traditional web search results also extend to AI-generated summaries.

The clarification was published on May 15, 2026, as part of a documentation update to Google Search Central. According to Google's official changelog, the change involved clarifying "that our spam policies also apply to generative AI responses in Google Search," with the stated reason being "to make it clear that the spam policies apply to all of Google Search, including generative AI responses."

The update does not introduce new rules. It applies existing ones. Every policy already covering traditional blue-link search results - cloaking, scaled content abuse, link spam, site reputation abuse, inauthentic mentions, doorway pages, hidden text, scraping, thin affiliation, and more than a dozen others - now explicitly governs what appears inside AI Overviews and AI Mode as well. For practitioners who had been operating as though the generative AI layer existed in a separate policy space, the documentation removes that assumption.

Why the gap existed

AI Overviews launched at scale in 2024 and expanded rapidly across markets. The feature generates synthesised answers at the top of search results, drawing from indexed web content but presenting information in a format distinct from ranked links. AI Mode - a more expansive version of AI-driven search available in Google's interface - followed as a separate surface. Neither was explicitly named in the spam policies documentation until today.

The absence of explicit coverage created a practical question: if a site used tactics that violated Google's spam policies and appeared as a result in an AI Overview citation, was that a violation? The documentation said nothing. Google's enforcement history offered some guidance - the company's automated spam systems, including SpamBrain, are designed to operate at the ranking signal level rather than at the level of any individual search feature - but the policy language itself had not been updated to reflect the new surfaces.

The spam problem inside AI Overviews was documented publicly as early as May 2025, when practitioners identified a pattern of self-promotional listicles being cited as authoritative sources inside AI-generated answers. Creating an article that ranked a company as "the best" in its category - including on the company's own website - was sufficient to get that claim surfaced by AI Overviews. Lily Ray, VP of SEO and AI Search at Amsive, described the tactic at the time as "remarkably simple" and questioned why Google's fact-checking and consensus mechanisms were not filtering it out.

What the spam policies cover

The policies now confirmed to apply to AI Overviews and AI Mode span a wide range of practices. Several are directly relevant to how practitioners have been attempting to influence AI-generated responses specifically.

Scaled content abuse is defined in the policies as generating many pages "for the primary purpose of manipulating search rankings and not helping users." The documentation explicitly names "generative AI tools or other similar tools" as a means by which this abuse can occur. Creating large volumes of low-differentiation content using language models in hopes of appearing in AI-generated summaries falls within this definition. Sites hosting such content are instructed to exclude it from Search indexing. The policy also covers scraping feeds or search results to generate pages, and stitching content from different web pages without adding value - both tactics that practitioners have tested in AI search contexts.

The inauthentic mentions policy is addressed in the spam documentation with language that directly connects to the manipulation pattern identified last year. According to the policies, Google's generative AI features "can show what's being said about products and services across the web, including in blogs, videos, and forum discussions." The document states that seeking inauthentic mentions "isn't as helpful as it might seem," with core ranking systems focused on high-quality content and other systems designed to block spam - and generative AI features described as depending on both.

Lily Ray flagged this passage in a LinkedIn post published today, posing the question of whether it covers paid brand mentions and reciprocal brand mention arrangements. In a reply within the same thread, she drew an analogy to Google's Penguin update from over a decade ago. The Penguin update made unnatural link patterns detectable at scale, shifting the dynamic from one where volume determined success to one where the nature of the link determined risk. Ray's parallel suggests the inauthentic mentions provision may function similarly: not as an immediate wipeout of all manipulated brand mentions, but as a signal that detection is the trajectory.

Cloaking - presenting different content to search engines and users with intent to manipulate rankings - is prohibited, and the extension of spam policies to AI surfaces makes its application to AI crawlers explicit. Google and Bing jointly signalled as early as February 2026 that maintaining separate markdown pages or modified content specifically for AI crawlers constitutes cloaking. That warning was informal guidance at the time; today's documentation update formalises the policy basis for that position.

Link spam provisions carry over unchanged in substance but now explicitly apply in the AI context. Buying or selling links for ranking purposes, excessive link exchanges, and using automated services to create links are violations regardless of whether the goal is to influence traditional search rankings or AI Overview citations. The same logic applies to site reputation abuse - publishing third-party content on a host site primarily to benefit from that site's established ranking signals - a policy Google has been enforcing since May 2024 and which has been the subject of a European Commission investigation under the Digital Markets Act since November 2025.

Doorway abuse - creating pages or sites to rank for specific queries and funnelling users to intermediate pages - applies to attempts to appear in AI-generated responses through artificially constructed entry points. So does scraping, defined as taking content from other sites without adding original value, which has particular relevance to the category of AI-optimised pages that simply repackage existing information from multiple sources into a new format.

The enforcement question

Formalising a policy and enforcing it against a specific class of manipulation are different things. Google's track record with spam policy enforcement shows a gap between the introduction of a rule and the deployment of detection systems capable of acting on it at scale.

The March 2026 spam update, released on March 24 at 12:18 PDT and completing in a record 19.5 hours, applied globally to all languages without disclosing its thematic focus. Whether it targeted AI-generated content spam, link spam, site reputation abuse, or some combination was never specified. The site reputation abuse policy itself, introduced in March 2024, did not receive algorithmic enforcement at the time of introduction - manual enforcement began in May 2024, and algorithmic detection continued developing through 2024 and 2025.

The same trajectory is plausible for the inauthentic mentions provision. Google's history with link spam - from early manual penalties through the introduction of Penguin in 2012 to the full deployment of SpamBrain for link spam detection in December 2022 - shows that building reliable detection for a new class of manipulation takes years. Brand mention manipulation targeting AI-generated responses is a newer tactic, and the signals for detecting it are less established than those for detecting unnatural link patterns. Clarifying the policy is the first step.

Google's quality rater guidelines were updated in January 2025 to add 11 pages of expanded spam identification criteria, including the first formal definition of generative AI in those guidelines. In April 2025, Google directed quality raters to identify pages with main content generated by automated or generative AI tools and potentially flag them as lowest quality. Those rater signals feed into the training of ranking systems. The documentation update today represents the policy layer catching up with the signals layer.

Context: what the industry had been doing

The absence of explicit AI spam policy coverage did not stop practitioners from attempting to optimise for AI Overviews specifically. Google's John Mueller warned in August 2025 that aggressive promotion of AI SEO acronyms - including GEO (Generative Engine Optimization) and AEO (Answer Engine Optimization) - may itself signal spam and scamming activity. The warning was informal, posted on Bluesky, and did not carry the weight of policy documentation.

In January 2026, Danny Sullivan - then still Google's Search Liaison - warned explicitly against fragmenting content into bite-sized chunks for LLM optimisation, framing the tactic as one that would not survive improvements to Google's ranking systems. That guidance came through the Search Off the Record podcast rather than through documentation. Today's update is the first time the policy documentation itself has caught up with the informal guidance that had been accumulating over the prior year.

The reaction in the SEO community has been pointed. Commenting in the LinkedIn thread around today's release, several practitioners noted that the inauthentic mentions provision echoes structural problems with AI Overviews that Google had been slow to address. One commenter noted that Google may have a significant head start over other platforms in detecting inauthentic mentions, given the company's long history of building systems to identify unnatural backlink patterns - the underlying detection problem is similar, even if the signal is different.

What the policies say about automated detection

Google's spam policies documentation states that policy-violating practices are detected "both through automated systems and, as needed, human review that can result in a manual action." Sites that violate policies "may rank lower in results or not appear in results at all." The same consequences now explicitly extend to AI Overviews and AI Mode.

The documentation also covers policy circumvention - defined as continuing actions intended to bypass spam or content policies, including using existing subdomains, subdirectories, or sites, or creating new ones, to continue distributing violating content. Attempts to game AI Overview citations through technical workarounds that mirror the structure of policy violations will fall under this provision.

One area worth attention is the machine-generated traffic provision, which covers sending automated queries to Google. This has practical relevance for operators of AI agent infrastructure that queries Google's systems programmatically - a growing category as AI Mode expands. The policies state that machine-generated traffic consumes resources and interferes with Google's ability to serve users, and that such activities violate both spam policies and Google's Terms of Service.

Why this matters for the marketing community

The formalisation of spam policy coverage across AI surfaces carries specific implications for how marketing teams should approach content strategy and vendor relationships. Services that promise to improve inclusion in AI Overviews through tactics such as manufacturing brand mentions, building artificial citation networks, or producing high-volume AI-generated content clusters now operate in explicit policy violation territory - not just in the grey zone that existed when documentation was silent on the question.

PPC Land has tracked the broader spam policy history since Google introduced scaled content abuse, expired domain abuse, and site reputation abuse as named policy categories in March 2024. Each of those additions followed a similar pattern: observed manipulation at scale, informal guidance, policy documentation, and eventually enforcement. The inauthentic mentions extension to AI surfaces appears to follow that same arc, now with the documentation phase complete.

The practical enforcement question - how Google's systems will distinguish organic brand mentions from manufactured ones inside AI-generated responses - remains open. The policy provides the framework; the technical detection capability will determine how quickly it changes behaviour.

Summary

Who: Google, through its Search Central documentation team. The policy clarification affects all website owners, publishers, SEO practitioners, and marketing teams whose content appears - or seeks to appear - in Google's AI Overviews and AI Mode responses.

What: Google formally extended all existing spam policies to cover generative AI responses in Google Search, including AI Overviews and AI Mode. The clarification does not introduce new rules but explicitly applies the full set of existing spam prohibitions - among them scaled content abuse, inauthentic mentions, cloaking, link spam, site reputation abuse, and doorway abuse - to the AI-generated layer of Search for the first time.

When: The documentation update was published on May 15, 2026, as part of Google's Search Central changelog. It was flagged in the same changelog entry that also announced the publication of Google's first official guide on optimising for generative AI features in Search.

Where: Google Search Central, the company's developer documentation platform at developers.google.com/search. The spam policies document was last updated on May 15, 2026, according to the page footer.

Why: According to Google's official changelog, the update was made "to make it clear that the spam policies apply to all of Google Search, including generative AI responses." The implicit reason is that AI Overviews and AI Mode had been operating as surfaces without explicit spam policy coverage in documentation, creating ambiguity about whether manipulation tactics targeting those surfaces carried the same enforcement risk as manipulation targeting traditional search results.
