A former Google Search Quality engineer has pulled back the curtain on how the company's spam-fighting operation actually works - from the daily queue of algorithmically flagged websites to the quiet manual actions that website owners never see inside Google Search Console.
Pedro Dias, who spent almost six years at Google on the Search Quality and spam-fighting team before leaving at the end of 2011 - around the time the Penguin algorithm launched - gave a detailed account of internal processes in a video interview published on April 7, 2026, on the YouTube channel of SEO consultant Ernesto Ortiz. The video, titled "Ex Google Spam Fighter Reveals How They ACTUALLY Catch Your Website," had gathered 175 views by the time the LinkedIn post sharing it was published three days later. The conversation covers territory rarely discussed openly: how manual reviews are structured, how intent determines whether something is spam, and why some spam is deliberately left alone.
The queue system and human review
At the entry level of the Search Quality team, according to Dias, the work is largely repetitive. Reviewers work through a queue of websites that have been flagged by algorithmic systems uncertain about a site's nature or intent. "The algorithms that dictate quality are not confident enough on making a decision on what the website is," Dias explained in the interview, "so the website comes into a queue and you review the website."
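Dias did not describe the implementation, but the routing he outlines maps onto a familiar triage pattern: a classifier emits a score, confident cases are handled automatically, and everything in the uncertain middle band is deferred to a human. A minimal sketch of that pattern follows; the thresholds and field names are purely illustrative assumptions, not anything Dias disclosed.

```python
from collections import deque
from dataclasses import dataclass

# Illustrative thresholds - the real system's confidence bands are not public.
SPAM_THRESHOLD = 0.9    # above this, algorithmic systems act on their own
CLEAN_THRESHOLD = 0.2   # below this, the site is left alone

@dataclass
class Site:
    url: str
    spam_score: float  # hypothetical classifier output in [0, 1]

review_queue: deque = deque()  # the queue junior reviewers work through

def route(site: Site) -> str:
    """Route a site based on how confident the classifier is."""
    if site.spam_score >= SPAM_THRESHOLD:
        return "algorithmic_action"
    if site.spam_score <= CLEAN_THRESHOLD:
        return "no_action"
    # The classifier is not confident enough to decide what the site is,
    # so it lands in the human review queue.
    review_queue.append(site)
    return "human_review"

print(route(Site("example-a.test", 0.95)))  # algorithmic_action
print(route(Site("example-b.test", 0.55)))  # human_review
```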
The volume is significant. Dias declined to specify exact numbers but said reviewers go through large quantities of sites on a daily basis. As seniority increases, the work shifts. More experienced team members stop relying on the queue and instead identify problematic queries and search results independently, drawing on internal systems that show where user satisfaction with search results is low. At senior levels, the work becomes more investigative - following spam reports, tracing link networks, mapping out entire ecosystems of coordinated low-quality content before acting on them all at once.
This investigative structure - pulling a thread and following it until the whole picture emerges - is not something widely documented outside Google. It provides context for why enforcement actions sometimes appear to target entire networks rather than individual sites. Google's SpamBrain system, which began targeting link spam at scale in December 2022, represented a formal move toward automating what had been painstaking manual investigative work.
Intent over technique
The central theme running through Dias's account is that spam is defined by intent, not by the presence of any particular technical behavior. This is a more nuanced position than most SEO practitioners encounter in public documentation.
"Low effort, high monetization intent - when these two things come together, that often signals towards something wrong," Dias said. A site with sparse content, heavy advertising, and affiliate links everywhere is a strong early signal. But even that pattern requires judgment. Dias described cases where his first instinct was that a site was spam and then, on closer inspection, the apparent oddities turned out to be the result of a technical malfunction rather than deliberate manipulation.
The committee structure at Google exists partly for this reason. No single person defines what constitutes spam. Policies are developed collaboratively, and even manual actions that a senior reviewer is confident about get vetted by a colleague before going to production. Dias was explicit: "Nobody at Google defines what is quality or what is spam single-handedly." That committee process provides a check on individual judgment, especially in cases involving edge conditions and markets where low-quality content dominates simply because there is little else available.
The implication for SEO practitioners is significant. Correlation-based analysis - observing that sites with a certain feature tend to rank well or poorly - can produce deeply misleading conclusions. Dias pointed out that website owners sometimes observe a competitor doing something that looks suspicious, but the suspicious element may have nothing to do with why that site ranks. The actual signal could be entirely different.
Fingerprinting and scale
How does Google connect multiple websites to the same operator or intent? Dias described the concept of "fingerprinting" - identifying patterns across sites that, while they may differ in surface appearance, share underlying characteristics that reveal common origin or coordination.
"You can fingerprint a lot of things," he said, noting that humans are limited in how varied they can make things when building websites at scale. Even AI-generated content falls into this trap. "AI is good at patterns - it's more refined at patterns than humans - so that's why AI can see patterns where humans probably couldn't." Two or three sites can look distinct. At 10, 20, or 100, the pattern becomes visible. Google's systems - both human and algorithmic - look for the common denominator rather than judging each site in isolation.
The unit of analysis, Dias emphasized, is the unified intent behind a corpus of sites, not necessarily a physical person. A business, two businesses, or a network with a shared operational logic can all be treated as a single target: if the common denominator is strong enough, action is justified.
This has direct implications for how Google handles link spam. A site that appears clean in isolation may be part of a network where the coordinated behavior changes the interpretation of every component.
Cloaking and the limits of technical rules
Dias addressed cloaking directly - a topic where public SEO guidance tends toward simple rules that, he argued, miss the point. The standard assumption is that serving different content to Googlebot than to users is always spam. The reality, according to Dias, is more conditional.
A site that shows different content to users in Germany, Spain, and the US because of genuine geolocation requirements is not cloaking in the harmful sense. Google, he noted, primarily crawls from US data centers but does conduct spot checks from servers temporarily leased in other locations specifically to verify that geolocated content matches what local users see. "The bulk of the crawling is certainly done from the US and US data centers. There is some ad hoc crawl that will be done from servers everywhere around the globe just to make sure Google is seeing the right content that users see."
The harmful version of cloaking is showing content exclusively to Google that no human anywhere would see. That is treating the search engine as a special case to manipulate, not serving users. The reverse - hiding certain facets or navigation elements from Google to manage crawl budget while keeping them available to users - is, in Dias's framing, not cloaking at all. It is the inverse.
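Reduced to a decision rule, the distinction is directional: compare what the bot receives against what real users in any location receive, and flag only content that exists exclusively for the bot. A rough sketch follows, with the fetch step left abstract since Google's actual rendering and comparison pipeline is not public.

```python
def fetch(url: str, user_agent: str, geo: str) -> set[str]:
    """Placeholder: return the set of content blocks served from that vantage."""
    raise NotImplementedError  # a real check would fetch and render the page

def classify(url: str, geo: str) -> str:
    bot_view = fetch(url, "Googlebot", "US")   # bulk crawl vantage
    user_views = [
        fetch(url, "Mozilla/5.0", "US"),       # a regular US user
        fetch(url, "Mozilla/5.0", geo),        # the ad hoc geo spot check
    ]
    # Content the bot received that no human vantage received.
    bot_only = bot_view - set.union(*user_views)
    if bot_only:
        return "possible_cloaking"  # bot-exclusive content: the harmful case
    # The reverse - users seeing facets hidden from the bot, for example
    # to manage crawl budget - is, in Dias's framing, not cloaking at all.
    return "ok"
```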
Google and Bing confirmed in February 2026 that creating dedicated markdown or JSON pages for AI crawlers but not human visitors violates the same logic - showing content to bots that users cannot see falls into the category of manipulation regardless of stated intent.
Silent manual actions and the PageRank cut
One of the more technically specific disclosures in the interview concerns how Google handles repeat offenders. Not all manual actions are visible in Google Search Console. According to Dias, Google has the ability to cut PageRank from a site - removing its ability to pass link value to other sites - without notifying the site owner that this has happened.
"You can choose to issue a manual action and show it to you," Dias explained. "But if they think that you are a repeat offender or someone that has a history of manipulation, I'd rather do the work and not say to you that I did the work." The consequence is that the site owner continues selling or buying links that now have zero value, because the passing of PageRank has been silently severed.
The silent-severance capability echoes what a September 2025 federal court memorandum established through antitrust proceedings: Google's quality signal is primarily derived from on-page content rather than external links alone, and the company's control over how PageRank flows through the web is far more granular than publicly described.
The strategic logic behind silent actions is deliberate. If a manipulative operator learns their tactics have stopped working, they will adapt and try new ones. If they believe the tactics are still working, they have no incentive to escalate. This "spammer's dilemma," as Dias called it - wanting to fly under the radar while also wanting to be visible - creates a natural tension that Google can exploit.
Prioritization: not all spam is worth fighting
A point Dias made repeatedly is that Google does not treat all spam as equally urgent. The decision of whether to act depends heavily on whether the spam is actually having an impact on search results users see. A spam site with no visible traffic, no ranking impact, and no real-world effect on searchers is a low priority. Acting on it would consume resources without improving the experience for anyone.
"If a tree falls in the forest - if spam has no visibility, what do you benefit in going after it and killing it?" Dias asked. The corollary is that some very old, obviously spammy tactics still appear in the index not because Google is unaware of them, but because they are operating at low enough volume and in low-competition enough niches that the effort to eliminate them is not justified relative to their impact.
That prioritization logic helps explain a pattern SEO consultant Gagan Ghotra documented in December 2025: Google's confirmed enforcement actions decreased from 10 in 2021 to 4 in 2025, while perceived volatility increased. The volume of enforcement is not the metric - targeted, high-impact enforcement is.
SEO as product management
In the second part of the conversation, Dias shifted from describing Google's systems to describing how he applies that knowledge in current consulting work. His framing of SEO as product management rather than a technical discipline is worth noting for the marketing community.
"SEO is no more than a product manager for Google," Dias said. The job, in his account, is to understand the criteria by which Google rewards a website, identify gaps between how the website is designed to work and how users actually interact with it, and make changes that improve both simultaneously. The human system and the machine system need to overlap "almost entirely."
The practical implication is that technical work - fixing crawlability, addressing client-side rendering problems, restructuring navigation - has clear beginnings and endings. The more valuable work is being embedded in client teams over time, participating in product meetings, and identifying where untapped value exists that SEO can surface rather than create.
He was direct about the limits of the discipline: "SEO alone doesn't create any value. SEO is a magnifying glass. If the website is good, you are going to magnify the good." A business without genuine differentiated value cannot be rescued by technical optimization. The January 2025 expansion of Google's search quality rater guidelines - which added 11 pages specifically on spam identification - formalized the same principle from the other side: the systems evaluating quality are increasingly oriented toward assessing genuine utility rather than technical compliance.
Automation skepticism in the AI era
The final section of the interview addressed the current SEO landscape directly. Dias expressed concern about the industry's confidence in AI-powered tools for measuring visibility, particularly for AI search systems like Google's AI Overviews or platforms like ChatGPT.
"I have more questions than I have certainties," he said, "and what I see in the industry is the opposite - people are more certain of themselves than they doubt themselves." The measurement tools claiming to track AI search visibility are, in his view, not reliable enough to base business decisions on. The underlying reason is that not even the engineers who build these systems fully understand why one token follows another in an LLM output.
He also flagged AI citation bias as a structural issue. According to research he referenced, brands that are already recognizable have substantially higher chances of being cited by AI systems than newer, lesser-known brands. An AI that has been trained on a corpus with a cutoff date carries its biases forward, and its tendency is to validate what it already believes rather than seek contradictions.
The warning connects to broader patterns. Google's John Mueller warned in August 2025 that aggressive promotion of AI SEO acronyms - GEO, AIO, AEO - may itself signal spam behavior. That warning positioned much of the AI search optimization discourse as a repeat of earlier SEO cycles: new terminology, familiar incentives, similar risks.
Dias acknowledged that AI is genuinely useful in his own work - for drafting, editing, and organizing content - but he uses it in a supervised mode where his own judgment remains the final check. "You should make it in a way that I don't think we are ready to have AI producing loads of amounts of unsupervised outputs and content and then put your success depending on that."
Timeline
- End of 2011: Pedro Dias leaves Google Search Quality team after almost six years, around the time the Penguin algorithm launches
- December 2022: Google activates SpamBrain AI system targeting link spam at scale, with rollout completing in January 2023
- January 23, 2025: Google expands search quality rater guidelines by 11 pages focused on spam identification
- August 14, 2025: Google's John Mueller warns that AI SEO acronym promotion may signal spam tactics
- August 26, 2025: Google launches August 2025 spam update, completing after 27 days on September 22
- September 4, 2025: Federal court memorandum in US v. Google reveals PageRank's role and Google's granular quality controls
- December 26, 2025: SEO expert Lily Ray warns that excessive optimization is now triggering Google penalties
- December 29, 2025: SEO consultant Gagan Ghotra documents five-year shift from predictable updates to continuous volatility
- February 5, 2026: Google and Bing warn that separate markdown pages for AI crawlers violate cloaking policies
- March 24, 2026: Google releases March 2026 spam update, completing in record 19.5 hours
- April 7, 2026: Pedro Dias interview published on Ernesto Ortiz YouTube channel
- April 10, 2026: LinkedIn post by Ernesto Ortiz sharing the interview goes live, gathering 45 reactions and 5 comments
Summary
Who: Pedro Dias, a former Google Search Quality and spam-fighting team engineer who worked at Google for almost six years and left at the end of 2011. The interview was conducted by Ernesto Ortiz, founder of Structural SEO, and published on his YouTube channel on April 7, 2026. Dias currently works as an independent consultant at visively.com and writes a publication called The Inference.
What: A detailed account of how Google's spam detection and manual review systems work internally - including the queue-based review process for junior staff, the investigative work done at senior levels, the committee structure for policy decisions, the role of intent in determining what qualifies as spam, the fingerprinting of networks at scale, the existence of silent manual actions including PageRank cuts, and the prioritization logic that determines which spam gets acted on and which does not.
When: The interview was published on April 7, 2026, and the LinkedIn post sharing it was published on April 10, 2026. The events and processes described by Dias reflect his experience up to late 2011, with commentary on how those dynamics apply to search and AI systems as of 2026.
Where: The interview was published on Ernesto Ortiz's YouTube channel and shared via a LinkedIn post on April 10, 2026. Dias's consultancy, Visively, and his publication, The Inference, are his current platforms. The processes described took place within Google's Search Quality team, which Dias described as operating with market-specific specialists and committee-based policy decisions.
Why: The interview matters for the marketing and SEO community because it provides a rare, technically grounded account of how intent - not technique - drives enforcement decisions at Google. As AI-generated content floods the web and new algorithmic updates arrive with increasing frequency, understanding the actual logic behind manual reviews, silent penalties, and prioritization decisions is directly relevant to anyone managing organic search visibility. The framing of SEO as product management rather than a technical checklist also challenges how many marketing teams and agencies position and scope their SEO work.