Microsoft yesterday published a detailed technical blog post examining how the role of the web index is changing as AI systems take on a more central role in answering user queries. The post, titled "Evolving role of the index: From ranking pages to supporting answers," was co-authored by Krishna Madhavan, Knut Risvik, and Meenaz Merchant from Microsoft AI and appeared on the Microsoft Bing Blogs on May 6, 2026. It is one of the most technically detailed public statements Microsoft has made about how its indexing infrastructure handles the distinct demands of AI grounding versus conventional web search.
The announcement arrives as the search industry grapples with a fundamental question: what does an index actually need to do when the goal is no longer pointing users to pages, but constructing AI-generated answers from those pages?
Two systems sharing the same foundations
According to the post, both traditional search and grounding for AI rely on the same foundational infrastructure - crawling billions of pages, evaluating content quality, and ranking results by relevance. Microsoft describes this as "same foundations, different optimization problems." That framing matters, because it pushes back against a common misconception. Grounding does not replace search. It builds on top of it.
But the objectives diverge sharply. Traditional search is designed to answer the question: which pages should a user visit? Grounding asks a different question entirely: what information can an AI system responsibly use to construct a response?
According to the post, those two questions "sound similar" but "are not." The distinction has cascading implications for every layer of the system, from what gets indexed to how quality is measured.
The unit of value shifts from documents to evidence
In traditional search, the unit of value is the document - a page a human can click, skim, evaluate, and act upon. The index needs to be accurate enough that users find what they want. Imperfect ranking is acceptable because the human is in the loop, capable of scanning results, skipping irrelevant hits, and trying again.
Grounding changes that calculus. According to Microsoft, the unit of value shifts from documents to "groundable information" - discrete, supportable facts with clear provenance. When an AI system synthesizes an answer from multiple sources, the individual contributions of those sources can collapse into a single statement. The user sees the answer, not the retrieval process. They cannot easily scan and self-correct the way a human scanning a results page can.
This places the burden of accuracy much further upstream. The index itself must guarantee higher fidelity before the AI system ever constructs a response.
What the index must measure differently
Perhaps the most technically significant section of the post concerns measurement. Traditional search quality is evaluated through user behavior and ranking performance - are the most relevant results appearing at the top? Are users finding what they need? Are near-duplicate pages being collapsed efficiently? All of these measures assume a human evaluator who can skip bad results.
Grounding introduces a different set of requirements, according to Microsoft. Factual fidelity becomes critical. The process of breaking content into retrievable chunks and transforming it for fast lookup can distort the meaning of the original page in ways that never appear in any ranking signal. If a chunk misrepresents a claim, and an AI system uses that chunk to construct an answer, the error propagates silently.
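The chunking risk the post describes can be illustrated with a minimal sketch. The chunkers, sizes, and sample text below are hypothetical and not Microsoft's implementation; the point is only that a boundary drawn without regard to sentence structure can separate a claim from its correction.

```python
def chunk_fixed(text, size):
    """Naive fixed-width chunking: boundaries ignore sentence structure."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def chunk_sentences(text):
    """Sentence-boundary chunking keeps each claim in one piece."""
    return [s.strip() + "." for s in text.split(".") if s.strip()]

doc = ("The trial showed no benefit. Earlier coverage claimed the "
       "treatment was effective, a claim the authors later retracted.")

# A fixed-width boundary can strand "the treatment was effective" in a
# chunk that never mentions the retraction, so a retriever that scores
# chunks independently can serve the claim without its correction.
naive = chunk_fixed(doc, 40)
safe = chunk_sentences(doc)
```

No ranking signal flags the difference between these two chunkings; both index the same words. The distortion only becomes visible when a downstream answer cites the stranded chunk.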
Source attribution quality also takes on new significance. Not all indexed content carries equal evidentiary weight. A Wikipedia summary, a peer-reviewed study, and a marketing blog may all rank in traditional search results - and that is fine, because the human decides which to trust. When an AI system constructs an answer, it must make that determination on behalf of the user. The index, according to Microsoft, needs to understand those distinctions.
Freshness carries entirely different stakes. In traditional search, stale content degrades ranking relevance - an inconvenience a user can work around. In grounding, a stale fact can produce a directly wrong answer. The post notes that the index must also account for coverage gaps in high-value content, ensuring the specific facts people are likely to ask about are actually retrievable and groundable.
Finally, and perhaps most consequentially, the post addresses contradictions. In traditional search, when two sources disagree, the system can surface one above the other and leave arbitration to the user. In grounding, silent arbitration between contradictory sources is dangerous. An AI system that chooses between conflicting claims without flagging the conflict may present a wrong answer with apparent confidence.
Abstention as a feature, not a failure
One of the more significant conceptual points in the post is Microsoft's treatment of abstention. In traditional search, failing to return a result is a coverage failure. In grounding, abstaining from an answer when evidence is insufficient, stale, or conflicting is described as a "valid outcome" - "a deliberate judgment about what the available evidence can justify."
This matters for how practitioners think about content strategy. A page that ranks in traditional search and a page that grounds an AI answer are not the same thing. The grounding index, according to Microsoft, must be capable of making that distinction and must have the discipline to withhold rather than speculate.
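The abstention logic the post describes can be sketched as a simple decision function. The thresholds, evidence model, and rule ordering below are illustrative assumptions, not Microsoft's grounding pipeline; they show only the shape of the judgment: check sufficiency, check freshness, surface contradictions rather than silently arbitrating, and answer only when the evidence justifies it.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Evidence:
    claim: str
    source: str
    retrieved: date

def ground(evidence, max_age_days=30, min_sources=2):
    """Decide whether retrieved evidence can justify an answer.

    Returns ("answer", claim) or ("abstain", reason). Abstention is a
    valid outcome, not a failure mode.
    """
    if not evidence:
        return ("abstain", "no evidence")
    fresh = [e for e in evidence
             if (date.today() - e.retrieved) <= timedelta(days=max_age_days)]
    if len(fresh) < min_sources:
        return ("abstain", "insufficient fresh evidence")
    claims = {e.claim for e in fresh}
    if len(claims) > 1:
        # Conflicting sources: flag the conflict instead of picking one.
        return ("abstain", "conflicting sources: " + "; ".join(sorted(claims)))
    return ("answer", claims.pop())
```

Note that freshness and contradiction are gates here, not ranking signals: a stale or conflicting evidence set does not get demoted, it gets withheld.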
Retrieval as a loop, not a single step
Traditional search is typically a single-interaction model: query in, ranked results out. Fast, predictable, easy to reason about. Grounding systems operate differently. According to Microsoft, a system grounding an AI answer may need to ask follow-up questions, refine retrieval based on intermediate results, combine evidence from multiple sources, and re-evaluate when confidence is low.
This iterative structure changes the error profile of the index entirely. Early retrieval steps that introduce subtle errors can compound through subsequent reasoning steps in ways that no human reviewer would catch in real time. The post notes that grounding systems cannot rely on the safety net traditional search provides - the ability of users to scan, skip, and course-correct on the fly. Retrieval systems must therefore optimize not just for one-shot accuracy, but for "consistent, repeatable behavior across iterative use."
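The loop structure described above can be sketched as follows. The hook functions (search, assess, refine) and the confidence threshold are hypothetical placeholders for whatever a real grounding system plugs in; the sketch captures only the control flow: retrieve, evaluate confidence, refine, and abstain if confidence never clears the bar.

```python
def iterative_retrieve(query, search, assess, refine,
                       max_rounds=3, threshold=0.8):
    """Retrieval as a loop: refine the query until confidence in the
    accumulated evidence clears a threshold, or abstain.

    search(query) -> list of evidence items
    assess(evidence) -> confidence score in [0, 1]
    refine(query, evidence) -> reformulated query
    """
    evidence = []
    for _ in range(max_rounds):
        evidence.extend(search(query))
        if assess(evidence) >= threshold:
            return evidence
        # Low confidence: reformulate and try again, e.g. by adding
        # disambiguating terms surfaced by the partial evidence.
        query = refine(query, evidence)
    return None  # abstain: confidence never reached the threshold
```

Because evidence accumulates across rounds, an error introduced by an early retrieval persists into every later assessment, which is the compounding behavior the post warns about.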
The description echoes what Microsoft has been building toward across a series of product releases over the past two years. Sitemaps were repositioned as critical infrastructure for AI search discoverability in August 2025, with Microsoft emphasizing structured signals including the lastmod field as key inputs for determining what gets recrawled and reindexed. The same announcement noted that AI-powered systems operate with greater selectivity than traditional crawlers, focusing on content that demonstrates clear freshness and relevance signals.
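The lastmod signal referenced above follows the standard sitemaps.org protocol. As a minimal sketch (the URL is a placeholder, and this reflects the public sitemap format rather than anything Bing-specific), a single entry looks like this:

```python
from datetime import date

def sitemap_entry(loc: str, lastmod: date) -> str:
    """Render one <url> entry per the sitemaps.org protocol; lastmod uses
    W3C date format and is the freshness signal Microsoft highlights for
    recrawl prioritization."""
    return (f"  <url>\n"
            f"    <loc>{loc}</loc>\n"
            f"    <lastmod>{lastmod.isoformat()}</lastmod>\n"
            f"  </url>")

entry = sitemap_entry("https://example.com/report", date(2026, 5, 6))
```

The practical implication of Microsoft's guidance is that lastmod should reflect genuine content changes; a selective crawler that learns a site's lastmod values are unreliable has less reason to recrawl it.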
The measurement gap
Microsoft's post ends with what may be its most candid observation: the hard part is not the technology, it is the measurement. According to the authors, "We have decades of practice measuring search quality. We are still learning what it means to measure grounding quality rigorously: not just whether an answer was retrieved, but whether the evidence behind it was accurate, fresh, attributable, and consistent."
That statement lands differently given what Microsoft has been releasing in recent months. The AI Performance dashboard in Bing Webmaster Tools, launched in public preview on February 10, 2026, provided publishers with the first visibility into how their content is cited across Microsoft Copilot and AI-generated summaries in Bing. The dashboard tracks total citation counts, average cited pages per day, grounding queries, and page-level citation activity. In March 2026, Microsoft expanded it further with a grounding query-to-page mapping feature that connects specific queries to the exact pages they are pulling from.
Those tools represent the publisher-facing manifestation of the measurement challenge the post describes. If the index is accountable for the quality of evidence it provides - rather than just the relevance of results it surfaces - then publishers and AI systems alike need tools that reflect that accountability.
Why this matters for the marketing community
The distinction Microsoft is drawing has direct implications for how content performs in AI-driven search environments. As PPC Land has covered, Microsoft's grounding infrastructure powers nearly every major AI assistant in the market. The company's Copilot advertising business surpassed $20 billion in annual revenue by April 2025, with search and news advertising revenue climbing 21 percent in recent quarters. That commercial scale means grounding is not a theoretical concern - it is a live factor in how brands and publishers are discovered.
The post from Madhavan, Risvik, and Merchant also references a November 2025 blog post from Microsoft on "Optimizing content for inclusion in the era of AI," described as a practical companion to the more theoretical framing of the May 2026 piece. That earlier post outlined concrete steps to make information easier to interpret, cite, and verify in AI experiences. Bing Webmaster Tools has been adding features that let publishers act on those recommendations with more precision - including the data-nosnippet HTML attribute introduced in October 2025, which allows marking specific page sections to exclude them from both traditional search snippets and AI-generated answers while maintaining full indexing eligibility.
Microsoft's broader guidance to marketers, published February 11, 2026, described the three-stage process by which AI systems surface brands: baseline trained knowledge, grounded refinement via retrieved web content, and precision signals from structured first-party data. The technical post from May 6 sits upstream of that framework - it explains the infrastructure that determines what information is available for grounding in the first place.
The practical question for content creators and SEO practitioners is what this technical architecture means for how pages are written and structured. According to related coverage on PPC Land, AI search engines do not index or retrieve whole pages - they break content into passages and retrieve the most relevant segments for synthesis. Each passage must maintain semantic cohesion and remain independently understandable without requiring context from other page sections. That is not how pages are typically written for traditional search, where broader narrative context is a standard feature.
The May 6 post reinforces this architectural reality from the infrastructure side. Factual fidelity, clear provenance, freshness, and the ability to represent contradictions - these are not content-quality buzzwords. They are, according to Microsoft, the core technical requirements that the grounding index is being built to satisfy. Pages that do not meet these requirements are candidates for abstention, not synthesis.
Barry Schwartz of RustyBrick commented on Krishna Madhavan's LinkedIn post announcing the piece, asking whether Bing was creating a separate index for grounding AI, distinct from the traditional search index. Madhavan responded that the post was "a technical post on the evolving technical characteristics of the index" - a clarification that stops short of confirming separate infrastructure but underscores the significance of the divergence Microsoft is documenting.
Timeline
- July 24, 2024 - Bing introduces generative search experience combining LLMs and traditional results
- October 16, 2024 - Bing Webmaster Tools extends Search Performance data retention to 16 months and launches Recommendations feature
- January 31, 2025 - Microsoft integrates Grounding with Bing Search into Azure AI Agent Service
- August 7, 2025 - Bing Webmaster Tools adds device and country data filtering
- August 9, 2025 - Microsoft repositions XML sitemaps as critical infrastructure for AI-powered search discoverability
- August 11, 2025 - Microsoft shuts down Bing Search APIs; Grounding with Bing Search becomes the recommended alternative at $35 per 1,000 transactions
- August 25, 2025 - Microsoft extends Bing Webmaster Tools data retention to 24 months
- October 15, 2025 - Microsoft introduces data-nosnippet HTML attribute for Bing, allowing selective exclusion from search snippets and AI answers
- February 10, 2026 - Microsoft launches AI Performance dashboard in Bing Webmaster Tools in public preview
- February 12, 2026 - Microsoft positions grounding as the invisible infrastructure powering nearly every major AI assistant
- February 15, 2026 - Microsoft publishes updated AI marketer's guide explaining how AI search surfaces brands
- March 23, 2026 - Microsoft expands AI Performance dashboard with grounding query-to-page mapping feature
- May 6, 2026 - Microsoft publishes "Evolving role of the index: From ranking pages to supporting answers," co-authored by Krishna Madhavan, Knut Risvik, and Meenaz Merchant of Microsoft AI
Summary
Who: Krishna Madhavan (Principal Product Manager, Senior Director, Microsoft AI and Bing), Knut Risvik, and Meenaz Merchant, all from Microsoft AI, authored the post. The audience is publishers, SEO professionals, content creators, and developers working with AI systems that rely on Bing's grounding infrastructure.
What: A technical blog post explaining how the requirements of an index built for AI grounding diverge from those of an index built for traditional web search, across five dimensions: factual fidelity, source attribution quality, freshness, coverage of high-value facts, and contradiction handling. The post argues that grounding demands a categorically different measurement framework from search, and introduces abstention as a deliberate and valid system behavior.
When: Published May 6, 2026, on Microsoft Bing Blogs.
Where: Microsoft Bing Blogs, with supporting commentary via Krishna Madhavan's LinkedIn post. The technical infrastructure described operates across Bing's search and AI systems globally, including Copilot and partner AI assistants that license Microsoft grounding capabilities.
Why: As AI systems increasingly construct answers rather than return links, the infrastructure supporting those answers needs to satisfy different quality requirements - ones that existing search quality metrics were never built to capture. Microsoft's publication of this post signals that the company is defining those requirements publicly, which matters for publishers and practitioners whose content either qualifies as groundable evidence or does not.