Google reports 65% surge in visual searches as AI mode drives multimodal adoption
Advanced search features reach 1.5 billion monthly users while industry debates traffic impacts.

Google's visual search technology achieved 65% year-over-year growth as artificial intelligence fundamentally transforms how billions of users access information online. According to Robby Stein, who leads Google's AI Search team, this dramatic increase reflects shifting user behavior toward multimodal queries combining camera, voice, and traditional text inputs.
The surge in visual searching represents one of the most significant changes in search behavior since Google's founding 25 years ago. "In the multimodal case, there's 65% increase year-over-year on visual searching on Google," Stein revealed during a July 23, 2025 podcast interview. "People are using their camera, they're using onscreen context. So, you take a screenshot or on Android devices, use circle to search to ask a question about anything that you're looking at."
Subscribe to the PPC Land newsletter ✉️ for stories like this one. Receive the news every day in your inbox. Free of ads. 10 USD per year.
The growth coincides with massive adoption of AI Overviews, Google's AI-generated search summaries that now serve 1.5 billion users monthly. These features have fundamentally altered query patterns, with users asking substantially longer and more complex questions than traditional keyword-based searches. Stein explained that users now formulate "multi-sentence questions comparing multiple products on multiple dimensions" rather than basic keyword searches.
Google's technological infrastructure enables this transformation through a technique the company calls "query fan-out." The system breaks a user's question into multiple subtopics and simultaneously issues hundreds of related searches. This approach differs fundamentally from traditional search algorithms, which rank individual web pages based on relevance signals.
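Google has not published implementation details for query fan-out, but the core idea of decomposing one question into parallel subqueries can be sketched roughly. The following Python snippet is purely illustrative: the `search` stub, the subtopic list, and the query strings are hypothetical placeholders, not Google APIs.

```python
import concurrent.futures

def fan_out(query: str, subtopics: list[str]) -> dict[str, list[str]]:
    """Issue one related search per subtopic concurrently and collect results."""
    def search(subquery: str) -> list[str]:
        # Placeholder for a real search backend call.
        return [f"result for '{subquery}'"]

    results: dict[str, list[str]] = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        # Map each submitted subquery back to the subtopic it covers.
        futures = {pool.submit(search, f"{query} {t}"): t for t in subtopics}
        for fut in concurrent.futures.as_completed(futures):
            results[futures[fut]] = fut.result()
    return results

# Example inspired by the "beach day" query described in the article.
summaries = fan_out("beach day san francisco",
                    ["accessibility", "parking", "closing hours"])
print(sorted(summaries.keys()))  # → ['accessibility', 'closing hours', 'parking']
```

In a real system the per-subtopic results would then be synthesized into a single response rather than returned separately; this sketch only shows the decomposition and parallel issuance step.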
The AI Mode feature, launched initially in March 2025, exemplifies these advanced capabilities. This conversational search interface processes queries that are typically 2-3 times longer than conventional searches. Users can engage in back-and-forth conversations with Google's systems, asking follow-up questions and refining their information needs within the same session.
Multimodal functionality has become particularly popular among younger users who naturally transition between different input methods. The Circle to Search feature, available on over 300 million Android devices, allows users to capture screenshots or circle objects on their screens to generate instant AI-powered responses. According to Stein, these younger demographics "naturally are able to move between these modalities between photo between voice between text and just ask what's on their mind more naturally."
The search executive provided concrete examples of how AI capabilities transform everyday information seeking. Rather than searching for basic terms like "beaches SF," users now input detailed requests such as planning a "daddy daughter beach day" with specific requirements for accessibility, parking availability, and closing hours. The AI system can process these complex, contextual requests and provide comprehensive recommendations.
Shopping and educational queries drive much of the multimodal growth. Users photograph clothing worn by celebrities or influencers to "shop the look," while students capture images of geometry problems to receive step-by-step assistance. These use cases demonstrate how visual search extends beyond simple object identification to complex problem-solving scenarios.
Technical implementation relies heavily on Google's 25 years of accumulated web knowledge and real-time information systems. The company maintains databases containing 50 billion products in its product graph and 250 million places in its location database. This vast information repository, when "encoded correctly into models and accessible via models," enables dramatically expanded search capabilities beyond traditional text-based queries.
However, the transformation raises significant concerns within the marketing and publishing communities. Research reported by PPC Land demonstrates that AI Overviews reduce organic click-through rates by 34.5% when present in search results. The study examined 300,000 keywords, comparing performance before and after AI Overview implementation.
Publishers report severe traffic declines as users increasingly receive answers directly within Google's interface rather than clicking through to external websites. Analysis reported by PPC Land reveals that users typically consume only the top third of AI-generated summaries, with median scroll depth reaching just 30% of AI Overview content.
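The 30% figure is a median across sessions, meaning half of users scroll less than a third of the way through an AI Overview. A minimal illustration of the metric, using hypothetical per-session scroll depths (not data from the study):

```python
import statistics

# Hypothetical per-session scroll depths: fraction of the AI Overview viewed.
scroll_depths = [0.15, 0.25, 0.30, 0.30, 0.45, 0.60, 0.90]

# The median is the middle observation, here the 4th of 7 sorted values.
median_depth = statistics.median(scroll_depths)
print(f"median scroll depth: {median_depth:.0%}")  # → median scroll depth: 30%
```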
Despite criticism, Google maintains that clicks from AI Overviews deliver superior engagement quality. John Mueller, Google's Search Advocate, stated in July 2025 that "when people click to a website from search results pages with AI Overviews, these clicks are of higher quality, where users are more likely to spend more time on the site." The company argues that improved engagement and conversion rates compensate for the reduced click volume.
The global expansion continues systematically across major markets. AI Mode launched in India on June 24, 2025, and extended to the United Kingdom on July 28, 2025. According to PPC Land coverage, the feature also reached Google Workspace accounts across the United States in July 2025, bringing advanced search capabilities to millions of business users.
Advertising integration represents another significant development. Google has begun incorporating advertisements directly within AI Overviews and confirmed plans to introduce advertising within AI Mode. These changes create new revenue opportunities while addressing concerns about the sustainability of the traditional web ecosystem built on organic traffic.
The measurement and analytics infrastructure has evolved to accommodate these new features. PPC Land reported that Google officially confirmed on June 17, 2025, that AI Mode clicks, impressions, and position data now count toward totals in Search Console performance reports.
Search behavior studies indicate fundamental shifts in user expectations and interaction patterns. Traditional SEO approaches focused on ranking individual pages become less relevant as AI systems synthesize information from multiple sources to generate comprehensive responses. Content creators must now optimize for inclusion within AI-generated summaries rather than solely pursuing top organic rankings.
The technology builds upon Google's existing infrastructure while introducing novel capabilities. Voice search through AI Mode addresses professional environments requiring hands-free interaction. Visual search through Google Lens integration allows contextual questions about captured images. The conversational interface eliminates barriers between different communication preferences and device capabilities.
Industry analysis suggests these developments position Google strategically against emerging competitors in AI-powered search. Companies like OpenAI with ChatGPT and other conversational AI platforms present alternative approaches to information discovery. Google's response leverages its massive web index and real-time data systems to maintain competitive advantages while adapting to changing user expectations.
The implications extend beyond search technology to fundamental questions about web ecosystem sustainability. Publishers face mounting pressure to adapt monetization models as traditional traffic patterns change. Content creators must balance optimization for both AI consumption and human readability, potentially requiring new technical approaches to content structure and metadata implementation.
Looking ahead, Google's vision encompasses even more sophisticated interactions between users and information systems. The company envisions scenarios where users can "literally start talking to Google" during car rides or while walking, creating continuous conversational relationships with search technology rather than discrete query-and-response interactions.
These developments represent what Stein characterizes as "a unique moment in time happening right now where it's possible for services like Google to just do so much more for people given the capabilities of AI." The 65% growth in visual searching serves as one measurable indicator of this transformation, though the full impact on information access patterns will likely unfold over months and years rather than weeks.
Timeline
- March 5, 2025: Google launches AI Mode as experimental feature for Google One AI Premium subscribers
- April 7, 2025: Google expands multimodal capabilities in search with AI Mode rollout
- April 17, 2025: Ahrefs releases study showing 34.5% reduction in clickthrough rates when AI Overviews are present
- May 12, 2025: First comprehensive UX study of Google's AI Overviews published, revealing users only read top third of content
- May 20, 2025: Google opens AI Mode to all US users, removing waitlist restrictions
- June 17, 2025: Google confirms AI Mode data integration with Search Console performance reports
- June 24, 2025: Google launches AI Mode in India with multimodal search capabilities
- July 2, 2025: Google extends AI Mode to Workspace accounts in the US
- July 8, 2025: AI Mode becomes fully available to Indian users without Labs signup requirement
- July 9, 2025: Google adds AI Mode integration to Circle to Search across 300 million Android devices
- July 22, 2025: Google disputes Pew study showing AI Overviews reduce clicks by half
- July 23, 2025: Robby Stein reveals 65% year-over-year growth in visual searching during podcast interview; Google reports AI Overviews drive 10% search growth
- July 28, 2025: Google introduces AI Mode to UK users
Key Terms Explained
AI Overviews: Google's flagship artificial intelligence feature that generates comprehensive summaries at the top of search results pages, now serving 1.5 billion users monthly. These AI-powered summaries synthesize information from multiple web sources to provide direct answers to user queries, fundamentally changing how people interact with search results. The feature has expanded to 200 countries and 40 languages since its broader deployment, though it has generated controversy due to studies showing 34.5% reductions in organic click-through rates when present in search results.
AI Mode: Google's most advanced conversational search interface that allows users to engage in back-and-forth dialogue with artificial intelligence systems. Launched initially in March 2025, this feature processes queries that are typically 2-3 times longer than traditional searches and supports multimodal inputs including text, voice, camera, and image uploads. The system employs Google's custom version of Gemini 2.5 to handle complex, exploratory questions that would previously require multiple separate searches.
Multimodal Search: The integration of multiple input methods including text, voice, camera, and image uploads within a single search interface. This technology allows users to naturally transition between different communication preferences, such as photographing objects for information, speaking questions aloud, or typing complex queries. Google's implementation combines visual search capabilities from Google Lens with conversational AI to create comprehensive responses that understand both individual elements and overall context within images.
Visual Search: Technology that enables users to search using images rather than text queries, showing 65% year-over-year growth according to Google's latest data. Users can photograph objects, take screenshots, or upload images to receive AI-generated information about what they're viewing. The technology has become particularly popular for shopping queries, where users photograph clothing or products to find similar items or purchase options, and educational scenarios where students capture homework problems for assistance.
Query Fan-out Technique: Google's advanced processing method that breaks down user questions into multiple subtopics while simultaneously issuing hundreds of related searches. This approach differs fundamentally from traditional search algorithms by synthesizing information from various sources rather than simply ranking individual web pages. The technique enables comprehensive responses to complex queries by analyzing multiple aspects of a question and gathering relevant information from across Google's vast knowledge systems.
Circle to Search: Google's gesture-based search feature available on over 300 million Android devices that allows users to circle, tap, or gesture on any object displayed on their screen to generate instant search results. Recently enhanced with AI Mode integration, this feature exemplifies the shift toward intuitive, visual search interactions. Users can activate the feature by long-pressing the home button or navigation bar, then selecting elements within their current app or screen for immediate AI-powered information.
Organic Click-through Rates: The percentage of users who click on traditional website links in search results, which has become a critical metric in evaluating the impact of AI-powered search features. Research consistently shows significant declines in these rates when AI Overviews appear, with studies documenting reductions ranging from 34.5% to over 50% depending on query types. This metric has become central to industry debates about the sustainability of web publishing models as users increasingly receive answers directly within Google's interface.
Search Console: Google's analytics platform that provides website owners with data about their search performance, recently updated to include AI Mode traffic data. The integration, confirmed on June 17, 2025, allows publishers to monitor how their content performs within AI-powered search experiences alongside traditional metrics. However, the data merges with existing totals rather than providing separate categorization, creating challenges for publishers trying to understand the specific impact of AI features on their traffic.
Gemini 2.5: Google's latest large language model specifically customized for search applications within AI Mode and other advanced search features. This artificial intelligence system provides enhanced reasoning capabilities, multimodal processing, and improved accuracy measures compared to earlier model versions. The implementation allows Google's AI search features to handle complex queries while maintaining the accuracy standards required for search environments, supporting the advanced capabilities that enable 65% growth in visual searching.
Content Optimization: The evolving practice of creating and structuring web content to maximize visibility within AI-powered search results rather than traditional organic rankings. Publishers must now consider how their content might be synthesized within AI Overviews and AI Mode responses, requiring new approaches to content structure, metadata implementation, and source attribution. This fundamental shift challenges traditional SEO strategies as content creators balance optimization for both AI consumption and human readability while adapting to reduced direct website traffic.
Summary
Who: Robby Stein, Vice President of Product at Google's AI Search team, leads development of AI-powered search features serving 1.5 billion monthly users. The transformation affects publishers, marketers, and billions of search users globally.
What: Google's visual search technology achieved 65% year-over-year growth while AI Overviews and AI Mode fundamentally change how users interact with search engines. The multimodal capabilities combine camera, voice, and text inputs to process complex queries that are 2-3 times longer than traditional searches.
When: The 65% visual search growth was announced during a July 23, 2025 podcast interview, following months of AI feature rollouts that began with AI Mode's launch in March 2025 and expanded globally throughout 2025.
Where: The transformation affects Google search globally, with AI Overviews operating in 200 countries and 40 languages. AI Mode has expanded from the United States to India, the United Kingdom, and Google Workspace accounts, while Circle to Search operates on over 300 million Android devices worldwide.
Why: Google's substantial AI infrastructure investments and shifting user behavior drive the transformation toward direct answer provision and multimodal interaction. Users increasingly expect comprehensive responses to complex questions rather than simple link collections, while Google aims to maintain search dominance against emerging AI-powered competitors while creating new advertising opportunities within AI-generated responses.