Google expands AI search with new canvas and video features
Advanced search capabilities include real-time video analysis, AI-powered study planning, and AI Mode access through Lens in Chrome desktop.

Google announced major expansions to its AI-powered search capabilities on July 29, 2025, through a series of social media posts by Robby Stein, VP of Product for Google Search. The updates introduce Search Live with video input, enhanced Chrome desktop integration, and Canvas functionality designed to transform how users interact with search technology.
Search Live with video input represents the most significant advancement in multimodal search capabilities. The feature enables real-time voice conversations with Google Search while pointing a camera at objects, documents, or environments. "If you want to ask about a hands-on science project or what's in your textbook, Search Live with video input gives you an AI-powered learning partner that can see what you see," Stein explained in his announcement.
The video-enabled search operates through continuous visual analysis combined with conversational AI. Users can engage in real-time discussions about physical objects, written materials, or complex scenarios while the system maintains access to web resources. This represents a fundamental shift from traditional text-based queries toward immersive, multimodal search experiences.
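In engineering terms, such a feature amounts to a turn loop that pairs each spoken utterance with the current camera frame while carrying conversation state forward. Google has not published an API for Search Live, so the following is purely a conceptual sketch: every helper below is a hypothetical stand-in, not part of any real interface.

```python
# Conceptual turn loop for a Search Live-style session. All helpers are
# hypothetical stand-ins; no public API exists for this feature.

def capture_frame() -> bytes:
    """Stand-in for grabbing the current camera frame."""
    return b"...frame bytes..."

def transcribe_speech(turn: int) -> str:
    """Stand-in for speech-to-text on the user's latest utterance."""
    return f"user question #{turn}"

def multimodal_answer(frame: bytes, utterance: str, history: list) -> str:
    """Stand-in for a multimodal model call that can also consult the web."""
    return f"answer to '{utterance}' using {len(frame)} frame bytes"

history: list[tuple[str, str]] = []
for turn in range(3):                      # three demo turns
    utterance = transcribe_speech(turn)
    frame = capture_frame()                # visual context for this turn
    reply = multimodal_answer(frame, utterance, history)
    history.append((utterance, reply))     # conversation context persists
    print(reply)
```

The key design point the sketch illustrates is that each reply depends on three inputs at once: the live frame, the spoken question, and the accumulated conversation history.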
Chrome desktop users gain AI Mode access through Google Lens integration starting immediately. The new functionality allows users to search their screen content directly through Lens, generating AI Overviews with contextual insights. "When you need to ask about something you see on the web, we're launching a new way to access AI Mode through Lens in Chrome desktop," according to Stein's technical explanation.
The Chrome implementation includes a planned "Dive deeper" feature that will enable follow-up conversations within AI Mode. This capability addresses user demands for more sophisticated research workflows that combine visual content analysis with conversational search interfaces. The feature builds upon Google's existing Lens technology while incorporating advanced AI reasoning capabilities.
Canvas functionality introduces comprehensive study planning capabilities within AI Mode. The feature synthesizes information from web sources alongside user-uploaded files, presenting organized content through a side panel interface. Users can refine outputs through natural language follow-ups, creating dynamic documents that evolve based on research needs.
"Canvas in AI Mode will soon help you build study plans, with info from the web as well as your own file uploads," Stein detailed. The system accommodates educational applications while extending to travel planning, project organization, and complex research tasks requiring multiple information sources.
Technical implementation requires enrollment in the AI Mode Labs experiment, which provides access to experimental AI search capabilities. Google positions these features as cutting-edge developments requiring user opt-in through the company's search experimentation platform. Access remains limited to users who specifically enable the Labs functionality.
The July 29 announcement follows several months of AI search developments at Google. The company expanded AI Mode to UK users on July 28, introduced automated business calling features on July 16, and enhanced Circle to Search with AI Mode integration on July 9. These developments demonstrate Google's systematic approach to AI search deployment across global markets.
Video search capabilities extend beyond simple object recognition to complex analytical tasks. The system can process lengthy video content, analyze spatial relationships, and understand document structures while maintaining conversation context. Previous Gemini models focused primarily on initial video segments, but enhanced processing now analyzes entire video sequences effectively.
Marketing professionals face significant implications from these AI search developments. PPC Land research indicates that 57% of marketers have already modified SEO or content strategies since AI Overviews launched. The introduction of video-enabled search and Canvas functionality may require additional optimization approaches.
The visual search integration creates new content visibility opportunities while potentially disrupting traditional search traffic patterns. Analysis from PPC Land shows that AI Overviews drive a more than 10% increase in queries globally, though organic click-through rates for affected searches have declined 54.6% year-over-year.
Technical infrastructure supporting these features relies on Google's Gemini 2.5 model, which provides enhanced multimodal processing capabilities compared to earlier versions. The system processes text, images, audio, and video simultaneously while maintaining accuracy standards required for search applications. This technical foundation enables the real-time analysis necessary for video-enabled search conversations.
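For developers, the general shape of this kind of multimodal prompting is visible in Google's public Gemini API, even though the customized model behind AI Mode is not directly exposed. A minimal sketch, assuming the `google-generativeai` Python package; the model name is an assumption, and the API key is a placeholder:

```python
# Minimal multimodal request via the public Gemini API
# (pip install google-generativeai pillow). The model name is an
# assumption; the production model behind AI Mode is a customized
# variant that is not directly exposed to developers.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder

model = genai.GenerativeModel("gemini-2.5-flash")

# Text and image are passed together in one request; the model
# reasons over both modalities at once.
image = Image.open("textbook_page.png")
response = model.generate_content(
    ["Explain the diagram on this page in two sentences.", image]
)
print(response.text)
```

The same request pattern extends to audio and video parts, which is what "processes text, images, audio, and video simultaneously" means in practice: heterogeneous content parts submitted in a single prompt.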
Privacy considerations remain paramount as Google expands AI search capabilities. The company has not disclosed specific data handling procedures for video input or Canvas file uploads. Users participating in Labs experiments should understand that experimental features may involve different privacy protections compared to standard Google Search functionality.
Content creators must adapt to AI search environments that increasingly synthesize information rather than directing users to original sources. Industry analysis from PPC Land demonstrates that traditional SEO practices remain relevant, though optimization strategies must evolve to accommodate AI consumption patterns.
The competitive landscape intensifies as Google advances AI search capabilities while maintaining search market dominance. Microsoft, OpenAI, and other technology companies continue developing alternative AI search platforms, creating pressure for Google to demonstrate clear technological advantages through features like video-enabled search and Canvas functionality.
Global deployment timelines remain unclear for the newly announced features. Google typically implements AI search capabilities in the United States first before expanding to international markets. The AI Mode Labs requirement suggests limited initial availability followed by broader rollouts based on user feedback and technical performance.
Educational applications represent a primary use case for video-enabled search capabilities. Students can analyze textbook content, discuss laboratory experiments, or explore complex diagrams through conversational interfaces. This functionality may transform how educational content gets consumed and understood across academic environments.
Business applications extend beyond educational use cases to include technical documentation analysis, product research, and professional training scenarios. The ability to combine visual analysis with comprehensive web research through Canvas functionality addresses complex workplace information needs that traditional search cannot efficiently satisfy.
Timeline
- March 5, 2025: Google reveals AI Mode as experimental feature for complex queries using Gemini 2.0 technology
- May 20, 2025: Google launches AI Mode with virtual try-on and advanced shopping features for all US users
- June 30, 2025: Google begins June core update affecting ranking systems globally
- July 8, 2025: Google launches AI Mode in India for all users without Labs enrollment
- July 9, 2025: Circle to Search gains AI Mode integration across 300 million Android devices
- July 16, 2025: Google introduces Gemini 2.5 Pro and Deep Search capabilities with automated business calling
- July 24, 2025: Google launches Web Guide Search Labs experiment using AI to organize search results
- July 28, 2025: Google introduces AI Mode to UK users through search results tab and mobile applications
- July 29, 2025: Google announces Search Live with video input, Chrome Lens integration, and Canvas functionality
Summary
Who: Robby Stein, VP of Product for Google Search, announced the new AI search capabilities through social media posts. The features target students, researchers, and professionals requiring advanced search capabilities.
What: Google introduced Search Live with video input for real-time visual analysis, AI Mode integration through Chrome Lens, and Canvas functionality for organizing research from web sources and uploaded files.
When: The announcement occurred on July 29, 2025, at 1:00 PM Eastern Time through a series of Twitter/X posts. Features require enrollment in the AI Mode Labs experiment for access.
Where: The capabilities deploy initially to users enrolled in Google's AI Mode Labs experiment, with Chrome desktop integration available immediately and Canvas functionality launching soon.
Why: Google aims to maintain search market leadership against AI-powered competitors while addressing user demand for more sophisticated, multimodal search experiences that combine visual analysis with conversational interfaces.
Key Terms Explained
AI Mode: Google's most advanced artificial intelligence search interface that processes complex, multi-part queries through conversational interactions. The system utilizes a customized version of the Gemini 2.5 model to understand nuanced questions and provide comprehensive responses that synthesize information from multiple web sources. AI Mode represents a fundamental departure from traditional keyword-based search toward semantic understanding and natural language processing.
Search Live: The newly announced feature enabling real-time voice conversations with Google Search while using camera input to analyze physical objects, documents, or environments. Search Live transforms search from a text-based activity into an immersive, multimodal experience where users can discuss what they see with an AI-powered assistant that maintains access to web resources throughout the conversation.
Canvas: An AI-powered organizational tool within Google's search ecosystem that helps users build comprehensive study plans and research documents. Canvas synthesizes information from web sources alongside user-uploaded files, presenting organized content through a side panel interface that users can refine through natural language follow-ups and iterative improvements.
Gemini 2.5: Google's latest large language model specifically customized for search applications within AI Mode and related features. This artificial intelligence system provides enhanced reasoning capabilities, multimodal processing, and improved factuality measures compared to earlier model versions. Gemini 2.5 enables the complex video analysis, spatial understanding, and document processing required for advanced search interactions.
Multimodal Search: Search technology that processes and understands multiple types of input simultaneously, including text, voice, images, and video content. This approach enables users to interact with search systems through natural combinations of communication methods rather than being limited to traditional text queries, creating more intuitive and comprehensive search experiences.
AI Overviews: Google's feature that presents AI-generated summaries of information available on specific topics directly within search results. These summaries appear alongside traditional search results and aim to help users quickly understand key information before deciding which websites to visit, though they have significantly impacted click-through rates to external websites.
Chrome Lens Integration: The newly announced capability allowing users to search screen content directly through Google Lens within Chrome desktop browsers. This integration generates AI Overviews with contextual insights about visual content displayed on web pages, enabling users to ask questions about images, text, or other visual elements without leaving their current browsing session.
Labs Experiment: Google's testing framework for experimental search features that require user opt-in before access becomes available. The Labs system allows Google to deploy cutting-edge AI capabilities to limited user groups for feedback and performance evaluation before broader rollouts, ensuring stability and effectiveness of new search technologies.
Video Input: The capability for users to point their camera at objects, documents, or environments while engaging in real-time conversations with Google Search. Video input represents a significant advancement in search interaction design, enabling continuous visual analysis combined with conversational AI to create more natural and informative search experiences.
Query Fan-out: The technical methodology underlying AI Mode's processing capabilities that breaks down complex user questions into multiple subtopics. This approach simultaneously issues hundreds of related searches to gather comprehensive information before synthesizing responses, enabling AI Mode to explore topics more thoroughly than traditional search algorithms that process single queries independently.
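The fan-out pattern itself is straightforward to illustrate. A minimal sketch, with a hypothetical `web_search` function standing in for Google's internal retrieval; the decomposition step is shown as a fixed list rather than a model call, and real deployments issue far more subqueries than the handful shown here:

```python
# Illustrative query fan-out: decompose a question into subqueries,
# run them concurrently, and merge the results for synthesis.
# `web_search` is a hypothetical stand-in for an actual search backend.
from concurrent.futures import ThreadPoolExecutor

def web_search(query: str) -> list[str]:
    """Hypothetical retrieval call; returns snippet strings."""
    return [f"snippet for: {query}"]

def fan_out(question: str, subqueries: list[str]) -> list[str]:
    # Issue all subqueries in parallel, as AI Mode reportedly does
    # at much larger scale (hundreds of related searches).
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = pool.map(web_search, subqueries)
    # Flatten everything into one evidence pool for the synthesis step.
    return [snippet for batch in results for snippet in batch]

evidence = fan_out(
    "Plan a two-week study schedule for organic chemistry",
    [
        "organic chemistry core topics list",
        "spaced repetition study schedule",
        "common organic chemistry exam mistakes",
    ],
)
print(len(evidence), "snippets gathered for synthesis")
```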