Google launches Gemini 3 with generative UI for dynamic search experiences
Google introduced Gemini 3 on November 18, 2025, featuring state-of-the-art reasoning, generative UI capabilities, and the Google Antigravity development platform.
Google announced Gemini 3 on November 18, 2025, introducing its most advanced artificial intelligence model alongside a novel implementation of generative UI that dynamically creates visual experiences and interactive interfaces in response to user queries. The release represents a significant advancement in how AI systems generate not just content but complete user experiences tailored to individual prompts.
The generative UI capability allows Gemini 3 to create web pages, games, tools, and applications that are automatically designed and fully customized for any question or instruction, according to research published by Google. Prompts can range from a single word to detailed multi-paragraph specifications. The implementation differs fundamentally from the static, predefined interfaces in which AI models typically render content.
According to Yaniv Leviathan, Google Fellow, Dani Valevski, Senior Staff Software Engineer, and Yossi Matias, Vice President and Head of Google Research, the generative UI work demonstrates the viability of this paradigm: in human evaluations, raters strongly preferred these interfaces over standard language model outputs when generation speed was excluded from consideration.
The technology debuts in two Google products. The Gemini app receives the capability through an experiment called dynamic view, while AI Mode in Google Search integrates generative UI for subscribers of Google AI Pro and Ultra plans in the United States. Dynamic view uses Gemini's agentic coding capabilities to design and code fully customized interactive responses for each prompt, understanding that explaining the microbiome to a five-year-old requires different content and features than explaining it to an adult.
Technical architecture combines three elements
The generative UI implementation uses Google's Gemini 3 Pro model with three critical additions. Tool access gives the server capabilities including image generation and web search, with results made accessible to the model or sent directly to the user's browser. System instructions guide the implementation with detailed specifications covering goals, planning, examples, and technical details such as formatting, tool manuals, and common errors to avoid. Post-processing addresses potential issues by passing model outputs through a set of processors.
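Google has not published reference code for this pipeline. As a rough sketch of how the three pieces could fit together, the following Python is purely illustrative; every name (generate_interface, POST_PROCESSORS, the Model protocol, the tool list) is hypothetical and stands in for whatever Google's server actually does.

```python
# Hypothetical sketch of the three-part generative UI pipeline described
# above: tool access, detailed system instructions, and post-processing.
from typing import Callable, Protocol


class Model(Protocol):
    def generate(self, system: str, prompt: str, tools: list[str]) -> str: ...


SYSTEM_INSTRUCTIONS = (
    "Generate a complete, interactive HTML page answering the user's prompt. "
    "Detailed goals, planning steps, examples, formatting rules, tool manuals, "
    "and common errors to avoid would appear here."
)

# Capabilities exposed to the model; results can go back to the model or
# straight to the user's browser.
TOOLS = ["image_generation", "web_search"]


def strip_markdown_fences(html: str) -> str:
    # Example post-processor: remove stray markdown fences from raw output.
    return html.replace("```html", "").replace("```", "")


POST_PROCESSORS: list[Callable[[str], str]] = [strip_markdown_fences]


def generate_interface(model: Model, prompt: str) -> str:
    # 1) Model generates a full interface with tool access.
    html = model.generate(system=SYSTEM_INSTRUCTIONS, prompt=prompt, tools=TOOLS)
    # 2) Raw output passes through a chain of processors that address
    #    common issues before reaching the user.
    for process in POST_PROCESSORS:
        html = process(html)
    return html
```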
Gemini 3 Pro achieves state-of-the-art performance across major AI benchmarks. The model tops the LMArena Leaderboard with a breakthrough score of 1501 Elo, according to Google executives. It demonstrates PhD-level reasoning with scores of 37.5% on Humanity's Last Exam without tool usage and 91.9% on GPQA Diamond. Mathematical capabilities reach new levels with 23.4% on MathArena Apex.
Multimodal reasoning capabilities distinguish Gemini 3 from previous iterations. The model scores 81% on MMMU-Pro and 87.6% on Video-MMMU, setting benchmarks for understanding complex visual content. Factual accuracy also improves, with a 72.1% score on SimpleQA Verified indicating more reliable answers to fact-based questions.
Coding performance represents another significant advancement. Gemini 3 Pro tops the WebDev Arena leaderboard with 1487 Elo and achieves 54.2% on Terminal-Bench 2.0, which tests a model's ability to operate a computer via the terminal. It also scores 76.2% on SWE-bench Verified, a benchmark measuring coding agent capabilities, greatly surpassing Gemini 2.5 Pro.
New agentic development platform launches
Google introduced Antigravity, a new agentic development platform that enables developers to operate at a higher, task-oriented level rather than managing individual lines of code. Using Gemini 3's advanced reasoning, tool use, and agentic coding capabilities, Antigravity transforms AI assistance from a developer toolkit into an active partner, according to the announcement.
The platform features agents elevated to a dedicated surface with direct access to the editor, terminal, and browser. Agents can autonomously plan and execute complex, end-to-end software tasks simultaneously while validating their own code. The system comes integrated with the Gemini 2.5 Computer Use model for browser control and the Nano Banana image editing model.
Logan Kilpatrick, Product Lead for Google AI Studio and the Gemini API, stated that Gemini 3's remarkable prompt adherence and multi-step tool calling supercharge fullstack app development platforms. Developers reported seeing incredible results particularly in UI and frontend workflows when incorporating the model into agentic code development setups.
Gemini 3 arrives in the Gemini app with upgraded capabilities including conversational editing, creative composition tools, and enhanced logical reasoning for complex scene generation. The model applies state-of-the-art reasoning to complex problems, with responses that are more helpful, better formatted, and more concise than previous versions. Google extended temporary chat features and personalization capabilities to the Gemini app in August 2025.
AI Mode receives major enhancements
AI Mode in Google Search gains Gemini 3's reasoning power, bringing unprecedented depth and nuance to difficult questions. The system unlocks new generative UI experiences providing dynamic visual layouts with interactive tools and simulations generated specifically for individual queries. Google AI Pro and Ultra subscribers in the United States can access these capabilities by selecting "Thinking" from the model drop-down menu in AI Mode.
Gemini 3's advanced reasoning allows Google Search's query fan-out technique to perform more searches, uncovering relevant web content. Because the model understands user intent more intelligently, it can find content that previous models missed. Enhanced automatic model selection will route the most challenging questions in AI Mode and AI Overviews to this frontier model while continuing to use faster models for simpler tasks, according to Elizabeth Hamon Reid, Vice President of Engineering for Search.
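Google has not detailed the fan-out mechanics beyond "more searches". Conceptually, fan-out expands one query into several sub-queries, searches each, and merges the results; the sketch below is a generic illustration of that idea, with expand_queries() and search() as stand-ins, not Google Search internals.

```python
# Generic query fan-out sketch: expand one query into sub-queries,
# search each, then merge results while de-duplicating.
def fan_out(query: str, model, search) -> list[str]:
    sub_queries = model.expand_queries(query)   # hypothetical: e.g. 3-10 variants
    results = []
    for q in [query, *sub_queries]:
        results.extend(search(q))               # gather candidate pages per query
    seen, merged = set(), []
    for r in results:                           # preserve rank order across searches
        if r not in seen:
            seen.add(r)
            merged.append(r)
    return merged
```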
When Gemini 3 detects that an interactive tool will help users better understand topics, it uses generative capabilities to code custom simulations or tools in real-time and adds them into responses. A user researching mortgage loans can receive a custom-built interactive loan calculator directly in the response to compare options and see long-term savings. All responses include prominent links to high-quality content across the web.
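To make the loan calculator example concrete, the computation such a generated tool performs is the standard fixed-rate amortization formula, M = P·r(1+r)^n / ((1+r)^n − 1). The Python below is a minimal illustration of that math, not the code Gemini would actually emit.

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Fixed-rate amortization: M = P*r*(1+r)**n / ((1+r)**n - 1),
    where r is the monthly rate and n the number of monthly payments."""
    r = annual_rate / 12
    n = years * 12
    if r == 0:
        return principal / n
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)


# Comparing two options on a $400,000 loan, as a generated calculator might:
for rate, years in [(0.065, 30), (0.060, 15)]:
    m = monthly_payment(400_000, rate, years)
    total = m * years * 12
    print(f"{rate:.1%} over {years} years: ${m:,.0f}/mo, ${total:,.0f} total")
```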
The implementation represents research breakthroughs leading to product innovation, according to the announcement. Google sees potential in extending generative UI to access a wider set of services, adapt to additional context and human feedback, and deliver increasingly helpful visual and interactive interfaces. Current limitations include generation times that can exceed one minute and occasional output inaccuracies, both areas of ongoing research.
Developer access and pricing
Gemini 3 Pro becomes available in preview at $2 per million input tokens and $12 per million output tokens for prompts of 200,000 tokens or less through the Gemini API in Google AI Studio. Enterprise customers can access the model through Vertex AI. The pricing applies to API usage; rate-limited access is available in Google AI Studio at no charge.
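At those rates, per-request cost is simple to estimate. The prices below come from the announcement; the token counts are invented purely for illustration.

```python
# Preview pricing for prompts of 200K tokens or less (from the announcement).
INPUT_PER_M = 2.00    # USD per million input tokens
OUTPUT_PER_M = 12.00  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# A hypothetical request: 50K input tokens, 4K output tokens.
print(f"${request_cost(50_000, 4_000):.4f}")  # $0.1000 + $0.0480 = $0.1480
```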
Developers can integrate Gemini 3 Pro into applications immediately via Google AI Studio and, for enterprises, Vertex AI. The API introduces new thinking levels and more granular media resolution parameters, along with stricter validation of thought signatures, which are critical for preserving the model's reasoning across multi-turn conversations. Technical documentation provides comprehensive details on building with the model.
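A minimal call might look like the sketch below. It assumes the google-genai Python SDK; the preview model ID and the exact shape of the thinking-level parameter are taken from launch-era documentation and may differ in current releases.

```python
# Sketch of a Gemini API call using the new thinking-level control.
# Model ID and thinking_level parameter are assumptions from launch docs.
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the GEMINI_API_KEY env var

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed preview model ID
    contents="Summarize thought signatures in two sentences.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_level="high"),
    ),
)
print(response.text)
```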
Third-party platform integration enables Gemini 3 usage in Cursor, GitHub, JetBrains, Manus, Replit, and other development environments. Android Studio integration provides professional Android developers with Gemini 3 for AI assistance in application development. The model becomes available in Gemini CLI, Google's command line tool for agentic coding that lets developers delegate coding tasks directly from terminals.
Gemini 3 Deep Think mode pushes the boundaries of intelligence further, delivering step-change improvements in reasoning and multimodal understanding for more complex problems. In testing, Deep Think improves on Gemini 3 Pro's already strong results, scoring 41.0% on Humanity's Last Exam without tool use and 93.8% on GPQA Diamond. The mode achieves 45.1% on ARC-AGI-2 with code execution, demonstrating an ability to solve novel challenges.
Gemini Agent handles multi-step workflows
Google introduced Gemini Agent, an experimental feature that handles multi-step tasks directly inside the Gemini app. The system connects to Google apps to manage Calendar, add reminders, or organize inboxes by prioritizing messages and drafting replies for approval. Users can provide precise instructions, such as researching and booking a mid-size SUV for a trip within a specific budget using details from email.
Built on insights from Project Mariner and powered by Gemini 3's advanced reasoning, Gemini Agent breaks down complex requests using tools including Deep Research, Canvas, connected Google Workspace apps like Gmail and Calendar, and live web browsing. Users remain in control with the system designed to seek confirmation before critical actions like purchases or sending messages. The feature marks another step toward a generalist agent, rolling out for Google AI Ultra subscribers in the United States.
Agentic AI development accelerated significantly across the marketing industry in November 2025, with Amazon, Google, and IAB Tech Lab launching autonomous campaign tools. Google's Ads Advisor and Analytics Advisor reached all English-language accounts in early December, bringing Gemini-based campaign optimization to advertisers through conversational interfaces.
The Gemini 3 launch follows a systematic progression of Google's AI capabilities. Gemini 1 introduced native multimodality and long context windows, expanding the types and amounts of information that could be processed. Gemini 2 laid the foundations for agentic capabilities and advanced reasoning, helping with complex tasks and ideas. Gemini 2.5 Pro topped LMArena for over six months before this release.
Research published by Google Cloud in November 2025 projects the agentic AI market could reach approximately $1 trillion by 2035-2040, with over 90% of enterprises planning integration within three years. These autonomous systems differ from conventional automation through their ability to autonomously reason, decide, and act while solving complex business problems.
The generative UI research produced PAGEN, a dataset of websites made by human experts that will be released to the research community. Evaluations compared generative UI experiences against various formats, including websites designed by human experts for specific prompts, top Google Search results, and baseline language model outputs in text or markdown. Sites designed by human experts achieved the highest preference rates, followed closely by the generative UI implementation, with substantial gaps to the other output methods.
The visual layout experiment generates immersive, magazine-style views complete with photos and modules that invite user input to tailor results further. Dynamic view uses Gemini 3's agentic coding capabilities to design and code custom user interfaces in real time, suited to each prompt. Both experiments begin rolling out today, and users may initially see only one as Google learns how the features perform.
Timeline
- December 2023: Google introduces Gemini as most capable multimodal AI model
- March 2025: Google announces AI Mode for experimental search features
- June 2025: AI Mode data begins counting in Search Console performance reports
- July 2025: Google reveals Gemini multimodal advances including video understanding
- August 2025: Gemini introduces temporary chats and personalization features
- October 2025: Google expands AI Mode to over 40 countries and territories
- November 2025: Google Cloud releases comprehensive agentic AI framework guidelines
- November 18, 2025: Google launches Gemini 3 with generative UI capabilities and Google Antigravity platform
Summary
Who: Google, under CEO Sundar Pichai and Google DeepMind CEO Demis Hassabis, released Gemini 3 for users, developers, and enterprises. Research team members Yaniv Leviathan, Dani Valevski, and Yossi Matias developed the generative UI implementation. Product leads including Josh Woodward for the Gemini app, Elizabeth Hamon Reid for Search, and Logan Kilpatrick for developers led product integration.
What: Gemini 3 is Google's most intelligent AI model featuring state-of-the-art reasoning, multimodal understanding, and agentic coding capabilities. The release includes generative UI implementation that dynamically creates visual experiences and interactive interfaces, Google Antigravity agentic development platform, Gemini Agent for multi-step task handling, and enhanced AI Mode in Search with interactive tools and simulations. The model achieves 1501 Elo on LMArena Leaderboard and 1487 Elo on WebDev Arena.
When: Google announced Gemini 3 on November 18, 2025, with immediate availability in the Gemini app, AI Mode in Search for Google AI Pro and Ultra subscribers, Google AI Studio, Vertex AI, Google Antigravity, and Gemini CLI. Gemini 3 Deep Think mode will reach Google AI Ultra subscribers in coming weeks following safety evaluations.
Where: Gemini 3 rolls out globally through the Gemini app, with AI Mode enhancements initially limited to United States users with Google AI Pro and Ultra subscriptions. Developer access spans Google AI Studio, Vertex AI for enterprises, Google Antigravity, Gemini CLI, and third-party platforms including Cursor, GitHub, JetBrains, Manus, and Replit. Gemini Agent remains exclusive to Google AI Ultra subscribers in the United States.
Why: The release addresses growing demand for AI systems that can bring any idea to life through advanced reasoning, multimodal understanding, and agentic capabilities. Generative UI enables more natural, helpful interactions by creating custom interfaces rather than forcing users to work within predefined templates. For marketing professionals, the technology transforms how search results present information and how developers build applications, potentially reshaping traffic patterns, user behavior, and content discovery as AI-mediated interactions become more sophisticated.