Google this week detailed a substantial update to Circle to Search, the visual search feature built into select Android devices, introducing multi-object recognition that allows users to search for every item in a complete outfit or scene simultaneously. The announcement came through the Made by Google Podcast, Season 9 Episode 4, published on April 27, 2026, in which Harsh Kharbanda, Director of Product Management at Google, walked through the technical architecture behind the changes. The update is now available on the Pixel 10 series and the Samsung Galaxy S26.

The new capability - called Find the Look - represents a fundamental shift in how Circle to Search processes images. Previously, the feature identified single objects within a circled selection. Now it decomposes an entire image into its constituent parts, runs a separate visual search for each one, and returns shoppable results for all of them at the same time. A virtual try-on tool sits at the end of that chain, letting users place any found garment onto a photo of themselves before deciding whether to buy.

From screenshots to gestures: a brief history

Circle to Search did not appear from nowhere. According to Kharbanda on the podcast, the origins trace back to Google Lens, a visual search product that Google has developed since 2018. The concept itself is older still - Google Goggles, an early experiment in image recognition, was explored around 2012 or 2014, though Kharbanda noted the technology at that time was simply not ready for broad use.

The problem Google observed was a friction point that will be familiar to anyone who has spotted something interesting on social media. Users were taking screenshots, switching out of whatever app they were in, opening Google Lens, and uploading the image manually. It worked, but it was slow. Circle to Search, launched on January 31, 2024, on Samsung Galaxy S24 and Pixel 8 devices, was designed to collapse that sequence into a single gesture. A long press on the navigation handle at the bottom of the screen freezes the current screen, and the user can then draw a circle around whatever they want to search.

Kharbanda described what he felt when he first used the circle gesture: "it just felt so intuitive... I was like, oh yeah, I'm going to use this like 10 times a day." The use cases he mentioned in the podcast ranged from translation of text in family chat messages to fact-checking claims that his mother forwarded via WhatsApp. The feature was never exclusively about fashion - but fashion turned out to be where users pushed it hardest.

The shopping behaviour that drove this update was visible well before Circle to Search launched, according to Kharbanda. During the COVID period, when users spent more time on their phones and on social media, Google Lens was already seeing a pattern: people would screenshot influencer posts, particularly outfits they liked, and upload those screenshots to Lens to find similar items in their price range.

Younger users drove this behaviour most strongly. According to Kharbanda, younger and female users use visual shopping through Circle to Search "a lot more" because they are budget-conscious and have a specific aesthetic they want to recreate. The problem was that the feature worked cleanly for single items. If a user wanted to find a jacket they saw on an influencer, that was straightforward. But an outfit is not a jacket. It is a jacket, a top, jeans, shoes, a bag, possibly sunglasses. Circling each item one by one was workable but laborious, especially for looks worn by people like Taylor Swift, where every component is a distinct product with its own purchasing path.

"Circle to Search didn't really work well for that," Kharbanda said on the podcast. "It worked really well for single objects, and you could circle one by one... and some of these influencers have sunglasses. So there's like, you know, five [items]. A lot to circle."

How Find the Look works technically

The solution required solving three distinct problems in sequence, each building on the last.

The first challenge was model training. Google needed to train the underlying model on a sufficiently diverse range of images - female and male influencers, different age groups, different item types including jackets, skirts, dresses and accessories. Without that breadth, the model would not generalise reliably across the range of looks users actually wanted to search. Early versions of the model, Kharbanda explained, would miss shoes entirely because shoes occupy only a small proportion of the total image; handbags and sunglasses caused similar early problems, being either small or thin in the frame. The model had to be specifically tuned to treat each component as equally important regardless of its visual footprint.
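
The podcast stops short of training specifics, but the imbalance Kharbanda describes can be sketched in a few lines. The snippet below is purely illustrative - the class and function names are invented for this example and have no connection to Google's actual training code - and shows why a loss weighted by pixel area lets a badly detected pair of shoes or sunglasses hide inside an otherwise good score, while weighting every component equally does not.

```python
# Illustrative sketch only: the names below are invented for this example.
# It shows one common way to keep small items such as shoes or sunglasses
# from being drowned out: count each labelled component's error equally
# rather than in proportion to the pixels it covers.

from dataclasses import dataclass

@dataclass
class ComponentLabel:
    name: str          # e.g. "jacket", "shoes", "sunglasses"
    pixel_area: int    # how much of the image the item covers
    error: float       # per-component loss from the detector (placeholder)

def area_weighted_loss(components: list[ComponentLabel]) -> float:
    """Large garments dominate; small items barely register."""
    total_area = sum(c.pixel_area for c in components)
    return sum(c.error * (c.pixel_area / total_area) for c in components)

def balanced_loss(components: list[ComponentLabel]) -> float:
    """Each component counts equally, regardless of its visual footprint."""
    return sum(c.error for c in components) / len(components)

outfit = [
    ComponentLabel("jacket", pixel_area=90_000, error=0.1),
    ComponentLabel("jeans", pixel_area=70_000, error=0.1),
    ComponentLabel("shoes", pixel_area=4_000, error=0.9),       # badly detected
    ComponentLabel("sunglasses", pixel_area=1_500, error=0.9),  # badly detected
]

print(f"area-weighted: {area_weighted_loss(outfit):.3f}")  # ~0.13, hides the misses
print(f"balanced:      {balanced_loss(outfit):.3f}")       # 0.50, shoes now matter
```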

The second challenge was reliable deconstruction. Even once the model could identify all the components in theory, it had to do so consistently in practice. Partial occlusion made this particularly hard. If a jacket partially covers a shirt, the visual information available for that shirt is reduced. The system has to infer what the shirt looks like from limited cues and still return visually similar products. According to Kharbanda, the team had to "really do a good job of deconstructing that look and finding the shirt... and then like retrieving other visually similar shirts."

The third challenge was ranking and availability. Finding visually similar products is not enough. Users expect those products to be in stock, available from retailers they trust, priced within a browsable range, and localised to their geography. Kharbanda described this as "a bunch of ranking and quality problems on top of that." Each result page shown to the user had to meet quality thresholds across all of those dimensions simultaneously.
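
Kharbanda does not break those ranking signals down further, so the following is only a hedged sketch of what "ranking and quality problems" might look like in code: the Candidate fields, thresholds and ordering rule are all assumptions made for illustration rather than anything Google has described.

```python
# Hypothetical sketch of availability-aware ranking; field names and
# thresholds are invented for illustration, not Google's actual signals.

from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    visual_similarity: float   # 0..1 score from the visual search step
    in_stock: bool
    ships_to_user_country: bool
    retailer_trust: float      # 0..1 placeholder quality signal
    price: float

def rank_results(candidates: list[Candidate], max_price: float) -> list[Candidate]:
    """Gate on availability, locale, trust and price first; then order by similarity."""
    eligible = [
        c for c in candidates
        if c.in_stock
        and c.ships_to_user_country
        and c.retailer_trust >= 0.5
        and c.price <= max_price
    ]
    return sorted(eligible, key=lambda c: c.visual_similarity, reverse=True)

results = rank_results(
    [
        Candidate("Lookalike jacket A", 0.92, in_stock=False,
                  ships_to_user_country=True, retailer_trust=0.9, price=80.0),
        Candidate("Lookalike jacket B", 0.84, in_stock=True,
                  ships_to_user_country=True, retailer_trust=0.8, price=65.0),
    ],
    max_price=100.0,
)
print([r.title for r in results])  # only jacket B survives the gates
```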

The model powering this is Gemini 3. According to Kharbanda, the latest model update is what enables the system to "really look at the image in its whole... and break it down and think through what are the different parts of the image that are really interesting." The same visual search process that finds individual products is then applied separately to each identified component.
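
Put together, the three steps describe a per-component pipeline rather than a single search. The sketch below shows only the shape of that flow; every function is a stub with placeholder logic, since the real decomposition and retrieval run on Gemini 3 and Google's shopping index and nothing about their implementation was disclosed on the podcast.

```python
# Pipeline shape only: decompose the image into components, run a separate
# visual search for each one, filter each result set, return them together.
# All three helpers are stand-ins, not real APIs.

def decompose_image(image_id: str) -> list[str]:
    """Stand-in for the Gemini 3 step that identifies every item in the image."""
    return ["jacket", "shirt", "jeans", "shoes", "sunglasses"]

def visual_search(item: str) -> list[dict]:
    """Stand-in for the per-component visual search against a product index."""
    return [{"title": f"similar {item} #{i}", "in_stock": i % 2 == 0} for i in range(4)]

def rank_and_filter(matches: list[dict]) -> list[dict]:
    """Stand-in for the availability, trust, price and locale filtering."""
    return [m for m in matches if m["in_stock"]]

def find_the_look(image_id: str) -> dict[str, list[dict]]:
    """One search per identified component, returned as a single result set."""
    return {item: rank_and_filter(visual_search(item)) for item in decompose_image(image_id)}

print(find_the_look("influencer_post.jpg"))
```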

The virtual try-on step

Once a user taps Find the Look and the deconstruction runs, they see individual product cards for each item in the outfit - cap, shirt, shorts, shoes, whatever the image contains. Tapping any product card opens a closer view. From there, a "Try it on" button appears. Tapping that button overlays the selected garment onto a photo of the user, either taken at that moment or chosen from their device.

Kharbanda demonstrated this live during the podcast recording, using a golf outfit he found on social media. He invoked Circle to Search, circled the entire look, tapped Find the Look, selected the shirt, and used Try it on. "Probably not the thing that I want to wear," he said of the result, but it confirmed the value of seeing a garment on oneself before purchasing rather than relying on a model whose body proportions may differ significantly.

Taken in isolation, virtual try-on functionality is not new to Google's ecosystem. Google launched AI Mode with a virtual try-on feature using personal photos in May 2025, covering shirts, pants, skirts, and dresses across billions of apparel listings in the Shopping Graph. Google Lens had already introduced Shopping ads in October 2024, at a point when visual searches on Lens had reached nearly 20 billion per month, with 20 percent of those searches being shopping-related. What the Circle to Search update adds is the integration of that try-on capability directly into the end of an automated multi-object search workflow, reducing the steps a user needs to take from spotting a complete look to evaluating a specific product on their own body.

Beyond fashion: skincare routines, plant collections, and awards ceremonies

The multi-object recognition is not restricted to clothing. Kharbanda gave three additional examples during the podcast that illustrate the broader application.

A product manager on his team who follows skincare content on Instagram encountered a skincare routine post listing 14 different products. She circled the entire post and asked the system to find all the products, provide reviews for each, and rank them by price. The system searched for all 14 simultaneously and returned the structured results.

Kharbanda himself came across a social media post showing a collection of plants and did not know the names of any of them. He circled the image and asked for the plant names, their growing conditions, and whether they would thrive indoors. The system identified each plant individually and returned that information.

A third scenario involved a photograph from an awards ceremony - the Golden Globes or a similar event - showing five actors together. Kharbanda knew two of them but not the others. He circled all five and asked for a table showing who each person was and what they had won. The system constructed that response across all five subjects.

The common thread is that the user's question is about the whole image, not a fragment of it. According to Kharbanda, the previous constraint of single-object search meant users had to decompose their own questions before asking them. The update removes that constraint.

What comes next, according to Google

At the end of the podcast, Kharbanda gave a forward-looking signal about where Circle to Search is heading. He described work underway that would allow the feature to analyse not just an image but the full context of what a user is looking at - an entire PDF, a full webpage, or a complete video - with the user's permission. "So with your permission, you can have us look at the full PDF, let's say if you are on a PDF, or the full webpage that you are on, or the full video that you're looking at, and we can really search through that entire thing."

That would extend the feature from image decomposition to document and video comprehension - a considerably broader scope. No timeline was given for those capabilities.

The current update is available now on the Pixel 10 series. The Samsung Galaxy S26 was mentioned by podcast host Rachid Finge as a supported device.

Why this matters for marketers and advertisers

For the marketing community, the implications of multi-object recognition in Circle to Search connect directly to how product discovery is changing on mobile. Google's visual search capabilities have been expanding steadily, and the commercial surface area of those capabilities is growing alongside them.

When Circle to Search decomposes an outfit, it does not return a single search result - it returns a structured set of shoppable product pages for every identified item. Each of those pages is a potential point of commercial interaction. Shopping ads already run within Google Lens visual searches, and the same infrastructure applies here. Advertisers running Shopping campaigns and Performance Max campaigns are eligible to appear in these visual search results without additional configuration, because the ad eligibility follows the product feed.

Google's 2026 retail ad approach, as described in a recent PPC Land analysis, situates Google Merchant Center product data as the foundation for a range of surfaces including free listings, the Gemini shopping experience, AI Mode, virtual try-on in Google Lens, Business Agent, and brand profiles. Circle to Search with Find the Look now adds another surface to that list. Retailers whose product data is not structured to surface correctly in visual search environments may find themselves absent from results when users are at a moment of high purchase intent.
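
For retailers taking stock of that exposure, the practical starting point is the completeness of the product feed itself. The snippet below is a hedged sketch - a minimal product record written as a Python dictionary using common Google Merchant Center attribute names - and should be read as a reminder of what a well-formed record contains, not as a list of fields Google has said Find the Look uses.

```python
# Illustrative only: a minimal product record using common Google Merchant
# Center attribute names. Which fields visual surfaces such as Find the Look
# actually weigh has not been disclosed; clean imagery, accurate availability
# and current pricing are simply the attributes most obviously in play for
# visual matching.

product = {
    "id": "SKU-10482",                      # hypothetical identifier
    "title": "Cropped denim jacket - light wash",
    "description": "Cropped-fit denim jacket with button front.",
    "link": "https://example.com/products/sku-10482",
    "image_link": "https://example.com/images/sku-10482-front.jpg",
    "additional_image_link": "https://example.com/images/sku-10482-back.jpg",
    "availability": "in_stock",
    "price": "59.99 USD",
    "brand": "ExampleBrand",
    "gtin": "00012345678905",
    "condition": "new",
}
```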

The population that uses visual shopping through Circle to Search is also specific. Kharbanda characterised it as younger and budget-conscious, with a strong sense of personal style and a tendency to search for ways to recreate looks seen on social media at a lower price point. That audience profile is relevant to any brand operating in fashion, beauty, or lifestyle categories.

Google's AI Mode also introduced visual search exploration in late 2025, with visual search queries reported to have grown 65 percent year-over-year as of July 2025. Search Live, launched in September 2025, added real-time camera feed integration for voice and visual queries. The Circle to Search update announced today fits within that larger pattern of Google expanding the inputs users can bring to a search - moving from a typed query toward any combination of image, gesture, voice, and context.

Timeline

- Around 2012 to 2014: Google explores Google Goggles, an early image recognition experiment, according to Kharbanda
- 2018: Google Lens, the visual search product Circle to Search grew out of, is in development
- January 31, 2024: Circle to Search launches on the Samsung Galaxy S24 and Pixel 8
- October 2024: Shopping ads arrive in Google Lens, with visual searches on Lens nearing 20 billion per month
- May 2025: AI Mode launches virtual try-on using personal photos across apparel listings in the Shopping Graph
- July 2025: Visual search queries reported up 65 percent year-over-year
- September 2025: Search Live adds real-time camera feed integration for voice and visual queries
- Late 2025: AI Mode introduces visual search exploration
- April 27, 2026: Made by Google Podcast episode details Find the Look, available on the Pixel 10 series and Samsung Galaxy S26

Summary

Who: Google, specifically Harsh Kharbanda (Director of Product Management for Circle to Search and Google Lens), speaking on the Made by Google Podcast hosted by Rachid Finge.

What: A major update to Circle to Search introducing multi-object recognition, a feature called Find the Look that decomposes an entire outfit or scene into individual shoppable items, and a virtual try-on tool that places selected garments onto a user's own photo. The update is powered by Gemini 3.

When: The podcast episode detailing the update was published on April 27, 2026. The update is available today on supported devices.

Where: The feature is available on the Google Pixel 10 series and the Samsung Galaxy S26. The podcast was published on the Made by Google YouTube channel.

Why: Users were already using Circle to Search to shop for individual items from social media, but the feature could only process one object at a time. The update addresses that limitation by allowing users to search for an entire look in one gesture, from initial discovery through product identification, visual matching, availability filtering, and try-on - without leaving the app they are using.
