Google really wants you to use Google Search through its visual toolset, which of course means through Google Lens. The search giant has announced a new feature for Lens that leans heavily on its advances in AI to help you use Lens in a new, more personalized way.
Today, Google announced multisearch for Lens. The idea is to fire up Google Lens, snap a photo of an item you want to know more about (or pull up an image already on your phone), and then add extra context to refine the search to a specific color, brand, or other visual attribute.
That may not make a ton of sense, so watch this GIF:
See how it works? You open the Google app, tap the Lens icon, take a photo of something or choose an image from your phone, wait for the initial results to pop up, and then hit the “Add to your search” button to go further. In the example above, someone searches for an orange dress but specifies that they’d like to see it in green. Google is so damn smart, it then shows you green versions of the dress that you could buy. Pretty wild.
Google says that multisearch is available as a beta feature in the US, and you can try it out today.