Google is adding a new dimension to its experimental AI Mode by connecting Google Lens’s visual abilities with Gemini.
AI Mode is a part of Google Search that can break down complex topics, compare options, and suggest follow-ups. Now, those searches can also start from uploaded images or photos taken on your smartphone.
The result is a way to search with images the way you would with text, with far more detailed answers than you would get from dropping a picture into reverse image search.
You can literally snap a photo of a weird-looking kitchen tool and ask, “What is this, and how do I use it?” and get a helpful answer, complete with shopping links and YouTube demos.
AI Eyes
If you take a picture of a bookshelf, a plate of food, or the chaotic interior of your junk drawer, the AI won’t just recognize individual objects; it will also explain their relationship to each other.
You might get suggestions for other dishes you could make with the same ingredients, an answer to whether your old phone charger is buried in the drawer, or a recommended order for reading the books on the shelf.
Essentially, the feature fires off multiple related questions in the background, covering both the scene as a whole and each individual object in it. So when you upload a picture of your living room and ask how to redecorate it, you're not just getting one generic answer. You're getting a combined set of responses from mini AI agents asking about everything in the room.
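To make that fan-out idea a little more concrete, here's a minimal sketch in Python. Google hasn't published the internals, so the object detector, the answer_query call, and the way sub-answers are merged below are all placeholder assumptions, not the actual Gemini or Search pipeline.

```python
from dataclasses import dataclass


@dataclass
class DetectedObject:
    label: str          # e.g. "phone charger"
    description: str    # short caption for the object


def detect_objects(image_path: str) -> list[DetectedObject]:
    """Stand-in for a vision model that lists objects in the photo."""
    # Hypothetical output for a junk-drawer photo.
    return [
        DetectedObject("phone charger", "white USB wall charger"),
        DetectedObject("batteries", "loose AA batteries"),
    ]


def answer_query(query: str) -> str:
    """Stand-in for a single search/LLM call; not a real Google API."""
    return f"[answer to: {query}]"


def fan_out(image_path: str, user_question: str) -> str:
    """Ask one question about the whole scene plus one per detected object,
    then stitch the sub-answers into a single response."""
    objects = detect_objects(image_path)

    # One background query about the overall scene...
    sub_queries = [f"{user_question} (whole scene in {image_path})"]
    # ...and one per object, tied back to the user's question.
    sub_queries += [
        f"{user_question} -- focusing on the {obj.label} ({obj.description})"
        for obj in objects
    ]

    sub_answers = [answer_query(q) for q in sub_queries]
    return "\n".join(sub_answers)


if __name__ == "__main__":
    print(fan_out("junk_drawer.jpg", "Is my old phone charger in here?"))
```

In this toy version, the "agents" are just parallelizable sub-queries; the real system presumably does something far more sophisticated to rank and merge them into one answer.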
Google isn’t unique in this pursuit; ChatGPT includes image recognition, for instance. Google’s advantage, however, is decades of search data, visual indexing, and experience storing and organizing that information.
If you’re a Google One AI Premium subscriber or have been approved through Search Labs, you can try the feature in the Google mobile app.