Google and Samsung are finding ways to spread AI to as many phones as possible without major OS updates or new hardware requirements. Google’s latest update to Lens shows how it plans to do exactly that.
Google has announced that Lens, the visual search engine that uses images to identify objects, will be able to conduct searches with videos. Google gives the example of visiting an aquarium and taking a video of fish you want to identify.
“Open Lens in the Google app and hold down the shutter button to record while asking your question out loud, like, ‘why are they swimming together?’ Our systems will make sense of the video and your question together to produce an AI Overview, along with helpful resources from across the web,” a Google blog post explained. An AI Overview is a generated summary of information, which you may have seen appear at the top of Google search results.
Google has also updated Lens so that users can ask questions by voice when doing a photo search: point the phone’s camera at a subject and ask about it out loud. Previously, questions could only be typed.
Elsewhere, the company is updating Lens’s shopping tool, which identifies items for sale based on a picture the user takes. Google says that the results pages will be “dramatically” more useful with more detailed information on reviews and prices across several retailers.
In the example it gives, Google shows a backpack search with results from retailers like Dick’s Sporting Goods and The North Face. One of the major issues with shopping via Lens has long been its low-quality results, which surface unknown stores and random links, so it will be interesting to see whether the company has improved that.
Crucially, these updates will roll out to all phones running Google Lens on iOS and Android, which amounts to hundreds of millions of handsets. Google says that Lens is responsible for 20 billion visual searches every month. These updates put generative AI in more hands without requiring new hardware.
The obvious benefit here is that adding advanced AI technology to the most accessible products also gently onboards users into Google’s ecosystem. Some of them may become paying customers in the future, whether by buying a Pixel phone or subscribing to Gemini Advanced.
With the launch of the Galaxy S24 earlier this year, both Samsung and Google spent a lot of time and money advertising the Circle to Search feature. I didn’t fully understand why, considering Samsung’s call translation, generative image creation, and Note Assist features were far more impressive examples of AI.
However, in the intervening months, Google has added full-page text translation, QR code scanning, and song identification to Circle to Search, expanding the feature far beyond its original remit into an all-purpose AI-powered search tool.
Google Lens appears to be getting a similar treatment, and more new features will surely follow soon. Whether you want AI on your phone or not, it seems to be inevitable, but at least new hardware won’t be a necessity.