Google’s AI-powered ‘multisearch’, which combines text and images into one search, goes global

Among other AI-focused announcements, Google today shared that its “multisearch” feature is now rolling out to users globally on mobile devices, wherever Google Lens is already offered. The search feature, which lets users search using both text and images at the same time, was first introduced last April as a way to modernize Google Search to better take advantage of the smartphone’s capabilities. A variation on this, “multisearch near me,” which targets searches at local businesses, will also become available globally in the coming months, as will multisearch for the web and a new Lens feature for Android users.

As Google previously explained, multisearch is powered by an AI technology called Multitask Unified Model, or MUM, which can understand information in various formats, including text, photos, and videos, and then draw insights and connections between topics, concepts, and ideas. Google put MUM to work within Google Lens’ visual search features, where it allows users to add text to a visual search.

“We’ve redefined what we mean by search by introducing Lens. Since then, we have brought Lens directly to the search bar and continue to add new capabilities such as shopping and step-by-step homework help,” Prabhakar Raghavan, Google’s SVP responsible for Search, Assistant, Geo, Ads, Commerce and Payments products, said at a press event in Paris.

For example, a user might view a photo of a shirt they like in Google Search and then ask Lens where to find the same pattern but on a different kind of clothing, such as a skirt or socks. Or they can point their phone at a broken part on their bike and type into Google Search a query like “how to fix.” This combination of words and images can help Google process and understand queries that it previously couldn’t handle, or that would be more difficult to enter with text alone.

The technique is most useful for shopping searches, where you can find clothing you like but in different colors or styles. You can also take a picture of a piece of furniture, such as a dining set, to find items that match it, such as a coffee table. In multisearch, users can further refine and filter their results by brand, color, and visual attributes, according to Google.

The feature was made available to US users last October and expanded to India in December. As of today, Google says multisearch is available to all users worldwide on mobile, in all languages and countries where Lens is available.

The “multisearch near me” variant will also expand soon, Google said today.

Google announced last May that multisearch queries could also be directed to local businesses (referred to as “multisearch near me”), returning results for the items users are searching for that match inventory at local retailers or other businesses. For example, in the case of the bike with the broken part, you could add the text “near me” to a search with a photo to find a local bike shop or hardware store that has the replacement part you need.

This feature will be available in the coming months for all languages and countries where Lens is available, according to Google. It will also expand beyond mobile devices in the coming months with support for multisearch on the web.

In terms of new search products, the search giant teased an upcoming Google Lens feature, noting that Android users will soon be able to search what they see in photos and videos across apps and websites on their phone, without leaving the app or website. Google calls this “search on your screen” and said it will also be available wherever Lens is offered.

Google also shared another milestone for Google Lens, noting that people now use the technology more than 10 billion times a month.
