Google Pushes Boundaries Of AI For Search

Google has begun piloting artificial-intelligence technology built on a transformer architecture that the company says is 1,000 times more powerful than BERT (Bidirectional Encoder Representations from Transformers), which it rolled out in Search in 2019 for English-language queries.

Today at the virtual Google I/O conference, the company announced new features and product updates in Search and Lens based on the Multitask Unified Model (MUM), Google’s latest AI milestone.

MUM can understand complex questions, so in the future, people searching for answers will need fewer searches to get things done.

MUM is multimodal, so it understands information across text and images, Prabhakar Raghavan, senior vice president of Search, said during Google I/O. In the future, Google plans to expand to more modalities, such as video and audio.

“It’s pushing the boundaries of natural-language understanding,” he said.

Trained across 75 different languages and multiple modalities simultaneously, MUM has the potential to transform the way people search by understanding more of a query’s context.

Avid hikers planning their next trip might ask Search what they should do differently to climb one mountain versus another. MUM would understand that the person wants to compare hikes or gear for two different mountains -- Mt. Adams versus Mt. Fuji, for example. It would also provide pointers for going deeper on some topics, and translate information from other languages to serve up more results.

MUM will also allow someone to take a photo of their hiking boots and ask the search engine whether the boots can be used to climb a specific mountain. The interactive experience is made possible with help from Lens, which offers options to search, copy, or listen to text that has been translated.
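For illustration only, here is a minimal sketch of the shape such a multimodal query takes: an image paired with a natural-language question that must be interpreted together. MUM's interfaces are not public, so every name here (MultimodalQuery, describe) is hypothetical, not part of any Google API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch only: models the *shape* of a multimodal query --
# an image paired with a question -- as in the hiking-boots example above.
# All class and function names are illustrative, not a real Google API.

@dataclass
class MultimodalQuery:
    text: str                         # the typed or spoken question
    image_path: Optional[str] = None  # e.g., a Lens photo of hiking boots

def describe(query: MultimodalQuery) -> str:
    """Summarize how the two inputs would need to be combined:
    a text-only query stands alone, while a multimodal query asks the
    system to interpret the question in the context of the image."""
    if query.image_path is None:
        return f"Text-only query: {query.text!r}"
    return (f"Joint query: interpret {query.text!r} in the context of "
            f"the object recognized in {query.image_path}")

# Usage: the hiking-boots example from the article.
print(describe(MultimodalQuery(
    text="Can I use these to hike Mt. Fuji?",
    image_path="boots.jpg",
)))
```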

Google researchers built MUM on the foundation of BERT, which was trained on huge amounts of text: some 2.5 billion words from Wikipedia, plus thousands of books.
