Google Blames 'Data Voids' For Nonsensical Search Results

“Data voids” or “information gaps” are the terms Google head of search Liz Reid used to describe why AI Overviews spat out inaccurate results for odd and obscure queries.

Last week, when Google rolled out AI Overviews to general availability, some strange results surfaced.

Some of the advice told people to eat rocks or to put glue on their pizza to help the cheese stick. Some of those results were traced back to decade-old comments across the web that were intended as spoofs when originally written.

“Some odd, inaccurate or unhelpful AI Overviews certainly did show up,” Reid wrote in a post. “And while these were generally for queries that people don’t commonly do, it highlighted some specific areas that we needed to improve.”

Reid said Google identified a need to better interpret nonsensical queries and satirical content. For queries like these, there isn't much web content that seriously addresses the question, which is why this type of information gap is referred to as a data void.

Reid explained how AI Overviews work differently from chatbots and other large language model (LLM) products. They do not just generate an output based on training data.

Billions of queries come in daily. “While AI Overviews are powered by a customized language model, the model is integrated with our core web ranking systems and designed to carry out traditional 'search' tasks, like identifying relevant, high-quality results from our index,” Reid wrote. “That’s why AI Overviews don’t just provide text output, but include relevant links so people can explore further.”
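What Reid describes is, in effect, a retrieval-grounded pipeline: the summary is drafted only from pages the ranking systems have already judged relevant, and those pages travel with the answer as links. A minimal sketch of that idea, with hypothetical names, thresholds, and a stubbed-out summarizer standing in for Google's actual systems, might look like this:

```python
# Illustrative sketch of a retrieval-grounded overview pipeline.
# This is NOT Google's implementation; the ranking threshold and the
# summarizer below are hypothetical stand-ins for the behavior Reid describes.

from dataclasses import dataclass

@dataclass
class Result:
    url: str
    snippet: str
    quality: float  # score from the ranking system, not the language model

def summarize(query: str, snippets: list[str]) -> str:
    # Stand-in for the customized language model; a real system would
    # generate text grounded in these snippets.
    return f"Summary for '{query}' based on {len(snippets)} sources."

def build_overview(query: str, candidates: list[Result]) -> dict | None:
    # 1. Traditional search ranking: keep relevant, high-quality results only.
    top = sorted((r for r in candidates if r.quality > 0.7),
                 key=lambda r: r.quality, reverse=True)[:5]

    # 2. No solid sources means a "data void": show no AI Overview at all.
    if not top:
        return None

    # 3. The model summarizes the retrieved content rather than answering
    #    freely from training data, and the links ride along with the text.
    return {
        "summary": summarize(query, [r.snippet for r in top]),
        "sources": [r.url for r in top],
    }
```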

AI Overviews generally don't "hallucinate" or make things up in the ways that other LLM products might. When AI Overviews get it wrong, it’s usually for other reasons such as misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.

Google is working on improvements based on the examples from the past couple of weeks. The company was able to identify patterns where mistakes were made and has made more than a dozen technical improvements to its systems.

Reid also explained how Google is fixing AI Overviews, building detection mechanisms for nonsensical queries that should not show an AI Overview and limiting the inclusion of satire and humor content.

Those changes include updates to limit the use of user-generated content in responses that could offer misleading advice, as well as triggering restrictions for queries where AI Overviews were not proving to be as helpful.
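Taken together, the fixes amount to gating: deciding, before anything is generated, whether a query and its available sources are solid enough to merit an overview at all. A rough sketch of that kind of gate, assuming hypothetical `is_nonsensical` and `is_satirical_or_ugc` checks rather than Google's real classifiers, might look like:

```python
# Rough sketch of overview gating with hypothetical classifiers;
# this is not Google's code, only the shape of the checks described above.

def is_nonsensical(query: str) -> bool:
    # Stand-in for a classifier that flags queries with no serious intent,
    # e.g. "how many rocks should I eat".
    return False

def is_satirical_or_ugc(source: dict) -> bool:
    # Stand-in for checks that down-weight satire, humor, and
    # user-generated forum content.
    return source.get("type") in {"satire", "forum_comment"}

def should_show_overview(query: str, sources: list[dict]) -> bool:
    # Don't trigger an overview for nonsensical or satirical queries.
    if is_nonsensical(query):
        return False
    # Limit reliance on user-generated or satirical content.
    trusted = [s for s in sources if not is_satirical_or_ugc(s)]
    # Only show an overview when enough reliable sources remain.
    return len(trusted) >= 3
```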
