The recent introduction of AI Overviews has ignited discussions about the reliability and accuracy of generative AI. While it's essential to scrutinize these new technologies, it's equally important to recognize that the underlying issues are not new. For over a decade, the search world has grappled with similar challenges.
AI Overviews are just about a week old, so we haven't had the luxury of time for comprehensive data gathering. Instead, we've seen a deluge of screenshots highlighting embarrassing and occasionally dangerous answers to niche questions. For comparison, when Bing launched its ChatGPT-powered search, an informal Washington Post analysis found that roughly 10% of answers were problematic. These errors, while concerning, are not without precedent.
The history of search engines is replete with instances of inaccuracy. In 2022, Google search results erroneously suggested that Snoopy had assassinated Abraham Lincoln. Going further back to 2017, Google snippets falsely claimed certain U.S. Presidents were members of the KKK and labeled women as inherently evil. In 2014, Google provided inappropriate answers about eating sushi. These examples underscore the enduring challenge of delivering precise answers.
Generative AI is an extension of these ongoing efforts, albeit under far greater scrutiny because of its broader implications for publishers, businesses, the environment, job markets, and potential discrimination. Those higher stakes explain the increased focus on the accuracy of AI today.
But the core issue lies not in the method, whether AI or traditional algorithms, but in the pursuit of a single definitive answer. Given the vast amount of inaccurate information online and the polarized nature of contemporary discourse, delivering the "right" answer without requiring users to sift through multiple sources seems nearly impossible. Yet search engines continue this quest.
In 2005, former Google CEO Eric Schmidt encapsulated this challenge by saying: "When you use Google, do you get more than one answer? Of course you do. Well, that’s a bug." Schmidt set the course for the company's pursuit of a single, definitive answer, and this goal persists today.
At the recent Google Marketing Live event, the company reiterated this vision, quoting Schmidt: "We should be able to give you the right answer just once. We should know what you meant." This intuitive goal remains compelling, but it is feasible only if it avoids creating the illusion of certainty, which can be more damaging than ambiguity. A more practical approach might be to enhance the user experience by offering a well-curated set of choices, letting users navigate information effectively without misleading them into believing there is only one correct answer.
Rather than providing shortcuts, search engines can encourage critical thinking, so users won’t rely solely on Google, Bing, ChatGPT, or any other platform to define the truth. These tools are designed to inform, but individuals must determine what is true or best for themselves.
A new concern with AI Overviews is the inclusion of ads alongside generated content, introducing potential brand safety issues and other unintended consequences. While some controls may mitigate these risks, it would be prudent for brands and search engines to refrain from incorporating ads in these experiences until there is a clearer understanding of their impact at scale.
As AI continues to evolve, maintaining a balanced perspective is essential. We must call out and address concerns while ensuring our reactions are proportional to the actual issues. By understanding the historical context and focusing on improving user experience, we can better navigate the challenges posed by AI Overviews and similar technologies.