People searching online have begun to recognize the helpfulness of generative AI search, especially for complex tasks where pieces of an answer can be scattered across multiple sites and repositories.
That scattered content underscores the need for connectors that pull disparate information together into a centralized location a large language model (LLM) can help make sense of, according to a report released today.
The presence of LLMs also necessitates a function that grounds responses in source content, which search can also provide, specifically to reduce or eliminate hallucinations, also known as fake answers.
AI-powered cloud service Coveo, which provides enterprise search, personalization and recommendations, found that 49% of consumers have experienced AI hallucinations, which has fueled widespread skepticism.
The report, released Tuesday and titled "Customer Effort Is at an All-Time High: Is Generative Search the Key?", is based on a survey of 4,000 consumers in the U.S. and the U.K.
The survey analyzed customer behaviors, expectations and areas of frustration across the entire end-to-end customer journey. The report is the Customer Experience (CX) edition of Coveo's annual industry report.
Findings point to frustrating digital experiences that drive ghosting, though consumers are willing to give companies second chances. Some 72% of customers abandon websites after negative experiences, but 62% said they are willing to try again by refining searches, using filters, or browsing site categories, with Gen Z and Millennials the most willing to do so.
Shoppers' willingness to share data has fallen for the first time in three years. The report shows 53% of respondents said they are happy to share data when shopping online if it means they get better deals and offers, with the figure rising to 60% and 62%, respectively, for Gen Z and Millennials.
However, that is 12 points lower than the 65% recorded for the same question in the 2024 report.
The top reason, cited by 53%, was not being able to easily search for and find the information on their own. Some 28%, up from 21% the prior year, said they have seen an increase in AI hallucinations.
Hallucinations arise from unfettered access to information, where concepts can collide without curation. Organizations are being held responsible for the information their chatbots provide, and hallucinations or wrongful access to information can expose them to significant risk and cost.
Advanced search engines coupled with content indexing do not just help LLMs access the information they can then use to generate a response. Advanced engines that use techniques like retrieval-augmented generation, also known as RAG, can also safeguard businesses from an LLM's more playful side, or from the ill intent of a user's query. When a search platform supplies an LLM with only the information that is relevant to the question at hand and drawn from sources the searcher has permission to access, it generates accurate and secure answers, the study details.
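To make that concrete, here is a minimal sketch of a permission-aware RAG pipeline. Everything in it (the Document model, the keyword-overlap ranking, the call_llm stub) is an illustrative placeholder of the general technique the report describes, not Coveo's actual API:

```python
# Minimal sketch of permission-aware retrieval-augmented generation (RAG).
# All names here are hypothetical stand-ins, not a real vendor API.

from dataclasses import dataclass


@dataclass
class Document:
    text: str
    allowed_groups: set  # groups permitted to read this document


def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; echoes a summary for demonstration.
    return f"[LLM response grounded in a prompt of {len(prompt)} chars]"


def retrieve(query: str, index: list, user_groups: set, k: int = 3) -> list:
    """Rank documents by naive keyword overlap, keeping only those the
    user is allowed to see (the permission filter runs before ranking)."""
    terms = set(query.lower().split())
    visible = [d for d in index if d.allowed_groups & user_groups]
    scored = sorted(
        visible,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def answer(query: str, index: list, user_groups: set) -> str:
    """Ground the prompt in retrieved passages so the model answers from
    source content rather than from its own guesses."""
    context = "\n".join(d.text for d in retrieve(query, index, user_groups))
    prompt = (
        f"Answer using ONLY the context below. If the context is "
        f"insufficient, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    index = [
        Document("Refunds are processed within 5 business days.", {"public"}),
        Document("Internal refund override procedure for agents.", {"support_staff"}),
    ]
    # A public user never sees the staff-only document in the prompt.
    print(answer("How long do refunds take?", index, {"public"}))
```

The key design point is that both the relevance ranking and the permission filter run before generation, so the model never sees content the user is not entitled to access.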
Some 49% of consumers participating in the study have experienced a hallucination while using a generative AI tool: about 22% at work, 21% while shopping, and 24% during other personal activities. It makes sense that more exposure creates more opportunities to encounter confidently incorrect information. Half of these respondents said they experience hallucinations on a weekly basis, while the other half say it occurs monthly or less frequently. U.S. respondents reported more weekly experiences than U.K. respondents, at 52% versus 49%, respectively.
Still, 42% said they always fact-check a generated answer, with the U.S. at 43% and the U.K. at 41%. Gen Z check answers 47% of the time, compared to Millennials at 44%, Gen X at 40% and Baby Boomers/Silent Gen at 36%. The data point to the need for RAG, a method that grounds AI responses in retrieved source content to craft trustworthy customer experiences.
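RAG can also make that fact-checking habit easier when the system returns its retrieved sources alongside the answer. A minimal sketch of the idea, reusing the hypothetical retrieve and call_llm helpers from the earlier example:

```python
def answer_with_sources(query: str, index: list, user_groups: set) -> dict:
    """Return a grounded answer plus the passages it drew on, so a
    reader can verify the claim against the sources themselves."""
    docs = retrieve(query, index, user_groups)  # permission-filtered retrieval
    context = "\n".join(d.text for d in docs)
    prompt = (
        f"Answer using ONLY the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return {
        "answer": call_llm(prompt),              # hypothetical LLM stub
        "sources": [d.text[:60] for d in docs],  # excerpts the user can check
    }
```

Surfacing the excerpts turns "always fact-check" from a chore into a one-click comparison between the answer and the material it was generated from.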
When it came to trust in enterprise-approved tools versus open-source ones, enterprise-approved tools narrowly won by three points. Trust in both increased year over year, with the gap between enterprise-approved and open-source tools remaining the same.