AI Assistants Provide Answers Full Of Errors, Study Says

Artificial intelligence (AI) assistants -- an information source for millions of people -- routinely misrepresent news content regardless of country or AI platform, according to new research from the European Broadcasting Union (EBU) and the BBC. 

The study found that 45% of all AI answers had at least one significant issue, and 31% contained serious sourcing problems such as missing, misleading or incorrect attributions.

Moreover, 20% contained hallucinated details or outdated information.

Of all the AI platforms, Gemini performed the worst, with 76% of its responses showing significant issues, mostly because of poor sourcing.  

"This research conclusively shows that these failings are not isolated incidents," said EBU media director and deputy director general Jean Philip De Tender. "They are systemic, cross-border, and multilingual, and we believe this endangers public trust. When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation." 


The researchers cite the Reuters Institute's Digital News Report 2025, which found that 7% of online news consumers use AI assistants to get their news, a figure that rises to 15% among people under age 25.

The researchers have also released a News Integrity in AI Assistants Toolkit.

"We're excited about AI and how it can help us bring even more value to audiences," said Peter Archer, program director for generative AI at the BBC. "But people must be able to trust what they read, watch and see. Despite some improvements, it's clear that there are still significant issues with these assistants. We want these tools to succeed and are open to working with AI companies to deliver for audiences and wider society."
