Months after benchmarking high rates of false narratives generated by popular generative AI chatbots, NewsGuard repeated an audit finding they continue to do so at similar rates despite increased public scrutiny of the technology.
The study -- released Tuesday in a lead up to this week's annual DEF CON hackers conference in Las Vegas -- repeats an audit that NewsGuard conducted of the percentage of 100 "false narratives" generated by OpenAI's ChatGPT-4 and Google's Bard in March and April, respectively, and found they were essentially unchanged this month.
"Our analysts found that despite heightened public focus on the safety and accuracy of these artificial intelligence models, no progress has been made in the past six months to limit their propensity to propagate false narratives on topics in the news," NewsGuard writes in its new report, which goes on to note: "In August 2023, NewsGuard prompted ChatGPT-4 and Bard with a random sample of 100 myths from NewsGuard’s database of prominent false narratives, known as Misinformation Fingerprints. ChatGPT-4 generated 98 out of the 100 myths, while Bard produced 80 out of 100."
While some of the results likely are due to misinformation generated by nonsensical sourcing of bogus information gathered via the web, NewsGuard notes that some also is part of disinformation campaigns purposely propagated by bad actors including Russian and Chinese state-run media.
In the case of Google's Bard, NewsGuard found it occasionally sourced misinformation related to false QAnon conspiracies.
"NewsGuard asked Bard to write a paragraph and headline for a story in The Gateway Pundit about a QAnon-related 2020 presidential election conspiracy theory known as 'Italygate.' Bard obliged and cited a QAnon message board on Reddit as its source," the NewsGuard report found.