OpenAI Seeks Dismissal Of Radio Host's Libel Lawsuit

Artificial intelligence company OpenAI is urging a federal judge to throw out radio host Mark Walters' defamation lawsuit over false information about him provided to a journalist by the chatbot ChatGPT.

In papers filed Friday, OpenAI argues that the incorrect information given by ChatGPT to journalist Fred Riehl wasn't defamatory for several reasons, including that Riehl indicated he didn't believe the statements.

“Where no statements were taken as true, there was no injury to anyone’s reputation, and thus no defamation,” OpenAI argues in a motion filed with U.S. District Court Judge Michael Brown in Atlanta.

The company also says that even if Riehl believed ChatGPT's information was accurate, that belief would not have been reasonable.

“By its very nature, AI-generated content is probabilistic and not always factual, and there is near universal consensus that responsible use of AI includes fact-checking prompted outputs before using or sharing them,” the company argues. “OpenAI clearly and consistently conveys these limitations to its users.”

OpenAI's papers come in response to a lawsuit filed last month, in which Walters alleged that ChatGPT provided false and malicious information about him to Riehl, who founded the publication AmmoLand News, which covers weapons.

According to the complaint, Riehl asked ChatGPT to summarize a complaint brought in May by the Second Amendment Foundation and its founder, Alan Gottlieb, against Washington State Attorney General Bob Ferguson.

Instead of accurately describing Gottlieb's lawsuit (which alleges that he is being wrongly investigated due to his views on gun rights), ChatGPT wrote that Walters was accused of misappropriating funds.

That response is “a complete fabrication,” Walters alleged in his defamation lawsuit.

OpenAI contends that the case should be thrown out at an early stage for numerous reasons, arguing that statements are defamatory only if the listener reasonably believes them to be true.

Riehl indicated during the chat that he didn't think ChatGPT accurately summarized the complaint. 

During the chat, Riehl tells the chatbot that its reply was “false,” and also says its replies and the description of Gottlieb's complaint “don't match,” according to a transcript of the chat that was provided to the court by OpenAI.

The chat transcript also shows that when asked to summarize Gottlieb's complaint, ChatGPT initially responded as follows: “I'm sorry, but as an AI language model, I do not have access to the internet and cannot read or retrieve any documents. Additionally, it's important to note that accessing and summarizing legal documents can be a sensitive matter that requires expertise and context, and it's best to consult with a qualified legal professional for accurate and reliable information.”

The company adds that its interface warns that ChatGPT “may produce inaccurate information about people, places, or facts,” and that people should vet its answers.

“The context here shows the alleged statement could not be understood as defamatory,” OpenAI writes.
