
OpenAI's large language model ChatGPT might be able to write essays, diagnose some diseases, and even pass the bar exam, but the technology comes with a pretty significant drawback -- it sometimes spits out incorrect information.
That tendency to “hallucinate” -- meaning to make up false information out of whole cloth -- has now resulted in what's believed to be the first defamation lawsuit against OpenAI.
In a complaint brought this week in state court in Georgia, radio host Mark Walters alleges that ChatGPT provided false and malicious information about him to journalist Fred Riehl, who founded AmmoLand News, a publication covering firearms.
According to the complaint, Riehl asked ChatGPT to summarize a lawsuit filed last month by the Second Amendment Foundation and its founder, Alan Gottlieb, against Washington State Attorney General Bob Ferguson. Gottlieb claims in that lawsuit that he is being wrongly investigated by state authorities due to his views on gun rights.
Instead of accurately describing Gottlieb's lawsuit, ChatGPT allegedly wrote that Gottlieb's complaint accused Walters of misappropriating funds.
That response “is a complete fabrication and bears no resemblance to the actual complaint, including an erroneous case number,” Walters' complaint alleges.
He notes that he has never even been in a position to misappropriate funds from the Second Amendment Foundation, given that he has no employment or other official relationship with the organization.
“ChatGPT’s allegations concerning Walters were false and malicious, expressed in print, writing, pictures, or signs, tending to injure Walter’s reputation and exposing him to public hatred, contempt, or ridicule,” the complaint alleges.
He is seeking monetary damages and attorney's fees.
While Walters' lawsuit appears to be the first of its kind, it almost certainly won't be the last, given the technology's tendency to hallucinate.
“There's no doubt that ChatGPT is going to trigger an enormous amount of lawsuits,” Santa Clara University law professor Eric Goldman tells MediaPost. “The question is whether any of them are going to stick.”
Even before Walters brought suit, others accused ChatGPT of libelous fabrications. In April, for instance, a mayor in Australia reportedly threatened to sue after learning that ChatGPT wrongly accused him of bribery.
ChatGPT's hallucinations have also proven problematic even when they are not libelous. In one highly publicized example, a lawyer filed a brief written with ChatGPT that contained citations to fake cases.
Goldman says that as news spreads of the technology's limitations, people who sue for defamation could have a hard time proving that anyone could reasonably believe ChatGPT's responses are accurate.
OpenAI could also attempt to argue that it's protected by Section 230 of the Communications Decency Act, which immunizes web companies from liability for material posted by third parties.
Whether that argument will be successful will depend on the specific facts, Goldman says.
For instance, if ChatGPT provides false information but is quoting a third party word-for-word, Section 230 should apply, Goldman says. But if ChatGPT is simply making up incorrect information, Section 230 wouldn't protect the company.
Regardless of Section 230, Walters might not be entitled to much in the way of damages even if he proves his case, Goldman says. That's because, according to the complaint, the false information was conveyed to only one person -- Riehl.
OpenAI hasn't yet responded to MediaPost's request for comment.