OpenAI has won a defamation suit concerning its ChatGPT machine learning tool, a ruling that editors and legal teams should be studying.
The Superior Court of Gwinnett County, Georgia issued a summary judgment on Monday, holding that ChatGPT had not defamed Mark Walters, a radio talk show host and advocate for Second Amendment rights.
Walters is also East Coast spokesperson for the Second Amendment Foundation (SAF), according to the opinion.
The alleged defamation occurred when a journalist and fellow gun-rights advocate named Frederick Riel sought input from ChatGPT on a lawsuit filed by SAF against the Attorney General of the state of Washington.
After several tries, ChatGPT served up an inaccurate summary of the complaint, stating that it involved allegations of embezzlement against an SAF treasurer and chief financial officer. In a subsequent attempt, ChatGPT identified this party as one Mark Walters.
Does this false information constitute defamation? Apparently not, given that “the output obtained by Mr. Frederick Riel on May 3, 2023 contained clear warnings, contradictions, and other red flags that it was not factual,” wrote Judge Tracie Cason.
Indeed, ChatGPT's warnings, refusals and inconsistent responses “objectively established to any reasonable reader that the challenged ChatGPT output was not stating ‘actual facts,’” Cason continued.
The judge also ruled that Walters, whose program reaches 12 million listeners, is a public figure who cannot establish actual malice or negligence. Moreover, "Walters has conceded that he did not incur actual damages here," Cason observed. Nor did he request a retraction or correction.
The judge added, “Riel, the only person who received the challenged ChatGPT output, was ‘always skeptical’ about ChatGPT's output, established after approximately an hour and a half that the output was not true, and did not republish it.”
This gets OpenAI off the hook, at least in this case. It was not known at deadline whether an appeal would be filed. But there is a lesson to be learned.
Journalists cannot rely on ChatGPT responses in the belief that the tool's disclaimers will shield them from a defamation action. As Riel did, the reporter must independently confirm any purported "facts" presented by ChatGPT, or decline to publish them. As many experts have said, AI can be used only under human supervision.
Reuters broke this story on Tuesday.