AI Garages: Fake News Sites Can Harm Legitimate Publishers

Microsoft is embroiled in a defamation lawsuit in Ireland, thanks to an AI chatbot that paraphrased another article. 

The AI-driven piece, “Prominent Irish broadcaster faces trial over alleged sexual misconduct,” was published by a “fly-by-night journalism outfit,” BNN Breaking, that went dark in April, The New York Times writes.

It featured a photo of Irish talk-show host Dave Fanning, who was absolutely not the person in question. 

The article was published on the Microsoft web portal and was visible for hours to readers in Ireland who used Microsoft Edge as a browser. 

The offending piece was quickly taken down, but Fanning filed a defamation suit, stating that the damage to his reputation was not so quickly reversed, according to the Times. 



BNN Breaking is gone, but Microsoft still has to face the music. 

That was just one of several generative AI errors fomented by BNN, the Times continues. 

The danger for legitimate publishers, the Times writes, is that “A.I.-generated content is upending, and often poisoning, the online information supply.”

This has to be sobering for any responsible news outlet tinkering with AI. There have been several gaffes by respectable publishers, and the BNN incident points to the danger that exists when legitimate news content is reused in some way by AI “chop shops,” as the Times dubs them. 

In another AI episode, a free China-based app called NewsBreak published a story titled “Christmas Day Tragedy Strikes Bridgeton, New Jersey Amid Rising Gun Violence in Small Towns,” according to Reuters. 

There was no such shooting. 

NewsBreak blamed the error on another content source, and the company told Reuters: "When NewsBreak identifies any inaccurate content or any violation of our community standards, we take prompt action to remove that content."

However, NewsBreak, the most downloaded news app in the U.S., has published erroneous stories at least 40 times since 2021, Reuters alleges, citing previously unreported court documents from “copyright infringement cases, cease-and-desist emails and a 2022 company memo registering concerns about ‘AI-generated stories.’”

Of course, even inaccurate journalism is protected by the First Amendment. 

That said, legitimate newsrooms have to protect themselves in three ways: by employing technology that scans for scraping and misuse of their content; by preventing their own reporters from inadvertently picking up content from fake news purveyors; and by choosing content partners very carefully.

This problem could spread just as rapidly as AI itself. 

