Fake 'Guardian' Article Attributed To ChatGPT, Publication Forms AI Working Group

The Guardian is facing a new problem caused by ChatGPT: the appearance of articles it never published.  

The newspaper first noticed the problem when a researcher asked about a Guardian article, supposedly published a few years earlier, that ChatGPT had cited.

After an extensive search, and after querying the bylined reporter, the Guardian concluded that the article was a fake the AI had invented.

“Its fluency, and the vast training data it is built on, meant that the existence of the invented piece even seemed believable to the person who absolutely hadn’t written it,” writes Chris Moran, the head of editorial innovation for the Guardian. 

In a sense, the newspaper’s brand name and goodwill had been hijacked.

Earlier this week, a student asked the paper’s archive team about another missing article attributed to a named journalist. There was no trace of the piece in the Guardian’s systems.

The story about these fakes was originally reported by Futurism. 

This is probably happening to other periodicals. As Moran puts it, the question for responsible news organizations is “simple, and urgent: what can this technology do right now, and how can it benefit responsible reporting at a time when the wider information ecosystem is already under pressure from misinformation, polarization and bad actors?”

Moran continues, “This is the question we are currently grappling with at the Guardian. And it’s why we haven’t yet announced a new format or product built on generative AI.”

Instead, the Guardian has created a working group and a small engineering team to focus on “learning about the technology, considering the public policy and IP questions around it, listening to academics and practitioners, talking to other organizations, consulting and training our staff, and exploring safely and responsibly how the technology performs when applied to journalistic use.”
