Newsweek's AI Etiquette: How The Publisher Defines The Rules

Newsweek is using AI for a range of tasks, ideally with more success than publishers that have suffered AI gaffes in the past. And it is trying to do so ethically.

The publication states on a refreshed standards page that it believes “AI tools can help journalists work faster, smarter and more creatively.”

However, it adds, “AI is not accountable to Newsweek readers: we are. The burden of ensuring that all stories or other content meets Newsweek standards rests with our writers, editors and producers, always.”

So what are the new rules?

For one, Newsweek will avoid publishing lifelike AI-generated imagery, whether stills or video.

Journalists will be involved at all stages of assigning, reporting and publishing written content produced with AI. And any journalist using an AI tool for a core journalism function must disclose that to their editor and the publishing desk.

Any tool not previously used at Newsweek must also be approved by the standards editor.

The above rules do not apply to supporting functions such as note-taking, transcription, video script writing, writing social copy, A/B testing headlines, adding metadata or selecting images. But the journalist using AI remains responsible for avoiding errors.

Newsweek announced alterations to its AI policy last September.

“I think that the difference between newsrooms that embrace AI and newsrooms that shun AI is really going to prove itself over the next several months and years,” says Jennifer H. Cunningham, the new executive editor of Newsweek, according to NiemanLab, which broke the story.

Cunningham adds, “We have really embraced AI as an opportunity, and not some sort of bogeyman that’s lurking in the newsroom.”
