Microsoft is under fire for allegedly publishing an AI-generated poll next to a news story in The Guardian.
The Guardian says that Microsoft damaged its journalistic reputation by publishing the automated poll next to a Guardian story about the death of Lilie James, a 21-year-old water polo coach who was found dead after sustaining serious head injuries at a school in Sydney, Australia, last week.
The survey, created by an AI program and circulated via Microsoft’s news aggregation service, asked: “What do you think is the reason behind the woman’s death?” Readers were offered three choices: murder, accident or suicide, The Guardian reports.
The poll was deleted after angry comments from readers. One wrote, “This has to be the most pathetic, disgusting poll I’ve ever seen.”
Another said the reporter who wrote the story should be fired, even though that person had nothing to do with the poll.
“This is clearly an inappropriate use of genAI [generative AI] by Microsoft on a potentially distressing public interest story, originally written and published by Guardian journalists,” wrote Anna Bateson, chief executive of the Guardian Media Group.
The episode demonstrated “the important role that a strong copyright framework plays in enabling publishers to be able to negotiate the terms on which our journalism is used,” Bateson continued.
Bateson also suggested that Microsoft consider running a statement taking responsibility for the poll.
The survey appeared in the Microsoft Start product.
Microsoft, which has a license to run Guardian content, agrees that the survey was ill-placed.
“We have deactivated Microsoft-generated polls for all news articles and we are investigating the cause of the inappropriate content,” a spokesperson wrote. “A poll should not have appeared alongside an article of this nature, and we are taking steps to help prevent this kind of error from reoccurring in the future.”
This flap illustrates again the risks of pairing artificial intelligence with journalism. Publishers seem to have little control over how their content is used or whether automated material appearing alongside it is accurate and in line with their standards. They are left to monitor their information partners and call out infractions after the fact.
Readers might give the Guardian, an excellent journalistic organization that operates without a paywall, the benefit of the doubt. It remains to be seen whether they extend the same to Microsoft.