AI Sways Judgment On Breaking News, Misinformation, Study Suggests

Warnings about misinformation now appear on Twitter, Facebook and other social media platforms, but many skeptics say they are not enough.

Now research from Rensselaer Polytechnic Institute suggests that artificial intelligence can help readers make better judgments and correctly identify fake news -- but only on breaking stories, and only when the reader has not yet formed an opinion.

Should social media platforms and search engines lean on AI to determine whether news informs or misinforms? Should it influence readers to believe one way or another?

Whether people accept algorithmic advice depends on their prior beliefs about the news topic, how well established those beliefs are, how the advice is provided, and what cues they receive from others, the researchers found.

“It's not enough to build a good tool that will accurately determine if a news story is fake,” wrote Dorit Nevo, an associate professor in the Lally School of Management at Rensselaer and one of the lead authors of the paper. “People actually have to believe the explanation and advice the AI gives them, which is why we are looking at tailoring the advice to specific heuristics.”

The research -- titled “Tailoring heuristics and timing AI interventions for supporting news veracity assessments” and published in Computers in Human Behavior Reports -- suggests that if platforms reach readers early, when a story breaks, and use specific rationales to explain why the AI is making the judgment, readers are more likely to accept the advice.

The technology is less effective when used to flag issues with stories on frequently covered topics on which people hold established beliefs, such as climate change and vaccinations, according to the team of Rensselaer researchers.

“Regardless of the feature extraction method used, when training algorithms to detect fake news, training data must be labeled as fake or legitimate,” according to the paper. “This requires the researcher to either pass their own judgment on the veracity of news or to rely on outside journalistic organizations for labeling.”
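To make that labeling requirement concrete, here is a minimal sketch of such a training setup, assuming scikit-learn with TF-IDF feature extraction; the headlines, labels and model choice are invented for illustration and are not the paper's actual pipeline.

```python
# A minimal sketch of the labeling requirement described above:
# training a binary fake-news classifier requires every example to
# carry a fake/legitimate label before feature extraction matters.
# (Illustrative only -- the headlines and labels here are invented.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each article must be labeled, either by the researcher's own
# judgment or by an outside journalistic organization.
articles = [
    "Scientists confirm new exoplanet in nearby system",
    "Miracle cure erases all disease overnight, insiders say",
    "City council approves budget for road repairs",
    "Secret world government controls all news outlets",
]
labels = ["legitimate", "fake", "legitimate", "fake"]

# TF-IDF is one common feature extraction method; the paper notes
# the labeling requirement holds regardless of the method used.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(articles, labels)

print(model.predict(["Shadowy cabal hides miracle cure, sources claim"]))
```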

The researchers relied on labeling done at the source level by third-party journalistic organizations, such as NewsGuard or Media Bias/Fact Check, according to the paper.

NewsGuard, for example, has journalists rate news sources against nine criteria (a simplified sketch of source-level labeling follows the list). These are the checks used to determine credibility:

  1. The source does not repeatedly publish false content.
  2. The source gathers and presents information responsibly.
  3. The source regularly corrects or clarifies errors.
  4. The source handles the difference between news and opinion responsibly.
  5. The source avoids deceptive headlines.
  6. The source’s website discloses ownership and financing.
  7. The source clearly labels advertising.
  8. The source reveals who is in charge, including any possible conflicts of interest.
  9. The source provides information about content creators.
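As a rough illustration of source-level labeling, the sketch below assigns each article the fake-or-legitimate label of the outlet that published it rather than judging the article itself; the domains, scores and the 60-point cutoff are assumptions for illustration (NewsGuard publishes 0-100 trust scores, but this is not the researchers' exact procedure).

```python
# A sketch of source-level labeling: an article inherits the label
# of its publishing outlet. Domains and scores below are invented.
from urllib.parse import urlparse

# Hypothetical source-level ratings in the spirit of NewsGuard's
# 0-100 scoring; sources at or above the threshold are treated as
# legitimate, the rest as fake.
SOURCE_SCORES = {
    "example-news.com": 92.5,
    "example-tabloid.com": 27.0,
}
THRESHOLD = 60.0

def label_article(url: str) -> str:
    """Label an article by its source's rating, not its content."""
    domain = urlparse(url).netloc
    score = SOURCE_SCORES.get(domain)
    if score is None:
        return "unrated"
    return "legitimate" if score >= THRESHOLD else "fake"

print(label_article("https://example-news.com/story/123"))     # legitimate
print(label_article("https://example-tabloid.com/shock/456"))  # fake
```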

The researchers explored the heuristics people employ when assessing news, then offered targeted AI advice addressing those heuristics, focusing on the problem of understanding how effective AI advice is in fake-news situations under varying conditions.
