Warnings about misinformation now appear on posts on Twitter, Facebook, and other social media platforms, but many skeptics say they're not enough.
Now research from Rensselaer Polytechnic Institute suggests that artificial intelligence can help readers make better judgments and correctly identify fake news -- but only on breaking stories, and only when the reader has not yet formed an opinion.
Should social media and search engines lean on AI to determine whether news informs or misinforms? Should it be used to nudge readers toward believing one way or another?
According to the researchers, the acceptance of algorithmic advice depends on the individual's prior beliefs about the news topic, how well established those beliefs are, how the advice is presented, and what cues are received from others.
“It's not enough to build a good tool that will accurately determine if a news story is fake,” wrote Dorit Nevo, an associate professor in the Lally School of Management at Rensselaer and one of the lead authors of the paper. “People actually have to believe the explanation and advice the AI gives them, which is why we are looking at tailoring the advice to specific heuristics.”
The research -- titled “Tailoring heuristics and timing AI interventions for supporting news veracity assessments” and published in Computers in Human Behavior Reports -- suggests that if platforms reach readers early, when a story first breaks, and use specific rationales to explain why the AI is making its judgment, readers are more likely to accept the advice.
The technology is less effective when used to flag issues with stories on frequently covered topics in which people have established beliefs, such as climate change and vaccinations, according to the team of Rensselaer researchers.
“Regardless of the feature extraction method used, when training algorithms to detect fake news, training data must be labeled as fake or legitimate,” according to the paper. “This requires the researcher to either pass their own judgment on the veracity of news or to rely on outside journalistic organizations for labeling.”
The researchers use labeling done at the source level by third-party journalistic organizations, such as NewsGuard or Media Bias/Fact Check, according to the paper.
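Source-level labeling of this kind can be sketched as a simple lookup: each article inherits the fake/legitimate label assigned to its publishing source. The sketch below is illustrative only; the source ratings are hypothetical placeholders, not actual NewsGuard or Media Bias/Fact Check scores.

```python
# Minimal sketch of source-level labeling for a fake-news training set.
# SOURCE_RATINGS is a hypothetical table standing in for ratings from a
# third-party organization such as NewsGuard or Media Bias/Fact Check.

SOURCE_RATINGS = {
    "example-news.com": "legitimate",
    "example-hoax.net": "fake",
}

def label_article(article):
    """Label an article by its source's rating; None if the source is unrated."""
    return SOURCE_RATINGS.get(article["source"])

articles = [
    {"title": "Story A", "source": "example-news.com"},
    {"title": "Story B", "source": "example-hoax.net"},
    {"title": "Story C", "source": "unknown-blog.org"},
]

labeled = [(a["title"], label_article(a)) for a in articles]
# Articles from unrated sources receive no label and would typically be
# excluded from the training data.
```

The appeal of this approach is that a single source rating labels every article from that source, avoiding per-article judgment calls; its weakness is that unrated sources, as in the third article above, go unlabeled.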
NewsGuard, for example, has journalists rate news sources against nine criteria to determine their validity.
The researchers examined the heuristics people employ when assessing news, offered targeted AI advice tailored to those heuristics, and studied how effective that advice is in fake-news situations under varying conditions.