People searching the Internet for answers typically are not equipped to verify what they find, mainly because most fake news is written deceptively. News comes from many sources, not all of them legitimate, making credibility hard to judge.
This could all soon change. Researchers at Indiana University are working on building technology that would fact-check online content, but the program is in its early stages.
At the moment it is rudimentary, says Filippo Menczer, director of the university's Center for Complex Networks and Systems Research. "We are moving toward the idea of having some sort of computational fact checking," he said.
Today, the technology can determine the reliability of statements such as: "Paris is the capital of Russia." It is not clear if or when it would be available for queries on search engines, on Facebook or on publisher sites.
Is there a way to build a more sophisticated algorithm to screen news for the truth? That's a complex question, Menczer says, because the line is a fine one.
"I would be wary of a system that would judge content to be false, even if that's possible," he said. "Some would consider that a type of censorship. The distinction between fake, mistake, misleading and bias is a blurry one."
Ethical issues aside, he said, no technology comes close to being able to read a piece of text, put it into context, understand it, and determine whether it's true, misleading or false.
While readers need to approach news with a more skeptical eye, there are signals in metadata that a machine learning algorithm could use to flag suspect content. Such signals could include the source of the news, or whether similar articles have already been fact-checked and debunked.
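The signal-combining idea described above can be sketched in a few lines. This is a minimal illustration, not anything the Indiana University researchers have described: the blocklist, weights and threshold below are all hypothetical, standing in for whatever features and model a real system would learn from data.

```python
# Hypothetical example: combining metadata signals into a suspicion score.
# The domains, weights, and threshold here are invented for illustration.

KNOWN_UNRELIABLE = {"example-hoax.net", "madeup-news.example"}  # hypothetical blocklist

def suspicion_score(source_domain, debunked_similar_count, has_author):
    """Combine simple metadata signals into a score between 0 and 1."""
    score = 0.0
    if source_domain in KNOWN_UNRELIABLE:
        score += 0.5                                   # source has a poor track record
    score += min(debunked_similar_count, 3) * 0.15     # similar stories already debunked
    if not has_author:
        score += 0.1                                   # anonymous pieces carry less weight
    return min(score, 1.0)

def flag_for_review(article):
    """Flag an article for human review when its score crosses a threshold."""
    score = suspicion_score(article["domain"],
                            article["debunked_similar"],
                            article.get("author") is not None)
    return score >= 0.5
```

Note that this only flags content for human review rather than labeling it false outright, which is consistent with Menczer's caution about automated systems judging truth.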