Last year, I participated in a two-day gathering of news professionals, academics, and data geeks at the MIT Media Lab. The mission that we'd accepted was to explore the emerging field of fake news — and to try to understand what could be done to mitigate it.
One year later, MisinfoCon convened again — the fourth gathering of the group, this time in Washington, D.C., with the Capitol dome standing just outside our window. The venue, the Newseum, was both appropriate and ironic, as the Newseum is facing an economic crisis and will likely shut its doors in the foreseeable future.
I went to D.C. with the fervent hope that in the year since MisinfoCon gathered at MIT there had been a breakthrough. I was hoping someone had invented software to hunt down and demolish the fake news bots, fake accounts, and internet trolls that had blurred fact and fiction. Sadly, that wasn't the case.
Instead, there had been a flourishing of tools:
MentionMapp -- Mentionmapp Analytics investigates the digital ecosystem using visualization tools and machine and human intelligence, revealing how information and misinformation flow through networks on Twitter.
Polygraph.info -- a fact-checking website produced by Voice of America and Radio Free Europe/Radio Liberty, meant to serve as a resource for verifying the increasing volume of misinformation being distributed globally.
Botometer (formerly BotOrNot) -- checks the activity of a Twitter account and gives it a score based on how likely the account is to be a bot. Higher scores are more bot-like.
FakerFact -- an AI tool built to help assess whether an article might be fake news.
TruePic -- a tool for checking and verifying the authenticity of images.
And a hat tip to Micah Sifry, co-founder of Personal Democracy Forum and Civic Hall, and a far better note-taker than I am. His take on the conference is here.
But as much as it was good to see solid technology being deployed, and researchers and hackers working to fight fake news, the overall takeaway from these two days was, for me, undeniable: The torrent of misinformation is growing, becoming more sophisticated, and in many ways more effective.
For clarity, Deen Freelon, an associate professor at the UNC School of Media and Journalism, defined misinformation as "the surreptitious, purposeful distribution of messages intended to harm targets and/or benefit sources."
It came as a bit of a surprise to find that the federal government was one of the presenters. Adam Hickey, a deputy assistant attorney general, explained that the FBI’s unpublicized sharing of information with social media companies is a “key component” of the Justice Department’s strategy to counter covert foreign influence efforts.
“It is those providers who bear the primary responsibility for securing their own products and platforms,” Hickey said. He explained that the agency doesn’t often “expose and attribute” ongoing foreign influence operations — partly to protect the investigations, methods and sources, and partly “to avoid even the appearance of partiality.”
But one thing is clear from talking to attendees: Attention to misinformation is greater because the problem is growing, as bots, trolls, and hackers become ever more sophisticated about creating and disseminating falsehoods.
No matter what business you're in, or how you use the web, there's a fake-news target painted on your information intake. So awareness and activism are essential to fight the growing tide of fabrications and falsehoods.