Following the 2016 election cycle, Facebook was under harsh scrutiny for the spread of fake news stories that may have tipped the election in Trump’s favor. To counter this, the company formed a fact-checking coalition that would investigate suspicious articles and alert users if a shared article was deemed factually incorrect.
Facebook partnered with outlets like the Poynter Institute and Politifact to review news stories via independent fact checkers.
Politifact reports applying a false label to at least 1,722 URLs over the past year. Just this fall, following the Las Vegas shooting, the outlet identified fake articles with titles like “Celine Dion’s 16-year-old Rene-Charles Angelil’s was among the 58 victims who were killed.” There were also fake reports surrounding Hurricanes Harvey, Irma and Maria, along with the recent special Senate election in Alabama.
The program was criticized at its start, with some users calling the practice censorship of the internet, a ludicrous claim considering fake news stories exist and have swayed social-media users with false information. Remember "Pizzagate"?
The criticism didn’t stop Facebook from attempting to weed out stories from users’ feeds. However, according to the company, a strange trend emerged: flagging some articles as disputed made some users more likely to believe them, rather than less.
Tessa Lyons, a product manager with Facebook, wrote in a statement: “Academic research on correcting misinformation has shown that putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs — the opposite effect to what we intended. Related Articles, by contrast, are simply designed to give more context, which our research has shown is a more effective way to help people get to the facts. Indeed, we’ve found that when we show Related Articles next to a false news story, it leads to fewer shares than when the Disputed Flag is shown.”
Today, the company announced a new plan for tagging fake news going into the new year.
Relying on two fact checkers to label a story fake took an exorbitant amount of time, during which a dubious piece could make its way around the site. To counter this, the platform, working with fact-checking partners, will offer similar stories from alternative outlets to add context to the news a user is receiving, rather than label stories fake without an alternative. This requires only one fact checker to review a story, cutting down the amount of time it takes to label it.
The issues surrounding last year’s election were a wake-up call to how the internet and social media can be weaponized by those with dangerous intentions. Facebook, which boasts more than 2 billion users, is grappling with the fallout. Its scrambling to find solutions only reinforces the severity of the issue.