Commentary

This Election, Facebook Failed Us All

By now the narrative is clear: The outcome of Tuesday’s election came as a surprise to many people in the U.S. and around the world, and to supporters of both political parties.


When a predicted outcome proves wrong, there is a period of reflection, which often sparks curiosity. Presidential campaigns have used the Internet to support their efforts dating back to 1996, and the role of the Internet and social networks in election outcomes has grown significantly over the past several election cycles.

Facebook COO Sheryl Sandberg estimated that over 2 million people registered to vote after seeing a reminder on Facebook this year, and Pew estimates that in 2016 the majority of U.S. adults (62%) got their news from social media channels.

As a population, we know the impact social media can have in shaping election outcomes, particularly since Obama’s successful 2008 campaign.

As social networks including Facebook, Instagram and Twitter have become increasingly influential in shaping public opinion, they have evolved to take on more control of how information is shared, sorted and prioritized.

The Evolution of Facebook’s Algorithm

To understand how we got here, it’s worth looking back at the history of Facebook’s algorithm. In 2009, Facebook debuted a new default sorting order based on popularity, which was quantified by engagement on each post.
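
To make that concrete, here is a minimal sketch in Python of what engagement-based sorting looks like in principle. It is an illustration only: Facebook’s actual ranking system is proprietary and far more complex, and the weights and example posts below are invented.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int = 0
    comments: int = 0
    shares: int = 0

def engagement_score(post: Post) -> float:
    # Hypothetical weights: the score rewards reactions, not accuracy.
    return post.likes * 1.0 + post.comments * 2.0 + post.shares * 3.0

def rank_feed(posts):
    # The most-engaged posts surface first, regardless of source credibility.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Local paper", "City council passes budget", likes=40, comments=5, shares=2),
    Post("Unknown blog", "Shocking claim about a candidate!", likes=900, comments=400, shares=700),
])
print([p.author for p in feed])  # ['Unknown blog', 'Local paper']
```

Whatever the real weights are, the basic dynamic is the same: the sensational post rises to the top because it draws more engagement.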

Subsequent updates followed in 2013, 2015 and 2016. Each update further refined which information users would see in their news feeds, as Facebook attempted to provide more personalized content matched to their interests.

This also allowed Facebook to target advertising to specific users based on geographic location, age, hobbies and interests, which is Facebook’s key advertising value proposition. This targeting also applies to political beliefs.

The issues created or reinforced by Facebook’s algorithm are three-fold:

Much of the news shared on Facebook is biased and comes from partisan or outright fake news sites. Individuals with no political affiliation have rolled out Web sites and shared slanted stories simply to turn a profit.

For example, BuzzFeed reported on more than 100 pro-Trump sites run by teenagers in Macedonia who say they “don’t care about Donald Trump” and are simply responding to straightforward economic incentives. These individuals were publishing sensationalist and often false content that caters to Trump supporters in order to drive up their Web traffic.

Many of these sites’ Facebook pages have hundreds of thousands of followers, and BuzzFeed’s research found that “the most successful stories from these sites were nearly all false or misleading.”

This creates an echo chamber inside and outside of politics.

It’s unclear exactly how Facebook assigns political affiliation to an individual. Individuals increasingly use Facebook as their only source of news and information because they expect to consume a large and diverse variety of research, news and opinion all in one place.

Because the inner workings of the process are not fully understood, undecided voters may take certain actions, like commenting on or sharing a piece of partisan information, that could skew the content directed to them going forward.

The biggest issue is that users don’t have control over their own journey. It’s not possible to predict how users’ news feeds will evolve based on the actions they take.

The biggest concern: the majority of Facebook users likely have little to no understanding of the social net's algorithm and may not realize the content they see is being uniquely targeted to them. Most users also are not able to discern the authority of the news and whether it’s biased.

Fact vs. Fiction

The amount of fake or fictional content circulating on social networks is also a growing concern, particularly when coupled with Facebook’s algorithm, which gives more authority to posts that receive more engagement.

Facebook has created a “report fake news” button that is meant to allow users to help Facebook screen and remove false news and information from the site. The problem is that many users don’t fact-check information before they share it; the current algorithm can allow false information to go viral in a short period. Significant and irreversible damage can be done in a matter of minutes.
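
The compounding is easy to underestimate. The toy model below is a sketch with invented audience and re-share rates, but it shows how a single post can pile up millions of impressions after only a few rounds of sharing.

```python
# Toy model of how a false story compounds: each round, sharers expose the post
# to their followers, and a small fraction of those viewers re-share it.
# All numbers here are invented purely for illustration.
reach, sharers = 0, 1
for round_number in range(10):
    new_viewers = sharers * 500         # each sharer is seen by ~500 friends/followers
    sharers = int(new_viewers * 0.005)  # ~0.5% of viewers re-share
    reach += new_viewers

print(reach)  # roughly 2.4 million impressions after ten rounds of sharing
```

If each round takes only a minute or two, a false story can reach a very large audience before anyone has a chance to debunk it.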

What is the Solution?

Social media users bear no responsibility to verify or vet the information they share.

The question then lies in the responsibility of social networks to act as responsible third parties in deciding what content they promote and why. Before algorithms came into play, it would have been harder to make this point, but as soon as social networks began organizing and prioritizing information for users, they assumed some responsibility for the virality of content.

Google’s algorithms incorporate many variables, but the credibility of news sources plays a significant role in rankings. Moving forward, Facebook may consider ranking news sources for validity and indicating that rank on posts, or prioritizing credible news sources over less credible ones. Better measures must also be put in place to remove false information, and a more effective system should be implemented to let users know when they have shared something factually untrue.
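
One way to picture such a change: scale the engagement score by a source-credibility factor, so that low-credibility sources are demoted rather than amplified. The sketch below is hypothetical; the credibility values and the neutral default for unknown sources are invented for illustration.

```python
# Hypothetical source-credibility scores between 0 and 1. In practice these
# might come from fact-checkers, domain reputation data or editorial review.
CREDIBILITY = {
    "established-newspaper.example": 0.9,
    "unverified-blog.example": 0.2,
}

def adjusted_score(engagement: float, source: str) -> float:
    # Unknown sources get a neutral default; known low-credibility sources are demoted.
    return engagement * CREDIBILITY.get(source, 0.5)

print(adjusted_score(1000, "unverified-blog.example"))       # 200.0
print(adjusted_score(300, "established-newspaper.example"))  # 270.0
```

Under that kind of weighting, a heavily shared post from a dubious source can rank below a modestly shared post from a credible one.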
