Commentary

Facebook Guidelines Reveal Awkward Balancing Act

Facebook’s ubiquity as a vehicle for individual self-expression has made it rich, but it is now also forcing the world’s largest social network to confront issues of free speech and liberty that were previously the sole preserve of governments and philosophers.

How far does the individual’s right to communicate extend, especially when the messages may be harmful to others? Is it the responsibility of the public to police itself, and what happens when it fails to do so?

Should the powers that be focus solely on eradicating criminal behavior, or does their authority also extend to questions of morality?

Answers to these perplexing questions are not to be found in the training manuals for Facebook content moderators published by the Guardian on Monday, but the documents do give a detailed portrait of Facebook’s de facto decisions regarding the rights and responsibilities it accords its users, including the limits it imposes on their discourse.

Facebook’s roughly 4,500 human moderators around the globe face an incredibly complicated task, full of unexpected ethical dilemmas and gray areas that have only been amplified by the introduction of live streaming video.

The social network’s rules for moderators touch on controversial and upsetting content in a range of categories, including graphic violence, sex, terrorist propaganda, misogyny and racism, non-sexual child abuse, cruelty to animals, self-harm, and credible threats of violence.

For example, one of the difficult ethical questions Facebook must parse is what to do when an individual uses live streaming video to broadcast an attempted suicide or self-harm.

In addition to traumatizing other viewers, such videos could easily inspire copycat attempts – but allowing people in distress to continue live streaming holds out the hope that someone might be able to contact and engage with them, or that law enforcement might locate them in time.

In the end, Facebook’s content standards follow the latter argument, but these videos do not remain publicly available afterwards.

Another tricky area is videos showing cruelty to animals or people.

According to Facebook’s rules, “We do not allow people to share photos or videos where people or animals are dying or injured if they also express sadism.”

The guidelines explain that the social network doesn’t focus on the subject of the photo or video, but rather on any accompanying verbal commentary or captions expressing sadistic pleasure.

Otherwise, however, videos of people or animals dying are allowed on the site: “Videos of violent deaths are disturbing, but can help create awareness.” In these cases the social network resorts to “age gates” meant to limit exposure to children, and adds a warning screen for adults.

When it comes to threats of physical violence, Facebook will allow users to post content containing threats except those directed at people in certain key categories, including heads of state, specific law enforcement officers, witnesses, activists and journalists, and people on “hit lists” created by terrorist or criminal organizations.

For ordinary users, Facebook will only remove content if it contains a “credible threat.”

Here, the guidelines reasonably point out that people commonly use threats of violence to express anger or frustration with no intention of carrying them out – but this requires moderators to make subtle judgments about tone and context.

For example, moderators should be able to tell the difference between “I hope someone kills you” and “I hate foreigners and I want to shoot them all” (the former passes, the latter does not).

A more pressing question is whether Facebook can even moderate content that is clearly prohibited, including live streaming video of murders, assaults and armed robberies.

As multiple examples have shown in recent months, Facebook’s reliance on policing by the community of users leaves it vulnerable to situations in which no one flags a video for hours, allowing it to propagate and go viral, which makes it much harder to remove permanently from the Internet.

In one case, a Facebook user posted a video of himself committing a murder that was viewed thousands of times, but not flagged for an hour and 48 minutes, allowing numerous users to download and share the video.
