Commentary

Amid Uproar, Facebook Promises Faster Response

Reeling from another gruesome crime video posted on the social network, Facebook is scrambling to explain how new procedures will help ensure that inappropriate content is swiftly removed in the future. But it’s an open question whether any system based on self-policing by the Facebook community will ever be able to respond fast enough to satisfy critics and advertisers.

In the widely reported incident, on Sunday a man in Cleveland named Steve Stephens posted a video of himself shooting and killing 74-year-old Robert Godwin Sr., having apparently chosen the victim at random. The video was not live-streamed but was posted within two minutes of being recorded; at the time of writing, Stephens is still on the run, sought by state and federal law enforcement in a national manhunt.

The latest incident put Facebook on the spot once again, just as the social network was struggling to allay concerns among advertisers about fake news and other types of inappropriate content (issues that also affect other big-tech platforms, including YouTube and Twitter). Many observers asked why the video was still being shared on Facebook several hours after it was first posted, prompting pleas from the victim’s family to have it removed.

In a blog post on Monday, Facebook reviewed the timeline of events, highlighted its current rules for dealing with inappropriate content, and promised new measures to speed the process.

The timeline shows that once Facebook was alerted to the video, relatively rapid action followed, including disabling Stephens’ account and blocking all his videos. According to Facebook, Stephens posted his first video, stating his intention to commit murder, at 11:09 a.m.; this video was never reported. Two minutes later, at 11:11 a.m., he posted the video of the shooting. Eleven minutes after that, at 11:22 a.m., he confessed to the murder in a live-streamed video that lasted five minutes.

According to Facebook, the first report of the Live video of Stephens’ confession was received shortly after it ended at 11:27 a.m. However, the video of the shooting itself wasn’t reported until significantly later, at 12:59 p.m. Once the video of the crime was reported, Stephens’ account was disabled and all videos blocked within half an hour, at 1:22 p.m.

In the blog post, Facebook vice-president of global operations Justin Osofsky promised faster review as well as more automated screening to prevent offensive content from being re-posted: “In addition to improving our reporting flows, we are constantly exploring ways that new technologies can help us make sure Facebook is a safe environment. Artificial intelligence, for example, plays an important part in this work, helping us prevent the videos from being re-shared in their entirety… We are also working on improving our review processes. Currently, thousands of people around the world review the millions of items that are reported to us every week in more than 40 languages. We prioritize reports with serious safety implications for our community and are working on making that review process go even faster.”

However, these measures may not address the basic problem: what if the community fails to report offensive content in a timely fashion, or even actively propagates it? Community policing of content relies on the “wisdom of crowds,” but as the one-hour, 48-minute lag in this case suggests, there’s no guarantee that the “right” person (meaning one who shares the sensibilities of the broader community) will see a video and report it.

In other words, while community policing may be generally reliable, flagging and removing the majority of offensive content in a relatively timely fashion, there will always be those statistical outliers — incidents in which the offensive content slips through the cracks. Given the renewed vigilance of big brand marketers about maintaining brand-safe environments, is this a chance they’ll be willing to take?
