Why Facebook Can't Stop Anti-Semitic Ads Without Cultural Context

Journalist (and former MediaPost columnist) Bob Garfield took to the stage in Philadelphia last week, sharing 90 remarkable minutes of his life story for his one-man show "Ruggedly Jewish." It was the abridged version -- to be sure. But along the way he covered artichokes, an armed abduction, a celery museum -- and anti-Semitism. 

Bob told a story about his young daughter Ida, who is half Serbian. When she ordered a bowl of matzo ball soup, she wanted to know if it really meant "kitten head." It seems the Croatian word for cat is "maka." Maka ball soup -- delicious!

It all comes down to translation. That's also relevant when talking about Facebook, which, as of 2016, supported 142 languages. So in order to separate soup from kittens, Facebook needs to sort out who's speaking, what they're referencing, and the intent behind a word like matzo.

And, as it turns out, (((Rosenbaum))) has a deeply anti-Semitic meaning. This I did not know. Maybe you did: the triple-parentheses symbol, also called an (((echo))), is now widely recognized as signaling anti-Jewish sentiment.

Asking Facebook not to sell ads targeting "Jew hater" (which happened during an investigation by ProPublica) is most certainly reasonable, in any language. Rob Leathern of Facebook told ProPublica, "[W]e’re also building new guard rails in our product and review processes to prevent other issues like this from happening in the future.”

But what Facebook is running into is extraordinarily complex: building a global platform that can understand local context. Words have meanings that change as you cross borders. Asking for a “fag” in England gets you a cigarette, while in the U.S. it will get you a harsh rebuke. Same word, different meaning.

And if you dive into Reddit, the conversations aren’t even in words any longer. The most complex and often disturbing conversations are in acronyms. TIFU, NSFL, DM;HS, and ELI5 are the tame ones.

While Facebook can flag the N-word and other clearly offensive terms, once you get into (((name))), you’re playing a never-ending game of whack-a-mole, in which the hateful can outrun even the most agile algorithm.
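To see why this is whack-a-mole, here is a minimal sketch in Python -- purely illustrative, not Facebook's actual system -- of how rule-based flagging works and how trivially it can be evaded. The blocklist terms and the evasion variant are invented examples:

```python
import re

# A naive blocklist only catches exact strings it already knows about.
BLOCKLIST = {"(((echo)))"}

def naive_flag(text):
    """Flag text only if it contains an exact blocklisted string."""
    return any(term in text for term in BLOCKLIST)

# A pattern-based rule generalizes: any name wrapped in triple parentheses.
ECHO_PATTERN = re.compile(r"\(\(\(\s*\w+\s*\)\)\)")

def pattern_flag(text):
    """Flag any word wrapped in three consecutive parentheses."""
    return bool(ECHO_PATTERN.search(text))

print(naive_flag("(((Rosenbaum)))"))        # False -- not in the blocklist
print(pattern_flag("(((Rosenbaum)))"))      # True -- matches the pattern
print(pattern_flag("( ( (Rosenbaum) ) )"))  # False -- spaces defeat the regex
```

Each rule catches yesterday's variant, and each variant is a one-character change away from escaping it. Only a reader who knows what the symbol means -- cultural context -- recognizes all three lines as the same message.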

So, what are we to do?

The answer may be to embrace a solution that has served us well in the past: Use humans. Not just any humans -- local people, who can bring context and understanding to communication. 

The New York Times is having remarkable success growing its relevance and reach with social video on Facebook, Instagram, YouTube and Twitter. Tubular Insights reports the Times has 2,000 videos on its Facebook page, reaching 1 billion total video views as of August 2014. “In March, 82% or 72.6 million of NYT’s total video views stemmed solely from followers watching short videos on its Facebook account.”

The Times is winning on other platforms as well.

“With just over 861k subscribers on YouTube, NYT created a channel where its video journalism could easily take front and center. Thanks to more than 8,500 videos, ranging from political news pieces to human interest stories, NYT has captured more than 365 million views to date, with each clip seeing around 42.8k views,” according to Tubular Insights.

What can Facebook learn from the Times' success in video? Voices that are on the ground -- local, relevant, topical -- connect with readers and viewers. Understanding how phrases, words, images, and symbols are received within local languages and cultures isn’t a technology problem. It’s all about cultural context.

Bob Garfield knew that with his Kitten Soup story. Now he needs to explain it to Mr. Zuckerberg.

2 comments about "Why Facebook Can't Stop Anti-Semitic Ads Without Cultural Context".
  1. Maureen Mauk from Mattel, September 19, 2017 at 1:29 p.m.

    Your discussion on language and Facebook's filtering is interesting. Do you mean to suggest that Facebook/ Reddit/ etc should "use humans" to filter the good acronyms/ words from the "bad ones" or just use humans to further populate the tapestry of the platform?

  2. steven rosenbaum replied, September 19, 2017 at 2:02 p.m.

    I think that humans could play a curatorial role on a number of fronts. People could volunteer to be 'first line' curators, and Facebook could treat their flags and notifications with priority. At the same time, users could get more controls to filter objectionable content based on personal standards. That would give users more control than a centralized algorithm currently allows for.
