Commentary

Monkey See, Monkey Don't

Last week, the company called Alphabet -- which is mainly Google -- lost $28 billion in market cap. Let us put that into perspective. The sum $28 billion is equal to the total 2016 sales in Apple’s App Store. It represents the entire valuation of Snap Inc. For $28 billion, you can buy 13,461,538,000 packages of Twizzlers.

The cause of this hemorrhage was the automated placement of brand ads next to hate speech. For instance, a trailer for the new DreamWorks cartoon feature “The Boss Baby” ran as pre-roll before some neo-Nazi pinhead explaining the menace of International Jewry.

Oops. Can you imagine David Geffen, Steven Spielberg and Jeffrey Katzenberg's delight? Shortly after the Times of London reported on Hategate, a YouTube advertiser exodus began: AT&T, Johnson & Johnson, McDonald's, Toyota and hundreds more.

This all makes me sad. It also makes me happy, because, well, I told you so. Back in 2006 -- the year Daniel Powter topped the Billboard charts and Dick Cheney shot his hunting partner -- I mused on the prospects for this year-old outfit, freshly acquired by Google for $1.65 billion:

“It's said that if you put a million monkeys at a million typewriters, eventually you will get the works of William Shakespeare. When you put together a million humans, a million camcorders, and a million computers, what you get is YouTube.”

Seemed to me back then that consumer-generated content would give way to more professional content, and streaming video would become the way the world chooses to consume moving images. Check. Check. And after I dealt with the nettlesome problem of avoiding lawsuits from intellectual property owners, I offered this: 

“The greatest obstacle facing Monkeyvision isn't jurisprudence. It is prudence itself….Will advertisers risk associating themselves with violence, pornography, hate speech, or God knows what lurks out there one click away? ‘Advertisers and brands are enormously risk-averse,’ Magnify.net's Rosenbaum says. ‘The question now is how the raw and risky is made safe and comfortable. It's not a little question. It's a big question.’" 

And nearly 11 years later, a question without an answer. 

The overall problem is that metadata for videos remains thin, relying mainly on tagging by the posters themselves, who don't typically label their work “hate speech” or “kill the Jews.” And the technology for the semantic Web, AI, image screening and other means for detecting repulsive content is simply inadequate. Where porn, racism and gore are concerned, the flagging mechanisms are woefully intermittent.

Call it algo-arrhythmia.
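To see why poster-supplied tags make such a leaky net, here is a minimal sketch in Python, assuming a made-up blocklist and invented video records; none of this reflects YouTube's actual systems, it simply illustrates the gap the column describes:

```python
# Hypothetical illustration only: the blocklist, tags and video records
# below are invented; this is not a description of YouTube's systems.

BLOCKLIST = {"hate speech", "nazi", "gore"}   # assumed moderation terms

videos = [
    # Posters rarely incriminate themselves in their own metadata...
    {"id": "a1", "tags": {"news", "commentary", "wake up"}},
    # ...so a tag-matching filter only catches the rare self-labeled upload.
    {"id": "b2", "tags": {"nazi", "rally footage"}},
]

def flagged_by_tags(video: dict) -> bool:
    """Flag a video only when its poster-supplied tags hit the blocklist."""
    return bool(video["tags"] & BLOCKLIST)

for v in videos:
    print(v["id"], "flagged" if flagged_by_tags(v) else "missed")
# a1 is "missed" no matter what its footage shows, because nothing in its
# thin metadata trips the filter -- the gap described above.
```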

What is surprising about this scandal is that the videos -- many of which have accrued views in the tens of thousands -- weren't noticed by non-deplorables and called to YouTube’s attention. Equally surprising is that no reporting mechanism exists to tell an advertiser which content, exactly, it underwrites. Should this data not be fed in real time to the sponsors or their agencies for audit? Historically, after all, vigilance over adjacency appropriateness has been everyone's business.
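The feed being asked for here needn't be exotic. Here is a minimal sketch, in Python, of what a per-impression placement record pushed to a sponsor or agency could look like; the schema, field names and emit function are hypothetical, not an existing Google or agency API:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PlacementReport:
    """One ad impression, reported as it is served (hypothetical schema)."""
    advertiser: str   # brand whose ad ran
    campaign_id: str
    video_id: str     # the content the ad appeared against
    channel: str      # who uploaded that content
    served_at: str    # ISO timestamp of the impression

def emit_placement_report(report: PlacementReport) -> str:
    # In practice this record would be streamed to the sponsor's or the
    # agency's audit system in real time; here we just serialize it.
    return json.dumps(asdict(report))

example = PlacementReport(
    advertiser="ExampleBrand",
    campaign_id="spring-awareness",
    video_id="abc123",
    channel="UnvettedUploader",
    served_at=datetime.now(timezone.utc).isoformat(),
)
print(emit_placement_report(example))
```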

But in this case, as usual, ad tech is the problem, not the solution. And the consequences are unforgivable. One thinks of what London's mayor, Sadiq Khan, observed about the facts of life in a world that includes evil.

“What I do know,” Khan said, “is part and parcel of living in a great global city is you gotta be prepared for these things, you gotta be vigilant.” The worst people will do the worst things. Those in charge must do whatever they can to prevent mayhem, and when mayhem nonetheless occurs, they had better be prepared to pick up the pieces.

YouTube can't be blamed for its technical incapacity to monitor every fresh video post. But there is no excuse for the failure to rapidly, systematically identify outrages. This is Monkeyvision we're talking about. We need more than gatekeepers.

We need zookeepers. 

2 comments about "Monkey See, Monkey Don't".
  1. Dean Fox from ScreenTwo LLC, March 27, 2017 at 2:54 p.m.

    Very well explained, Bob, but one man's hate speech is another man's free speech. One man's entertainment is another man's pornography.  One man's news is another man's propaganda.

    The haters, the hackers, the spammers, the fake news generators will eventually find a way around any automated filtering scheme. Even real-time human filters have their own blind spots or biases, so they will fail at least some of the time. This is the tradeoff we are making for nearly instantaneous access to a world of information, real and fake news, and entertainment.

  2. Jeff Martin from IMF, March 28, 2017 at 8:32 a.m.

    "And the technology for the semantic Web, AI, image screening and other means for detecting repulsive content is simply inadequate."

    This is the case today. Technology provides the scale the solutions here need. Schmidt talked on CNBC about increasing manual review time. That doesn't scale, but it sounds good to brands. AI is the solution. Google has already deployed AI to tackle complex problems, and the engineers who built those systems have said, in at least one case, that the AI had reached a level where they themselves couldn't tell you exactly why it came to the decisions it did. They just knew, through testing, that the decisions were right. When AI truly learns on its own, that is when this cat-and-mouse game can change significantly.

    All that said, Dean is dead-on: one man's hate speech is another man's free speech. One man's entertainment is another man's pornography. One man's news is another man's propaganda.

    A zookeeper controls every aspect of the lives of the animals in its care. That sounds like an Orwellian future.
