The cause of this hemorrhage was the automated placement of brand ads next to hate speech. For instance, a trailer for the new DreamWorks cartoon feature “Boss Baby” ran as a pre-roll for some neo-Nazi pinhead explaining the menace of International Jewry.
Oops. Can you imagine David Geffen, Steven Spielberg and Jeffrey Katzenberg's delight? Shortly after the Times of London reported on Hategate, a YouTube advertiser exodus began: AT&T, Johnson & Johnson, McDonald's, Toyota and hundreds more.
This all makes me sad. It also makes me happy, because, well, I told you so. Back in 2006 -- the year Daniel Powter topped the Billboard charts and Dick Cheney shot his hunting partner -- I mused on the prospects for this year-old outfit freshly acquired by Google for $1.65 billion:
“It's said that if you put a million monkeys at a million typewriters, eventually you will get the works of William Shakespeare. When you put together a million humans, a million camcorders, and a million computers, what you get is YouTube.”
Seemed to me back then that consumer-generated content would give way to more professional content, and streaming video would become the way the world chooses to consume moving images. Check. Check. And after I dealt with the nettlesome problem of avoiding lawsuits from intellectual property owners, I offered this:
“The greatest obstacle facing Monkeyvision isn't jurisprudence. It is prudence itself….Will advertisers risk associating themselves with violence, pornography, hate speech, or God knows what lurks out there one click away? ‘Advertisers and brands are enormously risk-averse,’ Magnify.net's Rosenbaum says. ‘The question now is how the raw and risky is made safe and comfortable. It's not a little question. It's a big question.’"
And nearly 11 years later, a question without an answer.
The overall problem is that metadata for videos remains thin, relying mainly on tagging by posters, who seldom label their work “hate speech” or “kill the Jews.” And the technology of the semantic Web, AI, image screening and other means for detecting repulsive content is simply inadequate. Where porn, racism and gore are concerned, the flagging mechanisms are woefully intermittent.
Call it algo-arrhythmia.
What is surprising about this scandal is that the videos -- many of which have accrued views in the tens of thousands -- weren't noticed by non-deplorables and called to YouTube’s attention. Equally surprising is that no reporting mechanism exists to tell advertisers which content, exactly, their money underwrites. Shouldn't this data be fed in real time to the sponsors or their agencies for audit? Historically, after all, vigilance over adjacency appropriateness has been everyone's business.
But in this case, as usual, ad tech is the problem, not the solution. And the consequences are unforgivable. One thinks of what London's mayor, Sadiq Khan, observed about the facts of life in a world that includes evil.
“What I do know,” Khan said, “is part and parcel of living in a great global city is you gotta be prepared for these things, you gotta be vigilant.” The worst people will do the worst things. Those in charge must do whatever they can to prevent mayhem, and when mayhem nonetheless occurs, they had better be prepared to pick up the pieces.
YouTube can't be blamed for its technical incapacity to monitor every fresh video post. But there is no excuse for the failure to rapidly, systematically identify outrages. This is Monkeyvision we're talking about. We need more than gatekeepers.
We need zookeepers.