The apparent outrage by advertisers is puzzling, though. What? You didn’t know? Nobody saw this coming? Or was it FOFO: Fear of Finding Out? Maybe it was simply that there is a new kind of bad thing: hate speech. Maybe brands, under pressure, just need to lop off the low-yield contexts, and this is just a good excuse. Maybe the romance of the open Web is withering under fire from fraudsters and terrorists and fake news. Welcome to the combat zone.
YouTube, for one, has aimed to correct the problem by creating new classes of content to be avoided. That’s good, but it probably should have happened before it got caught. Trust erodes fast when advertisers find out after the fact that a media company may have damaged a brand. Second chances are in short supply.
The fact is, the companies that now dominate online advertising do so on the back of “free” content — free to them. The incredulity of traditional publishers is understandable. Why should adjacency to basement-made video fetch the same prices as adjacency to well-curated, well-edited content?
One problem is that editorial control does not scale, and technology is not quite ready to create the perfect badness filter. My guess is that one can be built (AI, maybe?), but advertisers and their agencies need to press for it. Costs will go up, but overall quality and reach will go up, too.
The Current State
Today, we have ways to limit unfortunate adjacencies. They all work to some extent, though there are problems — and make no mistake, those problems are symmetrical. Either side can be hurt by gaffes from the other.
Creative approval is, of course, the first line of defense for publishers and broadcasters, who are just as afraid of bad ads as advertisers are of bad contexts.
Site-based white lists work, but implicitly trust that a site will constrain itself to content acceptable to all its advertisers. This is nearly impossible for any huge site, given the range of topics. There are companies that develop customized white lists (for example, Trust Metrics) that dig deeply into topical material using spidering, publisher relationships, human reviews, etc.
There are brand-safety services that work pretty well. They work with a combination of spidering and real-time tags, but the job is still tough. For example, technology can detect naked bodies in video frames — but there is nothing that will parse a sound track for the nuance of carefully postured racism. Video content compounds the difficulty in all cases.
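To make the mechanics concrete, here is a minimal sketch — not any vendor's actual product — of how a brand-safety check might combine a site-based white list with keyword screening of page text. The site names, blocked terms, and pass/fail logic are all illustrative assumptions; real services layer on spidering, real-time tags, and human review, and video remains much harder than text.

```python
# Hypothetical white list and term list -- illustrative only.
APPROVED_SITES = {"example-news.com", "example-recipes.com"}
BLOCKED_TERMS = {"extremist", "hate"}

def is_brand_safe(site: str, page_text: str) -> bool:
    """Allow a placement only if the site is white-listed AND
    no blocked term appears in the page text."""
    if site not in APPROVED_SITES:
        return False  # site-based white list: untrusted sites fail outright
    words = set(page_text.lower().split())
    # keyword screen: any blocked term in the page text fails the check
    return not any(term in words for term in BLOCKED_TERMS)

print(is_brand_safe("example-news.com", "Local recipes and weather"))  # True
print(is_brand_safe("example-news.com", "a hate speech clip"))         # False
print(is_brand_safe("unknown-site.com", "Local recipes"))              # False
```

Even this toy version shows why the job is tough: naive word matching misses nuance entirely, which is the gap the text above describes.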
If ads were women, and content were men, we would need something like Tinder to create the matches in real time. Now, there’s a startup!
Old Tapes and New Media
Advertising, by its nature, promulgates the values of the people who pay for it, but advertisers are happy to sell a product to anyone. A person viewing an ISIS recruiting video probably thinks ISIS is cool. If your brand wants to be cool to that person, maybe you should place an ad there.
Advertisers don’t want to fund groups they don’t like, of course, and it’s certainly the prerogative of any advertiser to withhold money from a publisher who makes money by supporting unsavory causes. However, in an editorial-free, mass aggregation scheme like YouTube, it’s pretty hard to know what causes or issues are being supported. The money goes to a bank account, not a manifesto.
Through the lens of one-to-one marketing, though, the damage may not be as bad as we think. In mass marketing, so many people see an adjacency that the adjacency itself may speak louder than the brand. But with the Web, people select the niche content they like. They are unlikely to be offended by content they chose. What portion of the audience took a negative message about the brand based on the adjacency? If the answer is a vanishingly small portion, then taking out a huge publisher seems like a rash reaction.
A few facts would be very instructive here.