Another week, another brand safety and ad fraud avalanche. It is hard to keep up with the continual revelations about well-known brands being caught up in situations and environments they certainly did not want to be seen in. What's especially worrisome is that most of these advertisers are paying a third party to keep this from happening -- and it happened anyway.
First, there was the news from ad security firm Adalytics, which investigated whether ad tech vendors are monitoring or actually facilitating ads on websites that host child sexual abuse material (CSAM). The report highlights that websites like imgbb.com and its affiliate ibb.co, which allow anonymous photo uploads without user registration, have been found to host explicit content, including potential CSAM. Adalytics discovered this issue while researching how U.S. government ads were served to bots and crawlers. Upon finding explicit imagery of a young child, Adalytics immediately reported the incident to the FBI and other authorities.
The report emphasizes the lack of transparency and accountability in digital advertising, raising concerns about "brand safety." (Full report here). Several major advertisers such as Amazon, MasterCard, Starbucks and PepsiCo -- and many, many others -- were found to have their ads served on these problematic websites.
How could this happen, when many of these advertisers pay good money for brand safety monitoring? While it’s hard to know all the ins and outs of each marketer’s approach to brand safety, the advertisers referenced in the Adalytics report ended up with their ads on websites hosting explicit content due to a lack of transparency and accountability from ad tech vendors. These vendors, which include Amazon, Google and Microsoft, failed to provide advertisers with detailed page URL-level reporting, making it difficult for brands, or their third-party monitoring service providers, to investigate where ads were being served. The findings have prompted lawmakers to demand answers from the implicated ad tech vendors.
The other news was similar in nature. A MediaPost article reported that independent advertising agency Aimclear had notified Microsoft about invalid traffic and junk leads that affected the performance of client campaigns. Aimclear observed patterns of fake leads, including spam bots, fake names, and nonexistent businesses, described as a "very sophisticated click farm." Again, Microsoft is a trusted provider with a plethora of safeguards in place -- or so we hoped, anyway.
What to do with all this information? Well, Step 1 is to be aware that this is most likely happening with your ads, just as it happened to the long list of advertisers mentioned in the Adalytics study. Step 2 is to sharpen your digital ad placement strategy. You won’t be completely safe, but you will be safer than if you change nothing.
Step 3 is the hardest step. It requires you to decide whether, as a company, you want to do business with platforms that clearly do not care about your or anybody’s digital well-being. Last year, a group of industry leaders and alums formed a movement called “Advertising: Who Cares?” In this group, issues like brand safety are actively explored, including how advertisers are, in effect, financing platforms that allow awful content to proliferate. That is a moral issue for advertisers, which is why I called it the hardest step.