Traffic fraud is a complex topic, but in the IAB conversations I saw marketers focusing their anger on one specific subject: paying for “bots” — those automated impression generators that visit a Web site and artificially drive up traffic volume. Many in the room said the problem has grown so quickly that even their Fortune 500 CEOs are now involved, and that it is actively suppressing digital media investment.
Marketers vehemently demanded that something be done now. That prompted me to think about the importance of understanding the work that has already been done to improve the authenticity of digital media, and to consider what more needs to be done to make sure traffic is high-quality, safe and real.
The idea of “good” and “bad” traffic is interwoven with how, and from whom, media is sourced. Because our transparency tools are still evolving, it’s difficult for marketers to see the progress that has been made and to trust their investments.
All Bots Are Not Created Equal
I spoke to a number of publishers during the event, but one particular conversation was an “a-ha” moment for me. Not all bots are designed to inflate traffic. Bots can be used to harvest content, scrape data, capture analytics or handle countless other site automation tasks. In fact, the more popular the site or the more premium the publisher, the more likely it is that bots are being used for these “good” reasons. We live in an automated environment — and automation by definition means bots. If history has taught us anything, it’s that there will be more automation. But not all of these bots are intended to defraud marketers.
Premium Publishers Are Penalized Too
It’s not just advertisers who suffer because of how bot detection tools work. Even premium publishers can get caught up in buyer “clean sweeps.” Once a publisher is blacklisted in a buying platform, it will never receive money from that platform again. In addition to being bad for the publisher, this is a double whammy for buyers, who first pay for bots and then lose high-quality media experiences because of blacklisting.
Impression-Level Analysis Is the Key
Because this problem affects both programmatic media buys AND direct, non-programmatic media buys, we need to get really tactical about evaluating the quality of every impression at media delivery time. Evaluating media quality at high scale requires technology; this kind of policing can’t be done by hand. Manual blacklisting can take entire good sites out of the media mix for bad reasons. But simply determining whether an impression is a bot is still not the right approach. There is a perspective that is even more fundamental.
Focus on the Human
Whether your buys are done programmatically or directly, as an industry we certainly need to eliminate bots as best we can — but more importantly, we need to begin talking about how we serve ads to real people. Services and technologies need to help us authenticate that an impression equals a real-life person so marketers can buy media without fear. There will always be more technologies and more bots, and trying to optimize them out is almost futile. You can’t “whack-a-mole” your way to better-quality traffic. Yes, it can be complex and costly to focus on delivering to real humans, but wasted media dollars that never turn into sales are even more costly. It’s time for media to focus not only on accountability but also on ensuring that ads are delivered on real Web sites to real consumers who drive real purchases.