How Advertisers Can Beat Bots
Recent press coverage estimates that fraudulent ad delivery costs the industry $400 million annually. While that number is small relative to an industry measured in the tens of billions, it points to a persistent problem.
Bots are still generating Web traffic and delivering a glut of inventory to the marketplace – ad impressions that will never end up in front of a human being. No advertiser wants any of their budget to contribute to the millions lost to fraud.
Much of the conversation around fraud thus far has focused on the publisher side and how less-reputable sites flood exchanges with bot-driven inventory. These “ghost sites” are a problem, but by looking at fraud through an audience lens rather than a publisher’s, advertisers can defeat the bots.
One of the easiest methods to avoid bots is to work with others in a data co-op. Offline co-ops have allowed retailers to reach consumers for years. By pooling mailing lists and purchase information, direct mail advertisers could rest assured their catalog was going to a real person. Online, co-ops offer strong protection against fraud, due in large part to data pooled from several advertisers.
Through a shared cookie pool, it’s relatively easy to detect users who appear once on a site and never show up again across every site participating in the co-op. This “one-hit wonder” traffic pattern is an indicator of likely fraud, and easy to avoid.
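The one-hit-wonder check boils down to counting how often each cookie ID appears across the co-op’s pooled logs. The sketch below assumes a simplified pool of (cookie ID, site) pairs; real co-op data is far richer, and the structure here is purely illustrative.

```python
from collections import Counter

def flag_one_hit_wonders(impressions):
    """Flag cookies seen exactly once across the entire co-op pool.

    `impressions` is a list of (cookie_id, site) pairs -- a hypothetical
    shape for the shared pool, used only to illustrate the pattern.
    """
    seen = Counter(cookie for cookie, _ in impressions)
    # A cookie that appears once and never again, across every
    # participating site, matches the "one-hit wonder" fraud signal.
    return {cookie for cookie, count in seen.items() if count == 1}

pool = [("c1", "siteA"), ("c2", "siteA"), ("c1", "siteB"), ("c3", "siteC")]
print(flag_one_hit_wonders(pool))  # cookies c2 and c3 appear only once
```

A single advertiser’s logs can’t run this check reliably; the signal only emerges when several advertisers’ traffic is pooled, which is exactly the co-op’s advantage.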
Another easy way to avoid buying media against bot traffic is to utilize data sets that eliminate non-human traffic. Purchase data is a great place to start, especially for online retailers who often buy on a cost-per-click basis. Bots can go deep into the purchase funnel on a page, but can’t make a purchase, and likely never will have that ability.
By leveraging purchase data, advertisers mitigate the risk of targeting non-existent consumers. Past purchases are not only an indication of a living, breathing human, but also a clear sign of interest in a specific product. Armed with this information, advertisers avoid bots while focusing on their best customers.
The other side of the coin with using data sets is identifying users who never make a purchase. Bots driving loads of traffic on one site will click frequently but never buy anything. Occasionally this pattern is the product of an indecisive consumer needing more time to finalize a purchase, but in most cases it indicates bot traffic.
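In practice this heuristic means flagging cookies with heavy click activity and no purchase on record. The sketch below assumes a click-count map and a set of purchasing cookie IDs; both structures and the threshold are assumptions for illustration, not a production fraud filter.

```python
def suspect_cookies(clicks, purchases, min_clicks=20):
    """Flag cookies that click heavily but never purchase.

    `clicks` maps cookie_id -> total click count; `purchases` is the set
    of cookie IDs with at least one recorded purchase. The `min_clicks`
    threshold is illustrative -- it leaves room for the indecisive
    shopper who clicks a few times before buying.
    """
    return {
        cookie for cookie, count in clicks.items()
        if count >= min_clicks and cookie not in purchases
    }

clicks = {"c1": 50, "c2": 3, "c3": 40}
purchases = {"c3"}
print(suspect_cookies(clicks, purchases))  # only c1: heavy clicks, no purchase
```

Note that c3 clicks heavily but is cleared by its purchase record, which is the point of joining the two data sets rather than looking at clicks alone.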
The goal here is to identify which sites are driving the clicks. As the recent coverage has shown, suspect publishers are the primary source of ad fraud. It’s possible for reputable sites to get attacked by bots, but it’s far more common for a publisher to throw up a low-quality site, build ad space and run bots against that site to commit click fraud. This can fool a buying platform, unless that platform is looking for patterns in the traffic. When one site is pushing lots of inventory with a high click rate and very few purchases, it’s another indicator of fraud, and advertisers can avoid both that site and any cookies connected to it.
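That site-level pattern can be expressed as two ratios per site: click-through rate (clicks over impressions) and conversion rate (purchases over clicks). The sketch below flags sites where the first is suspiciously high and the second is near zero; the data shape, site names and thresholds are all assumptions for illustration.

```python
def flag_suspect_sites(site_stats, ctr_threshold=0.05, conv_threshold=0.001):
    """Flag sites pushing high-click, no-purchase traffic.

    `site_stats` maps site -> (impressions, clicks, purchases); the
    thresholds are illustrative and would be tuned against real data.
    """
    flagged = []
    for site, (imps, clicks, buys) in site_stats.items():
        ctr = clicks / imps if imps else 0.0
        conv = buys / clicks if clicks else 0.0
        # Lots of clicks that almost never convert is the bot signature.
        if ctr >= ctr_threshold and conv <= conv_threshold:
            flagged.append(site)
    return flagged

stats = {
    "ghost.example": (100_000, 9_000, 1),   # 9% CTR, ~0.01% conversion
    "news.example": (100_000, 500, 40),     # 0.5% CTR, 8% conversion
}
print(flag_suspect_sites(stats))  # flags ghost.example only
```

Once a site is flagged, both its inventory and the cookies tied to it can be excluded from future buys.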
The best way to defeat bots is a combination of the above methods, which requires constant segmentation of cookies to identify which sets exhibit bot-like behavior. Filtering users programmatically weeds out the potentially suspect inventory, giving advertisers access to only real-life humans. By matching those cookies to other data sources, advertisers can get great campaign results while sharply reducing their exposure to fraud.