Wildly different fraud numbers require greater disclosure to resolve
For example, two vendors measuring the exact same display campaign report entirely different numbers: one reports 2% NHT (non-human traffic) and the other reports 40%. How is that possible?
What makes this even harder to understand is that neither vendor would provide details of how the measurements were done, what assumptions went into the measurements, or why something was labeled NHT in the first place (their supposed “secret sauce”).
To compound the challenges of measurement, most bot detection vendors do not report what portion of the data is not measurable, what portion of the data yields too little information for labeling, or whether they are extrapolating from a small sample. And most don’t measure for humans; they only measure for bots.
An important fact to keep in mind is that saying “10% is bots” does NOT mean the other 90% is human. In fact, a quarter to a half of the data may simply not be measurable. So do they assume the unmeasurable visits are NHT or not NHT? If the vendors don’t carve this out or disclose their assumptions, how would you know?
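The arithmetic above can be made concrete with a quick sketch. The numbers below are hypothetical (a 10% reported bot rate and a 35% unmeasurable share, sitting inside the quarter-to-half range mentioned above); the point is only that "10% bots" leaves the human share far below 90% once unmeasurable traffic is carved out.

```python
# Hypothetical illustration: "10% is bots" does NOT imply "90% is human"
# once traffic that could not be measured at all is accounted for.
total_visits = 1_000_000     # assumed campaign size (hypothetical)
bot_rate = 0.10              # vendor reports "10% is bots"
unmeasurable_rate = 0.35     # a quarter to a half of data may be unmeasurable

bots = total_visits * bot_rate
unmeasurable = total_visits * unmeasurable_rate
confirmed_human_at_most = total_visits - bots - unmeasurable

print(f"Bots:            {bots:,.0f}")
print(f"Unmeasurable:    {unmeasurable:,.0f}")
print(f"Human (at most): {confirmed_human_at_most:,.0f} "
      f"({confirmed_human_at_most / total_visits:.0%}, not 90%)")
```

Whether the vendor silently counts that unmeasurable 35% as human or as bots swings the reported rate enormously, which is exactly why the assumptions need to be disclosed.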
Strategic omissions and half-truths must be eliminated
At the end of 2016, when White Ops published its study exposing the Methbot operation, a dozen other vendors and ad exchanges immediately responded with their own press releases saying they were not affected at all by Methbot.
Granted, the dollars lost to fraud may not actually have been the millions per day, or billions annualized, that the report estimated; but it probably wasn't "$1.20" or "0.002%" either. All of these vendors aren't liars, right?
Right. They looked at the list of IP addresses published with the study and found practically none of those specific IP addresses in that specific month of data. Of course they would find next to nothing.
Other data further confirms that within 24 hours of the study's publication, the listed IP addresses were no longer being used by the bad guys to make money anyway. And it is well known that bad guys quickly rotate websites, dump cookies, and respawn bots with new IP addresses whenever the old ones stop making money, in order to cover their tracks.
Context is paramount when reporting ad fraud
So, it should be clear that providing context is crucial to the accuracy of reporting ad fraud and critical to the proper understanding of it. For example, "Google said it blocked 1.7 billion 'bad ads' in 2016." Cool. That's a big number, right? Of course it is; until you consider that Google alone serves more than 18 billion ads PER DAY (as reported in 2012, so it's probably more now). In other words, over the course of an entire year, Google blocked fewer ads than a tenth of what it serves in a single day. Not such a big deal anymore, right?
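That comparison is a one-liner; both inputs come straight from the figures above (1.7 billion blocked in 2016, 18 billion served per day per the 2012 figure, which is assumed here to still be a floor).

```python
# Putting "1.7 billion bad ads blocked in 2016" in context.
blocked_per_year = 1.7e9   # Google's reported blocked ads, full year 2016
served_per_day = 18e9      # ads served PER DAY (2012 figure; likely higher now)

fraction_of_one_day = blocked_per_year / served_per_day
print(f"Blocked in all of 2016: {fraction_of_one_day:.1%} of one day's volume")
```

A year's worth of blocking works out to roughly 9% of a single day's ad serving, which is why the headline number shrinks so dramatically with context.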
Also, remember that $7.2 billion number, reported by the Association of National Advertisers/White Ops as the ad fraud number for 2016? Is it 11% of $70 billion of digital ad spend in 2016? Nope. The study only measured display ad campaigns; no search ads were measured. Display spend in 2016 was $28 billion, according to the Interactive Advertising Bureau's full-year report. Further, that study didn't measure display ads inside Facebook or Google Display Network ads in AdWords, because third-party ad trackers are not permitted there. So you remove a combined $16 billion in 2016, according to eMarketer, to leave $12 billion of display ad spend outside of Google and Facebook. The $7.2 billion should thus be divided by $12 billion to get a 60% rate of ad fraud in display. Hmmm.... Without this context, most people would have thought ad fraud was 11%.
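The denominator correction above is easy to verify. All figures are the ones cited in the text (in billions of USD); the "naive" rate divides by all digital spend, the contextual rate divides only by the display spend the study could actually measure.

```python
# Recomputing the denominator for the $7.2B ANA/White Ops estimate,
# using the spend figures cited in the text (billions USD, 2016).
fraud_estimate = 7.2
total_digital_spend = 70.0   # all digital ad spend
display_spend = 28.0         # IAB full-year display figure
fb_google_display = 16.0     # unmeasurable by third-party trackers (eMarketer)

measurable_display = display_spend - fb_google_display   # spend the study covered
naive_rate = fraud_estimate / total_digital_spend        # roughly 10-11%
contextual_rate = fraud_estimate / measurable_display    # 60%

print(f"Naive rate (vs all digital spend):  {naive_rate:.0%}")
print(f"Rate vs measurable display spend:   {contextual_rate:.0%}")
```

Same $7.2 billion numerator, a six-fold difference in the implied fraud rate, purely from choosing the right denominator.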
(Many of the figures I cite here have a corresponding visual in my slide deck, seen here.)
In the shift toward greater transparency, let's cut out the half-truths, strategic omissions, and other forms of spin that prolong the ill effects of ad fraud, a cancer sapping the very life out of the digital advertising ecosystem. With greater clarity and a better understanding of ad fraud, we can begin the important work of cleaning up the digital ad ecosystem, together.