Two of the hottest issues in digital measurement right now are viewability and non-human traffic (NHT), which includes but is not limited to fraud. It shouldn’t be surprising, then, to discover
that these two issues converge.
By my count, 16 services are currently accredited by the Media Rating Council (MRC) for viewability measurement, and there has been a marketplace expectation that
MRC-accredited viewability measurement providers should report converging metrics for viewability if two or more such solutions are deployed on the same campaign. As you probably know, this has not
been the case. Disparities between measurement providers were a driver behind the IAB
calling for an industry standard of 70% viewability—meaning buyers and sellers should agree that 70% of purchased impressions on a campaign must be viewable. Not surprisingly, the 4A's is of a different mind, saying in a letter to members that it will “not
endorse” the IAB’s proposed 70%.
These are logical negotiating positions for each organization to take, and I’m disinclined to get in the middle.
What does make this
a measurement issue, though, is the fact that the IAB cites measurement differences of 30% to 40% between accredited providers as one of the reasons for the 70% threshold. There seems to be a perception
that such deviations are driven by the inability of some providers to measure the viewability status of all served impressions.
I think to some extent that perception arises from the fact
that, at one point, unfriendly cross-domain iFrames presented a barrier to measurement of a sizable share of served impressions. But most of the major measurement providers have sussed the
cross-domain iFrame challenge, as you can see in this summary chart provided by the MRC. I urge
all viewability vendor patrons to review this table. Similarly, ABC in the U.K. recently published its
first Viewability Report, which looked at the measurement capabilities of four vendors in an exhaustive array of browser/OS configurations. (Disclosure: my company, comScore, is included in both
round-ups.)
In a piece last week, the Washington Post’s Jeff Burkett coined the term “dark viewability” to describe the phenomenon of publishers losing credit for
served impressions that viewability providers are unable to measure, suggesting that those reporting higher viewability are able to measure more campaign impressions.
While I am sympathetic to
Burkett, and to all publishers who need and deserve credit for all viewable inventory, I don’t believe there is such a thing as dark viewability. Many of the viewability providers can measure
upwards of 95% of a campaign’s impressions, and the viewability for the unmeasured impressions is typically modeled based on the disposition of the measured impressions.
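To make that modeling step concrete, here is a minimal sketch with made-up campaign numbers, projecting the disposition of the measured impressions onto the unmeasured remainder. Actual vendor models are more sophisticated; this is only the simplest version of the idea:

```python
# Hypothetical campaign: ~95% of impressions are directly measured,
# and 60% of those measured impressions were viewable.
total = 10_000_000
measured = 9_500_000
measured_viewable = 5_700_000

# Rate observed on the measured portion of the campaign.
measured_rate = measured_viewable / measured  # 0.60

# Model the unmeasured remainder as having the same disposition.
unmeasured = total - measured
modeled_viewable = unmeasured * measured_rate

# Campaign-level viewability now covers all served impressions.
campaign_viewability = (measured_viewable + modeled_viewable) / total
print(f"{campaign_viewability:.0%}")
```

Because the unmeasured slice inherits the measured rate, the campaign-level number here lands at the same 60%; the point is that no impressions are simply dropped as "dark."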
So why then do
accredited providers still deviate materially on reported campaign viewability?
It turns out that the primary driver of these differences is the extent to which different providers
exercise diligence in identifying and filtering out NHT – which the MRC requires be classified, a priori, as not viewable. Ultimately, the more fraud you catch, the lower the viewability you
report.
Let me be very clear. If your measurement provider is reporting unrealistically high viewability, you are probably paying for fraud.
It is not uncommon for publishers or
agencies to run the technologies of multiple viewability vendors on a single campaign, as part of the evaluative process when making a purchase decision from among several providers. Often we have
seen these bake-offs produce data like the following:

              (A) Total impressions   (B) Filtered as NHT   (C) Post-filtration   Viewable (80% of C)   Reported viewability
Provider A    13,000,000              4,000,000             9,000,000             7,200,000              55%
Provider B    13,000,000              0                     13,000,000            10,400,000             80%
In this
illustration, Measurement Provider A has identified 4 million impressions as fraudulent or otherwise non-human; a priori, these are treated as not viewable. Each vendor classifies 80% of the
“post-filtration” impressions (column C) as viewable. Since Measurement Provider A excluded 4 million impressions as NHT, its reported campaign viewability is 55% (7.2 million divided
by the total impression count of 13 million); Measurement Provider B, filtering no traffic as NHT, reports a viewability of 80% (10.4 million divided by 13 million).
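The arithmetic above can be sketched in a few lines; the figures are the hypothetical bake-off numbers from the illustration, not real campaign data:

```python
# Both vendors see the same 13M served impressions and classify 80% of
# whatever survives their filtration as viewable; only the amount of
# traffic they flag as NHT differs.
TOTAL_IMPRESSIONS = 13_000_000
VIEWABLE_RATE_OF_MEASURED = 0.80

def reported_viewability(total: int, filtered_as_nht: int, viewable_rate: float) -> float:
    """NHT impressions are treated as not viewable a priori, but they stay
    in the denominator, so heavier filtration lowers the reported rate."""
    post_filtration = total - filtered_as_nht        # column C
    viewable = post_filtration * viewable_rate       # known-viewable count
    return viewable / total

provider_a = reported_viewability(TOTAL_IMPRESSIONS, 4_000_000, VIEWABLE_RATE_OF_MEASURED)
provider_b = reported_viewability(TOTAL_IMPRESSIONS, 0, VIEWABLE_RATE_OF_MEASURED)
print(f"Provider A reports {provider_a:.0%}, Provider B reports {provider_b:.0%}")
```

Same inventory, same viewable rate on measured traffic, yet a 25-point gap in the headline number, driven entirely by NHT filtration.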
So what can you do
about this, if you are a publisher, advertiser, or agency depending on viewability measurement? First off, there is good news: the MRC has recognized that differential NHT filtration may well be a
driver of reported differences, and is working to make sure the viewability standards reflect this.
Also, George Ivie, CEO of the MRC, tells me that agencies report vendor disparities
have been narrowing of late, as accredited vendors adopt the MRC’s points of guidance, one of which addresses the treatment of NHT.
For
now, though, it’s important to note that the MRC requires accredited measurement providers to break out total impressions; impressions flagged as NHT through active filtration; and impressions
that are known viewable, known not viewable, and of unknown viewability. Users should dig into these metrics to understand how fraud detection is (or isn’t)
driving reported viewability.
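As a sketch of the kind of consistency check a buyer might run on that MRC-required breakout. The field names below are illustrative, not any vendor's actual reporting schema:

```python
# Hypothetical vendor report exposing the MRC-required breakout.
# Field names are made up for illustration.
report = {
    "total": 13_000_000,
    "nht_filtered": 4_000_000,
    "known_viewable": 7_200_000,
    "known_not_viewable": 1_500_000,
    "viewability_unknown": 300_000,
}

# The four categories should account for every served impression.
accounted = (report["nht_filtered"] + report["known_viewable"]
             + report["known_not_viewable"] + report["viewability_unknown"])
assert accounted == report["total"], "breakout does not sum to total impressions"

# An NHT rate near zero is worth questioning: the more fraud you catch,
# the lower the viewability you report.
nht_rate = report["nht_filtered"] / report["total"]
viewability = report["known_viewable"] / report["total"]
print(f"NHT rate: {nht_rate:.0%}, reported viewability: {viewability:.0%}")
```

A report that sums cleanly but shows essentially no NHT filtration is exactly the pattern, per the argument above, that suggests you may be paying for fraud.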