On Viewability, Fraud, And Measurement Disparities

Two of the hottest issues in digital measurement right now are viewability and non-human traffic (NHT), which includes but is not limited to fraud. It shouldn’t be surprising, then, to discover that these two issues converge.

By my count, 16 services are currently accredited by the Media Rating Council (MRC) for viewability measurement, and the marketplace has expected that when two or more MRC-accredited solutions are deployed on the same campaign, they should report converging viewability metrics. As you probably know, this has not been the case. Disparities between measurement providers were a driver behind the IAB's call for an industry standard of 70% viewability, meaning buyers and sellers should agree that 70% of purchased impressions on a campaign must be viewable. Not surprisingly, the 4A's is of a different mind, saying in a letter to members that it will "not endorse" the IAB's proposed 70%.

These are logical negotiating positions for each organization to take, and I’m disinclined to get in the middle.

What does make this a measurement issue, though, is the fact that the IAB cites measurement differences of 30% to 40% between accredited providers as one of the reasons for the 70% threshold. There seems to be a perception that such deviations are driven by the inability of some providers to measure the viewability status of all served impressions.

I think to some extent that perception arises from the fact that, at one point, unfriendly cross-domain iFrames presented a barrier to measurement of a sizable share of served impressions. But most of the major measurement providers have sussed the cross-domain iFrame challenge, as you can see in this summary chart provided by the MRC. I urge all viewability vendor patrons to review this table. Similarly, ABC in the U.K. recently published its first Viewability Report, which looked at the measurement capabilities of four vendors in an exhaustive array of browser/OS configurations (Disclosure: my company, comScore, is included in both round-ups.)

In a piece last week, the Washington Post’s Jeff Burkett coined the term “dark viewability” to describe the phenomenon of publishers losing credit for served impressions that viewability providers are unable to measure, suggesting that those reporting higher viewability are able to measure more campaign impressions.

While I am sympathetic to Burkett, and to all publishers who need and deserve credit for all viewable inventory, I don’t believe there is such a thing as dark viewability. Many of the viewability providers can measure upwards of 95% of a campaign’s impressions, and the viewability for the unmeasured impressions is typically modeled based on the disposition of the measured impressions.
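The modeling described above can be sketched in a few lines. This is a minimal illustration of the simple proportional approach the column describes; the function name and all numbers are hypothetical, not taken from any vendor's actual methodology.

```python
# Sketch: estimating campaign viewability when some impressions go unmeasured.
# Assumption (proportional model): unmeasured impressions are assigned the
# viewable rate observed on the measured impressions.

def estimated_viewability(measured_viewable, measured_total, campaign_total):
    """Project the measured viewable rate onto the whole campaign."""
    measured_rate = measured_viewable / measured_total
    # Unmeasured impressions inherit the measured rate, so under this model
    # the campaign-level estimate equals the measured rate itself.
    estimated_viewable = measured_viewable + measured_rate * (campaign_total - measured_total)
    return estimated_viewable / campaign_total

# Hypothetical campaign: 95% of 10M impressions measured, 70% of those viewable.
rate = estimated_viewability(6_650_000, 9_500_000, 10_000_000)
print(f"{rate:.0%}")  # 70%
```

The point is that under this kind of modeling, a 5% measurement gap does not create "dark viewability"; it simply extends the measured rate to the remainder.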

So why then do accredited providers still deviate materially on reported campaign viewability? 

It turns out that the primary driver of these differences is the extent to which different providers exercise diligence in identifying and filtering out NHT – which the MRC requires be classified, a priori, as not viewable. Ultimately, the more fraud you catch, the lower the viewability you report.

Let me be very clear. If your measurement provider is reporting unrealistically high viewability, you are probably paying for fraud.

It is not uncommon for publishers or agencies to run the technologies of multiple viewability vendors on a single campaign, as part of the evaluative process when making a purchase decision from among several providers. Often we have seen these bake-offs produce data like this:

                               Total        NHT        Post-filtration    Viewable      Reported
                               impressions  filtered   impressions (C)    (80% of C)    viewability
    Measurement Provider A     13.0M        4.0M       9.0M               7.2M          55%
    Measurement Provider B     13.0M        0          13.0M              10.4M         80%

In this illustration, Measurement Provider A has identified 4 million impressions as fraudulent or otherwise non-human; a priori, these are treated as not viewable. Each vendor classifies 80% of the "post-filtration" impressions (column C) as viewable. Since Measurement Provider A excluded 4 million impressions as NHT, its reported campaign viewability is 55% (7.2 million divided by the total impression count of 13 million); Measurement Provider B, filtering no traffic as NHT, reports a viewability of 80% (10.4 million divided by 13 million).
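The arithmetic in that illustration is worth making explicit. Here is a short sketch of it; the function name is mine, and the figures are the same illustrative ones used above.

```python
# Sketch of the bake-off arithmetic: NHT impressions are a priori non-viewable,
# so heavier filtration lowers reported viewability even when both vendors see
# an identical viewable rate on post-filtration traffic.

def reported_viewability(total, nht_filtered, post_filtration_viewable_rate):
    post_filtration = total - nht_filtered                    # column C
    viewable = post_filtration * post_filtration_viewable_rate
    return viewable / total  # NHT stays in the denominator

TOTAL = 13_000_000
print(f"Provider A: {reported_viewability(TOTAL, 4_000_000, 0.80):.0%}")  # 55%
print(f"Provider B: {reported_viewability(TOTAL, 0, 0.80):.0%}")          # 80%
```

Notice that the gap between 55% and 80% is entirely a filtration difference, not a measurement-capability difference.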

So what can you do about this, if you are a publisher, advertiser, or agency depending on viewability measurement? First off, there is good news: the MRC has recognized that differential NHT filtration may well be a driver of reported differences, and is working to make sure the viewability standards reflect this.

Also, George Ivie, CEO of the MRC, tells me that agencies report vendor disparities have been coming down of late, as accredited vendors adopt the MRC's points of guidance. One of those points refers to the treatment of NHT.

For now, though, it's important to note that the MRC requires accredited measurement providers to break out total impressions; impressions flagged as NHT through active filtration; and impressions that are known viewable, known not viewable, or of unknown viewability. Users should dig into these metrics to understand how fraud detection is (or isn't) driving reported viewability.
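Digging into that breakout can be as simple as reconciling the buckets against the total. A minimal sketch, using entirely hypothetical vendor numbers:

```python
# Sketch of the breakout the MRC requires, with hypothetical figures.
# Reconciling the buckets shows how much of a vendor's reported viewability
# is shaped by NHT filtration versus measurement coverage.

report = {
    "total_impressions": 13_000_000,
    "nht_filtered": 4_000_000,
    "known_viewable": 6_800_000,
    "known_not_viewable": 1_700_000,
    "viewability_unknown": 500_000,
}

# Every impression should land in exactly one bucket below the total.
accounted = sum(v for k, v in report.items() if k != "total_impressions")
assert accounted == report["total_impressions"], "buckets don't reconcile"

nht_share = report["nht_filtered"] / report["total_impressions"]
viewable_rate = report["known_viewable"] / report["total_impressions"]
print(f"NHT share: {nht_share:.0%}, known-viewable rate: {viewable_rate:.0%}")
```

Two vendors reporting the same known-viewable rate but very different NHT shares are telling you very different things about the traffic you bought.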

4 comments about "On Viewability, Fraud, And Measurement Disparities":
  1. Joshua Chasin from VideoAmp, January 15, 2015 at 9:11 a.m.

    By the way-- I had asked George Ivie of MRC for a quote for this column. Unfortunately, George's quote was too long to be included. But I will share it here.

    "MRC has released guidance to measurement vendors and the marketplace on how to minimize counting discrepancies, and we have continued to work on this problem; recently we sent further guidance, for discussion purposes so far, to vendors of additional points that may be causing differences. We believe this further guidance will increase vendor consistency significantly. It is true that one of our points is the consistency of removal and reporting treatment of invalid digital traffic. Vendors apply different levels of 'advanced' filtration for invalid traffic (specifically 'advanced' filtration is filtration that goes beyond the standard filtration required in the IAB's served impression measurement guidelines of list-based and activity-based filtration in an effort to identify more difficult to detect invalid activity). Some accredited vendors apply extensive advanced filtration procedures, but others may do very little advanced filtration. Depending on the level of this advanced filtration and how this additional filtration is reported, it can result in counting and/or viewable rate differences between vendors. Our initial reconciliation guidance identified order of processing and filtration as an issue, and our newest reconciliation points include a reminder that this can be a source of differences, and will provide clear instructions on how this activity is to be reported in a consistent and isolated manner."

  2. Jeff Burkett from Washington Post, January 16, 2015 at 1:55 p.m.


    Thanks for continuing this important discussion but I feel like we are still on slightly different pages. Since my story, many of the other vendors have reached out to me and we have had very productive discussions. I hope we can do the same.


  3. Joshua Chasin from VideoAmp, January 16, 2015 at 1:59 p.m.

    Hey Jeff. I just sent you some times via LinkedIn for next week. Look forward to connecting.


  4. Kristian Magel from Initiative, February 9, 2015 at 12:36 p.m.

    Josh and Jeff, pls publish the outcome of your conversation! If you already have, maybe send me a link :-).
