Commentary

The Importance Of Third-Party Measurement

In the media advertising business, third-party measurement is a cornerstone that facilitates commerce. It has been thus at least since 1929, when old Archibald Crossley started calling folks to ask what they had listened to on the radio the prior day. Third-party research — especially audited, accredited third-party research — has always ignited whatever ad market the research served. For example, the early history of television in the United States and the history of the Nielsen ratings are inextricably intertwined.

Historically, the advent of third-party measurement has provided advertisers with the confidence and accountability to fully commit to spending in a medium, allowing media operators to monetize their inventory to the maximum extent possible. This has truly been a win/win institution.

The Internet has long been called the most measurable medium; for better or worse, the corollary to that is the Internet is the medium with the most measures.  So we have website audience sizing — “the ratings” — but we also have campaign audience tracking, viewability, brand safety, fraud prevention, attribution, ad effectiveness: all different types of measurement that are, collectively, a fundamental part of the ecosystem, and that comprise the engine on which digital business gets done. And there is certainly no shortage of third-party measurement providers.  Just consider everybody’s favorite measurement type, viewability. By my count, there are currently 14 third-party measurement providers with a viewability accreditation from the MRC.

Clearly, I don’t have to sell you, the Metrics Insider readership, on the importance of third-party measurement. But there are two dynamics for us, collectively, to consider.

First, there is the dynamic of first-party measurement — of publishers providing their own measurement to their advertiser clients. In addition to those 14 viewability providers I mentioned above, I counted another five accreditations that I would call first-party viewability accreditations. Digital technology enables companies that are not in the measurement business per se to create and accredit measurement systems, and many blue-chip publishers have done so. It is not my intent to argue against first-party measurement; but I will observe that there is much evidence that the marketplace generally prefers and demands independent third-party measurement for commerce to truly flow.

Second, and probably the knottier dynamic, is the fact that at this point most digital measurement systems depend on publisher participation via tagging.  I was first confronted with the phenomenon of measurement requiring media operator participation back in the ‘90s when I was at Arbitron and we were developing and launching the PPM service. With PPM, broadcasters needed to encode in order to show up in the ratings. Many of us thought this was wholly unworkable; if one big broadcaster didn’t like their numbers, or was in a feud with the measurement company, they could pull the encoders and effectively undermine the service. But this fear proved to be unwarranted, and today measurement requiring publisher participation is the norm — especially in the digital space.

But some publishers push back on third-party tagging. And many of them have good reasons, including concerns about site response latency and “data leakage.”  It's hard not to be sympathetic to these concerns. But I believe most publisher concerns about third-party tagging can be addressed by the technology as it stands today.

I also believe that open, ubiquitous third-party measurement is what the market wants — both buyers and sellers. If this is true, then that market should coalesce around the value of third-party measurement, and reaffirm the importance of the ubiquity and free flow of such measurement. Indeed, one major publisher, Yahoo, has just done this very thing. And advertisers — upon whose patronage we all, ultimately, depend — should support this value with their purse strings, in order to assure that the ecosystem continues to work best for them, and for the rest of us.

12 comments about "The Importance Of Third-Party Measurement".
  1. Daryl McNutt from New Moon Ski & Bike, June 3, 2015 at 3:45 p.m.

    I am a big Josh Chasin fan. I totally agree with the concept behind this article. My only ask is that comScore and Nielsen are transparent about the data and accuracy of their measurement models. I think we can have guaranteed buying based on vCE or OCR, but advertisers and agencies need to know the range of accuracy for the demo estimates. Also, both use underlying data sets of players that are competing for media dollars against others that are not part of the data being fed into these models. I think once we can have clarity and transparency we can really see third party measurement be a driver of media buying for the future. Might even get some of the TV spending to move over;-)

  2. Ed Papazian from Media Dynamics Inc, June 3, 2015 at 3:46 p.m.

    A very good piece, Josh. One additional comment. In the old days, the agencies, in particular, were the watchdogs over third-party audience and related measurement "services" that were used to evaluate media buys. One of their most important functions was to insist on proper validation of the methodologies. Sadly, the emphasis has changed from being concerned about the accuracy of the findings to getting more and more data----even if it is overtaxing the research designs and asking for more than can reasonably be obtained. Even more sadly, I doubt that this is going to change----there certainly is no sign of any newfound interest in validity. Data rules----long live data----any data.

  3. Joshua Chasin from VideoAmp, June 3, 2015 at 3:55 p.m.

    Hey Daryl, thanks for the kind words. Daryl, Ed, agree with both of you.

  4. James Curran from www.staq.com, June 3, 2015 at 5:01 p.m.

    Publishers also push back on it because it can be too much work for them to include it. Now there's a third-party reporting vendor for everything... it started with impressions, then rich media vendors, then fraud, now viewability. It creates more work that someone needs to pay for. There are companies now, like STAQ, that exist only because of the number of data vendors used in digital media today.

  5. John Grono from GAP Research, June 3, 2015 at 5:05 p.m.

    Great post, Josh. Agree 100% with Ed, and have a question along Daryl's line of thinking.

    Given that so many third parties and first parties (14 and 5 by Josh's count) have MRC accreditation, does the MRC report each validated system's results on "the same" data set? We don't have a single MRC here in Australia, but we have bespoke industry bodies for each medium, consisting of media owners, media agencies and advertisers, that are charged with ensuring that, like-for-like, when the data is processed, "the same" result ensues.

    For example, for television and radio we have a gold standard (and I realise that the internet is different). A dummy data set (based on real data but with some algorithmic changes to the raw data) is sent to third-party software vendors, along with the specifications of the validation and algorithms to be used. This is accompanied by a series of data runs for which the correct results are provided. The vendor codes their software until the correct results are achieved. They are then sent a second data set and a second series of data runs, but with no results provided. They process the data runs and return the results - if they get 100% then they are gold standard certified and can operate in the marketplace. Random checks are done every six months or when new data types etc. are included.
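The certification handshake described above (blind data runs, 100% match required) can be sketched as a simple harness. This is an illustrative toy, not any industry body's actual software; the function names and data shapes are hypothetical, and the "agreed" metric here is just a stand-in weighted reach count.

```python
# Toy sketch of gold-standard certification: the vendor's processing
# function is certified only if it reproduces the agreed results exactly
# on a blind set of data runs (whose expected results it never sees).

def certify(vendor_process, blind_runs):
    """Return True only if the vendor matches 100% of the blind runs.

    blind_runs: list of (input_data, expected_result) pairs held by
    the industry body.
    """
    return all(vendor_process(data) == expected
               for data, expected in blind_runs)

# Hypothetical vendor implementation: sum of sample weights for people
# who viewed -- a stand-in for the real validation algorithms.
def toy_vendor(data):
    return sum(weight for person, weight in data if person["viewed"])

runs = [
    ([({"viewed": True}, 1.5), ({"viewed": False}, 2.0)], 1.5),
    ([({"viewed": True}, 1.0), ({"viewed": True}, 0.5)], 1.5),
]
print(certify(toy_vendor, runs))  # 100% match -> certified
```

An exact-match requirement (rather than a tolerance) is what makes every certified vendor report the same "agreed" number on the same input, which is the point of the scheme.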

    This ensures that every vendor, buyer and client get the same 'agreed' results. You can argue till the cows come home whether the 'agreed' results are the 'correct' results, but it removes a great degree of confusion from the marketplace.

    Of course the internet is very different in that there is no "one data set", but I am sure that the principle can be extended so that a client could be sure that at least the data was being validated and processed the same way.

  6. Ed Papazian from Media Dynamics Inc, June 3, 2015 at 7:14 p.m.

    One of the ongoing confusions about terms like "accredited" and "validated" is that people who are not familiar with research, and usually use only one source for the audience "currency" in their media buys, think that the two are the same. They aren't. For example, I would assume that Arbitron's radio diaries were accredited and that the same thing happened to the PPMs when they appeared on the scene---replacing the diaries. Yet the two measurements produced very different answers, and it is now assumed that the diaries were way off base in their findings. In like manner, in the magazine audience field, I assume that the Simmons "through-the-book" visual recognition studies were accredited as well as the MRI "recent reading" studies, yet these also produced very different audience levels. Currently we have the PPMs being used for out-of-home TV "viewing" measurements, presumably as an add-on to in-home peoplemeter findings----and I wouldn't be surprised if the PPMs are eventually accredited as regards their TV "viewing" data----but are the two methodologies really measuring the same thing? That's a "validation" question.

  7. Anto Chittilappilly from Visual IQ, June 3, 2015 at 10:42 p.m.

    Josh, Great article. Quite important to have an unbiased third party to do the marketing measurements for any marketer.

  8. Joshua Chasin from VideoAmp, June 4, 2015 at 11:33 a.m.

    John: the notion of designating a single data set as "right" would probably be anathema to the American marketplace. But the MRC is indeed grappling with these issues. I'm typically loath to speak on their behalf, but I can tell you that they are using standard-setting for both viewability and invalid-traffic reporting to require reporting that allows for as close an apples-to-apples comparison as possible across vendors. We've seen, and I've written in this space about, situations where two viewability vendors on the same campaign show vastly different numbers, but the differences are almost exclusively driven by the differential ability of the providers to identify and filter non-human traffic. The MRC is moving toward a structure where providers would report viewability on all impressions post the "easy" filtration (e.g. the IAB bot list), then exclude other NHT and report the net viewability. The notion is, the first set of metrics should be as close as possible across vendors. Then the "special sauce" that different vendors layer on can be assessed by the buyer of the research.
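The two-stage reporting structure described above can be sketched as follows. This is a hedged illustration of the idea, not MRC's specification; the field names (`on_bot_list`, `vendor_flagged_nht`, `viewable`) are hypothetical.

```python
# Sketch of two-stage viewability reporting: first a "gross" rate after
# only the easy, standardized filtration (e.g. the IAB bot list), which
# should be comparable across vendors; then a "net" rate after each
# vendor's proprietary NHT detection (the "special sauce").

def viewability_report(impressions):
    """impressions: list of dicts with boolean flags 'on_bot_list',
    'vendor_flagged_nht', and 'viewable' (hypothetical schema)."""
    stage1 = [i for i in impressions if not i["on_bot_list"]]
    stage2 = [i for i in stage1 if not i["vendor_flagged_nht"]]

    def rate(pool):
        return sum(i["viewable"] for i in pool) / len(pool) if pool else 0.0

    return {
        "gross_viewability": rate(stage1),  # comparable across vendors
        "net_viewability": rate(stage2),    # includes vendor-specific NHT
    }

imps = [
    {"on_bot_list": True,  "vendor_flagged_nht": False, "viewable": True},
    {"on_bot_list": False, "vendor_flagged_nht": True,  "viewable": False},
    {"on_bot_list": False, "vendor_flagged_nht": False, "viewable": True},
    {"on_bot_list": False, "vendor_flagged_nht": False, "viewable": False},
]
print(viewability_report(imps))  # gross = 1/3, net = 1/2
```

The design point is that the first number isolates the standardized part of the pipeline, so any remaining vendor-to-vendor divergence shows up only in the second number.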

    MRC also undertook an initiative to collect results data from viewability users when two or more services were measuring the same campaigns. You might want to take a look at this link, where they describe the test and the call for participation: http://www.iab.net/mrcdatarequest

    Cheers.

  9. Joshua Chasin from VideoAmp, June 4, 2015 at 11:33 a.m.

    Sorry for all the typos.

  10. Tony Jarvis from Olympic Media Consultancy, June 4, 2015 at 6:56 p.m.

    Sorry, but this excellent piece (also a tremendous Josh fan!) and all the comments, notably from John Grono and Ed Papazian, simply remind us again in the US (!) that the JIC - Joint Industry Committee - approach to third-party measurement, practiced by virtually the rest of the world, would provide, as Josh stated, the "open, ubiquitous third-party measurement" that "the market wants". JICs also provide a single industry-accepted currency for each media channel, with a unique opportunity for harmonization or a common currency across all media currencies. That currency measurement approach would save the US industry millions annually in the cost of ratings and greatly simplify the "basic" plan/buy/sell process, with further savings accrued and much of the extensive confusion eliminated.
    Such an approach would not, and does not, discourage the development and use of ancillary research from a wide variety of companies to extend the basic ratings to higher, more "eloquent" levels for the agency, the media channel or the brand.
    Josh's article really raises the question of why the major global advertisers (and their agencies) have permitted the current and generally appalling media measurement structure in the US to continue for so long, when the advantages of JICs worldwide are so clearly evident. Per Josh, advertisers should "assure that the ecosystem continues to work best for them". FYI: It sure doesn't currently. Shall we merely start with Spot TV?!!

  11. John Grono from GAP Research, June 4, 2015 at 9:56 p.m.

    Just an addendum. Let's say that the MRC approach allowed a 'tolerance' of +/-5% for accreditation by the various vendors (tolerance from what benchmark, I'm not sure). That is an 'allowable' spread of 10%. But so many deals are either done based on CPM, or CPM is a major consideration. So, one publisher may be selling at a $10 CPM, which is in effect 'the same' as an $11 CPM.
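The arithmetic behind this point can be made concrete. A minimal sketch, under one possible reading of the tolerance (that two CPMs are indistinguishable when their +/-5% bands overlap); the function name is ours, not an industry term:

```python
# With a +/-5% measurement tolerance, two CPMs cannot be told apart
# whenever their tolerance bands overlap -- which is how a $10 CPM can
# be "the same" as an $11 CPM.

def indistinguishable(cpm_a, cpm_b, tolerance=0.05):
    """True if the +/-tolerance bands around the two CPMs overlap."""
    lo_a, hi_a = cpm_a * (1 - tolerance), cpm_a * (1 + tolerance)
    lo_b, hi_b = cpm_b * (1 - tolerance), cpm_b * (1 + tolerance)
    return lo_a <= hi_b and lo_b <= hi_a

print(indistinguishable(10.00, 11.00))  # True: 10.50 vs. 10.45 overlap
print(indistinguishable(10.00, 12.00))  # False: 10.50 < 11.40, no overlap
```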

    There is a good reason why we have an official exchange rate, why the NYSE reigns supreme, etc. They provide surety upon which to trade commercially. Surely media ratings in each medium is a big enough market to trade the same way.

  12. Daryl McNutt from New Moon Ski & Bike, June 4, 2015 at 10:01 p.m.

    I love all the conversation on this topic. Josh, I think we could put something together for a great forum. Would be fun. Let's talk.
