U.S. Media Research Scene Has More Questions Than Answers

  • January 21, 2016

As of now, only the LPM markets (the 25 largest) are accredited by the Media Rating Council (MRC) for NSI Local Monthly reports. The multiple measurement approaches used for spot measurement in the balance of the 210 markets, including the new so-called “viewer assignment model,” which uses a probability-of-viewing model based on look-alike homes within the area, are currently under MRC review. 

The latest move by Nielsen affects the next tranche of 31 medium-large TV markets, which used a combination of set meters for calculating household ratings and diaries for estimating audience demographics.

Nielsen will continue to use set meters, but demographic data will now be collected using its “Viewer Assignment Model.”
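Conceptually, a viewer-assignment approach imputes who is watching in a set-meter-only home by borrowing viewing patterns from demographically similar ("look-alike") people-meter homes. The sketch below is purely illustrative (Nielsen's actual model is proprietary); every data value, trait, and function name is hypothetical:

```python
# Illustrative sketch of a "viewer assignment" style model. Nielsen's real
# methodology is proprietary; the traits, demos, and matching rule below
# are all hypothetical.

from collections import Counter

# People-meter panel homes with known viewer demographics:
# (household traits, demo of observed viewer)
# Traits: (household size, kids present, head-of-house age bracket)
panel = [
    ((2, False, "55+"),   "A55+"),
    ((4, True,  "35-54"), "A35-54"),
    ((4, True,  "35-54"), "C2-11"),
    ((1, False, "18-34"), "A18-34"),
    ((2, False, "55+"),   "A55+"),
]

def similarity(a, b):
    """Count matching household traits (a crude look-alike score)."""
    return sum(1 for x, y in zip(a, b) if x == y)

def assign_demo_probs(traits, panel, k=3):
    """Estimate the probability that a tuning event in a set-meter-only
    home belongs to each demographic, borrowing from the k most similar
    panel ("look-alike") homes."""
    ranked = sorted(panel, key=lambda rec: similarity(traits, rec[0]),
                    reverse=True)
    demos = Counter(demo for _, demo in ranked[:k])
    total = sum(demos.values())
    return {demo: n / total for demo, n in demos.items()}

# A set-meter home: 4 people, kids present, head of house aged 35-54.
probs = assign_demo_probs((4, True, "35-54"), panel)
```

In practice such models would also weight matches by station, daypart, and tuning behavior, not just household traits; this toy version matches on three household characteristics only.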

The diary-only measured TV markets, the balance of 154 spot markets, had their MRC accreditation officially withdrawn in 2010, which in the opinion of many was far too late. 

(Accreditation was withdrawn after the MRC found the company had not mailed enough of the diaries to households to generate what it considered a sufficient sample size.)  



A new metering technology called the Code Reader, which passively and continually collects tuning data (not viewing data) via watermark detection, will be introduced in 14 of the 154 “diary markets” to replace diaries in those 14 markets.  That leaves a balance of 140!  

Radio audience measurement: average quarter-hour ratings in today’s media environment?  You have to be kidding, right?  Worse! 

More than 10 years after its introduction, the PPM, developed by Arbitron and now part of Nielsen Audio as a result of the ignorant decision of the FTC, still has only 25 of the 45 submitted markets MRC accredited. 

For all its faults, PPM is far superior to diary measurement and can provide average minute and average commercial minute audiences, which should be today’s currency for this terrific medium to harmonize its performance metrics against TV. 

In terms of critical local market multimedia ratings, notably for newspaper audiences and including online reading, Scarborough is no longer MRC-accredited. 

Its readership data and reach/frequency basis were always marginal for some at media agencies. 

The Internet digital media measurement arena in the U.S. and abroad has been wrestling with three key issues: viewability, bot fraud, and ad blocking. 

Led by the IAB in full cooperation with the MRC, joint communiqués have been issued regarding measurement guidelines to move the industry toward more meaningful measurement and reporting. 

Your “MRC Accredited” digital ratings measurement provider will help interpret these industry advisories, including: what is measured and at what level of rigour; whether different viewability levels can be selected; and whether bot traffic is removed. 

Of special note is that White Ops, a bot fraud detection company, has submitted its technologies and techniques to the MRC for accreditation.  It should be applauded!  

In the U.S., media audience ratings are generally delivered by unregulated monopolies. They typically hide behind anti-trust regulations to protect their positions as sole media currency providers. 

This structure is essentially due to the lack of JICs (Joint Industry Committees) in the U.S.  Contrary to completely misguided industry belief, and as can be confirmed by any experienced anti-trust lawyer, JICs are quite legal here.

In most other developed countries, JICs serve the entire industry user base by funding, directing and managing the various media currency ratings research on behalf of the entire industry, typically at high levels of technology and quality, and cost-effectively.  

There is actually one JIC that operates in the U.S.: the TAB (Traffic Audit Bureau), which measures most major OOH and DOOH formats across major markets at the Eyes-On, or exposure, ratings level with acceptable quality and at extraordinary cost efficiency compared with other media ratings services in the U.S. 


Without JICs in the U.S., the MRC (Media Rating Council) plays a very special role, and its critical importance to every part of the programming, editorial and advertising business cannot be overestimated. 

“The objectives or purpose to be promoted or carried on by Media Rating Council are:

 To secure for the media industry and related users audience measurement services that are valid, reliable and effective.

 To evolve and determine minimum disclosure and ethical criteria for media audience measurement services.

 To provide and administer an audit system designed to inform users as to whether such audience measurements are conducted in conformance with the criteria and procedures developed.” 

We believe the entire industry should be pushing the MRC’s members and board to dramatically increase the MRC’s scope and capabilities. 

They also need to demand that media currency ratings accreditation be awarded only to services that include measurement at the ad or program exposure (Eyes-On, viewed or heard) level, per the ARF model outlined in “Making Better Media Decisions.”

Media measurement across all media, whether digital or traditional, has moved toward various integrated approaches, sometimes called hybrid media research rather than single source measurement. 


These new techniques typically use a combination of passive electronic measurement, traditional survey techniques, census-type data, digital behavior data streams, etc. 

Such integrations help maximize the value of a wide array of multiple surveys together with big data (digital behavior) to drive more rigorous, granular reports, especially for cross-platform ratings, increasing the scope and value of the research. 

Certain of these new hybrid approaches are referred to as pseudo-single source databases.  Whatever the surveys/integration techniques, the entire process requires independent industry accreditation via MRC. 

A brilliant example of such a hybrid approach involving the creation of a pseudo single source database is Project Blueprint. 

It provides cross-platform measurement of TV, personal computers, smartphones, tablets, and radio.  It originated under the auspices of CIMM, the Coalition for Innovative Media Measurement, founded in 2009 by TV content providers, media agencies and advertisers to promote innovation in audience measurement.

It was subsequently embraced and developed by ESPN with the support of comScore and the use of PPM data.

As Andrew Green, Global Head of Audience Solutions, Ipsos Connect, suggested recently: “Old-style people-meters are no longer sufficient to measure television audiences in all their forms.  Surveys can no longer capture total readership behaviour.  The future is hybrid.”  

In summary, media measurement in the US is a mess and has been so for too long. 

The two takeaways from this POV are simple: two websites: 






Should these U.S. organizations merge to form a U.S. Super JIC?  Our industry’s global corporations have long reaped the benefits of media ratings currencies from JICs, based on superior media research at the most competitive levels of cost effectiveness. 

Many of these companies operate in the U.S.  So what are they waiting for?
5 comments about "U.S. Media Research Scene Has More Questions Than Answers".
  1. Ed Papazian from Media Dynamics Inc, January 21, 2016 at 9:22 a.m.

    I agree with just about everything you say, Tony, especially about getting a far better indicator of actual ad exposure. The problem arises about funding. Will advertisers and agencies be willing to substantially increase their share of the funding for media research services or will they continue to pass the buck to the media, who, of course, are terrified of real, not assumed,  measurements of ad exposure? In any event, I fully support the idea of getting the MRC more actively involved in true validation efforts, not just the fine policing job it is now doing. 

  2. dorothy higgins from Mediabrands WW, January 21, 2016 at 12:42 p.m.

    Amen. Spot-on. 

  3. Ed Papazian from Media Dynamics Inc, January 22, 2016 at 9:32 a.m.

    Tony, it's interesting to see the relatively muted response to your fine article and call to action. I hate to say it, but these days the "data" people are almost totally in charge and they oppose anything that will slow its flow---even if the data is flawed or dangerously misleading. Of course, everyone will say that the "data" should be as accurate as possible, but making it so---if it cuts off part of the flow or slows it down---is a no-no. It's sad, but to many of our colleagues, what we have now---or what is being promised by the new breed of "data scientists" and "data architects"---is OK, and please stop scaring us about what pitfalls may be underlying our precious "data".

  4. Tony Jarvis from Olympic Media Consultancy, January 22, 2016 at 3:31 p.m.

    Ed, as always your perspectives are, as Dorothy would say, Spot-On!  However I will continue to scare the industry regarding the accuracy and lack of harmonization of the media ratings "data" and the multitude of inappropriate supplemental "data" integrations that are so often flawed but which appear to offer added insights. The direct effects on the billions invested in media should be very scary, at the very least to advertisers.  I suspect we could remake "The Big Short" on our industry, which would reveal the real truths of what the so-called data engineers are doing versus those of us who are truly Research Architects.  Only when the "data" people start asking and fully understanding, "Is the data any good? Is it comparable? Can it be harmonized?" before even commencing any analytics, can advertisers have confidence in the "data" reports presented.  
    Perhaps this discussion will at least see a massive increase in advertiser members of MRC & CIMM?!  Regrettably, it appears that their agencies are part of the problem, and shareholders should expect nothing less?

  5. dorothy higgins from Mediabrands WW, January 22, 2016 at 7:40 p.m.

    Thinking of the billions spent in/on various digital platforms, with the reality that as much as 50% might be unviewable and up to 30% lost to fraud, compounded by the inability to capture uniques across aggregators to control frequency capping, one must wonder if we have become inured to inaccuracy? It comes together in its most glorious/egregious fashion as these digital data dangers are applied to local cable TV "measures" to create aggregated "national reach" of behaviorally defined audience targets.  Many of us refer to this as OTV (Oz Tee Vee - at first you don't see it, now you still don't see it). 
