Media Ratings Auditor To Assess Agency Media Audits, MRC Forms Working Group

In a surprise move, the ad industry’s media ratings watchdog -- the Media Rating Council -- is poised to probe the people big brands hire to probe their agencies’ media performance. The MRC has formed a working group to begin reviewing the methods and data used by media auditors that marketers retain to evaluate the performance of agency media plans and buys.

The move, which comes at a time when big marketers and agencies are embroiled in a wide range of fraud and “transparency” issues, represents a pivot for the MRC, which was created by the industry as a self-regulatory body to audit and accredit media ratings methods.

The MRC was asked to form the working group by media suppliers, because they are often caught in the middle between marketers and agencies when the data they use differs from what the suppliers use as the basis of their sales.



“They’re asking to set some ground rules for how agency media audits are done,” MRC CEO and Executive Director George Ivie confirmed in response to a MediaDailyNews query.

“You might ask why the sellers care about that,” Ivie continued, explaining that the big issue is the type of data used by auditors as part of their evaluation process. Ivie said that the auditors frequently utilize data processed by third parties that may differ from the original data from the ratings provider, creating confusion and discrepancies between the original media buy and how it was posted.

Ivie said the working group is poised to have its first meeting soon and it will include representatives from all sides of the marketplace -- agencies, advertisers, media suppliers and auditors -- and that the goal will be to establish uniform guidelines that reduce the “variability” of media ratings data used as the basis for agency media audits.

“Why would they ask the MRC to do this?” Ivie said rhetorically. “We don’t do agency audits and we don’t want to do that, but we have a very transparent audit process and we were asked to organize this.”

Ivie said the extent of the MRC’s role likely would be to simply “get people around the table to set some ground rules.”

It wouldn’t be the first time the industry asked the MRC to set some ground rules. The MRC has become the industry’s default arbiter for establishing guidelines not just for auditing media ratings, but for what constitutes key components for defining what is being rated. The most notable example is the MRC’s guidelines for desktop and mobile ad “viewability,” which have become the industry’s de facto standard.

Ivie said it’s unlikely the MRC or the new working group would issue standards and said the goal is simply to help educate the industry on what happens when media ratings data are processed by third parties and end up in media performance audits.

He said agencies apply that data in a wide variety of ways, with differing interpretations, and they may not all utilize the same rigorous standards.

He likened the process to corporate financial reporting, noting that when a corporation releases a “financial statement,” some of the most meaningful information is contained in the “notes” to the statement provided by the certified public accounting firm responsible for preparing the statement.

Ivie said the project likely would last for a few months and that the main goal is simply to give all sides of the industry a better sense of the risks and volatility of the data used to post and audit media buys.
8 comments about "Media Ratings Auditor To Assess Agency Media Audits, MRC Forms Working Group".
  1. Ed Papazian from Media Dynamics, April 4, 2016 at 9:52 a.m.

    I agree, Joe, this comes out of left field. However, it may prove to be a positive development when it comes to evaluating the media buying skills of an ad agency or media buying entity. When syndicated CPP or CPM estimates, which are based on averages, are compared with "actual" agency performance for national and spot TV, one is always on dangerous ground. Often, the buyers operate under circumstances which cause them to perform at above-average CPMs. Unless this is accounted for, the comparisons can paint a distorted and unfair picture. I don't see the MRC's initiative having much to do with media planning, however, as this is a highly subjective area and decisions are rarely made strictly by the numbers.

  2. John Grono from GAP Research, April 4, 2016 at 10:25 a.m.

    Ed, the issue here in Australia is the 'depth' of the pool that the audit benchmark is calculated on. You have FMCG brands targeting Grocery Buyers with Children lumped in with People 16-24 to calculate the CPM. Lead time on the buy is also critical (greater lead time generally means a lower CPM). Buying objectives are also critical -- one may be a reach-maximisation buy (higher CPM) and another may be a GRP-maximisation-at-lowest-cost buy. By the time you filter the pool on those three key factors you often end up being the only active buy! The only option then is to put as many active buys as you can back into the pool, which produces a distorted average.

  3. Martin Albrecht from CROSSMEDIA, April 4, 2016 at 11:18 a.m.

    This is fantastic: it is an audit of auditors of neutral advisers. Is there still anybody out there who thinks that trust between advertisers and agencies is NOT an issue?

  4. Ed Papazian from Media Dynamics Inc, April 4, 2016 at 12:39 p.m.

    John, having been involved in several "audits" -- as an auditor -- myself, as well as having had numerous discussions with auditors and their clients, I don't believe that most advertisers are asking the outside guys to evaluate the way their brands target media or the importance they place on particular demographic or other targets. It's mostly a question of the skill the agency exhibits in negotiating the buys, the level of transparency, how well the buys are serviced, and the quality of the post-buy negotiations that account for program schedule changes, audience guarantee shortfalls, etc. Most of the time all of the activity in a given year or past few years is evaluated, with the auditor referencing one or more outside sources when it comes to judging the CPMs paid by the advertiser. This is where considerations such as lead time or advertiser-dictated buys come into play, as these often are the reasons for above-par CPMs. In other words, it is often the advertiser's own fault, not the agency's or its time buyers'.

  5. Ed Papazian from Media Dynamics Inc, April 4, 2016 at 1:12 p.m.

    Martin, this has little or nothing to do with advertiser "distrust" of agencies. They are simply trying to help those involved organize their thinking a bit better. I wish all involved good luck.

  6. John Grono from GAP Research, April 4, 2016 at 5:23 p.m.

    Thanks Ed. I wasn't trying to convey that the external auditor was 'auditing' strategy, demographic targets, lead times, etc. Audits here tend to be all about the bottom line -- having a lower CPM than the pool. My issue is that the CPM of the pool is a blunt instrument -- cost paid divided by audience '000s. Lead times can affect the cost paid (short lead => higher cost), whereas audience should be compared on the same basis. People 25-54 is the most-bought demographic here (around 1-in-4 TV spots). Its pool is robust. But if you are buying Males 16-39 and its pool is too small, your buy may be judged against another, broader demo such as People 16-39 or Males 25-54. That's apples and oranges, and it is simply unfair.

  7. Ed Papazian from Media Dynamics Inc, April 4, 2016 at 6:31 p.m.

    John, in general, we don't have a problem with the depth of the pools used to gauge the average CPMs. I estimate that approximately 45%-50% of the TV buys are bought based on 18-49 demos, while the 25-54 ratio is about the same. That leaves a small handful of 18-34 and the incredibly absurd 35+ "demo" negotiations as exceptions to the rule. Of course, in many cases these age definitions are further defined by sex. Still, a service like SQAD's NetCosts or our own TV Aces is quite stable as regards either of the 18-49 or 25-54 targets. My issue is not so much with the general validity of these industry-wide CPM sources but, rather, with the tendency to ignore most or all of the mitigating factors and judge any buy that exceeds the relevant averages to be suspect, when, as you and I both agree, this may not be a fair assessment.

  8. Peter Cornelius from Ebiquity, April 5, 2016 at 6:14 p.m.

    Hi John, I value your input as a respected media researcher, but I really wish you would check your facts before you go public -- clearly you don't know everything. At Ebiquity Aust, we NEVER benchmark P16-24 against GB with Children as you state. Why would we -- it's a totally different demographic mix and the TV buy would be nowhere near similar. Instead, we match client demos to other clients where the demos are the same or a close alignment. The example you use of P16-24 is matched to P16-39. Having said that, I'm not aware of any clients targeting P16-24 on TV these days, so it rarely comes up in analysis or conversation. If it did, we would highlight this given the niche nature of the demo. The depth of our pools (multiple by demographic) is massive and more than large enough for detailed analysis and interrogation. And the work we do -- we use the term benchmarking, not auditing -- is not JUST CPM. Sure, it's one metric we measure, but we also measure quality of delivery, attainment of reach and other goals, scheduled audience profile, position in break AND, as you reference, lead times. We provide clients accurate data on what impact late lead times and late changes have on schedule delivery and encourage them to give their agencies as much time as possible to maximise their schedule delivery -- cost, audience performance and delivery of strategic goals. Mate -- you are better than that, please don't put out false information.
