The Next Five Years: A Conversation With The MRC's George Ivie

After 22 years as its executive director, the board of the Media Rating Council last week renewed George Ivie’s contract for another five years, even as the organization embarks on some potentially consequential new areas of auditing and standards-setting, including “outcomes-based,” cross-media, CTV, and in-game advertising measurement. The extension comes as a variety of industry players have been pushing their own standards, “certifying” suppliers and taking actions that might seem to undercut the role of the MRC, which was created as a neutral, independent industry watchdog following Congressional oversight and a consent decree with the U.S. Department of Justice to ensure audience measurement is even-handed and to head off regulation and law-making of it. In the following Q&A, Ivie explains the reason for the timing of his new contract, the MRC’s relationship with various parts of the measurement supply chain, as well as his main areas of focus going forward.



MediaPost: Your contract doesn't come up for renewal until the end of the year. Why are you announcing this now?

George Ivie: I’ve been with the MRC for quite some time. I started on January 1, 2000, and I think the MRC leadership appreciated many of the things I’ve done. But we’re facing a lot of turmoil in the industry right now, and I think the MRC board wanted to step forward and lock in our leadership as we go through this period. And I thought that was a wise decision.

MediaPost: Speaking of turmoil, we’re seeing a number of entities in the industry trying to set standards. Does that conflict with the MRC’s role as the industry’s self-regulatory authority for measurement standards and accreditation?

Ivie: It’s not unusual that others, especially in the agency marketplace, want to use metrics to define their relationships with customers. If you go back to when we set our viewability standard, that was a tremendous change for the marketplace. And the marketplace had a lot of pain with that. At that time, GroupM came forward with a standard that said, “Well, we don’t want to just use two seconds for video. We want to use a longer period of time. We want to set more aggressive standards than MRC’s minimum standards to take better care of our clients and differentiate ourselves from other agencies.”

And to me, this feels a little bit like that. We have already set a standard for accounting for TV off and continuous play, etc. In fact, a little while ago we issued a press release on all of our television activity. It talked about our Nielsen audit and our Comscore audit and then there was a long list of CTV vendors we are auditing. And with every single one of those vendors, we’ve enforced viewability and continuous play and on/off accounting for those vendors.

This happens to us all the time. People are always trying to engineer around what we do here and there. What we try to do is stay the course. Keep it central. Keep it consistent and do the best job we can, and that eventually carries the day.

But you can’t just create your own metric and call it “certified.” Or anoint vendors to be the best vendors on the planet when there’s no basis for that. The MRC has a structure. We have a CPA audit on it, mandated by Congress. Our members review that. Layered over that is the independent staff of the MRC. There are seven of us who work at the MRC and we don’t work for any media company – buyer or seller – all we ever do is look at research and tell whether it’s in compliance or not.

I don’t want to lose sight of that. It’s a very important part of what we do and it’s part of why I renew my contract, because we still have a lot of work to do. We’re not done. There’s a lot to accomplish. And the independent nature of what we do is now more important than ever.

Every company I’m talking to nowadays is trying to build a panel. And two years ago, they were telling me how useless panels are. Now they’re trying to build panels, because they need to build cross-media measurement.

MediaPost: It feels like it’s not just media company agendas that are competing, but there are also cultures colliding. There has been a change of power inside the big agency holding companies, with digital natives taking charge, and they just don’t have the patience to wait for accreditation processes the way they did in the past. Does that set up some potential liabilities for the industry?

Ivie: I don’t want to be in front of Congress again. But what’s interesting is that Google and Facebook have drunk the Kool-Aid and they’re really working extensively with us to do auditing of all aspects of their products. And by the way, so is Amazon and Walmart and Snap and Pinterest. There’s a lot of penetration in digital now.

MediaPost: The problem is those companies work with everybody and have multiple agendas and they have the resources to placate everybody.

Ivie: Well, as long as they submit their metrics I can verify them. There’s a lot of devil in those details.

MediaPost: Looking forward, has there been an update to your master plan for the MRC’s long-term agenda? When you say the next five years is going to be a lot of hard work, what should we know about that? You alluded to AI when you announced your contract extension. As complicated as the world is right now, why will the next five years be even harder?

Ivie: First of all, discerning who to trade with is going to be harder than ever. What measurement metrics to use is going to be harder than ever. And I think the MRC is a lighthouse that can help people through that. You reference unintended consequences. Something we look at a lot right now is “modeling bias” and “AI bias.” It’s a big deal for us. It’s not unlike when we were testifying to Congress about whether everybody pushes their buttons on a TV meter. Now you have a different mechanism assigning identity at a unique-person level, and the question is whether that is biased.

We have four big things that we are pursuing right now. One is that we just issued a completely updated in-game measurement standard. It’s a big area of commerce that needs to be nailed down and regulated – the advertising that is going to be in in-game environments and how that is measured. It’s a big area.

Another one is CTV. It’s growing and it’s a hard-to-measure environment. We’re auditing more and more vendors and having more and more impact.

Another one is outcome measurement. We just issued a standard for public comment on that. That’s a big deal. It creates an entirely new area of MRC auditing of outcome vendors.

And the last is cross-media. Products like Nielsen One, Comscore’s CCR. And other infrastructures that are being built by [the Association of National Advertisers] and VideoAmp and all these other vendors out there.

Just those four areas alone are tons of work: trying to standardize, getting the metrics down and reported right, and getting reliability and validity into them.

MediaPost: You alluded to the role of algorithms, machine learning, modeling and AI. I know you already apply judgment in your audits about those things right now. Do you think there’s a need to create industry-standard definitions and best practices for those too?

Ivie: There already are pretty standard practices in place in this area to start with. There are different modeling techniques that can be applied and they are more or less standard. And we know all of those and have experts on them.

And we have built some standards information into things like outcomes and audience measurement and cross-media that deal with AI and modeling, so we’ve already created guidance within each major component. But we haven’t seen the need to come out with, let’s say, an “AI standard” or a “machine learning standard,” because training those models, updating them, having a champion/challenger model – all of those approaches are already pretty standard, and not just for the media business. This goes on in all kinds of businesses. So right now, we don’t have a plan to do that.

MediaPost: What about “attention metrics?” It’s become a big buzzword in the past couple of years, and lots of companies have been leaning in and some of them have been “certifying” and sanctifying suppliers. Is that an area that needs to be vetted?

Ivie: We view it as part of outcomes measurement, because it’s a refinement over and above exposure and we’ve built some of that guidance into our outcomes measurement standard, but there’s more in that area.

It’s a little bit like a few years ago when everybody was talking about “engagement” as a buzzword for the industry and there were going to be all these industry projects to define what engagement means. You could talk to everyone end-to-end and never reach a conclusion about what attention metrics are, so we need to be careful about it. I’m keeping a watchful eye on what’s evolving from a use perspective and we may attack it later, but not right now.

MediaPost: Well, you already have a lot on your plate to fill up another five years, but as much as things have changed in the past five years, they’re probably going to change even more in the next five. Good luck.

7 comments about "The Next Five Years: A Conversation With The MRC's George Ivie".
  1. Jack Wakshlag from Media Strategy, Research & Analytics, June 20, 2022 at 2:06 p.m.

    Joe, great interview with George here. So long as advertisers need an unbiased source to review media measurement, and it provides relief from government intervention, the Ivie led MRC serves a critical role worthy of support and admiration. They do great work. 

  2. John Grono from GAP Research, June 20, 2022 at 7:21 p.m.

    A very good piece Joe & George.

    There are 'hard metrics' (such as whether the TV is switched on), and 'soft metrics', which started with people in the room, then became engagement, which is now morphing into attention.

    Well I must say you had my attention.

    My concern is around attention to what? Traditionally TV has measured program content - largely because they paid for the Nielsen ratings. But the underlying force for attention seems to be attention to the ads.

    As we are all aware, attention to a program is relatively stable during the airing of the program, but attention to the ads during an ad-break is largely determined by the quality and creativity of the ad - which is 100% the responsibility of the advertiser (and its agents). Consider a one-hour programme and the cost of attentiveness measurement. In that one-hour programme there would be scores of ads for which the market now wants measurement. Who will bear the additional cost - the broadcaster, the agency or the advertiser? I know who has borne the load for the past decades.

    Having said that, measuring attention is technically easier with digital ads ... as long as privacy is preserved, lest we get a non-representative sample.

  3. Ed Papazian from Media Dynamics Inc, June 20, 2022 at 9:28 p.m.

    John, there are significant variations in attentiveness---measured by eyes-on-screen---during program content. I have seen the same. However, the focus now is on ads and time selling/buying, which in my opinion is a very limited use of the concept.

    As for who would pay: say one had a panel of 50,000 homes---properly sampled and maintained---it's not an insurmountable added cost to monitor three sets per household with "eye cameras". You would probably augment this panel with a much larger device-only panel and meld the two together---I'm assuming that the device-only panel is selected on a random probability basis as well. Such a system would have no issues picking up every national commercial in every program---no matter how many there were and no matter how cluttered the breaks happened to be. How the data is tabulated would be decided by an industry committee involving both buyers and sellers.

    Sadly, as the sellers are going to pay 75-80% of the cost and they see the rating surveys as sales promotional as well as selling tools, they will block any move that reduces their "audience" levels. Consequently, I see little chance of attentiveness being included---so not to worry. Who wins?---the sellers. Who loses?---the advertisers, who won't pay a dime for attentiveness information---or, for that matter, for any kind of media audience survey. So it's "impressions" it will be, based mainly on device usage---producing hopelessly inflated estimates of "viewing". The difference will be that with larger panels we will be able to analyze bad "viewing" data on a "granular" basis, and "long tail" channels with tiny audiences will be broken out in a statistically "reliable" manner. What more could you want?

  4. John Grono from GAP Research, June 20, 2022 at 10 p.m.

    Agreed Ed.

    One issue with attentiveness is that the 'attentiveness quotient' is generally reported as some type of average - e.g. 78% of the panel were attentive on TV, 72% attentive on digital devices, 58% attentive to OOH, 81% attentive to magazines, etc. (N.B. made-up data as a theoretical example).

    But ads, particularly TV and magazine ads, can be highly variable due to the 'environment' the ad is seen in.

    My reading of the market is that the advertiser wants a greater level of precision than a broad average. We were using broad averages back in the late 1990s/early 2000s in our effectiveness models. Further, we got the creatives and marketers to award Gold, Silver or Bronze to the ad and use that as a variable in effectiveness. Collectively they were explanatory variables but generally lower in the pecking order.

    If I was a broadcaster/seller I would also use the higher number, as I would be selling space in the program and not the response to the advertiser's ad, which is not under my control.

    But as you point out, "Who loses?---the advertisers, who won't pay a dime for attentiveness information---or, for that matter, for any kind of media audience survey" best sums it up. It's a bit like eating the biscuits in the supermarket without paying for them.

  5. Tony Jarvis from Olympic Media Consultancy, June 21, 2022 at 12:37 p.m.

    As one of MRC's greatest supporters, sternest critics and admirers of George Ivie's expertise, leadership and ability to tread on eggshells, hearty congratulations on this contract renewal. Hard earned and well deserved. Toughest job in the industry!
    As George is aware, the "convenient" and extensive confusion instigated by their term, "Viewable Impressions", which is strictly a device measure and has no audience dimension, is beyond regrettable. The term "impressions" in media has had a persons/audience measurement and consequently a potential target audience exposure dimension (versus merely a rendering on a device - possibly never seen!) for the last 50 years of media measurement. MRC needs to change "viewable impressions" to "content rendered counts".
    As for the critically important "attention metrics", which are persons based, I believe John Grono and Josh Chasin of VideoAmp are on the same page. Attention measures, whether based on "Eyes-On/Ears-On" (seeing/hearing) or a higher level of cognizant persons exposure (looking/listening), are very different for programming or content versus the ads, even when the ads are adjacent to the content. Consequently, attention measures for content should be considered a media "contact" metric (or media currency), while attention measures for an ad should be considered an ad impact measure.
    Regarding the MRC's Outcomes Standards, there can of course be no outcomes if the content is not rendered on the device to specifications. However, just as fundamentally, there can be no outcomes if the ad has not generated attention, or at least contact. I do not believe this is explicit in these Standards. Time for a revision before release?

  6. Ed Papazian from Media Dynamics Inc, June 21, 2022 at 2:35 p.m.

    What's laughable about thinking that TV "impressions" represent "opportunities to see" an ad message is that 30% of the time the "viewer" isn't even present. Some opportunity.

  7. Tony Jarvis from Olympic Media Consultancy, June 21, 2022 at 2:47 p.m.

    Exactly! As evidenced by a water pressure study in Quebec City during a Stanley Cup playoff game many years ago, identifying when the commercial breaks were.
    In today's marketplace, "impressions" must always be defined.
