Commentary

Making 'Making Measurement Make Sense' Make Sense

Recently the IAB, the AAAA, and the ANA have joined forces in an initiative called "Making Measurement Make Sense" (MMMS).  If you're reading this column, chances are that isn't news to you.

I'd been meaning for a while now to use this space to provide a measurement company perspective on MMMS.  As the Chief Research Officer at comScore, naturally it's a topic with which I'm pretty highly engaged.  Then two weeks ago, I had the privilege of speaking at the OMMA Metrics Conference in Manhattan.  I knew that the IAB's Joe Laszlo was scheduled to provide an update on MMMS (you can catch Joe right here).  So I figured I'd use my slot to provide the "vendor perspective."

So you can watch my presentation right here.  Or, keep reading, and I'll bottom-line it for you.  Or, hey, why not do both?

First off, I want to stress that I am not rebutting the MMMS initiative; rather, I am participating in the dialogue.  And I mean that quite literally; I've had a constructive, ongoing dialogue with Randall, Sherrill, and Joe from the IAB since the initiative first became public.  We're all adults, colleagues, and, I'd like to think, friends.  But I do believe that the input from those of us who actually make the metrics is essential if this initiative is to be successful, and I applaud the IAB for welcoming that input, even on the points where we might bring differing perspectives.

My presentation may be summed up thusly: three pain points, a plea, and a prescription.  To wit:

Pain point #1: The complexity of the ecosystem. One of the objectives of MMMS is to "Define standard metrics and measurement systems that are transparent and consistent to simplify the planning, buying and evaluating of digital media."  But as that great media planner Albert Einstein famously said, "Everything should be made as simple as possible -- but not simpler."

We've all seen the graphic depictions of the digital ecosystem.  In contrast, the TV ecosystem is almost comically simple; advertisers give money to agencies, who in turn buy TV time. Unfortunately, there is no magic number that will make the process of planning, buying and evaluating digital media as simple as TV, because that process is not a simple process.  The ecosystem chart is a map of that process, and the complexity of the ecosystem itself is the problem.  It is one we need to deal with head-on.  

This is important.  When advertisers bemoan the state of digital by saying "if only there was one magic number that made digital as easy to buy as TV," some hear that as a measurement problem.  But I don't think it is.  I think it is a complex-ecosystem problem.  We're very good at the science and algorithms behind efficient delivery of impressions to cookies in real time, but that very sophistication is part of what makes the ecosystem so intricate. It is no wonder that some advertisers, with money to spend on a desirable, engaged audience, balk at digital's complexity.

Pain point #2: We need metrics to quantify the differential value of impressions.  At OMMA I probably overused the vernacular of "engagement" (much to the chagrin of one Tony Jarvis) to describe this concept.  I've also heard it called "Page Value," but that term doesn't allow us to expand the application to other types of inventory (e.g. videos), let alone across media.  Basically, in media math we've always had two core metrics: how many (reach) and how much (frequency).  The product of these two metrics is GRPs.  But the concept of GRPs a priori assumes that every piece of inventory -- each exposure, each impression, each media vehicle -- is of equal value, which we all know intuitively is not the case.
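For readers who don't live in media math every day, the arithmetic above can be sketched in a few lines of Python. The function names are mine, not an industry standard; the identity itself (GRPs as impressions per 100 members of the target population, or equivalently reach times average frequency) is the standard definition referenced later in this column.

```python
def grps_from_delivery(impressions, population):
    """GRPs computed a priori from delivery: impressions per 100
    members of the target population."""
    return impressions / population * 100

def grps_from_reach_frequency(reach_pct, avg_frequency):
    """The classic media-math identity: GRPs = reach (%) x average frequency."""
    return reach_pct * avg_frequency
```

For example, 1.2 million impressions delivered against a target population of 1 million people yields 120 GRPs; reaching 40% of that population at an average frequency of 3 yields the same 120.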

So I think a fine outcome of MMMS would be to determine a single unified construct for differentially valuing inventory units -- exposures, impressions -- based on some measure of the quality of the experience the consumer has with the ad.  What's the relative value of a view of a 30-second TV spot versus the same spot in an online video, versus a banner ad, versus a Facebook "like"?  This of course becomes the cross-media Holy Grail that MMMS seeks.
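To make the "differential value" idea concrete, here is a minimal sketch of how a single unified construct might work. Everything here is hypothetical: the weight values are illustrative placeholders I made up for the example, not measured or industry-endorsed figures, and the function name is mine.

```python
# Hypothetical quality weights per inventory type -- illustrative
# placeholders only, not measured values.
QUALITY_WEIGHT = {
    "tv_30s_spot": 1.00,
    "online_video_30s": 0.90,
    "banner": 0.20,
    "facebook_like": 0.10,
}

def weighted_grps(impressions_by_type, population):
    """Quality-weighted GRPs: each impression counts in proportion to the
    assumed value of the consumer's experience with that ad format."""
    weighted = sum(QUALITY_WEIGHT[fmt] * count
                   for fmt, count in impressions_by_type.items())
    return weighted / population * 100
```

Under this construct, a million banner impressions would earn far fewer weighted GRPs than a million 30-second video views -- which is exactly the cross-media comparison the Holy Grail metric would need to support.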

Pain point #3: Pre-buy versus post-buy.  In TV, a single currency is used for planning, buying, and post-buy evaluation.  But that's because TV lacks empirical delivery data.  Since here in the digital landscape we do have empirical delivery data, perhaps print provides a better construct for us.  In print, MRI (magazines) and Scarborough (newspapers) are the planning and pre-buy currencies, but campaign delivery is evaluated after the fact based on circulation.  Online, we have ad server data to tell us how many ad impressions were delivered after the fact, and this is a good thing.  So the layer of complexity that different pre-buy and post-buy metrics introduce is something we'll just have to navigate together.

The plea: We do not need a metrics tribunal.  Here is the point about which I find myself most passionate.  The MMMS initiative calls for a "measurement governance model."  Metrics friends, this is a very bad idea.  The Internet is the most measurable medium, and as I wrote in my very first appearance in this space, back in 2007, that makes us immutably the medium with the most measures.  Metrics are in fact a vital and thriving part of the ecosystem, and like all other sectors of that ecosystem, we thrive on innovation and competition.  The last thing anyone needs is a metrics tribunal.  Such a thing would be anathema to innovation.  No one would suggest a DSP tribunal, or an exchange tribunal; such a notion would be chilling.  Despite the apparent comfort of One Magic Number, such a tribunal would be equally chilling in the metrics space.

Besides, if what we need is a tripartite group (advertisers, agencies, media) to audit and accredit measurement services, we already have one: the MRC.  And if your company wants a seat at that table, George Ivie tells me it's still just $12,500 a year (the best deal in media).

The prescription: Improving the narrative.  So what should we do? I think before we set about "fixing" measurement, we need to make sure that the stories we tell are in place. Metrics enhance, embellish, and drive home the compelling stories we tell about the medium.  But we need to make sure we are telling the right stories. Have we made the case for digital in as simple and compelling a way as the case is made for TV, or for magazines? I fear that we have not; that we have let the organic growth we've been fortunate enough to enjoy in this still-fast-growing space blind us to the need to tell stories so simple my mom would understand them -- stories about why advertisers simply have to be here.

Here's the funny thing about metrics.  If you get the stories straight, the metrics will beat a path to your door.  But first let's make sure we've written the stories; then we can all live happily ever after.

6 comments about "Making 'Making Measurement Make Sense' Make Sense".
  1. Guy Powell from ProRelevant Marketing Solutions, April 5, 2011 at 6:01 p.m.

    Interesting comments. Your points really illustrate how important detailed data collection and measurement are in assessing your marketing effectiveness. But it also shows the complexity of trying to measure marketing effectiveness. There isn't just one magic number, nor should there be. It's in the differences that marketers can drive innovation to deliver extraordinary results.

    Thanks for the article.

    Guy Powell
    http://www.ROIofSocialMedia.com

  2. Joshua Chasin from VideoAmp, April 6, 2011 at 10:34 a.m.

    Thank you, Guy.

  3. Ignasi Pardo from consultant free lance, April 7, 2011 at 2:48 a.m.

    @comms_planning

    "core metrics: how many (reach) and how much (frequency), product is GRPs."
    "to determine differentially valuing based on some measure of the quality of the consumer experience"

    Metrics Insider mediapost http://bit.ly/dJH2bx #MediaMath

  4. Nick Drew from Yahoo Canada, April 7, 2011 at 12:12 p.m.

You raise some very interesting points here, and it's great that there is this kind of dialogue going on; but I can't help feeling that to some extent your own experience and role have coloured your view.

While one of the roles of measurement is to give experts like us a much better understanding of the delivery and efficacy of online ads, another key part it has to play is in making online advertising easily understandable to one of our key target audiences – planners and marketers less au fait with online as a medium. Many of this audience are planning multi-channel campaigns, and they *need* something that allows them to easily compare online to TV, radio, print and so on. While we can probably agree that a simple idea of R&F is far from ideal, I'm a firm believer that it's a first step to making online comparable to TV – because if we can't make online comparable to TV, as a medium it's unlikely to surpass TV, either in terms of spend or marketers' perception of its value.
    There’s no doubt that this will have to incorporate some measure of quality – to split the 30s pre-roll video, or the 15s interaction with an in-ad game, from the below-the-fold static gif – and your point about pre-buy currency vs post-buy measurement is spot on. But I think we have to accept that our core demographic is going to look first at R&F when it comes to buying ads.
    As an aside, having taken part of a discussion panel with your colleague Gian for the IAB Europe, it’s interesting to note that his presentations do seem to diverge somewhat in some of these details from your take!

    And to your penultimate point about not needing a tribunal, while you may be correct in the letter, in the spirit (or at least how I’ve interpreted it!) you’re well wide of the mark. With comScore only one of several metrics companies, and offering only one of many (paid-for) measurement solutions, there has to be consensus in the industry of what metrics should be regarded as the currency. Impressions are pretty much meaningless, as we know, and buyers want to talk about reach; at the very least the industry needs to agree on the preferred metric and a minimum standard to which these metrics should conform. Otherwise, while we should be presenting a united front to bring more dollars online from offline channels, we’re squabbling amongst ourselves as to what we should be pitching.

  5. Nick Drew from Yahoo Canada, April 7, 2011 at 12:17 p.m.

    Curses - I knew there was something else important to add!
    ...which is that you do realise that while you're talking about making online measurement rich, and not trying to trade down to the level of TV buying, comScore has just launched its AdEffx tool in Europe? Which, and I quote:

    "identifies the number of people exposed to the online ad and give information about audience composition" and "is designed to allow clients to compare offline and online media plans by using the same measures that are used for TV – reach, frequency and gross rating points (GRPs)". At present it doesn't include the more qualitative or behavioural tools, apparently, sticking just to Reach, Composition, and GRPs.

    Which mostly seems to run counter to the points you make in your column(?)

  6. Joshua Chasin from VideoAmp, April 15, 2011 at 3:59 p.m.

Hey Nick. I think you may have concluded from the column that I'm in the "R&F doesn't work for digital" camp. Very much not the case; I've even written about it in this same space. I've heard many of the digerati argue that GRPs are an antiquated "analog" metric; but as long as you know the number of impressions and the population size of your target, GRPs are a priori known (impressions/population x 100). I'm coming from a place of accepting the efficacy of reach, frequency, and GRPs as a given, and moving on from there.

    As far as the notion of evaluating impression quality, I think that is important even if you took digital out of the mix; a planner still wants to know the relative value of reaching a consumer in print versus with a TV spot.
