Commentary

Measurement That Really Counts

Marketers have long used impressions to measure the value of a campaign, no matter the medium; everything gets measured the same way. But that thinking is crumbling against evidence that measuring impressions is a bit of a shell game.

A recent study showed that 63% of TV impressions were ignored (YuMe and IPG Media Lab study, May 2011). Only the "successfully delivered" impressions that actually reach their target lead to a desired result (e.g., branding, awareness, conversion) and contribute to a return on investment; the rest is waste. Buying the "opportunity" to have someone see your media has little to do with whether anyone actually engages with your product.

Marketers are aware of this, even without a study. They are skeptical about the value of impressions, yet are stuck in the "impressions" trench. Countless out-of-home media buyers and planners have told me that they customarily cut the number of impressions claimed by media suppliers by one third to one half and recalculate the CPMs. Because there is no standard, their gauge is subjective. Typically, planners cut a greater percentage when they are less familiar with the vendor or medium. Of course, this gut calculation makes some programs appear to be more or less efficient, and influences what media they recommend in their plans.
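The gut adjustment those planners describe is simple arithmetic: trim the claimed impressions by a subjective factor and the effective CPM rises proportionally. A minimal sketch, with hypothetical numbers (none of these figures come from the column):

```python
# Sketch of the planners' subjective discount described above: cut the
# claimed impressions by some fraction, then recalculate the CPM.
# All dollar and impression figures are illustrative assumptions.

def adjusted_cpm(cost: float, claimed_impressions: float, discount: float) -> float:
    """CPM after discounting claimed impressions by `discount` (0 to 1)."""
    effective_impressions = claimed_impressions * (1 - discount)
    return cost / effective_impressions * 1000

# A hypothetical $10,000 buy claiming 2,000,000 impressions has a
# face-value CPM of $5.00; a planner trimming one third of the claimed
# impressions is really paying a $7.50 CPM.
print(adjusted_cpm(10_000, 2_000_000, 1 / 3))  # 7.5
```

Because the discount is a gut call rather than a standard, two planners can price the identical buy very differently, which is the article's point.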

Not all campaigns can or should be measured the same way. A better way to determine a campaign's effectiveness is to track whether it acquired consumers, and one way to start is to measure recall. Especially for non-traditional media, such as out-of-home, recall is a far more meaningful gauge of campaign success. Research shows that high recall leads to more sales.

Rather than arbitrarily discounting claimed impressions, why not use available data on aided and unaided recall as a starting point to weigh effectiveness from one media element to the next? If one program delivers 10% unaided recall while another delivers 20%, the more efficient program might justify a higher CPM.
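To make that comparison concrete, here is a minimal sketch, again with hypothetical CPMs (the 10% and 20% recall rates are from the column; the dollar figures and the cost-per-recall formula are illustrative assumptions):

```python
# Compare two programs on cost per 1,000 consumers who actually recall
# the campaign, rather than on raw CPM. Dollar figures are hypothetical.

def cost_per_recalled_thousand(cpm: float, unaided_recall_rate: float) -> float:
    """Effective cost per 1,000 consumers who recall the campaign."""
    return cpm / unaided_recall_rate

program_a = cost_per_recalled_thousand(cpm=5.00, unaided_recall_rate=0.10)
program_b = cost_per_recalled_thousand(cpm=8.00, unaided_recall_rate=0.20)

print(f"Program A: ${program_a:.2f} per 1,000 recalled")  # $50.00
print(f"Program B: ${program_b:.2f} per 1,000 recalled")  # $40.00
# Program B's CPM is 60% higher, yet its recall-adjusted cost is lower.
```

On this view a pricier program can still be the more efficient buy, which is exactly the argument for recall-based benchmarks.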

The downside is that many marketing services companies fear this level of scrutiny and won't lead a charge that might expose their weaknesses. Out-of-home agencies are best positioned to lead this innovative thinking by requiring vendors to research more programs and build category benchmarks. These benchmarks can feed predictive ROI models applied at the point-of-view phase, when agencies recommend the most efficient programs to their clients.

Measuring recall is a particularly important consideration for the integrated campaigns that agencies are now in the business of delivering. If you know there is media you can use that leads to activation, even if it's twice the CPM of more traditional avenues, that media should be evaluated through a different lens, right?

Tags: commentary, roi
2 comments about "Measurement That Really Counts".
  1. Jim Dugan from PipPops LLC, August 24, 2011 at 11:44 a.m.

    Good article, Sherry.

    For some reason it reminded me of the fact that the 0-3 year period of a baby's life is, by far, the most important of her life on earth.

    (Additionally, personally I think, at least, the year before the baby is born is even more important.)

    So, to my comment: what is our mindset once we have developed a potentially successful program and process to achieve that, plus an entire understanding of the program and the results desired on the client's part (and the client fully in agreement)?

    Are we doing enough with our entire set-up, including the intended "end result" for exactly what we're trying to achieve? Is our program and entire strategy and direction correct in the first place?

    If we know and use those "pre-program" answers to lay the foundation, the ground work (the first 0-3) and follow our designed plan (assuming that the client understands exactly what we're looking to achieve and exactly how we're going to go about it and has agreed to everything), then there should be no fear of any level of scrutiny - by anyone.

    However, scrutiny or other means to understand "what's happening" as the program unfolds is critical. Scrutiny isn't a bad thing and is not only important, but necessary.

    If the program needs to be tweaked for the next run, we'll know where and how, and so will the client.

    At least we've been honest with ourselves, the client, and the results every step of the way.

    And, most importantly, we've learned and so has our client, and learned much ~ Priceless ~

  2. Lucy Spencer from spencerCreates, September 6, 2011 at 5:13 p.m.

    Great point - I like the "shell game" analogy. The numbers can be pretty flexible depending on who is looking at them; at the end of the day, it's still more about the action than the viewing.

    I would even go a step further and agree with the research team that found a delay factor in their measurements. How long after the impression does a viewer become a customer? How many impressions over what period of time does it take to generate recall?

    It just goes to show that my SEO clients are correct. The ROI isn't in the number of followers or friends; it's in the value of the connections you make with them.