Unreachable Reach And Frequency

It is a shame that after so many years of advancement in digital media planning and buying, it is still such a pain to pull USEFUL reach- and frequency-related stats from ad servers. Granted, major ad servers like DoubleClick and Atlas all feature reports that give you reach and frequency numbers in some form. Unfortunately, the useful insights you can glean from their standard/free offerings are quite limited. If the standard reports on impressions, clicks and conversions are the benchmark of what a campaign report should look like, reach and frequency reports fall far short of that standard in two areas:

First of all, standard ad-server reach and frequency reporting does not always allow the kind of slicing and dicing that planners/analysts need to understand campaign performance. Rather, standard reporting in this area generally contains only campaign-level information at canned up-to-now or weekly/monthly intervals. The truth of the matter is that measuring campaign effectiveness for optimization purposes should go far beyond how the campaign has been performing at the overall level over a rigid, pre-defined time interval. Good optimization practice is usually preceded by a deeper understanding of how each site/placement/creative has been faring, so that intelligent decisions can be made and appropriate fine-tuning can be carried out by pulling such media levers as capping frequency, shifting media dollars across sites/placements, or throwing out non-performing creative assets.
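To illustrate the kind of site/placement-level slicing the standard reports lack, here is a minimal sketch, assuming a hypothetical cookie-level impression log (the field names and log format are made up for illustration), that computes unique reach and average frequency per placement:

```python
from collections import defaultdict

# Hypothetical cookie-level impression log: (placement, cookie_id) pairs.
impressions = [
    ("site_a/banner", "c1"), ("site_a/banner", "c1"), ("site_a/banner", "c2"),
    ("site_b/video", "c1"), ("site_b/video", "c3"),
    ("site_b/video", "c3"), ("site_b/video", "c3"),
]

def reach_and_frequency(log):
    """Return {placement: (unique reach, average frequency)}."""
    counts = defaultdict(lambda: defaultdict(int))  # placement -> cookie -> impressions
    for placement, cookie in log:
        counts[placement][cookie] += 1
    stats = {}
    for placement, per_cookie in counts.items():
        reach = len(per_cookie)            # unique cookies exposed
        total = sum(per_cookie.values())   # total impressions served
        stats[placement] = (reach, total / reach)
    return stats

print(reach_and_frequency(impressions))
```

With per-placement numbers like these in hand, a planner can see at a glance which placements are piling frequency onto the same cookies and which are extending reach.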

Secondly, one of the most important aspects of a reach and frequency report (if not THE most important aspect) is to know and understand frequency distribution, as well as the impact of that distribution on conversion. Most of the time, the ad server's frequency distribution reports suffer from the same kind of rigidity as the regular overall reach report: almost none of them go beyond the overall campaign level. Even though frequency distribution in theory provides one of the most actionable metrics for campaign optimization, for reasons beyond the scope of this small piece, frequency distribution at the overall campaign level is merely interesting to look at. To make it actionable, we would at least need site-level information.
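The site-level frequency distribution described above can be sketched in a few lines. This is a toy example with invented per-cookie data for a single site (the field names and the cap level are assumptions, not any ad server's actual schema), showing how conversion rate could be read off at each exposure level:

```python
from collections import Counter

# Hypothetical per-cookie exposure data for a single site:
# how many times each cookie saw the ad, and whether it converted.
cookies = [
    {"freq": 1, "converted": False}, {"freq": 1, "converted": False},
    {"freq": 2, "converted": True},  {"freq": 2, "converted": False},
    {"freq": 5, "converted": True},  {"freq": 9, "converted": False},
]

def frequency_distribution(cookies, cap=5):
    """Cookies and conversion rate at each exposure level, pooling cap+ together."""
    dist, conv = Counter(), Counter()
    for c in cookies:
        label = str(c["freq"]) if c["freq"] < cap else f"{cap}+"
        dist[label] += 1
        conv[label] += c["converted"]
    return {label: (dist[label], conv[label] / dist[label]) for label in dist}

print(frequency_distribution(cookies))
```

A table like this per site is exactly what would make a frequency-capping decision data-driven: if conversion flattens beyond some exposure level, that level is a natural cap.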

The lack of more flexible reach and frequency reporting is particularly troublesome because reach and frequency are supposed to be the king of media planning - at least that's been the case with the traditional TV channel for a very long time. After all, the most powerful measurement in the TV market is the GRP, which is a composite of reach and frequency.
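For readers outside the TV world, the composite is simple arithmetic: GRPs are reach (as a percentage of the target universe) multiplied by average frequency. A one-line sketch:

```python
def grp(reach_pct, avg_frequency):
    """Gross rating points: reach (% of the target universe) times average frequency."""
    return reach_pct * avg_frequency

# Reaching 40% of the target audience an average of 3 times yields 120 GRPs.
print(grp(40, 3))
```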

No one in the media community would dispute the fact that reach and frequency are important. Hence it is a little disquieting that in digital media, where on the measurement front everything is supposed to be better than traditional media due to the availability of data, we can't even do campaign optimization on the very metrics that have proven the most effective. This has led, for instance, to the under-utilization of a powerful media lever, frequency capping, which is currently either implemented based mostly on some kind of gut feeling or not implemented at all.

I know that while I am making my argument here, there will be people in our community eager to discourage me from using ad server reports, because of some well-known issues relating to cookie tracking: cookie deletion and counting computers vs. counting people. I want to say that I am well aware of the problems associated with cookie-based reporting. However, unless the size of online panels is large enough to speak statistical significance to every nuance of the buy, we probably will have to live with a lesser evil for in-flight campaign optimization. After all, online panels in the shape they are in today, though they can be powerful in driving a better media plan, are still inadequate for churning out performance reporting for optimization.

Also, by pointing out that ad servers have so far been doing an inadequate job with reach and frequency numbers, I am NOT trying to trivialize the difficult process they have to go through to produce such stats. I fully recognize that reach and frequency numbers require exponentially more computational resources, even with the current sampling approach. However, it is also a fact that computation has become cheaper and faster over the past 10 years. In contrast, we have seen only glacial improvement in reach and frequency reporting. In the history of business, there are plenty of instances in which business decisions were held hostage to engineering priorities. Hopefully we, on the business side of media, will be seeing some light at the end of the tunnel very soon.

6 comments about "Unreachable Reach And Frequency".
  1. Gian Fulgoni from 4490 Ventures, April 22, 2011 at 12:04 p.m.

    Good article. Only point I disagree with is the recommendation that "unless the size of online panels is large enough to speak statistical significance to every nuance of the buy, we probably will have to live with a lesser evil for in-flight campaign optimization".

    I think that's shortsighted advice. While not able to "address EVERY nuance of the buy", panel-based R/F data are free from the ravages of cookie deletion and the unreliability of a cookie to say who was actually using the computer at any point in time. Panel-based data provide uniquely powerful insight into the actual R/F that was delivered and who actually received the ads. That's vital information for diagnosing and evaluating any online media plan.

  2. Tyson Roberts from Lucid Commerce, April 22, 2011 at 12:16 p.m.

    Chen, while I appreciate the post, it is a real stretch to refer to the GRP as the metric that has been "proven to be most effective". The GRP exists because it was possible within a broadcast medium, not because it was right. For another opinion, check out this article that came out today on Digiday: Corey offers that not only is the GRP outdated, it also makes no practical sense for online video.

    Why drag the internet backward to an age of imprecise measures that were the product of a broadcast environment?

  3. Howie Goldfarb from Blue Star Strategic Marketing, April 22, 2011 at 7:27 p.m.

    You forget that ambiguity benefits the ad servers. They know they underperform, and if we saw the gory details their ad pricing would plummet. No different than any other silo of advertising. Why do you think Facebook keeps secret so much data that would damn it into marketing oblivion? What we don't know helps them. And what we need to know to help us...doesn't.

  4. John Grono from GAP Research, April 24, 2011 at 8:47 a.m.

    A great piece Chen.

    But I do have to agree with Gian that you do NOT need massive panels to establish the humanistic usage patterns (cookie deletion, duplicated usage, repeat usage, etc.) that server data simply can't provide.

    Having said that, even the best panel in the world will not pick up all the traffic (for example, it is hard to get panellists who work for banks, government departments, defence, etc.).

    Clearly, the solution requires a hybrid blend of the non-robotic traffic from the server side to establish the total quantum (which panels struggle to do), and the behavioural usage patterns to determine reach & frequency both within a publisher's domain and across domains for a campaign.

    And Mr. Roberts, having the luxury of working with the audience metrics for all media here in Australia, I can categorically state that internet measurements based on server-side traffic (the numbers most publishers like to analyse and sell off) are the most imprecise by many orders of magnitude. As a quick example, the number of monthly Unique Browsers here in Australia has recently hit 130 million. Not bad in a country of 22.5 million, which is why more credence is placed on the panel-based data, which estimates the active universe at 14.7 million. Hence your comment about "dragging the Internet backward to an age of imprecise measures that were the product of a broadcast environment" is sadly astray (especially as GRPs, reach and frequency distribution are the pillars of all good communications plans, not just broadcast television).

  5. Nick Drew from Yahoo Canada, April 26, 2011 at 10:41 a.m.

    An interesting article Chen, and a good read! I think what you're drawing here is the shape of the tools to come - perhaps in a year for some of these features, out to perhaps 3-4 years for others such as in-flight optimisation. But there are some tugs of war that will need to be resolved to reach that point.
    As you address, one of the key balances will be between server-side analytics and panel-provided options - and even between panel providers, given the occasionally large disparity between comScore, Nielsen, Hitwise, etc. As with TV, print and radio, my gut feeling would be that eventually the industry will have to use one standard currency (or have an auditing process that ensures equality in metrics), because the idea of you buying on comScore R&F, me selling on server-side R&F and the client measuring on Nielsen R&F sounds rather tricky!
    Secondly, any metric is a compromise between what the client and agency want (more detail, to the nth degree, allowing analysis by x y and z) and what the publishers are happy to work with (more detail, but to a point, and an avenue to highlight their strengths while perhaps downplaying some of the most damning statistics) - and some of the measures you suggest may need to reach that point of compromise before they genuinely become currencies.
    Nonetheless, I reckon this could be an interesting article to revisit in a year, 2 years and beyond! :)

  6. Milind S from Everything Media Pte Ltd, June 7, 2011 at 12:24 a.m.

    I would tend to agree that the GRP, particularly in the online environment, is not very relevant. Engagement, time spent, sentiment and, finally, the financial return on media investment is what will matter.
