Commentary

Practice Makes Perfect -- Or Does It?

Online ad effectiveness research is an imperfect science. But if we apply greater scientific rigor and follow best practices like those released by the IAB on July 13, we can make it a more valuable and useful tool.

For 15 years, the measurement of advertising effectiveness in the most accountable and measurable medium, the Internet, relied on a methodology developed before changes in technology and consumer behavior made it a mass medium. As more consumers embraced digital media, technology advanced, online content evolved and advertising changed, the methodology for measuring ad effectiveness stood still.

Over the years, the demand for online ad effectiveness research grew. The use of quasi-experimental designs proliferated, and the most frequently used methodology was the site intercept. Concurrently, response rates declined precipitously and the complexities of ad inventory optimization increased dramatically. Completing the studies required ever more inventory and caused ever more workflow disruption. It should be noted that other research solutions, such as sampling members of existing online panels, were also sometimes used. They, too, had methodological flaws.

In 2010, the IAB published An Evaluation of Methods Used to Assess the Effectiveness of Advertising on the Internet, an impartial, in-depth review by Dr. Paul Lavrakas. Dr. Lavrakas concluded that despite many solid aspects to the measurement of Internet ad effectiveness, the threats to the studies' external and internal validity put "the findings of most of the studies in jeopardy." Because of three fundamental methodological problems, we really do not know whether the findings of these studies are right or wrong. The three problems are: low response rates; reliance on quasi-experimental designs rather than classical experimental designs; and the lack of valid empirical evidence that the statistical weighting adjustments applied to the sample actually correct for the potential biases inherent in the methodologies. Dr. Lavrakas and the IAB advocated for follow-up research to refine and improve the methodologies.
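
To make the second of those three problems concrete, below is a minimal simulation sketch in Python -- the population size, the "engagement" confounder and the effect sizes are invented purely for illustration and are not drawn from Dr. Lavrakas's review -- showing how an exposed-versus-unexposed comparison without random assignment can overstate the lift that a classical randomized experiment would recover:

    import random

    random.seed(1)

    TRUE_LIFT = 0.02            # assumed real effect of ad exposure on brand favorability
    N = 100_000                 # hypothetical population size

    def simulate_person():
        # Heavy web users are both more likely to be served the ad and more
        # likely to favor the brand anyway -- a classic confounder.
        engagement = random.random()                  # 0 = light user, 1 = heavy user
        base_favorability = 0.20 + 0.20 * engagement
        return engagement, base_favorability

    def favors_brand(base, exposed):
        p = base + (TRUE_LIFT if exposed else 0.0)
        return random.random() < p

    quasi_exposed, quasi_control = [], []   # quasi-experimental: exposure follows behavior
    rand_exposed, rand_control = [], []     # classical experiment: exposure is randomized

    for _ in range(N):
        engagement, base = simulate_person()

        served = random.random() < engagement         # heavier users see more ads
        (quasi_exposed if served else quasi_control).append(favors_brand(base, served))

        assigned = random.random() < 0.5              # a coin flip decides exposure
        (rand_exposed if assigned else rand_control).append(favors_brand(base, assigned))

    def lift(exposed, control):
        return sum(exposed) / len(exposed) - sum(control) / len(control)

    print("True lift:                 %.3f" % TRUE_LIFT)
    print("Quasi-experimental 'lift': %.3f" % lift(quasi_exposed, quasi_control))  # inflated
    print("Randomized lift:           %.3f" % lift(rand_exposed, rand_control))    # close to 0.02

In this toy setup the quasi-experimental comparison reports a "lift" several times larger than the true effect; it is exactly this kind of selection bias that the statistical weighting adjustments are asked, without much empirical validation, to repair.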

Still, despite the cooperation of the key research vendors -- who had their services reviewed by Dr. Lavrakas and were interviewed for the best practices white paper -- funding for collaborative industry research has not come. However, vendors have invested in developing panel-based methodologies that have yet to be validated. The best practices paper reinforces that, like any other piece of business, deciding to conduct online ad effectiveness research requires the right decision criteria and planning:

·       It is essential that agency buyers consult with their colleagues in research and analytics both before asking publishers to run studies and throughout the course of the research. The goals of the campaign, and the questions a comprehensive research program can realistically answer, must be considered.

·       Thresholds should be a minimum of 15 million impressions, and the cost of the research should not exceed 10% of the total buy; a simple screening sketch of these planning criteria follows this list.

·       Four weeks' lead time is ideal for an agency request for a study from a publisher; currently, there are those who operate under the misconception that 24 hours' lead time is acceptable.

·       Publishers must become more proactive in planning and upholding thresholds.
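
To make those planning criteria concrete, here is a minimal screening sketch in Python; the function name and the example figures are hypothetical and are not taken from the IAB paper:

    # Hypothetical screening helper encoding the planning thresholds above;
    # the names and example figures are illustrative only.
    MIN_IMPRESSIONS = 15_000_000       # minimum campaign size for a study
    MAX_RESEARCH_SHARE = 0.10          # research cost cap as a share of the total buy
    IDEAL_LEAD_TIME_DAYS = 28          # roughly four weeks of lead time

    def study_is_feasible(impressions, research_cost, media_spend, lead_time_days):
        """Return True only if the proposed study clears all three thresholds."""
        return (impressions >= MIN_IMPRESSIONS
                and research_cost <= MAX_RESEARCH_SHARE * media_spend
                and lead_time_days >= IDEAL_LEAD_TIME_DAYS)

    # Example: a 20-million-impression buy, a $40,000 study on a $500,000 spend, 30 days out.
    print(study_is_feasible(20_000_000, 40_000, 500_000, 30))   # True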

The best practices to optimize the quality of research are detailed in the paper.  One instructive section covers estimated costs of true experimental design and the imperative to redefine what is good enough.   An exceptional discussion of how cookie deletion, ad formats and complex ad delivery chains adversely affect the quality of control groups puts the issues in context.   Best practices require validation that the target population is represented in the research. 

We must do a better job training all the people in the supply chain who touch these studies, on both the agency and publisher side.  We must drive widespread adoption of these Best Practices for Conducting Online Ad Effectiveness Research. The materials are so comprehensive that reading the paper should be a best practice in and of itself.   

Practice does not make perfect; best practices make better.  Funding necessary research on research will get us closer to perfect.

3 comments about "Practice Makes Perfect -- Or Does It?".
  1. John Grono from GAP Research, August 3, 2011 at 5:30 p.m.

    I must take issue with the phrase "the most accountable and measurable medium". This is subjective, and depends on the lens the marketer looks through.

    If the marketer's objective is to judge accountability using people-based metrics, then the internet has a problem. If the marketer is happy to measure against clicks etc., then the internet wins hands-down.

    The problem is that WAY too many marketers equate clicks, unique browsers, etc. with unique people, and that simply is not true. They certainly have correlations, but there is no one-to-one relationship. This means that what the marketer THINKS they are buying is not what they are getting in far too many instances.

    If I may cite one statistic: using tag-based measures (the most common basis), Australia now has over 140 million "unique browsers" per month. Not bad for a country with a population of 22.5m, of which not all are connected to the internet. (And yes, that is a 'domestic-only' figure.)

    The problems relate to well-known issues such as duplicate site and machine access, cookie deletion, etc.

    I certainly agree that the internet is the most MEASURED medium, but a lot of those measures are pretty much useless.

  2. Nick Drew from Yahoo Canada, August 4, 2011 at 9:28 a.m.

    > Using tag-based measures (the most common basis), Australia now has over 140 million "unique browsers" per month. Not bad for a country with a population of 22.5m, of which not all are connected to the internet.

    Ah, but we also know that if a mere 20% of the population deleted their cookies every day, they would look like 120m unique users a month - so although the topline numbers are large, the actual scale of the problem can be surprisingly small. It obviously is still a major concern for research though.

    Going back to the original article, the first half talks about very valid themes from a metrics and research point of view; but the second half seems taken from a different article! The question really isn't about a minimum number of impressions (which of course changes hugely by market), but arguably should be focused on whether quasi-experimental site-intercept surveys are actually a useful measure. Sure, they're used a lot, but that doesn't mean the same thing, and in the "most accountable and measurable medium" we must surely be able to do better than "do you recall seeing this ad? [place question in overlay behind which the ad is visible]". I'm far from a cheerleader for comScore, but in their (and Nielsen's) approach of looking at actual behavioural differences after ad exposure they have taken things a step further towards providing useful, insightful ad effectiveness data.

  3. John Grono from GAP Research, August 4, 2011 at 6:43 p.m.

    Fair point, Nick: the 'inflation' in the total market figure obviously exceeds that of individual sites. But audience inflation of +250% is quite common on the key sites, and from an advertising and trading perspective that is totally unacceptable. From a perception perspective, if you can't measure the total industry size (generally the easier task), what chance is there of measuring the individual properties?
