Online ad effectiveness research is an imperfect science. But if we apply greater scientific rigor and follow best practices like those
released by the IAB on July 13, we can make it a more valuable and useful tool.
For 15 years, the measurement of advertising effectiveness in the most accountable and measurable medium, the Internet, relied on a methodology developed before changes in technology and consumer behavior made it a mass medium. As more consumers embraced digital media, technology advanced, online content evolved, and advertising changed, the methodology for measuring ad effectiveness stood still.
Over the years, demand for online ad effectiveness research grew. Quasi-experimental designs proliferated, and site intercept became the most frequently used methodology. Concurrently, response rates declined precipitously and ad inventory optimization grew dramatically more complex. Completing the studies required ever more inventory and ever more workflow disruption. Other approaches, such as sampling members of existing online panels, were also used; they, too, had methodological flaws.
In 2010, the IAB published An Evaluation of Methods Used to Assess the Effectiveness of Advertising on the Internet, an impartial, in-depth review by Dr. Paul Lavrakas. Dr. Lavrakas concluded that despite many solid aspects of Internet ad effectiveness measurement, threats to the studies' external and internal validity put "the findings of most of the studies in jeopardy." Because of three fundamental methodological problems, we simply do not know whether the findings of these studies are right or wrong: low response rates; reliance on quasi-experimental rather than classical experimental designs; and the lack of valid empirical evidence that the statistical weighting adjustments applied to samples actually correct for the biases inherent in the methodologies. Dr. Lavrakas and the IAB advocated follow-up research to refine and improve the methodologies.
Still, despite the cooperation of the key research vendors, who had their services reviewed by Dr. Lavrakas and were interviewed for the best practices white paper, funding for collaborative industry research has not materialized. Vendors have, however, invested in developing panel-based methodologies that have yet to be validated. The best practices paper reinforces that, like any other business decision, choosing to conduct online ad effectiveness research demands the right decision criteria and planning:
· It is essential that agency buyers consult their colleagues in research and analytics both before asking publishers to run studies and throughout the course of the research. The goals of the campaign must be weighed against the answers a comprehensive research program can realistically provide.
· Campaigns should meet a minimum threshold of 15 million impressions, and the cost of the research should not exceed 10% of the total buy.
· Four weeks' lead time is ideal for an agency requesting a study from a publisher; some currently operate under the misconception that 24 hours is acceptable.
· Publishers must become more proactive in planning and upholding thresholds.
The best practices for optimizing research quality are detailed in the paper. One instructive section covers the estimated costs of true experimental design and the imperative to redefine what is "good enough." An exceptional discussion of how cookie deletion, ad formats, and complex ad delivery chains degrade the quality of control groups puts the issues in context. Best practices also require validation that the target population is represented in the research.
We must do a better job of training everyone in the supply chain who touches these studies, on both the agency and publisher sides. We must drive widespread adoption of these Best Practices for Conducting Online Ad Effectiveness Research. The materials are so comprehensive that reading the paper should be a best practice in and of itself.
Practice does not make perfect; best practices make better. Funding the necessary research on research will get us closer to perfect.