IAB Report Slams Most Online Research Methods

Watch out, research firms! The Interactive Advertising Bureau has embarked on a broad initiative to improve online brand effectiveness research, and its initial findings aren't pretty.

What's wrong with most research that attempts to measure ad effectiveness? Small sample sizes and low response rates, for starters, according to an initial report from the IAB.

Above all else, the validity of such research is threatened "by the extremely low response rates achieved in most IAE [interactive ad effectiveness] studies," according to Paul Lavrakas, Ph.D., the report's author and former chief research methodologist for the Nielsen Company.

Such research is also "threatened by the near-exclusive use of quasi-experimental research designs rather than classic experimental designs," in the words of Lavrakas, author of "Telephone Survey Methods: Sampling, Selection, and Supervision."

Worse still, industry research is often compromised by "a lack of valid empirical evidence that the statistical weighting adjustments ... adequately correct for the biasing effects," Lavrakas attests.

"In instances where the sample size is at the lower end of this range [less than 800 participants] and the clients want subsample analyses to be conducted ... these subsamples may not have enough members in them to provide precise analyses," Lavrakas concludes. "Thus, subsample analyses based on small sized subsamples [fewer than 100 participants in the subsample] will have relative large sampling errors."

To accompany Lavrakas' findings, the IAB is launching a cross-industry task force to help the interactive ad industry better understand how such studies affect the broader supply chain, and then suggest ways to minimize the inefficiencies they may create.

"If the phrase 'marketing science' is to have any meaning, participants in the ecosystem must demand that their vendors employ rigorous, tested research methodologies, even if doing so costs more," said Sherrill Mane, SVP of Industry Services at the IAB.

The IAB also plans to create a set of U.S. best practices for online brand-impact ad effectiveness studies, including recommendations for marketers, ad agencies, research vendors and publishers.

The IAB gave research vendors the opportunity to respond to Lavrakas' report.

ComScore, for one, said Lavrakas' "report evaluates two methods for assessing online attitudinal brand studies: 'site intercept studies that sample persons in real-time as they are using the Internet and studies that sample members of existing online panels.' Neither of the methodologies discussed in this report is fully representative of the methodologies in use today by comScore."

4 comments about "IAB Report Slams Most Online Research Methods".
  1. Joshua Chasin from VideoAmp, August 6, 2010 at 10:39 a.m.

    It is always the easy way out to blame the research company. Lavrakas makes it clear that the vendors know how to conduct this research, but that in practice doing classroom- and laboratory-sanctioned work puts the price points of the research beyond the willingness of the client to pay. In fact, patrons of this research aren't clamoring for rigor in design; quite the contrary. They are using spurious one-question "research" from the cheapest suppliers available. Perhaps it would help the ecosystem if the IAB worked to educate members on the value of paying for methodological rigor. It is not elusive; it's just more expensive than the research flavor-of-the-month.

  2. Jeff Einstein from The Brothers Einstein, August 6, 2010 at 10:46 a.m.

    How much cash and resources should we commit to proving the efficacy of a model that returns CTRs of statistical zero and sub-$1 CPMs?

  3. Lee Smith from Persuasive Brands, August 6, 2010 at 11:35 a.m.

    Researchers and marketers refuse to acknowledge a few simple facts.

    1. Few people respond to today's invitations for surveys--including those for online ad effectiveness.
    2. Long surveys are rarely completed by the "typical" online respondent.
    3. Each of the above is a real problem for research quality--but together they are deadly, introducing bias that weighting simply cannot correct.

    The boil-the-ocean approach to data capture in today's online ad effectiveness studies is harmful to brands and publishers. The practice of finding any nugget of good news for an otherwise terrible campaign needs to stop. Given the incentives of marketers, publishers, and research firms, I sense only the IAB can make this happen.

    The above issues need to be addressed head-on by the IAB if it wants to improve the quality of online ad effectiveness research.

    Josh, it's not exclusively an issue of price, as cost-effective solutions exist; it's about shining a light on what's really happening with the commercial methodologies in use today for online ad effectiveness and other applications of online research.

  4. Daryl McNutt from New Moon Ski & Bike, August 8, 2010 at 12:18 p.m.

    I agree with most of this article. As an analyst who has done over 1,000 research reports for online, newspaper and television, I can say the scientific boundaries are being stretched in online analytics due to low sample sizes and outdated survey methodologies. I want to preface this next statement by saying I am a former comScore employee. This article loses some impact by only calling out comScore. All the research firms (Nielsen included) have methodologies that need to be improved for sampling online. The IAB needs to get the right people together to provide advertisers and agencies a standard for research measurement and implementation.
