Commentary

Rooting Out Worst Practices

Welcome to one of our first New Year’s reminders. Perhaps we can make 2012 the year when “Best Practices for Conducting Online Ad Effectiveness Research” become standard operating procedures. And, in a year of action, let’s encourage the vendors to do the research on research that the esteemed Dr. Paul Lavrakas recommended in his 2010 report, “An Evaluation of Methods Used to Assess the Effectiveness of Advertising on the Internet.” The list of issues and items to consider when adopting best practices is rather long, largely due to the inadequacies of current research methods and the costs of doing better work.

Let’s address some of the most common misconceptions and root out some worst practices:

1.  Agency buyers must work with their colleagues in research and analytics to determine the objectives of the research and which methodologies are most appropriate.

2.  Not every buy warrants a study.

3.  The cost of an ad effectiveness study should be less than 10 percent of the per-publisher buy.

4.  For best results, research plans and requirements must be well communicated:

a.  RFPs should identify the research vendor and describe the publisher’s responsibilities.

b.  IOs should include recruitment impressions.

c.  Publishers should be given surveys for prior approval to prevent editorial/brand conflicts.

d.  Publishers should maintain and provide a list of preferred vendors.

e.  Once a study has been agreed upon, vendors should proactively reach out to publishers to assist with prep for the study.

5.  Allow four weeks’ lead time prior to going live.

6.  Low response rates are pervasive; invest in reducing research clutter on sites, in validating other research designs and in broader research programs around the campaign being tested.

7.  Cookie deletion, emerging ad formats and complex ad delivery supply chains make control group recruitment extremely challenging:

a.  More research is required to better understand the possible uses of scientific sampling as an alternative method. 

b.  Test/control comparisons and lift calculations are meaningless unless it is possible to guarantee that control group members have not been previously exposed.

c.   Vendors must take extra care to ensure that recruitment rates are correctly aligned.

d.  Media plans must be the basis for recruitment rates.

8.   The industry should employ more third-party validation to ensure that respondents are representative of the target audiences.

9.   Control and exposed groups should come from the same sites and same target.

10.  Results need rigorous weighting to actual campaign delivery (see the illustrative calculation after this list).

11.  Survey length should not exceed 10 minutes or 20 questions; the optimal completion time for a respondent is 5 to 7 minutes.

12.    Changing ad creative midflight alters the survey results; if the campaign must be changed midflight, then the research should not be used for decision-making.

13.  Sometimes survey invitations appear on the screen while the ad being measured is still visible; surveys should be presented in a clean viewing environment, without the measured ad in view.
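
To make points 7b and 10 concrete, here is a minimal sketch, in Python, of one post-stratification approach: exposed and control respondents are re-weighted so that each group mirrors the campaign’s actual delivery before lift is computed. The demo cells, delivery shares and responses are entirely hypothetical, and real studies use far more elaborate weighting and significance testing.

# Minimal sketch: weight survey respondents to campaign delivery, then compute lift.
# Every cell, share and response below is hypothetical.

delivery_share = {"A18-34": 0.55, "A35-54": 0.30, "A55+": 0.15}  # share of delivered impressions per demo cell

# Respondents: (demo cell, group, answered the brand question positively).
respondents = [
    ("A18-34", "exposed", True), ("A18-34", "exposed", False),
    ("A35-54", "exposed", True), ("A55+", "exposed", True),
    ("A18-34", "control", False), ("A35-54", "control", True),
    ("A35-54", "control", False), ("A55+", "control", False),
]

def weighted_metric(group):
    # Post-stratification: weight each respondent by delivered share / sample share,
    # so the group mirrors the campaign's delivery (every cell must appear in both groups).
    members = [r for r in respondents if r[1] == group]
    sample_share = {cell: sum(1 for c, _, _ in members if c == cell) / len(members)
                    for cell in delivery_share}
    num = den = 0.0
    for cell, _, positive in members:
        w = delivery_share[cell] / sample_share[cell]
        num += w * positive
        den += w
    return num / den

exposed = weighted_metric("exposed")
control = weighted_metric("control")
print(f"exposed={exposed:.2f}  control={control:.2f}  lift={exposed - control:+.2f}")

The point is not the arithmetic but the dependencies: the lift figure is only as good as the delivery data behind the weights and the guarantee that control respondents were never exposed.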

By no means is this an exhaustive checklist. It is a reminder of the issues that must be juggled when researching online ad effectiveness, and another call to vendors and users to invest in improvements.
