Commentary

Media Metrics: Here Comes the Bride

April 2, 2008
Online advertising, they say, is a marriage of art and science. But if this is true, it was certainly a wedding of the shotgun variety. Since the days of John Wanamaker, corporations have lived with the hope that at least 50 percent of their advertising was not wasted. Flash forward to 2000, and the waste ratio of online advertising looked far worse: 98 percent of the target audience refrained from clicking on campaign ads.

Clients did the math and concluded that online consumers were advertising-blind, which pushed the arty agency folks to embrace hard science as a means to prove that online advertising does, in fact, move the needle. As with any branch of science, the most powerful tool to determine the truth is the experiment. This usually brings to mind white-coated scientists in a lab mixing ominously glowing substances at their own risk. While white coats are optional in advertising research, advertisers have practiced controlled lab experiments for several decades. For example, they randomly separated subjects into test and control groups, showed them different versions of an ad and observed their reactions. This is how we learned that subliminal advertising does not work.

Advertisers used to shy away from two other types of experiments: 1) field experiments that create controlled environments in the real world; and 2) quasi-experiments (also known as natural experiments), which require no intervention but rely on elaborate observations. The former were considered too difficult to perform, the latter often inapplicable, due to a lack of good data.

The invention of digital ad serving made it possible to turn any online campaign into a large-scale field experiment. While it was nearly impossible, in the traditional advertising world, to ensure that a control segment of the target audience wouldn't come into contact with the actual ads, digital advertising researchers can use cookies to isolate two otherwise identical but mutually exclusive groups of online users: those who have seen the ads and those who have not.
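The mechanics of that isolation can be sketched in a few lines. The snippet below is a hypothetical illustration, not any vendor's actual implementation: it hashes a cookie ID into a bucket so that the same user deterministically lands in either the test or the control cell, with the holdout fraction as an assumed parameter.

```python
import hashlib

# Assumed holdout: serve control (PSA) ads to 10% of the audience.
CONTROL_FRACTION = 0.10

def assign_cell(cookie_id: str) -> str:
    """Deterministically assign a cookie ID to a test or control cell.

    Hashing guarantees the same user always gets the same cell, so the
    two segments stay mutually exclusive across ad impressions.
    """
    bucket = int(hashlib.sha256(cookie_id.encode()).hexdigest(), 16) % 100
    return "control" if bucket < CONTROL_FRACTION * 100 else "test"
```

Because assignment depends only on the hashed ID, no server-side state is needed to keep a user in the same cell for the life of the campaign.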

Millward Brown pioneered the test-versus-control-group approach in 1997 with its Brand Impact product. However, it was Dynamic Logic's AdIndex, launched in 2000, that truly made online field experiments a must-have tool. Dynamic Logic and its competitor, InsightExpress, created mutually exclusive test and control segments of users, with the latter being served PSAs in place of actual ads. They simultaneously recruited test and control groups from the have-seen-the-ad and have-not-seen-the-ad segments via an intercept survey that let them measure these consumers' brand awareness, brand favorability and purchase intent - and how these differed between the two groups.

This method became the gold standard (today, Dynamic Logic's database captures results from 3,674 controlled experiments from a combined sample of more than 5.5 million), and later extended to include offline natural media exposure measurements.

It is possible to forgo intercept surveys and still run field experiments if the campaign's behavioral data includes a concrete advertising-effectiveness indicator. Examples of such indicators include a visit to the advertiser's site, signing up for a newsletter, or a purchase. In this case, you can set up a test with an ad-serving tool like DoubleClick's DART, which provides the campaign segmentation necessary to isolate audiences from test and control placements, as well as compare conversion rates between the test- and control-user segments.
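The comparison itself reduces to simple arithmetic. A minimal sketch, with made-up function and variable names: compute the relative lift in conversion rate between the exposed and control segments, plus a two-proportion z-score as a rough check that the lift is not noise.

```python
import math

def conversion_lift(test_conv, test_n, ctrl_conv, ctrl_n):
    """Relative lift in conversion rate of exposed vs. control users.

    test_conv / ctrl_conv: number of converters in each segment.
    test_n / ctrl_n: total users in each segment.
    Returns (lift, z) where z is a two-proportion z-score.
    """
    p_test = test_conv / test_n
    p_ctrl = ctrl_conv / ctrl_n
    lift = (p_test - p_ctrl) / p_ctrl
    # Pooled standard error for the difference in proportions.
    p_pool = (test_conv + ctrl_conv) / (test_n + ctrl_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / ctrl_n))
    z = (p_test - p_ctrl) / se
    return lift, z
```

For example, 300 converters among 10,000 exposed users versus 200 among 10,000 controls yields a 50 percent relative lift; a z-score above roughly 2 suggests the difference is unlikely to be chance.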

The biggest hurdle to conducting field experiments is that it takes a substantial portion of your media budget to create the control-audience segment by replacing a brand's ads with control ads. The more you hope to learn from the experiment, the more control ads you need to serve - and the more costly the research. Every time you intend to drill deeper into a test group to evaluate the effect of a particular site category, day-part, audience segment, etc., you need to grow the control-group sample at a corresponding level to yield a proper comparison.

Since 2002, Internet advertising expenditures have grown at double-digit rates for all industries except for consumer-packaged goods. Unlike other industries, CPG marketers couldn't usually tie campaign performance to consumer behavior, since virtually all CPG sales occur offline. Then Yahoo came up with an original design: combine data from browsing logs with Nielsen Homescan data to see how Homescan panelists' exposure to online ads eventually translates into their purchases at grocery stores.

An attractive side benefit of this offering: a free control group. Yahoo extracts a look-alike control sample from Homescan panelists who haven't been exposed to the online campaign and compares their purchasing behavior to those of the panelists who've seen the online ads.

This method has led to improved targeting and spending decisions, and it's widely used today. Last year alone, Yahoo ran 63 Consumer Direct studies that resulted in CPG advertisers steadily increasing their use of online and consumer-direct advertising. AOL and MSN followed, and now also use the natural-control approach when evaluating CPG campaign effectiveness.

What's the must-have prerequisite for identifying a valid natural control group? Comprehensive knowledge of the audience in question: its ad exposure, demographics and past online behavior. It can come from a user dataset like the subscriber databases of major portals or, better yet, a metered online panel.

For example, comScore offers quasi-experimental research that exploits the advantages of maintaining a large audience-measurement panel. It passively observes who was exposed to a brand's advertising running across any publisher's Web site, then uses a nearest-neighbor algorithm that handpicks non-exposed panelists who most closely match the exposed panelists on multiple dimensions of online activity unrelated to the ad contact. This approach is advertising's best practice of the non-invasive natural-experiment techniques commonly used in sciences such as epidemiology or astrophysics, where it is unethical or simply impossible to manipulate the variables.

Most recently, comScore launched Brand Metrix, which employs the natural control group to measure online advertising's attitudinal and branding impact. It also compares the exposed and matched controls on key behavioral metrics, such as visits to the advertised Web site and purchases, to establish the lift due to the ads. ComScore intends for Brand Metrix to compete head-to-head with Dynamic Logic's and InsightExpress's field-testing products.

The good news is that, despite the shotgun start, online advertising's marriage of art and science has matured into a solid union - one in which any campaign is, potentially, a revealing experiment in progress. Contributing to its long-term success is the fact that sometimes you don't even need to change any variables (and come up with the funding to do so) to establish effectiveness. You just need to look at what is already out there in a different way.

Yaakov Kimelfeld, Ph.D., is vice president of digital research and analytics director at MediaVest USA.
