Commentary

What Can Attribution Tell You About Display Incrementality?

These days, all marketers (and plenty of tech companies) are talking about attribution, and with good reason. Every CFO wants to know how digital advertising expenditures translate into new revenue earned.

But that raises the question: What’s new revenue? If your digital advertising initiatives target consumers who would convert anyway, is that revenue really new? More importantly, if you could focus your ad dollars on consumers who are more likely to be influenced by them, how much more revenue would you see?

To answer these questions, let’s start by taking a closer look at what attribution can and cannot do. According to the IAB, attribution is “the process of identifying a set of user actions (‘events’) that contribute in some manner to a desired outcome, and then assigning a value to each of those events.”

In other words, attribution takes all of the conversions that result from your campaigns and divides up the credit among the publishers that displayed your ads to converters. What it can’t do is tell you how much incremental revenue each of those publishers delivered, or which publishers are most effective in influencing wholly new customers to convert to your brand.
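To make the credit-splitting concrete, here is a minimal sketch of one common attribution model, the "linear" (even-split) model, in which every publisher that showed an ad to a converter gets an equal share of that conversion. The publisher names and paths are hypothetical, purely for illustration.

```python
from collections import defaultdict

def linear_attribution(conversion_paths):
    """Split each conversion's credit evenly among the publishers
    that displayed ads to that converter (a simple 'linear' model)."""
    credit = defaultdict(float)
    for path in conversion_paths:
        share = 1.0 / len(path)  # one conversion, divided evenly
        for publisher in path:
            credit[publisher] += share
    return dict(credit)

# Hypothetical data: each list holds the publishers that served
# ads to one converter before purchase.
paths = [["pub_a", "pub_b"], ["pub_a"], ["pub_b", "pub_c"]]
print(linear_attribution(paths))
# {'pub_a': 1.5, 'pub_b': 1.0, 'pub_c': 0.5}
```

Note what the model does and does not do: it divides up credit for conversions that happened, but says nothing about whether any of those conversions were incremental.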

Intuitively, it makes sense to focus display dollars on publishers that send a high rate of paying customers to your brand, especially since so many marketers are under pressure to demonstrate results for their ad spend. Advertising with publishers that reach loyal consumers will deliver a great ROAS story for your CFO, but it’s not the best way to grow your brand’s footprint. Incrementality, on the other hand, seeks to measure the true impact of an ad by comparing conversions among exposed consumers against those of a control group.

Numerous industries and sectors rely on A/B testing, and it is, as a piece in Wired magazine points out, “more or less standard practice from the leanest startups to the biggest political campaigns.” Amazon and Google are continual A/B testers.

A/B tests are also used in digital advertising to assess an ad’s efficacy in achieving brand lift. In these cases, the control group is shown a public service announcement (PSA) in place of the brand’s creative, while the test group sees the ad. Comparing the two groups’ conversion rates makes it possible to measure incremental, or causal, conversions. This approach, while methodologically sound, is inefficient and costly, and its results are directional at best. Media plans and ad campaigns change often, and as soon as they do, the research is dated, since it doesn’t account for the new elements of the campaign.
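The arithmetic behind a PSA-style test is straightforward: incremental lift is the difference between the exposed group’s conversion rate and the control group’s, expressed relative to the control baseline. A minimal sketch, with entirely hypothetical numbers:

```python
def incremental_lift(test_converters, test_size,
                     control_converters, control_size):
    """Estimate incremental (causal) lift: how much more often the
    ad-exposed group converts relative to the PSA control group."""
    test_rate = test_converters / test_size
    control_rate = control_converters / control_size
    incremental_rate = test_rate - control_rate
    lift = incremental_rate / control_rate  # relative lift over baseline
    return test_rate, control_rate, lift

# Hypothetical campaign: 1.2% of ad viewers convert vs. 1.0% of
# PSA viewers, implying a 20% incremental lift over baseline.
test_rate, control_rate, lift = incremental_lift(1200, 100_000,
                                                 1000, 100_000)
print(f"lift = {lift:.0%}")  # lift = 20%
```

In this example, only 200 of the 1,200 test-group conversions are incremental; attribution alone would have credited the ad with all 1,200.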

Recently, data scientists who study causality have been exploring ways to simulate the A/B test methodology to measure incremental lift, using unviewable ads as the control group. On top of this, they layer causality algorithms to estimate the true lift the ad caused. With this approach, marketers gain significant dynamic media efficiencies, and results are less prone to error, since tests can easily be modified to reflect changes in a campaign’s parameters.

Digital has long been viewed as the one advertising channel where everything is measurable. Sometimes that’s easier said than done (as anyone in the industry will tell you). But measuring incremental lift of ads is one of those things that is within reach, and will go a long way in helping marketers make smart decisions about their media plans.

1 comment about "What Can Attribution Tell You About Display Incrementality?".
  1. John Grono from GAP Research, September 2, 2014 at 7:09 p.m.

    Why not A/B test on "those served my creative" versus "those not served my creative"?
