Commentary

Using Experiment Design To Build Confidence in Your Attribution Model

A key finding of the recently published joint Forrester Consulting/IAB report, “Digital Attribution Comes of Age,” was that “algorithmic attribution models are gaining acceptance, but some marketers remain skeptical.” Specifically, the report states that algorithmic attribution’s supporters say that it is “statistically principled, objective and unbiased,” and that “data is setting the weights, not opinions.” Meanwhile, detractors of the model say it is “difficult to explain,” “opaque,” and “subject to ‘dangerous math’ that can create misleading outcomes.”

Fortunately, the algorithmic approach can be tested on both its outcomes and its causality assumptions.

The outcome accuracy of the algorithmic approach can be tested if the methodology not only attributes credit for each marketing tactic’s contribution to a desired outcome, but is also predictive. In that case, the lift expected from the optimal media plan produced by the attribution process can be compared against plans built on other methodologies, such as last ad, last click or simple rules-based selections. Confidence in the methodology builds as the population exposed to the recommended media plan demonstrates the predicted conversion lift relative to other segments of the population that are subjected to those other methodologies.
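
To make that comparison concrete, here is a minimal Python sketch of how observed conversion lift might be tallied across population segments run under different attribution methodologies. All segment names, sizes and conversion counts are hypothetical, and the choice of baseline is an assumption for illustration only.

    # Hypothetical sketch: comparing observed conversion lift across population
    # segments, each optimized under a different attribution methodology.
    segments = {
        "algorithmic": {"users": 50_000, "conversions": 1_250},
        "last_click":  {"users": 50_000, "conversions": 1_000},
        "last_ad":     {"users": 50_000, "conversions": 975},
        "rules_based": {"users": 50_000, "conversions": 1_050},
    }

    baseline = "last_click"  # assumed reference methodology
    base_rate = segments[baseline]["conversions"] / segments[baseline]["users"]

    for name, s in segments.items():
        rate = s["conversions"] / s["users"]
        lift = (rate - base_rate) / base_rate
        print(f"{name:12s} conversion rate {rate:.2%}  lift vs {baseline}: {lift:+.1%}")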

Scientific verification of causality assumptions may be accomplished using a technique called “Design of Experiments” (DoE). DoE offers marketers the ability to work with their attribution provider to test the accuracy of algorithmic attribution models. This is especially important when the amount of data used to compute the model falls short of statistical significance, leading to a wider margin of error within the model.

DoE is done differently for top-down (using summarized marketing performance data from all channels, including both online and offline) and bottom-up (using granular, user-level marketing performance data) attribution scenarios. 

The DoE process involves selecting a group of users, either from a particular DMA or entirely at random, and exposing each half of that group to one of the following:

  • The channel, campaign and tactic mix that the algorithmic model predicts will produce a given lift in the desired outcome (conversion) – the test population
  • A placebo ad – the control population

When a statistically significant percentage of the universe being tested has been exposed to one or the other of these options, the conversion lift that the test population shows over the control population can be compared to the lift predicted by the original model. If the observed lift matches the predicted lift, the marketer can feel confident that the model is sufficiently accurate.
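
A minimal sketch of that check, assuming a simple two-proportion z-test on hypothetical test and control counts (the figures, the predicted lift and the tolerance threshold are illustrative assumptions, not values from the report):

    import math

    # Hypothetical DoE results; all figures illustrative.
    test_users, test_conversions = 40_000, 1_100       # exposed to recommended mix
    control_users, control_conversions = 40_000, 900   # exposed to placebo ad
    predicted_lift = 0.20                               # lift the model forecast

    p_test = test_conversions / test_users
    p_control = control_conversions / control_users
    observed_lift = (p_test - p_control) / p_control

    # Two-proportion z-test: is the test/control difference statistically significant?
    p_pool = (test_conversions + control_conversions) / (test_users + control_users)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / test_users + 1 / control_users))
    z = (p_test - p_control) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided

    print(f"Observed lift: {observed_lift:.1%} (model predicted {predicted_lift:.1%})")
    print(f"z = {z:.2f}, p = {p_value:.4f}")
    if p_value < 0.05 and abs(observed_lift - predicted_lift) <= 0.05:
        print("Observed lift is significant and close to the prediction.")
    else:
        print("Results diverge from the prediction; revisit the model.")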

DoE can also be used to test the validity of attribution models in which the marketer lacks confidence for reasons other than the absence of statistically significant data sets.

It makes sense to approach any new technology or technique that professes to predict future performance with a healthy amount of caution. But by using DoE, marketers can relatively quickly and economically test the waters before confidently diving headfirst into the wholesale optimization of their marketing efforts based on the findings and recommendations of their attribution model.
