Commentary

A Scientific Model For Multi-Touch Attribution And Ad Optimization

It’s becoming widely known that attribution and optimization both play important roles in helping online advertisers improve the performance of their cross-channel digital campaigns. But today, many attribution and optimization methods still rely on simplistic models and human intervention, which makes it hard to accurately and effectively improve ad campaign performance over time.

To start, let’s talk about attribution. The two most common attribution models are “last event” and “ad hoc weighting.” As the name implies, last event assigns 100% of the credit for a conversion to the last event (such as a click), even if that click was influenced by a whole series of other display or search advertisements over time. Ad hoc weighting assigns decaying weights to events that are further in the past, but may add a bonus for the first ad seen.
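To make those two heuristics concrete, here is a minimal Python sketch; the touchpoint names, the 0.5 decay factor and the first-touch bonus are invented for illustration, not taken from any particular platform.

```python
# Two common heuristics applied to one hypothetical conversion path.
# Touchpoint names, the 0.5 decay and the first-touch bonus are invented.

def last_event(path):
    """Assign 100% of the credit to the final touchpoint."""
    return {event: (1.0 if i == len(path) - 1 else 0.0)
            for i, event in enumerate(path)}

def ad_hoc_weighting(path, decay=0.5, first_touch_bonus=0.1):
    """Weight recent touchpoints more heavily, with a small bonus for the first ad seen."""
    raw = [decay ** (len(path) - 1 - i) for i in range(len(path))]
    raw[0] += first_touch_bonus
    total = sum(raw)
    return {event: round(w / total, 3) for event, w in zip(path, raw)}

path = ["display_ad", "search_ad", "retargeting_ad", "final_click"]
print(last_event(path))        # all credit lands on "final_click"
print(ad_hoc_weighting(path))  # decaying weights, small bonus on "display_ad"
```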

The problem with both models is they’re based on subjective assumptions, not on scientific analysis. Instead of relying on guesses, why not let the data itself demonstrate the effectiveness? If a particular ad is effective, shouldn’t users who saw that ad be more likely to convert than users who didn’t? Shouldn’t we be able to measure this lift from the data?

The answer is a qualified yes. As long as you have access to all of the data -- including converting and non-converting users -- you can accurately and objectively assign the proper credit for conversions using algorithmic attribution analysis. Only this approach to attribution will give you the effective recommendations you need to improve your cross-channel campaigns over time.
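As a minimal sketch of what measuring that lift looks like (the user counts and conversion numbers below are invented), the calculation reduces to comparing conversion rates between users who saw the ad and comparable users who did not:

```python
# Invented counts for one campaign; the point is the shape of the calculation.
exposed_users, exposed_conversions = 100_000, 900   # users who saw the ad
control_users, control_conversions = 100_000, 600   # comparable users who did not

exposed_rate = exposed_conversions / exposed_users   # 0.9%
control_rate = control_conversions / control_users   # 0.6%

relative_lift = (exposed_rate - control_rate) / control_rate          # 50%
incremental = exposed_conversions - control_rate * exposed_users      # ~300 conversions
print(f"lift: {relative_lift:.0%}, incremental conversions: {incremental:.0f}")
```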

Now, let’s turn to optimization, which advertisers use to improve the performance of their ad campaigns.

A common approach to optimization is to conduct a series of A/B tests comparing different sites, creatives, ad positions, etc., keeping the best performer at each step. The problem with this approach is that it can’t properly handle the complex non-linear interactions of the real world, and therefore will never result in a completely optimal set of recommendations. Let’s examine why.

Say an advertiser conducts an A/B test to compare creative 1 with creative 2 to determine which provides better performance. In this case, let’s assume the results show that creative 2 is better. The advertiser is then going to evaluate if creative 2 works better on site B or on site A. If this test shows that site B performs better than site A, the advertiser will move forward by advertising using creative 2 on site B. 

The problem with this method is clear. The advertiser has never tested creative 1 with site B since creative 2 performed better in the first test. It may be the case that creative 1 gives the best performance when used in combination with site B. A series of linear A/B tests like these, while commonly done, will never produce the accurate results advertisers need.
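A toy example makes the gap concrete. The conversion rates below are invented, but they are consistent with both of the sequential tests described above while still hiding a better combination the sequence never tries:

```python
# Invented conversion rates. Both sequential tests give the "expected" answer,
# yet the combination they never try is the true winner.
rates = {
    ("creative_1", "site_A"): 0.010,
    ("creative_2", "site_A"): 0.012,   # test 1: creative 2 beats creative 1 on site A
    ("creative_2", "site_B"): 0.015,   # test 2: site B beats site A for creative 2
    ("creative_1", "site_B"): 0.020,   # never tested, but actually the best combination
}

sequential_pick = ("creative_2", "site_B")
true_best = max(rates, key=rates.get)
print(sequential_pick, rates[sequential_pick])   # 0.015
print(true_best, rates[true_best])               # ('creative_1', 'site_B') 0.020
```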

The best solution is to use an algorithmic approach to optimization that simultaneously analyzes all possible scenarios to see which combinations produce the best incremental results. This creates an accurate predictive model that takes into account all of the non-linearities and interactions. Once this model is in hand, we can find the optimal point subject to budget, volume, and bidding constraints.
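As a rough illustration of that last step (a sketch, not any particular vendor's implementation): once a predictive model scores every combination, optimization amounts to searching spend allocations that satisfy the constraints. The response curve, budget, and numbers below are all assumptions.

```python
import math
from itertools import product

# Hypothetical model output: the "pull" of each creative/site combination,
# plus an assumed concave response curve (doubling spend does not double conversions).
pull = {
    ("creative_1", "site_B"): 20.0,
    ("creative_2", "site_B"): 15.0,
    ("creative_2", "site_A"): 12.0,
    ("creative_1", "site_A"): 10.0,
}
budget_units = 10  # total budget, in $1,000 units

def predicted_conversions(combo, spend_units):
    return pull[combo] * math.sqrt(spend_units)

# Brute-force search over integer allocations that satisfy the budget constraint.
# A real system would use a proper constrained optimizer, but the idea is the same.
combos = list(pull)
best = None
for alloc in product(range(budget_units + 1), repeat=len(combos)):
    if sum(alloc) != budget_units:
        continue
    total = sum(predicted_conversions(c, a) for c, a in zip(combos, alloc))
    if best is None or total > best[0]:
        best = (total, dict(zip(combos, alloc)))

print(best)  # spend gets spread across combinations rather than dumped on one "winner"
```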

When leveraging attribution and optimization in campaigns, one thing is clear: only an objective scientific model can accurately predict how advertisers should adjust campaigns to improve results. In this increasingly complex cross-channel ad world, it is becoming even more important for brands to make sure they are applying such principles in their campaign measurement initiatives.

15 comments about "A Scientific Model For Multi-Touch Attribution And Ad Optimization".
  1. Nick D from ___, March 1, 2012 at 11:48 a.m.

    "As long as you have access to all of the data -- including converting and non-converting users -- you can accurately and objectively assign the proper credit for conversions using algorithmic attribution analysis."


    Woah woah woah there! You can't just say "as long as you have the data, it's as easy as that, now let's move onto something else". This is a huge subject, and a hugely complex one; as you've just about touched on, a brand needs a huge amount of data, including that of *non-converting* consumers in order to even get CLOSE to assigning attribution to different ads or media.

    Done correctly, attribution will become increasingly important in how advertising shapes itself across channels in the next 5 years, but unfortunately there is a LONG way to go before it's consistently done correctly.

  2. Ken Mallon from Ken Mallon Advisory Services, March 1, 2012 at 12:01 p.m.

    Robert,
    I see two main problems with many attribution systems. First, as you state, heuristics rather than scientific methods are used to allocate credit to digital touchpoints. Second, attribution systems attribute 100% of the credit to digital, ignoring important factors like other media, prior favorability to the brand, trial use, etc.

    Heuristics versus science. The term attribution implies that causality has been proven. If you attribute 80% of an online sale to a click on an ad, it means that ad *caused* 80% of that sale. But, this attribution has not been scientifically demonstrated.

    The only scientifically accepted way to prove one thing caused another is to perform randomized controlled tests, just like the Food & Drug Administration requires for drug approvals.

    Translated to digital, this means having control groups. For example, if you had ads running on a certain site and you carve out some percentage of the delivery to be dummy ads (PSAs or publisher house ads), you can calculate the percent in each group (exposed to advertiser ads versus exposed to dummy ads) who convert later. The difference between those two percentages is the attribution.
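    A minimal sketch of that calculation, with invented counts, looks like this (the z-statistic simply checks whether the difference is bigger than chance):

```python
import math

# Invented counts for the PSA/holdout test described above.
real_ads  = (550, 50_000)   # conversions, users randomly shown the advertiser's ads
dummy_ads = (450, 50_000)   # conversions, users randomly shown PSAs / house ads

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

incremental_rate = real_ads[0] / real_ads[1] - dummy_ads[0] / dummy_ads[1]
z = two_proportion_z(*real_ads, *dummy_ads)
print(f"incremental conversion rate: {incremental_rate:.2%}, z = {z:.2f}")
# z above ~1.96 means the lift is unlikely to be chance at the 5% level
```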

    100% to digital? We wonder why some don't take digital seriously. Perhaps it's because we haven't worked hard enough to develop the trust that other media has. Claiming a product sale is 100% due to digital exposures doesn't help matters. Let's take cars, for example. Suppose John bought a Toyota RAV4. He had a prior positive opinion of Toyota and one of his neighbors had one. One day he test drove one and liked it. Finally, when it came time to buy, he visited a variety of auto websites and got exposed to lots of auto ads, including Toyota ads. He also did some searches and got exposed to sponsored search links. Eventually, he went ahead and made a purchase online. Suppose he spent $25,000 for it. What percentage of that would you attribute to his digital touchpoints and, in particular, to the final search he did that came before the purchase? Pretty small, I'd imagine. Not 100%.

    I'd say he was mostly influenced by his prior brand perceptions of Toyota which were built up over time from branding done on TV, billboards, online and other places. He didn't click on any of those. He was also very influenced by trial use and seeing the vehicle around town in grocery store parking lots and such.

    This auto example is not an outlier. This is how most product purchasing occurs. People have prior beliefs in a brand, built up sometimes over a period of years. Word-of-mouth, friend recommendations and trial use play a big role in purchase decisions. Attributing 100% of a purchase to a series of recent digital exposures is misleading to advertisers.

    Great topic. Sorry for the long post.

    -Ken

  3. Ken Mallon from Ken Mallon Advisory Services, March 1, 2012 at 12:02 p.m.

    Such a long post and I neglected to comment on the optimization aspect.

    Firstly, let me say that a lot of time and money is put into optimizing campaigns after they launch. If just one tenth of that time, effort and money were put into creating better ads before they launch, digital would be much better for it. Copy-testing is routinely done in other media but digital has largely ignored it. Now, assuming the campaign has already launched, you are right that a series of A/B tests might not lead to the best optimization. However, 2x2 and more complex designs can allow one to test more than one feature at once. You can randomize ads A and B on site 1 and do the same on site 2. This will allow you to estimate both site effects and ad effects and be able to tease out which deserves more attribution. I believe you also need dummy ads.
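    A small sketch of that 2x2 idea, with made-up conversion rates, shows how the main effects and the interaction fall out of the four randomized cells:

```python
# Made-up conversion rates for the four randomized cells of a 2x2 design.
rates = {
    ("ad_A", "site_1"): 0.010,
    ("ad_B", "site_1"): 0.014,
    ("ad_A", "site_2"): 0.018,
    ("ad_B", "site_2"): 0.012,
}

# Main effect of the ad: average (B - A) difference across sites.
ad_effect = ((rates[("ad_B", "site_1")] - rates[("ad_A", "site_1")]) +
             (rates[("ad_B", "site_2")] - rates[("ad_A", "site_2")])) / 2
# Main effect of the site: average (site 2 - site 1) difference across ads.
site_effect = ((rates[("ad_A", "site_2")] - rates[("ad_A", "site_1")]) +
               (rates[("ad_B", "site_2")] - rates[("ad_B", "site_1")])) / 2
# Interaction: does the ad difference change depending on the site?
interaction = ((rates[("ad_B", "site_2")] - rates[("ad_A", "site_2")]) -
               (rates[("ad_B", "site_1")] - rates[("ad_A", "site_1")]))

print(f"ad effect: {ad_effect:+.3%}, site effect: {site_effect:+.3%}, "
      f"interaction: {interaction:+.3%}")
```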

  4. John Grono from GAP Research, March 1, 2012 at 4:55 p.m.

    And wouldn't "all the data" include things like pricing, non-digital communications (paid and earned), competitive activity et al.? This is such a microscopic view that appears to be driven by the 'available data' rather than a marketer's 'real world' view. I would rather have a complete model of my market (my brands and competitors' brands) based on much broader drivers of consumer actions (albeit some would basically be estimates or approximations) than have a microcosm view. Think of it as having a partial view of the whole picture rather than having a complete view of a part of the picture.

  5. Brian Dalessandro from media6degrees, March 2, 2012 at 9:42 a.m.

    I think we need to first applaud Robert for promoting progress in this field. I would hope that we can all reach consensus that attribution should be data driven and scientifically measured.

    One easy solution to the above concerns is that attribution measurement should not necessarily have to assign 100% of the credit to the online channels. We might never get to a point where all online and offline touch points are represented in the same data sets. Let's admit that to ourselves and focus on what we can do - which is measure online activity. So if we can establish, for a given campaign, that 60% of online conversions were driven by the online marketing activity, then the attribution system can divvy up that 60% to the online channels. I don't have answers to the offline portion, but I can say that it is possible to separate the two.
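    In code, that split is just a scaling step; the 60% share and the channel weights below are invented for illustration:

```python
# Invented figures: a control-group test says 60% of online conversions were
# incremental to online marketing; the attribution model supplies the relative
# weights among the online channels.
online_incremental_share = 0.60
relative_credit = {"display": 0.5, "search": 0.3, "social": 0.2}   # sums to 1.0

absolute_credit = {ch: w * online_incremental_share for ch, w in relative_credit.items()}
print(absolute_credit)   # roughly {'display': 0.30, 'search': 0.18, 'social': 0.12}
# the remaining 40% is left for influences the online data set does not cover
```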

    A/B testing is definitely a safe way to go to establish causal effects, but it isn't the exclusive way. One can, under the right conditions, establish an unbiased measure of causal effects with observational data. This is done often in other disciplines (such as bio-stats and pharma). Our research group at M6D has explored this idea (along with causal attribution) and has published results here: http://m6d.com/blog/

  6. Steve Latham from Encore Media Metrics, March 2, 2012 at 12:10 p.m.

    We all agree that you need a statistically validated attribution model to assign weightings and re-allocate credit to assist impressions and clicks (is anyone taking the other side of this argument?). And we all agree that online is not the only channel that shapes brand preferences and drives intent to purchase.

    I sympathize with Ken - it's not easy (or economically feasible) for most advertisers to understand every brand interaction (offline and online) that influences a sale. The more you learn about this problem, the more you realize how hard it is to solve. So I agree with Brian's comment that we should focus on what we can measure, and use statistical analyses (coupled with common sense) to reach the best conclusions we can. And we need to do it efficiently and cost-effectively.

    While we'd all love to have a 99.9% answer to every question re: attribution and causation, there will always be some margin of error and/or room for disagreement. There are many practitioners (solution providers and in-house data science teams) that have studied the problem and developed statistical approaches to attributing credit in a way that is more than sufficient for most marketers. Our problem is not that the perfect solution doesn't exist. It's that most marketers are still hesitant to change the way they measure media (even when they know better).

    The roadblocks to industry adoption are not lack of smart solutions or questionable efficacy, but rather the cost and level of effort required to deploy and manage a solution. The challenge is exacerbated by a widespread lack of resources within the organizations that have to implement and manage them (typically the agencies who are being paid less to do more every year). Until we address these issues and make it easy for agencies and brands to realize meaningful insights, we'll continue to struggle in our battle against inertia. For more on this, see "Ph.D Targeting & First Grade Metrics" at http://bit.ly/tyjrWk.

  7. Robert Marsa from Adometry, March 2, 2012 at 4:43 p.m.

    Thanks for the comment, Nick D.

    Yes, I agree that getting all of the online data is difficult, but not impossible. With advances in comprehensive advertising databases/platforms and cheaper computing power, advertisers have access to the tools they need. The non-converting data is important because without it one never knows the true impact of an event on a path. In addition, by examining all impressions, advertisers can get a more complete picture of recency and frequency impact. And, you're correct – attribution is and will continue to be important to advertisers.

  8. Robert Marsa from Adometry, March 2, 2012 at 4:44 p.m.

    You make great points about attribution, Ken. I view attribution as coming in two flavors: relative attribution and absolute attribution. The purpose of relative attribution is to discover how important things were relative to each other. For instance, one site deserves 3 times more credit than another site. Absolute attribution, on the other hand, discovers the absolute amount of credit that a particular touch point deserves. For instance, one site deserves 0.25% credit for a particular conversion.

    Both types of attribution can be valuable and provide actionable insight for marketers.

    Fundamentally, I agree with your comment regarding the influence of off-line and word-of-mouth marketing. Anyone who claims that 100% of the credit for sales belongs to digital marketing is over-simplifying how multi-channel marketing works. But with enough data, and proper test and control methods, we can accurately estimate the aggregate credit that should go to digital marketing.

  9. Robert Marsa from Adometry, March 2, 2012 at 4:45 p.m.

    Great point about optimization, Ken. You are absolutely right that there are approaches between simple A/B testing and global non-linear optimization. Those approaches will take into account some of the interaction effects.

  10. Robert Marsa from Adometry, March 2, 2012 at 4:46 p.m.

    Thanks for your comment, John.

    Truly having all the data about a particular individual's exposure to a brand or a competitor's brand would be fantastic, but it is also impossible. The relevant question is, 'how much data do we need in order to make meaningful, accurate and statistically valid assertions about people's online behavior?' Having all of the data about display advertising, for instance, lets us determine the relative effectiveness of different parts of our display advertising. This is valuable by itself.

  11. Robert Marsa from Adometry, March 2, 2012 at 4:50 p.m.

    Great points, Brian. You can use a version of absolute attribution to provide an upper bound to the amount of credit that the online channels deserve even if you can't say how to correctly split the remaining credit among the offline channels.

  12. Robert Marsa from Adometry, March 2, 2012 at 4:57 p.m.

    Steve, thanks for pointing out that even a perfect solution to the attribution problem won't benefit anyone unless we can convince marketers to implement it. It is important to make solutions easy, both to deploy and to get insights from.

  13. Ken Mallon from Ken Mallon Advisory Services, March 3, 2012 at 5:45 p.m.

    @BrianD. I agree that any effort that moves marketers away from last click attribution (absurd) should be applauded. However, I have to disagree with your assertion that causal relationships can be established with observational data. It's simply not true.

    I worked as a biostatistician in an epidemiology department (UCSF, med school) and was very fortunate to have been trained by the great John Neuhaus. We were very careful to never imply causality with the observational research and models we built. We were able to successfully educate medical students, physicians and others on the difference between observational research that can show relationships versus randomized, controlled studies that prove causality.

    They learned how to use phrases like "Y is associated with an increased risk of X" rather than "Y causes X" or "X can be attributed to Y."

    Later, I did clinical research and biostatistics at both Amgen and Genentech that led to drug approvals. The FDA doesn't accept observational studies as proof of effectiveness. There is a reason for that. Only properly controlled, randomized studies are accepted by scientists as proof of causality.

    Let me give you an example. Suppose you wanted to study the causes of drunk driving. People who sit in bar stools are probably more than 10 times as likely to be involved in a drunk driving accident within 5 hours after sitting in a bar stool than people who didn't sit in a bar stool. But, we all know that sitting in bar stools doesn't cause alcohol-related accidents. It's drinking that increases the risk of both bar stool sitting and drunk driving. If you were to build a model, bar stool use would be very highly correlated with drunk driving accidents and would be "attributed" some of the credit/blame for the accident.

    Similarly, searching for "iPad2" and being exposed to a sponsored search link doesn't cause you to buy an iPad2. It's your desire to own an iPad that leads to both the search and the purchase. No amount of modeling can correct for this mis-attribution. A controlled study, in which iPad2 searchers randomly either saw an iPad2 sponsored search link or not, would tell you how many incremental purchases to attribute to the sponsored search link.

  14. Ken Mallon from Ken Mallon Advisory Services, March 3, 2012 at 5:46 p.m.

    @SteveL,
    You state, "We all agree that you need a statistically ... to assist impressions and clicks (is anyone taking the other side of this argument?)"

    I guess I am, indeed, taking the other side of that argument. If you do randomized, controlled studies, you can get an unbiased estimate of attribution without doing any models. Now, it's true that no one has figured out yet how to make this scale and cover dozens of digital touch points. But, you can at least start by first assessing display ads on site A to get a good estimate of the incremental effects of A. Then assess site B and so on. I know it's possible to do tests in which both the display ads and sponsored search can be randomized to allow one to estimate how much to attribute to each. Going beyond two touch points starts to get complicated. But, I'd at least start there with scientifically controlled studies so you have some grounding in truth.

    ---
    Again, I'm not an attribution hater. I would just like to see more science or at least a recognition that controlled studies are ideal and that observational models can lead to mis-attribution.

    So, I would urge correlation vendors (I can't bring myself to call it attribution until control groups are used) to work a bit harder to incorporate control groups into what is done in this space.

    Meantime, let's all work together to end last click attribution!

  15. Brian Dalessandro from media6degrees, March 5, 2012 at 10:12 a.m.

    @Ken. Observational methods work when there is some level of natural experimentation in the data. This means that all possible confounders (between treatment and outcome) are measured and all combinations of treatment and confounder are observed. In many circumstances, these conditions are not met, and thus a properly executed A/B test is the way to go. In your example, if the level of drinking as well as bar-stool sitting were observed, one would be equipped to establish the causal relationship between bar stools and drunk driving accidents.

    The FDA, I'm sure, is an organization that prefers to stay conservative and thus requires randomized studies. That isn't exactly proof that observational methods can't work (only that they have more potential for error). Mathematically, the A/B test and the right observational method under strict assumptions estimate the exact same thing. Any non-random difference between the two is a matter of the data and access to honest/good statisticians.

    I would like to introduce you to some work that the statisticians at M6D have been doing around observational methods. In the spirit of progress, your professional and scientific comments are always welcome. Thanks for the lively conversation.

    http://m6d.com/wp-content/themes/m6d/documents/CausalKDDWorkShopPaper2.pdf
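    Ken's bar-stool example actually illustrates the point nicely. A small simulation with invented probabilities shows that the raw association is badly misleading, but conditioning on the measured confounder (drinking) recovers the truth:

```python
import random

# Invented probabilities: drinking drives both bar-stool sitting and accidents;
# the stool itself does nothing.
random.seed(0)
population = []
for _ in range(100_000):
    drinking = random.random() < 0.20
    bar_stool = random.random() < (0.70 if drinking else 0.05)
    accident = random.random() < (0.020 if drinking else 0.001)
    population.append((drinking, bar_stool, accident))

def accident_rate(rows):
    return sum(acc for _, _, acc in rows) / len(rows)

stool = [r for r in population if r[1]]
no_stool = [r for r in population if not r[1]]
print("raw risk ratio:", accident_rate(stool) / accident_rate(no_stool))   # well above 1

# Condition on the measured confounder: among drinkers, the stool adds nothing.
drinkers_stool = [r for r in population if r[0] and r[1]]
drinkers_no_stool = [r for r in population if r[0] and not r[1]]
print("risk ratio among drinkers:",
      accident_rate(drinkers_stool) / accident_rate(drinkers_no_stool))    # close to 1
```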
