It’s becoming widely known that attribution and optimization both play important roles in helping online advertisers improve the performance of their cross-channel digital campaigns. But today, many attribution and optimization methods still rely on simple models and human intervention, approaches that are error-prone and poorly suited to improving campaign performance over time.
To start, let’s talk about attribution. The two most common attribution models are “last event” and “ad hoc weighting.” As the name implies, last event assigns 100% of the credit for a conversion to the last event (such as a click), even if that click was influenced by a whole series of other display or search advertisements over time. Ad hoc weighting assigns weights that decay the further an event is in the past, sometimes adding a bonus for the first ad seen.
The problem with both models is that they’re based on subjective assumptions, not on scientific analysis. Instead of relying on guesses, why not let the data itself demonstrate what is effective? If a particular ad is effective, shouldn’t users who saw that ad be more likely to convert than users who didn’t? Shouldn’t we be able to measure this lift from the data?
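In its simplest form, that lift is just the difference in conversion rate between exposed and unexposed users. Here is a minimal sketch of that comparison; the user records and field names are hypothetical:

    def conversion_rate(users):
        return sum(u["converted"] for u in users) / len(users)

    def lift(users, ad):
        # Compare users exposed to the ad against those never exposed.
        exposed = [u for u in users if ad in u["ads_seen"]]
        unexposed = [u for u in users if ad not in u["ads_seen"]]
        return conversion_rate(exposed) - conversion_rate(unexposed)

    users = [
        {"ads_seen": {"banner_a"}, "converted": True},
        {"ads_seen": {"banner_a"}, "converted": False},
        {"ads_seen": set(), "converted": False},
        {"ads_seen": set(), "converted": False},
    ]
    print(lift(users, "banner_a"))  # -> 0.5: exposed users convert more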
The answer is a qualified yes. As long as you have access to all of the data, including both converting and non-converting users, you can accurately and objectively assign the proper credit for conversions using algorithmic attribution analysis. Only this approach to attribution will give you the effective recommendations you need to improve your cross-channel campaigns over time.
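As one illustration of what “algorithmic” can mean here, the sketch below fits a logistic regression over both converting and non-converting users and reads each channel’s contribution from its learned coefficient. The channel names and exposure data are fabricated, and real attribution systems use far richer features and models:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    channels = ["display", "search", "social"]
    # Each row marks which channels a user saw (1) or did not (0);
    # crucially, non-converting users are included alongside converters.
    X = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 1],
                  [0, 0, 1], [1, 1, 1], [0, 0, 0]])
    y = np.array([0, 1, 1, 0, 1, 0])  # 1 = converted

    model = LogisticRegression().fit(X, y)
    for channel, coef in zip(channels, model.coef_[0]):
        print(f"{channel}: {coef:+.2f}")  # larger weight = more credit

The point of the exercise is that the credit comes out of the fitted model, not out of a weighting scheme someone wrote down in advance.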
Now, let’s turn to optimization, which advertisers use to improve the performance of their ad campaigns.
A common approach to optimization is to conduct a series of A/B tests comparing different sites, creatives, ad positions, and so on against each other, keeping the winner at each step. The problem with this approach is that it can’t properly handle the complex non-linear interactions of the real world, and therefore will never produce a completely optimal set of recommendations. Let’s examine why.
Say an advertiser conducts an A/B test comparing creative 1 with creative 2 to determine which performs better. In this case, let’s assume the results show that creative 2 is better. The advertiser then evaluates whether creative 2 works better on site A or on site B. If this test shows that site B outperforms site A, the advertiser will move forward by running creative 2 on site B.
The problem with this method is clear: the advertiser has never tested creative 1 with site B, since creative 2 performed better in the first test. It may well be that creative 1 gives the best performance in combination with site B. A series of sequential A/B tests like these, while commonly run, will never produce the accurate results advertisers need.
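A toy example makes the gap visible. In the invented conversion rates below, creative 2 wins on site A and site B wins for creative 2, so the sequential path lands on creative 2 with site B, yet looking at all four combinations reveals that creative 1 on site B is actually best:

    # Invented conversion rates containing an interaction effect.
    rates = {
        ("creative_1", "site_A"): 0.020,
        ("creative_2", "site_A"): 0.030,  # test 1: creative 2 wins on site A
        ("creative_1", "site_B"): 0.050,  # the combination never tested
        ("creative_2", "site_B"): 0.040,  # test 2: site B wins for creative 2
    }
    # The sequential path ends at creative 2 on site B (0.040), but an
    # exhaustive look at all four combinations finds the true winner.
    best = max(rates, key=rates.get)
    print(best, rates[best])  # -> ('creative_1', 'site_B') 0.05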
The best solution is to use an algorithmic approach to optimization that
simultaneously analyzes all possible scenarios to see which combinations produce the best incremental results. This creates an accurate predictive model that takes into account all of the
non-linearities and interactions. Once this model is in hand, we can find the optimal point subject to budget, volume, and bidding constraints.
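As a rough sketch of that idea, the code below fits a single predictive model over every observed creative/site combination, so interactions are captured, and then picks the combination with the best predicted result that still fits a budget constraint. The model choice, costs, budget, and training data are all illustrative assumptions:

    from itertools import product

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    creatives, sites = [0, 1], [0, 1]
    # Hypothetical cost of running each combination; site 1 costs more.
    cost = {(c, s): 100 + 50 * s for c, s in product(creatives, sites)}

    # Fabricated observations: (creative, site) -> conversion rate.
    # Note the interaction: the best creative depends on the site.
    X = np.array(list(product(creatives, sites)))
    y = np.array([0.020, 0.050, 0.030, 0.040])

    # One model over all combinations lets it learn the interaction.
    model = GradientBoostingRegressor().fit(X, y)

    # Optimize: best predicted combination that fits the budget.
    budget = 120  # tight enough that site 1 combinations are excluded
    feasible = [cs for cs in product(creatives, sites)
                if cost[cs] <= budget]
    best = max(feasible, key=lambda cs: model.predict(np.array([cs]))[0])
    print(best)  # -> (1, 0): the best option we can actually afford

The same search pattern extends to volume and bidding constraints: fit one model over all combinations, then maximize its prediction over the feasible set.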
When leveraging attribution and optimization in campaigns, one thing is clear: only an objective, scientific model can accurately predict how advertisers should adjust campaigns to improve results. In an increasingly complex cross-channel ad world, it is ever more important for brands to apply these principles in their campaign measurement initiatives.