Clickthrough conversions are conversions that happen after a user clicks on an ad. The conventional wisdom is to credit every clickthrough conversion to advertising. The idea is that when users click on an ad, they not only indicate that they saw the ad but also show an interest in the product being advertised. Hence the subsequent conversion must be a direct effect of the advertising they saw and interacted with.
Unlike clickthroughs, viewthrough conversions happen when a user is served an ad upon visiting the publisher site. These conversions are differentiated from clickthroughs by the fact that the individual did not click on the ad even though the conversion did occur at a later time. For marketers, it is the lack of a click that raises the question of whether such conversions are due to advertising. After all, it is entirely possible that some of the 'exposed' users never even saw or paid attention to the ad. For these users, the conversions are simply incidental: they would have happened anyway.
Because of this complexity, when measuring ad effectiveness, marketers and agencies generally take one of two very different approaches to viewthroughs: they either attribute 100% of viewthrough conversions to an ad campaign or attribute none of them. The truth is that such dichotomous treatment is about as far from reality as you can get, and I believe both sides would agree that the real question is not either-or but how much. The fact that we have continued to do something we know is flawed is mainly due to the lack of an alternative. The interactive community has thus far failed to come up with a viable method to determine how many viewthrough conversions can truly be credited to an ad campaign.
There has been no lack of effort. The best-publicized study was done by DoubleClick and Continental Airlines in 2004. Using a carefully designed control group, the study found that 67.5% of viewthrough conversions could be attributed to the ad campaign. The finding is significant in that it empirically demonstrates that some viewthrough conversions (but not all) are a result of ad exposure. Unfortunately, the number 67.5% means little for those anxious to quantify how well their own campaigns are doing. After all, the attribution percentage is likely to differ from campaign to campaign, placement to placement, creative to creative, industry to industry, and so on. To make matters worse, such a control study is awfully expensive to run; it is simply not feasible as a day-to-day routine. In the end, such studies have left marketers and agencies without a viable way to measure how effective their campaigns really are. So the conventional either-or approach remains.
Have we truly hit a viewthrough dead end? Are we at the point where we should throw up our hands and give up? Not so fast. There is a new and innovative way to approach this seemingly unsolvable problem. Not only will this methodology let you estimate the true viewthrough attribution percentage, it will let you do so at very little additional cost (in fact, for most campaigns the incremental cost can literally be zero). And unlike the control-study approach, this method requires virtually no change to your original media plan. The only requirement is to track your campaign properly, which is what we are supposed to do anyway.
The crux of this approach is something called ad decay. We all know that human memory fades with the passing of time. This certainly applies to advertising as well. No matter how effective, memorable, or influential your creative is, if consumers stop seeing it for a while, it gradually fades from their minds. In other words, advertising effect decays over time. Empirically, the decay can be represented by a downward-trending curve.
When we focus on viewthrough conversions, the curve is constructed by bucketing all viewthrough conversions by days of latency at each frequency level. Latency measures the lag between the ad and the conversion: a latency of 1 day includes all conversions that happen within 1 day of the last time the user was served the ad. Since we normally track for 30 days, we are looking at 30 latency buckets. If the ad has any effect, we should see a downward trend as the latency days extend; in other words, more conversions happen at the beginning of the latency scale than at the end for the frequency level the curve is built upon. The decay curve, in and of itself, demonstrates that the ad campaign has some impact on conversions. Indeed, if there were no ad campaign, consumers would "convert" at a fairly constant pace along the latency scale; we would be looking at a flat line instead of a decay curve.
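The bucketing step above can be sketched in a few lines of Python. This is a minimal illustration, not a production pipeline: the `conversions` input, a list of (frequency level, latency in days) pairs, is an assumed shape for records pulled from your ad server logs, and the 30-day window simply matches the typical tracking period mentioned above.

```python
from collections import defaultdict

def build_decay_curves(conversions, window_days=30):
    """Count conversions per latency day, one curve per frequency level.

    Each curve is a list where index 0 holds 1-day-latency conversions,
    index 1 holds 2-day-latency conversions, and so on.
    """
    curves = defaultdict(lambda: [0] * window_days)
    for frequency, latency in conversions:
        if 1 <= latency <= window_days:       # ignore out-of-window records
            curves[frequency][latency - 1] += 1
    return dict(curves)

# Hypothetical sample: three conversions at frequency level 2
sample = [(2, 1), (2, 1), (2, 15)]
curves = build_decay_curves(sample)
print(curves[2][0])   # count of conversions with 1-day latency
```

Plotting each frequency level's list against latency day gives the decay curve described above.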
To take this line of thought one step further, we can actually calculate the elusive viewthrough attribution percentage from the decay curve. If the curve decays at a certain frequency level, there will be a point at which it starts to look more like a flat line than a curve.
It is from this point on that the effect of the ad campaign has become negligible. As we have already explained, if there had been no advertising, the rate of conversion would be a flat line; therefore the daily no-ad conversion rate for the plotted frequency level is simply the y value of the 'flat' portion of the decay curve. If we extend the 'flat' portion back to day 1, the area below the flat line is the total number of conversions that would have happened even without the ad campaign, and the area between the decay curve and the flat line is the number of conversions induced by advertising. The viewthrough attribution percentage for that frequency level is the area between the curve and the flat line divided by the entire area below the curve. This calculation should be repeated across frequency levels, and an overall attribution rate can be aggregated from the frequency-level numbers.
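The area calculation above can be sketched as follows. One assumption is loudly flagged here: the sketch estimates the flat baseline as the average of the last `tail_days` latency buckets, whereas in practice you would want a more careful test of where the curve actually flattens.

```python
def attribution_percentage(curve, tail_days=5):
    """Share of conversions attributable to the ad, from one decay curve.

    `curve` is a list of conversion counts per latency day for one
    frequency level. The tail average stands in for the flat 'no-ad'
    baseline (an assumption; see the note above).
    """
    baseline = sum(curve[-tail_days:]) / tail_days  # y value of the flat portion
    total = sum(curve)                              # entire area under the curve
    incidental = baseline * len(curve)              # area below the extended flat line
    attributed = max(total - incidental, 0.0)       # area between curve and flat line
    return attributed / total if total else 0.0

# Hypothetical decay curve over 10 latency days, flattening to 2/day
curve = [20, 12, 8, 5, 3, 2, 2, 2, 2, 2]
pct = attribution_percentage(curve)
```

Running this per frequency level and taking a conversion-weighted average of the results gives the overall attribution rate described above.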
The above process provides a viable and innovative method for assessing the impact of advertising on viewthrough conversions. However, we should not limit its application to viewthroughs; it should be applied to clickthrough conversions as well. The current practice of crediting every clickthrough conversion to the advertising campaign is conveniently misconceived. Like viewthroughs, some clickthrough conversions are also incidental. The fact that someone saw an ad, clicked on it, and converted later does not necessarily mean that the person would not have converted without the ad. Anyone who cares to construct an ad decay curve for clickthrough conversions will most likely see the same pattern as for viewthroughs: the clickthrough curve eventually flattens out. In other words, some level of incidental conversion exists for clickthroughs too; not every clickthrough conversion is driven by advertising. As a result, in addition to the viewthrough attribution percentage, we should also be calculating a clickthrough attribution percentage. The good news is that the approach we developed for viewthroughs can easily be applied to the clickthrough scenario.
I don't think I need to emphasize how important it is to measure ad effectiveness. Agencies rely heavily on effectiveness metrics for media planning and buying. Giving too much or too little credit to ad campaigns, as we do today, may be the best we can manage under the circumstances, but it leaves some media tactics on shaky ground. For instance, one publisher site may, on the surface, drive a lot of conversions (both viewthrough and clickthrough) from an ad displayed there, while in reality a majority of them are incidental; another site may show a lower conversion count but minimal incidentals. Without knowing the incidental rate (the current practice), we declare the first site the winner, when in reality the second site may perform better. This also changes the cost metrics associated with those conversions: the true cost per conversion of the second site may be a lot lower than that of the first. It is likely that with correct ad-effectiveness metrics calculated using this new approach, we will arrive at different and more optimal media plans than those created under current practice. This simple approach may therefore bring significant changes to the online media planning and buying landscape.
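The two-site comparison can be made concrete with a small, entirely hypothetical calculation. The spend, conversion counts, and attribution percentages below are made-up numbers chosen only to show how the ranking can flip once incidental conversions are discounted.

```python
def cost_per_attributed_conversion(spend, conversions, attribution_pct):
    """Spend divided by the conversions actually attributable to the ad."""
    attributed = conversions * attribution_pct
    return spend / attributed if attributed else float("inf")

# Site A: more raw conversions, but mostly incidental (hypothetical figures)
site_a = cost_per_attributed_conversion(spend=10_000, conversions=500,
                                        attribution_pct=0.30)
# Site B: fewer raw conversions, but mostly ad-driven
site_b = cost_per_attributed_conversion(spend=10_000, conversions=300,
                                        attribution_pct=0.80)
print(round(site_a, 2), round(site_b, 2))
```

By raw cost per conversion, Site A looks cheaper ($20 vs. $33); by cost per attributed conversion, Site B wins, which is exactly the reversal described above.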