Commentary

Fractional Attribution: Nice To Have, But Hard To Use

In the past few years, there has truly been a blossoming of companies trying to tackle -- or claiming to have tackled -- one of the thorniest issues in digital media metrics: fractional attribution.

Fractional attribution is, essentially, ad effectiveness measurement beyond the last ad exposure. For those who are new to the field, the last-ad model has dominated digital measurement for as long as we can remember. More recently, there has been a chorus of criticism of the last-ad approach, arguing quite correctly that even though the last ad exposure may well be more important than any other exposure, completely ignoring all the other exposures makes no empirical or theoretical sense. Consequently, conversion metrics based upon the last-ad model are not as accurate as they are supposed to be.

Fractional attribution is a way to address the flaw of last-ad attribution by taking multiple prior exposures into account -- in practice, giving a percentage of attribution to each touch-point leading up to conversion.
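To make the mechanics concrete, here is a minimal sketch of how such per-touchpoint credit might be computed. The conversion path, the channel names, and the two weighting schemes ("linear" and the so-called U-shaped split) are all illustrative assumptions, not the method of any particular vendor:

```python
# A minimal sketch of fractional attribution. The path and the weighting
# schemes below are hypothetical examples, not any vendor's actual model.

def fractional_credit(path, scheme="linear"):
    """Split one conversion's credit across the touchpoints in `path`."""
    n = len(path)
    if scheme == "linear":
        # Equal credit to every exposure.
        weights = [1.0 / n] * n
    elif scheme == "u_shaped":
        # 40% each to first and last touch; the rest split among the middle.
        if n == 1:
            weights = [1.0]
        elif n == 2:
            weights = [0.5, 0.5]
        else:
            middle = 0.2 / (n - 2)
            weights = [0.4] + [middle] * (n - 2) + [0.4]
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return dict(zip(path, weights))

# Hypothetical path: display ad, then search ad, then email.
path = ["display", "search", "email"]
print(fractional_credit(path, "linear"))    # each touchpoint gets 1/3
print(fractional_credit(path, "u_shaped"))  # display/email 0.4, search 0.2
```

Note that the fractions always sum to one conversion; the only question — and it is the whole question — is how to choose them.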

In today's market there are two different types of products/services that claim to do exactly that. First, there are companies whose tools give users the option to assign their own weights/fractions to different exposure points. The problem with this approach is that most users have no clue what weights they should be putting into the system. What makes it even worse is that the tool cannot tell us whether one weighting scheme is more valid than another. In fact, it is usually not even able to demonstrate empirically that fractional attribution is a better approach than the last-ad model. So to a certain extent, by using the tool we are making a leap of faith.

However, there is a more sophisticated type of attribution service in the market that goes beyond the user-defined-weight approach. Unlike the self-serve tool, the service usually includes an attribution engine that creates weights via statistical modeling of the campaign data fed into it. As a consequence, attribution is done in an automated fashion, without any gut-feel guesswork from users. Moreover, attribution done this way is far more accurate and valid than what users can come up with from experience, and it speaks directly to the very campaign whose effectiveness we are supposed to measure.
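A toy version of the idea can be sketched as follows: fit a statistical model (here, a simple logistic regression trained by gradient descent) on which channels each user saw and whether they converted, and let the fitted coefficients serve as the channel weights. The channel names and the eight campaign paths are synthetic, made-up data, and real attribution engines are far more elaborate; this only illustrates the principle of letting the data, rather than the analyst, set the weights:

```python
# Toy data-driven attribution: fit a logistic model on synthetic campaign
# paths so the data, not the analyst, determines the channel weights.
import math

CHANNELS = ["display", "search", "email"]  # hypothetical channels

# Each row: the set of channels the user was exposed to, and whether
# they converted (1) or not (0). Entirely made-up data.
paths = [
    ({"display"}, 0), ({"search"}, 1), ({"display", "search"}, 1),
    ({"email"}, 0), ({"display", "email"}, 0), ({"search", "email"}, 1),
    ({"display", "search", "email"}, 1), ({"display"}, 0),
]

def fit_weights(paths, steps=2000, lr=0.1):
    """Logistic regression via gradient descent; coefficients act as weights."""
    w = {c: 0.0 for c in CHANNELS}
    b = 0.0
    for _ in range(steps):
        for seen, converted in paths:
            z = b + sum(w[c] for c in seen)
            p = 1.0 / (1.0 + math.exp(-z))   # predicted conversion probability
            err = p - converted
            b -= lr * err
            for c in seen:
                w[c] -= lr * err
    return w

weights = fit_weights(paths)
# In this synthetic data, "search" appears in every converting path and in no
# non-converting path, so its fitted weight comes out largest.
print(max(weights, key=weights.get))  # -> search
```

The point is not this particular model but the workflow: the weights fall out of the campaign's own data, which is exactly what distinguishes this class of service from the assign-your-own-fractions tools.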

Unfortunately, at least at present, attribution modeling is not something that can be done cheaply. Crunching through log files is computationally intensive, which inevitably translates to high cost -- and cost is one of the factors inhibiting wider adoption.

Regardless of which fractional attribution approach we are using, the end point of an attribution process is no different in FORM from what comes out of the last-ad model: we should expect to see the number of "conversions" credited to each placement/creative. Unfortunately, effectiveness measurement is never the end goal -- optimization is. Having proper measurement does not automatically tell us how to make such media decisions as optimally (re)allocating media budget across different sites/placements.

To know precisely which sites/placements deserve additional spending and by how much is not a trivial thing. Many times such decisions are made with a lot of art but very little science, even with ad effectiveness measurement in hand. On the one hand, the industry definitely needs adequate performance metrics that are more accurate than those from the last-ad model. On the other hand, the industry would benefit even more if there were a decision engine that could take some of the art out of the media planning process.

There is no denying that the tech firms focusing upon fractional attribution have done a tremendous job of getting closer to a truer attribution model. However, nailing the attribution question without a good optimization engine/tool is putting the cart before the horse. I would safely predict that demand for fractional attribution services will increase significantly if their results can be fed into a media decision engine.

So for now, fractional attribution tools/services in their current form will remain something that is nice to have if there is extra budget -- but far from essential to the media decision-making process.

7 comments about "Fractional Attribution: Nice To Have, But Hard To Use".
  1. Craig Macdonald from Covario, July 29, 2011 at 12:28 p.m.

    Chen,

    I couldn't agree more with this. I have been doing work on this problem for 3 years now, and have consistently been baffled by the approach being taken around what I have called Transactional Attribution Modeling -- which is trying to find the incremental value of each media touch point. The number of permutations by which consumers a) interact with brand advertising digitally and b) then convert seems to defy the approach. All attempts to ultimately reallocate "last click" credit using first click, or a 50/50 split between first and last click, or whatever -- they all seem to have just as many issues or are providing arbitrary weights. I don't think computing power is the limitation. That is becoming increasingly cheap. The limitation seems to be the approach. It is not answering the salient question -- "how should I budget by channel in order to optimize media return?" Do you think econometrics is a more appropriate approach to this issue?

  2. Matt Curcio from Aggregate Knowledge, July 29, 2011 at 5:23 p.m.

    Chen,
    Fractional or Multitouch Attribution is only a piece of the puzzle. Regardless of the sophistication, MT Attribution is merely a scorecard of media based on historical data. The other pieces are testing (generally with public service announcements, or PSAs) and econometric portfolio optimization tools, as Craig mentioned above. The combination of MTA score-carding, thoughtful testing, and portfolio optimization will ultimately enable advertisers to be incredibly efficient in their buys. Of course, a platform that enables MTA models and testing to be run every day is implicit in making all of this actionable.

    Matt

  3. John Grono from GAP Research, July 29, 2011 at 5:45 p.m.

    Good article Chen, but I found its focus quite narrow.

    Let's say the digital attribution model is miraculously solved overnight - that then leaves the issue of including all the non-digital components of the communications mix in the communications attribution measurement model.

    In a past life I did a lot of work with econometric modelling. These models (in the majority) had strong explanatory power (>80%) but they ALL worked on aggregated data to explain what was driving aggregate sales. Further, while the explanatory variables analysed in each model numbered in the dozens, the key driving factors rarely exceeded 5. That is, while all these factors contributed in some way, the majority were either correlated to other more important factors and therefore could be explained in some other way, or their contribution (at an aggregate level) was comparatively small. Yep, Occam's Razor at work again.

    The thing we have to keep top of mind is that we do not have to understand every relationship at the micro-level when what a marketer is looking at is the big picture - which lever do I pull or which button do I press that sells the most product. Click attribution models work best when you have brands that ONLY use internet advertising to advertise brands that can ONLY be purchased via the internet (and there's not too many of them around).

  4. Andre Szykier from maps capital management, July 30, 2011 at 2:27 p.m.

    A consistent problem with fractional attribution is the use of beacons to track page views (CPM), click-throughs (CPC) and online purchases (CPT). The time between CPC and CPT can be long, and if the ad appears at more than one site while the beacon tracks only one site, then sale attribution for the CPT can be disputed by the seller.

    Typically the contract says that a maximum 30-day window between the CPC (from the original page view tracked by the beacon) and a CPT (regardless of which site got the viewer there) gives the value to the ad provider or ad network providing the tracking. After that window, no party is credited once a sale is completed.

  5. Bill Muller from Visual IQ, August 1, 2011 at 9:53 a.m.

    Great article Chen! As long-time fractional attribution practitioners we at Visual IQ agree with a great deal of what you say about the superior science that it provides to marketers. Where we differ with your thoughts is in the ability of marketers to translate its insights into actions. The key to doing this is for the fractional attribution provider to deliver a set of metrics which mirror their last ad metrics, but which are recalculated to include the fractional attribution. In other words, all the cross channel, cross campaign and cross attribute impact is built into these new metrics. Since most marketers and their agencies typically already have systems and processes in place for optimizing their media buys, putting fractional attribution into action is as simple as substituting the recalculated TRUE metrics for the skewed last ad metrics. This truly allows marketers to turn the corner from “art” to “science” – as does the ability to use media scenario planning functionality based on these recalculated metrics to inform future spending allocations (which serves as an “Easy Button” even if they don’t have existing optimization procedures). Simply by replacing skewed last ad metrics with metrics that incorporate fractional attribution, marketers typically see 10-40% increases in media efficiency – which in a direct response context translates to a 10-40% increase in ROI. In our view this type of return makes an investment in fractional attribution more than affordable.

  6. Wilson Kiw from Tips-For-Excel.com, August 3, 2011 at 4:57 a.m.

    Interesting, but how would you tackle the increasing use of multiple devices in attribution? My journey starts on my iPad, then converts on my home PC, for example...

  7. Wilson Kiw from Tips-For-Excel.com, August 3, 2011 at 5 a.m.

    also, here's a good review of an agency (admittedly one I work at) that tackles attribution using statistical methods. They're beginning to look at how social should feature in these paths too.

    http://www.havasdigital.com/insights/artemis-attribution-weighting/
