Commentary

Tackling Conversion Attribution: The Importance Of Human Analysis

With all of today's advancements in conversion-tracking technology, how are you tackling the question of assigning credit across the multiple influences on your conversion activity?

It is certainly tempting to take the tried-and-true approach to this new problem: Study it, develop a product feature and release it to throngs of eager marketers. This approach has worked to resolve many challenges our industry has faced in the past.

The problem with this approach, however, is one of complexity.

• Can you simply credit all interactions equally and then get a score for each channel that contributed to a conversion? That gives no weight to recency, sequence, frequency or event type.

• What if we changed the scoring so that clicks were worth more than natural searches, which in turn were worth more than impressions, and then layered on diminishing value for age and sequence? That would be better, but frequency is still ignored, as is site "stickiness." (A rough sketch of this kind of weighting follows this list.)

• Surely a site should get more credit when the user interacted with the same rich media banner several times in a row than when those interactions were spread out over time, with a batch of impressions sandwiched in between.

• And how do you account for the targeting that determined which banner was served, or the content within it? Furthermore, don't the variables I choose to measure inherently influence the results?
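To make that complexity concrete, here is a minimal sketch of what such a scoring feature might look like. It is illustrative only: the event records, channel labels, weights and decay rate are all assumptions made up for this example, not anything a particular product actually ships.

```python
from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict

# Illustrative weights only; every number here is a judgment call, which is
# exactly where the bias creeps in.
EVENT_TYPE_WEIGHT = {"click": 1.0, "natural_search": 0.6, "impression": 0.2}
DAILY_DECAY = 0.9  # recency: each day of age shrinks an event's value by 10%

@dataclass
class Event:
    channel: str        # e.g. "paid_search", "display_site_x" (hypothetical labels)
    event_type: str     # "click", "natural_search" or "impression"
    timestamp: datetime

def allocate_credit(path, conversion_time):
    """Split one conversion's credit across channels using event-type
    weights plus a simple recency decay. Frequency, sequence and
    creative targeting are deliberately ignored, as in the list above."""
    raw = defaultdict(float)
    for event in path:
        age_days = max((conversion_time - event.timestamp).days, 0)
        raw[event.channel] += EVENT_TYPE_WEIGHT.get(event.event_type, 0.0) * DAILY_DECAY ** age_days
    total = sum(raw.values()) or 1.0
    return {channel: round(weight / total, 3) for channel, weight in raw.items()}

# Example: one impression a week out, then a paid search click on the day of
# conversion. The click dominates the credit; whether it should is the question.
convert_at = datetime(2009, 2, 5)
path = [Event("display_site_x", "impression", datetime(2009, 1, 29)),
        Event("paid_search", "click", datetime(2009, 2, 5))]
print(allocate_credit(path, convert_at))
```

Even this toy version bakes in several contestable choices (the type weights, the decay rate, the decision to ignore frequency and sequence), which is exactly the bias problem described in the list above.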

If one wanted to build such a product feature, I'm sure they could; they'd come up with a justifiable methodology for each of those questions. But these are hard questions, and trying to measure something that resists precise description means each of those methodological decisions cannot help but reflect some bias.

But how is that actionable? If I find that Site X deserves more credit for conversion activity than I previously thought using last-click allocation, does that mean I should spend more on Site X? Or do the results need interpretation at a finer granularity? We could build a complex modeling scheme in which we play around with the credit allocations, without changing the actual conversion results, until we find a set of valuations we think makes sense. But what would make sense about it? That it matches our preconceived notions? We could do this for hours, and ultimately all we would have is a collection of differing theories that all fit the data. Is that any more actionable?

This is a slippery slope. In the end, you still can't write a program that analyzes this type of complex information better than an intelligent, well-trained human.

The following is an example from a study conducted for a client that underscores the importance of human analysis in approaching these challenges.

• Close examination of search vs. display patterns showed that users treated paid search links as a quick navigational tool for getting to the website. Although nearly 50% of conversions were credited to paid search, a high percentage of those paid search conversions had only a single paid search event in their path.

• In this client's case, performance was all about branded keywords. Nearly 90% of the paid search clicks leading up to a conversion and receiving credit for it were on branded keywords. 34% of total conversions had ONLY branded keywords in their path, 5% had ONLY non-branded keywords, and ONLY 1% had both branded and non-branded keywords.

• Display media buys continued to drive conversions and to influence the conversions ultimately won by paid search.

• 55% of total conversions were credited to display media (last ad wins).

• 68% of total conversions had at least 1 display media event in their path.

• 55% of total conversions had ONLY display media events in their path.

• Excluding conversions that only had 1 event in their path, 23% of the remaining conversions had at least 1 display event and 1 paid search event in their conversion path.

The analysis also revealed that natural search tracking was clearly missing from the mix. If branded-keyword navigation is a key technique end users rely on to reach the site, it's necessary to know how many people were using natural search results in addition to, or instead of, the paid keywords.

Fortunately, the technology was able to show the interaction between the two rather than treating them as mutually exclusive channels, and data collection to refine the analysis continues with this client today. (A rough sketch of how this kind of path breakdown can be computed follows.)
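For readers wondering how figures like the ones above are produced, here is a minimal sketch of classifying conversion paths by channel composition. The data layout and channel labels are assumptions for illustration; a real conversion-path feed would carry far more detail (event types, keywords, timestamps).

```python
from collections import Counter

def classify_paths(conversion_paths):
    """Tally conversion paths by channel composition. Each path is assumed
    to be an ordered list of channel labels, e.g. ["display", "paid_search"]."""
    counts = Counter(total=len(conversion_paths))
    for path in conversion_paths:
        channels = set(path)
        if channels == {"display"}:
            counts["display_only"] += 1          # cf. "ONLY display media events"
        elif channels == {"paid_search"}:
            counts["paid_search_only"] += 1
        if "display" in channels:
            counts["has_display"] += 1           # cf. "at least 1 display media event"
        if len(path) > 1 and {"display", "paid_search"} <= channels:
            counts["multi_event_display_and_search"] += 1
    return counts

# Example with three toy paths; percentages come from dividing each bucket
# by counts["total"].
paths = [["display"],
         ["paid_search"],
         ["display", "display", "paid_search"]]
print(classify_paths(paths))
```

The real study also split paid search into branded and non-branded keywords and distinguished event types, but the basic exercise is the same: bucket each path by what it contains, then decide what the buckets mean. That last step is where the human comes in.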

So what should marketers do? Don't expect tools or systems to perform the same level of analysis as a human. That is why conversion path data is such a valuable resource when paired with a human to interpret it. Marketers should either pull the data in-house with a detailed data feed and put an analytics team to work on it, or hire experienced consultants to evaluate it. Demand specific recommendations, and then try them out with the next campaign. The essentials of marketing haven't changed: apply a discipline, and then test, test, test. Conversion path data is fresh and new; work with it for a while before deciding what the real patterns are. Some basic standardized reports can point us in the right direction, but the white coats in the lab still provide the best answers.

4 comments about "Tackling Conversion Attribution: The Importance Of Human Analysis".
  1. Adam Goldberg from ClearSaleing, February 5, 2009 at 10:24 a.m.

    Excellent blog. How do you tell if the attribution rules established by human analysis are working? What is your definition of a model that works?

    My company tracks Purchase Path data so we can clearly see, in chronological fashion, all of the impressions, clicks, direct visits, and SEO visits that led up to a conversion. We calculate the true profit (revenue - margin - ad expense = true profit) earned on any conversion even if it happens offline. When we apply new attribution rules, we see whether total profit from advertising increases. If we see an increase, then we know the model we developed and put into place is on the right track.

    I would love to know how you do the same.

    Thanks,

    Adam

  2. Ann Betts from FetchBack, February 5, 2009 at 12:05 p.m.

    This issue is one our clients have brought forward to us more and more with each passing day.

    When working with any organization, our best solution is exactly what you suggested above. We (FetchBack) use detailed analytics to measure all campaigns, and use that data to get a true gauge of the customer experience, helping marketers gain the clearest possible understanding and make adjustments as necessary.

    Hopefully, in the coming years, more scalable and comprehensive solutions will become available. That, in conjunction with human expertise, should help ease some of this stress.

  3. John Grono from GAP Research, February 5, 2009 at 3:18 p.m.

    And may I humbly add: what if the impetus to go online and either search for the product/brand or go to the sites where the display ad appeared was triggered by "traditional" media? That is, just maybe it was the TV campaign that triggered the spike in searches and the subsequent click-throughs and sales. What attribution should be given to the marketing activities further up the funnel? Maybe it was the letter-box drop that triggered the user typing in the URL. While I concur with your post and its sentiment, I think it is just a tad narrow in its focus.

  4. Glenn Mar, February 6, 2009 at 2:09 a.m.

    John, I agree that those items are worth considering. While they are difficult to measure directly, the timing of offline exposure shouldn't be too hard to account for even if the actual overlapping reach is.

    Ann, nice to hear that you are seeing the same demand.

    Adam, in MOJO Adserver we can certainly log revenue per conversion event and use media cost (accounting for many different buy types and durations) as part of the automatic financial reporting. We can also accept offline activity and integrate it into our reports, provided there is some key value to connect it with the data we collect online. Product expense is (naturally) something our clients generally don't want to break out for us, but I see no reason why the revenue figure passed to us couldn't have a unit cost already subtracted from it. So in short, everything you're describing is already supported out of the box.
