Marketing Attribution Models Are Not A Magic Wand To Improve Sales

Advertisers may have become more pragmatic when it comes to marketing attribution models and tools, but they are no magic wand. While the right ones bring value to a business, we’re still in the early days of their implementation. On their own, for instance, most models cannot easily capture the full complexity of the journey taken by customers to make a purchase today. That journey now takes place across an increasingly fragmented display landscape that came into being following the advent of real-time bidding (RTB) and from the perception that display advertising can drive performance in a fashion similar to search.

As a result, it’s important to understand that all direct-response advertisers -- from next-generation algorithmic method users to the tried-and-true last-click attribution users -- may have different methodologies when it comes to attribution. However, a clear consensus is emerging about the importance of evaluating whichever model you have in place so that it ultimately has a positive impact on your business.



First of all, you must challenge whatever findings you are provided with by your model. No matter how cutting-edge, a model will always remain just that -- a simplified version of reality for easier review. So it’s important to know where the boundary is between simple and simplistic by getting a clear view of what your model is missing. If you don’t, you might miss out on sales opportunities. Although 80% of direct-response advertisers use last-click, for example, there are now marketing attribution models and methods -- from single-touchpoint to multi-touchpoint -- that enable advertisers to become more sophisticated in their approach to determining ROI across digital channels as well as multiple screens, sales channels, or media. To fully reap the benefits for your business’ needs, you must recognize early on that you’ll need to leverage new technology platforms and skills, be prepared for disruption, and that you may need to develop new relationships with external vendors.
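To make the single- vs. multi-touchpoint distinction concrete, here is a minimal, purely illustrative sketch contrasting two common attribution rules over one hypothetical customer journey (the channel names are invented for the example, not taken from any particular platform):

```python
# One hypothetical customer journey: the ordered touchpoints before purchase.
journey = ["display", "email", "search", "direct"]

def last_click(touchpoints):
    """Single-touchpoint rule: 100% of the credit goes to the final touchpoint."""
    return {ch: (1.0 if i == len(touchpoints) - 1 else 0.0)
            for i, ch in enumerate(touchpoints)}

def linear(touchpoints):
    """Multi-touchpoint rule: credit is split evenly across every touchpoint."""
    share = 1.0 / len(touchpoints)
    return {ch: share for ch in touchpoints}

print(last_click(journey))  # all credit lands on "direct"
print(linear(journey))      # each channel gets 0.25
```

The same journey data produces very different ROI pictures depending on which rule you apply — which is exactly why the model you choose needs to be challenged rather than taken at face value.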

In the meantime, remember what you learned in grade school about what happens when you assume? It still holds true when it comes to making the best use of your marketing attribution model: don’t assume -- start testing right away. After all, testing is the best way to demonstrate causality -- from clearing up doubts about the value of a marketing channel to seeing what doubling your spend there would really do to your sales.

Two solutions that you can immediately implement for this purpose are “Second View” -- using advanced attribution tools to examine which channels are most present in the customer journey -- and “A/B Testing,” which involves comparing online behaviors between two groups of users -- one exposed to traditional display and another to performance display. By enriching existing metrics through Second View, you can get a more practical view of the channels that are instrumental in generating sales, while A/B Testing can deliver a true measure of causality and keep infrastructure changes at bay before you’re ready.
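The A/B Testing idea boils down to simple arithmetic: compare the conversion rate of a control group against a test group and compute the lift. A minimal sketch, with made-up counts purely for illustration:

```python
# Hypothetical A/B test: 10,000 users see traditional display (control),
# 10,000 see performance display (test). Counts are invented for the example.

def conversion_rate(conversions, users):
    return conversions / users

control_rate = conversion_rate(120, 10_000)   # traditional display
test_rate = conversion_rate(156, 10_000)      # performance display

# Relative lift of the test group over the control group.
lift = (test_rate - control_rate) / control_rate
print(f"Control: {control_rate:.2%}, Test: {test_rate:.2%}, Lift: {lift:.1%}")
# → Control: 1.20%, Test: 1.56%, Lift: 30.0%
```

Because the two groups differ only in which treatment they received, a difference like this speaks to causation rather than mere correlation -- which is what path-based attribution alone cannot give you.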

When it comes to influencing the purchasing decision, there are certain touchpoints you need to focus on and others whose importance is dubious. For instance, your model shouldn’t include direct traffic or “navigational” channels -- those that enter the customer journey but don’t make a difference in convincing customers to complete their purchase. In an eConsultancy survey commissioned by Google in April 2012, only 14% of advertisers said they believed that such touchpoints -- which receive credit under last-click attribution because they sit one click prior to purchase in the customer journey -- are “very effective.” Although this might not seem like a game-changer when it comes to switching attribution models, it’s actually a big step toward what really matters to you, the advertiser: generating more sales.
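One simple way to act on this is to drop navigational touchpoints from the journey before assigning credit. The sketch below is a hypothetical illustration (the channel labels and even-split rule are assumptions for the example, not a prescription):

```python
# Touchpoints treated as "navigational" -- assumed labels for illustration.
NAVIGATIONAL = {"direct", "brand_search"}

def attribute_excluding_navigational(touchpoints):
    """Drop navigational channels, then split credit evenly among the rest."""
    working = [ch for ch in touchpoints if ch not in NAVIGATIONAL]
    if not working:          # the journey was purely navigational
        return {}
    share = 1.0 / len(working)
    return {ch: share for ch in working}

print(attribute_excluding_navigational(["display", "email", "direct"]))
# → {'display': 0.5, 'email': 0.5}
```

Under last-click, "direct" would have taken all the credit; excluding it redistributes that credit to the channels that plausibly did the convincing.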

As a marketer, knowing how to get customers to the cash register is as much your job as making strategic decisions as to what your advertising spend should be. By keeping these suggestions in mind, you may not have found a silver bullet for managing the complexity of marketing attribution models, but you’ll have an improved understanding of your ROI until emerging, more sophisticated, multi-touchpoint models appear in the marketplace.
3 comments about "Marketing Attribution Models Are Not A Magic Wand To Improve Sales".
  1. Jeff Zwelling from Convertro, October 25, 2013 at 2:28 p.m.

    Interesting article, Greg. While you make some valid points about the complexity of the customer journey, that is not a stumbling block for all attribution providers on the market. Technology can now track and link paths that were previously thought impossible to measure, such as cross-device browsing, TV and direct mail. The same goes for your comment about making assumptions -- there are some models that do hinge on that particular strategy, but it’s not a reality of every solution. Media and marketing mix models do rely more heavily on assumptions, but data-driven multi-touch attribution models go the extra step of incorporating the full customer journey, however complex it may be. At Convertro, we use algorithms that track the customer’s path to purchase from start to finish, across all marketing touch points, both online and off. This allows us to assign credit to the various touch points in the journey in a scientific way, rather than simply relying on the most commonly held industry beliefs. It may not be a magic wand, but the technology exists to measure user responses, not proxies, bringing companies much closer to the ideal than what people believe is possible.

    Jeff Zwelling
    CEO and co-founder of Convertro

  2. David Dowhan from TruSignal, October 25, 2013 at 4:28 p.m.

    Jeff - Greg is very careful to make the distinction between correlation and causation. He is saying that advanced attribution tools can give good insights into the path to purchase -- which touch points are correlated with a purchase. But just because a particular touch point is on the path to purchase does NOT mean it helped cause the final purchase itself. The causation of a particular campaign can be assessed through carefully controlled A/B testing, not by connecting the dots along a path to purchase. I do understand that advanced attribution systems can provide extremely valuable insights into the relative importance of various channels and help guide channel budget allocation; however, it is incorrect to calculate the "credit" a particular campaign should be awarded on the basis of the timing, frequency, and placement of a touch. That is a huge leap of faith from correlation to causation.

    In short, my position is:

    Attribution systems -- good for understanding the path to purchase and allocating budgets across the channel mix.
    A/B testing -- good for campaign-level ROI analysis.

    I would absolutely love to be proved wrong here (and maybe a demo of Convertro would help me see the light), but I have not come across an attribution system yet that can calculate campaign-level ROI using causation as the basis instead of correlation.

  3. Nathan Janos from Convertro, October 25, 2013 at 6:30 p.m.

    Hi David, Convertro approaches the multi-touch attribution problem from a machine learning perspective, using statistics and a contribution algorithm to determine the relative influence of each channel of advertising exposure at the individual path level. It *is* important to distinguish between causality and correlation, and Convertro does this by applying various business rules (such as down-weighting navigational sources) in a post-regression application of the model. Marketers can't yet reach into people's minds and model the psychology of each individual user, but we've gone a long way toward understanding each touchpoint at a user level by applying robust statistical models to our data in a source-agnostic fashion, thus eliminating the built-in bias that is a primary effect in every type of heuristic model in the market. A/B testing is, in theory, a great way to understand the differentiability of campaigns, but only when the campaign complexity is at a very basic level. It would be impossible to A/B test tens or hundreds of thousands of individual creatives and campaigns against each other just due to sheer combinatorics (not to mention the technical challenges), which is why we use a statistical approach to determine weights across a deep granularity of sources to fill in these gaps (down to the keyword level in the case of PPC, for example). In our validation studies we've found that our model predicts conversions in a hold-out data set significantly better than any other heuristic model.

    Nathan Janos
    Chief Data Officer @ Convertro
