Commentary

Attribution & Analytic Models - Incredible! But Only As Good As The Plethora Of Databases Involved

by , Op-Ed Contributor, November 19, 2020

Day 3 of ARF’s “Attribution & Analytics Accelerator” focused on “Analytics-Driven Business Results.”

Five case studies touched on optimizing the creative mix, on media selection beyond digital and television, and even on going beyond media to encompass the full marketing channel mix when grounded in a truly holistic approach.

Kroger, AXA Insurance, Citi, Kellogg, and K12, together with the day’s sponsor iSpot.tv, revealed the distinct value and contribution to improving marketing decisions that different modeling approaches, and their associated databases, can deliver.

They also echoed the two previous days’ admonitions regarding the holes in any database resource, overall data availability and validity, and the complexities, if not impossibilities, of harmonizing disparate databases.

Session leader John Leeman, vice president of integrated marketing at PetSmart, clearly understands both the marketing challenges that these approaches can address and the challenges with the approaches themselves.

In what was for me the most intriguing presentation of the day, Sarah Landsman, vice president of marketing at Kroger, and Mike Menkes, senior vice president at Analytic Partners, presented “Adaptive Analytics for Speed to Insight and Real-Time / Right Time Decisioning.”

It outlined Kroger’s program, which is helping to track performance, shape quicker decisioning, and inspire a test-and-learn culture across the entire organization.

Marketing mix modeling, for all its unique capabilities, has been notorious for the time it takes.  Analytic Partners has put it on steroids, with a few twists.  The Kroger-AP partnership is now providing positive results in “real time,” while still measuring holistically across marketing and non-marketing drivers, factoring in both controllable and non-controllable variables, and continually testing new capabilities.

Sarah also stressed the importance of using analytics to drive both long-term brand health and short-term performance.  The extensive integrated database that the model uses was described as “a single source of truth,” which, in my opinion, is rather misguided, especially when purchased Facebook and Google data, to name just two sources, are known to be full of holes!

AXA Insurance underscored the nightmare of collecting accurate data to meaningfully “feed” any analytic model in a presentation called “Machine Learning Models for Underwriting and Pricing Around the World.” The presenters described the data quality and reliability involved as “questionable.”

AXA has been building and deploying machine-learning-enabled models worldwide for underwriting and pricing, utilizing internal data merged with third-party data.  Their innovative analytic approach currently enables them to guide the business decisions of over 100 users, and their more than 5,000 accounts, in this highly competitive billion-dollar global business.

Newcombe Clark, business solutions, strategic analytics, AXA XL, and Rob Moss, global head of pricing, marine and crisis management, AXA XL, now have the ability to balance pitching and pricing decisions based on profitability versus gross revenue, two measures that are often in tension in this business.

Anthony Michelini, managing director, global head of brand strategy, media and analytics at Citi, and Marc Vermut, vice president of marketing solutions at Neustar, focused on the challenge of managing near-term ROI versus longer-term ROMI (return on marketing investment) across the portfolio of Citi products.

Halo effects of the Citi brand name play a major role in the marketing of any one product, which is why Citi uses a unified brand measurement framework.  Neustar embraces both multi-touch attribution (MTA) and MMM: the former to address audience propensities and specific campaign investment adjustments, the latter to budget resources across the portfolio based on which channel drives the greatest incremental impact within the total ecosystem.

“Leveraging the Super Learner’s Counterfactual for Scenario Planning” is Kellogg’s unique household targeting tool, developed in partnership with NCSolutions (NCS).  Janelle Bowman, senior director, advanced data & integrated analytics, Kellogg Company; Dr. Leslie Wood, chief research officer, NCS; and Shobana Balasubramaniam, senior manager, R&D, NCS, explained that Kellogg is leveraging a machine-learning sales-effect methodology that produces a “counterfactual” -- estimates of sales with and without advertising -- for every household.

Essentially, the approach identifies the household target groups most responsive to advertising, including those that were not reached, and assesses their potential ROI value if they were reached.
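To make the counterfactual idea concrete, here is a minimal, hypothetical sketch in Python.  It is not NCS’s Super Learner pipeline; it uses a simple two-model (“T-learner”) setup on toy data, and every column name and effect size is invented purely for illustration.

```python
# Hypothetical sketch of a household-level "counterfactual" sales estimate.
# This is NOT NCS's Super Learner pipeline -- just a minimal two-model
# (T-learner) illustration of the with-vs-without-advertising idea.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Assumed toy schema: one row per household with pre-period covariates,
# an ad-exposure flag, and observed campaign-period sales.
rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "past_sales": rng.gamma(2.0, 10.0, n),
    "household_size": rng.integers(1, 6, n),
    "exposed": rng.integers(0, 2, n),
})
df["sales"] = (df["past_sales"] * 0.8
               + df["exposed"] * 3.0 * (df["household_size"] > 2)
               + rng.normal(0, 2, n))

features = ["past_sales", "household_size"]

# Fit separate outcome models for exposed and unexposed households.
m_exposed = GradientBoostingRegressor().fit(
    df.loc[df.exposed == 1, features], df.loc[df.exposed == 1, "sales"])
m_control = GradientBoostingRegressor().fit(
    df.loc[df.exposed == 0, features], df.loc[df.exposed == 0, "sales"])

# Counterfactual pair for EVERY household: predicted sales with and
# without advertising; the difference is the estimated incremental lift.
df["sales_if_exposed"] = m_exposed.predict(df[features])
df["sales_if_unexposed"] = m_control.predict(df[features])
df["incremental"] = df["sales_if_exposed"] - df["sales_if_unexposed"]

# Most responsive households -- including ones the campaign never reached.
print(df.sort_values("incremental", ascending=False).head())
```

The point is the per-household pair of predictions: the gap between them is the estimated incremental lift, and it can be computed even for households the campaign never reached.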

David Young, vice president of demand generation at K12, started his presentation by laying out the limitations of MMM, which was likely considered contentious among the modeling cognoscenti despite David’s recognized expertise in this arena.

As Mike Menkes of Analytic Partners and Marc Vermut of Neustar also made clear, each modeling approach has value and is suited to addressing quite different marketing or advertising questions.  For example, it was suggested that MMM cannot measure differences by creative, daypart, station, or spot length.

As a strictly online marketer, K12 has unique media and marketing challenges.  David reviewed “Combining real-time TV tracking and mixed models to double TV ROI” with Mark Myers, senior vice president of customer success at iSpot.tv.

K12 combined real-time TV tracking data (iSpot.tv) with traditional marketing mix models to measure varying and long-term TV ad effectiveness.  The team was up against time-varying coefficients, massive volumes of data, and challenging complexity, and ultimately doubled the ROI of national TV advertising.
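One common way to handle coefficients that drift over time is a rolling-window regression.  The sketch below is purely illustrative -- not K12’s or iSpot.tv’s actual model -- and the variable names (tv_grps, enrollments) and toy data are my own assumptions.

```python
# Hypothetical sketch of a time-varying ("rolling") TV coefficient,
# one way to let estimated TV effectiveness drift week to week.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.regression.rolling import RollingOLS

rng = np.random.default_rng(1)
weeks = 156  # three years of weekly data
df = pd.DataFrame({
    "tv_grps": rng.gamma(3.0, 50.0, weeks),
    "search_spend": rng.gamma(2.0, 30.0, weeks),
})
# In this toy data the true TV effect drifts over time.
beta_tv = 0.5 + 0.3 * np.sin(np.arange(weeks) / 25.0)
df["enrollments"] = (200 + beta_tv * df["tv_grps"]
                     + 0.4 * df["search_spend"]
                     + rng.normal(0, 20, weeks))

X = sm.add_constant(df[["tv_grps", "search_spend"]])
# 26-week rolling window: each fit yields that window's TV coefficient,
# so the model can track rising or falling TV effectiveness.
rolling = RollingOLS(df["enrollments"], X, window=26).fit()
print(rolling.params["tv_grps"].dropna().tail())
```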

It should be noted that iSpot.tv measures viewable impressions -- content rendered to a screen -- and not audience “viewing.”  This was a data-relevance concern raised in yesterday’s session.

The key questions for analytic techniques and models, which have made tremendous strides, remain: “Is the data any good, and is it relevant?”  And when is a media “impression” not an impression that will actually drive a brand outcome?  (Rhetorical!)

Stay tuned for my Commentary on the final day, “Accelerating Recovery,” featuring MolsonCoors, Estee Lauder, GSK, Chick-Fil-A, and OptiMine.

1 comment about "Attribution & Analytic Models - Incredible! But Only As Good As The Plethora Of Databases Involved".
  1. John Grono from GAP Research, November 19, 2020 at 4:45 p.m.

    Thanks again Tony.

    Ironically, there are creative ways to address the suggestion that "MMM cannot measure differences by creative, dayparts, stations, spot lengths".

    While most data is numeric, you can use a numerical measure for things like spot length, station mix, and daypart mix, or use a dichotomous Y/N variable.

    One brand we worked on was 'Scratch Lotteries'.   Clearly the prize pool was the biggest driver, but sometimes sales results didn't meet expectations.   The agency and the lottery client both thought that the creative was probably the differentiating factor.   But how do you measure creativity?

    This was in 2000, in the lead-up to the Sydney Olympics, so we got the creatives, the agency marketing team, and the client to 'award' Gold, Silver, and Bronze medals to each ad.   We used that in the model and, blow me down ... it worked.   We fine-tuned it with the 'theme' of the ad and improved it even further.

    Moral of the story - not all predictor variables need to be numeric.
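Picking up on John Grono’s point, here is a minimal, hypothetical sketch of an MMM-style regression in which judged creative quality enters as an ordinal “medal” score, daypart as a dummy-coded categorical, and spot length as a dichotomous flag.  The data, variable names, and effect sizes are all invented for illustration.

```python
# Hypothetical sketch: non-numeric predictors (creative "medal", daypart,
# spot length) entering an MMM-style regression as ordinal or dummy-coded
# variables. Toy data only -- not the actual Scratch Lotteries model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200  # weekly observations
df = pd.DataFrame({
    "prize_pool": rng.gamma(5.0, 100.0, n),
    "tv_grps": rng.gamma(3.0, 40.0, n),
    # Ordinal encoding of judged creative quality: Bronze=1 ... Gold=3.
    "creative_medal": rng.integers(1, 4, n),
    # Categorical daypart becomes dummy variables via C() in the formula.
    "daypart": rng.choice(["day", "prime", "late"], n),
    "is_30s_spot": rng.integers(0, 2, n),  # dichotomous Y/N variable
})
df["sales"] = (50 + 0.6 * df["prize_pool"] + 0.3 * df["tv_grps"]
               + 8.0 * df["creative_medal"]
               + 5.0 * (df["daypart"] == "prime")
               + rng.normal(0, 15, n))

# The coefficient on creative_medal estimates the sales lift per medal
# step; the C(daypart) dummies capture daypart differences.
model = smf.ols(
    "sales ~ prize_pool + tv_grps + creative_medal + C(daypart) + is_30s_spot",
    data=df).fit()
print(model.summary().tables[1])
```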
