Commentary

Attribution: Why We're Still Doing Precise Things With Imprecise Data

  • by , Op-Ed Contributor, November 15, 2022

Day One of the Advertising Research Foundation’s (ARF) annual “Attribution & Analytics Accelerator” conference on Monday began the way past years have: raising as many questions as it answered about various modeling approaches and techniques. The focus was on experiments to improve best practices and the rigor of estimating incremental ROI.

“Encouraging forward-thinking and separating causality from coincidence,” noted Jim Spaeth, Partner at Sequent Partners, which co-hosts the annual event with the ARF.

Media Rating Council (MRC) Senior Vice President Ron Pinelli set the stage with a review of its recently released “Outcomes & Data Quality Standards,” the product of a 300-member MRC working group. The standards build on an array of standards previously published by the MRC and have particular relevance and application to the complexities and potential accreditation of any attribution service.

Tonal Vice President of Growth Ian Yung and Measured CEO Trevor Testwuide presented experimental approaches they’ve been using to understand media’s contribution to incremental ROAS (return on ad spend) and to reduce wasted spend. They suggested “last touch attribution is not working and cannot be trusted.”  Amen!

Of particular note was their insight that ad spending on social media platforms can be overkill in certain circumstances, based on their approach to evaluating ROAS.
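
To make the distinction concrete, here is a minimal sketch with entirely made-up numbers (it is not Tonal’s or Measured’s methodology or data): last-touch ROAS credits the ad with every conversion it last touched, while incremental ROAS credits only the revenue an exposed group generates above a comparable unexposed holdout.

```python
# Toy numbers for illustration only -- not data from the ARF session.
ad_spend = 100_000.0                    # hypothetical campaign spend ($)
last_touch_revenue = 400_000.0          # revenue from conversions where this ad was the last touch
exposed_users = 500_000                 # users who saw the ad
exposed_revenue_per_user = 3.60         # average revenue per exposed user
holdout_revenue_per_user = 3.20         # average revenue per user in an unexposed control group

last_touch_roas = last_touch_revenue / ad_spend
incremental_revenue = (exposed_revenue_per_user - holdout_revenue_per_user) * exposed_users
incremental_roas = incremental_revenue / ad_spend

print(f"Last-touch ROAS:  {last_touch_roas:.2f}")   # 4.00 -- what the platform report shows
print(f"Incremental ROAS: {incremental_roas:.2f}")  # 2.00 -- what the experiment supports
```

In this toy example, last-touch flatters the campaign by a factor of two relative to the experiment-based estimate.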

In what appeared to be a pitch for Meta’s open source “GeoLift,” Meta Marketing Science Partner Nicolas Cruces suggested “democratizing measurement” and “providing the opportunity to replicate research.”

I found that encouraging in light of the data walled gardens many social media companies have in place.

Mercado Libre Insights & Analytics Manager Victoria Schiappacasse said using GeoLift provides high impact at low risk for marketers.  However, ad adjacency to toxic material on any social media feed is always a brand-safety concern, whatever the analytics indicate.

When it comes to social media platforms, Ocean Spray has determined that prospecting works better than retargeting, based on large-sample randomized controlled tests directed by Marketing Attribution CEO Ross Link.

Link suggested that randomized controlled testing (RCT) has no selection bias.  Based on the MRC’s outcomes standards, perhaps that claim needs to be assessed?
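
For readers unfamiliar with the claim, the argument is that random assignment, rather than user self-selection, decides who sees the ad, so exposed and unexposed groups are comparable by construction. The toy simulation below (a sketch with invented propensities, not Ross Link’s or Ocean Spray’s methodology) shows how self-selected exposure inflates a naive lift estimate while random assignment recovers something close to the true effect.

```python
import random

random.seed(0)

# Toy simulation for illustration only -- invented numbers, not the Ocean Spray tests.
TRUE_AD_LIFT = 0.02          # the ad truly adds 2 points of conversion probability
N = 200_000                  # simulated users per scenario

def estimated_lift(randomized: bool) -> float:
    exposed_conv = exposed_n = control_conv = control_n = 0
    for _ in range(N):
        base_intent = random.random() * 0.10              # baseline purchase propensity, 0-10%
        if randomized:
            exposed = random.random() < 0.5               # coin-flip assignment (RCT)
        else:
            exposed = random.random() < base_intent * 8   # high-intent users self-select into exposure
        p_buy = base_intent + (TRUE_AD_LIFT if exposed else 0.0)
        bought = random.random() < p_buy
        if exposed:
            exposed_n += 1
            exposed_conv += bought
        else:
            control_n += 1
            control_conv += bought
    return exposed_conv / exposed_n - control_conv / control_n

print(f"Self-selected exposure, naive lift estimate: {estimated_lift(False):.3f}")  # well above 0.02
print(f"Randomized exposure (RCT), lift estimate:    {estimated_lift(True):.3f}")   # close to 0.02
```

In practice, of course, field experiments are messier than this toy, which is one reason independent standards for assessing them matter.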

Adding to the complexities of attribution models is the need to protect users’ data privacy amid increasing regulation, as well as the deprecation of cookies.

This factor was stressed by Dentsu International Executive Vice President-Media Effectiveness Sudeshna Sen and echoed by Snapchat Global Agency Ad Research Lead Aarti Bhaskaran.

While this concern encourages media vendors to become partners with agencies and their clients – and to share data – it raises the question of how such vendor/seller data is independently verified, and against what standards.

These two presenters made the case for a “single source of truth,” though in my experience, there is no single source – nor truth – in today’s media and marketing analytics world.

Technology permits us to do very precise things with very imprecise data, and the concluding panel of the day reminded us that significant marketing decisions do not necessarily require the most stringent and detailed data, approach, or analysis to potentially increase a brand’s equity or sales.

A saving grace or...?  

Stay tuned for my commentary and questions from Day Two on Tuesday, along with further insights on the art vs. the science of attribution modeling and the use – or, as the case may be, abuse – of AI.  Or attend virtually.
