Poor Attribution Models Waste Billions Of Ad Dollars

Collective has developed a model to determine online ad attribution and plans to release a study Wednesday pointing to flaws in the methods marketers use today, such as click-through rates, post-click metrics, and impressions. The approach -- causal attribution -- aims to tie ROI from display and video advertising directly to advertising spend, including identifying campaigns that don't work.

Jeremy Stanley, SVP of data sciences at Collective, said today's methods don't prove a causal link between advertising expenditure and the resulting purchase. Instead, they confuse correlation with causation, leading advertisers astray. He calls such methods "misleading" and "subjective."

The study starts from Forrester Research's estimated $12 billion in U.S. display ad spend in 2011 and assumes that 75% of it -- $9 billion -- went to direct-response campaigns. It also assumes 20% of that direct-response spend was wasted due to misleading measurement systems, suggesting U.S. companies threw away $1.8 billion last year.
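
The arithmetic is easy to check -- a back-of-envelope reconstruction of the study's stated assumptions, not Collective's own figures:

```python
# Reconstruction of the study's waste estimate from its stated assumptions
total_display_spend = 12e9    # Forrester's 2011 U.S. display estimate
direct_response_share = 0.75  # assumed share going to direct response
wasted_share = 0.20           # assumed share wasted by misleading metrics

wasted = total_display_spend * direct_response_share * wasted_share
print(f"${wasted / 1e9:.1f} billion")  # -> $1.8 billion
```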

Causal attribution relies on constructing an experiment that measures outcomes caused by advertising. Collective describes the model as monitoring change in desired outcomes through A/B testing for both online and offline ad campaigns. The tests randomize audience groups, rather than individual impressions, so the model can measure the cumulative effect of multiple advertising impressions on each user over time.
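
A minimal sketch of audience-level randomization, the property that distinguishes this design from impression-level testing; the hashing scheme, salt, and names below are illustrative assumptions, not Collective's implementation:

```python
import hashlib

def assign_group(user_id: str, salt: str = "campaign-123",
                 test_share: float = 0.5) -> str:
    """Deterministically assign a user to the test or control group.

    Hashing the user ID (rather than each impression) keeps every
    impression for the same user in the same group, so the experiment
    can measure the cumulative effect of repeated exposures.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "test" if bucket < test_share else "control"
```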

The six-step process begins by limiting the experiment to "stable" cookies likely to remain in a person's browser for the experiment's full duration, reducing the impact of cookie deletion. Collective defines a stable cookie as one seen at least once within the last 28 days and on at least two separate days over the cookie's life.
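
That definition translates directly into a filter; a minimal sketch, assuming each cookie's history is available as a list of sighting dates:

```python
from datetime import date, timedelta

def is_stable(sightings: list[date], today: date) -> bool:
    """Apply the stated stability criteria: seen at least once in the
    last 28 days, and seen on at least two separate days overall."""
    seen_recently = any(today - d <= timedelta(days=28) for d in sightings)
    seen_twice = len(set(sightings)) >= 2
    return seen_recently and seen_twice

history = [date(2012, 1, 20), date(2012, 2, 10)]
print(is_stable(history, today=date(2012, 2, 14)))  # True
```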

Step two divides those users into test and control groups, and step three delivers advertisements only to the test group. Steps four, five, and six observe desired outcomes, measure them in both groups, and calculate the causal lift, respectively.
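
The study doesn't publish Collective's exact math for step six, but causal lift is conventionally the relative difference between test and control conversion rates; a standard two-proportion sketch, with made-up numbers:

```python
import math

def causal_lift(test_conv: int, test_n: int, ctrl_conv: int, ctrl_n: int):
    """Relative lift of the test (exposed) group's conversion rate over
    the control group's, with a two-proportion z-test for significance."""
    p_t, p_c = test_conv / test_n, ctrl_conv / ctrl_n
    lift = (p_t - p_c) / p_c
    # pooled standard error for the two-proportion z-test
    p = (test_conv + ctrl_conv) / (test_n + ctrl_n)
    se = math.sqrt(p * (1 - p) * (1 / test_n + 1 / ctrl_n))
    return lift, (p_t - p_c) / se

lift, z = causal_lift(test_conv=1200, test_n=100_000,
                      ctrl_conv=1000, ctrl_n=100_000)
print(f"lift = {lift:.1%}, z = {z:.2f}")  # lift = 20.0%, z = 4.29
```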

However, the model has two major limitations: its dependence on browser cookies, and the time it takes to achieve statistical significance.

If someone deletes a cookie from their browser, it can weaken the experiment; the model requires the ability to track each cookie for 48 hours. Collective can analyze more than 20 billion impressions to identify cookies likely to persist beyond 24 hours, limiting the findings to about 200 million users who don't continually delete cookies.

The company says the model works with any online conversion or action, such as a purchase, registration, download, or social action. It also handles offline conversions through anonymous point-of-sale data, credit purchase data, and CRM purchase or value data, and it supports brand measurement to determine awareness.

Campaigns with significant volume can return results in two weeks. Stanley looks for data that helps advertisers build a brand over time, rather than "chasing their tail trying to increase click-through rates."

4 comments about "Poor Attribution Models Waste Billions Of Ad Dollars".
  1. John Grono from GAP Research, February 15, 2012 at 11:53 p.m.

    So should Step 0 be to include all non-online marketing activity as well, since causality can only be attributed to the variables being analysed?

  2. Robert Brazys from DataXu, February 17, 2012 at 1:49 p.m.

    @ John Grono - YES! First Party data is absolutely necessary for a complete attribution model. But sadly, most agencies and brands STILL silo their online and offline efforts and even create completely separate budgets and teams that rarely if ever speak with each other. While this may make sense in the back of the house, it really only serves to decrease efficiency.

  3. Brian Dalessandro from media6degrees, February 17, 2012 at 2:49 p.m.

    It is good to see others express attribution as fundamentally a causal problem. At M6D we have been researching attribution and have come to the same conclusion. We wrote about it here: http://m6d.com/blog/ . We also link to the white paper we wrote on the subject.

    Also, as an aside, to establish the causal effect of on-line activity using A/B testing, you don't really need to account for off-line marketing activity, as the randomization should average it out across the test and control groups.

  4. John Grono from GAP Research, February 17, 2012 at 6:04 p.m.

    @ Robert. Agreed. When I worked at an agency in the late '90s here in Australia (before we split away into creative and media agencies) I helped set up an econometrics modelling unit with the express intent of developing marketing effectiveness attribution models. It was a tough slog back then to get the client data, but with savvy account managers who were not scared of research and numbers we were successful with some of the more switched-on clients. Everyone wanted to see these 'case studies' but none wanted theirs put in the public domain - that is the other great problem.

    In a nutshell, advertising was (as we have always expected) a 'weak force' in nearly every category. Price relativity, promotional weight and distribution tended to be the "must haves". Most models ended up with around five to six key drivers (Occam's Razor at work) and fortunately advertising weight was, with only one or two exceptions, a key driver. And when it was, it was because we could demonstrably prove that it drove the base-line sales higher, while the other activities 'spiked' sales.

    I'd love to get my hands on data now that online marketing and e-commerce are significant players. And as you say, if they are treated as silos by either the client or agency then it is really only a partial picture. I'd rather have good knowledge of how my market works (i.e. takes into account competitors) than perfect knowledge of a single component of my brand's marketing.
