The following column was previously published in an earlier edition of Online Spin:
Recently a media buyer asked me, in the course of a presentation, whether certain insights were modeled. It’s a good question, sort of. When I qualified the answer, telling him that the network in question had 30% return paths and a census of declared data based on actual customer addresses, he repeated the question, terser this time: “Sooo, it’s a model, right?!” Yes, but … this model, in the grand scheme of things, was very predictive.
In retrospect though, my answer was not too great either. We both fell into the widening gulch (like Thelma and Louise?) between assumption and reality created by the complexity inherent in data-driven advertising.
So, What’s a Model?
Almost everything. Nielsen ratings are a model. A map is a model. Your own freaking brain bases your perception of reality on a model. Almost every conclusion in marketing is based on a model of some sort.
They say neurosis is a bad map of reality. And so it is with models; they can be a good map, or a bad map. The proof is in the pudding.
The problem is how to tell good from bad. The right questions are helpful, but rigorous thinking is important, too. The questions to ask about the veracity of any model should concern its underlying assumptions and inferences. Is it layered on other models? Do its conclusions typically correlate with real-world outcomes? Is the data a census, a sample, or a combination? Does a visit to BMW.com make me an “Auto Intender”? Meh.
Data quality counts, too. A lot of data is either noisy or used incorrectly. For example, retailers often set a cookie to identify an abandoned shopping cart. That’s fine. But that same data might be used to infer the characteristics of a broader population. Say, for example, the browsers that abandoned carts also looked for coupons. Say people who look for coupons are low-income. So, now, are people who abandon shopping carts low-income? Meh.
How far down a chain of weak causal connections can you go before truth flies out the window? Answer: Not far. In the fuzzy universe of inference, the problem becomes how to balance the flaws in a model with the possibilities it might hold.
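To put a rough number on how fast a chain of weak links decays, here is a toy simulation — all figures are hypothetical, and it assumes each inferential link (cart abandoner → coupon seeker → low-income) preserves only half of the original signal:

```python
import random

random.seed(0)

def noisy_copy(x, rho):
    # Each link keeps a fraction rho of the signal and adds fresh noise.
    return rho * x + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)

def corr(xs, ys):
    # Plain Pearson correlation, no libraries needed.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    sx = (sum((a - mx) ** 2 for a in xs) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in ys) / n) ** 0.5
    return cov / (sx * sy)

rho = 0.5  # strength of each inferential link (hypothetical)
first, last = [], []
for _ in range(100_000):
    x = random.gauss(0, 1)   # the behavior we actually observed
    y = noisy_copy(x, rho)   # first inference ("also looked for coupons")
    z = noisy_copy(y, rho)   # second inference ("coupon seekers are low-income")
    first.append(x)
    last.append(z)

print(round(corr(first, last), 2))
```

Two links at 0.5 leave roughly a quarter of the signal (0.5 × 0.5); a third link would leave about an eighth. Truth flies out the window quickly.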
Plenty of Hope
Models can vastly expand the universe of possible advantage in marketing because they give us a route to draw conclusions about the many from the behavior of the few. They give us the ability to test a hypothesis in vitro, as it were. But the difference between success and failure depends on figuring the odds correctly, and having the discipline (and imagination!) to structure an experiment. There are way too many “tests” in-market. What are we testing, and what will we do differently depending on the outcome? If you can’t articulate that, it’s not much of a test.
Intention and the Pursuit of Insights
Insights don’t precipitate out of media like raindrops in a cloud. They start as a theory, and are validated by experimentation. Brands have trouble with this because of endemic difficulty closing the loop. However, the “why” will be an inference in any case, and the only test of a theory is its consistency with observed outcomes.
From Wikipedia: “Rejecting or disproving the null hypothesis—and thus concluding that there are grounds for believing that there is a relationship between two phenomena (e.g. that a potential treatment has a measurable effect)—is a central task in the modern practice of science.”
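For the non-statisticians: rejecting a null hypothesis can be done with a few lines of arithmetic. The sketch below is a standard two-proportion z-test on made-up campaign numbers (an exposed group vs. a control, both hypothetical) — it is illustrative, not any vendor’s actual methodology:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: did two conversion rates really differ?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pool the rates under the null hypothesis that they are equal.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: exposed group converts 240/10,000; control 200/10,000.
z, p = two_proportion_z_test(conv_a=240, n_a=10_000, conv_b=200, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers the lift looks real to the eye (2.4% vs. 2.0%) yet narrowly fails a conventional 0.05 significance threshold — exactly the kind of result an undisciplined “test” would declare a winner.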
So, folks, if you are buying something based on a model, it makes a lot of sense to qualify the assumptions underlying the inferences, and the hypotheses it was designed to address. I learn this lesson every time I try to open a can of paint with a screwdriver.