Measuring The Immeasurable: How To Attribute TV Behavior

Veteran direct marketers believed that non-DR TV couldn’t be measured. But that was before advances in digital measurement and data analytics. Thanks to these technologies, behavior can now be modeled and predicted in online and offline channels.

How does a company pull that off? We asked Alison Latimer Lohse, the co-founder and COO of ConversionLogic, a three-year-old firm specializing in analytics-based attribution.

How do companies measure TV?
We believe in measuring 100% of the media exposures — every single spot that ran, at the time it ran — and correlating events: QSR visits, online ordering, coupon downloads, within minutes, to understand the relationship between stimulation and response. Often you can use a panel of 25,000 to a million people to measure their deterministic behavior, but it’s highly biased, and you miss some of the subtleties of how the media work together. You have to see how ad impact builds and decays over time.
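
The correlation she describes can be sketched minimally: given the timestamp of each airing and a stream of response-event timestamps, count the responses that land in a short window after the spot. All names and data below are hypothetical, for illustration only, and not Conversion Logic’s actual method.

```python
from datetime import datetime, timedelta

def responses_near_spot(spot_time, events, window_minutes=10):
    """Count response events (site visits, orders, coupon downloads)
    occurring within a short window after a spot airs."""
    window = timedelta(minutes=window_minutes)
    return sum(1 for e in events if spot_time <= e <= spot_time + window)

# Hypothetical data: one spot airing and four response timestamps.
spot = datetime(2017, 1, 3, 20, 15)
events = [datetime(2017, 1, 3, 20, m) for m in (16, 18, 24, 40)]
print(responses_near_spot(spot, events))  # prints 3: the :16, :18 and :24 events
```

A real model would also estimate a baseline response rate, so that only the lift above baseline is credited to the spot.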



Do TV ads with Web site URLs qualify as DR?  
Consumers are so savvy, they don’t need a URL anymore. If an auto spot comes on, they’re watching the spot and researching the Chevy Tahoe through a search engine or going to the Web site on their mobile phone. It’s symbiotic behavior that isn’t so isolated anymore.

Why are some marketers missing this?
Marketers get stuck in historical metrics, meaning goals built off information accumulated over the last 10 or 20 years. It’s hard to undo historical ways of measuring. When Web sites first came out, it was the language of the hit — how many hits did I get? — then SEM: how many clicks? But marketers are moving away from that single-channel language to more holistic measurement. Some companies move faster than others; others have a harder time breaking the muscle memory of those practices. But there’s a big opportunity here.

What’s that?
It has nothing to do with measurement — it has to do with getting more focused on data hygiene and data management. Whether you’re measuring TV or radio or digital channels, you have to have a good clean data house.

Brand is important, but so is data, and I believe that will pay dividends long-term.

Does model-based attribution facilitate media buying?
Absolutely. Companies are more intelligent. They’re not looking for insights for insight’s sake, but for activation of media. With model-based attribution, they end up with an attributed CPA, meaning a cost per acquisition assigned to every spot based on its effectiveness at driving a business outcome. They can use the attributed CPA to modify the day part, station and program mix, and decide how to switch their programming for the week or the next quarter.
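
The attributed-CPA arithmetic she describes is simple in principle: each spot’s cost divided by the conversions the model credits to it, then used to rank the day-part/station/program mix. The figures and spot names below are illustrative, not from the interview.

```python
def attributed_cpa(spot_cost, attributed_conversions):
    """Cost per acquisition for one spot, given the conversions
    an attribution model credits to it."""
    if attributed_conversions == 0:
        return float("inf")  # spot drove no measurable outcome
    return spot_cost / attributed_conversions

# Illustrative spots: (daypart/station, cost in dollars, attributed conversions)
spots = [
    ("primetime/ESPN", 50000, 400),
    ("daytime/HGTV", 8000, 120),
    ("late-night/CNN", 12000, 60),
]

# Rank the mix by attributed CPA, cheapest acquisitions first.
ranked = sorted(spots, key=lambda s: attributed_cpa(s[1], s[2]))
for name, cost, conv in ranked:
    print(f"{name}: ${attributed_cpa(cost, conv):.2f} per conversion")
```

A buyer could then shift the next quarter’s budget toward the spots at the top of this ranking.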

Who’s doing it well?
There are a lot of evolved firms in the ecommerce and subscription-service space, everyone from companies like Uber to prepared-food delivery services to subscriptions like Birchbox. They quickly tap out in digital channels to find new customers, so they have to turn to television. And they’re not rooted in the assumptions and legacy language of how TV works. They want to know the return on every dollar they put in, independently and with other channels.

Can you apply this science to offline channels like billboards?
We’re focused on the time of exposure, so we concentrate on TV and radio. But we’re doing some experimental work tying billboards to mobile exposure. The question is: Can you use mobile data to suggest when someone was exposed to a billboard?

What about addressable TV?
We’re working with partners to connect user IDs and map TV exposure in the user path to see the deterministic exposure of TV spots. There’s some maturity in the technologies that allow that. But we need more critical mass.

What excites you about 2017?
It’s that data is coming in from unexpected places. An obvious example is that we can now determine how mobile or location data enhances media-exposure data. A less obvious one is on the research side. We’re starting to see more addressable data, not 25,000 from a biased survey, but panels of 5-million to 10-million people, where there are actual respondents we can incorporate into the models. So we can take brand metrics and model them. There’s some real potential. 

7 comments about "Measuring The Immeasurable: How To Attribute TV Behavior ".
  1. Ed Papazian from Media Dynamics, January 3, 2017 at 9:26 a.m.

    I assume that the "biased survey" of 25,000 people refers to Nielsen's national peoplemeter panel that measures in-home "linear TV" set usage and, to some extent, "viewing". Actually Nielsen's national TV rating panel is considerably larger than 25,000 people, though it hardly qualifies as a "big data" source with 10 million panelists. I would have liked to hear Alison's explanation of exactly what makes the Nielsen panel so "biased", by which, one might conclude, she means that Nielsen is putting out wrong and misleading rating information. Or does she mean that Nielsen is not able to measure every snippet of TV viewing---especially on digital platforms? If that is so, then Nielsen is not biased, it's merely not providing the most complete picture.

  2. John Grono from GAP Research, January 3, 2017 at 4:42 p.m.

    In 2010 the US Census Bureau 'surveyed' 308,745,538 people and among other statistics found that 49.2% of the population was male and 50.8% of the population was female.

    So, to all the big data fans out there, I ask, how many people do you think you would have to survey or have data on, to arrive at the same result?   1,000?   10,000?

    Sampling works.   (And often works better than big data by the way.)
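
Grono's point can be made concrete with the standard error of a proportion, a textbook calculation (not from the article): for a share near 50%, a simple random sample of a few thousand already pins the estimate down to about one percentage point.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated proportion p
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# How tightly would samples of various sizes pin down a 50.8% share?
for n in (1_000, 10_000, 100_000):
    print(f"n={n:>7}: +/- {margin_of_error(0.508, n) * 100:.2f} points")
```

With n = 1,000 the margin is roughly ±3 points; with n = 10,000 it is under ±1 point, which is why a well-drawn sample of thousands can reproduce a census-scale result.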

  3. Doug Garnett from Protonik, LLC, January 3, 2017 at 5:56 p.m.

    As a DR advertising exec with a specialty in TV and its impact on retail, it's hard to find anything in this write-up that I agree with. Especially the vast over-promise of measuring TV to behavior.

    Unfortunately, behavior in the real world is incredibly subtle and resistant to accurate measurement. I've worked with results from companies who have claimed to know all this and to be able to deliver this... And what I found was that the vast number of incredibly off-target assumptions made the results of the modelling wrong.

    The problem has been that models like this (I don't know this one specifically) are primarily new ventures designed to return incredible investor enthusiasm. As we saw with Theranos, there's a big difference between pitching a theory investors will buy that customers want to buy AND delivering the working, useful, valuable product.

    Problem is that DIGITAL can't be measured this way. Oh, they promised it. But start digging in and we discover how little can be measured even for online behavior and very little about how offline behavior is influenced by online behavior. To suggest that it's possible for TV when the easy problem hasn't been solved defies credibility.

  4. Ed Papazian from Media Dynamics Inc, January 3, 2017 at 7:25 p.m.

    Agreed, Doug. Also noteworthy is the promotion of the time spent solution, which may — repeat, "may" — be a useful indicator of ad exposure for digital media but is not workable at all for TV, radio and print media, which constitute the vast majority of branding ad impressions currently and for the foreseeable future. While human based measures like ad recall and purchase intent have flaws and do not always provide 100% perfect predictive answers, the digital folks will eventually learn that you must incorporate these, or something along the same lines, in tandem with purely mechanical metrics like time spent with the page, to get workable data for making media selection and media mixing decisions. Another thing they must learn is that "legacy media" advertisers are not as dumb as they think and the smoke and mirrors approach, whereby one makes grandiose promises designed to get the business, but which can't be delivered, isn't going to work. It's too late for that as the proverbial cat is out of the bag. Solution: instead of trying to sell vague, unsubstantiated promises, why not learn the advertising and media business, see what is needed and what effects can actually be measured, and adapt what you've got — or think you've got — to offer a valid and affordable service?

  5. Douglas Ferguson from College of Charleston, January 4, 2017 at 1:57 p.m.

    Once again, I learn more from the Comments than the article.

  6. John Grono from GAP Research replied, January 4, 2017 at 6:04 p.m.

    Doug, you raise extremely good points regarding real-world subtleties and the difficulty in measuring. However, in my 40 years of research much progress has been made using small samples of people to 'get closer to the truth'.

    Where the apple-cart gets turned over is when those (and similar) findings are extrapolated to a population.

  7. Doug Garnett from Protonik, LLC, January 23, 2017 at 8:43 p.m.

    John - I agree that small samples can help us out. What they don't do is offer conclusive learning. We desperately need the small samples to come to our OWN judgement calls. What I've seen, though, is that attribution vendors then entrench those small sample theories into big programs and promise that they're absolutely right. And they aren't...ever.

    I have this wish that we'd understand that research isn't like a spy satellite resolving down to 3" square vision all over the earth - predicting perfectly what's happening in the market and what the result of a choice will be.

    Truth is that research needs to be thought of more like being an explorer in the 1850s in St. Louis trying to reach California. You go around and gather every learning about what's ahead of you that you can. And then you set out with a whole set of learnings. None of the learnings tell the whole truth. But some of them will make the difference between life and death, between reaching California or ending up stuck in the Tetons.
