Commentary

When It Comes To Attribution, It Feels Like TV Is Selling Last Year's Model

One of the most interesting developments I have seen in the video advertising marketplace over the past couple of years has been the rapid acceleration of TV attribution modeling, and for the life of me, I couldn’t really understand why.

Sure, TV has long used models. In fact, the original marketing-mix models were developed mainly to measure and adjust the efficacy of big TV advertisers' campaigns, and it was only a matter of time before digital embraced the science of modeling and made it its own. But digital did it out of necessity: it lacked the kind of currency-grade audience-ratings data that TV had, and it needed a better way to correlate media exposure with consumer actions like clicks, downloads, “conversions,” purchases and repurchases.

So when attribution modeling became all the rage in the TV industry over the past couple of years, I figured it was just a way to placate the digital-native side of advertisers and agencies that needed it as a rationalization to buy a mass-reach, awareness-building medium like television. And that whether it was necessary or not, it couldn’t hurt. I was wrong.

I learned just how wrong I was when I sat in recently on one of Mitch Oscar’s so-called “Secret Society” meetings hosted by U.S. International Media in its Midtown Manhattan offices.

The meeting convened local TV sales, technology, data and attribution modelers to discuss how and why they have gotten into the modeling game, what their results have been so far, and perhaps most importantly, what the big issues are that they’re facing.

There were some great success stories, like the one from a woman at a major broadcast group who said attribution models have enabled her local sales teams to crack some new categories, including a “stairlift” company whose models demonstrated lifts in brand awareness and sales.

But for all the reinforcing anecdotes, it felt like TV attribution has a long way to go for local broadcasters, both technically and culturally. Many of the speakers discussed how hard it has been to educate local sales staffs steeped in “spots and dots” ratings logic -- as well as their customers -- in the concept of modeling, and especially the science of it.

Another big problem is that some of the science may not be that good, and it certainly isn’t as consistent as what digital suppliers have developed over the years.

“It felt like they are doing what the digital companies were doing eight years ago,” said Russell Zingale, president of USIM’s eastern region and the meeting’s host.

“It felt like they’re still working in silos,” he added, noting the disparate tools and methods being used across different local TV sales organizations, as well as the cultural issues within them.

To some extent that is to be expected, since the science of attribution modeling is relatively new to local TV advertising. But given that the meeting included some of the industry’s leaders, it felt to me, anyway, like TV is playing catch-up.

Even worse than the level of sophistication is the fact that they may just be getting the science wrong altogether.

In fact, Zingale shared an anecdote with the attendees that illustrated exactly that point.

Zingale described a test campaign USIM ran for tourism client Aruba in April and May on a national addressable TV advertising platform.

“They came back with a brand lift study showing traffic to the aruba.com website,” he recalled, adding: “And of the 10 markets they’re serving, four of them had what I’d refer to as ‘negative increases.’ They went down.”

Zingale’s point is that it would be counterintuitive for TV advertising to lead to a decrease in consumer behavior, noting: “It should either be flat or an increase.”

More importantly, the net finding of the analysis was that the agency and the client would actually do better NOT to advertise on TV.

And if that’s the case, maybe it would be better for the TV industry NOT to trumpet its attribution models until they have modeling science that’s at least as good as digital’s.

“I used that example with the group here, because it seems like they’re working in a vacuum,” Zingale explained, adding: “I told them you need to go back to the basics and understand what the overall media mix and awareness-building properties of a reach medium are and try and fit that into how you’re selling this.”

In the meantime, he said, USIM is not abandoning the initiative and plans to hold another TV attribution summit organized by Oscar in early July. The focus, he said, will be on getting everyone on the same page, starting with a “common language”: a glossary of the terms used for the inputs and outputs of the different local TV advertising attribution models.

“The first step is developing a commonality of terminology being used,” he said. “Everyone is talking about attribution and uses a lot of terms, but in a lot of cases they mean something completely different. Having a glossary of terms is really needed.”

15 comments about "When It Comes To Attribution, It Feels Like TV Is Selling Last Year's Model".
  1. Doc Searls from Customer Commons, June 11, 2019 at 10:49 a.m.

    The problem with "attribution" is that it's a virtue of direct response marketing (best known offline as junk mail), not of advertising. Wanting "a better way to correlate media exposure with consumer actions like clicks, downloads, 'conversions,' purchases and repurchases" is a road to hell for broadcasters, and it gets steeper on the downhill side with every new data-rationalized pitch for broadcast advertising to get as accountable as possible at the personal level.

    Broadcasting's greatest virtue as an advertising medium is its effects on populations, not on how it gets individuals to act.

    Consider this: I don't own a Ford, but I know the company's trucks are Ford Tough. I don't have insurance with Geico, but I know fifteen minutes can save me fifteen percent on my car insurance if I choose Geico. I know those things because I watch and listen to a lot of sports, which are sponsored by Ford and Geico. That lots of people know the same thing is great for those brands. And broadcasting made it great.

    Sponsorship is the great dividing line, and it's a huge advantage of brand-building media that have not yet bitten the poison apple of wanting everything to be "attributable."


    Broadcasters should know what publishers are only beginning to learn, probably too late to save their asses: that adtech—tracking-based "behavioral" digital advertising to individuals (euphemized by its perpetrators as "relevant," "interest-based" and "interactive")—is about tracking eyeballs and advertising at them wherever they go, not about sponsoring a station, a network or a show. Adtech will mark eyeballs in one place, track them elsewhere, harvesting personal data along the way, and then pelt them with ads in yet another place: ads aimed by spying on those eyeballs.

    Adtech is the very antithesis of sponsorship. It's also a big reason why ad blocking on the Internet, which may top two billion people by now, is the biggest boycott in world history.


    Broadcasting has been blessedly safe from corruption by adtech. It should be working hard to stay that way.


  2. Chris Peterson from Rain the Growth Agency, June 11, 2019 at 12:12 p.m.

    Joe, I think the situation is very different with national TV media. Also, the term "modeling" can be construed in too many different ways. Measuring "lift" across several local markets is always a mixed bag, even if you have a strong set of match markets. There are so many variables at play.

    But more importantly, measuring "lift" is not the state of the art in "modeling." Multi-stage regression modeling is: it can tease out the total effects of TV advertising without having to look at baselines. Regression modeling also captures what Doc Searls is striving for - the overall market effect of TV - because it captures the rising tide of awareness as advertisers increase their investments while managing to an overall business return. It also pegs media investment level to ROI via saturation curves, as a reflection of the current media strategy (not what can be done).

    Regression modeling is the opposite of most ad tech, which is focused on the little transactional things that Doc Searls points out. Regression modeling deals with a time series analysis of impressions and sales in a manner that is more medium to long term. All to say that the words "modeling" and "attribution" scoop up many many things, some good, some bad, some really bad. 
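
    A minimal sketch of the kind of time-series regression Peterson describes, with an adstock (carryover) term and a saturation curve. The data, decay rate and half-saturation point below are illustrative assumptions, not any agency's actual model:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Weekly TV impressions (GRPs) and sales -- illustrative numbers only
        grps  = np.array([120, 80, 0, 150, 90, 60, 0, 110, 130, 70], dtype=float)
        sales = np.array([510, 470, 430, 560, 500, 460, 420, 520, 540, 465], dtype=float)

        def adstock(x, decay=0.5):
            """Carry a share of each week's impressions into later weeks."""
            out, carry = np.zeros_like(x), 0.0
            for i, v in enumerate(x):
                carry = v + decay * carry
                out[i] = carry
            return out

        def saturate(x, half_sat=150.0):
            """Diminishing returns: response flattens as weight rises."""
            return x / (x + half_sat)

        X = saturate(adstock(grps)).reshape(-1, 1)
        model = LinearRegression().fit(X, sales)
        print(f"R^2 = {model.score(X, sales):.2f}")

    In practice the decay and saturation parameters are themselves estimated, and many non-media variables sit in the regression alongside the TV term; that is what lets the model separate the total TV effect from everything else.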

    Finally, the brands that Doc Searls points out - Ford, Geico - have the distinct advantage (similar to Apple, Jack in the Box, and Samsung) that they have a very large addressable audience, very high customer value, and very strong market fit. When you combine those three things, you end up with advertising budgets in the hundreds of millions of dollars. Most advertisers do not have all three. Many successful ones have two, but you aren't going to see them consistently night after night in prime time without all three. If you only have one, then you really don't have a business. 

  3. Chris Raleigh from Advisor to Philo, June 11, 2019 at 2:07 p.m.

    Joe,

    I had a different takeaway from the gathering. I was very encouraged by the significant advancement of attribution in both the local and national linear/OTT marketplaces in the past 18 months. Although we are limited by Mitch Oscar's "veil of secrecy," highlights from the meeting included a local broadcast group's platform that aligns local media spend directly to dealers' inventory, and tracking website visits now being "table stakes" for most MVPDs' retail advertisers. It allows "TV" inventory owners to compete much more effectively with the digital natives, particularly when they can bundle other platforms. Moreover, national platforms are currently able to use dynamic ad insertion (DAI) to measure the results of versioning and geo-targeting, among other capabilities.

    As for common terminology and metrics, the IAB is actively advancing SIMID (Secure Interactive Media Interface Definition), which includes SSAI and will apply to all platforms, including mobile and OTT. The MRC is also working towards common metrics for outcome modeling.

    I applaud the efforts and early results and look forward to additional innovation and increasing the incremental value of TV to marketers.

  4. David Briefstein from TVSquared, June 11, 2019 at 5:43 p.m.

    To understand why there has been a rapid acceleration of TV attribution modeling in recent years, we need simply to pose the question. And the answer is: because we can, and we should, leverage these tools. The ability to measure not just media reach and frequency but the downstream outcomes that matter (i.e., sales) at the most granular level is powerful, and it must be deployed throughout the industry.

    This type of attribution gives us a better understanding of the efficacy of the ad dollars we spend. Furthermore, when used properly, these insights can begin to drive optimization of media spend that benefits everyone in the ecosystem (advertisers, agencies and media owners).

    So, the real question isn’t “why,” it is “why not?”

  5. John Grono from GAP Research, June 11, 2019 at 7:12 p.m.

    Some core facts.

    1. No single media owner can accurately model sales.   Simply because they are a single channel within a medium.
    2. No single medium can accurately model sales.   Simply because they are a single medium within the advertising landscape.
    3. The closest an advertiser will get to having a model that has 'decent' coverage of sales stimuli is if they have a single media agency planning and booking all activity.
    4. The closest the media agency will get to 'decent' accuracy is if they also include the category competitors' advertising stimuli.
    5. Beyond advertising, market factors such as distribution, stock levels, pricing, timing etc. must be considered for all active brands.
    6. Beyond market factors, broader economic and external factors such as inflation, workforce data, wages data, weather data, population change and growth, and international economic influences need to be considered.

    7. Who is in the best place to do all this work?   It's not the media owner.   It's not the medium.   It's not the ad agency or the media agency.   It is the marketers themselves, as they have access to the biggest and deepest pool of data covering the entire market.   They also have the most to gain.   But while ever advertising continues to be seen as a cost centre to be cut by procurement officers, I can't see that happening.

  6. Ed Papazian from Media Dynamics Inc, June 12, 2019 at 12:49 a.m.

    Lots of good points in this discussion by all parties.

    The basic problem with attribution is the notion that if, somehow, you could trace the sales effect of every TV ad exposure for every individual exposed, you would be able to "attribute" the result to that particular exposure---and to the media vehicle that provided it. Yet most ad campaigns involve multiple exposures across many platforms over time, and the overlapping and cumulative impact of those exposures, plus many other factors---like what competing brands are doing, your own parallel sales promotional activities, distribution issues, the buzz of word-of-mouth endorsements, etc.---all play a role. This is true even for direct response campaigns, but it is especially so for branding efforts where the advertiser's goal is to stimulate overall sales attained by independent distributors, as opposed to selling direct via a website.

    As John correctly points out, the only entity that might be able to do a valid attribution analysis is the advertiser, not any single TV time seller, or even the ad agency that bought the time for its client. However, even if such attempts were made, I doubt that the full range of data that is required would be available and I doubt that it would be possible to say to one time seller that an exposure on one of its TV shows performed significantly better than an exposure on another seller's TV show. That's simply cutting it too fine and the data that is available probably wouldn't support such conclusions.

    To be clear, it is true that certain platforms are superior to others in terms of generating ad exposure and providing a positive climate for ad impact via less ad clutter and a compatible editorial environment. This provides "directional" information, as it is, no doubt, better to use such "superior" platforms---provided the cost premium is not too great and you aren't sacrificing too much reach. But attributing each sale specifically to each ad exposure is virtually impossible---unless your entire campaign consists of a single exposure on a particular show on a single network.

  7. John Grono from GAP Research, June 12, 2019 at 2:55 a.m.

    Well said Ed. 

    In fact many sales occur without any direct advertising stimulus.   Take the "50% Off" sale on coffee in the store (one of the best sales stimuli for instant results, though not for the profit margin).

    All advertising has a longitudinal effect - I still remember the VW Lemon ad and the man in the Hathaway shirt.   All good attribution models need a longitudinal dimension to them.  Think of it as the "compound effect" of the weight of advertising over a period of time.  The effect diminishes over time, but for good ads that decay can be negligible; for run-of-the-mill ads it is faster.

    In the many longitudinal non-linear econometric models I have been involved with, it is a massive success if you can explain 75% of sales movements.  When you see higher claims, check that the results haven't been reported as 'share of sales drivers'.  For example, the model might explain 60% of sales changes (up and down), with TV driving 30%; but that is all too often reported as TV driving 50% of sales.   It isn't.
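
    Grono's caution is easy to check with his own numbers; a quick sketch:

        # Figures from the example above
        explained = 0.60   # the model explains 60% of sales changes
        tv_effect = 0.30   # TV drives 30% of sales changes

        tv_share_of_explained = tv_effect / explained   # = 0.50
        # Reporting that 50% as "TV drives 50% of sales" conflates TV's share
        # of the *explained* movement with its share of *all* movement.
        print(tv_effect, tv_share_of_explained)   # 0.3 0.5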

    Most of our models used three years of weekly or daily data.   We would start with 100+ variables and often up to 500.  But what we found was that after around the fifth to seventh explanatory variable there was no significant additional explanatory power.  Models could be made more efficient by condensing or aggregating data.  For example, rather than using actual prices for you and your competitors, you could express your price as a proportion of the category average, or as the price relative to the category leader or main competitor (agnostic over time).

    The goal became to produce the most parsimonious model (i.e., the fewest variables that need to be collected over long periods of time) that had the most explanatory and predictive power (tested by data removal).

    In some models less obvious, but completely logical, factors are needed, such as temperature when looking at ice-cream sales.  What we found was that it wasn't the actual temperature that mattered, but whether the temperature reached a threshold (e.g. 85F) or whether there had been rapid warming over a short period (e.g. +10F over the last week).
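
    As a sketch of the feature engineering Grono describes (the thresholds are his; the column names and pandas usage are illustrative assumptions):

        import pandas as pd

        # Daily temperatures in Fahrenheit -- illustrative data
        df = pd.DataFrame({"temp_f": [78, 80, 83, 86, 88, 75, 84, 92]})

        # Not the raw temperature, but whether it crossed a threshold...
        df["hot_day"] = (df["temp_f"] >= 85).astype(int)

        # ...or warmed rapidly (+10F or more within the past week)
        weekly_min = df["temp_f"].rolling(7, min_periods=1).min()
        df["rapid_warming"] = ((df["temp_f"] - weekly_min) >= 10).astype(int)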

    One model looked at Happy Meals for which rainy weather was one of the biggest drivers of sales spikes for after-school pick-ups - the easy solution was to indulge the kids with a Happy Meal Drive-Thru.  The media implication was drive-time radio ads reminding them of Maccas.   And yes, it won an Aussie effectiveness award. 

    The lesson is ... don't just look at the obvious.  Dig through all the data - advertising and non-advertising - and see what works time and time again.  And don't always look for 'last touch' because it could also be the 'softest touch' that is not worth its price premium.

  8. Rick Ducey from BIA Advisory Services, June 12, 2019 at 8:37 a.m.

    Joe, as always, a very insightful and well-timed article. BIA's excited to be part of this attribution initiative with the (shhhhhh!) Secret Society and appreciates your continuing coverage. You raise some good points, as of course Russ does as well. We had the opportunity recently to do some research for TVB on attribution, and some highlights of our work are shared here: https://www.tvb.org/DetailsPage/tabid/1569/ArticleID/6455/Attribution-is-Coming-to-Local-TV-Highlights-from-BIA%E2%80%99s-Study-for-TVB.aspx. Hope your readers find this useful.

    Thanks!

    Rick

  9. Lucas Sommer from LeadsRx, June 26, 2019 at 5:29 p.m.

    I am honestly a bit confused by some of the points in the article and especially the comment by Doc. 

    We seem to be at a crossroads here. Are we going to apply attribution to TV/broadcast so that advertisers can know how to attribute its success, or are we going to put our heads in the sand and act like we shouldn't be applying attribution to broadcast? I think this is a foregone conclusion - attribution MUST AND WILL be applied to broadcast. It already is, and advertisers who understand this are going to win, and those who don't are going to lose.

    I find it concerning that Doc would suggest that "Broadcasting has been blessedly safe from corruption by adtech. It should be working hard to stay that way." Why would we in the advertising, content and sales business ever consider not applying adtech to broadcast? How is it a solution to say, "let's not measure it with attribution, because measuring it would be bad"?

    The notion that TV and radio are for big advertisers like Ford and Geico who don't care about attribution, because they just want to raise awareness of the brand and then cross their fingers that it leads to more orders at the dealerships, really doesn't make any sense at all. Unless Geico is selling "brand awareness" and not insurance, I would agree - but it isn't. Geico sells a "conversion" just like every other business. Eventually a customer shows up, signs on the dotted line, and converts - this is, was and will be the end game. Pretending it isn't is like pretending analog/VHS/Betamax is better than their digital competitors - they are not. We can be nostalgic, but not measuring broadcast is a problematic approach.

    If we look at Ford, the correct approach would be to measure broadcast, measure the traffic on the site, measure sign-ups at dealerships, measure new leases, measure sales performance at dealerships - and bring all that data back to the marketer using adtech. Then, with cookies and data science, accurately attribute conversions back to broadcast in proportion to the effect it has on conversions.
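
    A minimal sketch of the "proportionate to the effect" allocation Sommer describes, assuming per-channel effects have already been estimated by some model; the channel names and numbers are placeholders:

        # Estimated incremental effect of each channel on conversions (placeholders)
        effects = {"broadcast_tv": 0.40, "paid_search": 0.35, "social": 0.25}
        conversions = 1200   # total conversions observed in the period

        # Credit each channel in proportion to its estimated effect
        total = sum(effects.values())
        credited = {ch: conversions * e / total for ch, e in effects.items()}
        print(credited)   # {'broadcast_tv': 480.0, 'paid_search': 420.0, 'social': 300.0}

    As the other commenters note, the hard part is estimating those effects agnostically in the first place; the allocation itself is trivial.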

    If you can't measure something, you don't know how to improve it. 

    The days of "spray and pray" are dead. TV, and its advertisers, need to get on board or their advertising dollars will be dead too. 

  10. John Grono from GAP Research replied, June 26, 2019 at 6:03 p.m.

    Lucas, I understand your confusion.

    While I can't talk for the other commenters, I don't think anyone is saying we shouldn't be applying attribution to broadcast.

    My issue is that when a medium or a media owner applies attribution that medium or media property tends to end up at the top of the pack.   Funny that.

    Put simply, you can't get agnostic attribution from a medium or media owner.   Current attribution systems tend to be very siloed.   TV will overstate their claim just because some TV ads were on.   Online (let's stop calling it 'digital' for a start) overstate their claim just because some online ads were clicked on - they are mixing up propinquity (God bless Erwin Ephron) and proximity with causation.

    The best source of agnostic attribution is the advertisers themselves, as they have access to all the marketing stimuli (and plans and history).   The problem I see is that they want it done for them rather than by them.   The next closest touch-point would be a media agency, if the advertiser is channelling most of their activity through them (rough guide - at least two-thirds) and they have the tools and people.

    The least reliable source of attribution would be a media owner due to single silo vertical measures and models.

    But our responsibility is to provide equitable and representative metrics to feed such attribution models.

    The irony is that when you use multivariate non-linear longitudinal micro-econometric modelling, the model doesn't actually know or understand what each of the metrics is.   It is we humans who try to cognitively compare a 'view' on TV with a 'view' online, with a 'view' in a magazine or newspaper, with a 'view' in a cinema, with a 'view' of a billboard, with a 'listen' to a radio or a podcast etc. etc.   So we then tend to invoke a requirement that they are all measured the same.

    A good example of how models work is, say, ice-cream.   You need temperature in your model.   It doesn't care whether you use Celsius or Fahrenheit - it will work it out.   You could use US dollars or even convert them to Zambian kwacha - it will work it out.

    The key thing is to have a standard (be it right or wrong) that is consistent and used consistently and let the attribution software do the comparability and effectiveness heavy lifting.
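
    Grono's point about units is easy to demonstrate: rescaling an input (Fahrenheit to Celsius, dollars to kwacha) rescales the fitted coefficient and leaves the predictions unchanged. A toy illustration with made-up numbers:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        temp_f = np.array([70, 75, 80, 85, 90], dtype=float).reshape(-1, 1)
        sales  = np.array([100, 130, 170, 220, 280], dtype=float)
        temp_c = (temp_f - 32) * 5 / 9   # same information, different units

        fit_f = LinearRegression().fit(temp_f, sales)
        fit_c = LinearRegression().fit(temp_c, sales)

        print(fit_f.coef_[0], fit_c.coef_[0])   # 9.0 vs 16.2 -- the model "works it out"
        print(np.allclose(fit_f.predict(temp_f), fit_c.predict(temp_c)))   # True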

  11. Ed Papazian from Media Dynamics Inc, June 26, 2019 at 6:45 p.m.

    There are many types of TV advertisers, and to say that all must march to the attribution beat---if we can even figure out how to determine what elements---or combination of elements---truly cause a sale or a visit to a website---is plain silly. Also, it is wrong to assume that most TV advertising by large brands has as its primary---or only---goal simple brand awareness. Far from it. Most of the ad campaigns you see on national TV are trying to convince people who already buy the brand that they have made a wise choice---reinforcement of their current base---and to woo those who use other brands. It's not merely awareness of the brand but awareness of its positioning strategy that's at play, and, it is hoped, the latter, if compelling, will lead to improved sales results.

    To accomplish their goals, most TV advertisers understand that they are waging an extended campaign which will have a beginning, a middle phase and, eventually, an end. Sure, the campaign consists of individual ad exposures on various TV shows and each exposure has some effect but, for the most part, the build-up of momentum and the cumulative impact is what determines the outcome, coupled with other factors---like word-of-mouth endorsements, sales promotional activities, what the other brands are doing, and, finally, what happens when a new customer finally is persuaded to buy the advertised brand. If it delivers on its promise, the conversion may develop into regular usage; if not, the brand will continue to face the leaky-bucket syndrome---winning some new users but also losing some, thereby making no overall progress.

    As I have said in other posts, many TV advertisers monitor their awareness and sales results very closely---not consumer by consumer---but in aggregate and by sensible breakdowns---regionally, by sex, age, etc.---often week by week. This gives them a perfectly valid read on how well their campaigns are---or are not---working, whether they have peaked or are still gaining momentum, etc. If they had corresponding data for every consumer, such advertisers probably wouldn't know what to do with it. They would be inundated by data overkill.

    Finally, as I have said many times, I agree with John about the folly of trying to make the media time or space seller responsible for the success of each advertiser's ad campaign. And I agree that it's not the media's function to try to trace the sales effects of every ad "exposure" it generates for every advertiser. That's asking much too much and simply won't fly.

  12. John Grono from GAP Research replied, June 26, 2019 at 6:53 p.m.

    Ed, next time I am in NY we'll have to have lunch!

  13. AJ Brown from LeadsRx, June 26, 2019 at 8:53 p.m.

    I tend to agree with the notion that the media vendors themselves may not be the best source of attribution analysis. This goes across ALL mediums from Google and Facebook to TV and radio. Independent modeling by the advertiser and their trusted partners is really the best approach. To be honest, even marketing agencies sometimes suffer in their ability to be 100% neutral... it depends on what they were hired to accomplish.

    Are there any independent attribution modelers in your secret society group?  If not, I would certainly volunteer.  This is critical to get right.

  14. John Grono from GAP Research, June 26, 2019 at 9:46 p.m.

    Spot on AJ.

    Regarding the "secret society group" ... if we told you, we'd have to kill you.    LOL.

  15. Ed Papazian from Media Dynamics Inc, June 27, 2019 at 7:55 a.m.

    John, regarding the relationship between actual ad exposure and ad impact, all of the studies I have seen tell us that TV shows with less ad clutter in their breaks, as well as shows which are more involving, generate higher levels of commercial viewing than their opposites. This, in turn, produces higher ad recall and message registration levels per ad. The reason is obvious. When you are watching a highly emotional medical drama and a break appears, you are less likely to leave the room (or dial switch) because you might miss the continuation of the drama when it returns to the screen. In contrast, when you are watching a silly reality show, this is much less of a consideration, and the same is true of talking-head programs and shows like "Today," where the out-of-the-room factor must be huge during breaks as well as during content---like being in the bathroom. My point being that opportunity to see is not a valid barometer when it does not reflect the true opportunity-to-see situation across various types of content and ad-clutter situations. If you aren't in the room---even though the rating service thinks that you are there---you don't have an opportunity to see.
