Commentary

TV Bigwigs Debate Data, Industrywide Groups

How often does one go to a sales-oriented conference and hear panelists rhapsodize about data? Never before in my corporate lifetime as a researcher for a range of television networks. But if you hang around long enough, I guess you see everything come to pass. And so it was at the recent B&C Advanced Advertising conference.

Here are some of my takeaways from the conference:

Data is moving out of the research department and into sales. This is not what every panelist said, but it was the leitmotif of this and other conferences on media: Data is being pulled out of the research function and moved either into siloed departments reporting to the same C-level executive or directly under sales.

What I thought would be a renaissance for research seems to be turning into a new level of purgatory, as it moves more and more into the background. In my opinion, data without research-applied analytics is worthless.

Is it time for a JIC (joint industry committee)? This is arguably one of the most controversial and legally risky ideas in our business. But that has not stopped people from talking about forming an industry-wide group to discuss things like standardization, edit rules and metrics. In a common-interest group, all would participate, so the issues of anti-competitiveness or antitrust should be moot.

Linda Yaccarino, NBCUniversal’s chairman of advertising sales and client partnerships, fired the first public volley in this battle by asking, “How do we come together as an industry to better measure our product? It has to be more intuitive, and we’ve got to get to a place with a uniform currency. The good thing about Nielsen is that it has decades of experience but it is largely self-reported. We need to come together and coalesce as an industry.” Boom.

Standardization of metrics is pivotal. The standardization and creation of common metrics came up on practically every panel. As Yaccarino explained, “We have to have a common currency and have to measure the efficacy and value of the consumer experience.”

When asked what the greatest impediment to the adoption of programmatic TV was, Brent Gaskamp, senior vice president, corporate development, N.A., Videology, replied: “No standard metrics.” Shereta Williams, president, Videa, concluded that “measurement has to get better, especially cross-platform measurement.” But even if we were all to agree, nothing is easy. Frank Foster, senior vice president/general manager, TiVo Research, added a new wrinkle. He explained: “We currently don't have stewardship systems that can handle the new metrics.”

Will bigger networks with higher ratings continue to dominate? The answer is “not necessarily,” but it depends whom you ask. For Jonathan Bokor, senior vice president/director of advanced media at Mediavest, “we built a system over the past 75 years where big nets and big ratings get the most money. But as you move to an addressable-based paradigm, this type of spending needs to be justified… Large networks will need to prove that they are worth the premium money.” But Lance Neuhauser, CEO of 4C, countered, “the small guys will still have to figure out a way to prove their value.”

The media landscape continues to shift. Definitions of programmatic, advanced advertising and addressable advertising continue to merge. But some things are eternal: Make it easy to implement and bring value through the sales funnel. Marianne Gambelli, executive vice president, chief investment officer at Horizon, summed it up when she said: “I want research across all media to unlock better value that we can't get our arms around manually.”

10 comments about "TV Bigwigs Debate Data, Industrywide Groups".
  1. Ed Papazian from Media Dynamics Inc, October 22, 2015 at 1:06 p.m.

    What continues to amaze me about this "data" thing is that we have had tons and tons of data for decades---all sorts of data. We have the ratings broken out in reasonable detail, we have data on viewer engagement, attentiveness, etc. on a show-by-show basis, along with ad recall for thousands of ad campaigns. We also know what shows heavy users---or light users, for that matter---of peanut butter, diet colas, frozen entrees, etc. watch, as well as what brands they prefer. We have show-by-show data on new car buyers---by category and, sometimes, by make---as well as frequent business travelers, people who are concerned about pollution, those who are image conscious, price conscious, those who have or don't have DVRs, etc.

    So what's so new about "data"? And what data are they pontificating about? Also, why isn't what's available now being utilized? It may not be perfect, it is often human-based rather than electronic and the samples are smaller---though none of this necessarily means that the information is wrong.

    I wish some of the gurus who are pounding away at the impending "data" revolution would take the trouble to look at the data they are championing. Exactly what striking new insights---never dreamed of previously---have they suddenly uncovered? Are advertisers really wasting a huge amount of their media dollars, and can "data" fix this? OK---how?

  2. Charlene Weisler from Writer, Media Consultant: WeislerMedia.blogspot.com, October 22, 2015 at 1:33 p.m.

    Hi Ed, For me, the exciting part of the new big data is STB data, which is very granular (second by second) and has a very large footprint (although it has to be modeled for total U.S. representation). That data, coupled with the electronic ease with which we receive other data sets, enables us to get even greater behavioral insights. But much of it is still siloed. My prediction is that the companies that solve for data silos will get the most value from data.

  3. John Grono from GAP Research, October 22, 2015 at 8:40 p.m.

    Charlene, I agree about STB data, but the second-by-second granularity can be problematic.

    There are latency issues with STB data, so the content shown at hh:mm:ss on one STB can differ from the content shown at the same hh:mm:ss on another STB. I have seen this latency reported as up to seven seconds---or half a standard 15-second ad. (The sketch below makes the effect concrete.)

    My concern is that there is a level of precision implied by that granularity that may not exist. It could even be a misleading vanity metric.

    Clearly, second-by-second data is in the advertiser's and agency's interest. Minute-by-minute should be sufficiently granular for network programming ebb-and-flow analyses.

    Maybe it is time to take a leaf out of the online world's book and start 'tagging' advertising content (like Nielsen's DAR approach). Even being able to audio-match ad content on the current panel should provide greater granularity and transparency around how people behave during advertisements and ad-breaks.
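
    To make the latency point concrete, here is a minimal Python sketch (the ad timing and latency range are hypothetical, with the seven-second figure taken from the report cited above). It computes how much of a 15-second ad's true window an STB actually logs when its clock lags:

    ```python
    # Hypothetical illustration of STB clock latency vs. second-by-second ad data.
    # A 15-second ad airs at seconds 100-114 on the broadcast clock. If an STB's
    # timestamps lag by `latency` seconds, the seconds it logs against the ad
    # only partially overlap the ad's true window.

    AD_START, AD_LENGTH = 100, 15
    true_window = set(range(AD_START, AD_START + AD_LENGTH))

    for latency in range(8):  # lags of 0-7 seconds
        logged = set(range(AD_START + latency, AD_START + latency + AD_LENGTH))
        overlap = len(true_window & logged) / AD_LENGTH
        print(f"latency {latency}s: {overlap:.0%} of logged seconds fall inside the ad")

    # At a 7-second lag, only 8 of 15 logged seconds (~53%) fall within the ad---
    # roughly the "half a standard 15-second ad" noted above.
    ```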

  4. Ed Papazian from Media Dynamics Inc, October 23, 2015 at 8:57 a.m.

    John, just out of curiosity: even if you could get accurate set-tuning data on a second-by-second basis---which doesn't tell you if the "viewer" was even in the room, let alone attentive and/or "watching"---what would you do with the information? The data is unlikely to be particularly sensitive, except for small second-to-second variations. What good is it, and would you regard it as extremely "valuable"?

  5. Charlene Weisler from Writer, Media Consultant: WeislerMedia.blogspot.com replied, October 23, 2015 at 10:20 a.m.

    You have a very good point. Latency is an issue, and modeling may be the answer. But so is the issue of content recognition. CIMM (the Coalition for Innovative Media Measurement) is working to help establish standard codes for both programs and ads.

  6. Charlene Weisler from Writer, Media Consultant: WeislerMedia.blogspot.com replied, October 23, 2015 at 10:21 a.m.

    For an advertiser with 15s, 20s and 30s it would enable closer measurement.

  7. Ed Papazian from Media Dynamics Inc, October 23, 2015 at 10:40 a.m.

    Charlene, any advertiser who thinks that set-top box set-usage ratings are measuring commercial "viewing" is kidding himself. What's more, the numbers would be virtually the same as the average-minute ratings, give or take a couple of decimal points. In other words, if the so-called average commercial minute rating, per Nielsen's way of calculating, is 2.5% and this is slightly misleading relative to a particular 15-second ad in the one-minute time span, a specific tally for the exact 15 seconds when the shorter commercial ran might show an average second rating of 2.3% or, maybe, 2.6%---but, most likely, the small ups and downs would cancel out across many placements in many shows. (The sketch below works through this arithmetic.)

    In any event, even if a "big data" panel can produce such "granular" ratings on a statistically more "reliable" basis---because of its larger sample---one will have to consider how its overall rating estimates compare with Nielsen's. Are they a perfect match? Are the big data ratings usually higher or lower than Nielsen's, so that some adjustment is required before one can demand makegoods from a network because the second-by-second data showed a 1.5% shortfall against what was guaranteed? Finally, will the networks guarantee commercial-by-commercial GRPs for specific ad campaigns? What a can of worms that would open up!
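
    A minimal sketch of that arithmetic in Python, with purely hypothetical per-second ratings: the average over the exact 15 seconds an ad ran barely differs from the average rating of the minute containing it.

    ```python
    # Hypothetical per-second household ratings (%) for one minute of a program.
    # Set-tuning levels vary only slightly from second to second.
    import random

    random.seed(1)
    per_second = [2.5 + random.uniform(-0.2, 0.2) for _ in range(60)]

    # Average-minute rating: the mean over the full minute.
    avg_minute = sum(per_second) / len(per_second)

    # "Exact" rating for a 15-second ad that ran in seconds 30-44 of the minute.
    ad_window = per_second[30:45]
    avg_ad = sum(ad_window) / len(ad_window)

    print(f"average-minute rating:  {avg_minute:.2f}%")
    print(f"exact 15-second rating: {avg_ad:.2f}%")
    # The two typically differ by only a few hundredths of a point, and across
    # many placements the small ups and downs tend to cancel out.
    ```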

  8. Ed Papazian from Media Dynamics Inc, October 23, 2015 at 10:45 a.m.

    Sorry about the typos in my last post, guys. Must learn to read what I've typed before posting. Sigh! Sigh!

  9. Charlene Weisler from Writer, Media Consultant: WeislerMedia.blogspot.com replied, October 23, 2015 at 11:16 a.m.

    I think we are on the way to commercial ratings---maybe not next year, but soon. I agree that we are talking about tuning rather than viewing with STB data, but Nielsen must also ascribe viewing to account for the lack of button-pushing in certain homes. So there is no perfect solution. But I am hopeful that we as an industry will get together and work toward solutions to measurement using the new data sets available---deciding, too, which are the most valuable sets and which are perhaps "nice to have" but not pivotal.

  10. Ed Papazian from Media Dynamics Inc, October 23, 2015 at 2:39 p.m.

    Charlene, you may well be right. Even though the resulting big data "granular" findings will probably show tiny differences between one commercial and another, I wouldn't be surprised to see the media-buying community pressuring the networks to switch to this exciting new metric, despite the fact that it does not tell us much of anything regarding who watches which commercials. After all, the ongoing assumption holds that any time a set is tuned in and a commercial is on, the viewer must be "watching." Does anyone believe that this is even close to the truth? Has anyone even thought about it?

    As I noted, it may not be as easy as one thinks to convince the networks to guarantee "audience" delivery for specific commercial placements---but they might. If so, one wonders how Nielsen will report ratings for TV shows in general. The current metric is the average commercial minute, which provides a level playing field for all programs and channels. Would this standard then change to the average commercial's "audience"? If so, you have a different mix of commercial lengths and commercial loads for each show, as well as network types and dayparts. Using an average ad as the base could bias the findings in favor of those channels that have a higher ratio of "30s." Or Nielsen could do it by commercial length, giving us a rating per show for the average "30" versus the average "15." Or Nielsen might just go back to the old system and show all-content, average-minute ratings. It's quite a dilemma. (The sketch below illustrates the mix problem.)
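
    A minimal sketch of that mix problem (all ratings and commercial counts hypothetical): two shows that perform identically for each commercial length still post different "average commercial" ratings purely because they carry different ratios of 30s to 15s.

    ```python
    # Hypothetical: 30s rate slightly higher than 15s, identically on both shows.
    RATING_30, RATING_15 = 2.6, 2.4  # household ratings (%)

    def avg_commercial_rating(n_30s: int, n_15s: int) -> float:
        """Simple average across all commercials in a show, regardless of length."""
        total = n_30s * RATING_30 + n_15s * RATING_15
        return total / (n_30s + n_15s)

    show_a = avg_commercial_rating(n_30s=12, n_15s=4)   # 30-heavy load
    show_b = avg_commercial_rating(n_30s=4, n_15s=12)   # 15-heavy load

    print(f"show A (mostly 30s): {show_a:.2f}%")  # 2.55%
    print(f"show B (mostly 15s): {show_b:.2f}%")  # 2.45%
    # Identical per-length performance, yet different "average commercial"
    # ratings: the bias in favor of channels with a higher ratio of 30s.
    ```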
