Commentary

Demystifying The Use Of TV Audience Data: A Call For Standardization

With all the announcements recently of network groups adding TV audience data to their upfront presentations, it may be easier now to name the networks that aren’t doing this than those that are. The offerings range from applying standard data sets like Nielsen Catalina shopper card data to deploying very complex, custom-built data management platforms that can ingest an advertiser’s first-party data.

These investments in TV audience data suggest that this is not a passing fad — and may soon become the lingua franca for most TV campaigns. That means TV media buying is about to become very complex very quickly. As they say, “the devil is in the details.”

Building a data-driven TV plan requires understanding several key aspects of the available data and how it will be applied to the inventory. First is the audience segment definition: for example, heavy purchasers of high-end women’s beauty products. If the data set does not include a standard definition of what a “heavy” purchaser is, then a decision must be made: say, two times more likely to purchase than a typical consumer.
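
As a purely illustrative sketch (in Python with pandas, against a hypothetical shopper file with made-up column names, not any vendor’s actual schema), such a threshold rule might look like this:

```python
import pandas as pd

# Hypothetical shopper-card extract: one row per household with category
# spend over the lookback window. All names and figures are illustrative.
shoppers = pd.DataFrame({
    "household_id": [1, 2, 3, 4, 5],
    "beauty_spend": [420.0, 35.0, 180.0, 610.0, 90.0],
})

# "Heavy purchaser" defined here as at least 2x the average household's
# category spend, mirroring the "two times more likely" rule of thumb.
threshold = 2 * shoppers["beauty_spend"].mean()
heavy_segment = shoppers[shoppers["beauty_spend"] >= threshold]

print(f"Heavy-purchaser threshold: {threshold:.2f}")
print(heavy_segment)
```

The specific multiplier is the judgment call; change it and the segment, and everything downstream of it, changes with it.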

Next, we must define the time period over which the audience segment’s viewership will be evaluated. Because many audience segments are quite small and are matched against a fairly small panel, or even against a sizable set of set-top-box data, a long period of time must be used to ensure that the viewership is actually readable. A month of data may be required to make sure a signal can be found among all the noise.
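
A minimal sketch of that choice, assuming a hypothetical matched tune-in feed and an arbitrary minimum-household floor, is to widen the window until each network read rests on enough distinct homes:

```python
import pandas as pd

# Hypothetical tune-in records matched to the segment: one row per
# (household, network, date). Names and dates are illustrative only.
tune_ins = pd.DataFrame({
    "household_id": [1, 1, 4, 4, 4, 7],
    "network":      ["CNBC", "CNBC", "CNBC", "FOOD", "CNBC", "FOOD"],
    "view_date":    pd.to_datetime(
        ["2015-04-02", "2015-04-06", "2015-04-05",
         "2015-04-11", "2015-04-28", "2015-04-30"]),
})

MIN_HOUSEHOLDS = 2  # assumed floor for calling a network-level read usable

# Widen the lookback window until each network read rests on enough homes.
for days in (7, 14, 30):
    cutoff = tune_ins["view_date"].max() - pd.Timedelta(days=days)
    window = tune_ins[tune_ins["view_date"] >= cutoff]
    homes = window.groupby("network")["household_id"].nunique()
    readable = homes[homes >= MIN_HOUSEHOLDS]
    print(f"{days}-day window -> readable networks: {list(readable.index)}")
```

With these toy numbers, only the 30-day window clears the floor, which is the "month of data" point in miniature.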

Then decisions must be made about outliers. Because most TV audience data sets produce rankers of audience-segment composition by network and daypart, the output is a set of indexes. If a very small audience segment’s viewership is divided by a very small network’s long-tail viewership, the result is a very high index that is not meaningful: it’s an outlier. Deciding what is and isn’t an outlier is a matter of judgment.
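
To make the arithmetic concrete, here is a sketch with invented numbers: the composition index is the segment’s concentration on a network divided by its concentration in the matched population, and a tiny long-tail network can post a sky-high index on a handful of homes. The minimum-viewer floor is an assumed, arbitrary cutoff, not anyone’s published rule.

```python
import pandas as pd

# Illustrative network-level counts; not real data. "segment_viewers" is
# the number of segment homes seen on the network, "total_viewers" the
# network's total matched homes.
ranker = pd.DataFrame({
    "network":         ["Broad Net", "CNBC", "Tiny Long-Tail Net"],
    "segment_viewers": [4_000, 1_200, 3],
    "total_viewers":   [500_000, 80_000, 40],
})

SEGMENT_SHARE_OF_POPULATION = 0.01  # segment = 1% of all matched homes

# Composition index: segment concentration on the network vs. the population
# (100 = average). Small denominators inflate this wildly.
ranker["index"] = (ranker["segment_viewers"] / ranker["total_viewers"]
                   / SEGMENT_SHARE_OF_POPULATION * 100)

# The judgment call: flag reads built on too few homes as outliers rather
# than treating a 750 index on 40 homes as a real finding.
MIN_TOTAL_VIEWERS = 1_000
ranker["outlier"] = ranker["total_viewers"] < MIN_TOTAL_VIEWERS
print(ranker)
```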

Audience segment composition must also be balanced with reach. We found that financial news networks like CNBC had the highest concentration of heavy purchasers of high-end women’s beauty products, contrary to gender stereotypes. (I recently had a lot of fun at a conference quizzing people on their assumptions about affluent women’s viewership, hopefully shattering some stereotypes.) Still, the absolute number of these affluent women consumers is low and must be balanced against higher-reach networks with a lower concentration of such consumers. A decision must be made about the right balance, which determines the allocation of weight across networks on the plan.
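
One illustrative way to frame that trade-off, assuming a made-up blending weight and normalized index and reach figures, is a simple weighted score. The point is that the allocation moves with the judgment call, not that this is anyone’s actual planning model.

```python
import pandas as pd

# Each candidate network carries a composition index and a reach figure.
# Numbers are invented; the blend weight ALPHA is a planning judgment call.
plan = pd.DataFrame({
    "network": ["CNBC", "Broad Net"],
    "index":   [150, 80],          # concentration of the target segment
    "reach":   [80_000, 500_000],  # matched homes reached
})

ALPHA = 0.5  # 0 = pure reach plan, 1 = pure composition plan

# Normalize both dimensions to 0-1 so neither dominates, then blend.
plan["index_norm"] = plan["index"] / plan["index"].max()
plan["reach_norm"] = plan["reach"] / plan["reach"].max()
plan["score"] = ALPHA * plan["index_norm"] + (1 - ALPHA) * plan["reach_norm"]

# Allocate plan weight in proportion to the blended score.
plan["weight"] = plan["score"] / plan["score"].sum()
print(plan[["network", "score", "weight"]])
```

Nudge ALPHA toward composition and the high-index, low-reach network takes more of the plan; nudge it toward reach and the broad network does.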

Finally, we must identify the type of addressability of the TV inventory: is it national, DMA-, zone- or household-addressable? The level of addressability is crucial because it brings into play a much broader set of dimensions to consider.

Additional factors that need to be considered are how to combine different data sets, how to forecast with reasonable accuracy when working with small samples — and, of course, the latency issue that is inherent in all advanced TV data sets.

I’m outlining these issues not to overwhelm or to discourage advertisers and their agencies from pursuing data, but to acknowledge the complexity and challenges on the horizon with the transition to a more data-driven TV world.

Right now, teams of data scientists and researchers at networks and agencies are crunching through reams of data and applying the necessary judgment to make the right decisions. But to make data-driven advertising a success at scale, we need to simplify the use of TV data in building and optimizing a campaign. Data-driven TV will only achieve true adoption when an agency can easily communicate with confidence to a client advertiser how a plan was built, using a handful of simple, well-understood and standardized data metrics.

3 comments about "Demystifying The Use Of TV Audience Data: A Call For Standardization".
  1. Ashish Chordia from Alphonso Inc, May 8, 2015 at 11:29 a.m.

    Great points, Walt. Very timely post.

    The amount of TV viewership data available is exploding right now. Beyond the panel and small-sample-based methods you cited, many companies like ours and others in the TV ACR space are bringing oodles of data to the market.

    The challenge remains how to make this data actionable, how to deliver true 'insights' rather than meaningless charts and analytics.

    As you point out, empowering decision makers at agencies and brands is a key step in delivering the true value.

  2. Ed Papazian from Media Dynamics Inc, May 8, 2015 at 2:11 p.m.

    Walt, I won't start in by opining that there will be few if any meaningful national TV "programmatic" buys this year. Rather, I found many of your other points to be well taken.

    There are some important issues to be addressed, however. For example, there is no operational system that can accurately translate "big data" set-top ratings into reliable viewer ratings. If you assume that because a particular TV show indexes above the norm in set usage among upscale or younger homes, this is also the case for one or both of the adult heads of house in such homes, you are likely to be wrong by a significant margin. Indeed, it's quite possible that the show in question underperforms among individual adults in such households. One just doesn't know.

    A second issue concerns the melding of data from various independent sources to come up with a market value index per viewer, per show, that can be married with Nielsen ratings to give buyers and sellers an audience "currency" to work with. In effect, this is attribution laid upon attribution, which is likely to soften, or randomize, many of the actual distinctions that may exist between one show and another, let alone individual telecasts, or installments, of a series. So far, I have seen nothing that tells me that the many aspects of survey quality and compatibility have been reconciled, and that the findings of such statistical machinations have been validated against hard, single-source databases.

    I agree with you about keeping it simple, so people like brand managers, for example, can grasp the core concept. However, I suspect that those teams of scientists and researchers at the networks and agencies who are poring over all of that "big data" set-usage information and the results of various "big data" marketing compilations may jump to hasty conclusions and forget to validate said assumptions before attempting the immensely difficult task of trying to make a national TV programmatic buying system, involving all major players and not just "long tail" sellers, operational. I wish them lots of luck, as they will certainly need it.

  3. dorothy higgins from Mediabrands WW, May 8, 2015 at 4:45 p.m.

    Quite simply, we need common standards that are not so finely parsed that they render viewership data unstable, and not so difficult to translate across partners and platforms that we end up defaulting back to age/gender for commonality.
