With all the announcements recently of network groups adding TV audience data to their upfront presentations, it may be easier now to name the networks that aren’t doing this than those that
are. The offerings range from applying standard data sets like Nielsen Catalina shopper card data, to very complex, custom-built data management platforms that can ingest an advertiser’s
first-party data.
These investments in TV audience data suggest that this is not a passing fad — and may soon become the lingua franca for most TV campaigns. That means TV
media buying is about to become very complex very quickly. As they say, “the devil is in the details.”
Building a data-driven TV plan requires understanding several key aspects of
the data available and how it will be applied to the inventory. First is the audience segment definition: for example, heavy purchasers of high-end women’s beauty products. If the data set does
not have a standard definition of what a “heavy” purchaser is, then one must be chosen: say, consumers at least two times more likely to purchase than the typical consumer.
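A definition like that can be reduced to a simple threshold rule. The sketch below is illustrative only: the consumer IDs, purchase counts, and the 2x cutoff are assumptions standing in for whatever the data set actually provides.

```python
from statistics import mean

# Units purchased per consumer over the evaluation window (made-up data).
purchases = {"c1": 1, "c2": 0, "c3": 6, "c4": 2, "c5": 9}

avg_rate = mean(purchases.values())  # (1 + 0 + 6 + 2 + 9) / 5 = 3.6

# The judgment call from the text: "heavy" = at least 2x the typical consumer.
THRESHOLD = 2.0

heavy = {cid for cid, n in purchases.items() if n >= THRESHOLD * avg_rate}
# Only c5 (9 purchases >= 7.2) qualifies; c3 (6 purchases) falls just short.
```

The point is that the cutoff itself is a modeling decision: move THRESHOLD from 2.0 to 1.5 and c3 joins the segment, changing every downstream viewership number.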
Next, we must define the
time period over which the audience segment’s viewership is evaluated. Because many audience segments are quite small, matching them against a fairly small panel (or even a sizable
set of set-top-box data) leaves few observations, so a long window must be used to ensure that the viewership is actually readable. A month of data may be required before a signal can be found among all the
noise.
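One way to think about the window length is as a minimum-sample requirement. This is a toy sketch, not any vendor's methodology; the 500-event floor and the per-day rates are assumed numbers.

```python
import math

MIN_EVENTS = 500  # assumed floor for a "readable" viewership signal

def required_window_days(events_per_day: float, min_events: int = MIN_EVENTS) -> int:
    """Smallest whole number of days whose expected event count reaches the floor."""
    return math.ceil(min_events / events_per_day)

# A small segment producing ~20 matched viewing events per day on a network
# needs a 25-day window: roughly the month of data described above. A larger
# segment at ~250 events per day is readable in about two days.
```

Under these assumptions, `required_window_days(20)` returns 25 and `required_window_days(250)` returns 2, which is why small segments force long evaluation windows.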
Then decisions must be made about outliers. Because most TV audience data sets output rankers of audience-segment composition by network and daypart, the data
comes as a set of indexes. When a very small segment audience is divided by the very small total audience of a long-tail network, the result is a very high index that is not meaningful:
it’s an outlier. Deciding what is an outlier and what’s not is a matter of judgment.
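The mechanics can be sketched in a few lines. Everything here is an assumption for illustration: the 1,000-viewer floor, the audience counts, and the 2% population share are invented to show how a tiny long-tail denominator produces an index nobody should trust.

```python
# Index = 100 * (segment share of the network's audience) / (segment share
# of the population); 100 means average concentration.
def composition_index(seg_viewers: int, net_viewers: int, seg_pop_share: float) -> float:
    return 100 * (seg_viewers / net_viewers) / seg_pop_share

MIN_AUDIENCE = 1_000  # assumed minimum network audience for a trustworthy index

def readable_index(seg_viewers, net_viewers, seg_pop_share, floor=MIN_AUDIENCE):
    """Return the index, or None when the network audience is too small to read."""
    if net_viewers < floor:
        return None  # long-tail network: the index would be noise
    return composition_index(seg_viewers, net_viewers, seg_pop_share)

# Segment = 2% of the population. A 50,000-viewer network with 2,000 segment
# viewers indexes at 200. A 300-viewer long-tail network with 60 segment
# viewers would index at a flashy 1,000 -- so it gets flagged instead.
big_net = readable_index(2_000, 50_000, 0.02)   # 200.0
tail_net = readable_index(60, 300, 0.02)        # None: outlier, not insight
```

Where exactly to set the floor is precisely the judgment call the text describes; the code only makes the decision explicit and repeatable.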
Audience segment composition must also be balanced with reach. We found that financial
news networks like CNBC had the highest concentration of heavy purchasers of high-end women’s beauty products, contrary to gender stereotypes. (I recently had a lot of fun at a conference
quizzing people on their assumptions of affluent women’s viewership, hopefully shattering some stereotypes.) Still, the absolute number of these affluent women consumers is low and must be
balanced with higher-reach networks with a lower concentration of such consumers. A decision must be made on what that right balance is, to determine the correct allocation of weight across networks
on the plan.
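That composition-versus-reach trade-off can be framed as a weighted blend. The sketch below is a deliberately simple illustration, not a planning algorithm: the network names, index and reach figures, and the 0.5 blend are all assumptions.

```python
# Made-up networks: one with high segment concentration but tiny reach,
# one near-average in concentration but broadly viewed.
networks = {
    "FinanceNewsNet": {"index": 240, "reach": 0.02},
    "BroadNet":       {"index": 110, "reach": 0.30},
}

ALPHA = 0.5  # the judgment call: how much composition counts vs. reach

max_reach = max(n["reach"] for n in networks.values())

def score(n: dict, alpha: float = ALPHA) -> float:
    """Blend normalized composition (index / 100) with normalized reach."""
    return alpha * (n["index"] / 100) + (1 - alpha) * (n["reach"] / max_reach)

raw = {name: score(n) for name, n in networks.items()}
total = sum(raw.values())
weights = {name: s / total for name, s in raw.items()}  # sums to 1.0
```

With ALPHA at 0.5 the high-composition network edges out the high-reach one; lower ALPHA and the allocation swings toward reach. The blend parameter is where the "right balance" decision actually lives.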
Finally, we must identify the type of addressability of the TV inventory. Is it national, DMA-, zone- or household-addressable? The level of addressability of the inventory
is crucial because it brings into play a much broader set of dimensions to consider.
Additional factors that need to be considered are how to combine different data sets, how to forecast with
reasonable accuracy when working with small samples — and, of course, the latency issue that is inherent in all advanced TV data sets.
I’m outlining these issues not to overwhelm or
to discourage advertisers and their agencies from pursuing data, but to acknowledge the complexity and challenges on the horizon with the transition to a more data-driven TV world.
Right now,
teams of data scientists and researchers at networks and agencies are crunching through reams of data and applying the necessary judgment to make the right decisions. But to make data-driven advertising a
success at scale, we need to simplify the use of the TV data in building and optimizing a campaign. Data-driven TV will only achieve true adoption when an agency can easily communicate with confidence
to a client advertiser how a plan was built using a handful of simple, well-understood and standardized data metrics.