Rentrak, Ken Papagan, President & Chief Strategy Officer
There are great advantages to panel-based sampling when done properly: it is cheaper, quicker and immediately actionable. Any market scientist "worth their salt" will tell you that you do not need a complete census to predict a group's behavior. While there are many questions around the "conventional" ratings that Nielsen produces -- from compliance, to ascription methodologies, to sample, and so on -- the set-top data from our 5+ million set-tops from Dish Network and AT&T U-verse suggests that, for the top 25 or so programs, the 14,000 Nielsen households generally portray viewing similar to what we see in the mega-samples of STB data. However, once you begin to drill into unrated networks and local TV markets, the "long tail" of content offerings and distribution outlets that TV now represents cannot adequately be portrayed by a relatively "tiny" sample of panel data. The current research needs to be augmented with deeper inquiries and insights. We strongly believe that with samples of millions instead of thousands (for example, Rentrak may have 30,000 homes in a market where Nielsen has 400), a much more accurate understanding of television viewing can be gleaned.
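The sampling arithmetic behind this claim can be sketched quickly. A minimal illustration (the 1.0 rating and the sample sizes are taken from the discussion above; the formula assumes a simple random sample, which real panels are not):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for an audience share p measured in a
    simple random sample of n homes."""
    return z * math.sqrt(p * (1 - p) / n)

rating = 0.01  # a 1.0 rating point, typical of a small cable network
for n in (400, 14_000, 30_000):
    moe = margin_of_error(rating, n)
    print(f"n={n:>6}: 1.0 rating +/- {moe * 100:.2f} points "
          f"({moe / rating:.0%} relative error)")
```

At 400 homes the margin of error is nearly as large as the rating itself, which is the long-tail problem in a nutshell; at 30,000 homes it shrinks to roughly a tenth of the rating.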
OTX, Bruce Friend, President
I think we as an industry need to start thinking more outside of the proverbial (Nielsen, set top, desktop, etc.) box. As such, I believe we need to put together an innovative-minded "think tank" of the best and the brightest academic, media industry and technology industry minds, and figure out how we can develop better, passive ways to measure the consumer and not the device. Also, this needs to be government (grant) or industry (client) funded and not vendor-funded.
TNS Media Research, George Shababb, President
The advent of Return Path Data (RPD) has reshaped and forever changed the face of TV audience measurement in the United States and around the world. Not only do RPD provide databases that are orders of magnitude greater than traditional meter approaches, but also data that are free of non-response bias and respondent fatigue. Moreover, RPD services provide key insights on how commercial audiences behave based on granular second-by-second tracking as well as the ability to understand how mainstay and emerging networks and programs are viewed. RPD provide these benefits with both census and robust sample panels.
Horizon Media, Brad Adgate, EVP Research
My thought has been that, at least for now, set-top box data is an overlay to panel data. There is enormous potential in STB information, but making sense of all that data and setting up parameters is going to take some time. I think some of the early uses of STB data will be to test creative execution of TV ads, pod position, pod length, commercial length and so on. For example, instead of doing a quintile analysis to determine commercial wear-out, STB data will do so in a real environment. We will see how well media theories that have stood for decades hold true.
Rainbow Networks & Services, Charlene Weisler, SVP Research
As the television landscape continues to evolve and fragment, there is a greater need for larger measurement samples. Not only is there value in measuring "beyond the primary home" -- offices, hotels, bars, second homes and so on -- there is also a need for stable viewing levels for smaller digital networks, VOD services and tiered pay offerings, as well as for highly targeted consumer groups and niche lifestyle segments. The current currency falls short. The advantage of set-top box data, beyond its ability to provide a census, is its ability to parse out the finer details of the viewing experience. It is not perfect, but it would expand the range of data possibilities. A logical next step would be to involve the major suppliers of set-top box data in the creation of a standardized methodology.
Carat, Shari Anne Brill, SVP, Director of Programming
We are unable to truly harness the power of STB data unless we can put it into context with the following:
Sequent Partners, Jim Spaeth, Co-founder
In the age of long-tail, media-affordable, high quality samples run thin quickly and there is a real attraction to the kinds of large household counts that set top box data can provide. But there is more to sampling than large numbers and more to measurement than collecting electronic events. A new science of media measurement needs to be developed and that requires both a significant investment of resources (time, talent and money) and open minds.
Acxiom, Joshua Herman, Digital Marketing Innovation Leader
Our clients in database marketing have been awash in measurement data for a long, long time. Having confidence in the accuracy and currency of "who and how many" you're reaching with which message, and "who and how many" responded to each message, is the only chance you have to continually improve, repeat successes and avoid repeating mistakes in your marketing spend. There are so many measurement checkpoints along the path to getting a targeted marketing campaign out the door that having the most accurate data possible is considered table stakes before our clients would even agree to pull the trigger on a campaign. In the context of making decisions about TV spending, it feels like an amazing luxury to operate with the confidence of marketing data for almost 132MM households in the U.S. Empirical confidence in your ability to measure what happened on the front end and back end of a campaign, with both predictive and descriptive data, is the best way to get a good night's sleep.
The introduction of set-top box data will be a mixed blessing. The good news is, you've got lots of data -- and the bad news is, you've got lots of data. If we take a lesson from online advertising, it's that the answer isn't that "behavioral data is king" or that "demographic data is king," but that true optimization takes place with the synthesis of behavioral and demographic data. So when it comes time to wade into the set-top box data and separate the wheat from the chaff, in addition to the statistical complexity it will also be important to consider and define the categories of data needed to find the right answers for advertisers. One important discipline I've tried to enforce with students of target marketing over the years is: "You are not allowed to touch the computer until you can clearly articulate the English-language question you want the data to answer."
So the data sampling questions are key -- but with digital set top boxes, limited data won't be the issue as much as how we categorize the data and the questions we want the data to answer. Prioritizing the questions we want answered will go a long way to drive the sampling questions in this new data-rich world of TV.
EVAD Consulting, Frank Foster, Principal
While discussing the merits of panels and set-top box data is an interesting exercise, moving the industry beyond the traditional approach has proven impossible. The reason, in my opinion, is that no one admits empirically how good or bad the current ratings are. If the industry is to do anything other than talk about the problem of a changing television landscape, it must first evaluate the good, the bad and the ugly of panel-based television research. Standard error margins are calculated on the assumption that the panel is both randomly generated and representative, neither of which is true. Biases associated with panel selection and ancillary behavior requirements are ignored.
Why do we allow ratings to be published without error margins and confidence intervals? It is silly, misleading and very poor research practice. A thorough evaluation of panel bias and error analysis is in order. Without such a backdrop, any new approach will be saddled with comparisons against a mythical "gold standard."
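The two complaints here -- missing confidence intervals, and intervals computed as if the panel were a random sample -- can both be made concrete. A small sketch, where the design effect `deff` is an illustrative assumption (real values would have to be estimated from the panel's recruitment and clustering):

```python
import math

def rating_ci(p: float, n: int, deff: float = 1.0, z: float = 1.96):
    """95% confidence interval for a rating p from a panel of n homes.

    deff > 1 models the design effect: non-random recruitment and
    clustering make the panel behave like a smaller random sample.
    """
    n_eff = n / deff  # effective sample size after the design effect
    se = math.sqrt(p * (1 - p) / n_eff)
    return (max(0.0, p - z * se), p + z * se)

# A 2.0 rating from 14,000 homes, with and without an assumed design effect
for deff in (1.0, 2.5):
    lo, hi = rating_ci(0.02, 14_000, deff)
    print(f"deff={deff}: 2.0 rating, 95% CI = ({lo*100:.2f}, {hi*100:.2f}) points")
```

Publishing the interval rather than the point estimate alone would make the "good, bad and ugly" of the panel visible at a glance.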
Current TV, Theresa Pepe Falcone, VP, Ad Sales Research
I am in a constant state of confusion about evolving sampling methods and quality vs. traditional approaches.
When you are a big network with distribution close to that of the broadcast networks, or in an attractive and memorable channel position, you may benefit from a panel or a "sample." There is no way a panel will evaluate a small network, or a network in a digital-tier position or higher, as accurately or as usefully as real STB data will. "Real" STB data means data from at least a few MSOs across different DMAs, big and small, with different channel positions and different types of distribution (cable, satellite, telco) -- data that can give you accurate, up-to-the-second viewership by box. Hopefully, in the near future, it will also be possible to distinguish among multiple boxes in the same house, broadband usage connected to the same distributor, and time-shifted viewing, none of which we can or will receive from a Nielsen panel or a Nielsen sample.
Tim Brooks (Lifetime Research Emeritus)
Many people are familiar with Santayana's famous observation, made more than a century ago: "Those who cannot remember the past are condemned to repeat it." Few, it seems, actually learn from it. The piece of history that needs to be recalled as we go gaga over set-top boxes and their huge samples is the Great Literary Digest Debacle of 1936, which, in many ways, gave rise to the modern science of surveying.
The Digest was a large and influential magazine that fielded one of the largest public surveys ever undertaken up to that time, with some two million respondents. It confidently predicted that the winner of the upcoming presidential election would be -- Alf Landon. After Landon was buried in an FDR landslide, it was revealed that those two million respondents had been drawn from the Digest's own upper-class subscriber lists and from lists of automobile owners and telephone subscribers. Sure, there were a lot of them, but they were not remotely representative of the electorate at large.
The Digest went out of business shortly thereafter, while a young statistician named Arthur C. Nielsen built an empire based on scientific sampling.
There may indeed be millions upon millions of set-top boxes, allowing measurement of tiny networks without significant "statistical error," but what are they representative of? Certainly not all or even most viewers in the U.S., or even viewers in homes that have an STB (since most of those homes do not have a two-way box on every set). I cringe every time I see a presentation that touts the huge samples that will now be available -- how we'll have a "census" (not in our lifetime, Charley), how even small networks or specialized programming will be accurately measured. Santayana (not to mention Alf Landon and A.C. Nielsen) would be spinning in their graves.
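The Digest's failure is, at bottom, arithmetic: a huge sample drawn from a biased frame converges confidently to the wrong answer, while a small random sample lands near the truth. A toy simulation makes the point -- all numbers here (the 10% affluent share and the 35%/65% candidate splits) are illustrative assumptions, not historical figures:

```python
import random
random.seed(1)

# Hypothetical electorate: 10% affluent (phone/car owners), 90% everyone else.
def vote(affluent: bool) -> str:
    p_fdr = 0.35 if affluent else 0.65  # assumed splits by group
    return "FDR" if random.random() < p_fdr else "Landon"

def fdr_share(sample) -> float:
    return sum(v == "FDR" for v in sample) / len(sample)

# Digest-style poll: enormous, but drawn only from the affluent frame.
digest = [vote(affluent=True) for _ in range(200_000)]
# Small scientific poll: random draw from the whole electorate.
scientific = [vote(affluent=random.random() < 0.10) for _ in range(1_500)]

print(f"Biased mega-sample (n=200,000): FDR {fdr_share(digest):.1%}")
print(f"Random sample      (n=1,500):  FDR {fdr_share(scientific):.1%}")
```

Under these assumptions the true FDR share is about 62%; the two-hundred-thousand-respondent biased poll reports roughly 35%, and no increase in its size fixes that. The same logic applies to an STB footprint drawn only from certain operators' subscribers.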
The FIRST slide in any presentation on STB measurement should address how this fundamental flaw will be handled. Not to do so is like ignoring the flammable dope on the skin of the Hindenburg, or the faulty rivets in the Titanic. Can it be addressed? Yes, but it will require a rather sophisticated combination of STB data, scientific sampling, weighting and possibly modeling. STB data can be valuable, but only as one part of the picture. Otherwise we'll be taking a glorious ride on the Hindenburg, or sailing on the Titanic. Alf Landon for president, anyone?
Center for Media Design, Mike Bloxham, Director, Insight & Research
While the move toward the widespread adoption of set-top data is both inevitable and right, as with other means of measurement, it will never be seen as perfect. As the media landscape has become increasingly complex, as content appears across different platforms and as technology enables a wider range of media consumption behaviors on the part of consumers, research and measurement will need to keep pace. Ultimately we will rely on the kind of advantages set-top box and other electronic metering can provide, but this will be complemented by other -- more behavioral or sociological -- research methods that enrich our understanding of how consumers are using media, thereby enhancing our ability to target them. In time, this combination will provide the new cross-media version of what we now think of as "currency." It won't be easy, it won't be cheap, and it will fundamentally depend on the willingness of the market to pay for it -- but it's where we need to go in order to move beyond the rather simplistic and increasingly outdated definitions of "currency" that we use at present.