What Really Embarrasses Me About The Coverage Of TV Research
I understand that journalists are responsible for explaining the context of research they report on as best they can. Early in my career as a trade journalist, at the invitation of former McCann-Erickson Media Director Gordon Link, I enrolled in the agency's media training program so that I could understand how to better cover things like media research.
I am not perfect, and over the years when I have made some mistakes, I've always tried to correct them and set the record straight. But let me tell you, when it comes to the topic of research, it isn't that easy.
And here's the really ugly truth: There is no perfect research. There are just different methods that yield different results. Occasionally, our industry comes to a consensus around some of those methods and results, making them de facto standards and even "currencies" for the purposes of planning, buying and evaluating the performance of media buys. That's certainly the case with Nielsen's TV ratings. People in the industry -- journalists, researchers, media planners and buyers -- may talk about them like they are the absolute truth, but they are just a consensus for estimating the size and composition of TV audiences. In fact, Nielsen never refers to its ratings as actual audience numbers. It calls them "estimates."
So why am I reminding you about this now -- today, when I should be wishing you good tidings of comfort and joy, or waxing on about some other important year-end TV issues? Well, it's because I need to set the record straight: MediaPost recently erred in reporting on some new industry research. It was a story about a report by Forrester Research -- a survey in February and March of 42,784 North American adults -- which found, based on the methods Forrester used, that the amount of time those adults spend online is now equal to the amount of time they spend watching television. Our mistake was using language in the headline ("Internet And TV, Equal Time For U.S. Households") and in the article that treated the results as fact, and not doing a proper job of explaining that the results were a function of the method -- a self-reported survey -- that Forrester used.
The lead paragraph of our article asserted that Forrester's research "confirms" those media behaviors, when in truth, it only confirms what the people responding to the survey reported -- that they now spend the same amount of time online that they do watching television.
People can argue with the relevance or representativeness of Forrester's findings, but I for one think they were worth reporting on, if only as an example of how people perceive the amount of time they spend with each medium. Others, however, treated our coverage as some form of heresy. To them, the industry consensus data -- Nielsen's estimates -- are sacrosanct, and no other research should ever be cited.
"One of the greatest frustrations among media researchers, is when we see headlines touting obviously bogus research studies," Steve Sternberg, a former Madison Avenue media researcher, wrote on his blog, "The Sternberg Report." Sternberg went on to chastise trade journalists for covering the Forrester study, asserting, "Any reporter who presented this gibberish, and any editor that allowed it to be printed should be embarrassed. Anyone who writes about this business for a living should know the reputation of the company involved, and at the very least should have quoted several industry researchers - all of whom would have disagreed with the findings. They also should have pointed out that the findings went against virtually every objective research study on the same topic."
I may not be allowed to call myself a researcher, but apparently some researchers think they are better judges of objective news coverage than journalists. Ironically, Sternberg cites both Nielsen data and the Council For Research Excellence's "Video Consumer Mapping Study" as presumably more objective sources of the truth, but fails to disclose his role on the CRE committee that fielded the study, or that the study was paid for by Nielsen. That's no reflection on the validity of the study, which was conducted by Ball State University using its highly regarded "observational" methods, in which researchers actually observe how people use media.
Sternberg also failed to disclose that he makes his living primarily off of Nielsen data. He even pitches readers of his blog to buy "My Exclusive Primetime TV Insights Reports." The reports, published by Baseline Intelligence, sell for $395, and are based primarily on analysis of Nielsen's TV audience estimates.
Now, few who know Sternberg would argue that he isn't a solid and credible researcher, but he is not a journalist and he is not necessarily the best arbiter of journalistic objectivity. And just because the industry trades billions of dollars worth of TV advertising time, and makes billions of dollars worth of TV programming decisions, based on Nielsen's estimates, doesn't mean those estimates are the truth or should be cited to the exclusion of anyone else's estimates.
The truth is that there have been times when Madison Avenue utilized two concurrent sources of TV ratings estimates: Nielsen's and Arbitron's. And if you go back to the early days of TV in the '50s and '60s, there were half a dozen ratings services measuring television in different ways and with different results. That was also a period when a Congressional probe into the TV ratings business led to the creation of an industry self-regulatory watchdog, the Media Rating Council, to watch over and accredit the integrity of various research estimates and methods. Interestingly, the national TV ratings that are currently used for those billions of dollars worth of TV advertising decisions are not technically accredited by the MRC. Parts of Nielsen's convoluted systems are, but not a key component: the commercial monitoring data that Nielsen uses to estimate its so-called C3 ratings.
For that matter, the MRC recently pulled its accreditation for all of Nielsen's diary-only local TV ratings estimates, because Nielsen failed to meet its standards. The diary reports are based on a sample of TV viewers who self-report their viewing behavior by writing it down in printed diaries and mailing them back to Nielsen.
The truth is that MRC accreditation does not determine whether Nielsen's ratings are currency. Industry consensus does, and advertisers, agencies and local TV stations continue to trade billions of dollars worth of advertising time on the basis of those diaries, even though many may believe they are not the most objective method for measuring actual viewing behavior in the current multichannel, time-shifted TV programming environment.
So the best we can hope for as an industry is for people on all sides of the business -- advertisers, agencies, researchers, research suppliers, consultants, bloggers, and yes, even trade journalists -- to be as complete as they possibly can about disclosing methods and biases, including their own self-interests in the research they cite as gospel.
That's why, a while back, I asked another well-regarded industry researcher, Gabe Samuels (Advertising Research Foundation, J. Walter Thompson, etc.), to help MediaPost craft a disclaimer for our Research Brief newsletters.
It reads, "We use the term research in the broadest possible sense. We do not perform an audit, nor do we analyze the data for accuracy or reliability. Our intention is to inform you of the existence of research materials and so we present reports as they are presented to us. The only requirements we impose are that they are potentially useful and relevant to our readers and that they pass the rudimentary test of relying on acceptable industry standards. We explicitly do not take responsibility for the findings. Please be aware of this and check the source for yourself if you intend to rely on any of the data we present." Good words to live by.