What Really Embarrasses Me About The Coverage Of TV Research

Here's a disclaimer I've had to make far too many times during my 30-year career covering media, especially television: I am not a researcher. I am a journalist who, from time to time, writes about research, including research about television. I do not know how to compute a "statistical deviation," and I cannot tell you what the "regression from the mean" really means. I simply report on research findings and, when relevant, on the methods researchers use to come up with those findings. That's it. It's not a cop-out. It's just the truth.

I understand that journalists are responsible for explaining the context of research they report on as best they can. Early in my career as a trade journalist, at the invitation of former McCann-Erickson Media Director Gordon Link, I enrolled in the agency's media training program so that I could understand how to better cover things like media research.

I am not perfect, and over the years when I have made some mistakes, I've always tried to correct them and set the record straight. But let me tell you, when it comes to the topic of research, it isn't that easy.

And here's the really ugly truth: There is no perfect research. There are just different methods that yield different results. Occasionally, our industry comes to a consensus around some of those methods and results, making them de facto standards and even "currencies" for the purposes of planning, buying and evaluating the performance of media buys. That's certainly the case with Nielsen's TV ratings. People in the industry --journalists, researchers, media planners and buyers -- may talk about them like they are the absolute truth, but they are just a consensus for estimating the size and composition of TV audiences. In fact, Nielsen never refers to its ratings as actual audience numbers. It calls them "estimates."

So why am I reminding you about this now -- today, when I should be wishing you good tidings of comfort and joy, or waxing on about some other important year-end TV issues? Well, it's because I need to set the record straight: MediaPost recently erred in reporting on some new industry research. It was a story about a report by Forrester Research -- a survey, fielded in February and March, of 42,784 North American adults -- which found, based on the methods Forrester used, that the amount of time those adults spend online is now equal to the amount of time they spend watching television. Our mistake was using language in the headline ("Internet And TV, Equal Time For U.S. Households") and in the article that treated the results as fact, and not doing a proper job of explaining that the results were a function of the method -- a self-reported survey -- that Forrester used.

The lead paragraph of our article asserted that Forrester's research "confirms" those media behaviors, when in truth, it only confirms the behavior of the people responding to the survey -- that they now spend the same amount of time online that they do watching television.
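For scale, a sample of 42,784 respondents carries only a tiny sampling margin of error, which underscores that any gap between Forrester's numbers and metered estimates like Nielsen's is a question of method (self-reporting), not sample size. Here is a back-of-the-envelope sketch in Python; the function name and the simple-random-sampling assumption are mine for illustration, not anything Forrester published:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n (p=0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

# Forrester's reported sample size; note that online survey panels
# are generally not simple random samples, so treat this as a floor.
print(f"+/- {margin_of_error(42784):.2%}")  # about +/- 0.47%
```

Even at roughly half a percentage point of sampling error, self-reported time-use estimates can differ from metered measurements by far more than that, which is exactly the methodological point at issue.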

People can argue with the relevance or representativeness of Forrester's findings, but I for one think they were worth reporting on, if only as an example of how people perceive the amount of time they spend with each medium. Others, however, treated our coverage as some form of heresy. To them, the industry consensus data -- Nielsen's estimates -- are sacrosanct, and no other research should ever be cited.

"One of the greatest frustrations among media researchers, is when we see headlines touting obviously bogus research studies," Steve Sternberg, a former Madison Avenue media researcher, wrote on his blog, "The Sternberg Report." Sternberg went on to chastise trade journalists for covering the Forrester study, asserting, "Any reporter who presented this gibberish, and any editor that allowed it to be printed should be embarrassed. Anyone who writes about this business for a living should know the reputation of the company involved, and at the very least should have quoted several industry researchers - all of whom would have disagreed with the findings. They also should have pointed out that the findings went against virtually every objective research study on the same topic."

I may not be allowed to call myself a researcher, but apparently some researchers think they are better judges of objective news coverage than journalists. Ironically, Sternberg cites both Nielsen data and the Council For Research Excellence's "Video Consumer Mapping Study" as presumably more objective sources of the truth, but fails to disclose that he served on the CRE committee that fielded the study, or that the study was paid for by Nielsen. That's no reflection on the validity of the study, which was conducted by Ball State University using its highly regarded "observational" methods, in which researchers directly observe how people use media.

Sternberg also failed to disclose that he makes his living primarily off of Nielsen data. He even pitches readers of his blog to buy "My Exclusive Primetime TV Insights Reports." The reports, published by Baseline Intelligence, sell for $395, and are based primarily on analysis of Nielsen's TV audience estimates.

Now, few who know Sternberg would argue that he isn't a solid and credible researcher, but he is not a journalist and he is not necessarily the best arbiter of journalistic objectivity. And just because the industry trades billions of dollars worth of TV advertising time, and makes billions of dollars worth of TV programming decisions, based on Nielsen's estimates, doesn't mean those estimates are the truth or should be cited to the exclusion of anyone else's estimates.

The truth is that there have been times when Madison Avenue utilized two concurrent sources for TV ratings estimates: Nielsen's and Arbitron's. And if you go back to the early days of TV in the '50s and '60s, there were half a dozen ratings services measuring television in different ways and with different results. That was also the period when a Congressional probe into the TV ratings business led to the creation of an industry self-regulatory watchdog, the Media Rating Council, to watch over and accredit the integrity of various research estimates and methods. Interestingly, the national TV ratings that are currently used for those billions of dollars worth of TV advertising decisions are not technically accredited by the MRC. Parts of Nielsen's convoluted systems are, but not a key component: the commercial monitoring data that Nielsen uses to estimate its so-called C3 ratings.

For that matter, the MRC recently pulled its accreditation for all of Nielsen's diary-only local TV ratings estimates, because Nielsen failed to meet its standards. The diary reports are based on a sample of TV viewers who self-report their viewing behavior by writing it down in printed reports and mailing them back to Nielsen.

The truth is that MRC accreditation does not determine whether Nielsen's ratings are currency or not. Industry consensus does, and advertisers, agencies and local TV stations continue to trade billions of dollars worth of advertising time on the basis of those diaries, even though many may believe they are not the most objective method for measuring actual viewing behavior in the current multichannel, time-shifted TV programming environment.

So the best we can hope for as an industry is for people on all sides of the business -- advertisers, agencies, researchers, research suppliers, consultants, bloggers, and yes, even trade journalists -- to be as complete as they possibly can about disclosing methods and biases, including their own self-interests about the research they cite as gospel.

That's why, a while back, I asked another well-regarded industry researcher, Gabe Samuels (Advertising Research Foundation, J. Walter Thompson, etc.) to help MediaPost craft a disclaimer for our Research Brief newsletters.

It reads, "We use the term research in the broadest possible sense. We do not perform an audit, nor do we analyze the data for accuracy or reliability. Our intention is to inform you of the existence of research materials and so we present reports as they are presented to us. The only requirements we impose are that they are potentially useful and relevant to our readers and that they pass the rudimentary test of relying on acceptable industry standards. We explicitly do not take responsibility for the findings. Please be aware of this and check the source for yourself if you intend to rely on any of the data we present." Good words to live by.

Happy holidays.

8 comments about "What Really Embarrasses Me About The Coverage Of TV Research".
  1. David Kleeman from PlayCollective, December 23, 2010 at 4:19 p.m.

    For a further perspective on this, particularly in the realm of children and media, see my Huffington Post commentary:

    http://www.huffingtonpost.com/david-kleeman/screen-time-screeds----wh_b_658738.html

  2. Doug Garnett from Atomic Direct, December 23, 2010 at 6:29 p.m.

    Joe -

    A superb note - a thorough and honest discussion. I'd be interested in your thoughts about a trend which seems to challenge journalistic integrity.

    Over the past decade and a half, the growth of new media has been fueled by what I've come to term "bandwagon research". This is research designed to show enthusiasm for a new media. And that enthusiasm causes a larger group to "jump on the bandwagon".

    The reason this research is concerning is that the results can be created with PR. Careful dissemination of articles and content focused on creating that enthusiasm result in - surprise, surprise - increased enthusiasm.

    Then, when research follows to measure that enthusiasm, it is released with major headlines that are picked up by the journalists reporting that the new media is liked - giving the appearance of meaning. But this type of research is always without foundation for showing the new media to be useful or effective.

    This type of research created the panic over DVRs destroying TV advertising. Except that didn't happen, and TV ads remain as effective - or more so - even with high DVR penetration. And, over the past year, I've seen this about nearly every new media as well as from the promotional item industry.

    Political parties have been using this type of research as well. Nate Silver's blog analyzing research offers nice counterpoint - showing where slight wording in questions creates certain answers that benefit the parties funding the research.

    Here's my question: As a journalist, how well do you think journalists detect this deceptive practice? I'm not inside with you - so I don't have that perspective. I just see the headlines and wonder how that research was picked up.

    Regards,

    Doug Garnett

  3. Joe Mandese from MediaPost, December 23, 2010 at 10:57 p.m.

    Thanks, Doug. Thoughtful response, and important questions. I'm not sure I know all the answers, but I will tell you what I think.

    First of all, I do think there are questions about journalistic integrity, but I think the bigger issue is journalistic competence. I’m sure some reporters willfully hype the kind of bandwagon research you are talking about, but I think the real problem is that many journalists don’t have the faculties necessary to evaluate research, and report on it because it is simply newsworthy.

    That said, I think the validity of all industry research is relative, and I’ve also seen people jump on bandwagons for research claiming to prove the efficacy of old media, and debunking the vitality of new media. There is a significant amount of money at stake in research that preserves the status quo of the media industry, at least on Madison Avenue, if not on Wall Street. The amount of money spent on advertising in traditional media far outstrips what is spent on new media. And I’m pretty sure the amount of research dollars invested in proving the efficacy of traditional media still is greater than new media. But that’s a guess, and I’d need to conduct some research to prove that.

  4. Douglas Ferguson from College of Charleston, December 24, 2010 at 2:02 p.m.

    I have a modest proposal. Why not compile a list of research mavens of whom you can ask a simple question: does this latest research news pass the smell test? Better yet, offer an internship to at least one college student whose sole qualification is that they got an "A" in media research methods. It might be better than using the "we-don't-really-understand-research" excuse. (At least you're not peddling over-the-counter medications.)

  5. Doug Garnett from Atomic Direct, December 26, 2010 at 2:43 p.m.

    Joe -

    Excellent point about the old media money. What I've observed from a reader's perspective (reading many pubs) is that the new media research is so fresh of a story that it's quickly picked up. (Much more dramatic to say "old media is going away" than to say "old media continues to be really productive".)

    And, I've quickly ignored many of the old media studies because they are so painfully defensive in nature. Why can't the 4A's be more insightful?

    There have been a few studies that catch my attention - like one that observed consumers in their homes and suggested that the place the internet is taking in consumer lives is very much like replacing radio's role. A very interesting theory.

    Sadly, I find that many editors and writers lack your dedication to reporting the research with integrity (it's a pleasure to see the wide range you report).

    And, as you point out, journalists aren't trained to analyze research. Once saw a study in the 1990s that headlined "Sugar Doesn't Affect Kids at Night". And, it was blasted out through all the media - local papers, national TV news, etc. Except, the conclusion was based on anecdotal evidence from 40 subjects - entirely unreliable statistically. Asked a journalist friend of mine about it. She observed that neither she nor her colleagues had the research training to detect anything like that. So, they report what the study sponsors report.

    The "Truth Meter" work that we're seeing in politics might be a nice addition to health reporting as well as new media reporting.

    Thanks for the thoughts. You and the team at Media Post do great work.

  6. Ngoc T from Iowa, December 27, 2010 at 12:52 p.m.

    Excellent! This should be mandatory reading for everyone in J school, students and teachers alike.

  7. Joe Mandese from MediaPost, December 28, 2010 at 8:42 a.m.

    Doug (Garnett),

    You are right about the nature of news, and it is a bigger issue than just covering research and research methods. It relates to everything the press covers.

    I think the word "news" itself reflects a bias of sorts. Something has to be "new" to be news, and if it's old, it is not news. At least, that's how some people perceive it.

    Beyond that, I believe many reporters, editors, and readers, weight the importance of news based on relative "news values." One of the biggest news values is consequence (news about something that could have consequences for the reader), especially when covering businesses. And something that maintains the status quo (traditional media businesses operating as usual) is not perceived as having as much consequence as something that changes how people live or work (new media challenging the business models of traditional media).

    Obviously, there are many other important news values, but consequence -- telling someone something that might have an immediate bearing on how they live or work -- has to be pretty high up there.

    It's important for a journalist to tell a reader why a story is relevant to them, and how it may affect their lives. And stories that have those values generally carry more weight than those that do not. That's just human nature.

    As for the truth, well, I think the reader plays a role in that too. And these days, readers can let others know exactly what they think the truth is pretty quickly, and just as publicly as any journalist can. In fact, I think you just did.

    Joe

  8. Doug Garnett from Atomic Direct, December 28, 2010 at 5:28 p.m.

    Joe -

    Thanks for offering your views on this. It's difficult from the outside to understand the ways story thoughts & ideas are processed as an editor searches out the right reporting. You make tremendous sense and I appreciate the insights...

    ...Doug