Commentary

Confessions Of A Research Industry Journalist

Here they go. First, and perhaps most painfully, I'd like to confess that I've been covering media and marketing research for nearly 30 years. And no, I didn't begin covering it in grade school.

Second, I have to admit that even after all that time, I really don't know much about research. I write about research findings, research methods, research providers and research users, and how the use of research influences the advertising and media marketplace. But I am not a researcher. I wouldn't know how to calculate a "standard deviation." And I don't know what "regression to the mean" means.

Why am I telling you this? More importantly, why should you care?

Let me answer the first part first. I'm telling you this because periodically, people question -- make that chastise -- what I, other reporters at MediaPost, and even other journalists in our trade, write about research. They criticize how we cover research findings based on methods or sample designs that may not meet accepted industry standards. They ask why we don't probe research practices and expose the researchers who use what they deem to be inferior practices.

Fair questions, to be sure. The answer: Most journalists, myself included, lack the technical knowledge to vet the quality of marketing and media research. We report on the findings and simply relay what the researchers said. We leave it up to readers to judge the validity of the findings. We also leave it up to knowledgeable sources to tell us when there's a problem, and thankfully they occasionally do, giving me a great deal to write about over the past quarter century.

Don't get me wrong. I'm not a complete ignoramus. I know some things about research. Back when I was breaking into the business as a cub reporter covering media at Adweek, I even enrolled in McCann-Erickson's media training program. I did this at the behest of the agency's then media director, Gordon Link, who caught me flubbing some of his media math and figured he could do the industry a great service by having me trained. The problem is that I only know so much, and research can be a complicated business. It is a technical art and an artistic science that requires dedication to craft. It is not what we do as journalists. What we do is talk to the people who have that knowledge and apply those skills.

As for the second question: you should care because what the trade press reports about research goes a long way toward shaping perceptions about the quality and utility of research. That's especially important in the media industry, where research isn't simply theory, or even applied science, but often functions as a market currency, like Nielsen TV ratings. It's important in the marketing research field too, because how we cover research can influence perceptions of industry best practices, which in turn can lead to the success or failure of products, or even entire categories or markets.

I'm not telling you this out of some grandiose sense of self-importance. I'm telling you this because it is what people whom I respect have told me while simultaneously chastising me for the way we cover research. Among the chief chastisers have been Tony Jarvis, the global head of research for Clear Channel Outdoor; and Gabe Samuels, the former research chief at the Advertising Research Foundation, and a long-time Madison Avenue media maven. There have been others too, but nary a research report goes by without some irate missive from Tony or Gabe about the shoddy nature of the industry press coverage - usually MediaPost's - of some research finding or development. They've done this not necessarily because they like saying "Gotcha!," but because they genuinely care about how our industry thinks about the quality of research. They care so much that Tony and Gabe collaborated to draft a disclaimer, which MediaPost will begin adding to our daily Research Brief reports.

The disclaimer reads: "We use the term research in the broadest possible sense. We do not perform an audit, nor do we analyze the data for accuracy or reliability. Our intention is to inform you of the existence of research materials and so we present reports as they are presented to us. The only requirements we impose are that they are potentially useful and relevant to our readers and that they pass the rudimentary test of relying on acceptable industry standards. We explicitly do not take responsibility for the findings. Please be aware of this and check the source for yourself if you intend to rely on any of the data we present."

Of course, that only goes so far. To help close the rest of the trade press' research gap, I encourage our readers to weigh in however and whenever they want. You can write me directly at joe@mediapost.com, or you can begin posting comments to the Online Metrics Insider blog. For that matter, if you've got a juicy point of view or an opinion you want to share with our readers, I'd welcome you to submit a Metrics Insider column of your own.

So what does all this have to do with online metrics? Well, lately a lot of the research we and the rest of the trade press write about emanates from a variety of online survey and polling methods that some experts have found questionable. And even online audience measurement, which in theory should be free of questionable practices, has become subject to them. The Interactive Advertising Bureau has organized an industry task force to address that issue. The Advertising Research Foundation has formed one to take on the whole subject of online research methods. These are positive steps for an industry that relies on research to generate insights about consumers, products and markets, and to serve as the basis of media negotiations and advertising accountability. MediaPost, for one, looks forward to covering their outcomes. And if we don't get that right, please let us know.