Commentary

Put Your Research Data to the Test!

Chances are, whether you're involved in media, marketing, sales, research or all four disciplines, you probably rely on market research data to help you make strategic decisions, or just as likely, to prove your case to someone else - whether that be a client, prospect, your boss or your boss's boss. But, no matter what your use of research information, the integrity of the data itself is of vital importance.

If it's not good data, you run the risk of either making a bad decision or losing credibility with the very people you're trying to persuade. Everyone knows that data can lie, and statistics can be used to prove almost any point (as the son of an economist, I can personally attest to this fact).

So, how do you know when research data is any good? And, as a corollary, why is it that researchers frequently come out with wildly different estimates and projections when they're supposedly measuring the same thing?

We've developed a four-step process for evaluating research information. It boils down to asking four key questions every time a new piece of research data comes out.


The first question you should ask is: "Who is the source, and how reliable has their data been in the past?" We've been tracking Internet and e-business research data since 1996, and frankly, not all research firms are created equal. Moreover, some are better at measuring and projecting activity in some areas than in others. Finally, watch out for bias, including studies that are sponsored by interested third parties, such as a wireless survey sponsored by a leading telecom firm.

The second question is, "What specific definitions were used in the research?" Often, the greatest disparities between research firms' estimates are a result of different definitions being used. For example, while 10 leading research firms estimate the number of Americans online, the range runs from 113 million to 178 million. Now, that's what I would call a wide margin of error! However, by holding the definition to include: a) all age groups; b) access through all locations (i.e., work, home and school); and c) only active, weekly users, there is remarkable convergence in the numbers. Using this definition, both of the leading panel measurement firms, comScore Media Metrix and Nielsen//NetRatings, as well as Harris Interactive, converge on a figure of 140 million online Americans, give or take a million or two.

The third question is, "What methodology was used to produce the results?" An online poll of 32 executives, for instance, is not going to yield results that are as definitive and actionable as a survey among 4,000 respondents in a random-digit-dialed survey over the phone. It's also a good idea to ask related questions, like, "What was the margin of error?" and "How were the respondents recruited and what incentives were used?"
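The effect of sample size on precision can be sketched with the standard 95%-confidence margin-of-error formula for a proportion. This is a simplified illustration only; real surveys also involve design effects, weighting and non-response, which this back-of-the-envelope calculation ignores:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95%-confidence margin of error for a proportion.

    Uses the worst case p = 0.5. Ignores design effects, weighting and
    non-response, which real survey methodologies must account for.
    """
    return z * math.sqrt(p * (1 - p) / n)

# An online poll of 32 executives vs. a 4,000-respondent RDD phone survey
print(f"n=32:   +/- {margin_of_error(32):.1%}")    # roughly +/- 17%
print(f"n=4000: +/- {margin_of_error(4000):.1%}")  # roughly +/- 1.5%
```

The point is stark: the small poll's results could swing by double digits purely from sampling error, while the large random-digit-dialed survey pins its estimates down to within a couple of percentage points.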

The fourth and final question is our favorite, "How do the results of this new research data compare with the existing body of evidence?" In other words, don't take just one piece of research and go to market with that. It's worth your while to check and see if there's other data out there - from different sources - that either supports, or contradicts, the information you're looking at. If there is a high degree of convergence between the new data and all the historical data from other sources, you can have greater confidence in the results and you are minimizing your risk of making a wrong decision.

As an example, look at broadband projections in the US. Our data show that nine out of eleven researchers agree there are roughly 16 million to 17 million households with broadband connections in 2002. What's more, all researchers are predicting at least 33% growth for 2003. That high degree of convergence should give marketers and other industry watchers great confidence that broadband has indeed reached critical mass and that it will continue to grow rapidly.
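One simple way to quantify "convergence" across sources is to check whether every estimate falls within some tolerance of the group's median. The threshold and the specific figures below are hypothetical illustrations, not eMarketer's actual method or data:

```python
def converges(estimates, tolerance=0.10):
    """True if every estimate lies within +/- tolerance of the median."""
    s = sorted(estimates)
    mid = len(s) // 2
    median = s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
    return all(abs(e - median) / median <= tolerance for e in estimates)

# Hypothetical broadband-household estimates, in millions, consistent
# with a tight 16-17 million cluster
broadband_2002 = [16.0, 16.3, 16.5, 16.8, 17.0]

# A wide spread, like online-population estimates before definitions align
online_americans = [113, 130, 140, 155, 178]

print(converges(broadband_2002))    # True: tight cluster, high confidence
print(converges(online_americans))  # False: definitions likely differ
```

When the check fails, that is the cue to dig into definitions and methodology before trusting any single number.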

If, on the other hand, new research data comes out that contradicts the existing body of evidence, you have to wonder. Is it possible that the market has changed? It is just as likely, though, that the methodology was flawed, or that the researcher is using a different definition of measurement.

And when it comes to future projections, the research data is only as good as the assumptions that went into it. If the researcher doesn't spell those assumptions out, then you have to take the data with a huge grain of salt. If all the Internet predictions made back in 1998 - 2000 had been realized by today, the economy wouldn't be in the dumps, the Dow would be cruising along and NASDAQ would be soaring.

The bottom line: a careful evaluation and weighting of multiple sources usually leads to a more accurate picture than any single source could possibly provide. After all, if you were deciding whether or not to go to the beach tomorrow, would you rather rely on the advice of a single weatherperson, or that of a dozen?

Geoffrey Ramsey is CEO and co-founder of New York-based eMarketer, a firm which aggregates and analyzes research information and statistics related to the Internet, e-business and online marketing.
