One of the obstacles many in the industry saw standing in the way of this migration of general market advertisers to the web was a dearth of research. It seemed there just wasn't enough of the good stuff to convince old-school marketers to make a larger commitment to the web as an advertising medium.
So the industry got down to business and started producing research.
This research has come in all shapes and sizes, from basic counting like the audience traffic measurement from NetRatings and comScore, to the more Byzantine proclamations from Forrester, to the downright confounding, mythic astrological readings from the Oracle of Delphi hidden somewhere inside the offices of Jupiter.
By now, enough research demonstrating the value of online advertising has been produced to re-sod a football field.
But guess what?
It’s been brought to my attention recently that some people don’t believe the research. That’s right, they don’t trust it.
Someone posted to a discussion list I belong to that they have a client who does not believe any research other than what comes out of MRI or Nielsen.
Some of this could be attributed to genuine numerical chicanery and playing 3-card Monte with results. But to simply dismiss findings out of hand because they don't come from an old stand-by is hardly a good reason for doubt.
This seems to me a bit unfair, and a little naïve. Nielsen has, over the years, proven itself, if for no other reason than by virtue of its durability. But does anyone really believe that 5,000 people can be the basis for projecting the viewing habits of over 100 million households?
I suspect that if we compared the sample sizes available offline to what can be had online, the comparison would be embarrassing.
When I asked this question, a consultant acquaintance of mine in San Francisco gave me a brief run-down of why something like Nielsen or a newspaper poll works despite its relatively small size. It turns out that what matters isn't necessarily the size of the sample, but how representative its composition is of the population you are researching.
If I'm asking you a yes-or-no question, that seems reasonable. But my gut tells me that the more attributes, behaviors, decisions, etc. one is trying to project, the larger the sample group needs to be. I wonder if there is a difference between projecting whether people will pull lever A vs. lever B and knowing which of 60 channels they watch and what brand of pickle they eat.
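To be fair to the consultant's point, here is a rough back-of-the-envelope sketch (mine, not his) of why the yes-or-no case holds up: the standard margin of error for a sample proportion depends on the size of the sample, not on the size of the population being projected.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# A yes-or-no question, worst case (a 50/50 split), asked of 5,000 people:
n = 5000
moe = margin_of_error(0.5, n)
print(f"n = {n:,}: +/- {moe:.1%}")  # roughly +/- 1.4 points

# Note that the 100 million households never enter the formula; for a
# well-drawn sample, precision is driven by n, not by population size.
```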
I'm no statistician or mathematician or even a research guy -- though I like all those things. And I would agree that the proper demo mix of the sample base is, to some degree, more important than the size of the sample. But it is my feeling that, when measuring multiple modes of behavior, the more there are, the more the size of the sample matters.
What I mean to say is this: if all I'm trying to find out is whether someone will choose A or B, then a representative sample of the universe is sufficient. But if I'm trying to find out whether someone will choose A, B, C, D, E, F, G, H, I, or J, then a larger sample becomes necessary, because the statistical viability of projections for each of those choices depends on how many respondents fall into each one.
So, when I had only 3 choices for TV, maybe a 5,000-person sample base was adequate. But when I'm talking about 60 choices for TV, perhaps that 5,000 isn't enough.
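Here is a rough sketch of that hunch, using made-up shares rather than anyone's actual ratings: with 60 channels, the slice of the sample behind any one channel shrinks, and the margin of error on a small-share channel gets large relative to the share itself.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

n = 5000

# Three-network era: each network pulls roughly a third of viewers.
share = 1 / 3
moe = margin_of_error(share, n)
print(f"3 channels: ~{share:.0%} share, +/- {moe:.1%} "
      f"(about {moe / share:.0%} of the share itself)")

# 60-channel era: a niche channel might pull a 1% share (my assumption).
share = 0.01
moe = margin_of_error(share, n)
print(f"60 channels: {share:.0%} share, +/- {moe:.1%} "
      f"(about {moe / share:.0%} of the share itself)")
```

Whether a relative error of roughly a quarter of the measured share is acceptable is a judgment call, but it at least puts some arithmetic behind the gut feeling.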
Since I don't know anything about conducting real research, this is all just conjecture based on lonely nights thinking about media research. If a real research person can confirm or deny, it would be great to hear from you.
Until then, this is the argument I’ll be using with the non-believers.