Maybe Online Research Isn't So Flawed After All

Is online research valid? Could it be even more valid than other traditional techniques?

The answer to that question lies partly in Nate Silver’s analysis in The New York Times of the accuracy of dozens of major polls predicting the outcome of the last presidential election. He put the most popular survey methods under the microscope: live interviewer surveys via landline and mobile phone, automated ‘robopolls,’ and online surveys.

This passage sums up the findings: “Among the nine polling firms that conducted their polls wholly or partially online, the average error in calling the election result was 2.1 percentage points. That compares with a 3.5-point error for polling firms that used live telephone interviewers, and 5.0 points for ‘robopolls’ that conducted their surveys by automated script. The traditional telephone polls had a slight Republican bias on the whole, while the robopolls often had a significant Republican bias. The online polls had little overall bias, however.”
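To make those figures concrete, here is a minimal sketch of how an average error and a signed partisan bias can be computed from a set of poll calls. The firm names, margins and final result below are invented for illustration; they are not Silver’s actual data.

```python
# Invented example: how "average error" and "bias" are computed.
actual_margin = 3.9  # final Democratic margin in percentage points (illustrative)

# Each firm's predicted margin; positive = Democratic lead (made-up numbers)
polls = {
    "OnlineFirmA": 3.0,
    "OnlineFirmB": 5.5,
    "PhoneFirmC": 0.5,
}

# Signed error for each firm: predicted margin minus actual margin
errors = {firm: margin - actual_margin for firm, margin in polls.items()}

# Average absolute error: how far off the calls were, regardless of direction
avg_error = sum(abs(e) for e in errors.values()) / len(errors)

# Average signed error: a negative value means the polls understated the
# Democratic margin on the whole, i.e. a Republican lean
avg_bias = sum(errors.values()) / len(errors)

print(f"average error: {avg_error:.1f} points, bias: {avg_bias:+.1f} points")
```

The distinction matters: a set of polls can have a small average error yet a consistent bias in one direction, which is exactly the pattern Silver found in the robopolls.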

How could this happen?

The market research industry has never been fast to innovate. That’s perhaps partly due to its loyalty to what’s “worked” in the past, and partly to a defensive reaction to new and unsanctioned alternatives. Whatever the case, online research pioneers have had to overcome serious skepticism over the past 15 years, and often still do.

Today, we live in a digital world, and online connectedness is the norm -- it’s the unsolicited telephone call to a panel respondent that now feels disruptive. Obama’s supporters skewed toward that connected norm, which is why telephone-based predictions underrepresented them. This analysis is not the first evidence of online research’s validity, but it forces skeptics to reconsider once again.

These findings prompt key questions as we look to the future of market research.

First, what becomes of panel selection? The ivory tower of traditional market research has often hung its hat on random digit dialing (RDD), in which blocks of telephone numbers are randomly called to build a representative research panel. The findings from this analysis suggest that RDD-based methods will need to evolve.
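For readers unfamiliar with the mechanics, a minimal sketch of RDD number generation might look like the following. The area codes and exchanges are placeholders, not a real sampling frame; the key idea is that randomizing the last digits reaches unlisted numbers as readily as listed ones.

```python
import random

def rdd_sample(blocks, n, seed=42):
    """Generate n phone numbers by random digit dialing.

    blocks: list of (area_code, exchange) pairs defining the sampling frame.
    The last four digits are drawn at random within each chosen block.
    """
    rng = random.Random(seed)
    numbers = []
    for _ in range(n):
        area, exchange = rng.choice(blocks)   # pick a block at random
        line = rng.randrange(10000)           # random last four digits
        numbers.append(f"{area}-{exchange}-{line:04d}")
    return numbers

# Placeholder blocks for illustration only
sample = rdd_sample([("212", "555"), ("415", "555")], n=5)
```

The assumption baked into RDD -- that a randomly dialed telephone reaches a representative cross-section of the population -- is precisely what the shift to cell-only and online-first households has eroded.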

Second, what do we actually mean by online polling? Telephone or mail polling means something specific: asking survey questions via those channels. Online is not a single channel, but a platform for many channels. With online, polling could take place via Skype, Facebook, desktop, mobile, browser, app, email, video or audio. I presume the most popular method today involves email, where existing or prospective panelists are solicited, qualified and presented with a hosted online form. Regardless, the channel matters, as do the questions. (I would also argue that Facebook -- with its one billion users, high engagement and rich profile data -- holds the most valuable panel and tools with which to poll and estimate outcomes.)

Third, polls and surveys typically rely on self-reported data. Self-reported survey data will always have a place in helping marketers, politicians and academics understand their world and build intelligence. But the advent of online analytics, passive behavioral analysis and (pardon the buzzword) big data is enabling promising new methods for predicting outcomes. Consider prediction markets: some were right on the money.

So here is my prediction: Demographics and psychographics will continue to shift, while new technologies and growing data sources will continue to disrupt how we communicate and observe. A competitive industry will endure, each player trying to out-predict the rest. However, the industry’s leaders in eight years (two presidential election cycles) will probably look radically different than they do today.

2 comments about "Maybe Online Research Isn't So Flawed After All".
  1. Pete Austin from Triggered Messaging , November 14, 2012 at 6:48 a.m.
    Very good point. But note that a lot of money is spent assessing voting intentions, so there's been lots of opportunity for online pollsters to "get the bugs out" by now. Doesn't mean that there will be similar accuracy in less tested subject areas.
  2. Nick D from ___ , November 26, 2012 at 9:24 a.m.
    "Maybe Online Research Isn't So Flawed After All"
    Thanks very much, nice of you to say so. From my point of view, as a career researcher, perhaps social marketing performance analysis isn't so flawed after all. I mean, after all, it's measuring mind-boggling quantities of chatter online, much of it meaningless. And the social measurement industry has always been too fast to innovate, regardless of the value of what it's doing, and without thinking of what it actually *means*.
    So here's my prediction: brands will continue to pay for social marketing analysis, while gradually gaining more understanding of what it's actually worth, before realising that they really need to rethink their approach to online, social and measurement, to understand what actually motivates consumers, not just what they say.