So the polls got it wrong. Very wrong. And that got me thinking…
There are already some initial postmortems on the “why” of the epic pollster fail, 2020
edition. What do they reveal?
In 2016, the consensus (per the New York Times) was that poll data underrepresented non-college-educated voters, and they voted Trump. When
the pollsters corrected for this omission, they found their 2016 Trump numbers had been understated by 4 points, which to a large degree explained their “miss.”
The Times concludes that in 2020, the issue was perhaps not underrepresentation of any voter groups, but rather that certain groups simply did not want to participate in polls, while
others were over-eager.
One group of non-participants is the oft-mentioned “hidden Trump voters”: people who vote Trump without disclosing this to most
anybody else. And there is an assumption that many other Trump voters simply did not want to participate, because the polls came from the “fake news” outlets they distrust so
much.
Then there is the “resistance” group, i.e., people who felt strongly anti-Trump. These people were so vocal in their anti-Trump feelings that they made
their voices heard at every opportunity they got. When the pollsters came a-knocking, they were super-motivated to participate, while their counterparts were far less inclined to do so. So one
side got overrepresented, and the other underrepresented.
Kent Harrington over at MediaVillage explains that “with smart phones that the law says can't be
autodialed, caller ID that makes pollsters easy to ignore, and Millennials and Gen Zs who would rather text than talk, the good old days of easy outreach to the man-in-the-street are long
gone.”
It is perhaps a tad ironic that with the increase of technology and data, the accuracy of our polling data is decreasing. Still, in that respect it follows the same
rule as my “Law of Marketing Data and Understanding,” which I shared
in 2014: In the case of political polling, with each increase in technology and data, we lose an equal amount of actual voter insight and understanding.
And that got me
thinking about consumer research. I’m not referring to the data tracking that happens via cash registers, credit card readers, online transactions, etc. Those are real, perhaps comparable to what
happens on election day in the voting booth. What I am referring to are research efforts into consumer perception, consumer motivation, etc., which depend on respondents voicing their
opinions.
I can totally imagine there are banana haters out there, ready to be vocal about their disgust at the mere mention of bananas as an acceptable food or snack. I
myself am such a person (#bananasareevil -- Google it!). So if someone were to call on me for my opinion on bananas, I am ready and primed to give them a solid piece of my mind. But what about the quiet
banana lovers out there? Would they come out completely underrepresented in the research?
It is a known fact that researchers are finding it harder and harder to recruit
people for actual in-person opinionating. The alternative is to use electronic surveys made available on the device of choice. Given how that worked out for polling, I am now wondering if we should
rethink consumer research -- just as we’re rethinking polling.