Commentary

How Many Polls Does It Take To Screw In A Lightbulb?

If the biggest winner of the 2020 U.S. presidential election was American democracy, the biggest loser was political polling -- and, by extension, survey-based consumer research.

If presidential campaigns are arguably the highest-stakes form of marketing, then polling is the highest-stakes form of marketing research. And while caveats abound, it largely failed, raising questions about how good polling is not just for political campaigns, but for any form of marketing.

What did perform well were forecasting models, especially Ipsos’, which predicted the outcome of the election with relative precision.

That’s because the Ipsos method factors in a variety of variables, including national, state and local polls, as well as more predictable signals, such as the key issues on voters' minds heading into the election. So it might be a good model for marketers in other categories to follow, too.
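Neither Ipsos nor this article spells out the model's actual formula, so the following is only a hypothetical sketch of the general approach described above: blending poll averages at more than one level with a non-poll signal such as issue salience. The function, weights and inputs are all invented for illustration, not Ipsos' methodology:

```python
# Hypothetical illustration only: a toy forecast that blends poll averages
# at different levels with a non-poll "fundamentals" signal, in the spirit
# of (but not identical to) multi-input models like Ipsos'.

def blended_forecast(national_polls, state_polls, fundamentals_score,
                     weights=(0.45, 0.35, 0.20)):
    """Return a blended two-party vote-share estimate for a candidate.

    national_polls:     list of national vote-share readings (0..1)
    state_polls:        list of swing-state vote-share readings (0..1)
    fundamentals_score: non-poll signal (e.g., issue salience), 0..1
    weights:            how much each input counts (must sum to 1)
    """
    w_nat, w_state, w_fund = weights
    nat_avg = sum(national_polls) / len(national_polls)
    state_avg = sum(state_polls) / len(state_polls)
    return w_nat * nat_avg + w_state * state_avg + w_fund * fundamentals_score

# Made-up inputs for illustration:
print(blended_forecast(
    national_polls=[0.52, 0.51, 0.53],
    state_polls=[0.49, 0.50, 0.51],
    fundamentals_score=0.50,
))  # ~0.51
```

The point isn't the specific weights; it's that a model can lean on slower-moving signals, such as issue salience, that are less exposed to the response biases discussed below.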


While political pollsters said they adjusted for under-sampling of key constituencies -- mostly non-college-educated White men -- in the 2020 election, what they could not control for was the degree to which people either give false, politically correct responses to researchers or simply lie, possibly to themselves.

On Friday, the Pew Research Center released a good post-election analysis of what went wrong, breaking it down into four broad categories, each of which has different ramifications for the polling industry in particular, but likely also for many forms of survey-based market research:

Partisan nonresponse: According to this theory, Democratic voters were more easily reachable and/or just more willing than Republican voters to respond to surveys, and routine statistical adjustments fell short in correcting for the problem. (A minimal sketch of what those adjustments involve, and where they can fall short, follows this list.) A variant of this: The overall share of Republicans in survey samples was roughly correct, but the samples underrepresented the most hard-core Trump supporters in the party. One possible corollary of this theory is that Republicans’ widespread lack of trust in institutions like the news media – which sponsors a great deal of polling – led some people to not want to participate in polls.


‘Shy Trump’ voters: According to this theory, not all poll respondents who supported Trump may have been honest about their support for him, either out of some sort of concern about being criticized for backing the president or simply a desire to mislead. Considerable research, including by Pew Research Center, has failed to turn up much evidence for this idea, but it remains plausible.


Turnout error A – Underestimating enthusiasm for Trump: Election polls, as opposed to issue polling, have an extra hurdle to clear in their attempt to be accurate: They have to predict which respondents are actually going to cast a ballot and then measure the race only among this subset of “likely voters.” Under this theory, it’s possible that the traditional “likely voter screens” that pollsters use just didn’t work as a way to measure Trump voters’ enthusiasm to turn out for their candidate. In this case, surveys may have had enough Trump voters in their samples, but not counted enough of them as likely voters.


Turnout error B – The pandemic effect: The once-in-a-generation coronavirus pandemic dramatically altered how people intended to vote, with Democrats disproportionately concerned about the virus and using early voting (either by mail or in person) and Republicans more likely to vote in person on Election Day itself. In such an unusual year – with so many people voting early for the first time and some states changing their procedures – it’s possible that some Democrats who thought they had, or would, cast a ballot did not successfully do so. A related point is that Trump and the Republican Party conducted a more traditional get-out-the-vote effort in the campaign’s final weeks, with large rallies and door-to-door canvassing. These may have further confounded likely voter models.
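On the first theory above: "routine statistical adjustments" generally means weighting respondents so the sample matches known population shares on traits like education. A minimal sketch (the population shares, sample and variable names are all illustrative assumptions, not real census or Pew figures) shows both the mechanics and the blind spot:

```python
# Minimal sketch of survey weighting (post-stratification on one variable).
# Population shares below are illustrative, not real census figures.

POPULATION_SHARE = {"college": 0.40, "non_college": 0.60}

def weight_respondents(respondents):
    """Attach a weight to each respondent so weighted group shares
    match POPULATION_SHARE. respondents: list of dicts with an 'edu' key."""
    n = len(respondents)
    counts = {}
    for r in respondents:
        counts[r["edu"]] = counts.get(r["edu"], 0) + 1
    for r in respondents:
        sample_share = counts[r["edu"]] / n
        r["weight"] = POPULATION_SHARE[r["edu"]] / sample_share
    return respondents

# A sample that over-represents college graduates (70% vs. 40% assumed above):
sample = [{"edu": "college"} for _ in range(70)] + \
         [{"edu": "non_college"} for _ in range(30)]
weighted = weight_respondents(sample)
print(weighted[0]["weight"])   # ~0.57: each college respondent counts less
print(weighted[-1]["weight"])  # 2.0: each non-college respondent counts more

# The blind spot: if the non-college adults who answered the phone lean
# differently than the non-college adults who refused, no weight on 'edu'
# can correct that. That is the partisan-nonresponse problem.
```

The final comment in the sketch is the crux: weighting corrects for who is missing from the sample, not for why they are missing.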

Pew said it plans to conduct a review of its own polling methodology, as well as an analysis of political polling overall, to understand what went wrong and how to fix it. But the real lesson here might be that polls aren’t a very effective predictor of what people will actually do, and that good modeling may be a better way to go.

8 comments about "How Many Polls Does It Take To Screw In A Lightbulb?".
  1. Ed Papazian from Media Dynamics Inc, November 16, 2020 at 9:30 a.m.

    Joe, as you probably know, I offered some of the same points about the political polls in a recent post on MP. However, what is still perplexing is the fact that the national polls came pretty close to predicting the vote---as opposed to the electoral college outcome---and this was also true in 2016. However, the state polls---in so-called "swing states"---were off to a disturbing degree. This suggests that lying and refusing to cooperate---by Trumpists---was more pronounced in those areas because the voters knew how important their votes might be. I wonder how accurate the same kinds of polls were in states where there was no doubt about the winner---like New York, California, Oklahoma or Idaho? If these were highly accurate, one might conclude that this was mainly a "swing state" issue. If not, it indicates that serious rethinking of their methodologies by the pollsters should be a top priority.

  2. Paula Lynn from Who Else Unlimited replied, November 16, 2020 at 11:45 a.m.

    How many people will not admit they are racist or support fascism when they vote for someone who is? They are in the mix.

  3. Craig Mcdaniel from Sweepstakes Today LLC, November 16, 2020 at 12:12 p.m.

    There is a "C" answer.  This is people are burned out over the past 10 years of the number of spam telephone calls. Everyone hates them and are taking measures to block calls.  Things are so back that even legit callers are having problems getting through. In short, polls have been pegged amoung the worst or the worst no matter the good intentions.

  4. Douglas Ferguson from College of Charleston, November 16, 2020 at 12:49 p.m.

    When pollsters overestimate the outcomes of two consecutive Presidential elections by such a huge margin, one wonders whether polling is a form of vote suppression.  I wonder how many undecided voters stayed home when they were convinced by the polls that the candidate to whom they leaned simply could not win. It is fitting that the same suppression that elected Trump also may have proved his undoing. Mission accomplished.

  5. Ed Papazian from Media Dynamics Inc, November 16, 2020 at 4:51 p.m.

    Douglas, if the pollsters did it intentionally, I'd agree with you that it's a form of voter suppression---but I doubt that that's the case, as they would be drummed out of the business if it became known that they slanted their studies in this manner. For example, the Fox News Channel is an ardent supporter of Trump, yet many of its polls showed him in a worse light than those sponsored by "liberal media". Wouldn't you expect the opposite---if they were engaged in voter suppression? More likely we are seeing an unusual situation, caused by the hatred and fear that has been generated in a divided country playing out---to the chagrin of mostly honest pollsters.

  6. Craig Mcdaniel from Sweepstakes Today LLC, November 16, 2020 at 5:01 p.m.

    There is one other issue with pollsters and their callers. Most work on an hourly basis. However, some work on hourly pay plus a bonus per completed survey. Last is all-commission surveys. Hourly is best, but the numbers can easily be corrupted.

  7. Joshua Chasin from VideoAmp, November 17, 2020 at 11:01 a.m.

    It's been a long time since I've come upon a "shy" Trump voter. 

  8. Joe Mandese from MediaPost Inc., November 17, 2020 at 11:27 a.m.

    @Josh Chasin: That is the best line ever!
