If the biggest winner of the 2020 U.S. presidential election was American democracy, the biggest loser was political polling and, by extension, survey-based consumer research.
If presidential campaigns are arguably the highest-stakes form of marketing, then polling is the highest-stakes form of marketing research. And while caveats abound, it largely failed, raising questions about how good polling is not just for political campaigns, but for any form of marketing.
What did perform well were forecasting models, especially Ipsos’, which predicted the outcome of the election with relative precision.
That’s because the Ipsos method factors in a variety of variables, including national, state and local polls, as well as more predictable inputs, such as the key issues on voters' minds heading into the election. So it might be a good model for marketers in other categories to follow, too.
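Ipsos hasn't published the exact formula behind that forecast, but the general shape of such models is simple: average many polls, weighting each by sample size and recency, then blend in non-poll signals. The sketch below is purely illustrative; the poll numbers, the square-root-of-sample-size weighting, the seven-day half-life and the 70/30 blend are all assumptions, not Ipsos' actual parameters.

```python
import math

# Hypothetical poll readings: (candidate_share, sample_size, days_old).
polls = [
    (0.52, 1000, 3),   # national poll
    (0.49, 600, 7),    # state poll
    (0.51, 400, 10),   # local poll
]

def poll_average(polls, half_life_days=7):
    """Weight each poll by sqrt(sample size) and recency (exponential decay)."""
    num = den = 0.0
    for share, n, age in polls:
        w = math.sqrt(n) * 0.5 ** (age / half_life_days)
        num += w * share
        den += w
    return num / den

# A non-poll "fundamentals" signal (issue salience, economic indicators, etc.),
# expressed on the same 0-1 share scale; 0.50 and the 70/30 blend are arbitrary.
fundamentals = 0.50
forecast = 0.7 * poll_average(polls) + 0.3 * fundamentals
print(f"Blended forecast share: {forecast:.3f}")
```

The point of the blend is that the non-poll term keeps the forecast anchored even when the polls themselves are systematically off.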
While political pollsters said they adjusted for under-sampling of key constituencies -- mostly non-college-educated White men -- in the 2020 election, what they could not control for was the degree to which people either give false, politically correct responses to researchers or simply lie, possibly to themselves.
On Friday, the Pew Research Center released a good post-election analysis of what went wrong, breaking it down into roughly four categories, each of which has different ramifications for the polling industry in particular, but likely also more broadly for many forms of survey-based market research:
Partisan nonresponse: According to this theory, Democratic voters were more easily reachable and/or just more willing than Republican voters to respond to surveys, and routine statistical adjustments fell short in correcting for the problem. A variant of this: The overall share of Republicans in survey samples was roughly correct, but the samples underrepresented the most hard-core Trump supporters in the party. One possible corollary of this theory is that Republicans’ widespread lack of trust in institutions like the news media – which sponsors a great deal of polling – led some people to not want to participate in polls. (A minimal sketch of what those weighting adjustments look like appears after this list.)
‘Shy Trump’ voters: According to this theory, not all poll respondents who supported Trump may have been honest about their support for him, either out of concern about being criticized for backing the president or simply out of a desire to mislead. Considerable research, including by Pew Research Center, has failed to turn up much evidence for this idea, but it remains plausible.
Turnout error A – Underestimating enthusiasm for Trump: Election polls, as opposed to issue polling, have an extra hurdle to clear in their attempt to be accurate: They have to predict which respondents are actually going to cast a ballot and then measure the race only among this subset of “likely voters.” Under this theory, it’s possible that the traditional “likely voter screens” that pollsters use just didn’t work as a way to measure Trump voters’ enthusiasm to turn out for their candidate. In this case, surveys may have had enough Trump voters in their samples, but not counted enough of them as likely voters. (A toy likely-voter screen is sketched after this list.)
Turnout error B – The pandemic effect: The once-in-a-generation coronavirus pandemic dramatically altered how people intended to vote, with Democrats disproportionately concerned about the virus and using early voting (either by mail or in person) and Republicans more likely to vote in person on Election Day itself. In such an unusual year – with so many people voting early for the first time and some states changing their procedures – it’s possible that some Democrats who thought they had, or would, cast a ballot did not successfully do so. A related point is that Trump and the Republican Party conducted a more traditional get-out-the-vote effort in the campaign’s final weeks, with large rallies and door-to-door canvassing. These may have further confounded likely voter models.
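The "routine statistical adjustments" in the first theory usually mean weighting the sample so it matches known population margins, often via raking (iterative proportional fitting). The sketch below uses made-up counts and margins for an education-by-party table; it shows the mechanics and also the limitation Pew describes: weighting can only match the margins you observe, so if the Trump supporters who refuse to answer differ from the Republicans who do respond, no reweighting of the observed cells corrects for it.

```python
import numpy as np

# Hypothetical sample counts by education (rows) and party ID (columns);
# non-college respondents and Republicans are both underrepresented here.
sample = np.array([[200.0, 150.0],    # college:     Dem, Rep
                   [120.0,  80.0]])   # non-college: Dem, Rep

# Known population margins (shares) for the same categories.
row_targets = np.array([0.40, 0.60])  # college, non-college
col_targets = np.array([0.48, 0.52])  # Dem, Rep

def rake(cells, rows, cols, iters=50):
    """Iterative proportional fitting: alternately rescale rows and columns
    until the weighted table matches both sets of target margins."""
    w = cells.copy()
    for _ in range(iters):
        w *= (rows * w.sum() / w.sum(axis=1))[:, None]
        w *= (cols * w.sum() / w.sum(axis=0))[None, :]
    return w

weighted = rake(sample, row_targets, col_targets)
print(weighted / weighted.sum())  # cell shares now match the target margins
```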
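The "likely voter screens" in the two turnout theories are typically a short battery of engagement questions rolled into an index, with only respondents above a cutoff counted in the horse-race number. Here is a toy version; the three indicator items, their weights and the cutoff of 2.0 are hypothetical, not any pollster's actual screen.

```python
# Hypothetical respondent records: candidate preference plus turnout indicators.
respondents = [
    {"candidate": "A", "enthusiasm": 9,  "voted_last": True,  "knows_place": True},
    {"candidate": "B", "enthusiasm": 6,  "voted_last": False, "knows_place": True},
    {"candidate": "B", "enthusiasm": 10, "voted_last": True,  "knows_place": False},
]

def turnout_score(r):
    """Toy engagement index: self-reported enthusiasm plus past-vote
    and knows-polling-place indicators."""
    return (r["enthusiasm"] / 10
            + (1 if r["voted_last"] else 0)
            + (1 if r["knows_place"] else 0))

# Only respondents at or above the cutoff count toward the horse-race number.
likely = [r for r in respondents if turnout_score(r) >= 2.0]
share_a = sum(r["candidate"] == "A" for r in likely) / len(likely)
print(f"Candidate A share among likely voters: {share_a:.2f}")
```

If the items or the cutoff systematically under-capture one candidate's voters, the published number can be wrong even when the raw sample itself is fine, which is exactly the failure mode these two theories describe.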
Pew said it plans to conduct a review of its own polling methodology and an analysis of overall political polling to understand what went wrong and how to fix it. But the real lesson here might be that polls aren't a very effective predictor of what people will actually do, and that good modeling may be a better way to go.