Commentary

Polls Are Off In Predicting Election Results -- Can Marketers Trust Marketing Data?

The validity of research once again came into question following the 2020 U.S. presidential election. I’ve been wondering how research companies polling U.S. residents could have failed to predict a tighter outcome between President-elect Joe Biden and President Donald Trump.

Actually, I’ve been asking myself a similar question since the 2016 U.S. presidential election, when most research polls predicted Hillary Clinton would win by a landslide. How wrong they were. The race for the White House this year was closer than many would like to admit, with nearly 73 million U.S. residents voting for President Donald Trump vs. Biden’s 78 million.

While some continue to ask themselves how the election was much closer than polls suggested in several battleground states like Wisconsin and Ohio, I know why after living in middle America for the past two years.

The Pew Research Center late last week released commentary on how the 2020 election polls performed and what it might mean for surveys and studies.


The analysis suggests Democratic voters were more easily reachable, or more willing than Republican voters, to respond to surveys, and that routine statistical adjustments fell short of correcting for the problem.
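The "routine statistical adjustments" Pew refers to are typically post-stratification weighting: respondents are reweighted so each group's share of the sample matches its known share of the population. A minimal sketch of the idea, with purely illustrative party shares (not real survey data):

```python
# Post-stratification weighting: scale each respondent group so its
# weighted share in the sample matches its known population share.
# All figures below are illustrative, not real survey data.

population_share = {"Democrat": 0.33, "Republican": 0.33, "Independent": 0.34}
sample_share     = {"Democrat": 0.40, "Republican": 0.25, "Independent": 0.35}

# Weight for each group = population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Each Republican respondent now counts ~1.32x; each Democrat ~0.83x.
print(weights)
```

The catch Pew highlights is that weighting fixes how *many* Republicans are counted, not *which* Republicans responded. If the Republicans willing to answer differ systematically from those who are not, no amount of reweighting corrects the bias.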

The share of Republicans in survey samples was smaller, and skewed toward hard-core Trump supporters. One possible explanation, per Pew, is Republicans’ lack of trust in institutions like the news media -- which tend to sponsor much of the polling.

The Pew analysis highlights a major sticking point: If polls are systematically under-representing some types of conservatives or Republicans, what ramifications will this have for surveys that measure all kinds of behaviors, from views on the coronavirus pandemic to attitudes toward climate change?

Surveys typically fall short of real-world behavior when it comes to representing the full range of views.

Not all Trump supporters are honest about their support for him. If that’s the case, do independent thinkers participate in other polls? Are they willing to share their thoughts even when those thoughts differ from others'?

This creates a huge challenge in measuring attitudes -- not just when it comes to the presidential election, but in terms of how consumers feel about brands and their positions on issues.

Since surveys are used by marketers to understand a specific market, they are sometimes skewed toward that market. Some marketers use the data to validate their market before designing or producing a product. Some use it to determine competitive prices.

Don’t treat one survey as conclusive, but rather as a guideline to dig in even further.

Consider all types of behavior, and understand that people across the U.S. need to feel reassured that their opinions are taken seriously.

6 comments about "Polls Are Off In Predicting Election Results -- Can Marketers Trust Marketing Data?".
  1. Ed Papazian from Media Dynamics Inc, November 16, 2020 at 10:33 a.m.

    An interesting piece, Laurie. To give you one example to ponder, consider what the situation is for a person who is asked to join a TV rating panel, including all members of the household, for an indefinite term---but probably a long stint. It's a fair amount of work to keep indicating whether each person was "viewing" whenever a TV set is turned on and, thereafter, every time the channel is changed---day after day after day. Suppose you and other members of your family just love TV and watch a huge amount of TV programming, plus commercials. Are you more or less likely to cooperate in such a panel operation? Probably yes. But what if you regard most TV content to be of little consequence and rarely watch? Are you equally inclined to participate in a TV rating panel? Perhaps not. Now suppose that heavy viewers are over-represented in these studies by 10% while chronic light viewers are under-represented by 25%. Would that cause an inflation in the ratings and overall TV viewing stats? Yep. And the same issue applies to many other survey operations---especially those that rely on panels and, consequently, must contend with extremely low overall cooperation rates. If the subject is the purchase of margarine or detergents, perhaps there is no problem. But what if the activity involved is personally more relevant to the consumer and involves a highly opinionated mindset---what then?
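    Ed's hypothetical skew can be worked through as a quick back-of-envelope calculation. Every number below is illustrative (the 30%/70% split and the hours per day are invented to make the arithmetic concrete):

```python
# Back-of-envelope: what happens to average measured viewing if heavy
# viewers are over-represented by 10% and light viewers under-represented
# by 25%? All numbers are hypothetical, following Ed's example.

# Assumed true population: 30% heavy viewers (5 hrs/day), 70% light (1 hr/day).
true_heavy, true_light = 0.30, 0.70
hours_heavy, hours_light = 5.0, 1.0

true_avg = true_heavy * hours_heavy + true_light * hours_light  # 2.2 hrs/day

# Panel skew: heavy viewers x1.10, light viewers x0.75, then renormalize.
panel_heavy = true_heavy * 1.10   # 0.33
panel_light = true_light * 0.75   # 0.525
total = panel_heavy + panel_light
panel_avg = (panel_heavy * hours_heavy + panel_light * hours_light) / total

print(f"true avg: {true_avg:.2f} hrs/day, panel avg: {panel_avg:.2f} hrs/day")
```

    Under these assumptions the panel reports roughly 2.54 hours against a true 2.2 -- about a 15% inflation, even though both viewer types are present and could in principle be reweighted.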

  2. John Grono from GAP Research, November 16, 2020 at 3:37 p.m.

    Ed, what you say is true.

    But first, the panels are stratified (as a simple example, the proportion of one-person homes needs to reflect the population), which averts HH composition biases.

    Second, the actual 'tuning' is passive. Turn the TV on and the tuning is automatic.

    Your point about 'lazy' respondents is extremely important. For example, Little Johnny is mid-teens and now refuses to report in after pushing buttons for a month or so. Sure, Mum & Dad can push the button for Lazy Little Johnny in the family TV room, but you lose his viewing when he skulks off to his bedroom. But given that the panel is longitudinal, it quickly becomes apparent that not all of Johnny's viewing is being captured, so that HH will be replaced.

    It ain't perfect, but it is addressed and accounted for.

    And before you mention 'Attention' I acknowledge it is more like 'presence in the room'.

  3. Ed Papazian from Media Dynamics Inc, November 16, 2020 at 4 p.m.

    John, I'm not picking on Nielsen's TV ratings in particular, as my point applies to media surveys of all types when the might-be respondent knows immediately what the true purpose of the study is. Also, one can use various statistical weighting schemes to make the panel---or sample---resemble the known universe being measured; however, the underlying assumption is that whatever you got---of any demo group---or combinations of groups---mimics the whole group in its behavior---but is just a tad under- or over-represented. But what if the respondents you got---again, of any demo description---include too many heavier-than-normal users for that group? How do you account for that? One way to get at this---if anyone cares---would be for Nielsen to ask potential respondents to estimate how much time they typically devote to TV viewing in a day---or the last day, to make it a bit tighter. Then compare the results between those who co-erated and those who didn't. If they are the same then I'll shut up---if not---???

  4. Ed Papazian from Media Dynamics Inc, November 16, 2020 at 4:30 p.m.

    Make that "co-operated" not "co-erated" in the last sentence of my last post. If only we had an editing option. Sigh!

  5. John Grono from GAP Research, November 17, 2020 at 5:37 a.m.

    Understood Ed.

    One of the analyses we do is a 'determinants of viewing' analysis. Basically, it establishes the factors that are strongly correlated with heavy or light viewing. For example, a 'cell' may be ... a 4+ person HH with 3+ TVs, 2 kids under 18, and 3+ subscriptions. When you recruit, you pay less attention to things like age and gender.

  6. bob hoffman from type a group, November 17, 2020 at 11:09 a.m.

    Everyone in marketing needs to read "Everybody Lies" by Seth Stephens-Davidowitz.
