Election 2016 - How did the Media Get it Wrong?

November 23, 2016

An alternative title for this piece could be: “The media should have known better despite insufficient cautions from pollsters.” 

Yes, confidence levels for all the major polls were almost always reported.

Large, truly representative probability samples without significant non-response bias are rare if not impossible to achieve.  Only surveys with these rare properties can support the traditional margin of error calculation. 

Consequently, the margins of error attached to various polls were unfounded.  The media should have recognized and reported that statements such as “Results are +/- 3.5% accurate at the 95% confidence level” were specious on any such survey or poll. 
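For reference, the “traditional” margin of error the pollsters quote comes from the simple-random-sample formula, which is only valid under exactly the conditions the article says rarely hold. A minimal sketch (the function name and the 800-respondent example are illustrative, not from the article):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Classical margin of error for a simple random sample.

    p: observed proportion (0.5 gives the worst case)
    n: sample size
    z: z-score for the confidence level (1.96 ~ 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of ~800 respondents at p = 0.5:
moe = margin_of_error(0.5, 800)
print(round(moe * 100, 1))  # prints 3.5 (percentage points)
```

Note that nothing in this formula accounts for non-response bias, non-probability sampling, or “social desirability” misreporting; it quantifies sampling error only, which is the article's point.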

Add to this the hidden biases that must be accounted for in any survey approach, whether quantitative or qualitative.

As Ed Papazian of Media Dynamics Inc. reminded us in Media Post recently, based on well-established media consumption studies, an unknown percentage of respondents will misrepresent their actual behavior or mask their opinions so as to be more in step with what they perceive to be politically correct. 

Sometimes called the “social desirability” or “lying factor,” this aspect will subsequently confound almost any close poll result.  We suspect this bias was significant, based on the nativism ultimately reflected by much of the population during this election. 

So were the numerous projections from so many reputable research companies and their media partners really that far off the mark? 

No! Especially when you throw in all the targeted dark posts by the Trump campaign on Facebook; the James Comey announcements so close to Election Day; and the anticipated Gary Johnson vote in a tight race, especially in critical swing states. 

Regarding the Facebook influence: this chameleon is a tech company, a research company and surely a media/advertising company, and it poses an array of issues for those in media and marketing.  In the article “The Secret Agenda of a Facebook Quiz” (New York Times, Nov. 19, 2016), McKenzie Funk stated: “If Mr. Zuckerberg takes seriously his oft-stated commitments to diversity and openness, he must grapple honestly with the fact that Facebook is no longer just a social network. It’s an advertising medium that’s now dangerously easy to weaponize.” 

The majority of polling averages had Clinton winning the popular vote by ~2%-4%.  She won by ~1%, despite the wonky error margins of the individual polls.  This raises the second anomaly: how could a ~2%-4% Clinton lead translate into a 60%-75%+ likelihood to win, based on polls with multiple known flaws plus the Facebook, Comey and Johnson dimensions? 
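One way to see how forecasters turn a small lead into a large win probability is a simple normal-error model: treat the polling average as an estimate whose error is normally distributed, and ask how likely the true margin is to be above zero. This is purely an illustrative sketch; the function name and the 3-point/4-point numbers are assumptions, not the forecasters' actual models:

```python
import math

def win_probability(lead, sigma):
    """P(actual margin > 0), assuming the polling-average error
    is normally distributed around the observed lead.

    lead:  observed polling lead (0.03 = 3 points)
    sigma: standard deviation of the polling-average error
    """
    # Standard normal CDF expressed via the error function
    return 0.5 * (1 + math.erf(lead / (sigma * math.sqrt(2))))

# A 3-point lead with a 4-point error SD:
print(round(win_probability(0.03, 0.04), 2))  # prints 0.77
```

The sketch shows the article's point in reverse: a 60%-75% win probability is only a claim that the lead modestly exceeds the assumed error, so even a small underestimate of that error (correlated state polling misses, late-breaking events) collapses the apparent certainty.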

The third anomaly was that the polls also “missed” in most of the key battleground states, notably North Carolina, Wisconsin, Pennsylvania and Michigan, where the final result is still to be declared.  Again, the difference between poll and result was generally less than ~4%-5%, well within any truly pragmatic error estimate. 

Tom Anderson, founder of OdinText, has suggested the real problem may be with quantitative polling itself: “It just is not always a good predictor of actual behavior.”

He believes the failure of the polls stemmed from an over-reliance on simplistic aided questions.  While easy to collect and analyze, such quantitative polls lack the richness of unaided, qualitative questions.  Likert scales - a series of questions with a set of answers or a ratings scale, used in virtually every poll and quantitative survey - tend to fail when applied to behaviors that are not concrete or “quantifiable.” 

OdinText asked a straightforward unaided, top-of-mind response via an open-ended question:  “Without looking, off the top of your mind, what issues does [insert candidate name] stand for?”

The OdinText qualitative results strongly suggested that Hillary Clinton was in more trouble than polling data indicated.  The No. 1 response for Clinton was a high perception of dishonesty/corruption, versus the No. 1 and No. 2 responses for Donald Trump, which related to his platform - immigration, followed by pro-USA/America First - with perceived racism/hatemongering only third. 

Tom stated: “If I want to understand what will drive actual behavior, the surest way to find out is by allowing you to tell me unaided, in your own words, off the top of your head.”  But even his surveys will reflect the vagaries and biases of an imperfect science despite sophisticated analytic software.  

The media should have been reporting, or more aptly “interpreting,” the polls as “too close to call” well before the actual election night! 

The ARF & GreenBook are sponsoring “Predicting Election 2016” - Tuesday Nov. 29.

1 comment about "Election 2016 - How did the Media Get it Wrong?".
  1. Tony Jarvis from Olympic Media Consultancy, November 24, 2016 at 9:56 a.m.

    Agreed. Will leave the article on the media's motivations to misinterpret the polling results to you.  As a researcher, I just wanted to underline the limitations of polling and the importance of proper interpretation of such surveys, at least to the honest journalists in our business, per Joe Mandese's article regarding the NY Times. 
