Commentary

Telling Tales About Online Surveys

We regularly harp on how data is collected by the Nielsens, Arbitrons and MRIs of the syndicated research world. But what about the quality of data from media agencies' own proprietary research, where there is no oversight by an organization like the Media Rating Council?

As a participant in a number of online panels, I shake my head at the design quality of some of the surveys: interminable length, poorly worded questions and puzzling question flow.

For example, last week I answered a survey about vision correction. At the outset, I established that I wear eyeglasses but not contacts. Yet there were numerous questions about my opinion of contact lens-oriented products, and "not applicable" wasn't a choice. To go further into the survey, I had to pick an answer anyway.

In another survey, I was asked to choose the financial institution my employer-sponsored 401(k) is invested with. That institution wasn't listed, and there was no "other" option, so I couldn't continue without choosing a company. Maddeningly, I was forced to select one I don't use.
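Both problems are avoidable with basic questionnaire logic. Below is a minimal sketch, purely illustrative and not drawn from any agency's actual tooling: the question wording, option lists and simple dictionary-based survey model are my own assumptions. It shows how skip logic routes respondents past questions that don't apply, and how catch-all options such as "Not applicable" and "Other" keep people from being forced into false answers.

```python
# Illustrative sketch of skip logic and catch-all answer options.
# Question IDs, wording and option lists are hypothetical examples.

QUESTIONS = [
    {
        "id": "eyewear",
        "text": "Which vision correction do you use?",
        "options": ["Eyeglasses", "Contact lenses", "Both", "Neither"],
    },
    {
        "id": "contact_solution_opinion",
        "text": "How do you rate contact-lens cleaning solutions?",
        "options": ["Excellent", "Good", "Fair", "Poor", "Not applicable"],
        # Skip logic: only ask respondents who actually wear contacts.
        "ask_if": lambda answers: answers.get("eyewear") in ("Contact lenses", "Both"),
    },
    {
        "id": "retirement_provider",
        "text": "Which institution holds your employer-sponsored 401(k)?",
        # A catch-all "Other" keeps respondents whose provider isn't listed
        # from being forced into a false answer.
        "options": ["Provider A", "Provider B", "Provider C", "Other", "Not sure"],
    },
]


def run_survey(get_answer):
    """Walk the questionnaire, applying skip logic before asking each question."""
    answers = {}
    for q in QUESTIONS:
        ask_if = q.get("ask_if")
        if ask_if is not None and not ask_if(answers):
            continue  # respondent is routed past questions that don't apply
        answers[q["id"]] = get_answer(q["text"], q["options"])
    return answers


if __name__ == "__main__":
    # Simulated respondent: wears eyeglasses only, so the contact-lens
    # question is skipped rather than forcing an inapplicable answer.
    scripted = {
        "Which vision correction do you use?": "Eyeglasses",
        "Which institution holds your employer-sponsored 401(k)?": "Other",
    }
    print(run_survey(lambda text, options: scripted.get(text, options[-1])))
```

None of this is sophisticated; it's the kind of routing and answer-option hygiene that any competently built questionnaire should include.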

Some of the most tedious surveys are those pertaining to banking or insurance, because the category isn't exciting and the questions are often much too long. Even surveys about relatively interesting product categories (e.g., beverages, electronics, fast-food restaurants) become exasperating when they attempt to milk respondents for every conceivable bit of information.

When a respondent is subjected to a lengthy, meandering survey, at what point does he or she become disengaged (or simply opt out)? And how does that affect the accuracy of the answers?

These flaws suggest that too many of the people given the responsibility of putting together a survey lack an understanding of the complexities and subtleties of questionnaire design and respondent psychology. They bring a simplistic "if you ask, they will answer" attitude to the task.

That's why the oversight function of media researchers is a crucial one. Without it, questionable or misleading "insights" from flawed surveys end up being presented to clients or new business prospects.

While it's vital that we never stop insisting on quality research from our suppliers, agencies also need to be vigilant about internal practices when conducting their own research.

3 comments about "Telling Tales About Online Surveys".
  1. Glenn Enoch from ESPN, Inc., July 26, 2011 at 9:04 a.m.

    Not to mention the issue of whether the results of these online surveys are projectable to any population!

  2. Paula Lynn from Who Else Unlimited, July 26, 2011 at 9:30 a.m.

    Clarity from financial institutions is not their strong suit. ;) Intentional? Any questions? Wrong questions yield wrong answers, which yield the wrong "campaign," wasting millions for those who couldn't afford professional research.

  3. John Grono from GAP Research, August 14, 2011 at 9:34 p.m.

    When clients pay peanuts, they get survey monkeys who don't understand research (representativeness and projectability) or questionnaire design.

    When I was being taught my research principles at university, my lecturer used to say, "Give me any answer you want and I can construct the questionnaire to produce it." I sure learned the importance of wording and of adding "Not Sure/Don't Know" and "Not Applicable".

    But my favourite question and answer was: Q. "How hard do you find it to pay the rent each month?". A. "Not too hard - the landlord lives upstairs."
