I continue to see a number of highly intelligent business editors covering the marketing industry get fooled into running stories based on PR polls disguised as research. And when that happens,
everyone in the marketing community gets hurt.
New research is exciting and relevant to marketers. It leads to new thinking and ideas. And it makes good copy. As an example, one major
trade publication recently ran a story about a survey of "Chief Marketing Officers" and marketing measurement with the headline "Survey Finds Marketing Contributes to the Bottom Line." This
undoubtedly made it into countless PowerPoint presentations in budget meetings.
But scratch beneath the surface and this "survey" comprised 423 "business executives and marketing
professionals" who may have come from any industry, worked at companies with marketing budgets anywhere from $10k to a billion dollars, and ranged from CEOs to entry-level managers. In other words, contrary to
what the headline indicates, the sample reflects no particular group, aside from those willing to be surveyed.
Even more dangerously, the story went on to state that 39% of the
respondents agreed that marketing is doing a good job of contributing to the financial condition of the business, up from 19% last year. Presumably this was attributed to better use of
metrics to measure marketing's impact. But were these the same people surveyed last year? Were they selected to match the profile of last year's respondents, so the comparison would be scientifically
valid? No. They were just this year's batch of willing respondents, bearing little resemblance to last year's group, making any sort of trend analysis invalid.
Unfortunately, this is neither uncommon nor harmless. Marketing struggles every day to earn trust and credibility with finance, operations, sales, and other functions that bring a more
skeptical and discerning eye to research. But if the marketing media suggests it's OK to accept PR polls as research, it indirectly encourages marketers to cite these
"findings" when rationalizing their recommendations to others.
In fairness to my hard-working friends in the media, most editors and staff writers (let alone marketers themselves) have
never had the benefit of training in how to tell a bogus survey from a reliable one. They're busy trying to produce more content to feed both online and offline vehicles, with smaller payrolls
and more pressure to attract readers. So perhaps I can offer a few simple tips to separate the fluff from the real stuff:
1. Before you even read a survey's findings, ask to see a copy of the survey questionnaire and check out the
profile of the respondent group so you know how to interpret what you're being told. Get a clear sense of "who" is supposedly doing/thinking "what," and ask how the respondents were incentivized. Then
ask yourself if a reasonable person would really put the effort into answering completely and honestly.
2. Check the similarities and differences among survey respondents. If the vast majority share similar traits (e.g., company size, industry group, annual budget), then it's fair to extrapolate the
findings to the larger group they represent. But if no single characteristic (other than being in "marketing") ties them together, they represent no one, regardless of the number of respondents.
You'll need to separate the responses by sub-group, like larger marketers versus smaller ones, or B2B vs. B2C. In general, you'll need at least 100 respondents in any sub-segment to get a valid
result (the sketch after these tips shows why small sub-samples are so noisy).
3. Check to see if the sample has been consistent year-over-year. If it has, you can safely say that
something has or hasn't changed from one year to the next. But if the sample profile is substantially different year-to-year, comparisons aren't valid due to differences in the perspectives or
expertise of those responding.
4. Ask about the margin of error. Just because 56% of some group say they
feel/believe/do something doesn't mean they actually do. EVERY survey comes with a margin of error. Most PR-driven polls in the marketing community use inexpensive data collection techniques
that offer no way to validate what people say and no mechanisms to keep respondents honest. Consequently, 56% may actually mean somewhere between 45% and 65%, which may change the
interpretation of the findings (the sketch below shows the arithmetic). If the sponsor can't tell you the survey's margin of error, don't publish the numbers.
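To see where a range like that comes from, here is a minimal Python sketch of the textbook margin-of-error formula for a proportion. The 56% figure and the 423 respondents come from the example above; the other sample sizes are illustrative. The formula assumes a properly drawn random sample, which most PR polls are not, so treat its output as a floor on the real uncertainty.

    import math

    def margin_of_error(p, n, z=1.96):
        """Half-width of a 95% confidence interval for a proportion,
        assuming a simple random sample (z = 1.96 for 95% confidence)."""
        return z * math.sqrt(p * (1 - p) / n)

    # Illustrative check: 56% agreement at a few sample sizes.
    for n in (100, 423, 1000):
        moe = margin_of_error(0.56, n)
        print(f"n = {n:>4}: 56% +/- {moe:.1%} "
              f"(i.e., somewhere between {0.56 - moe:.0%} and {0.56 + moe:.0%})")

At 100 respondents the formula already gives roughly plus-or-minus 10 points, which is why a headline number from a small or loosely defined sub-sample tells you very little, and why self-selected respondents make the true range wider still.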
And if all that is too difficult, call me and I'll
give you an unbiased assessment for free.
It's difficult to produce a survey that provides real insight and meaningful information. That's why real research costs real money. And while
there's nothing wrong with using polls to gather perspective from broadly defined populations, or with PR folks using them for PR purposes, confusing PR with real research slowly poisons the well for
all of us in the marketing community.