Commentary

Don't Become An ROI Measurement 'Victim'

  • by , Columnist, August 11, 2009
As sports property holders and their agencies face louder and more incessant demands from sponsors/partners to demonstrate return on their marketing investment, it becomes all too easy to fall into the commoditization trap that has become so prevalent in ROI research today.

Today's cluttered marketing environment makes it virtually impossible to gauge the direct impact of a single marketing execution. Much as we'd like to think there is an easy way to attribute consumer behavior directly to exposure to a single marketing tactic, consumers don't think like media planners. The purchase process is simply too multi-tiered, spanning multiple touch points. To paraphrase John Wanamaker, we know that 50% of advertising works; it's just not easy to figure out which 50%.

Yet many ignore this reality and insist on force-feeding ostensibly simple black-box models with marketing research that measures the wrong things in the wrong ways. Simply said, you can't expect a consumer to honestly answer a direct question about whether a specific advertisement or sponsorship made them "take a purchase action." Consumers' brains aren't wired that way. It confounds me that so much of what passes for ROI research today actually poses exactly these questions in an attempt to measure ROI.

This fatal flaw is exacerbated by other pitfalls of poorly constructed ROI research, including:

  • Leading questions: No one wants to call the baby ugly. Leading questions beget perceptual parity across competitive brands, creating doubt and confusion in the minds of skeptical sponsors.
  • Poorly recruited respondents: As the immediate past president of the national Marketing Research Association, I've had more than my fair share of exposure to industry discussions on researchers' over-reliance on "convenience samples," "professional respondents" and other pitfalls made more prevalent by the proliferation of online studies. Thankfully, great strides have been made and methodological checks and balances developed in the online research space. Unfortunately, these safeguards seem all too often absent from much of the ROI research I have seen.

Towards a 'Reasonable Approach'

Rex Briggs and Greg Stuart, in their book What Sticks, focus on what researchers call "experimental design," which is consistent with our point of view at Sports and Leisure Research Group. Simply said, the best and most practical way to measure ROI in the present environment is to blindly test consumer perceptions of a wide competitive set of brands within a sponsor's category over multiple waves of research conducted before, during and after deployment of a sports marketing campaign or sponsorship.

Each of these research waves is ideally fielded against two parallel samples of target consumers: one that can be verified to have been reasonably exposed to the sports marketing initiatives in question (and NOT by asking them directly whether they have been!), and an unexposed control group.

Done properly, such an approach provides both property holders and their partners with a rich set of insights, revealing which aspects of the brand's desired essence are resonating with the exposed target audience and which require amplification.
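The logic of this wave-based test/control design can be sketched in a few lines of code. The numbers below are entirely hypothetical, invented for illustration: mean agreement with a blind brand-perception item, fielded against the exposed and unexposed samples in each wave. The net lift attributable to the campaign is estimated as the exposed group's movement minus the control group's (a simple difference-in-differences).

```python
# Hypothetical brand-perception scores (% agreement) for two parallel
# samples across three research waves: pre, during and post campaign.
waves = ["pre", "during", "post"]

exposed = {"pre": 41.0, "during": 47.5, "post": 50.0}
control = {"pre": 40.5, "during": 41.0, "post": 42.0}

def lift(group, baseline="pre"):
    """Change in score for each wave relative to the pre-campaign baseline."""
    return {w: group[w] - group[baseline] for w in waves}

exposed_lift = lift(exposed)
control_lift = lift(control)

# Difference-in-differences: movement in the exposed sample, net of
# whatever moved the category as a whole (captured by the control group).
did = {w: exposed_lift[w] - control_lift[w] for w in waves}

for w in waves:
    print(f"{w:>6}: exposed {exposed_lift[w]:+.1f}, "
          f"control {control_lift[w]:+.1f}, net lift {did[w]:+.1f}")
```

Because the control group absorbs category-wide noise (seasonality, competitor activity, economic shifts), the residual delta is a more defensible estimate of the sponsorship's effect than the exposed group's raw movement alone.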

The added benefit is that this takes the evaluative focus off the often unfair, elusive and commoditizing scorekeeping that makes so many property holders shudder at the risk inherent in today's ROI research, and moves the dialogue to a more productive discussion about how to optimize the benefit derived from a sports marketing relationship.

4 comments about "Don't Become An ROI Measurement 'Victim' ".
  1. Jon Hickey from allen & gerritsen, August 11, 2009 at 2:54 p.m.

    Interesting and accurate perspective. A key practical challenge with this approach is generally the reluctance of clients to fund/field this level of extensive pre-during-post research.
    Interested in seeing case studies showing the validity and impact of this process, as those will clearly be needed for buy-in and funding.

  2. Mark Heap from PHD, August 11, 2009 at 9:20 p.m.

    I agree with your general point of view that fundamentally, the current research methods don't credibly reflect the nuances of a complex business. I work with some clients who are obsessed with metrics and if they measure it, it instantly becomes 'a KPI'. You end up with objectives for a single campaign that include build awareness, drive consideration, increase purchase intent, sales revenue, repeat customer %, website hits, newsletter registrations, people think the brand is 'cool', etc. It's laughable that anyone thinks you can have a focused campaign with such divergent objectives.
    I'm not sure that your proposed solution is deep enough as marketers and their agencies will still want to understand what specific elements are working hardest. Econometrics can help with this but is very data heavy and relies on past conditions being fairly consistent in the future.
    At my agency (PHD), we've invested a lot into neuroscience research techniques, to help us to better understand both the conscious and subconscious reactions to different messaging and media stimuli. It's a very fascinating field with no 'silver bullet', and certainly needs a balanced and realistic perspective of how useful various research can be.

  3. Nicholas Cameron, August 12, 2009 at 12:09 a.m.

    The best bit of this article is this section, "...you can't expect a consumer to be able to honestly answer a direct question on whether a specific advertisement or sponsorship made them "take a purchase action" because consumers are not wired this way".

    These kinds of questions should be avoided because most people will focus on providing logical (functional) answers to direct questioning and will downplay emotions, even when emotions have more impact on consumer behavior.

    More than anything else, these questions seem to be used to demonstrate that sponsorship isn't effective, and they tend to be rather meaningless in a measurement approach.

    Basically, if clients want to know the effectiveness of sponsorship, they need to invest in it the way they do with other types of marketing evaluation.

  4. Haren Ghosh from Factor TG, August 13, 2009 at 2:55 p.m.

    A couple of points we also need to think about while discussing pre-during-post research:
    1. How will this be different from a regular tracking study? Conceptually they are the same. While regular tracking studies track the overall metrics, pre-during-post studies attempt to track the deltas (the changes between test- and control-group responses) across three different time periods.
    2. Obviously, one may argue that an individual respondent is not wired to answer correctly on any metric (no matter how you define it), but when we aggregate a large number of respondents (read: potential customers) we can get a fairly accurate picture.
    3. I do not necessarily agree that “asking them directly if they have been!” would yield anything different or erroneous, because the foundation of any primary-data research is consumers’ responses. If we can rely on their responses elsewhere (such as stated awareness, intention, etc.), there is no valid reason not to believe their responses on advertisement/media consumption.
    4. Certainly, sample selection is very critical. A convenience sample may produce something that is not valid in the real world. In fact, that’s one of the major reasons most panel-data studies do not converge with real-world market conditions. Randomization, or a generally representative sample, is quite critical.
    5. The experimental design outlined here may indeed provide something powerful; however, one should not ignore the difficulty of implementing these types of studies in reality. This is especially true where there are multiple media and one cannot identify a true control group. The problem is even more apparent when attempting to execute cross-media studies applying a full factorial experimental design.
