When Testing Doesn't Work

There are probably thousands of articles on the Web about how to test marketing campaigns effectively. You have a hypothesis, you have test variables, you have test segments, and you have some view of the statistical probabilities associated with the outcome. That's a perfect world. Let's first assume you actually have the time to build a proper test and the resources to create the many versions -- and that you can actually deliver on the test program and keep it sterile. What do you do when the tests don't tell you what you want to hear?

A consumer marketer I know with a really large database decided to test the cadence of their email programs. They developed several test cells: a high-frequency cell, a low-frequency cell, a control group exposed to the same cadence they'd run for a year, and a hold-out group. It seemed like a pretty straightforward test: the cell with the highest revenue would lead to some conclusions about how often they should send promotional email. Well, what they found was that the hold-out group actually performed better in terms of revenue than any of the other test cells. At first glance, you'd think: our email program stinks, and all this work we put in is wasted energy. We can generate these sales without email.
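For readers curious what a cell setup like this looks like in practice, here is a minimal sketch of deterministic, hash-based assignment -- the usual way to keep a test sterile, since a given customer always lands in the same cell no matter how often the job reruns. The cell names, weights, and salt below are illustrative assumptions, not the marketer's actual configuration:

```python
import hashlib

# Hypothetical cells for a cadence test like the one described above:
# high frequency, low frequency, control (current cadence), hold-out.
CELLS = [("high_freq", 0.30), ("low_freq", 0.30),
         ("control", 0.30), ("holdout", 0.10)]

def assign_cell(customer_id: str, salt: str = "cadence-test") -> str:
    """Deterministically bucket a customer into a test cell.

    Hashing the (salt, customer_id) pair maps each customer to a
    stable, uniform value in [0, 1); the cumulative weights carve
    that range into cells.
    """
    digest = hashlib.sha256(f"{salt}:{customer_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000  # uniform in [0, 1)
    cumulative = 0.0
    for name, weight in CELLS:
        cumulative += weight
        if bucket < cumulative:
            return name
    return CELLS[-1][0]  # guard against float rounding
```

Changing the salt re-randomizes everyone, which is how you'd run a fresh test on the same database without inheriting the old assignments.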

This test somewhat backfired on the CRM team; now they had to justify what they do day in and day out. But they realized that the frequency of communications wasn't an indicator of their program's success. The problem was at the root of their program: early-lifestage messaging. They had a very progressive offer strategy for new customers and people who signed up for their product/service, and they were effectively numbing their audience to their email promotions. It wasn't that revenue from their retention audience was declining; rather, email was no longer as effective at driving conversion on its own terms, and increasing offers didn't increase sales at the same rate as the increased offer value.

What do you do in a circumstance like this? Would the senior team actually believe this new hypothesis and be able to deal with the subsequent costs of ripping apart the company's early-stage promotional strategies? Could they migrate away from "crack marketing," where you're so addicted to doing the same things that you are afraid to make wholesale changes?

The challenge with a scenario like this is that the CRM team didn't think through all the possible outcomes and dynamics that might influence the results -- and they weren't ready to react to them.

Another story involves a company that decided to test the effects of direct mail and email through different scenarios: email only, direct mail only, and a combination of the two. It seems like a fruitful exercise. If you can prove that email has a better influence as a standalone channel, you have a winner from a cost-savings perspective. If you can show that the combination of email and direct mail drives increased revenue, you can potentially leverage the pairing to be very timely in your launches and communications. But what if every scenario pays off? What happens if revenue is down all around? What happens if you find that a large portion of your loyal customers are no longer buying through the email channel, but your paid search numbers have increased dramatically?
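When more than one scenario "pays off," the tie-breaker is usually revenue lift over the comparison cell rather than raw response. Here is a rough sketch of that comparison, assuming you have per-customer revenue figures for each cell. The Welch-style z-statistic here is a deliberate simplification -- real revenue data is long-tailed and deserves more careful treatment than this:

```python
import math
from statistics import mean, stdev

def revenue_lift(test: list[float], control: list[float]) -> tuple[float, float]:
    """Percent lift of the test cell over the control cell,
    plus a Welch-style z-score on the difference in means.

    Illustrative only: a real program should also account for
    sample size, revenue skew, and novelty effects.
    """
    diff = mean(test) - mean(control)
    lift = diff / mean(control)
    # Standard error of the difference in means (Welch's formula).
    se = math.sqrt(stdev(test) ** 2 / len(test)
                   + stdev(control) ** 2 / len(control))
    return lift, diff / se
```

A lift that looks impressive but carries a z-score near zero is exactly the kind of result the column warns about: an outcome you can't justify acting on.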

Testing is a funny exercise. It's hard to contemplate all the potential outcomes, and it takes a lot of energy to put together good tests. Is the trade-off to do it poorly and risk outcomes that you can't justify? That is almost worse than not testing at all.

I won't belabor the point. The only time tests don't work is when you aren't prepared to react, haven't thought about all the potential outcomes, or take no action at all on the results.

3 comments about "When Testing Doesn't Work".
  1. Pat Mcgraw from [mcgraw | marketing], August 24, 2009 at 11:06 a.m.


    Excellent post - and I am happy to see your example about 'crack marketing' since I have seen the same thing at several organizations.

    Testing truly is funny, and sometimes tests can raise more questions than they answer. But as you say, you have to be prepared for anything and you have to be willing to take appropriate action. Otherwise you are just throwing money into the air, hoping that more will be returned to you through some miracle.

  2. Peter Rosenwald from Consult Partners, August 24, 2009 at 12:05 p.m.

    Interesting and informative view of testing.
    As you suggest, if the variables are reduced to the minimum number (best is not to have more than a single one to test) the test should produce results that inform the next action.
    But it is not only the raw percentage response that matters. As you suggest, look at the net revenue generation implications because that is where the profits are.
    Any idiot can increase percentage response numbers by giving an incentive the prospect "can't afford to refuse" but that will almost always result in lower net revenue (after the incentive subsidy has been absorbed).

  3. Jennifer Kaplan, August 24, 2009 at 3:28 p.m.

    I was intrigued by the title "When Testing Doesn't Work"; however, I think it must be mentioned that you typically can't get definitive results from one well-executed test. While short-term tests can serve as a basis for making quick decisions, best-in-breed email programs are consistently testing. I think the obvious solution to a test that doesn't work would be to conduct another test, especially when there are so many variables to play with: copy, mention of brand name, capitalization, images, offers, subject lines, time of day, day of week, frequency, etc.
