Is Your Email Success Attribution Wishful Thinking?
As a parent, I would love to believe that it is my influence (my husband's too) on my children that makes them smart, funny, confident and just so darn cute! But realistically, it is some combination of nature and nurture that makes my children who they are: their individual experiences with friends or bullies or teachers at school, the sports activities they participate in, other members of our extended family, and so many other things are ultimately responsible for molding my children. I maintain that my husband and I are still the master molders of our kids, but I can accept that there are other influences too (sigh -- good and bad!).
Much like with children, when evaluating email campaigns, we may be biased to attribute success in a single mailing to the new thing we tried with the content, or the time of day when we sent the email, or the new subject line -- all factors under our direct control -- rather than to increasing brand awareness (something under indirect control), or to natural statistical swings in the success rate over any given number of campaigns (entirely out of our control).
This phenomenon is similar to the psychological principle called the fundamental attribution error. It says that, when we evaluate others' behavior, we are more likely to blame something about them, rather than something about the environment. When evaluating our own successes or failures, we often cite things we did, rather than features of the environment.
Why should we care? Because this bias naturally causes us to miss perfectly legitimate influences on our campaigns' performance. For example, consider a campaign from a cruise company promoting winter discounts for the upcoming holiday season. This company, call it CruisesRUs, decides to try a new promotion, one they have never tried before: 15% off if you book with more than two people in your party. Last year, they ran a promotion offering a free day excursion if you booked with more than two people.
The 15% off campaign goes out and the results come in. The promotion performed much worse than last year's promotion. Is this because the promotion was "bad"? Did it not work with the audience? Was it too much? Too little? You can easily see how any of those things could be offered as an explanation for the performance of the campaign, possibly causing the company never to run that promotion again.
Or was it simply a result of the terrible press the cruise industry has received in the last few years? That may affect conversion rates because fewer people are willing to see an invitation to join a couple on a cruise as a good thing, for fear of horrible consequences. Maybe a new trend has developed among social cruise-goers to go to a winter resort instead.
Point is, there are many possible explanations, and it's not necessarily true (or false) that the promotion itself caused the drop in the conversion rate. I know you can think of similar times when you wanted to attribute all of the success or failure of a campaign to one factor; it's just human nature to do so.
How do you mitigate the effects of this error?
1. Don't overreact. When you do evaluate campaigns, and you come up with an explanation, realize that it's just a theory. Don't stop doing percentage-based offers, for example, just because one email with a percentage-based offer didn't do well. Don't use ALL CAPS for all of your subject lines because one email with ALL CAPS performed better.
You don't have to find all the possible explanations -- in fact, you can't -- so simply observe possible explanations and treat them as probable causes, not absolutes.
2. Evaluate over time. Test different approaches over time and with different campaigns to minimize the effect of influences beyond your control. If ALL CAPS subject lines give you a lift in your conversion rate only once in five trial campaigns, perhaps it wasn't the caps that caused the lift. If, on the other hand, they give you a lift each time, you're more likely on to something.
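If you want to go beyond gut feel on whether a lift is real, a quick significance test helps. Here's a minimal sketch in Python (the function name and all the send/conversion numbers are hypothetical, purely for illustration):

```python
from math import sqrt, erfc

def conversion_lift_pvalue(conv_a, sent_a, conv_b, sent_b):
    """Two-proportion z-test: is the difference between two campaigns'
    conversion rates likely real, or plausibly just noise?
    Returns a two-sided p-value."""
    p_a = conv_a / sent_a
    p_b = conv_b / sent_b
    # Pooled rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (sent_a + sent_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    return erfc(abs(z) / sqrt(2))  # two-sided p-value

# Hypothetical numbers: the regular subject line converted 230 of 10,000
# sends; the ALL CAPS line converted 260 of 10,000. Meaningful lift?
p = conversion_lift_pvalue(230, 10_000, 260, 10_000)
```

With numbers like these, the p-value lands well above the usual 0.05 cutoff, meaning the apparent "lift" could easily be noise -- which is exactly why repeated trials matter more than any single campaign.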
3. Realize you're not in control. Marketing is an art, and we can't guarantee results (no matter how much the C-suite wants us to). You're gonna do better sometimes, and you're gonna do worse sometimes. Don't freak out and try to make everything perfect. Look for incremental, sustained improvement.
I'm not saying you don't get to look for improvement every single time you send an email. Just realize that true improvement takes time -- time to build and time to know it's real, and not just statistical noise.
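To see how much pure statistical noise can move the numbers, you can simulate a batch of identical campaigns. This toy simulation (all numbers made up) runs 12 campaigns that share the exact same true conversion rate and watches the measured rate swing anyway:

```python
import random

random.seed(7)  # fixed seed so the example is reproducible

TRUE_RATE = 0.025  # every campaign has the same underlying 2.5% rate
SENDS = 5_000      # recipients per campaign

# Observed conversion rate for 12 campaigns with an identical true rate.
observed = []
for _ in range(12):
    conversions = sum(random.random() < TRUE_RATE for _ in range(SENDS))
    observed.append(conversions / SENDS)

spread = max(observed) - min(observed)
```

Even though nothing changed between these simulated campaigns, the best and worst "performers" still differ measurably. Crediting that spread to a subject line or an offer is exactly the attribution error this article is about.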