Commentary

Control Groups And Email's Impact

I thought I'd follow the Email Diva's approach this week and share a question that was posed to me a while back, along with my response.

Question:    How do you really show the effect of email on the business?  We track open and click-through rates, but revenues are mostly generated through store-level transactions. 

Answer:   All reporting variables matter in this equation, but there are key steps you must take to show the incremental impact of email -- and even then, there is work involved in demonstrating actual conversion. Unfortunately, attribution modeling -- in other words, assigning appropriate credit for a sale to the channel that drove it -- poses a challenge for many marketers, primarily because the channels rarely agree on who gets credit.

I'll give you an example. Let's suppose a customer sees an ad on television for a great new HDTV from Circuit City. Later that night, online, he decides to search for that model of television, types in the brand name, and sees a link to Circuit City. He clicks through to the site, looks at the specifications and sees the promotional pricing that was offered on TV. He decides to shop competitors' sites and compare pricing and options. Later that week, he sees a banner ad on ESPN for that same HDTV from Circuit City, and then over the weekend he opens his email and sees an ad from Circuit City. For the sake of this story, let's say he decided to purchase the TV at the store that weekend and have it shipped, and now he's a happy customer.


As online marketers, how are we to understand the value and impact we're driving through our online efforts? We know that an anonymous user clicked through on a search term, but no transaction substantiated the value of that keyword or the role TV played in this connection. We know that our media placed on ESPN.com was valuable, as we've got thousands of impressions and clicks to support it, but again, "0" in the sales column. We know this customer received an email on the weekend, opened it, clicked through to the site -- and let's say we're a progressive marketer and even know the customer went straight to the HDTV page and the promotions page -- but again, we have "0" to show in terms of sales.

In this scenario, you could illustrate the value of SEM or even SEO by the types of branded keywords searched, supporting a conclusion that ties keyword timing to the promotion, and make some assumption about attributing an offline sale based on this "intent" and the timing of the site/TV promotion. In the media world, you could take a very similar approach by audience type and the network they were exposed to. For email marketing, we are generally lost in this attribution world, unless we manually match store-level transactions to an email address on file and assume email had some influence. But that's rarely an exact attribution, and rarely believable.

One thing you can and must do to show the impact of the emails you send on incremental sales, and to form an attribution model that reflects those contributions, is administer a Master Control Group. In any form of testing, it's critical that you have several cells to compare: those exposed to the stimuli (the ads), and those not exposed, or exposed at a different frequency. You have to show the variance in performance -- this is truly the only way to protect the purity of the test. In the simple case, you hold out a statistically valid control group and watch its performance vs. that of your other groups. In a more sophisticated scenario, you'd test across customer segments and ensure you have a good cross-representation of your database.
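To make the idea of a "statistically valid" holdout concrete, here is a minimal sketch in Python using the standard sample-size formula for estimating a proportion. The baseline conversion rate and precision target below are illustrative assumptions, not figures from this column:

```python
import math

def min_holdout_size(p_baseline: float, margin: float, z: float = 1.96) -> int:
    """Minimum holdout size so the observed conversion rate lands
    within +/- `margin` of the true rate at ~95% confidence (z=1.96).
    Standard sample-size formula: n = z^2 * p(1-p) / margin^2."""
    return math.ceil(z ** 2 * p_baseline * (1 - p_baseline) / margin ** 2)

# Hypothetical example: ~5% baseline store conversion, and we want to
# measure the holdout's rate to within half a percentage point.
n = min_holdout_size(0.05, 0.005)
print(f"minimum holdout size: {n:,} customers")
```

The practical takeaway is that tighter precision gets expensive fast: halving the margin roughly quadruples the holdout you must suppress from mailings.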

In either case, your business decision is how much stimulus to expose this group to, and what degree of variance statistically justifies your original hypothesis.

Many email marketers do this for frequency testing. One group gets a lot of email, one group gets the same cadence as the year before, and the last group gets very little or none. The nice thing about group testing is that you can show a pure incremental change between segments, and even isolate by the types of emails they receive (some get newsletters, some get promotions only). It's not difficult to administer; it just requires you to be consistent.
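The cell comparison above boils down to a simple significance check on conversion rates. Here is a sketch with made-up conversion counts (the cell sizes and buyer counts are illustrative assumptions) using a standard two-proportion z-test between a mailed cell and a no-email holdout:

```python
import math

# Hypothetical store-level results for two cells:
# "mailed" customers got the regular email cadence; "holdout" got none.
mailed_n, holdout_n = 10_000, 10_000
mailed_buyers = 520    # 5.2% of mailed customers purchased in-store
holdout_buyers = 450   # 4.5% of holdout customers purchased

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-proportion z-test: is the mailed cell's conversion rate
    significantly different from the holdout cell's?"""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

lift = mailed_buyers / mailed_n - holdout_buyers / holdout_n
z = two_proportion_z(mailed_buyers, mailed_n, holdout_buyers, holdout_n)
print(f"incremental conversion lift: {lift:.2%}")
print(f"z-statistic: {z:.2f}")  # |z| > 1.96 -> significant at the 95% level
```

The lift between mailed and holdout cells is the "pure incremental change" the paragraph describes; the z-statistic is what lets you defend it at those budget meetings.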

This is the only way you'll show email's impact statistically and scientifically -- and it will carry weight at those budget meetings.
