E-mail testing can be measured like a batting average--if you're a .300 hitter, you are in the big leagues.
I've always been amazed that even great baseball players can fail to get a hit seven times out of 10 and still be considered stars. (I'd sure like those odds on my work projects, please!) Testing your house e-mail program is similar. You won't get a home run every time, but every hit is solid progress and genuinely important.
Since there is a virtually infinite number of elements to test, and only a few of them will produce dramatic findings, it can feel like your testing program is striking out. Maintain your emotional and professional fortitude--and stick with the testing program even when you don't see big results every time. Even incremental gains can turn into real revenue.
We all know the testing essentials--like limiting the number of elements per test; always testing subject lines; rinsing and repeating; having a very clear goal; and measuring what you are testing. If the test isn't about the call to action, then clicks might not be the right measurement. On the other hand, if clicks are the ONLY thing you can measure, be sure to set up tests that impact the call to action.
Also be sure to include testing in all your prospect and acquisition e-mail marketing. Test concepts, subject lines and headlines prior to rolling out--we've seen upward of a 12 percent lift from just one test mailing.
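Before crediting a lift like that 12 percent, it's worth checking that the difference between test and control isn't just chance. A minimal sketch of that check in Python, using a standard two-proportion z-test (the send and conversion counts below are hypothetical, not figures from the column):

```python
from math import sqrt, erf

def lift_and_significance(ctrl_sends, ctrl_conv, test_sends, test_conv):
    """Compare a test mailing against control: returns percent lift and a
    two-sided p-value from a two-proportion z-test."""
    p_c = ctrl_conv / ctrl_sends
    p_t = test_conv / test_sends
    lift = (p_t - p_c) / p_c * 100
    # pooled conversion rate and standard error under the null hypothesis
    # that the two mailings convert at the same rate
    p_pool = (ctrl_conv + test_conv) / (ctrl_sends + test_sends)
    se = sqrt(p_pool * (1 - p_pool) * (1 / ctrl_sends + 1 / test_sends))
    z = (p_t - p_c) / se
    # two-sided p-value via the normal CDF, built from math.erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return lift, p_value

# hypothetical example: 10,000 sends each, 2.5% vs. 2.8% conversion
lift, p = lift_and_significance(10000, 250, 10000, 280)
print(f"lift = {lift:.1f}%, p = {p:.3f}")
```

Note that the same percentage lift can be noise on a small test cell and rock solid on a large one--which is one more reason to measure, not eyeball, your results.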
Here are three testing tricks to help you improve your e-mail returns:
1. Develop educated testing hypotheses. Use all the assets at your disposal to inform your testing. Watch trends in e-mail response by age of file and type of e-mail. Test impact on new subscribers vs. long-time subscribers. Test secondary offers in promotional vs. transactional e-mails.
What headlines pull best in search, direct mail and on the Web site? What are the logical merchandise upsells and cross-sells to offer on the landing page? What can you learn from paid keywords vs. your site search that informs your prospect vs. customer communications?
2. Segment. Not all subscribers are created equal.
Millions of dollars of research have gone into understanding the motivations of different generations, and even of segments within them. So why expect one test to your entire, diverse house file to yield meaningful learnings? In particular, test prospect actions separately from customer actions. Set up different tests for recent purchasers and the long inactive, men and women, big spenders and