I have good news for you. You already have it! Every interaction with your consumer is a chance to learn and apply the results to pivot in a different direction.
Given the hyper-competitive environment across all verticals, the future of your brand depends on your continued progress in getting smarter about your consumers’ needs and optimizing experiences for them. Testing is a perfect way to keep learning what delights (or dissatisfies!) your consumers and to change course in an iterative, constant way.
The challenge, though, is getting started and building testing into your daily process. Below are some practical tips on how to approach testing:
When to conduct a test. Ideally, every one-off campaign, trigger and automation would have some form of testing incorporated. If you can’t test everything, prioritize your automations and triggers, since those are ongoing and need to be optimized regularly. From there, test the one-off campaigns with the biggest revenue opportunities or those that are drastically different from campaigns you’ve sent in the past.
The idea here is that you’re maximizing revenue on major campaigns or learning as much as possible from a new approach.
You should build a test plan at the same time you do your campaign planning, so that you’re not scrambling at the last minute during production to make a game-time decision on what to test.
A/B vs. multivariate. A lot of marketers get hung up on deciding if an A/B or multivariate test makes sense for a particular message or program.
To keep it simple, I recommend A/B tests when:
-- There’s a clear question in mind and an answer is needed via test results quickly
-- Operationally, multivariate isn’t possible, given a shortage of team resources
-- The campaign has a smaller audience, thus making many splits for multivariate a bad option from a statistical confidence standpoint
I recommend multivariate when:
-- There isn’t a single, clear question in mind but several, along with a desire to understand which combinations of variables resonate best with consumers
-- The program is an automation, trigger or campaign that will repeat or run in perpetuity and needs to be optimized over time, with results that aren’t time-sensitive
-- The audience is large, or the test can run over time as noted above, until a set statistical confidence level is reached in the results
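To see why a small audience makes many multivariate splits a bad idea, it helps to estimate how many recipients each variant needs before results are trustworthy. The sketch below uses the standard two-proportion sample-size approximation (95% confidence, 80% power by default); the function name and example numbers are my own illustration, not from any specific testing tool:

```python
import math

def sample_size_per_variant(p_base, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate recipients needed PER VARIANT to detect a relative
    lift over a baseline conversion rate (two-proportion test,
    95% confidence / 80% power with the default z-scores)."""
    p_var = p_base * (1 + lift)  # expected rate of the challenger variant
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p_base - p_var) ** 2)

# Detecting a 10% relative lift on a 2% conversion rate takes roughly
# 80,000 recipients per variant -- multiply that by every cell in a
# multivariate grid and a small list runs out fast.
n = sample_size_per_variant(0.02, 0.10)
print(n)
```

An eight-cell multivariate test at those numbers would need well over half a million recipients, which is exactly why small-audience campaigns should stick to a simple A/B split.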
Variables to test. There is a vast number of variables marketers can select for a test: subject line, creative, content, layout/location of elements, colors, cadence, timing, lifecycle stage and more. The list goes on.
The point is that you need to select a variable or set of variables before setting up the test. Make sure you have a clear objective and hypothesis in mind and know what measures you will use to evaluate success in the end. Gretchen Scheiman recently wrote two useful columns that can give you some ideas on variables to test: "5 Easy A/B Tests" and "Stop Testing Subject Lines."
Making sense of results and moving forward. Sometimes a clear winner doesn’t surface when comparing results. In that case, run the test again if you are able to. When a clear winner does emerge, make sure you have at least a 95% statistical confidence level in the results. If you need help calculating statistical confidence, there are many free calculators and Excel templates online to help you with this.
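If you’d rather script the check than use an online calculator, here is a minimal sketch of the two-proportion z-test those calculators typically run under the hood; the function name and example numbers are my own:

```python
import math

def ab_test_confidence(conv_a, n_a, conv_b, n_b):
    """Return the confidence level (1 - two-sided p-value) that the
    conversion rates of variants A and B genuinely differ."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled rate under the null hypothesis that A and B perform the same
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Two-sided p-value from the normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return 1 - p_value

# 120 conversions from 1,000 sends vs. 150 from 1,000 sends:
# confidence comes out right around the 95% threshold.
print(ab_test_confidence(120, 1000, 150, 1000))
```

Anything below your chosen threshold (95% is the common bar) means the "winner" could easily be noise, so rerun the test or keep it running longer before declaring a result.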
Lastly, share the results with colleagues, so the team is informed of what worked and what didn’t — and don’t forget to optimize your marketing efforts based on what you learned.
What is your approach to testing? Have you been able to prioritize a testing plan in your busy day-to-day to improve the consumer experience? Let me know in the comments!