Commentary

4 Ways To Avoid Common A/B Testing Pitfalls

While A/B testing has been touted as the ultimate way to boost sales conversions, many companies that run such tests struggle to move their conversion rates. Indeed, only 22% of businesses report being satisfied with their conversion rates, according to Econsultancy.

Done right, however, the ROI can be game-changing: As detailed in the Harvard Business Review, Bing’s A/B testing drove an annual 10% to 25% increase in revenue per search and helped Bing’s share of U.S. searches soar from 8% in 2009 to 23% in 2017.

Here are four tips to avoid the most common testing pitfalls.

1. Run A/A tests. Has your company seen a control or test version perform very well in an A/B test, only to watch it fall flat when rolled out after testing?

In such cases, a forensic look back at the A/B test often reveals that the sample size was too small or insufficiently representative of the business’s actual audience.
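
To make the sample-size point concrete, here is a minimal pre-test sketch in Python using the standard two-proportion formula; the 4% baseline rate and one-point lift are hypothetical placeholders, not figures from any study cited here.

```python
# A minimal sample-size sketch using the standard two-proportion formula.
# The baseline conversion rate and minimum detectable effect below are
# hypothetical placeholders.
from statistics import NormalDist

def sample_size_per_arm(p_base, mde, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect an absolute lift of `mde`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_alt = p_base + mde
    variance = p_base * (1 - p_base) + p_alt * (1 - p_alt)
    return int((z_alpha + z_beta) ** 2 * variance / mde ** 2) + 1

# E.g., a 4% baseline rate and a one-point absolute lift:
print(sample_size_per_arm(p_base=0.04, mde=0.01))  # roughly 6,700 per arm
```

If a test was declared a winner on far fewer visitors than this kind of calculation suggests, that is a strong hint the result was noise.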

Brands can avoid this mistake by running A/A tests in which the test and control versions are identical. This helps ensure that your audience sampling is fair, representative, and predictive of real-world results. 

Online businesses should run an A/A test to “clean the pipes” at least once a year.
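
To make the idea concrete, here is a minimal sketch of how an A/A result might be checked, assuming you can export per-variant visitor and conversion counts from your testing tool; the counts below are hypothetical.

```python
# A minimal A/A sanity check: both arms served the identical page, so any
# "significant" difference flags a problem with bucketing or sampling.
# The counts are hypothetical.
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = two_proportion_z_test(conv_a=412, n_a=10_000, conv_b=431, n_b=10_000)
print(f"A/A p-value: {p:.3f}")  # should exceed 0.05 the vast majority of runs
```

If identical variants test as “significantly” different noticeably more often than your chosen alpha, the problem lies in the sampling or bucketing, not the page.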

2. Combat cross-contamination. Cross-contamination occurs when a visitor is influenced by factors unrelated to the test, temporarily or permanently pulling the user out of the test funnel. For example, users who leave the checkout flow to read the “shipping and returns policy” may behave differently from those who don’t. If these diversions are not accounted for, they can distort test results.

To ensure that test data reflects actual user behavior, businesses can adopt tooling that restricts the test to users who consistently remain within the funnel.
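
As an illustration, such funnel filtering might look like the sketch below; the page paths and session format are assumptions for the example, not any particular vendor’s API.

```python
# A hypothetical funnel filter: keep only sessions whose page views stay
# within the checkout funnel after entering it, so detours (e.g., to a
# shipping policy page) don't contaminate the comparison.
FUNNEL_PAGES = {"/cart", "/checkout", "/payment", "/confirmation"}

def stayed_in_funnel(pages):
    """True if every page viewed after funnel entry belongs to the funnel."""
    entries = [i for i, page in enumerate(pages) if page in FUNNEL_PAGES]
    if not entries:
        return False  # never entered the funnel at all
    return all(page in FUNNEL_PAGES for page in pages[entries[0]:])

sessions = [
    ["/home", "/cart", "/checkout", "/confirmation"],          # clean
    ["/home", "/cart", "/shipping-and-returns", "/checkout"],  # contaminated
]
clean = [s for s in sessions if stayed_in_funnel(s)]
print(f"{len(clean)} of {len(sessions)} sessions kept for analysis")
```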

3. Make sure customers aren’t influenced by external factors. Academic tests run under sterile, controlled conditions; in the digital world, such conditions are nearly impossible to replicate. Accounting for a participant’s country, language, age, gender, device, browser, digital behavior history, traffic source, and numerous other variables is a daunting task.

Preventing these factors from contaminating test results is even more difficult. However, it’s crucial to scrutinize the test flow and minimize the influence of such external factors. Otherwise, results will be unreliable at best and downright inaccurate at worst.
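
One practical step is to at least verify that such factors are balanced across the two arms. The sketch below flags segments whose variant split drifts from an assumed 50/50 allocation; the field names, toy data, and 10-point tolerance are all illustrative assumptions.

```python
# A hypothetical covariate balance check over (variant, device) assignments
# exported from analytics. Real logs would hold thousands of rows; the
# tolerance of 0.10 is an arbitrary example threshold.
from collections import Counter

assignments = [
    ("A", "mobile"), ("B", "mobile"), ("A", "desktop"), ("B", "desktop"),
    ("A", "mobile"), ("B", "tablet"), ("A", "desktop"), ("B", "mobile"),
]

counts = Counter(assignments)
for device in sorted({device for _, device in assignments}):
    n_a, n_b = counts[("A", device)], counts[("B", device)]
    share_a = n_a / (n_a + n_b)
    flag = "  <-- imbalanced" if abs(share_a - 0.5) > 0.10 else ""
    print(f"{device:8s} A={n_a} B={n_b} share_A={share_a:.2f}{flag}")
```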

4. Beware of bots. Malicious bots account for a staggering share of overall web traffic – 21.8% in 2017, according to bot detection and mitigation firm Distil Networks. Such bots can generate misleading results in A/B tests.

It’s therefore essential to monitor your website’s analytics closely. Has there been a sudden surge in page views? Is traffic coming from unusual sources? Are visitors staying on pages only briefly? Solutions like DDoS protection can thwart bad bots, but there’s no substitute for diligence.
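
As one rough illustration of that diligence, the sketch below flags days whose page views spike far above the trailing average; the figures and thresholds are hypothetical and are not Distil Networks’ methodology.

```python
# A hypothetical traffic-spike monitor: flag days more than `threshold`
# standard deviations above the mean of the preceding `window` days.
from statistics import mean, stdev

daily_pageviews = [10_200, 9_800, 10_500, 10_100, 9_950, 10_300, 48_700]

def spike_days(series, window=5, threshold=3.0):
    """Indices of days that spike far above their trailing baseline."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

print("Suspicious days:", spike_days(daily_pageviews))  # flags the final surge
```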

Meticulous attention to detail is critical for brands to avoid the most common testing mistakes. With greater precision and a rigorous approach to harnessing the best insights, companies can run A/B tests that score big.
