"Most advertisers are doing split testing all wrong," says Dan Thies. "That's because, in a typical A/B split test, what you're doing is keeping your Control (the best performing ad) running, and
creating a new test ad to run against it. Doing it that way is perfectly normal, but it's also completely, utterly, and totally wrong."
Thies' argument is that a new test ad is typically
ranked lower than an established, well-performing control ad (on any engine or ad platform), so by default it receives fewer impressions and a lower click-through rate, which at the end of the test makes it seem
as though the test ad isn't performing as well as the control. Thies says that in reality, the test ad never had a chance.
In addition to skewed data, Thies says that the standard split
test also causes marketers to lose money. "When you run two ads in equal rotation, your Control is only getting half of the available ad impressions," he says. "When a test ad fails to deliver good
click-through and conversion rates, you've just given up as much as 50% of the profits that you would have had if you had just left your ad group alone."
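Thies' arithmetic can be sketched with a quick back-of-the-envelope calculation. The figures below (impression counts, click-through rates, conversion rates, and profit per conversion) are illustrative assumptions, not numbers from the article:

```python
# Illustrative sketch of the profit forgone when a 50/50 split sends half
# of an ad group's impressions to a weaker test ad. All numbers are assumed.

def expected_profit(impressions, ctr, conversion_rate, profit_per_conversion):
    """Expected profit from showing one ad for a given number of impressions."""
    return impressions * ctr * conversion_rate * profit_per_conversion

IMPRESSIONS = 100_000            # total impressions available to the ad group
PROFIT_PER_CONVERSION = 20.0     # assumed profit per conversion, in dollars

# Control ad: the established performer. Test ad: assumed to perform worse.
control_only = expected_profit(IMPRESSIONS, 0.030, 0.050, PROFIT_PER_CONVERSION)
split = (expected_profit(IMPRESSIONS // 2, 0.030, 0.050, PROFIT_PER_CONVERSION)
         + expected_profit(IMPRESSIONS // 2, 0.015, 0.030, PROFIT_PER_CONVERSION))

loss = control_only - split
print(f"Control only: ${control_only:,.2f}")   # $3,000.00
print(f"50/50 split:  ${split:,.2f}")          # $1,950.00
print(f"Given up:     ${loss:,.2f} ({loss / control_only:.0%})")  # $1,050.00 (35%)
```

With these assumed numbers the split gives up 35% of the profit; in the limiting case where the test ad produces no conversions at all, the loss reaches the full 50% of impressions diverted away from the control, which is the ceiling Thies describes.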
Read the whole story at SEO Fast Start »