Making Email, Web Analytics Play Nice: Testing Is Key

While they're generally administered by different overworked, high-strung geeks, Web analytics and email marketing can indeed work together, a panel of specialists agreed Tuesday afternoon during the Direct Marketing Association's Email Evolution conference in San Diego. It's just a matter of getting the two groups to talk to one another.

And they should talk to one another. Moderator Tricia Robinson, vice president of product and market strategy for StrongMail, noted that a recent study from Jupiter Research found that 53% of email recipients who unsubscribe do so because the offer or content doesn't interest them. Meanwhile, a separate Forrester study found that 63% of all recipients say most email ads simply don't interest them, period.

Of course, in cases where email recipients are also registered users of the company's Web site, there's a wealth of data the two channels can share to improve each other's performance. And once the two groups are finally herded into the same room, testing must be at the center of any collaboration, emphasized Akin Arikan, senior segment manager for Internet marketing at Unica, because testing should already be routine in both disciplines.

Arikan, speaking at the conference put together by the DMA's Email Experience Council, recommended A/B testing of, ideally, every aspect of Web site and email creative and delivery, with close attention to standard measures like conversion rates, purchases, and Web browsing behavior, including apparently negative signals like abandoned online "shopping carts" and applications. Arikan noted that a follow-up email campaign by one bank, targeting people who "bailed out" of their account applications, produced a 52% open rate.
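For readers who want to act on that advice, a minimal sketch of how two email variants might be scored against each other appears below. The two-proportion z-test shown is one common way to check whether a difference in conversion rates is real; neither the test nor any of the numbers comes from the panel, and the variant sizes and counts are invented for illustration.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.
    conv_*: conversions per variant; n_*: recipients per variant."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)             # pooled rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))           # two-sided p-value
    return z, p_value

# Hypothetical test: subject line A vs. subject line B, 10,000 recipients each
z, p = two_proportion_z_test(520, 10_000, 460, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # call a winner only if p is small (e.g. < 0.05)
```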

Co-panelist Tanya Hyde, director of professional services for Yesmail, agreed with Arikan that A/B testing is critical to getting Web marketing and email working together. She threw out a few testable examples, including email subject lines, creative, the offer, and the timing and frequency of email delivery, while acknowledging it's nearly impossible to test them all at once. That's why, she said, marketers should set aside a period of at least several months for slow, incremental testing of different iterations.

To support accurate testing, Hyde offered a number of other pointers for marketers, with numbers still the focus. For starters, marketers should quantify daily and weekly email delivery volumes, individual frequency, open rates, and Web site behavior. Then they can quantify subgroups: how many people are abandoning their shopping carts, and, within that group, how many are also in the email program?
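As a rough illustration of that segmentation step, the sketch below intersects a list of cart abandoners (from Web analytics) with an email subscriber list. All addresses and counts are invented; a real program would join on whatever shared identifier the two systems have.

```python
# Hypothetical data standing in for Web-analytics and email-list exports
cart_abandoners = {"a@example.com", "b@example.com", "c@example.com"}
email_subscribers = {"b@example.com", "c@example.com", "d@example.com"}

# Abandoners who are also in the email program are reachable by a follow-up
reachable = cart_abandoners & email_subscribers
print(f"{len(cart_abandoners)} abandoners, {len(reachable)} reachable by email "
      f"({len(reachable) / len(cart_abandoners):.0%})")
```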

Like Arikan, Hyde emphasized that email allows marketers to revive sales prospects who abandoned their "shopping carts," provided it's done correctly. Citing a successful campaign for Hewlett-Packard, Hyde said Yesmail achieved a click-through rate 10 times higher than that of emails that weren't follow-ups. A similar program keyed to Web browsing habits rather than abandoned "shopping carts" achieved comparable results.
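Neither Hyde nor Arikan described how such follow-ups are actually triggered. Purely as a sketch, the logic might resemble the function below, where the 24-hour wait, the data structures, and every value are assumptions rather than anything Yesmail or HP disclosed.

```python
from datetime import datetime, timedelta

ABANDON_WINDOW = timedelta(hours=24)  # assumed wait before following up

def due_for_followup(carts, now, already_emailed):
    """Yield users whose carts sat idle past the window and who haven't
    already received a follow-up. `carts` maps user ID to the timestamp
    of the last cart activity."""
    for user, last_activity in carts.items():
        if now - last_activity >= ABANDON_WINDOW and user not in already_emailed:
            yield user

# Invented example: u1's cart has been idle for 25 hours, u2 was already emailed
carts = {"u1": datetime(2008, 2, 4, 9, 0), "u2": datetime(2008, 2, 5, 8, 0)}
print(list(due_for_followup(carts, datetime(2008, 2, 5, 10, 0), {"u2"})))  # -> ['u1']
```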

But Hyde also warned against a possible "Big Brother effect," in which consumers react negatively to the idea of being watched by advertisers when items they browsed on a Web site reappear later in an email. In fact, HP investigated the potential for such backlash through coordinated Web and email testing involving different creative content, although neither she nor HP email marketing specialist Britt Cutsforth disclosed the actual results.
