Why The Open Rate Must Die
Using the open rate to measure email marketing performance is akin to the music industry measuring sales based on the number of CDs sold. It just doesn't reflect how people buy music today.
How did we get here? The email open rate became a useful metric once HTML messages became the primary email format, because it counted each time a tracking image loaded when the recipient opened an email.
This allowed marketers and publishers to measure how many people were "reading" their emails, distinct from those who also clicked on the email. Still, this metric was not a great measure because it didn't distinguish between someone who reads every word in a newsletter and someone who opens and then deletes immediately.
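The mechanism described above can be sketched in a few lines of Python. This is purely illustrative: the names (`pixel_url`, `on_pixel_request`, the `tracker.example.com` domain) are hypothetical, not any real email platform's API.

```python
# Minimal sketch of pixel-based open tracking (illustrative names only).
# The sender embeds a unique tracking-image URL per recipient; when the
# image is fetched, the server logs an "open" for that recipient.

open_log = {}  # recipient_id -> number of times the pixel loaded


def pixel_url(recipient_id: str) -> str:
    """Unique 1x1 image URL embedded in the HTML email (hypothetical domain)."""
    return f"https://tracker.example.com/pixel.gif?rid={recipient_id}"


def on_pixel_request(recipient_id: str) -> None:
    """Called when the tracking image is fetched, i.e. images rendered."""
    open_log[recipient_id] = open_log.get(recipient_id, 0) + 1


# A recipient whose client renders images fires the pixel:
on_pixel_request("alice")
# A recipient who reads the same email with images blocked never
# fires it, so no open is ever recorded for "bob".
print(open_log)  # {'alice': 1}
```

The gap is plain from the sketch: an "open" is recorded only when the image loads, so blocked images, text-only rendering, and preview panes all distort the count.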
As soon as the major ISPs and email clients began disabling images by default, though, the open rate's accuracy and value plummeted. When combined with growing use of the preview pane, emailers got the double whammy of metrics mush.
Various studies (including my own from a few years ago) suggest that both consumer and business email users view a quarter to half of their emails in a preview pane, and that 50% to 60% block images by default.
Now, add text emails to the mix. Someone, please, tell me: how can open rates be an accurate and meaningful measure of anything other than the percentage of your HTML emails in which images rendered?
Here are some real-world examples of the inaccuracies and inconsistencies of email opens:
· The email is "opened" (launched), but images are blocked: not counted as an open
· The email is not opened (launched), but images are enabled and it is read in the preview pane: counted as an open
· The text version of a multi-part message is read on a BlackBerry. The HTML version (with images blocked) is later opened in Gmail (or other email service/client). The email has been opened and read twice -- but zero opens are recorded.
· A text version is opened and read but not clicked: not counted as an open
· A text version is opened and read, and the user clicks a link: not counted as an open by some email software; other programs infer an open from the click.
I think you get my point. With marketers increasingly being held accountable for their marketing spends and actions, do they really want to base performance reports and marketing decisions on such a flawed and inconsistent metric?
Further, the open rate is a process metric that does not measure return on investment or how well the campaign helped you achieve a strategic initiative for your company. Showing how much email contributes to the bottom line, not how many people opened your email, will help you secure a bigger share of the marketing budget.
So, what good is the open rate? Some of my peers line up behind it, agreeing that it's flawed but still provides some diagnostic value (e.g., click-to-opens). Maybe I'm getting crotchety in middle age, but, come on, folks, we can do better than "well, it sort of works."
In a time when email continues to fight for its rightful place in the marketing mix and budget, our industry must communicate the channel's success with accurate and meaningful success metrics.
Over the years, I've tried to get marketers to think beyond the open rate to measure email performance, from replacing it with the more useful click-to-open rate, to creating "balanced scorecards." I'm now acting as a champion of both change and standardization as co-chair of the Email Experience Council's Measurement Accuracy Roundtable with David Daniels, vice president of JupiterResearch.
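For readers unfamiliar with the click-to-open rate mentioned above, here is a rough illustration. The formulas are the standard ratios; the function names and sample numbers are my own, not an industry specification.

```python
def open_rate(unique_opens: int, delivered: int) -> float:
    """Recorded opens as a share of delivered emails -- the flawed metric."""
    return unique_opens / delivered


def click_to_open_rate(unique_clicks: int, unique_opens: int) -> float:
    """Clicks as a share of recorded opens: gauges how the content
    performed among the people we know actually rendered the email."""
    return unique_clicks / unique_opens


# Hypothetical campaign: 10,000 delivered, 2,000 recorded opens, 500 clicks.
print(round(open_rate(2_000, 10_000), 2))        # 0.2
print(round(click_to_open_rate(500, 2_000), 2))  # 0.25
```

Because its denominator is recorded opens rather than delivered messages, the click-to-open rate sidesteps some (though not all) of the image-blocking distortion described earlier.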
If you are passionate about helping the industry speak with a common metrics language, please join David and me in the Measurement Accuracy Roundtable.
Whether you decide to join the Roundtable or not, I'd love to hear your thoughts on the email open rate in the comments box below: Is it a tired old metric that is ready for retirement, or a slightly flawed but still valuable tool in the email metrics toolbox?