For most marketers, there is only one metric that counts: conversions. This usually means sales: how many sales did that banner ad or email blast bring in? How many newsletter sign-ups or Web video pass-alongs did a MySpace campaign generate?
Out of ignorance or laziness, that's usually about as far as most executives
get when it comes to exploring better ways to improve their online operations.
If you're using paid media, your site report and the reports generated by your third-party ad server may not be
enough to adequately measure the return on investments made in online display ads, search and email campaigns. A great many marketers who are bullish on paid search point to their successful conversion rates, but those rates can also be misleading. That is especially damaging for organizations that, in a recessionary economy, have to dial down the dollars they spend on online marketing.
Sticking to what seems to have worked in the past may be a losing strategy.
Those marketers tempted to go the less expensive route and just purchase search rather than bother with display
advertising may be sacrificing more than lift — they may lose the whole ball game. That's what Brian McAndrews is telling us.
And Brian McAndrews isn't just anybody. He's held
top-level positions at aQuantive and Microsoft. So we should pay attention. McAndrews says search is given too much credit because of the way the effectiveness of ads is measured. When an online
transaction takes place, the sale is attributed to the last ad viewed, which is most often a search ad.
"Since search logically is often the last thing people do, it's arguably getting more credit than it deserves," he says. "It's probably overvalued now." For example, if a customer sees a banner promoting a product on MSN and watches a related video on Time Warner
Inc.'s AOL and then searches for the brand on Google before making a purchase, only Google gets credit for the sale.
There has been much discussion of the "last click" issue, with
conjecture that each industry might have its own formula for weighting the "assists" that lead to a conversion. Most agreed that the "last click" has been over-attributed, generally to Google's benefit.
Until very recently, neither Atlas (part of aQuantive, now Microsoft) nor DoubleClick (now a part of Google) could see beyond the last click to show the role of display
ads and other search terms in contributing to the final click. Now, Atlas, DoubleClick, Eyeblaster and others offer tags such as Atlas' Universal Action Tag that reveal the path the user
took to get to a conversion. It's this newly available data, McAndrews argues, that will diminish the relative importance of search and boost other kinds of online advertising.
This does not diminish the importance of search marketing; rather, it should let us see the interrelationships among media exposures and plan accordingly. Our old colleague Rick Bruner (with DoubleClick, and now with Google) and John Chandler of Atlas recently teamed up for a presentation on "attribution management" that showed how adding display lifted conversions by 22 percent, even though about 70 percent of the "final clicks" came directly from search links.
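To make the "assist" idea concrete, here is a minimal sketch of weighted, position-based attribution. The weights, channel names and touchpoint data are hypothetical, not the formula Atlas or DoubleClick actually uses; the point is simply that credit for a conversion is split across every ad exposure on the path instead of handed entirely to the last click.

```python
from collections import defaultdict

def attribute_conversion(touchpoints, last_weight=0.4, first_weight=0.2):
    """Split one conversion's credit across a path of ad exposures.

    touchpoints: ordered list of channels the user saw before converting,
    e.g. ["display", "video", "search"]. Weights are illustrative only:
    40% to the last touch, 20% to the first, the rest spread evenly.
    """
    credit = defaultdict(float)
    if not touchpoints:
        return credit
    if len(touchpoints) == 1:
        credit[touchpoints[0]] = 1.0
        return credit
    credit[touchpoints[-1]] += last_weight
    credit[touchpoints[0]] += first_weight
    middle = touchpoints[1:-1]
    remaining = 1.0 - last_weight - first_weight
    if middle:
        for channel in middle:
            credit[channel] += remaining / len(middle)
    else:
        # Only two touches: give the remainder to the first exposure.
        credit[touchpoints[0]] += remaining
    return credit

# The MSN banner / AOL video / Google search path from the example above:
print(dict(attribute_conversion(["display", "video", "search"])))
# search gets 40% of the credit instead of 100%; display and video keep the rest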
Why Even Good Numbers Lie
A very interesting
research project done in summer 2007 by Jim Sterne of Stone Temple Consulting compared different Web analytics tools on the same set of Web sites. The test looked at four different Web sites with
analytics tools supplied by ClickTracks, Google Analytics, IndexTools, Unica's Affinium NetInsight, Visual Sciences' HBX Analytics, Omniture's SiteCatalyst and WebTrends. With the
exception of the last two, engineering support was provided by the tools' developers to assist in the test. The results: Different tools installed on the same Web site produce different numbers.
In some cases the variance was staggering, as high as a 150 percent difference, according to Sterne.
A few factors cropped up to explain the variances. One involved where the JavaScript was placed on the site: if the tracking code sits low on the page (below the fold, you might say), a slow load may mean some visitors leave the page before the JavaScript can execute. The data for that visit is then lost, and data about the original landing page, as well as keyword data (if the visitor came in from a search engine), is never recorded or counted.
Time lag in counting also made a difference. In the test, Google sometimes counted more, but it generally counted faster: the Google tool executed its count in an average of 0.07 seconds, while the next closest, IndexTools, took 4 seconds. A differential of nearly four seconds can be significant, considering that a Web user can click onto a page, look around and depart in far less time than that.
The study was useful in that it uncovered typical reasons why a metric might not be counted properly. First among them: tagging errors. Mistakes in placing the JavaScript on a Web page turn out to be a common problem when every page has to be tagged, and retagged whenever pages are updated, added or removed.
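Because missing or misplaced tags are such a common failure, a simple audit can catch many of them. The sketch below is an illustration, not a substitute for a vendor's own site-audit tools; it assumes a site small enough to check from a list of known URLs and a tag snippet you can match as plain text, and it flags any page where the expected JavaScript never appears.

```python
import urllib.request

# A fragment of the analytics tag we expect on every page. Both the
# snippet and the URL list are placeholders for your own site.
EXPECTED_TAG = "analytics.example.com/track.js"
PAGES = [
    "http://www.example.com/",
    "http://www.example.com/products.html",
    "http://www.example.com/contact.html",
]

def audit_pages(pages, expected_tag):
    """Return the pages whose HTML does not contain the tracking snippet."""
    missing = []
    for url in pages:
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError as err:
            print(f"could not fetch {url}: {err}")
            continue
        if expected_tag not in html:
            missing.append(url)
    return missing

if __name__ == "__main__":
    for url in audit_pages(PAGES, EXPECTED_TAG):
        print("tag missing on", url)
```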
Metrics for Web 2.0
Keeping track of viral programming is easier if materials are tagged before they are sent off into the World Wide Web. That helps with tracking, but analysis of the resulting data remains imprecise.
Jodi McDermott is one of several Web gurus proposing that the effectiveness of a widget can be quantified by combining three types of data: the number of times a widget is imported into a new URL environment ("placements") plus the number of requests to view a widget ("views"), divided by the number of unique visitors to the Web site ("unique visitors").
Viral videos
can also be tracked by tagging their URLs with a bit of extra code, a method favored by viral consultants like Dan Greenberg.
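In practice, "a bit of extra code" often means appending tracking parameters to the link before it is shared. The parameter names below (campaign, source) are hypothetical rather than any specific vendor's convention; the point is that every shared copy of the URL carries labels an analytics tool can later read back out.

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def tag_url(url, **params):
    """Append tracking parameters to a URL, preserving any existing query."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update(params)
    return urlunparse(parts._replace(query=urlencode(query)))

print(tag_url("http://video.example.com/watch?id=123",
              campaign="spring_launch", source="email"))
# http://video.example.com/watch?id=123&campaign=spring_launch&source=email
```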
A loyalty index may provide some measurement of how engaged
unique visitors are with a branded Web site. One clue to this may be return rate. David Smith of Mediasmith recently compared two high-volume Web sites: the networking site Facebook and a pure news-and-information site, about.com. In the time period measured, Smith noted that more than 60 percent of Facebook users were returning to the site more than 30 times in a month, or at least once a
day, while just 2 percent of about.com's visitors came back on a daily basis.
Again, this doesn't tell the whole story. The other 98 percent of about.com visitors presumably found the answer to their specific question and didn't need to come back to the site. And the frequent visitors to Facebook may have simply been bored post-teens
briefly checking their status on a cell phone, rather than active participants in the online social whirl. Duration data might help here, but that still doesn't quantify how satisfied visitors to
either site might feel about the experience.
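A return-rate figure like the ones Smith cites can be computed from a simple visit log. The sketch below is illustrative rather than Mediasmith's methodology: it assumes you have (visitor_id, date) pairs covering one month and counts the share of unique visitors seen on at least some threshold number of distinct days.

```python
from collections import defaultdict
from datetime import date

def daily_return_rate(visits, min_days=30):
    """Fraction of unique visitors seen on at least `min_days` distinct days.

    visits: iterable of (visitor_id, date) pairs covering one month.
    """
    days_seen = defaultdict(set)
    for visitor_id, visit_date in visits:
        days_seen[visitor_id].add(visit_date)
    if not days_seen:
        return 0.0
    loyal = sum(1 for days in days_seen.values() if len(days) >= min_days)
    return loyal / len(days_seen)

# Tiny hypothetical log: one visitor comes back daily, another visits twice.
log = [("a", date(2009, 3, d)) for d in range(1, 31)]
log += [("b", date(2009, 3, 1)), ("b", date(2009, 3, 15))]
print(daily_return_rate(log, min_days=30))  # 0.5
```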
Eric Peterson is one of several people contemplating how best to measure engagement through purely statistical analysis. His formula draws on some new and interesting metrics that resemble those used to quantify search engine rankings, including "click depth," a measure of how many visitors move beyond, say, the first two or three pages of a Web site, and "recency," a measure of how recently repeat visitors have returned within a given time frame.
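As an illustration of how those two components might be scored (a simplified sketch, not Peterson's published engagement formula), the function below turns a visitor's most recent session into a click-depth flag and a recency flag and averages them:

```python
from datetime import date

def engagement_score(pages_viewed, last_visit, today,
                     depth_threshold=3, recency_days=7):
    """Average of two illustrative components:

    click depth - did the most recent visit go beyond `depth_threshold` pages?
    recency     - did the visitor return within the last `recency_days` days?
    """
    click_depth = 1.0 if pages_viewed > depth_threshold else 0.0
    recency = 1.0 if (today - last_visit).days <= recency_days else 0.0
    return (click_depth + recency) / 2

# A visitor who viewed five pages and last came back three days ago.
print(engagement_score(5, date(2009, 3, 27), date(2009, 3, 30)))  # 1.0
```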
Adapted from Digital Engagement: Internet Marketing that Captures
Customers and Builds Brand Loyalty by Leland Harden and Bob Heyman. Copyright © 2009 Leland Harden and Bob Heyman. Published by AMACOM Books, a division of American Management Association, New
York, NY. Used with permission. All rights reserved. http://www.amacombooks.org.