Many advertising executives, including myself, have made the argument that even with all the exciting, sophisticated measurement capabilities available online, there are still some significant
limitations for advertisers with respect to calculating a hard ROI on every impression. Brand marketing fundamentals remain critical to overall marketing success, even online.
Specifically, I want to drill into two examples that support this thesis, each illustrating a key underlying point:
- Valuable does not necessarily mean directly measurable; value can be difficult to measure.
- No model can substitute for domain experience and common sense; models aren't always right.
One of the strongest examples of
"valuable does not necessarily mean directly measureable" can be found in a piece of research recently published by the Atlas Institute. The study concludes that
about 60% of all paid-search clicks are on branded terms. That number is simply astounding.
Obviously, Atlas' parent company, Microsoft, has a
vested interest in moving the market away from an excessive focus on Search, so skepticism is warranted. However, even if we cut that percentage in half, the inescapable conclusion is that a lot of
the money that brand marketers are spending on other media (online and offline) is having an impact -- even if we can't measure it precisely.
The other, equally inescapable conclusion is that marketers who spend only on search are losing potential sales to marketers who use the full funnel. A consumer who begins by searching for "Campbell's Soup" is surely more likely to end up buying Campbell's Soup -- rather than Progresso or any other brand -- than one who begins by searching for just "soup."
As we improve online brand measurement capabilities, we will learn more about exactly how much that difference is worth at the cash register, but in the meantime we certainly shouldn't ignore the obvious value.
To demonstrate the second point, "no model can substitute for domain experience and common sense," I'll offer an example from my own experience at Yahoo. Early in my tenure, when I ran pricing and yield management for the global display business, we converted the "house ad" allocation system from a framework based on perceived "need" to a budget system grounded in economic value to Yahoo. Premium service business units (e.g., domains and hosting, personals) that
wished to consume otherwise salable display inventory for promotion were given internal media budgets ("purple $") and treated just like external customers (same pricing, operational rules,
etc.). The budgets were set during the annual planning process, and reflected the economic performance of the different businesses combined with overall strategic priorities -- just
like the budgeting process for cash and head count did. This new accountability intensified interest in measurement and optimization, which was a key objective. So far, so good.
One particular business unit (BU) had several advantages in this process. At the time, it had the largest cash and in-kind budget of the premium services units, dedicated analytical headcount, and a highly measurable, 100% online "supply chain." The BU team had also invested in a variety of highly customized and targeted capabilities for driving users toward conversion. The effectiveness of each execution was measured using a complex attribution model, and results were compared across executions on the basis of cost per acquisition (CPA).
As the level of budget accountability increased, the BU marketing team tested shifting its budget dramatically from top-of-funnel to bottom-of-funnel media strategies, since the latter showed significantly better performance according to the CPA attribution model it had developed.
This worked great for a month or so.
After that, not only did the volume at the bottom of the funnel dry up, but the performance of the bottom-of-funnel strategies also declined; the BU got fewer leads and converted a smaller fraction of them. Not surprisingly, the overall financial performance of the BU (which relied heavily on network promotion) declined sharply, and the BU team quickly moved back toward a more balanced strategy.
The bottom line: the attribution model that the BU
was using, although thoughtful and sophisticated, didn't accurately value the top of the funnel.
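To make that concrete, here is a deliberately simplified sketch in Python (a hypothetical last-touch-style CPA comparison, with invented channel names and numbers rather than the BU's actual model or data) of how this kind of comparison can make bottom-of-funnel spend look far more efficient than upper-funnel spend:

```python
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    spend: float               # media spend attributed to this channel
    credited_conversions: int  # conversions credited to it under last-touch attribution

def cpa(ch: Channel) -> float:
    """Cost per acquisition as a naive last-touch model would report it."""
    return ch.spend / ch.credited_conversions

# Invented numbers: upper-funnel media fills the pipeline but rarely gets
# credit for the final click, so its reported CPA looks poor.
channels = [
    Channel("upper-funnel display", spend=500_000, credited_conversions=1_000),
    Channel("lower-funnel retargeting", spend=100_000, credited_conversions=2_000),
]

for ch in sorted(channels, key=cpa):
    print(f"{ch.name:25s} reported CPA = ${cpa(ch):,.2f}")

# The lower-funnel channel looks roughly 10x "cheaper" -- but the model never
# prices in the fact that its conversions depend on upper-funnel volume.
```

The model rewards whichever execution touches the conversion last; cut the upper-funnel spend it silently depends on, and both volume and conversion rates eventually fall, which is exactly what the BU rediscovered.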
According to other Atlas Institute research, this result is not
uncommon.
The overarching lesson here is not that "accountability is bad." Accountability is always good.
However, accountability should never be a mandate (or an excuse) for doing only what can be precisely measured. Just because we can't measure something as precisely as we would like doesn't mean it isn't valuable. That's point #1.
Point #2 is that a measurement is only as good as the model on which it's based. Models are a complement to, not a substitute for, experience and intuition.
There's an interesting parallel to be drawn with the current financial
crisis. An army of "quants" had built complex and impressive models explaining how return could finally be separated from risk. Some experienced investors, following their own common sense, avoided
this trap -- Warren Buffett comes to mind. Often common sense is the best sense of all.