The process of identifying proper, actionable key performance indicators (or KPIs) is arguably the most under-appreciated ingredient of a well-rounded search marketing program. On-target KPIs empower the SEM to act on intelligence mined from campaign data. Faulty KPIs can leave the organization confused or frustrated with its results.
A big reason why so many program KPIs are underwhelming is that many practitioners lack the skills required to construct a comprehensive, strategic search program, and so opt to avoid that conversation altogether. It doesn't help that search is often thought of as a silver-bullet tactic, where a handful of ads and keywords paired together will yield strong click-through and conversion. The result is that many SEMs continue to embrace a "set it and forget it" mindset that worked once upon a time, when the landscape was easier to navigate and competition for clicks was a fraction of what it is today.
But today's elite SEMs recognize the importance of well-rationalized KPIs, and work hard to define precisely what they hope to accomplish before any unnecessary time or money is spent. So let's crack the code on Killer KPIs, beginning by defining the requisite vocabulary:
KPIs - primary indicators of the overall health and well-being of the search program. These data points are clearly stated and allow the marketer to act when either opportunity or under-performance is identified. Think of the best KPIs as being no different from what's displayed across a car's dashboard: data that informs the driver when it's time to get more gas or slow down, or even that a door is slightly ajar.
Baseline - the starting value of a KPI. Baselines are often discussed when launching entirely new search programs, where benchmark and/or trend data does not yet exist.
Benchmark - the average value of a KPI over a period of time. For example, if the average cost per lead (CPL) in 2010 for an online business was $25, then $25 is the benchmark against which future time periods will be compared.
Trends - the progression of KPI data over time. It's important to note that individual movements up or down do not constitute a trend; only sustained movement observed over a longer period does.
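The distinctions among these three terms can be made concrete with a little arithmetic. The sketch below, in plain Python with made-up monthly CPL figures, treats the first period's value as the baseline, the period average as the benchmark, and a sustained move across a trailing window as the trend signal. The numbers and the three-month window are illustrative assumptions, not prescriptions.

```python
from statistics import mean

# Hypothetical monthly cost-per-lead (CPL) figures for one year
monthly_cpl = [30.0, 28.5, 27.0, 26.0, 25.5, 25.0,
               24.5, 24.0, 23.5, 23.0, 22.5, 22.0]

baseline = monthly_cpl[0]       # starting value of the KPI
benchmark = mean(monthly_cpl)   # period average: the comparison point

# A single month above or below the benchmark is noise; call it a
# trend only when a sustained trailing window moves the same way.
window = monthly_cpl[-3:]
trending_down = all(m < benchmark for m in window)

print(f"baseline:  ${baseline:.2f}")    # $30.00
print(f"benchmark: ${benchmark:.2f}")   # $25.12 (rounded)
print(f"last 3 months all below benchmark: {trending_down}")  # True
```

The same logic applies to any KPI with a time series behind it; only the metric and the window length change.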
Just bringing clarity to these terms is crucial to understanding the role of KPIs overall.
The first step is to recognize what the organization is hoping to accomplish through its investment in search. All too often we hear clients telling us that they "want more traffic" or "want more sales" as a result of their investment. But let's be honest: those are bogus goals. To be actionable, a goal needs to give us something more concrete to work with.
Ideal objectives would look more like, "We'd like to enhance our marketplace awareness through increasing our search-referred visit count" or "We'd like to boost profitability by lowering our cost-per-lead by $10 this year compared to last." These objectives are ones we can support with actionable KPIs like "Share of Search Clicks" or "Search-Referred CPL Trend."
We've also found that objectives this clear often aren't known, or that the program being discussed is the first of its kind. In those situations, we highly recommend leveraging the first 60 to 90 days of in-market activity to establish an initial baseline for any KPI. That baseline then becomes the data point against which the program is judged from that moment on. This approach is typically preferred even when industry-wide benchmark data is available.
There's nothing as pure as our own data.
A final consideration to keep in mind is that search plays a very broad role across the entire buy cycle. In fact, it's rare to find a communications problem that can't in some way be solved by a relevant presence across the search engines. So the keys to defining mission-critical search KPIs are:
1) assess the challenges facing the organization;
2) determine the most appropriate role search can play to assist with those challenges;
3) anticipate the resultant data that will then be available; and
4) focus on and optimize against the metric mash-ups that best highlight program successes.