The Assist: Revolution or Quagmire?
To define an e-Marketing assist, let's use the example of e-commerce retail marketers who want to drive sales online. They buy search placements, display placements and trigger a viral campaign to get traffic to their sites and sell a product. They use typical click, conversion and other metrics from their analytics program to assess success and optimize. Rather straightforward, yes? But most adservers (there are exceptions) work on an assumption that more are starting to challenge. Most adservers ascribe the "credit" for a sale to the last known instance in which a consumer clicked or viewed an ad (to keep this simple, I will ignore the view-through challenges, which are very well-documented). So, if you have 10 sites in your plan, but MSN was the last place a consumer saw your ad before a sale, your adserver assumes that MSN "triggered" the sale, even if that same consumer saw two instances of the same ad on two other sites before seeing it on MSN.
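To make the adserver's assumption concrete, here is a minimal sketch of last-touch attribution. The site names and the exposure path are hypothetical, invented purely for illustration; the point is only that the final exposure absorbs all of the credit, no matter how many earlier touches occurred.

```python
from collections import Counter

# Hypothetical exposure path for one consumer: each entry is the site
# where an ad impression (view or click) occurred, in time order.
# The sale happens after the final touch.
path = ["SiteA", "SiteB", "SiteA", "MSN"]

def last_touch_credit(path):
    """Last-click/last-view attribution: the final exposure gets 100% of the credit."""
    credit = Counter({site: 0.0 for site in path})
    credit[path[-1]] = 1.0
    return dict(credit)

print(last_touch_credit(path))
# MSN receives full credit; SiteA's two earlier exposures receive none.
```

Under this model, the two SiteA exposures might as well not have happened, which is exactly the assumption being challenged.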
However, if you have mined cookie-level data, and if you know that repeat frequency is a staple (and, in many cases, a strength) of any advertising effort, you may ask -- why should the last-viewed or -clicked ad get the credit for the sale? By this logic, to switch back to basketball, Steve Nash should never have won an MVP award, and no one should care that he consistently leads the league in assists by passing the ball to teammates in places where they have a higher probability of scoring.
Or, perhaps the reality is that without an "assist" exposure, the MSN exposure would never have triggered a sale.
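One simple way to give "assist" exposures some credit is linear attribution, where every touch in the path shares the credit equally. This is just one of several possible models (position-based and data-driven weightings are others), and the site names below are hypothetical, used only to contrast with the last-touch outcome.

```python
# Hypothetical exposure path for one consumer, in time order.
path = ["SiteA", "SiteB", "SiteA", "MSN"]

def linear_credit(path):
    """Linear attribution: every exposure in the path shares credit equally."""
    share = 1.0 / len(path)
    credit = {}
    for site in path:
        credit[site] = credit.get(site, 0.0) + share
    return credit

print(linear_credit(path))
# {'SiteA': 0.5, 'SiteB': 0.25, 'MSN': 0.25}
```

Here SiteA, which last-touch attribution valued at zero, actually earns the largest share, precisely because it supplied the assists.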
It has been more than a year since David Berkowitz of 360i found that consumers in his study searched to start a purchase cycle, consumed a few more ads, then often ended with a search before triggering a sale. If true on a mass scale, this could really redefine which actions, or combinations of actions, are truly driving ROI in e-marketing.
But have we arrived at a place of greater uncertainty, driven by an exponential increase in complexity? How can we know what combinations drive value? If one were to dig deeper into the basketball definition of an assist, one would find a line that would horrify any quantitatively inclined person: "The decision [to define an assist] rests with the judgment of the official game statistician." Whoa? What? A human decides what counts as a valuable pass? Steve Nash's 11.8 assists a game exist because a human, with all the imperfections of a human, decides they do? Do we need to adjust Steve Nash's "true value," and if so, how?
Perhaps I'm an optimist, but while this does bring up interesting challenges in the short term, I believe it's ultimately going to help solidify marketing efforts in the future. Imagine unearthing industrywide trends that suggest Google, in tandem with two very specific sites, works better than Google alone for a given advertiser. (I will set aside studies published by Yahoo and others urging advertisers to buy their search and display products in tandem, simply because of the bias of the source; I concur there may be some value there, but what we really need is much broader, industry-wide, unbiased evidence.) As an industry, to the extent that we mine multiple-exposure data and arrive at some stable patterns, we will refine the concept of the "assist" and possibly arrive at truly exciting insights. We may need standardized, actuarial-level talent in order to get there, which also spells interesting implications for the analytic talent of the future.
Or maybe as someone who played a guard position in high school basketball, I just have a soft spot for the players that assist the top scorers, even if the top scorers get the spotlight.
What do you think?