The true accountability of our medium lies in the primary success metrics that we measure and are held to for each of your campaigns. The goals of your marketing center on sales or market share, while the goals of your advertising efforts focus primarily on conversion to a sale, a lead, or some other more immediate metric. Most online campaigns are managed to this as well. Conversion is measured in two categories: Click-Through Conversion and View-Through Conversion. We all measure them, but do we truly understand them? And if we feel that we do understand them, how do we properly apply them?
First, let us properly define the two types of conversions. A Click-Through Conversion is an action that follows an initial click-through on an ad unit. Within click-through conversion there are two types: impulse conversion and latent conversion. An impulse conversion occurs in the same session as the initial click-through, whereas a latent conversion occurs when a consumer clicks on an ad and buys in a later session. This latency is typically measured within 30 days by most ad servers, but the window can be adjusted to fit your needs.
A View-Through Conversion occurs when a person is exposed to an ad unit, does not click through, but later comes back and makes a purchase or completes some other desired action. This is also typically set up with a 30-day measurement period, but it should probably not be set that high, as it may take credit for a conversion that is the compound result of multiple forms of media.
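To make these definitions concrete, here is a minimal sketch of how an analytics layer might classify a single conversion event. The event shape, field names, and the attribution rule (a click outranks a view; "impulse" means same session) are illustrative assumptions, not any particular ad server's logic.

```python
from datetime import datetime, timedelta

# Assumed lookback windows; most ad servers default to ~30 days but let you adjust them.
CLICK_WINDOW = timedelta(days=30)
VIEW_WINDOW = timedelta(days=30)

def classify_conversion(conversion_time, session_id,
                        last_click=None, last_impression=None):
    """Classify one conversion as impulse click-through, latent click-through,
    view-through, or unattributed. `last_click` / `last_impression` are
    (timestamp, session_id) tuples for the most recent ad touch, if any."""
    if last_click:
        click_time, click_session = last_click
        if conversion_time - click_time <= CLICK_WINDOW:
            # Same session as the click -> impulse; a later session -> latent.
            return ("impulse click-through" if click_session == session_id
                    else "latent click-through")
    if last_impression:
        view_time, _ = last_impression
        if conversion_time - view_time <= VIEW_WINDOW:
            return "view-through"
    return "unattributed"

# Example: a purchase three days after a click, in a new session.
print(classify_conversion(datetime(2009, 6, 4, 10, 0), "sess-B",
                          last_click=(datetime(2009, 6, 1, 9, 30), "sess-A")))
# -> "latent click-through"
```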
The biggest question regarding these types of conversions is: what are the industry standards, and how do we apply them to our campaigns?
The answer lies in the type of media that you are utilizing. Overall, Search drives nearly all (99-100%) of its conversions as impulse click conversions. Graphical advertising is different: it seems to drive about a 35-45% click conversion rate (a combination of impulse and latent conversion) on most efforts, while as much as 65% of conversions come from view-through. This matters when you apply it to the optimization of a campaign, because you then have to project the breakout of these differing conversion types and plan accordingly. Optimization decisions need to be based on a slightly longer-term view rather than on an immediate, instinctual decision path. If a graphical placement is not performing very well, but the typical decision-to-purchase cycle is three weeks, then you should wait at least three to four weeks before canceling that placement. On the other hand, if you are focusing your ad budgets on search, then you should be able to make quick decisions and optimize almost immediately.
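As an illustration of that projection, suppose a graphical placement has generated 40 click-through conversions so far. Using the rough breakout above (roughly 40% of conversions arriving via click, the rest via view-through), a back-of-the-envelope projection might look like the sketch below; the split and the decision-cycle length are assumptions you would replace with your own data.

```python
def project_total_conversions(click_conversions, click_share=0.40):
    """Project total conversions (click + view-through) for a graphical
    placement from observed click conversions, assuming click conversions
    make up ~40% of the eventual total."""
    return click_conversions / click_share

def ready_to_optimize(days_live, decision_cycle_days=21):
    """Only judge a placement after at least one full decision-to-purchase
    cycle (plus a one-week buffer) has elapsed."""
    return days_live >= decision_cycle_days + 7

observed_clicks = 40
print(project_total_conversions(observed_clicks))  # ~100 total conversions expected
print(ready_to_optimize(days_live=14))             # False: too early to cancel the placement
```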
These decisions are based on our understanding of behavior, and if you are going to optimize a campaign then you need to understand your customers' behavior patterns and the decision cycle itself. Once you understand these factors, you can start to focus attention on the types of media placements that you are recommending and on how they should be planned and optimized. If you optimize too quickly and do not consider these elements in your placements, then you may miss something that is valuable to the growth of your efforts and may remove something that could work well in the future.
Another consideration is the measurement window itself. For click conversions it is easy to accept that the window should be opened to 30 days, or whatever you see as the typical measurement window. For view-through conversions, though, we should be realistic about how much of that window we can take credit for. If you are running a strong, cross-media effort, then you need to understand how your target uses the different elements and how their exposure and decision path are affected. If you are only running in the online space, then perhaps you can take credit for up to 30 days; but if you are running print and TV and building frequency across media, then a full 30-day window is probably not the best measurement to use, as you may be taking credit for something you shouldn't. It's key to remember that at the end of the day we are all working together, trying to prove how all media works to drive awareness and generate a reaction from your target. It is not useful to bicker about who gets credit for what; rather, we should be incentivized to prove that we can all work together effectively.
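One practical way to stay conservative here is to re-credit your view-through conversions under a shorter window and compare the totals before claiming them. The sketch below does exactly that; the seven-day comparison window and the sample data are arbitrary examples, not an industry standard.

```python
from datetime import datetime, timedelta

def credited_view_throughs(events, window_days):
    """Count view-through conversions whose impression-to-conversion gap
    falls within the chosen lookback window. `events` is a list of
    (impression_time, conversion_time) pairs."""
    window = timedelta(days=window_days)
    return sum(1 for imp, conv in events if conv - imp <= window)

# Hypothetical impression/conversion pairs from one cross-media flight.
events = [
    (datetime(2009, 6, 1), datetime(2009, 6, 3)),   # 2-day gap
    (datetime(2009, 6, 1), datetime(2009, 6, 10)),  # 9-day gap
    (datetime(2009, 6, 1), datetime(2009, 6, 25)),  # 24-day gap
]
print(credited_view_throughs(events, 30))  # 3 credited under a 30-day window
print(credited_view_throughs(events, 7))   # 1 credited under a tighter 7-day window
```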
At the end of the discussion, the most important thing to remember is that you need to take behavior into account when doing your analysis. The days of straightforward DR (direct response) analytics are going away as we enter a more interesting state of DR analysis that has to factor in more intangible elements. In the initial stages of interactive advertising the optimization process could be automated, and companies like Flycast developed many tools to do just that. Now we are seeing that those sorts of tools will quickly become outdated, and this needs to be factored in.
What do you think?