Granted that the issue I just raised is mainly a database marketing problem, we are interactive media people. Most of us work with Excel spreadsheets instead of relational databases. Oftentimes we do not even talk to database people, and most interactive agencies do not house large data engines at all. But does that make us immune to the problem? Are we sure the Excel reports and the numbers we compile for performance metrics and plan optimization are always carefully designed by us, the agency/marketing folks? By the way, where do the "raw" numbers come from before we pivot them in our spreadsheets? Are they really "raw" (I mean raw raw)?
Well, it turns out that the "raw" numbers we are using aren't really raw. They look "raw" to us because 99.99% of agency people do not have access to (or even want to look at) the raw raw. The raw raw is what insiders would call log files. They usually sit with the adservers (DoubleClick, Atlas, etc.) in their databases. Yes, you heard me right. There are databases, humongous ones, that collect and compile cookie-level information. And there are database experts, lots of them. We do not see them because they are not agency people. They work for the adservers and sit behind the shields of the adservers' account reps. So in the end we are very much like a database marketing operation, except that in our case database and marketing are siloed in two different organizations.
With this knowledge, the database marketing issue I raised earlier suddenly looks relevant to the interactive media community. So have we left some of our critical data decisions in the hands of adservers? My answer is that in at least one area we often do. To be more specific, it is the magic 30-day window we normally use to track conversions. As a general practice, the tracking window is set up when the campaign is being trafficked. The rule of thumb is to use the adserver default, which is normally set at 30 days. It literally means that any conversion that happens within 30 days after the exposure will be counted in our conversion report. So without us noticing, the conversion rate, the total number of conversions, the engagement metrics, and the cost-per-conversion numbers all depend on that window. If the window were preset with a different cutoff (e.g., 15 days or 60 days), we would most likely be looking at a different set of performance numbers. Subsequently, the media decisions planners are making could conceivably be different from those of today.
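To see how much the cutoff matters, here is a minimal sketch of the window logic itself. The cookie IDs and timestamps are entirely made up for illustration (the real log files, remember, live with the adserver), but the counting rule is the one described above: a conversion counts only if it lands within the window after exposure.

```python
from datetime import datetime, timedelta

# Hypothetical cookie-level log records: (cookie_id, exposure_time, conversion_time).
# conversion_time is None when no conversion was observed. All values are invented
# for illustration only.
records = [
    ("c1", datetime(2024, 1, 1), datetime(2024, 1, 5)),   # converts on day 4
    ("c2", datetime(2024, 1, 1), datetime(2024, 1, 20)),  # converts on day 19
    ("c3", datetime(2024, 1, 1), datetime(2024, 2, 10)),  # converts on day 40
    ("c4", datetime(2024, 1, 1), None),                   # never converts
]

def conversions_within(records, window_days):
    """Count conversions that land inside the tracking window after exposure."""
    window = timedelta(days=window_days)
    return sum(
        1
        for _, exposed, converted in records
        if converted is not None and converted - exposed <= window
    )

for days in (15, 30, 60):
    n = conversions_within(records, days)
    print(f"{days}-day window: {n} conversions, rate {n / len(records):.0%}")
```

With these four made-up cookies, a 15-day window reports one conversion, the 30-day default reports two, and a 60-day window reports three. Same campaign, same log files, three different "performances."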
The question that should be asked, but very rarely is, is why the magic number is 30. Why not 20 days or 40 days? As you would expect, it is not good enough to answer that we use 30 because it is the default given by adservers. The principal responsibility of adservers is to oversee the technical aspects of online campaigns. The bulk of their work involves making sure the ads actually run the way we want them to run. The campaigns are not, in the end, designed by them, so one shouldn't expect them to have intimate knowledge of every nuance of our campaigns. Plus, you can't really blame DoubleClick/Atlas for the 30-day default window, since the adserver interface does allow planners to override it.
In fact, whenever the 30-day tracking window is deployed, we are making an assumption: the effect of an ad exposure lasts for 30 days, and the moment it passes the 30-day mark, the ad fades completely out of viewers' minds.
There is nothing inherently wrong with this assumption. The problem is that we are applying it across the board to most, if not all, of our campaigns. By using it that way, we are deemphasizing the differences that some of the key components of our campaigns, namely creative, frequency of exposure, type of campaign (direct response vs. branding), etc., might bring. As media specialists, we all know that a different plan and/or a different creative should induce different reactions from viewers. The ad memory effect is certainly one of those reactions. Take creative, for example. I am sure we have all seen some extremely memorable ads/creatives that were so impressively done we talked about them even months after we saw them. Meanwhile, there are plenty of mediocre productions that we can barely remember as soon as the moment of exposure passes. To lump these two types of ads/creatives under one umbrella for conversion tracking is, in the end, to penalize the good for the bad. It's as if we are saying it's OK to be mediocre.
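One way to make that intuition concrete is a toy decay model. This is purely an assumption for illustration, not an industry standard: suppose ad recall decays exponentially, and the memorable creative and the mediocre one simply have different half-lives (the numbers below are invented).

```python
# Toy model (an assumption, not a measured fact): recall of an ad decays
# exponentially after exposure, halving every `half_life_days` days.
def recall_remaining(days, half_life_days):
    """Fraction of initial ad recall left `days` after exposure."""
    return 0.5 ** (days / half_life_days)

# Hypothetical half-lives for two very different creatives.
for label, half_life in [("memorable creative", 45), ("mediocre creative", 5)]:
    r = recall_remaining(30, half_life)
    print(f"{label}: {r:.0%} of initial recall left at day 30")
```

Under these made-up numbers, the memorable creative still holds most of its recall at day 30 while the mediocre one has essentially none left, yet a fixed 30-day cutoff treats both identically, which is exactly the "one umbrella" problem.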
Again, I am not saying that the 30-day tracking window should not be used for any campaign. There are indeed campaigns out there that qualify for 30-day tracking. Nevertheless, the decision on how long we should track a campaign (whether it is 30 days or not) should not be left on autopilot. Rather, it should be based upon a sound understanding of the campaign we are running and made by the people who are intimately involved in the process, namely the planners/buyers, the analysts, etc. To relinquish such decision-making power to the adserver default is no different than letting the IT department take control of your marketing design, which we all know is a no-no. Personally, I know media planners who think about and subsequently adjust the 30-day tracking window when needed. To them, this little piece serves as no more than a small confirmation of what they have been doing all along. But to those who take the adserver default for granted, I want to say there is really nothing magical about the number 30. In fact, the real magic should always be in your own hands. Do your own analyses, make your own decisions, and have fun.