As someone working in analytics, I always get excited when I learn that a company or agency has been collecting the kind of data needed to tackle tough questions. Yet my initial excitement is often quickly replaced by disappointment. It's not that they don't have the data they claim to have -- it's that the data are not collected in the way analytics folks would like to see, which usually leaves the data technically available but far from useful. One typical example of this is what I would call the lack of a single point of truth.
In the online media space, the lack of a single point of truth almost always comes from advertisers/agencies using different vendors to serve a campaign. It is not uncommon to find an advertiser launching an online media campaign with multiple adserving mechanisms (e.g., PointRoll for the rich-media component, DoubleClick for non-rich-media display, Yahoo for some unique site buy, and Didit for paid search).
Such division of labor is usually prompted by sound business calculation: a certain vendor is strong in one aspect of the campaign, but not in all of them. Choosing several vendors instead of one is a way to make sure ads are served in the most effective and efficient manner -- a strategy far from blameworthy.
However, it is also the case that at the back end such arrangements can wreak havoc on performance measurement, especially for a direct-response campaign in which performance metrics are tied to some type of spotlight/floodlight tag activity, whether it is a hard transaction, a coupon download, or an email sign-up. The problem arises from the lack of transparency across adserving/tracking platforms. To be more specific, an exposure that happens on one platform is usually visible only to that platform, not to the others.
Take a simple, hypothetical campaign in which we use only two vendors -- Didit's Maestro for search and DoubleClick for display. Assuming the spotlight activity is tagged from both platforms, we should be able to find conversion metrics in the standard reports generated individually by DoubleClick and Didit. The former would attribute conversions to display exposure and the latter to search. The issue arises when there is considerable audience overlap between these two channels. When that happens, you would expect to see a significant number of conversions preceded by both a display exposure and a search click.
Unfortunately, under the last ad model (I am not going to argue for or against it in this article), such conversions will be counted as "true" conversions in both systems. If exposures WERE visible across the two platforms, however, the last ad model would have attributed each such conversion to the true last ad (whichever happened later) instead of to both.
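To make the double counting concrete, here is a minimal sketch of last-ad deduplication, assuming hypothetical logs and user IDs (this is not any vendor's actual API; it simply shows what becomes possible once both platforms' exposures are visible in one place):

```python
from datetime import datetime

# Hypothetical, simplified logs -- in reality each platform sees only its own.
display_log = [("user_1", datetime(2009, 3, 1, 10, 0)),   # DoubleClick display
               ("user_2", datetime(2009, 3, 1, 11, 0))]
search_log = [("user_1", datetime(2009, 3, 1, 12, 0))]    # Didit search clicks
conversions = [("user_1", datetime(2009, 3, 1, 13, 0)),   # spotlight activities
               ("user_2", datetime(2009, 3, 1, 14, 0))]

def last_ad_attribution(display_log, search_log, conversions):
    """Credit each conversion to the channel of the latest prior exposure."""
    events = [(u, t, "display") for u, t in display_log] + \
             [(u, t, "search") for u, t in search_log]
    credits = {"display": 0, "search": 0}
    for user, conv_time in conversions:
        prior = [(t, ch) for u, t, ch in events if u == user and t < conv_time]
        if prior:
            _, channel = max(prior)  # the latest exposure wins
            credits[channel] += 1
    return credits

print(last_ad_attribution(display_log, search_log, conversions))
# -> {'display': 1, 'search': 1}: user_1's conversion goes to search only.
```

With siloed reports, by contrast, DoubleClick would claim two conversions (user_1 and user_2) and Didit one (user_1), for a combined three against the two that actually happened.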
Such multiplication of "truth" can do real damage to performance measurement -- and subsequently to the media plan/buy based on those metrics. As a rule of thumb, the more platforms one employs for a single campaign, the more likely the metrics are to be inflated. In addition, the more the audience exposure overlaps across platforms, the more "truths" you are going to get.
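To put hypothetical numbers on that rule of thumb: suppose a campaign generates 100 actual conversions, 40 of which are preceded by exposures on both platforms. Each platform claims those 40 as its own, so the two reports add up to 140 "conversions" -- a 40 percent overstatement driven entirely by the overlap.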
Ultimately, though, we can all agree that there is only one single truth out there reflecting the true performance of the campaign, regardless of how many vendors you are using. I don't think there will be any argument about the importance of knowing what that truth is. After all, we measure campaign performance not for the sake of measuring, but for the purpose of making our media plan and/or future plans better.
It is not at all the intention of this article to oppose the practice of using multiple vendors/platforms to serve a campaign. As I pointed out earlier, there are plenty of good reasons to traffic campaigns that way. However, I do feel very strongly against going down that route WITHOUT understanding the potential data problem -- and, if need be, formulating a strategy to take care of it.
It would be in the interest of those investing media dollars to seek some form of single point of truth. The ideal approach is to have users of multiple trafficking platforms carry out all tracking on a single platform, regardless of which vendor that platform belongs to, and then use only the performance metrics derived from that platform to measure the campaign. I know it is going to cost (often quite a bit) to pixel-track every impression and click. But for large advertisers that command huge buys, audience overlap can be considerable. Consequently, the cost of not knowing the truth can easily outrun the cost of getting things right.
I do understand that under the current economic circumstances, it may not be possible to put tracking pixels on everything all the time. If that is the case, the fallback solution is to at least track your campaign under one system for a period of time (for example, two months). Data harvested during that period will give you some idea of whether multiplication of truth is an issue -- and, if it is, what kind of adjustment factors you need to correct the bias down the road, once unified tracking is turned off. This is not the recommended approach, but it is the minimum one should do (even if only to find out whether there is an issue at all).
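As a rough sketch of how such an adjustment factor might be derived and applied (the figures and names here are hypothetical, and this assumes the dual-tracked period is representative of the rest of the flight):

```python
def adjustment_factor(platform_reported, deduped_actual):
    """Ratio of deduplicated conversions to a platform's own count,
    estimated over the period when unified tracking was in place."""
    return deduped_actual / platform_reported

# Suppose that during the two dual-tracked months DoubleClick reported
# 1,400 conversions, while the unified, deduplicated count credited
# display with only 1,100 of them.
display_factor = adjustment_factor(1400, 1100)   # ~0.79

# Later, with unified tracking off, deflate the platform's raw report:
raw_report = 900
corrected = raw_report * display_factor
print(f"corrected display conversions: {corrected:.0f}")   # ~707
```

The factor is only as good as the assumption that audience overlap stays roughly stable over the flight, which is exactly why tracking for a meaningful period (rather than a few days) matters.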