Commentary

The Single Point of Truth

As someone working in analytics, I always get very excited when I learn that a company or agency has been collecting the kind of data needed to tackle tough questions. Yet often my initial excitement is quickly replaced by disappointment. It's not that they don't have the data they claim to have -- it's that the data are not collected in the way analytics folks need them to be, which usually leaves the data technically available but far from useful. One typical example of this is what I would call the lack of a single point of truth.

In the online media space, the lack of a single point of truth almost always comes from advertisers/agencies using different vendors to serve a campaign. It is not uncommon to find an advertiser launching an online media campaign with multiple adserving mechanisms (e.g., PointRoll for the rich-media component, DoubleClick for non-rich-media display, Yahoo for some unique site buy, and Didit for paid search).

Such division of labor is usually prompted by sound business calculation: a given vendor is strong in one aspect of the campaign, but not in all of them. Choosing several vendors instead of one is a way to make sure ads are served in the most effective and efficient manner -- a strategy far from blameworthy.

However, it is also the case that at the back end such arrangements can wreak havoc on performance measurement, especially for a direct-response campaign whose performance metrics are tied to some type of spotlight/floodlight tag activity, whether that is a hard transaction, a coupon download, or an email sign-up. The problem arises because of a lack of transparency across adserving/tracking platforms. To be more specific, an exposure happening on one platform is usually visible only to that platform, not to the others.

Take a simple, hypothetical campaign in which we use only two vendors -- Didit's Maestro for search and DoubleClick for display. Assuming spotlight activity is tagged from both platforms, we should be able to find conversion metrics in the standard reports generated individually by DoubleClick and Didit. The former would attribute conversions to display exposure, the latter to search. The issue arises when there is considerable overlap in audience between these two channels: when that happens, you would expect a significant number of conversions to be preceded by both a display exposure and a search click.

Unfortunately, under the last-ad model (I am not going to argue for or against it in this article), such conversions will be counted as "true" conversions in both systems. However, if exposures WERE visible across the two platforms, the last-ad model would have attributed each conversion to the true last exposure (whichever happened later) instead of to both.
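To make the double counting concrete, here is a minimal sketch in Python. The platform names come from the example above; the event structure, field names, and timestamps are all invented for illustration and bear no relation to any real adserver's log format.

```python
from datetime import datetime

# One converting user's exposure history. Platform names match the
# article's example; timestamps and field names are invented.
exposures = [
    {"platform": "DoubleClick", "channel": "display",
     "time": datetime(2009, 3, 1, 9, 0)},
    {"platform": "Didit", "channel": "search",
     "time": datetime(2009, 3, 1, 14, 30)},
]
conversion_time = datetime(2009, 3, 2, 10, 0)

# Siloed view: each platform sees only its own exposure before the
# conversion, so each one claims it under the last-ad model.
for e in exposures:
    if e["time"] < conversion_time:
        print(f"{e['platform']} credits itself with the conversion")

# Merged view: with every exposure visible in one place, the last-ad
# model credits only the exposure closest to (but before) the conversion.
last_ad = max(
    (e for e in exposures if e["time"] < conversion_time),
    key=lambda e: e["time"],
)
print(f"Single point of truth: credit {last_ad['platform']} ({last_ad['channel']})")
```

Run on this single user, the siloed view yields two "conversions" (one per platform) while the merged view yields one, credited to search -- the later of the two exposures.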

Such multiplication of "truth" can do real damage to performance measurement -- and, subsequently, to the media plan/buy based on those metrics. As a rule of thumb, the more platforms one employs for a single campaign, the more the metrics are likely to be distorted. In addition, the more the audience exposure overlaps across platforms, the more "truths" you are going to get.
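To put hypothetical numbers on it: suppose the DoubleClick report shows 1,000 conversions and the Didit report shows 800, but 300 of the underlying conversions were preceded by exposures on both platforms. The two reports together claim 1,800 conversions against a true, deduplicated total of 1,000 + 800 - 300 = 1,500 -- a 20% overstatement, and one that grows with each additional platform and each extra point of overlap.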

Ultimately, though, we can all agree that there is only one truth out there that reflects the real performance of the campaign, regardless of how many vendors you are using. I don't think anyone will argue against the importance of knowing what that truth is. After all, we measure campaign performance not for the sake of measuring, but for the purpose of making the current media plan and/or future plans better.

It is not at all the intention of this article to oppose the practice of using multiple vendors/platforms to serve a campaign. As I pointed out earlier, there are plenty of good reasons to traffic campaigns that way. However, I do feel very strongly against going down that route WITHOUT understanding the potential data problem -- and, if need be, formulating a strategy to take care of it.

It would be in the interest of those investing media dollars to seek some form of single point of truth. The ideal approach is for users of multiple trafficking platforms to carry out all tracking on a single platform, regardless of which vendor that platform belongs to. Only the performance metrics derived from that platform would then be used to measure the performance of the campaign. I know it is going to cost (often quite a bit) to pixel-track every impression and click. But for large advertisers that command huge buys, audience overlap can be considerable. Consequently, the cost of not knowing the truth can easily outrun the cost of getting things right.

I do understand that under the current economic circumstances, it may not be possible to put tracking pixels on everything all the time. If that is the case, the fallback solution is to at least track your campaign under one system for a period of time (for example, two months). Data harvested during that period will give you some idea of whether multiplication of "truth" is an issue -- and, if it is, what kind of adjustment factors you need in order to correct the bias down the road, when unified tracking is off. This is not the recommended approach, but it is the minimum one should do (even if only to find out whether there is an issue at all).
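As a rough sketch of what such a calibration might look like -- all counts below are hypothetical, and the approach assumes audience overlap stays roughly stable after the calibration period ends:

```python
# Calibration period: two months of tracking under a single system lets
# us count deduplicated conversions alongside each platform's own report.
platform_reported = {"DoubleClick": 1000, "Didit": 800}  # hypothetical counts
deduped_conversions = 1500  # each conversion counted once across platforms

# Adjustment factor: how much the naive sum overstates the truth.
naive_sum = sum(platform_reported.values())
adjustment = deduped_conversions / naive_sum  # 1500 / 1800 ~= 0.83

# Later, with unified tracking switched off, deflate the summed reports
# to approximate the deduplicated total. This is only as good as the
# assumption that the overlap has not shifted since calibration.
later_reports = {"DoubleClick": 1200, "Didit": 900}
estimated_truth = sum(later_reports.values()) * adjustment

print(f"Adjustment factor: {adjustment:.2f}")
print(f"Estimated deduplicated conversions: {estimated_truth:.0f}")
```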

2 comments about "The Single Point of Truth".
  1. Rich Morgan from Discount Tire, March 13, 2009 at 11:34 a.m.

    The Omniture Suite of products is great for this. It's wonderful to see the multiple touch points that led to a conversion rather than just the "last in".

  2. John Grono from GAP Research, March 13, 2009 at 10:27 p.m.

An excellent post, Chen.

    It parallels the scary situation of going beyond two providers into the realm of web measurement or audience measurement. Think of it as a campaign with billions of vendors. No wonder the audience metrics are so seriously flawed.

    Down here in Australia, with a population of 21.5m of which around 80% are online in any month (meaning the audience should 'top out' at around 17m), when we put all the tags together we get an audience of just over 45m for February 2009 - and that DOESN'T include all the major publishers! Try explaining THAT to a client who wants to invest their branding in online!

    You are correct with your fallback plan of tracking under one system - which is fine for a campaign, but impossible for audience measurement. Just maybe we need a compulsory "web audience measurement" tag - if you want to be reported in audience measurement you HAVE to use this tag - and if you don't you're not reported.

    Third-party systems are fine for SITE analytics but can't do WEB AUDIENCE measurement.

    Some other things to consider which distort these data are:
    * same user using multiple browsers (generating multiple cookies)
    * same user accessing from different locations (home and work)
    * cookie deletion during the period of the campaign (it's on the increase)
    * single PC with multiple users (just one cookie but different people)

    All in all, these 'exceptions to the rule' add up to MASSIVE discrepancies for any data that exceeds a single day. It's dodgy over a week, and just plain wrong over a month.
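
    A toy simulation makes the cookie-deletion point concrete; the 15%-per-week deletion rate and the user count below are invented purely for illustration.

```python
import random

random.seed(2009)

# Toy model: one person per browser, with some weekly chance of clearing
# cookies. Every clear mints a fresh cookie ID, so distinct cookies
# overstate distinct people. Both rates below are invented.
REAL_USERS = 10_000
WEEKLY_DELETION_RATE = 0.15
WEEKS = 4  # roughly one month

cookies_counted = 0
for _ in range(REAL_USERS):
    cookies = 1  # the user's initial cookie
    for _ in range(WEEKS):
        if random.random() < WEEKLY_DELETION_RATE:
            cookies += 1  # deletion creates another "unique" cookie
    cookies_counted += cookies

print(f"Real users:      {REAL_USERS}")
print(f"Cookies counted: {cookies_counted}")
print(f"Inflation:       {cookies_counted / REAL_USERS:.2f}x")
```

    With these made-up rates, a month of cookie churn alone inflates "unique" counts by roughly 60% -- before multiple browsers, multiple locations, or shared PCs are even considered.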

    Once again, Chen, congratulations on a very pragmatic example using a dual-vendor campaign.

    John Grono

    GAP Research

    Sydney Australia
