Commentary

We Only Have Ourselves to Blame

Indulge me for a moment. Everyone who wishes there was more standardization in our industry, raise your hand. Everyone who has ever cursed the accuracy of syndicated TV panel and diary data, raise your hand.

The issue of standards is always a hot debate. Everyone wants them, but everyone hates them. This was never more apparent than at the recent Forecast 2004 event hosted by MediaPost, where I listened to countless experts quote audience measurement statistics to make their arguments, then later bash the same sources for lacking any basis in reality. When I finally became conscious of this contradiction, I felt like one of those guys who blame the fast food companies for their waistlines, yet still order the double-stack, with cheese and extra bacon, every day for lunch. This love/hate hypocrisy raises very serious questions about how we as an industry pick and choose our standards.

I'm not pointing my finger at the stuff that we can standardize, and eventually will standardize without regrets, like ad sizes, the definition of an online impression, or making sure Yahoo! Finance is the same thing in one place as another. Rather, I'm talking about standards that put our future in the hands of monopolies, whom we later use as scapegoats for our own shortcomings.

Don't get me wrong, I'm all for standards. They create efficiencies and boost effectiveness - if chosen well. They must be scalable and accessible to everyone in the industry. But for us to pick a standard that we know we're going to complain about 10 years from now, when it's too late, is downright irresponsible, and we'll only have ourselves to blame.

So let's go through the exercise again.

Everyone who's sick and tired of the reach and frequency forecasting debate (i.e. the combined vs. panel-only based approach) raise your hand. I'll assume that if you didn't raise your hand, you would have, had you attended all the ARF, AAAA, IAB, iMedia and publisher consortium meetings. You may have already blocked the issue out of your mind, listening to blokes like me, pointing at complicated Venn diagrams explaining why the ARF unanimously recommended the integration of ad-serving and panel data. Well, I'll let you in on a little secret if you promise not to tell my boss. I don't care who wins either.

With a caveat. I don't want to complain about our choice once it's made. There are two methodologies on the table: the panel-only approach and the combined approach, which integrates ad-serving data. Both are scalable and easy to use. You pick your target demo, select some sites, input impression levels, and click "GO". From a user standpoint, neither is more complicated than the other.

But what if the numbers that appear on the nice report aren't anywhere close to reality? "Directional" numbers are estimates that are correlated with reality, but don't necessarily measure it accurately. What happens when you try to add your directional reach and Gross Rating Points (GRPs) to the TV, Print and Radio GRPs? You end up with a bunch of skeptical advertisers who don't know what they are getting for their money, and budgets that match that skepticism. You don't need a research expert to tell you that you can't add "directional numbers" to what people have already accepted as "real numbers". This is especially true when the "real numbers" are the currency of TV, Print, and Radio, and the online directional numbers are not. Make no mistake, we'll once again throw our tomatoes at the industry standard and complain that there are no alternatives.

There is a Right Answer

So which methodology should we choose? The most accurate one. It's a lot easier to figure out than everyone thinks. Given that most agencies rely on third party ad-servers to deliver their ads, most have accepted that cookies present a consistent and accurate way to measure the impressions and gross reach of their campaigns across sites. This is due to the fact that the leading third party ad-servers (namely Atlas DMT and DoubleClick) use a very consistent way to deliver ads (the 302 redirect). Cookies don't have demographics associated with them, but if we can accurately forecast pre-campaign reach estimates by matching them to the gross reach measures post-campaign, we will have made immense strides.

This is not a complicated or huge effort. We can determine the most accurate methodology in a week. Anyone using third party ad-serving can do this on their own, and the ARF could conduct a broader impartial study. Take 50 buys you've done in the last 6 months across 15-20 different sites. Input the delivered impressions into the respective forecasting tools (give me a call if you'd like a trial version of the Atlas Forecaster), and compare the forecasted gross reach estimates to the reach estimates you get from your third party ad-server. Figure out how many of the forecasts come within 20 percent (or whatever level you're comfortable with) of the actual back-end totals. If you do this with different planning tools, you'll have a fair side-by-side comparison. We've done some of these studies ourselves and have empirical evidence that the combined approach is two-to-three times more accurate than the panel-only based approach.
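The accuracy check described above is simple enough to script. As a minimal sketch (the buy figures below are hypothetical placeholders, not real campaign data - substitute the forecasted and delivered reach numbers from your own planning tool and ad-server reports):

```python
# Sketch of the forecast-accuracy check: for each buy, compare the
# pre-campaign forecasted reach to the reach reported post-campaign by
# the third party ad-server, and count the forecasts that land within
# the chosen tolerance band (20% here, per the article's example).

def within_tolerance(forecast, actual, tolerance=0.20):
    """True if the forecast falls within +/- tolerance of the actual reach."""
    return abs(forecast - actual) <= tolerance * actual

def hit_rate(pairs, tolerance=0.20):
    """Fraction of (forecast, actual) reach pairs within the tolerance band."""
    hits = sum(within_tolerance(f, a, tolerance) for f, a in pairs)
    return hits / len(pairs)

# Hypothetical (forecasted reach, delivered reach) pairs for three buys.
buys = [
    (1_200_000, 1_000_000),  # off by 20% -- just inside the band
    (950_000, 1_050_000),    # under-forecast by ~10% -- inside
    (2_100_000, 1_400_000),  # over-forecast by 50% -- a miss
]

print(f"{hit_rate(buys):.0%} of forecasts within 20% of actual reach")
# -> 67% of forecasts within 20% of actual reach
```

Run the same pairs through each planning tool under consideration and the hit rates give you the fair side-by-side comparison.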

Atlas DMT is the only system that has published a validation study, and it proved that the combined approach produces very good directional forecasts (you can find it here: http://www.atlasdmt.com/insights/dm.asp). But the real bar that will put online media on the same page as the traditional channels is accuracy we can all live with and stand behind. The ARF is contemplating whether to conduct this study to help the industry move forward. I hope you'll join me in encouraging them to do so. We won't ever have a perfect system. But when we can say, "this is the best possible solution," we'll have a standard that will let us sleep at night.

Young-Bean Song is the Director of Analytics & Atlas Institute at Atlas DMT, an advertising technology provider and operating unit of aQuantive, Inc.
