The Most Measurable Medium? We Still Have A Lot To Do!

Last week, Josh Chasin wrote about the state of online media metrics with considerable vexation. I share his frustration.

First, a mea culpa. In early 1995, I was on the advisory board for a company called Internet Profiles (I/PRO). I/PRO introduced the first metrics for the Web, using terms like visitors and adviews, and through its iAudit product offered something much like a BPA or ABC audit for print. Knowing then what we could measure on the Internet, and being well aware of the limitations of sample-based metrics for all traditional media, I uttered the phrase in an ideation session that we should have, as a goal, to make the Internet "The Most Measurable of All Media" within a year. It seemed like a good idea at the time.

The good news is, we accomplished the "most measurable" part, although it took longer than a year. The bad news, as Chasin pointed out, is that we have a veritable Tower of Babel in the number of different metrics that do not talk to each other. In fact, there are a number of cases where metrics from different phases of Internet measurement use the same word with different definitions.

We have more than a metrics issue, though. We have a major systems integration issue -- one where our multiple separate silos of data do not talk to each other. The research company data does not coalesce with the third-party ad server data. The third-party ad server data seldom talks to the back-end analytics systems. Even when they do, it involves heavy lifting, and they generally have different definitions and different ways of attributing the same data (or not). Chasin goes on to talk about the audits being done by the MRC. I agree with him wholeheartedly that audits conducted before we have standards are futile. We will have succeeded in auditing one view of the data while other systems maintain separate views.

Here's one example. Reach, for a research company, is more technically site cume potential. NetRatings and comScore have historically measured site impact, not advertising exposure. The third-party ad servers measure ad exposure, but despite the feigned integration of DoubleClick's reselling of NetRatings reach and frequency planning data through IMS, that planning data is not integrated with the R/F data on the back end of DoubleClick's actual R/F reporting systems. What's worse, most planners do not realize that the R/F planning they do with DoubleClick does not even contain DoubleClick data!

A few years back, Rex Briggs and I toiled to produce a white paper for the Advertising Research Foundation outlining this problem. We made concrete suggestions for integrating the demographically rich site data from companies like NetRatings and comScore with actual advertising exposure data from companies like Atlas and DoubleClick. Changes in management at the ARF and a general malaise in the industry (it was just after the bubble burst, and we were all trying to drain the swamp) meant the paper never got beyond draft stage. But it was a final draft, ready for prime time, and it remains largely relevant today.

In addition to R/F, we have a lot of other issues that must be resolved. In articles I have written for iMedia, The Mediasmith Anvil and the MRCC Newsletter, I have talked about the standard for the impression definition, thanks to efforts by the Interactive Advertising Bureau, the American Association of Advertising Agencies and the Media Ratings Council. There is also discussion of a standard for clicks that is sorely needed. If you doubt this, try launching a campaign measured by Atlas on the agency side and any rich media vendor on the site side. The discrepancies are embarrassing.

But the click standard falls short. We need to add viewthrough or post-impression metrics to the click measurement initiative for, as you know, campaigns do not survive on clicks alone. So, in this case we are setting a standard that is just plain not enough.

Only by standardizing measurements of clicks AND viewthrough will we have a clear definition of advertising-driven traffic. But wait, there's more.
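To make the point concrete, here is a minimal sketch of what a combined click-plus-viewthrough definition of ad-driven traffic could look like. The function name, the 30-day lookback window, and the input fields are all illustrative assumptions, not any existing standard:

```python
from datetime import datetime, timedelta

# Illustrative lookback window -- a real standard would have to agree on this.
VIEWTHROUGH_WINDOW = timedelta(days=30)

def is_ad_driven(visit_time, clicked_ad, last_impression_time):
    """Classify a site visit as advertising-driven.

    A visit counts as ad-driven if the user clicked an ad (click-through),
    OR saw an ad impression within the lookback window (view-through).
    """
    if clicked_ad:
        return True
    if last_impression_time is not None:
        return visit_time - last_impression_time <= VIEWTHROUGH_WINDOW
    return False

# A click always qualifies; an impression only within the window.
print(is_ad_driven(datetime(2007, 6, 1), True, None))                     # click-through
print(is_ad_driven(datetime(2007, 6, 1), False, datetime(2007, 5, 20)))   # view-through
print(is_ad_driven(datetime(2007, 6, 1), False, datetime(2007, 1, 1)))    # too old
```

The substance of a standard would be agreeing on exactly these parameters -- what counts as an exposure, and how long the view-through window runs -- across ad servers and analytics systems.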

We also need to establish some standards in analytics, or back-end metrics. This is the data provided by companies like Coremetrics, WebSideStory, WebTrends, Omniture and others.

Since data passback and dependence upon back-end analytics is so important to judge a campaign's effectiveness today, the IAB, ARF, AAAAs and MRC should be making sure that these systems are consistent with the metrics standards being established for third-party ad serving (see impression definition above). As Chasin pointed out in his article, the Web Analytics Association has just published a set of Web Analytics Definitions, which overlaps with the scope of the IAB-led initiative. I agree with him that we need to "standardize the standards."

By the way, these new definitions, however valuable, do not include view-through or post-impression. Why is that? It is because the tools espoused by the WAA are Webmaster-oriented. And heaven forbid that the world of Webmasters and the world of the CMO should ever get together and try to make sure that their systems and processes talk to each other.

Standardized use of pixels or ad tracking codes must also be a part of any new initiative. The hijacking of tracking attribution by these back-end programs to show that internal Webmaster-driven efforts (rather than advertising) get credit for bringing in the customer needs serious industry examination.

There is a need for the back end or Web analytics systems to use the same metrics standards as the research companies and the ad servers. Efforts by Blackfoot, Theorem, Atlas, DoubleClick and others will help in this effort. At my company (and some others, we understand) this is called "Multiple Attribution Protocol." While there are other names for this protocol, the concept is the same: looking at the full life of the relationship with the consumer, providing a weighted attribution, and crediting the most significant points of contact with the sale or consumer interaction.
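The weighted-attribution idea described above can be sketched in a few lines of code. This is a hypothetical illustration of the concept, not Mediasmith's actual protocol: the event weights, the last-touch bonus, and the function names are all assumptions made for the example:

```python
from collections import defaultdict

# Illustrative weights: a click is worth more credit than a view-through,
# and the final touch before the sale gets a bonus. Real weightings would
# be an industry or agency decision, not these numbers.
EVENT_WEIGHTS = {"click": 1.0, "viewthrough": 0.4}
LAST_TOUCH_BONUS = 0.5

def attribute(touchpoints):
    """Split conversion credit across channels by weighted touches.

    touchpoints: ordered list of (channel, event_type) over the full
    life of the relationship with the consumer.
    Returns each channel's share of credit, summing to 1.0.
    """
    scores = defaultdict(float)
    for channel, event in touchpoints:
        scores[channel] += EVENT_WEIGHTS.get(event, 0.0)
    if touchpoints:  # extra credit to the most recent point of contact
        scores[touchpoints[-1][0]] += LAST_TOUCH_BONUS
    total = sum(scores.values())
    return {channel: score / total for channel, score in scores.items()}

# One consumer's path to purchase: a display ad seen but not clicked,
# then a search click, then an email click just before the sale.
path = [("display", "viewthrough"), ("search", "click"), ("email", "click")]
shares = attribute(path)
print(shares)
```

The point is not the particular weights but the principle: every point of contact is visible in one place and credited in proportion to its contribution, instead of the last system to fire a pixel claiming the whole sale.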

And don't get me started on the different, new metrics "standards" being put forth to measure the various emerging technologies.

We've finally gotten a metrics track established in the OMMA program. So, to further this discussion, come to those sessions later this month in New York and participate in the dialogue. Let's move things ahead together.
