(DISCLAIMER: As the chief research officer at comScore, I have an agenda in online metrics.)
The Internet, we have been told for a decade, is the most measurable medium. Hence it is not without vexation that the Internet business community observes the current state of online metrics. Unlike a traditional medium such as network TV, where a single measurement system has held uncontested sway for over 50 years, the Internet community is "blessed" with an embarrassment of measurement riches.
We've got comScore and NetRatings, each of which collects detailed behavioral and demographic data from a panel of Internet users and then projects to the online universe. Among all the online metrics providers, these two companies hew closest to the traditional construct of syndicated audience measurement.
Then there are companies like Alexa, which collects information from Web users who have downloaded and installed their toolbar, and Hitwise, which collects data from ISPs. There are new entrants like Quantcast and Compete, both of whom offer spiffy free online interfaces that allow users to profile a Web site's audience. Quantcast combines data from advertisers, publishers, ISPs, and ad networks; Compete combines data from ISPs, toolbars, and panelists.
Then we've got the Web analytics providers, including companies like Google Analytics, Omniture, WebSideStory, WebTrends, and Unica. These are the sources of what we often call "site-centric data." It is probably worth pointing out that in the Internet metrics space, we really have two distinctly different but overlapping disciplines: audience measurement and Web analytics. Each of these disciplines has its own history, culture, set of objectives, and set of best practices.
The Interactive Advertising Bureau made headlines earlier this year when it called attention to the differences between site-centric Unique Visitor (UV) counts and panel-based UV projections. The implication appeared to be that site-centric counts were a priori correct, and that panel-based data was therefore inherently flawed. We refuted this contention in a white paper on the impact of cookie deletion on site-centric UV counts. We've also observed that non-human traffic, international traffic, and duplication between work and home usage can inflate site-centric UV counts.
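The cookie-deletion effect is easy to see with a toy calculation. The sketch below is purely illustrative (it is not comScore's methodology, and the deletion rates are made up): it assumes a site assigns a fresh cookie ID each time a user clears cookies, so one person surfaces as multiple "uniques."

```python
# Toy illustration: how cookie deletion can inflate a site-centric
# Unique Visitor (UV) count. Assumption (hypothetical numbers): a
# person who deletes cookies d times in the period appears under
# d + 1 distinct cookie IDs, each counted as a separate "unique."

def site_centric_uv(deletion_counts):
    """deletion_counts: per-person number of cookie deletions in the period."""
    return sum(d + 1 for d in deletion_counts)

true_uniques = 1000
# Suppose 30% of people delete cookies twice during the month and
# the rest never do (figures invented for illustration only).
deletion_counts = [2] * 300 + [0] * 700

print("True unique visitors: ", true_uniques)           # 1000
print("Cookie-based UV count:", site_centric_uv(deletion_counts))  # 1600
```

Under these invented assumptions, a 30% deletion rate produces a 60% overcount, which is why a site-centric UV figure can sit well above a panel-based projection without either source being "broken."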
Lately I've been a member of the Web Analytics Association's listserv, a Yahoo forum founded by Eric Peterson, which is currently about 3,800 members strong. (If you want to learn a lot about Web analytics fast, check it out.) There I've come to learn that a Web publisher can install and run two separate analytics packages simultaneously and end up generating two divergent sets of site-centric data for the same site at the same time (even though both are based on the same so-called "census" of visitor data). In the 2007 Web Analytics Interim Report published this past May, Jim Sterne wrote by way of introduction: "Every web analytics tool measures clickthroughs and page views a little differently. They're all using slightly different yardsticks and getting slightly different results. The disparity is driving us to distraction."
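Sterne's point about "slightly different yardsticks" can be made concrete with a small sketch. The rule difference below is hypothetical (no real vendor's logic is implied): tool B discounts a repeat request for the same page within a few seconds as a reload, while tool A counts every request. Same log, two totals.

```python
# Toy illustration: two analytics tools applying slightly different
# definitions to the SAME visit log report different page-view totals.
# The specific rules here are invented for illustration.

visits = [
    # (seconds_since_midnight, visitor_id, page)
    (100, "v1", "/home"),
    (103, "v1", "/home"),   # quick reload
    (160, "v1", "/news"),
    (200, "v2", "/home"),
    (202, "v2", "/home"),   # quick reload
]

def tool_a_pageviews(log):
    """Tool A: count every request as a page view."""
    return len(log)

def tool_b_pageviews(log, reload_window=5):
    """Tool B: ignore a repeat request for the same page by the same
    visitor within reload_window seconds (treated as a reload)."""
    count = 0
    last_seen = {}  # (visitor, page) -> timestamp of most recent request
    for ts, visitor, page in log:
        key = (visitor, page)
        if key not in last_seen or ts - last_seen[key] > reload_window:
            count += 1
        last_seen[key] = ts
    return count

print(tool_a_pageviews(visits))  # 5
print(tool_b_pageviews(visits))  # 3
```

Both totals are defensible readings of the same "census" data; the divergence comes entirely from the definitions, not from measurement error.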
Distraction. Vexation. Same difference.
How might we sort all this divergence and disparity out? Right now, the Media Rating Council (MRC) accredits ad impression counts from AOL, Atlas, CNET, Disney, DoubleClick, MSN, Univision, Weather.com, and Yahoo. Several other publisher audits are in process (http://www.mediaratingcouncil.org/Accredited%20Services.htm). With the exception of Univision, though, the UV counts provided by these entities are not accredited (Univision was granted an exception). As for audience measurement services, both comScore and NetRatings are currently working through the MRC audit process; NetRatings is probably 6 to 8 months further along than we are. (My position: it isn't a race.)
But it is unrealistic to expect that widespread MRC auditing will resolve the differences in metrics reported by different sources. Accreditation will not force panel-based metrics and site-centric data into any kind of alignment. This is not to minimize the importance of an MRC audit. The process is extremely valuable for vetting methods and procedures, for assuring transparency and disclosure, and for giving industry constituencies a window into the measurement companies' methodologies via MRC membership and participation. I have 27 years of experience in syndicated audience measurement and can attest that the audit and accreditation process makes the measurement company better. But it does not force convergent validity when there are multiple measurers.
Well then, maybe what we need is standardization.
Right now the IAB and the MRC are working together to develop a set of IAB Reach Measurement Guidelines (e.g., what is a "Unique"?). Meanwhile, the Web Analytics Association has just published a set of Web Analytics Definitions, which overlaps with the scope of the IAB-led initiative. So maybe we need to standardize the standards?
If the Internet is so darned measurable, why are there so many conflicting measures?
Let's take a breath.
See, here's the thing. One of the consequences of being the most measurable medium is that the Internet ends up as the medium with the most measures. We tend to experience this as a paradox: the Internet is inherently measurable, yet its measures are abundant and widely divergent. When you think about it, though, this is not paradoxical at all. Measurability begets measures. We have to stop letting this abundance of metrics keep us from doing the business we need to do.
It is my hope and intention that this forum -- the Online Metrics Insider, and my own humble contributions to it -- will serve as a platform for placing the metrics in context, sorting them out, and getting past the paralysis that multiple metrics sources can sometimes induce.
I'm thrilled to be working in this space at this time -- in the medium with the most measures. Let's see what we can do together to take advantage of this embarrassment of riches. Because as far as media measurement is concerned, the current online metrics environment will prove to be the rule, not the exception. Already, more than half of all U.S. ad impressions are served digitally, and that ratio will only increase. Digital media platforms -- the Internet, digital cable TV, the fast-emerging wireless-cellular medium -- will continue to beget and support multiple metrics sources. Alignment around a single metric source should not be perceived as a step toward legitimacy for the online medium; it should be perceived as a step backward, back into the 20th century, back to the ways of analog measurement.
I think we're ready to move forward. In fact, I'm counting on it.